
VictoriaMetrics Agent - collects metrics from various sources and stores them to VictoriaMetrics

Prerequisites #

  • Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.
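
For example, on a workstation with Homebrew available, the tools could be installed as follows (the formula names are assumed to be available in your Homebrew setup; use your platform's package manager otherwise):

      brew install git kubectl helm helm-docs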

How to install #

Access a Kubernetes cluster.

Setup chart repository (can be omitted for OCI repositories) #

Add the chart Helm repository with the following commands:

      helm repo add vm https://victoriametrics.github.io/helm-charts/
      helm repo update

    

List the versions of the vm/victoria-metrics-agent chart available for installation:

      helm search repo vm/victoria-metrics-agent -l

    

Install victoria-metrics-agent chart #

Export the default values of the victoria-metrics-agent chart to the file values.yaml:

  • For HTTPS repository

          helm show values vm/victoria-metrics-agent > values.yaml
    
        
  • For OCI repository

          helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-agent > values.yaml
    
        

Change the values in the values.yaml file according to the needs of your environment.
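
For example, a minimal override might only set the remote write target and the replica count (the URL below is a placeholder; point it at your own VictoriaMetrics remote write endpoint):

      # values.yaml -- minimal override sketch
      replicaCount: 1
      remoteWrite:
        # placeholder address of your VictoriaMetrics remote write endpoint
        - url: http://vmsingle-victoria-metrics-single-server:8428/api/v1/write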

Test the installation with the following command:

  • For HTTPS repository

          helm install vma vm/victoria-metrics-agent -f values.yaml -n NAMESPACE --debug
    
        
  • For OCI repository

          helm install vma oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-agent -f values.yaml -n NAMESPACE --debug
    
        

Install the chart with the following command:

  • For HTTPS repository

          helm install vma vm/victoria-metrics-agent -f values.yaml -n NAMESPACE
    
        
  • For OCI repository

          helm install vma oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-agent -f values.yaml -n NAMESPACE
    
        

Get the list of pods by running this command:

      kubectl get pods -A | grep 'vma'

    

Get the application by running this command:

      helm list -f vma -n NAMESPACE

    

See the version history of the vma application with the following command:

      helm history vma -n NAMESPACE

    

Upgrade guide #

Upgrade to 0.13.0 #

  • replace remoteWriteUrls with remoteWrite:

The config below

      remoteWriteUrls:
        - http://address1/api/v1/write
        - http://address2/api/v1/write

should be changed to

      remoteWrite:
        - url: http://address1/api/v1/write
        - url: http://address2/api/v1/write

How to uninstall #

Remove the application with the following command:

      helm uninstall vma -n NAMESPACE

    

Documentation of Helm Chart #

Install helm-docs following the instructions in this tutorial.

Generate the docs with the helm-docs command:

      cd charts/victoria-metrics-agent
      helm-docs

The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.
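
For illustration, a minimal README.md.gotmpl could reference a few of the helm-docs sub-templates (this is only a sketch; the template file actually used by the chart may differ):

      {{ template "chart.header" . }}
      {{ template "chart.description" . }}

      {{ template "chart.valuesSection" . }}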

Examples #

Daemonset mode #

The vmagent can be deployed as a DaemonSet, launching one pod per Kubernetes node. This setup is a suitable alternative to using a Deployment or StatefulSet. If you are using the VictoriaMetrics Operator and deploying vmagent as a Custom Resource (CRD), refer to the VMAgent DaemonSet documentation.

Key Benefits:

  • Reduced network traffic for scraping metrics by collecting them locally from each node.
  • Distributed scraping load across all nodes.
  • Improved resilience. Scraping continues uninterrupted if a vmagent pod fails on one node.

To use DaemonSet mode effectively, the scraping configuration must be adjusted. Use the spec.nodeName field selector to ensure each vmagent pod scrapes only targets local to its node. In the kubernetes_sd_configs section use role: pod or role: node. Using other roles (e.g., endpoints, service, etc.) may result in increased CPU and memory usage and overload of the Kubernetes API server.

Restrictions and Limitations:

  • Sharding is not supported.
  • PodDisruptionBudget is not supported.
  • HorizontalPodAutoscaler is not supported.
  • Persistent queue volume must be mounted using extraVolumes and extraVolumeMounts, and must use a hostPath volume source (see the sketch after this list).
  • Pod restarts may lead to small gaps in metrics collection, as only a single vmagent pod is scheduled per node.
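
For example, a hostPath-backed persistent queue could be configured roughly as follows (the host path and the remoteWrite.tmpDataPath value are assumptions; adjust them for your nodes):

      # values.yaml -- persistent queue on a hostPath volume (sketch)
      mode: daemonSet

      extraArgs:
        # assumed path: keep vmagent's persistent queue on the mounted hostPath
        remoteWrite.tmpDataPath: /var/lib/vmagent-remotewrite-data

      extraVolumes:
        - name: persistent-queue
          hostPath:
            path: /var/lib/vmagent-remotewrite-data
            type: DirectoryOrCreate

      extraVolumeMounts:
        - name: persistent-queue
          mountPath: /var/lib/vmagent-remotewrite-data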

Below are three common scrape configurations typically used in DaemonSet mode. Each configuration ensures metrics are collected only from the local node.

Scraping kubelet (node) metrics:

      # values.yaml

mode: daemonSet

env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

remoteWrite:
   # replace with your remote write url
  - url: http://vmsingle-vms-victoria-metrics-k8s-stack:8428/api/v1/write

config:
  global:
    scrape_interval: 10s

  scrape_configs:
    - job_name: "kubernetes-nodes"
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        # filter node for local one
        - action: keep
          source_labels: [__meta_kubernetes_node_name]
          regex: "^%{KUBE_NODE_NAME}$"
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          source_labels: [__address__]
          regex: ([^:]+)(:[0-9]+)?
          replacement: $1:10250
    

Scraping cAdvisor metrics:

      # values.yaml

mode: daemonSet

env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

remoteWrite:
   # replace with your remote write url
  - url: http://vmsingle-vms-victoria-metrics-k8s-stack:8428/api/v1/write

config:
  global:
    scrape_interval: 10s

  scrape_configs:
    - job_name: "kubernetes-nodes-cadvisor"
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      metrics_path: /metrics/cadvisor
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        # filter node for local one
        - action: keep
          source_labels: [__meta_kubernetes_node_name]
          regex: "^%{KUBE_NODE_NAME}$"
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          source_labels: [__address__]
          regex: ([^:]+)(:[0-9]+)?
          replacement: $1:10250
      honor_timestamps: false
    

Scraping pod metrics:

      # values.yaml

mode: daemonSet

env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

remoteWrite:
   # replace with your remote write url
  - url: http://vmsingle-vms-victoria-metrics-k8s-stack:8428/api/v1/write

config:
  global:
    scrape_interval: 10s

  scrape_configs:
    - job_name: "kubernetes-pods"
      kubernetes_sd_configs:
        - role: pod
          selectors:
            # use server side selector for pods
            - role: pod
              field: spec.nodeName=%{KUBE_NODE_NAME}
      relabel_configs:
        - action: drop
          source_labels: [__meta_kubernetes_pod_container_init]
          regex: true
        - action: keep_if_equal
          source_labels:
            [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_container_port_number]
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - source_labels: [__meta_kubernetes_pod_name]
          target_label: pod
        - source_labels: [__meta_kubernetes_pod_container_name]
          target_label: container
        - source_labels: [__meta_kubernetes_namespace]
          target_label: namespace
        - source_labels: [__meta_kubernetes_pod_node_name]
          action: replace
          target_label: node
    

Parameters #

The following table lists the configurable parameters of the chart and their default values.

Change the values in the victoria-metrics-agent/values.yaml file according to the needs of your environment.

Each parameter below is listed as its key with the default value, followed by its type and a description.
affinity: {}
(object)

Pod affinity

allowedMetricsEndpoints[0]: /metrics
(string)
annotations: {}
(object)

Annotations to be added to the deployment

config:
    global:
        scrape_interval: 10s
    scrape_configs:
        - job_name: vmagent
          static_configs:
            - targets:
                - localhost:8429
        - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          job_name: kubernetes-apiservers
          kubernetes_sd_configs:
            - role: endpoints
          relabel_configs:
            - action: keep
              regex: default;kubernetes;https
              source_labels:
                - __meta_kubernetes_namespace
                - __meta_kubernetes_service_name
                - __meta_kubernetes_endpoint_port_name
          scheme: https
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecure_skip_verify: true
        - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          job_name: kubernetes-nodes
          kubernetes_sd_configs:
            - role: node
          relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - replacement: kubernetes.default.svc:443
              target_label: __address__
            - regex: (.+)
              replacement: /api/v1/nodes/$1/proxy/metrics
              source_labels:
                - __meta_kubernetes_node_name
              target_label: __metrics_path__
          scheme: https
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecure_skip_verify: true
        - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          honor_timestamps: false
          job_name: kubernetes-nodes-cadvisor
          kubernetes_sd_configs:
            - role: node
          relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - replacement: kubernetes.default.svc:443
              target_label: __address__
            - regex: (.+)
              replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
              source_labels:
                - __meta_kubernetes_node_name
              target_label: __metrics_path__
          scheme: https
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecure_skip_verify: true
        - job_name: kubernetes-service-endpoints
          kubernetes_sd_configs:
            - role: endpointslices
          relabel_configs:
            - action: drop
              regex: true
              source_labels:
                - __meta_kubernetes_pod_container_init
            - action: keep_if_equal
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_port
                - __meta_kubernetes_pod_container_port_number
            - action: keep
              regex: true
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_scrape
            - action: replace
              regex: (https?)
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_scheme
              target_label: __scheme__
            - action: replace
              regex: (.+)
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_path
              target_label: __metrics_path__
            - action: replace
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
              source_labels:
                - __address__
                - __meta_kubernetes_service_annotation_prometheus_io_port
              target_label: __address__
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels:
                - __meta_kubernetes_pod_name
              target_label: pod
            - source_labels:
                - __meta_kubernetes_pod_container_name
              target_label: container
            - source_labels:
                - __meta_kubernetes_namespace
              target_label: namespace
            - source_labels:
                - __meta_kubernetes_service_name
              target_label: service
            - replacement: ${1}
              source_labels:
                - __meta_kubernetes_service_name
              target_label: job
            - action: replace
              source_labels:
                - __meta_kubernetes_pod_node_name
              target_label: node
        - job_name: kubernetes-service-endpoints-slow
          kubernetes_sd_configs:
            - role: endpointslices
          relabel_configs:
            - action: drop
              regex: true
              source_labels:
                - __meta_kubernetes_pod_container_init
            - action: keep_if_equal
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_port
                - __meta_kubernetes_pod_container_port_number
            - action: keep
              regex: true
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_scrape_slow
            - action: replace
              regex: (https?)
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_scheme
              target_label: __scheme__
            - action: replace
              regex: (.+)
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_path
              target_label: __metrics_path__
            - action: replace
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
              source_labels:
                - __address__
                - __meta_kubernetes_service_annotation_prometheus_io_port
              target_label: __address__
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels:
                - __meta_kubernetes_pod_name
              target_label: pod
            - source_labels:
                - __meta_kubernetes_pod_container_name
              target_label: container
            - source_labels:
                - __meta_kubernetes_namespace
              target_label: namespace
            - source_labels:
                - __meta_kubernetes_service_name
              target_label: service
            - replacement: ${1}
              source_labels:
                - __meta_kubernetes_service_name
              target_label: job
            - action: replace
              source_labels:
                - __meta_kubernetes_pod_node_name
              target_label: node
          scrape_interval: 5m
          scrape_timeout: 30s
        - job_name: kubernetes-services
          kubernetes_sd_configs:
            - role: service
          metrics_path: /probe
          params:
            module:
                - http_2xx
          relabel_configs:
            - action: keep
              regex: true
              source_labels:
                - __meta_kubernetes_service_annotation_prometheus_io_probe
            - source_labels:
                - __address__
              target_label: __param_target
            - replacement: blackbox
              target_label: __address__
            - source_labels:
                - __param_target
              target_label: instance
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels:
                - __meta_kubernetes_namespace
              target_label: namespace
            - source_labels:
                - __meta_kubernetes_service_name
              target_label: service
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - action: drop
              regex: true
              source_labels:
                - __meta_kubernetes_pod_container_init
            - action: keep_if_equal
              source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_port
                - __meta_kubernetes_pod_container_port_number
            - action: keep
              regex: true
              source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_scrape
            - action: replace
              regex: (.+)
              source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_path
              target_label: __metrics_path__
            - action: replace
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
              source_labels:
                - __address__
                - __meta_kubernetes_pod_annotation_prometheus_io_port
              target_label: __address__
            - action: labelmap
              regex: __meta_kubernetes_pod_label_(.+)
            - source_labels:
                - __meta_kubernetes_pod_name
              target_label: pod
            - source_labels:
                - __meta_kubernetes_pod_container_name
              target_label: container
            - source_labels:
                - __meta_kubernetes_namespace
              target_label: namespace
            - action: replace
              source_labels:
                - __meta_kubernetes_pod_node_name
              target_label: node
(object)

VMAgent scrape configuration

configMap: ""
(string)

VMAgent scraping configuration: use an existing ConfigMap if specified, otherwise .config values will be used

containerWorkingDir: /
(string)

Container working directory

daemonSet:
    spec: {}
(object)

K8s DaemonSet specific variables

deployment:
    spec:
        strategy: {}
(object)

K8s Deployment specific variables

deployment.spec.strategy: {}
(object)

Deployment strategy. Check here for details

emptyDir: {}
(object)

EmptyDir configuration for the case when persistence is disabled

env: []
(list)

Additional environment variables (ex.: secret tokens, flags). Check here for more details.

envFrom: []
(list)

Specify alternative source for env variables

extraArgs:
    envflag.enable: true
    envflag.prefix: VM_
    httpListenAddr: :8429
    loggerFormat: json
(object)

VMAgent extra command line arguments
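
Each key in this map is passed to vmagent as a command-line flag. For example, a single extra flag could be added like this (the flag value is illustrative; user-supplied keys are merged with the defaults above):

      extraArgs:
        # illustrative extra flag: allow larger scrape responses
        promscrape.maxScrapeSize: 32MB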

extraContainers: []
(list)

Extra containers to run in a pod with vmagent

extraHostPathMounts: []
(list)

Additional hostPath mounts

extraLabels: {}
(object)

Extra labels for Deployment and Statefulset

extraObjects: []
(list)

Add extra specs dynamically to this chart

extraScrapeConfigs: []
(list)

Extra scrape configs that will be appended to config
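
For example, a single additional static scrape job could be appended like this (the job name and target address are placeholders):

      extraScrapeConfigs:
        # hypothetical extra job scraping a fixed target
        - job_name: my-custom-exporter
          static_configs:
            - targets:
                - my-exporter.default.svc:9100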

extraVolumeMounts: []
(list)

Extra Volume Mounts for the container

extraVolumes: []
(list)

Extra Volumes for the pod

fullnameOverride: ""
(string)

Override resources fullname

global.cluster.dnsDomain: cluster.local.
(string)

K8s cluster domain suffix, used for building storage pods’ FQDN. Details are here

global.compatibility:
    openshift:
        adaptSecurityContext: auto
(object)

Openshift security context compatibility configuration

global.image.registry: ""
(string)

Image registry, that can be shared across multiple helm charts

global.imagePullSecrets: []
(list)

Image pull secrets, that can be shared across multiple helm charts

horizontalPodAutoscaling:
    enabled: false
    maxReplicas: 10
    metrics: []
    minReplicas: 1
(object)

Horizontal Pod Autoscaling. Note that it is not intended to be used for vmagents that perform scraping. To scale scraping vmagents, check here

horizontalPodAutoscaling.enabled: false
(bool)

Use HPA for vmagent

horizontalPodAutoscaling.maxReplicas: 10
(int)

Maximum replicas for HPA to use to scale vmagent

horizontalPodAutoscaling.metrics: []
(list)

Metrics for HPA to use to scale vmagent
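
A sketch using a standard resource metric, assuming the list is passed through to the HorizontalPodAutoscaler spec as-is:

      horizontalPodAutoscaling:
        enabled: true
        metrics:
          # scale on average CPU utilization across vmagent pods
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70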

horizontalPodAutoscaling.minReplicas: 1
(int)

Minimum replicas for HPA to use to scale vmagent

image.pullPolicy: IfNotPresent
(string)

Image pull policy

image.registry: ""
(string)

Image registry

image.repository: victoriametrics/vmagent
(string)

Image repository

image.tag: ""
(string)

Image tag, set to Chart.AppVersion by default

image.variant: ""
(string)

Variant of the image to use. e.g. enterprise, scratch

imagePullSecrets: []
(list)

Image pull secrets

ingress.annotations: {}
(object)

Ingress annotations

ingress.enabled: false
(bool)

Enable deployment of ingress for agent

ingress.extraLabels: {}
(object)

Ingress extra labels

ingress.hosts:
    - name: vmagent.local
      path:
        - /
      port: http
(list)

Array of host objects

ingress.ingressClassName: ""
(string)

Ingress controller class name

ingress.pathType: Prefix
(string)

Ingress path type

ingress.tls: []
(list)

Array of TLS objects

initContainers: []
(list)

Init containers for vmagent

license:
    key: ""
    secret:
        key: ""
        name: ""
(object)

Enterprise license key configuration for VictoriaMetrics Enterprise. Required only for VictoriaMetrics Enterprise. Check the docs here; for more information, visit the site. Request a trial license here. Supported starting from VictoriaMetrics v1.94.0

license.key: ""
(string)

License key

license.secret:
    key: ""
    name: ""
(object)

Use existing secret with license key

license.secret.key: ""
(string)

Key in secret with license key

license.secret.name: ""
(string)

Existing secret name

lifecycle: {}
(object)

Specify pod lifecycle

mode: deployment
(string)

VMAgent mode: daemonSet, deployment, statefulSet

nameOverride: ""
(string)

Override chart name

nodeSelector: {}
(object)

Pod’s node selector. Details are here

persistentVolume.accessModes:
    - ReadWriteOnce
(list)

Array of access modes. Must match those of existing PV or dynamic provisioner. Details are here

persistentVolume.annotations: {}
(object)

Persistent volume annotations

persistentVolume.enabled: false
(bool)

Create/use Persistent Volume Claim for server component. Empty dir if false

persistentVolume.existingClaim: ""
(string)

Existing Claim name. If defined, PVC must be created manually before volume will be bound

persistentVolume.extraLabels: {}
(object)

Persistent volume additional labels

persistentVolume.matchLabels: {}
(object)

Bind Persistent Volume by labels. Must match all labels of targeted PV.

persistentVolume.size: 10Gi
(string)

Size of the volume. Should be calculated based on the amount of data that may need to be buffered.

persistentVolume.storageClassName: ""
(string)

StorageClass to use for the persistent volume. Requires persistentVolume.enabled: true. If defined, the PVC is created automatically

podAnnotations: {}
(object)

Annotations to be added to pod

podDisruptionBudget:
    enabled: false
    labels: {}
(object)

See kubectl explain poddisruptionbudget.spec for more details, or check the official documentation

podLabels: {}
(object)

Extra labels for Pods only

podSecurityContext:
    enabled: true
(object)

Security context to be added to pod

priorityClassName: ""
(string)

Priority class to be assigned to the pod(s)

probe.liveness:
    initialDelaySeconds: 5
    periodSeconds: 15
    tcpSocket: {}
    timeoutSeconds: 5
(object)

Liveness probe

probe.readiness:
    httpGet: {}
    initialDelaySeconds: 5
    periodSeconds: 15
(object)

Readiness probe

probe.startup: {}
(object)

Startup probe

rbac.annotations: {}
(object)

Role/RoleBinding annotations

rbac.create: true
(bool)

Enables Role/RoleBinding creation

rbac.extraLabels: {}
(object)

Role/RoleBinding labels

rbac.namespaced: false
(bool)

If true and rbac.enabled, will deploy a Role/RoleBinding instead of a ClusterRole/ClusterRoleBinding

remoteWrite: []
(list)

Generates remoteWrite.* flags, and ConfigMaps with the value content for values that are of list or map type. Each item should contain a url param to pass validation.
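
For example, writing to two endpoints (the addresses are placeholders):

      remoteWrite:
        - url: http://address1/api/v1/write
        - url: http://address2/api/v1/write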

replicaCount: 1
(int)

Replica count

resources: {}
(object)

Resource object. Details are here

schedulerName: ""
(string)

Use an alternate scheduler, e.g. “stork”. Check details here

securityContext:
    enabled: true
(object)

Security context to be added to pod’s containers

service.annotations: {}
(object)

Service annotations

service.clusterIP: ""
(string)

Service ClusterIP

service.enabled: false
(bool)

Enable agent service

service.externalIPs: []
(list)

Service external IPs. Check here for details

service.externalTrafficPolicy: ""
(string)

Service external traffic policy. Check here for details

service.extraLabels: {}
(object)

Service labels

service.healthCheckNodePort: ""
(string)

Health check node port for a service. Check here for details

service.internalTrafficPolicy: ""
(string)

Service internal traffic policy. Check here for details

service.ipFamilies: []
(list)

List of service IP families. Check here for details.

service.ipFamilyPolicy: ""
(string)

Service IP family policy. Check here for details.

service.loadBalancerIP: ""
(string)

Service load balancer IP

service.loadBalancerSourceRanges: []
(list)

Load balancer source range

service.servicePort: 8429
(int)

Service port

service.targetPort: http
(string)

Target port

service.trafficDistribution: ""
(string)

Traffic Distribution. Check Traffic distribution

service.type: ClusterIP
(string)

Service type

serviceAccount.annotations: {}
(object)

Annotations to add to the service account

serviceAccount.automountToken: true
(bool)

mount API token to pod directly

serviceAccount.create: true
(bool)

Specifies whether a service account should be created

serviceAccount.name: null
(string)

The name of the service account to use. If not set and create is true, a name is generated using the fullname template

serviceMonitor.annotations: {}
(object)

Service Monitor annotations

serviceMonitor.basicAuth: {}
(object)

Basic auth params for Service Monitor

serviceMonitor.enabled: false
(bool)

Enable deployment of Service Monitor for server component. This is Prometheus operator object

serviceMonitor.extraLabels: {}
(object)

Service Monitor labels

serviceMonitor.metricRelabelings: []
(list)

Service Monitor metricRelabelings

serviceMonitor.relabelings: []
(list)

Service Monitor relabelings

serviceMonitor.targetPort: http
(string)

Service Monitor targetPort

statefulSet:
    clusterMode: false
    replicationFactor: 1
    spec:
        updateStrategy: {}
(object)

K8s StatefulSet specific variables

statefulSet.clusterMode: false
(bool)

Create a cluster of vmagents. Check here. Available since v1.77.2

statefulSet.replicationFactor: 1
(int)

replication factor for vmagent in cluster mode

statefulSet.spec.updateStrategy: {}
(object)

StatefulSet update strategy. Check here for details.

tolerations: []
(list)

Node tolerations for server scheduling to nodes with taints. Details are here

topologySpreadConstraints: []
(list)

Pod topologySpreadConstraints