
Kubernetes monitoring on the VictoriaMetrics stack. Includes the VictoriaMetrics Operator, Grafana dashboards, ServiceScrapes and VMRules.

Overview #

This chart is an all-in-one solution for monitoring a Kubernetes cluster. It installs multiple dependency charts like grafana, node-exporter, kube-state-metrics and victoria-metrics-operator. It also installs Custom Resources like VMSingle, VMCluster, VMAgent and VMAlert.

By default, the operator converts all existing prometheus-operator API objects into corresponding VictoriaMetrics Operator objects.
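
For illustration, a prometheus-operator object such as the following ServiceMonitor (names here are hypothetical) is picked up by the operator and converted into an equivalent VMServiceScrape with the same selector and endpoints, so existing Prometheus scrape definitions keep working:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app            # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: http-metrics  # scrape the service's metrics port
```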

To enable metrics collection for kubernetes, this chart installs multiple scrape configurations for kubernetes components like kubelet, kube-proxy, etc. Metrics collection is done by VMAgent. So if you want to ship metrics to an external VictoriaMetrics database, you can disable the VMSingle installation by setting vmsingle.enabled to false and setting vmagent.spec.remoteWrite.url to your external VictoriaMetrics database.
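
The setup described above can be sketched as the following values.yaml fragment (the remote write URL is a placeholder for your external VictoriaMetrics endpoint):

```yaml
vmsingle:
  enabled: false  # skip the bundled VMSingle deployment

vmagent:
  spec:
    remoteWrite:
      # ship all collected metrics to an external VictoriaMetrics database
      - url: "https://external-vm.example.com/api/v1/write"
```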

This chart also installs a bunch of dashboards and recording rules from the kube-prometheus project.


Configuration #

Configuration of this chart is done through helm values.

Dependencies #

Dependencies can be enabled or disabled by setting enabled to true or false in the values.yaml file.
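
For example, a minimal values.yaml fragment that keeps the grafana dependency but disables kube-state-metrics might look like this (a sketch; adjust to your environment):

```yaml
grafana:
  enabled: true
kube-state-metrics:
  enabled: false
```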

!Important: for dependency charts, anything that you can find in the values.yaml of a dependency chart can be configured in this chart under the key for that dependency. For example, if you want to configure grafana, you can find all possible configuration options in its values.yaml and set them in this chart's values under the grafana: key. For example, to configure grafana.persistence.enabled, set it in values.yaml like this:

```yaml
#################################################
###              dependencies               #####
#################################################
# Grafana dependency chart configuration. For possible values refer to https://github.com/grafana/helm-charts/tree/main/charts/grafana#configuration
grafana:
  enabled: true
  persistence:
    type: pvc
    enabled: false
```

VictoriaMetrics components #

This chart installs multiple VictoriaMetrics components using Custom Resources that are managed by victoria-metrics-operator. Each resource can be configured using the spec of that resource from the API docs of victoria-metrics-operator. For example, if you want to configure VMAgent, you can find all possible configuration options in the API docs and set them in this chart's values under the vmagent.spec key. For example, to configure remoteWrite.url, set it in values.yaml like this:

```yaml
vmagent:
  spec:
    remoteWrite:
      - url: "https://insert.vmcluster.domain.com/insert/0/prometheus/api/v1/write"
```

ArgoCD issues #

Operator self signed certificates #

When deploying the K8s stack using ArgoCD without Cert Manager (.Values.victoria-metrics-operator.admissionWebhooks.certManager.enabled: false) it will re-render the operator's webhook certificates on each sync, since the Helm lookup function is not respected by ArgoCD. To prevent this, please update your K8s stack Application spec.syncPolicy and spec.ignoreDifferences with the following:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
...
spec:
  ...
  destination:
    ...
    namespace: <k8s-stack-namespace>
  ...
  syncPolicy:
    syncOptions:
    # https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#respect-ignore-difference-configs
    # argocd must also ignore differences during the apply stage,
    # otherwise it will silently override changes and cause problems
    - RespectIgnoreDifferences=true
  ignoreDifferences:
    - group: ""
      kind: Secret
      name: <fullname>-validation
      namespace: <k8s-stack-namespace>
      jsonPointers:
        - /data
    - group: admissionregistration.k8s.io
      kind: ValidatingWebhookConfiguration
      name: <fullname>-admission
      jqPathExpressions:
      - '.webhooks[]?.clientConfig.caBundle'
```

where <fullname> is the output of {{ include "vm-operator.fullname" }} for your setup.
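
If you are unsure what <fullname> renders to, it can be read back from the already deployed objects, since they embed it in their names (a sketch, assuming the stack is already installed):

```shell
# the webhook secret is named <fullname>-validation
kubectl get secret -n <k8s-stack-namespace> | grep -- '-validation'
# the webhook configuration is named <fullname>-admission
kubectl get validatingwebhookconfiguration | grep -- '-admission'
```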

metadata.annotations: Too long: must have at most 262144 bytes on dashboards #

If one of the dashboard ConfigMaps fails with the error Too long: must have at most 262144 bytes, please make sure you've added the argocd.argoproj.io/sync-options: ServerSideApply=true annotation to your dashboards:

```yaml
defaultDashboards:
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true
```


Resources are not completely removed after chart uninstallation #

This chart uses a pre-delete Helm hook to clean up resources managed by the operator, but this hook is not supported by ArgoCD and is ignored. To have control over resource removal, please consider using either ArgoCD sync phases and waves or installing the operator chart separately.

Rules and dashboards #

This chart by default installs multiple dashboards and recording rules from kube-prometheus. You can disable dashboards with defaultDashboards.enabled: false and experimentalDashboardsEnabled: false, and rules can be configured under defaultRules.
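
A values.yaml fragment combining these switches might look like this (a sketch; the etcd group is just one example of the rule groups configurable under defaultRules):

```yaml
defaultDashboards:
  enabled: false
experimentalDashboardsEnabled: false
defaultRules:
  create: true
  groups:
    etcd:
      create: false  # skip etcd rules, e.g. on managed clusters without etcd access
```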

Adding external dashboards #

By default, this chart uses a sidecar in order to provision default dashboards. If you want to add your own dashboards, there are two ways to do it:

  • Add dashboards by creating a ConfigMap. An example ConfigMap:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        grafana_dashboard: "1"
      name: grafana-dashboard
    data:
      dashboard.json: |-
          {...}
    ```
  • Use init container provisioning. Note that this option requires disabling the sidecar and will remove all default dashboards provided with this chart. An example configuration:

    ```yaml
    grafana:
      sidecar:
        dashboards:
          enabled: false
      dashboards:
        vmcluster:
          gnetId: 11176
          revision: 38
          datasource: VictoriaMetrics
    ```

When using this approach, you can find dashboards for VictoriaMetrics components published here.

Prometheus scrape configs #

This chart installs multiple scrape configurations for kubernetes monitoring. They are configured under the #ServiceMonitors section in the values.yaml file. For example, if you want to configure the scrape config for kubelet, set it in values.yaml like this:

```yaml
kubelet:
  enabled: true
  # spec for VMNodeScrape crd
  # https://docs.victoriametrics.com/operator/api#vmnodescrapespec
  spec:
    interval: "30s"
```

Using externally managed Grafana #

If you want to use an externally managed Grafana instance but still want to use the dashboards provided by this chart, set grafana.enabled to false and defaultDashboards.enabled to true. This will install the dashboards but will not install Grafana.

For example:

```yaml
defaultDashboards:
  enabled: true

grafana:
  enabled: false
```

This will create ConfigMaps with dashboards to be imported into Grafana.

If additional labels or annotations are needed in order to import the dashboards into an existing Grafana, you can set .grafana.sidecar.dashboards.additionalDashboardLabels or .grafana.sidecar.dashboards.additionalDashboardAnnotations in values.yaml:

For example:

```yaml
defaultDashboards:
  enabled: true
  labels:
    key: value
  annotations:
    key: value
```

Using alternative image registry #

All images of VictoriaMetrics components are available on Docker Hub and Quay. It is possible to override the default image registry for the operator itself and all components deployed by it using the following values:

```yaml
victoria-metrics-operator:
  image:
    registry: "quay.io"
  env:
    - name: "VM_USECUSTOMCONFIGRELOADER"
      value: "true"
    - name: VM_CUSTOMCONFIGRELOADERIMAGE
      value: "quay.io/victoriametrics/operator:config-reloader-v0.53.0"
    - name: VM_VLOGSDEFAULT_IMAGE
      value: "quay.io/victoriametrics/victoria-logs"
    - name: "VM_VMALERTDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/vmalert"
    - name: "VM_VMAGENTDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/vmagent"
    - name: "VM_VMSINGLEDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/victoria-metrics"
    - name: "VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/vmselect"
    - name: "VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/vmstorage"
    - name: "VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/vminsert"
    - name: "VM_VMBACKUP_IMAGE"
      value: "quay.io/victoriametrics/vmbackupmanager"
    - name: "VM_VMAUTHDEFAULT_IMAGE"
      value: "quay.io/victoriametrics/vmauth"
    - name: "VM_VMALERTMANAGER_ALERTMANAGERDEFAULTBASEIMAGE"
      value: "quay.io/prometheus/alertmanager"
```

Prerequisites #

  • Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.

  • Add dependency chart repositories

    ```shell
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    ```
  • PV support on underlying infrastructure.

How to install #

Access a Kubernetes cluster.

Setup chart repository (can be omitted for OCI repositories) #

Add a chart helm repository with the following commands:

```shell
helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update
```

List versions of the vm/victoria-metrics-k8s-stack chart available for installation:

```shell
helm search repo vm/victoria-metrics-k8s-stack -l
```

Install victoria-metrics-k8s-stack chart #

Export the default values of the victoria-metrics-k8s-stack chart to the file values.yaml:

  • For HTTPS repository

    ```shell
    helm show values vm/victoria-metrics-k8s-stack > values.yaml
    ```

  • For OCI repository

    ```shell
    helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack > values.yaml
    ```

Change the values in the values.yaml file according to the needs of your environment.

Test the installation with the command:

  • For HTTPS repository

    ```shell
    helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE --debug --dry-run
    ```

  • For OCI repository

    ```shell
    helm install vmks oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE --debug --dry-run
    ```

Install the chart with the command:

  • For HTTPS repository

    ```shell
    helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE
    ```

  • For OCI repository

    ```shell
    helm install vmks oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE
    ```

Get the list of pods by running this command:

```shell
kubectl get pods -A | grep 'vmks'
```

Get the application by running this command:

```shell
helm list -f vmks -n NAMESPACE
```

See the version history of the vmks application with the command:

```shell
helm history vmks -n NAMESPACE
```

Install operator separately #

To have control over the order of managed resources removal, or to be able to remove a whole namespace with managed resources, it's recommended to disable the operator in the k8s-stack chart (victoria-metrics-operator.enabled: false) and install it separately. To move the operator from an existing k8s-stack release to a separate one, please follow the steps below:

  • disable cleanup webhook (victoria-metrics-operator.crds.cleanup.enabled: false) and apply changes
  • disable operator (victoria-metrics-operator.enabled: false) and apply changes
  • deploy operator separately with crds.plain: true
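
The steps above can be sketched as the following commands (release and namespace names are placeholders):

```shell
# 1. disable the cleanup webhook in the existing k8s-stack release
helm upgrade vmks vm/victoria-metrics-k8s-stack -n NAMESPACE --reuse-values \
  --set victoria-metrics-operator.crds.cleanup.enabled=false

# 2. disable the bundled operator
helm upgrade vmks vm/victoria-metrics-k8s-stack -n NAMESPACE --reuse-values \
  --set victoria-metrics-operator.enabled=false

# 3. install the operator separately, rendering CRDs as plain manifests
helm install vmoperator vm/victoria-metrics-operator -n vm-operator --create-namespace \
  --set crds.plain=true
```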

If you're planning to delete the k8s-stack by removing the whole namespace, please consider deploying the operator in a separate namespace: due to the uncontrollable removal order, the process can hang if the operator is removed before at least one resource it manages.

Install locally (Minikube) #

To run the VictoriaMetrics stack locally, it's possible to use Minikube. To avoid dashboard and alert rule issues, please follow the steps below:

Run Minikube cluster

```shell
minikube start --container-runtime=containerd --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0 --extra-config=etcd.listen-metrics-urls=http://0.0.0.0:2381
```

Install the helm chart

```shell
helm install [RELEASE_NAME] vm/victoria-metrics-k8s-stack -f values.yaml -f values.minikube.yaml -n NAMESPACE
```

How to uninstall #

Remove the application with the command:

```shell
helm uninstall vmks -n NAMESPACE
```

CRDs created by this chart are not removed by default and should be manually cleaned up:

```shell
kubectl get crd | grep victoriametrics.com | awk '{print $1 }' | xargs -i kubectl delete crd {}
```

Troubleshooting #

  • If you cannot install the helm chart and get the error configmap already exists, it could be caused by a name collision when the release name is too long. Kubernetes allows at most 63 characters in resource names, and helm trims all resource names to 63 characters. To mitigate it, use a shorter helm release name, like:

    ```shell
    # stack - is short enough
    helm upgrade -i stack vm/victoria-metrics-k8s-stack
    ```

Or use an override for the helm chart release name:

```shell
helm upgrade -i some-very-long-name vm/victoria-metrics-k8s-stack --set fullnameOverride=stack
```

Upgrade guide #

Usually, helm upgrade doesn't require manual actions. Just execute the command:

```shell
helm upgrade [RELEASE_NAME] vm/victoria-metrics-k8s-stack
```

But a release with a CRD update can only be patched manually with kubectl. Since helm does not perform CRD updates, we recommend that you always run the following when updating the helm chart version:

```shell
# 1. check the changes in CRD
helm show crds vm/victoria-metrics-k8s-stack --version [YOUR_CHART_VERSION] | kubectl diff -f -

# 2. apply the changes (update CRD)
helm show crds vm/victoria-metrics-k8s-stack --version [YOUR_CHART_VERSION] | kubectl apply -f - --server-side
```

All other upgrades requiring manual actions are listed below:

Upgrade to 0.29.0 #

To provide more flexibility for VMAuth configuration, all <component>.vmauth params were moved to vmauth.spec. Also, .vm.write and .vm.read variables are available in vmauth.spec; they represent the parsed URLs of vmsingle, vminsert, externalVM.write and vmsingle, vmselect, externalVM.read respectively.

If your configuration in versions < 0.29.0 looked like the example below:

```yaml
vmcluster:
  vmauth:
    vmselect:
      - src_paths:
          - /select/.*
        url_prefix:
          - /
    vminsert:
      - src_paths:
          - /insert/.*
        url_prefix:
          - /
```

In 0.29.0 it should look like:

```yaml
vmauth:
  spec:
    unauthorizedAccessConfig:
      - src_paths:
          - '{{ .vm.read.path }}/.*'
        url_prefix:
          - '{{ urlJoin (omit .vm.read "path") }}/'
      - src_paths:
          - '{{ .vm.write.path }}/.*'
        url_prefix:
          - '{{ urlJoin (omit .vm.write "path") }}/'
```

Upgrade to 0.13.0 #

  • node-exporter starting from version 4.0.0 uses the Kubernetes recommended labels. Therefore you have to delete the daemonset before you upgrade.

    ```shell
    kubectl delete daemonset -l app=prometheus-node-exporter
    ```
  • the scrape configuration for kubernetes components was moved from the vmServiceScrape.spec section to the spec section. If you previously modified the scrape configuration, you need to update your values.yaml

  • grafana.defaultDashboardsEnabled was renamed to defaultDashboardsEnabled (moved to the top level). You may need to update it in your values.yaml
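
For example, a previously customized kubelet scrape interval would move as follows (a hypothetical before/after sketch of the vmServiceScrape.spec change above):

```yaml
# before 0.13.0
kubelet:
  vmServiceScrape:
    spec:
      interval: "30s"

# since 0.13.0
kubelet:
  spec:
    interval: "30s"
```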

Upgrade to 0.6.0 #

All CRDs must be updated to the latest version with the command:

```shell
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/helm-charts/master/charts/victoria-metrics-k8s-stack/crds/crd.yaml
```

Upgrade to 0.4.0 #

All CRDs must be updated to the v1 version with the command:

```shell
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/helm-charts/master/charts/victoria-metrics-k8s-stack/crds/crd.yaml
```

Upgrade from 0.2.8 to 0.2.9 #

Update the VMAgent CRD with the command:

```shell
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.16.0/config/crd/bases/operator.victoriametrics.com_vmagents.yaml
```

Upgrade from 0.2.5 to 0.2.6 #

New CRDs (VMUser and VMAuth) were added to the operator, and new fields were added to existing CRDs. Run the following commands manually:

```shell
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmusers.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmauths.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmalerts.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmagents.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmsingles.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmclusters.yaml
```

Documentation of Helm Chart #

Install helm-docs following the instructions in this tutorial.

Generate docs with the helm-docs command:

```shell
cd charts/victoria-metrics-k8s-stack

helm-docs
```

The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.

Parameters #

The following table lists the configurable parameters of the chart and their default values.

Change the values in the victoria-metrics-k8s-stack/values.yaml file according to the needs of your environment.

Key | Description
additionalVictoriaMetricsMap: null
(string)

Provide custom recording or alerting rules to be deployed into the cluster.

alertmanager.annotations: {}
(object)

Alertmanager annotations

alertmanager.config:
    receivers:
        - name: blackhole
    route:
        receiver: blackhole
(object)

Alertmanager configuration

alertmanager.enabled: true
(bool)

Create VMAlertmanager CR

alertmanager.ingress:
    annotations: {}
    enabled: false
    extraPaths: []
    hosts:
        - alertmanager.domain.com
    labels: {}
    path: '{{ .Values.alertmanager.spec.routePrefix | default "/" }}'
    pathType: Prefix
    tls: []
(object)

Alertmanager ingress configuration

alertmanager.ingress.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

alertmanager.monzoTemplate:
    enabled: true
(object)

Better alert templates for slack source

alertmanager.spec:
    configSecret: ""
    externalURL: ""
    image:
        tag: v0.28.1
    port: "9093"
    replicaCount: 1
    routePrefix: /
    selectAllByDefault: true
(object)

Full spec for VMAlertmanager CRD. Allowed values described here

alertmanager.spec.configSecret: ""
(string)

If defined, it will be used for alertmanager configuration and the config parameter will be ignored

alertmanager.templateFiles: {}
(object)

Extra alert templates

alertmanager.useManagedConfig: false
(bool)

Enable storing .Values.alertmanager.config in VMAlertmanagerConfig instead of a k8s Secret. Note: VMAlertmanagerConfig and plain Alertmanager config structures are not equal. If you're migrating an existing config, please make sure that .Values.alertmanager.config:

  • with useManagedConfig: false has structure described here.
  • with useManagedConfig: true has structure described here.
argocdReleaseOverride: ""
(string)

If this chart is used in ArgoCD with the releaseName field, VMServiceScrapes cannot select the proper services. For correct operation, set the value argocdReleaseOverride=$ARGOCD_APP_NAME

coreDns.enabled: true
(bool)

Enable CoreDNS metrics scraping

coreDns.service.enabled: true
(bool)

Create service for CoreDNS metrics

coreDns.service.port: 9153
(int)

CoreDNS service port

coreDns.service.selector:
    k8s-app: kube-dns
(object)

CoreDNS service pod selector

coreDns.service.targetPort: 9153
(int)

CoreDNS service target port

coreDns.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics
        jobLabel: jobLabel
        namespaceSelector:
            matchNames:
                - kube-system
(object)

Spec for VMServiceScrape CRD is here

defaultDashboards.annotations: {}
(object)
defaultDashboards.dashboards:
    node-exporter-full:
        enabled: true
    victoriametrics-operator:
        enabled: true
    victoriametrics-vmalert:
        enabled: true
(object)

Create dashboards as ConfigMaps even if the dependency they require is not installed

defaultDashboards.dashboards.node-exporter-full:
    enabled: true
(object)

In ArgoCD using client-side apply this dashboard reaches the annotations size limit and causes k8s issues without server-side apply. See this issue

defaultDashboards.defaultTimezone: utc
(string)
defaultDashboards.enabled: true
(bool)

Enable custom dashboards installation

defaultDashboards.grafanaOperator.enabled: false
(bool)

Create dashboards as CRDs (requires grafana-operator to be installed)

defaultDashboards.grafanaOperator.spec.allowCrossNamespaceImport: false
(bool)
defaultDashboards.grafanaOperator.spec.instanceSelector.matchLabels.dashboards: grafana
(string)
defaultDashboards.labels: {}
(object)
defaultDatasources.alertmanager:
    datasources:
        - access: proxy
          jsonData:
            implementation: prometheus
          name: Alertmanager
    perReplica: false
(object)

List of alertmanager datasources. The generated alertmanager URL will be added to each datasource in the template if alertmanager is enabled

defaultDatasources.alertmanager.perReplica: false
(bool)

Create per replica alertmanager compatible datasource

defaultDatasources.extra: []
(list)

Configure additional grafana datasources (passed through tpl). Check here for details

defaultDatasources.grafanaOperator.annotations: {}
(object)
defaultDatasources.grafanaOperator.enabled: false
(bool)

Create datasources as CRDs (requires grafana-operator to be installed)

defaultDatasources.grafanaOperator.spec.allowCrossNamespaceImport: false
(bool)
defaultDatasources.grafanaOperator.spec.instanceSelector.matchLabels.dashboards: grafana
(string)
defaultDatasources.victoriametrics.datasources:
    - access: proxy
      isDefault: true
      name: VictoriaMetrics
      type: prometheus
    - access: proxy
      isDefault: false
      name: VictoriaMetrics (DS)
      type: victoriametrics-metrics-datasource
(list)

List of prometheus compatible datasource configurations. VM url will be added to each of them in templates.

defaultDatasources.victoriametrics.perReplica: false
(bool)

Create per replica prometheus compatible datasource

defaultRules:
    additionalGroupByLabels: []
    alerting:
        spec:
            annotations: {}
            labels: {}
    annotations: {}
    create: true
    group:
        spec:
            params: {}
    groups:
        alertmanager:
            create: true
            rules: {}
        etcd:
            create: true
            rules: {}
        general:
            create: true
            rules: {}
        k8sContainerCpuLimits:
            create: true
            rules: {}
        k8sContainerCpuRequests:
            create: true
            rules: {}
        k8sContainerCpuUsageSecondsTotal:
            create: true
            rules: {}
        k8sContainerMemoryCache:
            create: true
            rules: {}
        k8sContainerMemoryLimits:
            create: true
            rules: {}
        k8sContainerMemoryRequests:
            create: true
            rules: {}
        k8sContainerMemoryRss:
            create: true
            rules: {}
        k8sContainerMemorySwap:
            create: true
            rules: {}
        k8sContainerMemoryWorkingSetBytes:
            create: true
            rules: {}
        k8sContainerResource:
            create: true
            rules: {}
        k8sPodOwner:
            create: true
            rules: {}
        kubeApiserver:
            create: true
            rules: {}
        kubeApiserverAvailability:
            create: true
            rules: {}
        kubeApiserverBurnrate:
            create: true
            rules: {}
        kubeApiserverHistogram:
            create: true
            rules: {}
        kubeApiserverSlos:
            create: true
            rules: {}
        kubePrometheusGeneral:
            create: true
            rules: {}
        kubePrometheusNodeRecording:
            create: true
            rules: {}
        kubeScheduler:
            create: true
            rules: {}
        kubeStateMetrics:
            create: true
            rules: {}
        kubelet:
            create: true
            rules: {}
        kubernetesApps:
            create: true
            rules: {}
            targetNamespace: .*
        kubernetesResources:
            create: true
            rules: {}
        kubernetesStorage:
            create: true
            rules: {}
            targetNamespace: .*
        kubernetesSystem:
            create: true
            rules: {}
        kubernetesSystemApiserver:
            create: true
            rules: {}
        kubernetesSystemControllerManager:
            create: true
            rules: {}
        kubernetesSystemKubelet:
            create: true
            rules: {}
        kubernetesSystemScheduler:
            create: true
            rules: {}
        node:
            create: true
            rules: {}
        nodeNetwork:
            create: true
            rules: {}
        vmHealth:
            create: true
            rules: {}
        vmagent:
            create: true
            rules: {}
        vmcluster:
            create: true
            rules: {}
        vmoperator:
            create: true
            rules: {}
        vmsingle:
            create: true
            rules: {}
    labels: {}
    recording:
        spec:
            annotations: {}
            labels: {}
    rule:
        spec:
            annotations: {}
            labels: {}
    rules: {}
    runbookUrl: https://runbooks.prometheus-operator.dev/runbooks
(object)

Create default rules for monitoring the cluster

defaultRules.additionalGroupByLabels: []
(list)

Labels, which are used for grouping results of the queries. Note that these labels are joined with .Values.global.clusterLabel

defaultRules.alerting:
    spec:
        annotations: {}
        labels: {}
(object)

Common properties for VMRules alerts

defaultRules.alerting.spec.annotations: {}
(object)

Additional annotations for VMRule alerts

defaultRules.alerting.spec.labels: {}
(object)

Additional labels for VMRule alerts

defaultRules.annotations: {}
(object)

Annotations for default rules

defaultRules.group:
    spec:
        params: {}
(object)

Common properties for VMRule groups

defaultRules.group.spec.params: {}
(object)

Optional HTTP URL parameters added to each rule request

defaultRules.groups:
    alertmanager:
        create: true
        rules: {}
    etcd:
        create: true
        rules: {}
    general:
        create: true
        rules: {}
    k8sContainerCpuLimits:
        create: true
        rules: {}
    k8sContainerCpuRequests:
        create: true
        rules: {}
    k8sContainerCpuUsageSecondsTotal:
        create: true
        rules: {}
    k8sContainerMemoryCache:
        create: true
        rules: {}
    k8sContainerMemoryLimits:
        create: true
        rules: {}
    k8sContainerMemoryRequests:
        create: true
        rules: {}
    k8sContainerMemoryRss:
        create: true
        rules: {}
    k8sContainerMemorySwap:
        create: true
        rules: {}
    k8sContainerMemoryWorkingSetBytes:
        create: true
        rules: {}
    k8sContainerResource:
        create: true
        rules: {}
    k8sPodOwner:
        create: true
        rules: {}
    kubeApiserver:
        create: true
        rules: {}
    kubeApiserverAvailability:
        create: true
        rules: {}
    kubeApiserverBurnrate:
        create: true
        rules: {}
    kubeApiserverHistogram:
        create: true
        rules: {}
    kubeApiserverSlos:
        create: true
        rules: {}
    kubePrometheusGeneral:
        create: true
        rules: {}
    kubePrometheusNodeRecording:
        create: true
        rules: {}
    kubeScheduler:
        create: true
        rules: {}
    kubeStateMetrics:
        create: true
        rules: {}
    kubelet:
        create: true
        rules: {}
    kubernetesApps:
        create: true
        rules: {}
        targetNamespace: .*
    kubernetesResources:
        create: true
        rules: {}
    kubernetesStorage:
        create: true
        rules: {}
        targetNamespace: .*
    kubernetesSystem:
        create: true
        rules: {}
    kubernetesSystemApiserver:
        create: true
        rules: {}
    kubernetesSystemControllerManager:
        create: true
        rules: {}
    kubernetesSystemKubelet:
        create: true
        rules: {}
    kubernetesSystemScheduler:
        create: true
        rules: {}
    node:
        create: true
        rules: {}
    nodeNetwork:
        create: true
        rules: {}
    vmHealth:
        create: true
        rules: {}
    vmagent:
        create: true
        rules: {}
    vmcluster:
        create: true
        rules: {}
    vmoperator:
        create: true
        rules: {}
    vmsingle:
        create: true
        rules: {}
(object)

Rule group properties

defaultRules.groups.etcd.rules: {}
(object)

Common properties for all rules in a group

defaultRules.labels: {}
(object)

Labels for default rules

defaultRules.recording:
    spec:
        annotations: {}
        labels: {}
(object)

Common properties for VMRules recording rules

defaultRules.recording.spec.annotations: {}
(object)

Additional annotations for VMRule recording rules

defaultRules.recording.spec.labels: {}
(object)

Additional labels for VMRule recording rules

defaultRules.rule:
    spec:
        annotations: {}
        labels: {}
(object)

Common properties for all VMRules

defaultRules.rule.spec.annotations: {}
(object)

Additional annotations for all VMRules

defaultRules.rule.spec.labels: {}
(object)

Additional labels for all VMRules

defaultRules.rules: {}
(object)

Per rule properties

defaultRules.runbookUrl: https://runbooks.prometheus-operator.dev/runbooks
(string)

Runbook URL prefix for default rules

external.grafana.datasource: VictoriaMetrics
(string)

External Grafana datasource name

external.grafana.host: ""
(string)

External Grafana host

external.vm:
    read:
        url: ""
    write:
        url: ""
(object)

External VM read and write URLs
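For example, to ship metrics to an external VictoriaMetrics database instead of an in-cluster one, you can disable VMSingle and fill in `external.vm` — a minimal sketch, with placeholder hostnames:

```yaml
# values.yaml — sketch for using an external VictoriaMetrics database.
# Hostname and ports are placeholders; adjust for your deployment.
vmsingle:
  enabled: false
external:
  vm:
    read:
      url: http://victoria-metrics.example.com:8428
    write:
      url: http://victoria-metrics.example.com:8428/api/v1/write
```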

extraObjects: []
(list)

Add extra objects dynamically to this chart
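As an illustration, `extraObjects` accepts arbitrary additional manifests to render with the chart; the Secret below is hypothetical, not part of the chart defaults:

```yaml
# values.yaml — sketch of adding an arbitrary manifest via extraObjects.
# Secret name and contents are illustrative only.
extraObjects:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: vmauth-basic-auth   # hypothetical name
    stringData:
      username: admin
      password: changeme
```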

fullnameOverride: ""
(string)

Resource full name override

global.cluster.dnsDomain: cluster.local.
(string)

K8s cluster domain suffix, used for building storage pods’ FQDN. Details are here

global.clusterLabel: cluster
(string)

Cluster label to use for dashboards and rules

global.license:
    key: ""
    keyRef: {}
(object)

Global license configuration

grafana:
    enabled: true
    forceDeployDatasource: false
    ingress:
        annotations: {}
        enabled: false
        extraPaths: []
        hosts:
            - grafana.domain.com
        labels: {}
        path: /
        pathType: Prefix
        tls: []
    sidecar:
        dashboards:
            defaultFolderName: default
            enabled: true
            folder: /var/lib/grafana/dashboards
            multicluster: false
            provider:
                name: default
                orgid: 1
        datasources:
            enabled: true
            initDatasources: true
            label: grafana_datasource
    vmScrape:
        enabled: true
        spec:
            endpoints:
                - port: '{{ .Values.grafana.service.portName }}'
            selector:
                matchLabels:
                    app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'
(object)

Grafana dependency chart configuration. For possible values refer here

grafana.forceDeployDatasource: false
(bool)

Create datasource configmap even if grafana deployment has been disabled

grafana.ingress.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation-based services.

grafana.vmScrape:
    enabled: true
    spec:
        endpoints:
            - port: '{{ .Values.grafana.service.portName }}'
        selector:
            matchLabels:
                app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'
(object)

Grafana VM scrape config

grafana.vmScrape.spec:
    endpoints:
        - port: '{{ .Values.grafana.service.portName }}'
    selector:
        matchLabels:
            app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'
(object)

Scrape configuration for Grafana

kube-state-metrics:
    enabled: true
    vmScrape:
        enabled: true
        spec:
            endpoints:
                - honorLabels: true
                  metricRelabelConfigs:
                    - action: labeldrop
                      regex: (uid|container_id|image_id)
                  port: http
            jobLabel: app.kubernetes.io/name
            selector:
                matchLabels:
                    app.kubernetes.io/instance: '{{ include "vm.release" . }}'
                    app.kubernetes.io/name: '{{ include "kube-state-metrics.name" (index .Subcharts "kube-state-metrics") }}'
(object)

kube-state-metrics dependency chart configuration. For possible values check here

kube-state-metrics.vmScrape:
    enabled: true
    spec:
        endpoints:
            - honorLabels: true
              metricRelabelConfigs:
                - action: labeldrop
                  regex: (uid|container_id|image_id)
              port: http
        jobLabel: app.kubernetes.io/name
        selector:
            matchLabels:
                app.kubernetes.io/instance: '{{ include "vm.release" . }}'
                app.kubernetes.io/name: '{{ include "kube-state-metrics.name" (index .Subcharts "kube-state-metrics") }}'
(object)

Scrape configuration for Kube State Metrics

kubeApiServer.enabled: true
(bool)

Enable Kube Api Server metrics scraping

kubeApiServer.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: https
              scheme: https
              tlsConfig:
                caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                serverName: kubernetes
        jobLabel: component
        namespaceSelector:
            matchNames:
                - default
        selector:
            matchLabels:
                component: apiserver
                provider: kubernetes
(object)

Spec for VMServiceScrape CRD is here

kubeControllerManager.enabled: true
(bool)

Enable kube controller manager metrics scraping

kubeControllerManager.endpoints: []
(list)

If your kube controller manager is not deployed as a pod, specify IPs it can be found on
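For example, when the control plane runs outside the cluster (unmanaged or external nodes), static IPs can be listed so the chart-created service targets them — a sketch with placeholder addresses:

```yaml
# values.yaml — sketch for a control plane not running as pods.
# The IPs are placeholders for your control-plane nodes.
kubeControllerManager:
  enabled: true
  endpoints:
    - 10.0.0.10
    - 10.0.0.11
```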

kubeControllerManager.service.enabled: true
(bool)

Create service for kube controller manager metrics scraping

kubeControllerManager.service.port: 10257
(int)

Kube controller manager service port

kubeControllerManager.service.selector:
    component: kube-controller-manager
(object)

Kube controller manager service pod selector

kubeControllerManager.service.targetPort: 10257
(int)

Kube controller manager service target port

kubeControllerManager.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics
              scheme: https
              tlsConfig:
                caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                serverName: kubernetes
        jobLabel: jobLabel
        namespaceSelector:
            matchNames:
                - kube-system
(object)

Spec for VMServiceScrape CRD is here

kubeDns.enabled: false
(bool)

Enable KubeDNS metrics scraping

kubeDns.service.enabled: false
(bool)

Create Service for KubeDNS metrics

kubeDns.service.ports:
    dnsmasq:
        port: 10054
        targetPort: 10054
    skydns:
        port: 10055
        targetPort: 10055
(object)

KubeDNS service ports

kubeDns.service.selector:
    k8s-app: kube-dns
(object)

KubeDNS service pods selector

kubeDns.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics-dnsmasq
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics-skydns
        jobLabel: jobLabel
        namespaceSelector:
            matchNames:
                - kube-system
(object)

Spec for VMServiceScrape CRD is here

kubeEtcd.enabled: true
(bool)

Enable KubeETCD metrics scraping

kubeEtcd.endpoints: []
(list)

If your etcd is not deployed as a pod, specify IPs it can be found on

kubeEtcd.service.enabled: true
(bool)

Enable service for ETCD metrics scraping

kubeEtcd.service.port: 2379
(int)

ETCD service port

kubeEtcd.service.selector:
    component: etcd
(object)

ETCD service pods selector

kubeEtcd.service.targetPort: 2379
(int)

ETCD service target port

kubeEtcd.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics
              scheme: https
              tlsConfig:
                caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        jobLabel: jobLabel
        namespaceSelector:
            matchNames:
                - kube-system
(object)

Spec for VMServiceScrape CRD is here

kubeProxy.enabled: false
(bool)

Enable kube proxy metrics scraping

kubeProxy.endpoints: []
(list)

If your kube proxy is not deployed as a pod, specify IPs it can be found on

kubeProxy.service.enabled: true
(bool)

Enable service for kube proxy metrics scraping

kubeProxy.service.port: 10249
(int)

Kube proxy service port

kubeProxy.service.selector:
    k8s-app: kube-proxy
(object)

Kube proxy service pod selector

kubeProxy.service.targetPort: 10249
(int)

Kube proxy service target port

kubeProxy.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics
              scheme: https
              tlsConfig:
                caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        jobLabel: jobLabel
        namespaceSelector:
            matchNames:
                - kube-system
(object)

Spec for VMServiceScrape CRD is here

kubeScheduler.enabled: true
(bool)

Enable KubeScheduler metrics scraping

kubeScheduler.endpoints: []
(list)

If your kube scheduler is not deployed as a pod, specify IPs it can be found on

kubeScheduler.service.enabled: true
(bool)

Enable service for KubeScheduler metrics scraping

kubeScheduler.service.port: 10259
(int)

KubeScheduler service port

kubeScheduler.service.selector:
    component: kube-scheduler
(object)

KubeScheduler service pod selector

kubeScheduler.service.targetPort: 10259
(int)

KubeScheduler service target port

kubeScheduler.vmScrape:
    spec:
        endpoints:
            - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
              port: http-metrics
              scheme: https
              tlsConfig:
                caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        jobLabel: jobLabel
        namespaceSelector:
            matchNames:
                - kube-system
(object)

Spec for VMServiceScrape CRD is here

kubelet:
    enabled: true
    vmScrape:
        kind: VMNodeScrape
        spec:
            bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
            honorLabels: true
            honorTimestamps: false
            interval: 30s
            metricRelabelConfigs:
                - action: labeldrop
                  regex: (uid)
                - action: labeldrop
                  regex: (id|name)
                - action: drop
                  regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)
                  source_labels:
                    - __name__
            relabelConfigs:
                - action: labelmap
                  regex: __meta_kubernetes_node_label_(.+)
                - sourceLabels:
                    - __metrics_path__
                  targetLabel: metrics_path
                - replacement: kubelet
                  targetLabel: job
            scheme: https
            scrapeTimeout: 5s
            tlsConfig:
                caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                insecureSkipVerify: true
    vmScrapes:
        cadvisor:
            enabled: true
            spec:
                path: /metrics/cadvisor
        kubelet:
            spec: {}
        probes:
            enabled: true
            spec:
                path: /metrics/probes
        resources:
            enabled: true
            spec:
                path: /metrics/resource
(object)

Component scraping the kubelets

kubelet.vmScrape:
    kind: VMNodeScrape
    spec:
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        honorLabels: true
        honorTimestamps: false
        interval: 30s
        metricRelabelConfigs:
            - action: labeldrop
              regex: (uid)
            - action: labeldrop
              regex: (id|name)
            - action: drop
              regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)
              source_labels:
                - __name__
        relabelConfigs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - sourceLabels:
                - __metrics_path__
              targetLabel: metrics_path
            - replacement: kubelet
              targetLabel: job
        scheme: https
        scrapeTimeout: 5s
        tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecureSkipVerify: true
(object)

Spec for VMNodeScrape CRD is here

kubelet.vmScrapes.cadvisor:
    enabled: true
    spec:
        path: /metrics/cadvisor
(object)

Enable scraping /metrics/cadvisor from kubelet’s service

kubelet.vmScrapes.probes:
    enabled: true
    spec:
        path: /metrics/probes
(object)

Enable scraping /metrics/probes from kubelet’s service

kubelet.vmScrapes.resources:
    enabled: true
    spec:
        path: /metrics/resource
(object)

Enable scraping /metrics/resource from kubelet’s service

nameOverride: ""
(string)

Override chart name

prometheus-node-exporter:
    enabled: true
    extraArgs:
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
        - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|erofs|sysfs|tracefs)$
    service:
        labels:
            jobLabel: node-exporter
    vmScrape:
        enabled: true
        spec:
            endpoints:
                - metricRelabelConfigs:
                    - action: drop
                      regex: /var/lib/kubelet/pods.+
                      source_labels:
                        - mountpoint
                  port: metrics
            jobLabel: jobLabel
            selector:
                matchLabels:
                    app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'
(object)

prometheus-node-exporter dependency chart configuration. For possible values check here

prometheus-node-exporter.vmScrape:
    enabled: true
    spec:
        endpoints:
            - metricRelabelConfigs:
                - action: drop
                  regex: /var/lib/kubelet/pods.+
                  source_labels:
                    - mountpoint
              port: metrics
        jobLabel: jobLabel
        selector:
            matchLabels:
                app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'
(object)

Node Exporter VM scrape config

prometheus-node-exporter.vmScrape.spec:
    endpoints:
        - metricRelabelConfigs:
            - action: drop
              regex: /var/lib/kubelet/pods.+
              source_labels:
                - mountpoint
          port: metrics
    jobLabel: jobLabel
    selector:
        matchLabels:
            app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'
(object)

Scrape configuration for Node Exporter

tenant: "0"
(string)

Tenant to use for Grafana datasources and remote write

victoria-metrics-operator:
    crds:
        cleanup:
            enabled: true
            image:
                pullPolicy: IfNotPresent
                repository: bitnami/kubectl
        plain: true
    enabled: true
    operator:
        disable_prometheus_converter: false
    serviceMonitor:
        enabled: true
(object)

VictoriaMetrics Operator dependency chart configuration. More values can be found here. Also check here for possible ENV variables to configure operator behaviour

victoria-metrics-operator.operator.disable_prometheus_converter: false
(bool)

By default, the operator converts prometheus-operator objects.

vmagent.additionalRemoteWrites: []
(list)

Remote write configuration of VMAgent; allowed parameters are defined in the spec
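As an illustration, an extra remote write target with basic auth credentials taken from a Secret might look like this — the URL and Secret name are placeholders, not chart defaults:

```yaml
# values.yaml — sketch of an additional remote write target for VMAgent.
# URL and secret name are illustrative.
vmagent:
  additionalRemoteWrites:
    - url: http://vminsert.example.com:8480/insert/0/prometheus/api/v1/write
      basicAuth:
        username:
          name: remote-write-creds   # hypothetical Secret
          key: username
        password:
          name: remote-write-creds
          key: password
```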

vmagent.annotations: {}
(object)

VMAgent annotations

vmagent.enabled: true
(bool)

Create VMAgent CR

vmagent.ingress:
    annotations: {}
    enabled: false
    extraPaths: []
    hosts:
        - vmagent.domain.com
    labels: {}
    path: ""
    pathType: Prefix
    tls: []
(object)

VMAgent ingress configuration

vmagent.spec:
    externalLabels: {}
    extraArgs:
        promscrape.dropOriginalLabels: "true"
        promscrape.streamParse: "true"
    port: "8429"
    scrapeInterval: 20s
    selectAllByDefault: true
(object)

Full spec for VMAgent CRD. Allowed values described here

vmalert.additionalNotifierConfigs: {}
(object)

Allows configuring static notifiers and discovering notifiers via Consul and DNS; see the specification here. This configuration is created as a separate secret and mounted to the VMAlert pod.

vmalert.annotations: {}
(object)

VMAlert annotations

vmalert.enabled: true
(bool)

Create VMAlert CR

vmalert.ingress:
    annotations: {}
    enabled: false
    extraPaths: []
    hosts:
        - vmalert.domain.com
    labels: {}
    path: ""
    pathType: Prefix
    tls: []
(object)

VMAlert ingress config

vmalert.ingress.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation-based services.

vmalert.remoteWriteVMAgent: false
(bool)

Controls whether VMAlert should use VMAgent or VMInsert as a target for remote write

vmalert.spec:
    evaluationInterval: 20s
    externalLabels: {}
    extraArgs:
        http.pathPrefix: /
    port: "8080"
    selectAllByDefault: true
(object)

Full spec for VMAlert CRD. Allowed values described here

vmalert.templateFiles: {}
(object)

Extra VMAlert annotation templates

vmauth.annotations: {}
(object)

VMAuth annotations

vmauth.enabled: false
(bool)

Enable VMAuth CR

vmauth.spec:
    port: "8427"
    unauthorizedUserAccessSpec:
        disabled: false
        discover_backend_ips: true
        url_map:
            - src_paths:
                - '{{ .vm.read.path }}/.*'
              url_prefix:
                - '{{ urlJoin (omit .vm.read "path") }}/'
            - src_paths:
                - '{{ .vm.write.path }}/.*'
              url_prefix:
                - '{{ urlJoin (omit .vm.write "path") }}/'
(object)

Full spec for VMAuth CRD. Allowed values are described here. The following predefined variables can be used in the spec:
* {{ .vm.read }} - parsed vmselect, vmsingle or external.vm.read URL
* {{ .vm.write }} - parsed vminsert, vmsingle or external.vm.write URL

vmauth.spec.unauthorizedUserAccessSpec.disabled: false
(bool)

Flag that allows disabling the default VMAuth unauthorized user access config

vmcluster.annotations: {}
(object)

VMCluster annotations

vmcluster.enabled: false
(bool)

Create VMCluster CR

vmcluster.ingress.insert.annotations: {}
(object)

Ingress annotations

vmcluster.ingress.insert.enabled: false
(bool)

Enable deployment of ingress for server component

vmcluster.ingress.insert.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation-based services.

vmcluster.ingress.insert.hosts: []
(list)

Array of host objects

vmcluster.ingress.insert.ingressClassName: ""
(string)

Ingress controller class name

vmcluster.ingress.insert.labels: {}
(object)

Ingress extra labels

vmcluster.ingress.insert.path: '{{ dig "extraArgs" "http.pathPrefix" "/" .Values.vmcluster.spec.vminsert }}'
(string)

Ingress default path

vmcluster.ingress.insert.pathType: Prefix
(string)

Ingress path type

vmcluster.ingress.insert.tls: []
(list)

Array of TLS objects

vmcluster.ingress.select.annotations: {}
(object)

Ingress annotations

vmcluster.ingress.select.enabled: false
(bool)

Enable deployment of ingress for server component

vmcluster.ingress.select.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation-based services.

vmcluster.ingress.select.hosts: []
(list)

Array of host objects

vmcluster.ingress.select.ingressClassName: ""
(string)

Ingress controller class name

vmcluster.ingress.select.labels: {}
(object)

Ingress extra labels

vmcluster.ingress.select.path: '{{ dig "extraArgs" "http.pathPrefix" "/" .Values.vmcluster.spec.vmselect }}'
(string)

Ingress default path

vmcluster.ingress.select.pathType: Prefix
(string)

Ingress path type

vmcluster.ingress.select.tls: []
(list)

Array of TLS objects

vmcluster.ingress.storage.annotations: {}
(object)

Ingress annotations

vmcluster.ingress.storage.enabled: false
(bool)

Enable deployment of ingress for server component

vmcluster.ingress.storage.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation-based services.

vmcluster.ingress.storage.hosts: []
(list)

Array of host objects

vmcluster.ingress.storage.ingressClassName: ""
(string)

Ingress controller class name

vmcluster.ingress.storage.labels: {}
(object)

Ingress extra labels

vmcluster.ingress.storage.path: ""
(string)

Ingress default path

vmcluster.ingress.storage.pathType: Prefix
(string)

Ingress path type

vmcluster.ingress.storage.tls: []
(list)

Array of TLS objects

vmcluster.spec:
    replicationFactor: 2
    retentionPeriod: "1"
    vminsert:
        enabled: true
        extraArgs: {}
        port: "8480"
        replicaCount: 2
        resources: {}
    vmselect:
        cacheMountPath: /select-cache
        enabled: true
        extraArgs: {}
        port: "8481"
        replicaCount: 2
        resources: {}
        storage:
            volumeClaimTemplate:
                spec:
                    resources:
                        requests:
                            storage: 2Gi
    vmstorage:
        replicaCount: 2
        resources: {}
        storage:
            volumeClaimTemplate:
                spec:
                    resources:
                        requests:
                            storage: 10Gi
        storageDataPath: /vm-data
(object)

Full spec for VMCluster CRD. Allowed values described here

vmcluster.spec.retentionPeriod: "1"
(string)

Data retention period. Possible unit characters: h (hours), d (days), w (weeks), y (years); if no unit character is specified, the value is treated as months. The minimum retention period is 24h. See these docs
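Following the unit rules above, some example values (shown as comments; pick one):

```yaml
# values.yaml — retentionPeriod examples.
vmcluster:
  spec:
    retentionPeriod: "3"      # 3 months (no unit character)
    # retentionPeriod: "30d"  # 30 days
    # retentionPeriod: "4w"   # 4 weeks
    # retentionPeriod: "1y"   # 1 year
```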

vmcluster.spec.vminsert.enabled: true
(bool)

Set this value to false to disable VMInsert

vmcluster.spec.vmselect.enabled: true
(bool)

Set this value to false to disable VMSelect

vmsingle.annotations: {}
(object)

VMSingle annotations

vmsingle.enabled: true
(bool)

Create VMSingle CR

vmsingle.ingress.annotations: {}
(object)

Ingress annotations

vmsingle.ingress.enabled: false
(bool)

Enable deployment of ingress for server component

vmsingle.ingress.extraPaths: []
(list)

Extra paths to prepend to every host configuration. This is useful when working with annotation-based services.

vmsingle.ingress.hosts: []
(list)

Array of host objects

vmsingle.ingress.ingressClassName: ""
(string)

Ingress controller class name

vmsingle.ingress.labels: {}
(object)

Ingress extra labels

vmsingle.ingress.path: ""
(string)

Ingress default path

vmsingle.ingress.pathType: Prefix
(string)

Ingress path type

vmsingle.ingress.tls: []
(list)

Array of TLS objects

vmsingle.spec:
    extraArgs: {}
    port: "8429"
    replicaCount: 1
    retentionPeriod: "1"
    storage:
        accessModes:
            - ReadWriteOnce
        resources:
            requests:
                storage: 20Gi
(object)

Full spec for VMSingle CRD. Allowed values described here

vmsingle.spec.retentionPeriod: "1"
(string)

Data retention period. Possible unit characters: h (hours), d (days), w (weeks), y (years); if no unit character is specified, the value is treated as months. The minimum retention period is 24h. See these docs