Type: application · Version: 0.28.4

Kubernetes monitoring on VictoriaMetrics stack. Includes VictoriaMetrics Operator, Grafana dashboards, ServiceScrapes and VMRules

Overview #

This chart is an all-in-one solution for monitoring a Kubernetes cluster. It installs multiple dependency charts, such as grafana, node-exporter, kube-state-metrics and victoria-metrics-operator, and also installs Custom Resources such as VMSingle, VMCluster, VMAgent and VMAlert.

By default, the operator converts all existing prometheus-operator API objects into corresponding VictoriaMetrics Operator objects.

To enable metrics collection for Kubernetes, this chart installs multiple scrape configurations for Kubernetes components such as kubelet, kube-proxy, etc. Metrics collection is done by VMAgent. So if you want to ship metrics to an external VictoriaMetrics database, you can disable the VMSingle installation by setting vmsingle.enabled to false and pointing vmagent.spec.remoteWrite to your external VictoriaMetrics database.
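For example, a minimal values.yaml sketch for shipping metrics to an external VictoriaMetrics database (the URL below is a placeholder for your own endpoint):

```yaml
# disable the bundled single-node VictoriaMetrics
vmsingle:
  enabled: false

# ship collected metrics to an external database instead
vmagent:
  spec:
    remoteWrite:
      - url: "https://your-victoriametrics.example.com/api/v1/write"
```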

This chart also installs a set of dashboards and recording rules from the kube-prometheus project.


Configuration #

Configuration of this chart is done through helm values.

Dependencies #

Dependencies can be enabled or disabled by setting enabled to true or false in values.yaml file.

!Important: for dependency charts, anything you can find in the values.yaml of a dependency chart can be configured in this chart under the key for that dependency. For example, if you want to configure grafana, you can find all possible configuration options in its values.yaml and set them in the values of this chart under the grafana: key. For example, to configure grafana.persistence.enabled, set it in values.yaml like this:

#################################################
###              dependencies               #####
#################################################
# Grafana dependency chart configuration. For possible values refer to https://github.com/grafana/helm-charts/tree/main/charts/grafana#configuration
grafana:
  enabled: true
  persistence:
    type: pvc
    enabled: false

VictoriaMetrics components #

This chart installs multiple VictoriaMetrics components using Custom Resources that are managed by victoria-metrics-operator. Each resource can be configured using the spec of that resource from the victoria-metrics-operator API docs. For example, if you want to configure VMAgent, you can find all possible configuration options in the API docs and set them in the values of this chart under the vmagent.spec key. For example, to configure remoteWrite.url, set it in values.yaml like this:

vmagent:
  spec:
    remoteWrite:
      - url: "https://insert.vmcluster.domain.com/insert/0/prometheus/api/v1/write"

ArgoCD issues #

Operator self signed certificates #

When deploying the K8s stack using ArgoCD without Cert Manager (.Values.victoria-metrics-operator.admissionWebhooks.certManager.enabled: false), it will re-render the operator's webhook certificates on each sync, since the Helm lookup function is not respected by ArgoCD. To prevent this, please update your K8s stack Application spec.syncPolicy and spec.ignoreDifferences with the following:

apiVersion: argoproj.io/v1alpha1
kind: Application
...
spec:
  ...
  syncPolicy:
    syncOptions:
    # https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#respect-ignore-difference-configs
    # argocd must also ignore differences during the apply stage,
    # otherwise it'll silently override changes and cause a problem
    - RespectIgnoreDifferences=true
  ignoreDifferences:
    - group: ""
      kind: Secret
      name: <fullname>-validation
      namespace: kube-system
      jsonPointers:
        - /data
    - group: admissionregistration.k8s.io
      kind: ValidatingWebhookConfiguration
      name: <fullname>-admission
      jqPathExpressions:
      - '.webhooks[]?.clientConfig.caBundle'

where <fullname> is the output of {{ include "vm-operator.fullname" }} for your setup.
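If you are unsure which names were actually rendered, one way to discover them in a running cluster (assuming the webhook secret lives in kube-system, as above) is:

```shell
# list the operator's webhook secret and the admission webhook configuration
kubectl get secrets -n kube-system | grep -- '-validation'
kubectl get validatingwebhookconfigurations | grep -- '-admission'
```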

metadata.annotations: Too long: must have at most 262144 bytes on dashboards #

If one of the dashboard ConfigMaps is failing with the error Too long: must have at most 262144 bytes, please make sure you've added the argocd.argoproj.io/sync-options: ServerSideApply=true annotation to your dashboards:

grafana:
  sidecar:
    dashboards:
      additionalDashboardAnnotations:
        argocd.argoproj.io/sync-options: ServerSideApply=true


Rules and dashboards #

By default, this chart installs multiple dashboards and recording rules from kube-prometheus. You can disable dashboards with defaultDashboardsEnabled: false and experimentalDashboardsEnabled: false, and rules can be configured under defaultRules.

Adding external dashboards #

By default, this chart uses a sidecar in order to provision default dashboards. If you want to add your own dashboards, there are two ways to do it:

  • Add dashboards by creating a ConfigMap. An example ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    grafana_dashboard: "1"
  name: grafana-dashboard
data:
  dashboard.json: |-
      {...}      
  • Use init container provisioning. Note that this option requires disabling the sidecar and will remove all default dashboards provided with this chart. An example configuration:
grafana:
  sidecar:
    dashboards:
      enabled: false
  dashboards:
    vmcluster:
      gnetId: 11176
      revision: 38
      datasource: VictoriaMetrics

When using this approach, you can find dashboards for VictoriaMetrics components published here.
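For the ConfigMap approach above, you can also generate the ConfigMap from a local dashboard JSON file with kubectl instead of writing it by hand (the file name dashboard.json is a placeholder):

```shell
# create the ConfigMap from a local dashboard file,
# then add the label the sidecar watches for
kubectl create configmap grafana-dashboard --from-file=dashboard.json
kubectl label configmap grafana-dashboard grafana_dashboard=1
```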

Prometheus scrape configs #

This chart installs multiple scrape configurations for Kubernetes monitoring. They are configured under the ServiceMonitors section in the values.yaml file. For example, if you want to configure the scrape config for kubelet, set it in values.yaml like this:

kubelet:
  enabled: true
  # spec for VMNodeScrape crd
  # https://docs.victoriametrics.com/operator/api#vmnodescrapespec
  spec:
    interval: "30s"

Using externally managed Grafana #

If you want to use an externally managed Grafana instance but still want to use the dashboards provided by this chart, you can set grafana.enabled to false and set defaultDashboardsEnabled to true. This will install the dashboards but will not install Grafana.

For example:

defaultDashboardsEnabled: true

grafana:
  enabled: false

This will create ConfigMaps with dashboards to be imported into Grafana.
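To extract the generated dashboards for import into your external Grafana, one option is to dump the ConfigMaps (assuming they carry the grafana_dashboard label shown in the ConfigMap example above; the ConfigMap name is a placeholder):

```shell
# list dashboard ConfigMaps created by the chart,
# then dump the dashboard JSON from one of them
kubectl get configmaps -l grafana_dashboard=1
kubectl get configmap <configmap-name> -o jsonpath='{.data}' > dashboard.json
```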

If additional labels or annotations are needed in order to import the dashboards into an existing Grafana, you can set .grafana.sidecar.dashboards.additionalDashboardLabels or .grafana.sidecar.dashboards.additionalDashboardAnnotations in values.yaml:

For example:

defaultDashboardsEnabled: true

grafana:
  enabled: false
  sidecar:
    dashboards:
      additionalDashboardLabels:
        key: value
      additionalDashboardAnnotations:
        key: value

Prerequisites #

  • Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.

  • Add dependency chart repositories

helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
  • PV support on underlying infrastructure.

How to install #

Access a Kubernetes cluster.

Setup chart repository (can be omitted for OCI repositories) #

Add the chart helm repository with the following commands:

helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update

List the versions of the vm/victoria-metrics-k8s-stack chart available for installation:

helm search repo vm/victoria-metrics-k8s-stack -l

Install victoria-metrics-k8s-stack chart #

Export default values of victoria-metrics-k8s-stack chart to file values.yaml:

  • For HTTPS repository

    helm show values vm/victoria-metrics-k8s-stack > values.yaml
    
  • For OCI repository

    helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack > values.yaml
    

Change the values in the values.yaml file according to the needs of your environment.

Test the installation with command:

  • For HTTPS repository

    helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE --debug --dry-run
    
  • For OCI repository

    helm install vmks oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE --debug --dry-run
    

Install chart with command:

  • For HTTPS repository

    helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE
    
  • For OCI repository

    helm install vmks oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE
    

Get the list of pods by running this command:

kubectl get pods -A | grep 'vmks'

Get the application by running this command:

helm list -f vmks -n NAMESPACE

See the version history of the vmks release with this command:

helm history vmks -n NAMESPACE

Install locally (Minikube) #

To run the VictoriaMetrics stack locally, it's possible to use Minikube. To avoid issues with dashboards and alert rules, please follow the steps below:

Run Minikube cluster

minikube start --container-runtime=containerd --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0 --extra-config=etcd.listen-metrics-urls=http://0.0.0.0:2381

Install helm chart

helm install [RELEASE_NAME] vm/victoria-metrics-k8s-stack -f values.yaml -f values.minikube.yaml -n NAMESPACE --debug --dry-run

How to uninstall #

Remove the application with this command:

helm uninstall vmks -n NAMESPACE

CRDs created by this chart are not removed by default and should be manually cleaned up:

kubectl get crd | grep victoriametrics.com | awk '{print $1 }' | xargs -i kubectl delete crd {}

Troubleshooting #

  • If you cannot install the helm chart and get the error configmap already exists, it could be caused by name collisions from a release name that is too long. Kubernetes by default allows only 63 characters in resource names, and all resource names are trimmed by helm to 63 characters. To mitigate this, use a shorter helm chart release name, like:
# stack - is short enough
helm upgrade -i stack vm/victoria-metrics-k8s-stack

Or use an override for the helm chart release name:

helm upgrade -i some-very-long-name vm/victoria-metrics-k8s-stack --set fullnameOverride=stack

Upgrade guide #

Usually, helm upgrade doesn't require manual actions. Just execute the command:

$ helm upgrade [RELEASE_NAME] vm/victoria-metrics-k8s-stack

But a release with a CRD update can only be patched manually with kubectl. Since helm does not perform CRD updates, we recommend that you always perform this step when updating the helm chart version:

# 1. check the changes in CRD
$ helm show crds vm/victoria-metrics-k8s-stack --version [YOUR_CHART_VERSION] | kubectl diff -f -

# 2. apply the changes (update CRD)
$ helm show crds vm/victoria-metrics-k8s-stack --version [YOUR_CHART_VERSION] | kubectl apply -f - --server-side

All other upgrades requiring manual actions are listed below:

Upgrade to 0.13.0 #

  • node-exporter starting from version 4.0.0 uses the Kubernetes recommended labels. Therefore, you have to delete the daemonset before you upgrade:
kubectl delete daemonset -l app=prometheus-node-exporter
  • scrape configuration for Kubernetes components was moved from the vmServiceScrape.spec section to the spec section. If you previously modified the scrape configuration, you need to update your values.yaml

  • grafana.defaultDashboardsEnabled was renamed to defaultDashboardsEnabled (moved to top level). You may need to update it in your values.yaml
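The scrape configuration move above can be sketched as follows (using kubelet and a hypothetical interval value as an example):

```yaml
# before 0.13.0
kubelet:
  vmServiceScrape:
    spec:
      interval: "30s"

# since 0.13.0
kubelet:
  spec:
    interval: "30s"
```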

Upgrade to 0.6.0 #

All CRDs must be updated to the latest version with this command:

kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/helm-charts/master/charts/victoria-metrics-k8s-stack/crds/crd.yaml

Upgrade to 0.4.0 #

All CRDs must be updated to the v1 version with this command:

kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/helm-charts/master/charts/victoria-metrics-k8s-stack/crds/crd.yaml

Upgrade from 0.2.8 to 0.2.9 #

Update the VMAgent CRD with this command:

kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.16.0/config/crd/bases/operator.victoriametrics.com_vmagents.yaml

Upgrade from 0.2.5 to 0.2.6 #

New CRDs (VMUser and VMAuth) were added to the operator, and new fields were added to existing CRDs. Apply them manually with these commands:

kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmusers.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmauths.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmalerts.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmagents.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmsingles.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmclusters.yaml

Documentation of Helm Chart #

Install helm-docs following the instructions on this tutorial.

Generate the docs with the helm-docs command:

cd charts/victoria-metrics-k8s-stack

helm-docs

The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.
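As a sketch, a minimal README.md.gotmpl could reference the built-in helm-docs sub-templates like this (template names per the helm-docs documentation):

```gotmpl
{{ template "chart.header" . }}
{{ template "chart.description" . }}

{{ template "chart.valuesSection" . }}
```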

Parameters #

The following table lists the configurable parameters of the chart and their default values.

Change the values in the victoria-metrics-k8s-stack/values.yaml file according to the needs of your environment.

Key · Type · Default · Description
additionalVictoriaMetricsMapstring
null

Provide custom recording or alerting rules to be deployed into the cluster.

alertmanager.annotationsobject
{}

Alertmanager annotations

alertmanager.configobject
receivers:
    - name: blackhole
route:
    receiver: blackhole
templates:
    - /etc/vm/configs/**/*.tmpl

Alertmanager configuration

alertmanager.enabledbool
true

Create VMAlertmanager CR

alertmanager.ingressobject
annotations: {}
enabled: false
extraPaths: []
hosts:
    - alertmanager.domain.com
labels: {}
path: '{{ .Values.alertmanager.spec.routePrefix | default "/" }}'
pathType: Prefix
tls: []

Alertmanager ingress configuration

alertmanager.ingress.extraPathslist
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

alertmanager.monzoTemplateobject
enabled: true

Better alert templates for slack source

alertmanager.specobject
configSecret: ""
externalURL: ""
image:
    tag: v0.27.0
port: "9093"
replicaCount: 1
routePrefix: /
selectAllByDefault: true

Full spec for VMAlertmanager CRD. Allowed values described here

alertmanager.spec.configSecretstring
""

If this is defined, it will be used for the alertmanager configuration, and the config parameter will be ignored

alertmanager.templateFilesobject
{}

Extra alert templates

argocdReleaseOverridestring
""

If this chart is used in ArgoCD with the releaseName field, VMServiceScrapes can't select the proper services. For correct operation, set the value argocdReleaseOverride=$ARGOCD_APP_NAME

coreDns.enabledbool
true

Enable CoreDNS metrics scraping

coreDns.service.enabledbool
true

Create service for CoreDNS metrics

coreDns.service.portint
9153

CoreDNS service port

coreDns.service.selectorobject
k8s-app: kube-dns

CoreDNS service pod selector

coreDns.service.targetPortint
9153

CoreDNS service target port

coreDns.vmScrapeobject
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics
    jobLabel: jobLabel
    namespaceSelector:
        matchNames:
            - kube-system

Spec for VMServiceScrape CRD is here

defaultDashboards.annotationsobject
{}

defaultDashboards.dashboardsobject
node-exporter-full:
    enabled: true
victoriametrics-operator:
    enabled: false
victoriametrics-vmalert:
    enabled: false

Create dashboards as ConfigMaps even if the dependency they require is not installed

defaultDashboards.dashboards.node-exporter-fullobject
enabled: true

In ArgoCD with client-side apply, this dashboard reaches the annotation size limit and causes k8s issues without server-side apply. See this issue

defaultDashboards.defaultTimezonestring
utc

defaultDashboards.enabledbool
true

Enable custom dashboards installation

defaultDashboards.grafanaOperator.enabledbool
false

Create dashboards as CRDs (requires grafana-operator to be installed)

defaultDashboards.grafanaOperator.spec.allowCrossNamespaceImportbool
false

defaultDashboards.grafanaOperator.spec.instanceSelector.matchLabels.dashboardsstring
grafana

defaultDashboards.labelsobject
{}

defaultDatasources.alertmanagerobject
datasources:
    - access: proxy
      jsonData:
        implementation: prometheus
      name: Alertmanager
perReplica: false

List of alertmanager datasources. Alertmanager generated url will be added to each datasource in template if alertmanager is enabled

defaultDatasources.alertmanager.perReplicabool
false

Create per replica alertmanager compatible datasource

defaultDatasources.extralist
[]

Configure additional grafana datasources (passed through tpl). Check here for details

defaultDatasources.victoriametrics.datasourceslist
- isDefault: true
  name: VictoriaMetrics
  type: prometheus
- isDefault: false
  name: VictoriaMetrics (DS)
  type: victoriametrics-datasource

List of prometheus compatible datasource configurations. VM url will be added to each of them in templates.

defaultDatasources.victoriametrics.perReplicabool
false

Create per replica prometheus compatible datasource

defaultRulesobject
alerting:
    spec:
        annotations: {}
        labels: {}
annotations: {}
create: true
group:
    spec:
        params: {}
groups:
    alertmanager:
        create: true
        rules: {}
    etcd:
        create: true
        rules: {}
    general:
        create: true
        rules: {}
    k8sContainerCpuUsageSecondsTotal:
        create: true
        rules: {}
    k8sContainerMemoryCache:
        create: true
        rules: {}
    k8sContainerMemoryRss:
        create: true
        rules: {}
    k8sContainerMemorySwap:
        create: true
        rules: {}
    k8sContainerMemoryWorkingSetBytes:
        create: true
        rules: {}
    k8sContainerResource:
        create: true
        rules: {}
    k8sPodOwner:
        create: true
        rules: {}
    kubeApiserver:
        create: true
        rules: {}
    kubeApiserverAvailability:
        create: true
        rules: {}
    kubeApiserverBurnrate:
        create: true
        rules: {}
    kubeApiserverHistogram:
        create: true
        rules: {}
    kubeApiserverSlos:
        create: true
        rules: {}
    kubePrometheusGeneral:
        create: true
        rules: {}
    kubePrometheusNodeRecording:
        create: true
        rules: {}
    kubeScheduler:
        create: true
        rules: {}
    kubeStateMetrics:
        create: true
        rules: {}
    kubelet:
        create: true
        rules: {}
    kubernetesApps:
        create: true
        rules: {}
        targetNamespace: .*
    kubernetesResources:
        create: true
        rules: {}
    kubernetesStorage:
        create: true
        rules: {}
        targetNamespace: .*
    kubernetesSystem:
        create: true
        rules: {}
    kubernetesSystemApiserver:
        create: true
        rules: {}
    kubernetesSystemControllerManager:
        create: true
        rules: {}
    kubernetesSystemKubelet:
        create: true
        rules: {}
    kubernetesSystemScheduler:
        create: true
        rules: {}
    node:
        create: true
        rules: {}
    nodeNetwork:
        create: true
        rules: {}
    vmHealth:
        create: true
        rules: {}
    vmagent:
        create: true
        rules: {}
    vmcluster:
        create: true
        rules: {}
    vmoperator:
        create: true
        rules: {}
    vmsingle:
        create: true
        rules: {}
labels: {}
recording:
    spec:
        annotations: {}
        labels: {}
rule:
    spec:
        annotations: {}
        labels: {}
rules: {}
runbookUrl: https://runbooks.prometheus-operator.dev/runbooks

Create default rules for monitoring the cluster

defaultRules.alertingobject
spec:
    annotations: {}
    labels: {}

Common properties for VMRules alerts

defaultRules.alerting.spec.annotationsobject
{}

Additional annotations for VMRule alerts

defaultRules.alerting.spec.labelsobject
{}

Additional labels for VMRule alerts

defaultRules.annotationsobject
{}

Annotations for default rules

defaultRules.groupobject
spec:
    params: {}

Common properties for VMRule groups

defaultRules.group.spec.paramsobject
{}

Optional HTTP URL parameters added to each rule request

defaultRules.groupsobject
alertmanager:
    create: true
    rules: {}
etcd:
    create: true
    rules: {}
general:
    create: true
    rules: {}
k8sContainerCpuUsageSecondsTotal:
    create: true
    rules: {}
k8sContainerMemoryCache:
    create: true
    rules: {}
k8sContainerMemoryRss:
    create: true
    rules: {}
k8sContainerMemorySwap:
    create: true
    rules: {}
k8sContainerMemoryWorkingSetBytes:
    create: true
    rules: {}
k8sContainerResource:
    create: true
    rules: {}
k8sPodOwner:
    create: true
    rules: {}
kubeApiserver:
    create: true
    rules: {}
kubeApiserverAvailability:
    create: true
    rules: {}
kubeApiserverBurnrate:
    create: true
    rules: {}
kubeApiserverHistogram:
    create: true
    rules: {}
kubeApiserverSlos:
    create: true
    rules: {}
kubePrometheusGeneral:
    create: true
    rules: {}
kubePrometheusNodeRecording:
    create: true
    rules: {}
kubeScheduler:
    create: true
    rules: {}
kubeStateMetrics:
    create: true
    rules: {}
kubelet:
    create: true
    rules: {}
kubernetesApps:
    create: true
    rules: {}
    targetNamespace: .*
kubernetesResources:
    create: true
    rules: {}
kubernetesStorage:
    create: true
    rules: {}
    targetNamespace: .*
kubernetesSystem:
    create: true
    rules: {}
kubernetesSystemApiserver:
    create: true
    rules: {}
kubernetesSystemControllerManager:
    create: true
    rules: {}
kubernetesSystemKubelet:
    create: true
    rules: {}
kubernetesSystemScheduler:
    create: true
    rules: {}
node:
    create: true
    rules: {}
nodeNetwork:
    create: true
    rules: {}
vmHealth:
    create: true
    rules: {}
vmagent:
    create: true
    rules: {}
vmcluster:
    create: true
    rules: {}
vmoperator:
    create: true
    rules: {}
vmsingle:
    create: true
    rules: {}

Rule group properties

defaultRules.groups.etcd.rulesobject
{}

Common properties for all rules in a group

defaultRules.labelsobject
{}

Labels for default rules

defaultRules.recordingobject
spec:
    annotations: {}
    labels: {}

Common properties for VMRules recording rules

defaultRules.recording.spec.annotationsobject
{}

Additional annotations for VMRule recording rules

defaultRules.recording.spec.labelsobject
{}

Additional labels for VMRule recording rules

defaultRules.ruleobject
spec:
    annotations: {}
    labels: {}

Common properties for all VMRules

defaultRules.rule.spec.annotationsobject
{}

Additional annotations for all VMRules

defaultRules.rule.spec.labelsobject
{}

Additional labels for all VMRules

defaultRules.rulesobject
{}

Per rule properties

defaultRules.runbookUrlstring
https://runbooks.prometheus-operator.dev/runbooks

Runbook url prefix for default rules

externalVMobject
read:
    url: ""
vmauth:
    read:
        - src_paths:
            - /select/.*
          url_prefix:
            - /
    write:
        - src_paths:
            - /insert/.*
          url_prefix:
            - /
write:
    url: ""

External VM read and write URLs

externalVM.vmauthobject
read:
    - src_paths:
        - /select/.*
      url_prefix:
        - /
write:
    - src_paths:
        - /insert/.*
      url_prefix:
        - /

Custom VMAuth config; url_prefix requires only a path, which will be appended to the read and write base URLs. To disable auth for read or write, set an empty list for the component config: externalVM.vmauth.<component>: []

extraObjectslist
[]

Add extra objects dynamically to this chart

fullnameOverridestring
""

Resource full name prefix override

global.cluster.dnsDomainstring
cluster.local.

K8s cluster domain suffix, used for building storage pods' FQDNs. Details are here

global.clusterLabelstring
cluster

Cluster label to use for dashboards and rules

global.licenseobject
key: ""
keyRef: {}

Global license configuration

grafanaobject
enabled: true
forceDeployDatasource: false
ingress:
    annotations: {}
    enabled: false
    extraPaths: []
    hosts:
        - grafana.domain.com
    labels: {}
    path: /
    pathType: Prefix
    tls: []
sidecar:
    dashboards:
        defaultFolderName: default
        enabled: true
        folder: /var/lib/grafana/dashboards
        multicluster: false
        provider:
            name: default
            orgid: 1
    datasources:
        enabled: true
        initDatasources: true
vmScrape:
    enabled: true
    spec:
        endpoints:
            - port: '{{ .Values.grafana.service.portName }}'
        selector:
            matchLabels:
                app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'

Grafana dependency chart configuration. For possible values refer here

grafana.forceDeployDatasourcebool
false

Create datasource configmap even if grafana deployment has been disabled

grafana.ingress.extraPathslist
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

grafana.vmScrapeobject
enabled: true
spec:
    endpoints:
        - port: '{{ .Values.grafana.service.portName }}'
    selector:
        matchLabels:
            app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'

Grafana VM scrape config

grafana.vmScrape.specobject
endpoints:
    - port: '{{ .Values.grafana.service.portName }}'
selector:
    matchLabels:
        app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'

Scrape configuration for Grafana

kube-state-metricsobject
enabled: true
vmScrape:
    enabled: true
    spec:
        endpoints:
            - honorLabels: true
              metricRelabelConfigs:
                - action: labeldrop
                  regex: (uid|container_id|image_id)
              port: http
        jobLabel: app.kubernetes.io/name
        selector:
            matchLabels:
                app.kubernetes.io/instance: '{{ include "vm.release" . }}'
                app.kubernetes.io/name: '{{ include "kube-state-metrics.name" (index .Subcharts "kube-state-metrics") }}'

kube-state-metrics dependency chart configuration. For possible values check here

kube-state-metrics.vmScrapeobject
enabled: true
spec:
    endpoints:
        - honorLabels: true
          metricRelabelConfigs:
            - action: labeldrop
              regex: (uid|container_id|image_id)
          port: http
    jobLabel: app.kubernetes.io/name
    selector:
        matchLabels:
            app.kubernetes.io/instance: '{{ include "vm.release" . }}'
            app.kubernetes.io/name: '{{ include "kube-state-metrics.name" (index .Subcharts "kube-state-metrics") }}'

Scrape configuration for Kube State Metrics

kubeApiServer.enabledbool
true

Enable Kube Api Server metrics scraping

kubeApiServer.vmScrapeobject
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: https
          scheme: https
          tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            serverName: kubernetes
    jobLabel: component
    namespaceSelector:
        matchNames:
            - default
    selector:
        matchLabels:
            component: apiserver
            provider: kubernetes

Spec for VMServiceScrape CRD is here

kubeControllerManager.enabledbool
true

Enable kube controller manager metrics scraping

kubeControllerManager.endpointslist
[]

If your kube controller manager is not deployed as a pod, specify IPs it can be found on

kubeControllerManager.service.enabledbool
true

Create service for kube controller manager metrics scraping

kubeControllerManager.service.portint
10257

Kube controller manager service port

kubeControllerManager.service.selectorobject
component: kube-controller-manager

Kube controller manager service pod selector

kubeControllerManager.service.targetPortint
10257

Kube controller manager service target port

kubeControllerManager.vmScrapeobject
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics
          scheme: https
          tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            serverName: kubernetes
    jobLabel: jobLabel
    namespaceSelector:
        matchNames:
            - kube-system

Spec for VMServiceScrape CRD is here

kubeDns.enabledbool
false

Enable KubeDNS metrics scraping

kubeDns.service.enabledbool
false

Create Service for KubeDNS metrics

kubeDns.service.portsobject
dnsmasq:
    port: 10054
    targetPort: 10054
skydns:
    port: 10055
    targetPort: 10055

KubeDNS service ports

kubeDns.service.selectorobject
k8s-app: kube-dns

KubeDNS service pods selector

kubeDns.vmScrapeobject
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics-dnsmasq
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics-skydns
    jobLabel: jobLabel
    namespaceSelector:
        matchNames:
            - kube-system

Spec for VMServiceScrape CRD is here

kubeEtcd.enabledbool
true

Enable KubeETCD metrics scraping

kubeEtcd.endpointslist
[]

If your etcd is not deployed as a pod, specify IPs it can be found on

kubeEtcd.service.enabledbool
true

Enable service for ETCD metrics scraping

kubeEtcd.service.portint
2379

ETCD service port

kubeEtcd.service.selectorobject
component: etcd

ETCD service pods selector

kubeEtcd.service.targetPortint
2379

ETCD service target port

kubeEtcd.vmScrapeobject
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics
          scheme: https
          tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    jobLabel: jobLabel
    namespaceSelector:
        matchNames:
            - kube-system

Spec for VMServiceScrape CRD is here

kubeProxy.enabledbool
false

Enable kube proxy metrics scraping

kubeProxy.endpointslist
[]

If your kube proxy is not deployed as a pod, specify IPs it can be found on

kubeProxy.service.enabledbool
true

Enable service for kube proxy metrics scraping

kubeProxy.service.portint
10249

Kube proxy service port

kubeProxy.service.selectorobject
k8s-app: kube-proxy

Kube proxy service pod selector

kubeProxy.service.targetPortint
10249

Kube proxy service target port

kubeProxy.vmScrapeobject
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics
          scheme: https
          tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    jobLabel: jobLabel
    namespaceSelector:
        matchNames:
            - kube-system

Spec for VMServiceScrape CRD is here

kubeScheduler.enabled (bool)
true

Enable KubeScheduler metrics scraping

kubeScheduler.endpoints (list)
[]

If your kube scheduler is not deployed as a pod, specify IPs it can be found on

kubeScheduler.service.enabled (bool)
true

Enable service for KubeScheduler metrics scrape

kubeScheduler.service.port (int)
10259

KubeScheduler service port

kubeScheduler.service.selector (object)
component: kube-scheduler

KubeScheduler service pod selector

kubeScheduler.service.targetPort (int)
10259

KubeScheduler service target port

kubeScheduler.vmScrape (object)
spec:
    endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          port: http-metrics
          scheme: https
          tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    jobLabel: jobLabel
    namespaceSelector:
        matchNames:
            - kube-system

Spec for VMServiceScrape CRD is here

kubelet (object)
enabled: true
vmScrape:
    kind: VMNodeScrape
    spec:
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        honorLabels: true
        honorTimestamps: false
        interval: 30s
        metricRelabelConfigs:
            - action: labeldrop
              regex: (uid)
            - action: labeldrop
              regex: (id|name)
            - action: drop
              regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)
              source_labels:
                - __name__
        relabelConfigs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - sourceLabels:
                - __metrics_path__
              targetLabel: metrics_path
            - replacement: kubelet
              targetLabel: job
        scheme: https
        scrapeTimeout: 5s
        tlsConfig:
            caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecureSkipVerify: true
vmScrapes:
    cadvisor:
        enabled: true
        spec:
            path: /metrics/cadvisor
    kubelet:
        spec: {}
    probes:
        enabled: true
        spec:
            path: /metrics/probes

Component scraping the kubelets

kubelet.vmScrape (object)
kind: VMNodeScrape
spec:
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    honorTimestamps: false
    interval: 30s
    metricRelabelConfigs:
        - action: labeldrop
          regex: (uid)
        - action: labeldrop
          regex: (id|name)
        - action: drop
          regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)
          source_labels:
            - __name__
    relabelConfigs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - sourceLabels:
            - __metrics_path__
          targetLabel: metrics_path
        - replacement: kubelet
          targetLabel: job
    scheme: https
    scrapeTimeout: 5s
    tlsConfig:
        caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecureSkipVerify: true

Spec for VMNodeScrape CRD is here

kubelet.vmScrapes.cadvisor (object)
enabled: true
spec:
    path: /metrics/cadvisor

Enable scraping /metrics/cadvisor from kubelet’s service

kubelet.vmScrapes.probes (object)
enabled: true
spec:
    path: /metrics/probes

Enable scraping /metrics/probes from kubelet’s service

nameOverride (string)
""

Resource full name suffix override

prometheus-node-exporter (object)
enabled: true
extraArgs:
    - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
    - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
service:
    labels:
        jobLabel: node-exporter
vmScrape:
    enabled: true
    spec:
        endpoints:
            - metricRelabelConfigs:
                - action: drop
                  regex: /var/lib/kubelet/pods.+
                  source_labels:
                    - mountpoint
              port: metrics
        jobLabel: jobLabel
        selector:
            matchLabels:
                app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'

prometheus-node-exporter dependency chart configuration. For possible values check here

prometheus-node-exporter.vmScrape (object)
enabled: true
spec:
    endpoints:
        - metricRelabelConfigs:
            - action: drop
              regex: /var/lib/kubelet/pods.+
              source_labels:
                - mountpoint
          port: metrics
    jobLabel: jobLabel
    selector:
        matchLabels:
            app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'

Node Exporter VM scrape config

prometheus-node-exporter.vmScrape.spec (object)
endpoints:
    - metricRelabelConfigs:
        - action: drop
          regex: /var/lib/kubelet/pods.+
          source_labels:
            - mountpoint
      port: metrics
jobLabel: jobLabel
selector:
    matchLabels:
        app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'

Scrape configuration for Node Exporter

prometheus-operator-crds (object)
enabled: false

Install prometheus operator CRDs

tenant (string)
"0"

Tenant to use for Grafana datasources and remote write

victoria-metrics-operator (object)
crds:
    cleanup:
        enabled: true
        image:
            pullPolicy: IfNotPresent
            repository: bitnami/kubectl
    plain: true
enabled: true
operator:
    disable_prometheus_converter: false
serviceMonitor:
    enabled: true

VictoriaMetrics Operator dependency chart configuration. More values can be found here. Also check out the possible environment variables for configuring operator behaviour here

victoria-metrics-operator.crds.plain (bool)
true

Added temporarily, until a new operator version is released

victoria-metrics-operator.operator.disable_prometheus_converter (bool)
false

By default, the operator converts prometheus-operator objects.

vmagent.additionalRemoteWrites (list)
[]

Remote write configuration of VMAgent; allowed parameters are defined in the spec
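
To ship metrics to an extra VictoriaMetrics database alongside the in-cluster one, additional remote write targets can be listed here. A minimal sketch (the hostname is a hypothetical placeholder for your external endpoint):

```yaml
vmagent:
  additionalRemoteWrites:
    - url: http://victoria-metrics.example.com:8428/api/v1/write  # hypothetical external endpoint
```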

vmagent.annotations (object)
{}

VMAgent annotations

vmagent.enabled (bool)
true

Create VMAgent CR

vmagent.ingress (object)
annotations: {}
enabled: false
extraPaths: []
hosts:
    - vmagent.domain.com
labels: {}
path: ""
pathType: Prefix
tls: []

VMAgent ingress configuration
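
To expose VMAgent through an ingress controller, enable the ingress and override the default host. A minimal sketch (the hostname is a hypothetical placeholder):

```yaml
vmagent:
  ingress:
    enabled: true
    hosts:
      - vmagent.example.com   # hypothetical hostname
    pathType: Prefix
```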

vmagent.spec (object)
externalLabels: {}
extraArgs:
    promscrape.dropOriginalLabels: "true"
    promscrape.streamParse: "true"
port: "8429"
scrapeInterval: 20s
selectAllByDefault: true

Full spec for VMAgent CRD. Allowed values described here
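
Values set under vmagent.spec are merged into the VMAgent CRD spec, so individual fields can be overridden without restating the defaults. A minimal sketch (the label value is a hypothetical example):

```yaml
vmagent:
  spec:
    scrapeInterval: 30s
    externalLabels:
      cluster: prod-eu-1   # hypothetical cluster label
```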

vmalert.additionalNotifierConfigs (object)
{}

Allows configuring static notifiers, or discovering notifiers via Consul and DNS; see the specification here. This configuration will be created as a separate secret and mounted to the VMAlert pod.
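
As a sketch of DNS-based notifier discovery, assuming the notifier config follows the Prometheus-style service discovery format referenced above (the DNS name and port are hypothetical placeholders for your Alertmanager):

```yaml
vmalert:
  additionalNotifierConfigs:
    dns_sd_configs:
      - names:
          - alertmanager.monitoring.svc   # hypothetical Alertmanager DNS name
        type: A
        port: 9093
```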

vmalert.annotations (object)
{}

VMAlert annotations

vmalert.enabled (bool)
true

Create VMAlert CR

vmalert.ingress (object)
annotations: {}
enabled: false
extraPaths: []
hosts:
    - vmalert.domain.com
labels: {}
path: ""
pathType: Prefix
tls: []

VMAlert ingress config

vmalert.ingress.extraPaths (list)
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

vmalert.remoteWriteVMAgent (bool)
false

Controls whether VMAlert should use VMAgent or VMInsert as a target for remote write

vmalert.spec (object)
evaluationInterval: 15s
externalLabels: {}
extraArgs:
    http.pathPrefix: /
port: "8080"
selectAllByDefault: true

Full spec for VMAlert CRD. Allowed values described here

vmalert.templateFiles (object)
{}

Extra VMAlert annotation templates

vmauth.annotations (object)
{}

VMAuth annotations

vmauth.enabled (bool)
false

Enable VMAuth CR

vmauth.spec (object)
discover_backend_ips: true
port: "8427"

Full spec for VMAuth CRD. Allowed values described here

vmcluster.annotations (object)
{}

VMCluster annotations

vmcluster.enabled (bool)
false

Create VMCluster CR

vmcluster.ingress.insert.annotations (object)
{}

Ingress annotations

vmcluster.ingress.insert.enabled (bool)
false

Enable deployment of ingress for the vminsert component

vmcluster.ingress.insert.extraPaths (list)
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

vmcluster.ingress.insert.hosts (list)
[]

Array of host objects

vmcluster.ingress.insert.ingressClassName (string)
""

Ingress controller class name

vmcluster.ingress.insert.labels (object)
{}

Ingress extra labels

vmcluster.ingress.insert.path (string)
'{{ dig "extraArgs" "http.pathPrefix" "/" .Values.vmcluster.spec.vminsert }}'

Ingress default path

vmcluster.ingress.insert.pathType (string)
Prefix

Ingress path type

vmcluster.ingress.insert.tls (list)
[]

Array of TLS objects

vmcluster.ingress.select.annotations (object)
{}

Ingress annotations

vmcluster.ingress.select.enabled (bool)
false

Enable deployment of ingress for the vmselect component

vmcluster.ingress.select.extraPaths (list)
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

vmcluster.ingress.select.hosts (list)
[]

Array of host objects

vmcluster.ingress.select.ingressClassName (string)
""

Ingress controller class name

vmcluster.ingress.select.labels (object)
{}

Ingress extra labels

vmcluster.ingress.select.path (string)
'{{ dig "extraArgs" "http.pathPrefix" "/" .Values.vmcluster.spec.vmselect }}'

Ingress default path

vmcluster.ingress.select.pathType (string)
Prefix

Ingress path type

vmcluster.ingress.select.tls (list)
[]

Array of TLS objects

vmcluster.ingress.storage.annotations (object)
{}

Ingress annotations

vmcluster.ingress.storage.enabled (bool)
false

Enable deployment of ingress for the vmstorage component

vmcluster.ingress.storage.extraPaths (list)
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

vmcluster.ingress.storage.hosts (list)
[]

Array of host objects

vmcluster.ingress.storage.ingressClassName (string)
""

Ingress controller class name

vmcluster.ingress.storage.labels (object)
{}

Ingress extra labels

vmcluster.ingress.storage.path (string)
""

Ingress default path

vmcluster.ingress.storage.pathType (string)
Prefix

Ingress path type

vmcluster.ingress.storage.tls (list)
[]

Array of TLS objects

vmcluster.spec (object)
replicationFactor: 2
retentionPeriod: "1"
vminsert:
    extraArgs: {}
    port: "8480"
    replicaCount: 2
    resources: {}
vmselect:
    cacheMountPath: /select-cache
    extraArgs: {}
    port: "8481"
    replicaCount: 2
    resources: {}
    storage:
        volumeClaimTemplate:
            spec:
                resources:
                    requests:
                        storage: 2Gi
vmstorage:
    replicaCount: 2
    resources: {}
    storage:
        volumeClaimTemplate:
            spec:
                resources:
                    requests:
                        storage: 10Gi
    storageDataPath: /vm-data

Full spec for VMCluster CRD. Allowed values described here

vmcluster.spec.retentionPeriod (string)
"1"

Data retention period. Supported unit suffixes: h (hours), d (days), w (weeks), y (years); if no unit is specified, the value is treated as months. The minimum retention period is 24h. See these docs
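
For example, keeping cluster data for 30 days rather than the default one month could be configured as follows (a sketch; vmcluster is disabled by default, so it is enabled here as well):

```yaml
vmcluster:
  enabled: true
  spec:
    retentionPeriod: 30d
```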

vmcluster.vmauth (object)
vminsert:
    - src_paths:
        - /insert/.*
      url_prefix:
        - /
vmselect:
    - src_paths:
        - /select/.*
      url_prefix:
        - /

Custom VMAuth config. url_prefix requires only a path, which will be appended to the vmselect and vminsert base URLs. To disable auth for vmselect or vminsert, set an empty list for the component config: vmcluster.vmauth.<component>: []
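
Following the description above, disabling auth for the select path while keeping it for inserts would look like this sketch:

```yaml
vmcluster:
  vmauth:
    vmselect: []   # empty list disables auth for vmselect
```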

vmsingle.annotations (object)
{}

VMSingle annotations

vmsingle.enabled (bool)
true

Create VMSingle CR

vmsingle.ingress.annotations (object)
{}

Ingress annotations

vmsingle.ingress.enabled (bool)
false

Enable deployment of ingress for server component

vmsingle.ingress.extraPaths (list)
[]

Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

vmsingle.ingress.hosts (list)
[]

Array of host objects

vmsingle.ingress.ingressClassName (string)
""

Ingress controller class name

vmsingle.ingress.labels (object)
{}

Ingress extra labels

vmsingle.ingress.path (string)
""

Ingress default path

vmsingle.ingress.pathType (string)
Prefix

Ingress path type

vmsingle.ingress.tls (list)
[]

Array of TLS objects

vmsingle.spec (object)
extraArgs: {}
port: "8429"
replicaCount: 1
retentionPeriod: "1"
storage:
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 20Gi

Full spec for VMSingle CRD. Allowed values described here

vmsingle.spec.retentionPeriod (string)
"1"

Data retention period. Supported unit suffixes: h (hours), d (days), w (weeks), y (years); if no unit is specified, the value is treated as months. The minimum retention period is 24h. See these docs
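
For example, retaining VMSingle data for 12 weeks and growing the volume accordingly could be sketched as (the storage size is an illustrative choice):

```yaml
vmsingle:
  spec:
    retentionPeriod: 12w
    storage:
      resources:
        requests:
          storage: 50Gi   # illustrative size for the longer retention
```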