
Victoria Logs Single version - high-performance, cost-effective and scalable logs storage

Prerequisites #

  • Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.

  • PV support on underlying infrastructure.

Chart Details #

This chart will do the following:

  • Rollout Victoria Logs Single.
  • (optional) Rollout vector to collect logs from pods.

The chart allows configuring log collection from Kubernetes pods into VictoriaLogs. To do that, enable vector:

vector:
  enabled: true
YAML

By default, vector forwards logs to the VictoriaLogs installation deployed by this chart.
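
The same switch can also be passed on the command line when installing the chart (the install commands are covered below). A minimal sketch, assuming the release name vls and the vm repository configured as described in the installation section:

helm install vls vm/victoria-logs-single --set vector.enabled=true -n NAMESPACE
Console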

How to install #

Access a Kubernetes cluster.

Setup chart repository (can be omitted for OCI repositories) #

Add the chart helm repository with the following commands:

helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update
Console

List the versions of the vm/victoria-logs-single chart available for installation:

helm search repo vm/victoria-logs-single -l
Console

Install victoria-logs-single chart #

Export the default values of the victoria-logs-single chart to a values.yaml file:

  • For HTTPS repository

    helm show values vm/victoria-logs-single > values.yaml
    Console
  • For OCI repository

    helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-logs-single > values.yaml
    Console

Change the values in the values.yaml file according to the needs of your environment.
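
For example, a minimal override that enables persistent storage and extends retention could look like the sketch below; the size and retention period are illustrative values, not recommendations:

server:
  retentionPeriod: 4w
  persistentVolume:
    enabled: true
    size: 10Gi
YAML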

Test the installation with the following command:

  • For HTTPS repository

    helm install vls vm/victoria-logs-single -f values.yaml -n NAMESPACE --debug --dry-run
    Console
  • For OCI repository

    helm install vls oci://ghcr.io/victoriametrics/helm-charts/victoria-logs-single -f values.yaml -n NAMESPACE --debug --dry-run
    Console

Install the chart with the following command:

  • For HTTPS repository

    helm install vls vm/victoria-logs-single -f values.yaml -n NAMESPACE
    Console
  • For OCI repository

    helm install vls oci://ghcr.io/victoriametrics/helm-charts/victoria-logs-single -f values.yaml -n NAMESPACE
    Console

Get the list of pods by running this command:

kubectl get pods -A | grep 'vls'
Console

Get the application by running this command:

helm list -f vls -n NAMESPACE
Console

See the version history of the vls application with the following command:

helm history vls -n NAMESPACE
Console
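
To check that VictoriaLogs itself is reachable, you can port-forward the server service and send a LogsQL query to its HTTP API. The service name below is a sketch based on the default chart naming and may differ in your installation:

kubectl port-forward -n NAMESPACE svc/vls-victoria-logs-single-server 9428:9428
curl http://localhost:9428/select/logsql/query -d 'query=*' -d 'limit=10'
Console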

How to uninstall #

Remove the application with the following command:

helm uninstall vls -n NAMESPACE
Console
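
Note that helm uninstall does not delete PersistentVolumeClaims created through the StatefulSet's volume claim templates when server.persistentVolume.enabled was set to true. If the data is no longer needed, remove them manually; the label selector below assumes the standard chart labels and the release name vls:

kubectl delete pvc -l app.kubernetes.io/instance=vls -n NAMESPACE
Console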

Documentation of Helm Chart #

Install helm-docs following the instructions on this tutorial.

Generate the docs with the helm-docs command:

cd charts/victoria-logs-single

helm-docs
Bash

The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.

Parameters #

The following list shows the configurable parameters of the chart, their types, default values and descriptions.

Change the values in the victoria-logs-single/values.yaml file according to the needs of your environment.

dashboards.annotations (object, default: {}) - Dashboard annotations
dashboards.enabled (bool, default: false) - Create VictoriaLogs dashboards
dashboards.grafanaOperator.enabled (bool, default: false)
dashboards.grafanaOperator.spec.allowCrossNamespaceImport (bool, default: false)
dashboards.grafanaOperator.spec.instanceSelector.matchLabels.dashboards (string, default: grafana)
dashboards.labels (object, default: {}) - Dashboard labels
dashboards.namespace (string, default: "") - Override the default namespace where dashboards are created
extraObjects (list, default: []) - Add extra specs dynamically to this chart
global.cluster.dnsDomain (string, default: cluster.local.) - K8s cluster domain suffix, used for building storage pods' FQDN. Details are here
global.compatibility (object, default: {openshift: {adaptSecurityContext: auto}}) - OpenShift security context compatibility configuration
global.image.registry (string, default: "") - Image registry that can be shared across multiple helm charts
global.imagePullSecrets (list, default: []) - Image pull secrets that can be shared across multiple helm charts
nameOverride (string, default: "") - Override chart name
podDisruptionBudget (object, default: {enabled: false, extraLabels: {}}) - See kubectl explain poddisruptionbudget.spec for more. Details are here
podDisruptionBudget.extraLabels (object, default: {}) - PodDisruptionBudget extra labels
printNotes (bool, default: true) - Print chart notes
server.affinity (object, default: {}) - Pod affinity
server.containerWorkingDir (string, default: "") - Container workdir
server.emptyDir (object, default: {}) - Use an alternate scheduler, e.g. "stork". Check details here schedulerName:
server.enabled (bool, default: true) - Enable deployment of the server component. Deployed as a StatefulSet
server.env (list, default: []) - Additional environment variables (ex.: secret tokens, flags). Details are here
server.envFrom (list, default: []) - Specify alternative source for env variables
server.extraArgs (object, default: {envflag.enable: true, envflag.prefix: VM_, httpListenAddr: ":9428", loggerFormat: json}) - Extra command line arguments for the component container
server.extraContainers (list, default: []) - Extra containers to run in a pod with the Victoria Logs container
server.extraHostPathMounts (list, default: []) - Additional hostPath mounts
server.extraLabels (object, default: {}) - StatefulSet/Deployment additional labels
server.extraVolumeMounts (list, default: []) - Extra Volume Mounts for the container
server.extraVolumes (list, default: []) - Extra Volumes for the pod
server.image.pullPolicy (string, default: IfNotPresent) - Image pull policy
server.image.registry (string, default: "") - Image registry
server.image.repository (string, default: victoriametrics/victoria-logs) - Image repository
server.image.tag (string, default: "") - Image tag
server.image.variant (string, default: victorialogs) - Image tag suffix, which is appended to Chart.AppVersion if no server.image.tag is defined
server.imagePullSecrets (list, default: []) - Image pull secrets
server.ingress.annotations (string, default: null) - Ingress annotations
server.ingress.enabled (bool, default: false) - Enable deployment of ingress for the server component
server.ingress.extraLabels (object, default: {}) - Ingress extra labels
server.ingress.hosts (list, default: [{name: vlogs.local, path: [/], port: http}]) - Array of host objects
server.ingress.ingressClassName (string, default: "") - Ingress controller class name
server.ingress.pathType (string, default: Prefix) - Ingress path type
server.ingress.tls (list, default: []) - Array of TLS objects
server.initContainers (list, default: []) - Init containers for the Victoria Logs pod
server.nodeSelector (object, default: {}) - Pod's node selector. Details are here
server.persistentVolume.accessModes (list, default: [ReadWriteOnce]) - Array of access modes. Must match those of existing PV or dynamic provisioner. Details are here
server.persistentVolume.annotations (object, default: {}) - Persistent volume annotations
server.persistentVolume.enabled (bool, default: false) - Create/use Persistent Volume Claim for the server component. Empty dir if false
server.persistentVolume.existingClaim (string, default: "") - Existing Claim name. If defined, the PVC must be created manually before the volume will be bound
server.persistentVolume.matchLabels (object, default: {}) - Bind Persistent Volume by labels. Must match all labels of the targeted PV.
server.persistentVolume.mountPath (string, default: /storage) - Mount path. Server data Persistent Volume mount root path.
server.persistentVolume.name (string, default: "") - Override Persistent Volume Claim name
server.persistentVolume.size (string, default: 3Gi) - Size of the volume. Should be calculated based on the logs you send and the retention policy you set.
server.persistentVolume.storageClassName (string, default: "") - StorageClass to use for the persistent volume. Requires server.persistentVolume.enabled: true. If defined, the PVC is created automatically
server.persistentVolume.subPath (string, default: "") - Mount subpath
server.podAnnotations (object, default: {}) - Pod's annotations
server.podLabels (object, default: {}) - Pod's additional labels
server.podManagementPolicy (string, default: OrderedReady) - Pod's management policy
server.podSecurityContext (object, default: {enabled: true, fsGroup: 2000, runAsNonRoot: true, runAsUser: 1000}) - Pod's security context. Details are here
server.priorityClassName (string, default: "") - Name of Priority Class
server.probe.liveness (object, default: {failureThreshold: 10, initialDelaySeconds: 30, periodSeconds: 30, tcpSocket: {}, timeoutSeconds: 5}) - Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
server.probe.readiness (object, default: {failureThreshold: 3, httpGet: {}, initialDelaySeconds: 5, periodSeconds: 15, timeoutSeconds: 5}) - Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
server.probe.startup (object, default: {}) - Indicates whether the Container is done with potentially costly initialization. If set, it is executed first. If it fails, the Container is restarted. If it succeeds, the liveness and readiness probes take over.
server.replicaCount (int, default: 1) - Replica count
server.resources (object, default: {}) - Resource object. Details are here
server.retentionDiskSpaceUsage (string, default: "") - Data retention max capacity. Default unit is GiB. See these docs
server.retentionPeriod (int, default: 1) - Data retention period. Possible unit characters: h(ours), d(ays), w(eeks), y(ears); if no unit character is specified, months are assumed. The minimum retention period is 24h. See these docs
server.securityContext (object, default: {allowPrivilegeEscalation: false, capabilities: {drop: [ALL]}, enabled: true, readOnlyRootFilesystem: true}) - Security context to be added to server pods
server.service.annotations (object, default: {}) - Service annotations
server.service.clusterIP (string, default: "") - Service ClusterIP
server.service.externalIPs (list, default: []) - Service external IPs. Details are here
server.service.externalTrafficPolicy (string, default: "") - Service external traffic policy. Check here for details
server.service.healthCheckNodePort (string, default: "") - Health check node port for a service. Check here for details
server.service.ipFamilies (list, default: []) - List of service IP families. Check here for details.
server.service.ipFamilyPolicy (string, default: "") - Service IP family policy. Check here for details.
server.service.labels (object, default: {}) - Service labels
server.service.loadBalancerIP (string, default: "") - Service load balancer IP
server.service.loadBalancerSourceRanges (list, default: []) - Load balancer source range
server.service.servicePort (int, default: 9428) - Service port
server.service.targetPort (string, default: http) - Target port
server.service.type (string, default: ClusterIP) - Service type
server.serviceMonitor.annotations (object, default: {}) - Service Monitor annotations
server.serviceMonitor.basicAuth (object, default: {}) - Basic auth params for Service Monitor
server.serviceMonitor.enabled (bool, default: false) - Enable deployment of Service Monitor for the server component. This is a Prometheus operator object
server.serviceMonitor.extraLabels (object, default: {}) - Service Monitor labels
server.serviceMonitor.metricRelabelings (list, default: []) - Service Monitor metricRelabelings
server.serviceMonitor.relabelings (list, default: []) - Service Monitor relabelings
server.serviceMonitor.targetPort (string, default: http) - Service Monitor target port
server.statefulSet.enabled (bool, default: true) - Creates a StatefulSet instead of a Deployment; useful when you want to keep the cache
server.statefulSet.podManagementPolicy (string, default: OrderedReady) - Deploy order policy for StatefulSet pods
server.terminationGracePeriodSeconds (int, default: 60) - Pod's termination grace period in seconds
server.tolerations (list, default: []) - Node tolerations for server scheduling to nodes with taints. Details are here
server.topologySpreadConstraints (list, default: []) - Pod topologySpreadConstraints

vector (object) - Values for the vector helm chart. Default:

    args:
        - -w
        - --config-dir
        - /etc/vector/
    containerPorts:
        - containerPort: 9090
          name: prom-exporter
          protocol: TCP
    customConfig:
        api:
            address: 0.0.0.0:8686
            enabled: false
            playground: true
        data_dir: /vector-data-dir
        sinks:
            exporter:
                address: 0.0.0.0:9090
                inputs:
                    - internal_metrics
                type: prometheus_exporter
            vlogs:
                api_version: v8
                compression: gzip
                endpoints: << include "vlogs.es.urls" . >>
                healthcheck:
                    enabled: false
                inputs:
                    - parser
                mode: bulk
                request:
                    headers:
                        AccountID: "0"
                        ProjectID: "0"
                        VL-Msg-Field: message,msg,_msg,log.msg,log.message,log
                        VL-Stream-Fields: stream,kubernetes.pod_name,kubernetes.container_name,kubernetes.pod_namespace
                        VL-Time-Field: timestamp
                type: elasticsearch
        sources:
            internal_metrics:
                type: internal_metrics
            k8s:
                type: kubernetes_logs
        transforms:
            parser:
                inputs:
                    - k8s
                source: |
                    .log = parse_json(.message) ?? .message
                    del(.message)
                type: remap
    customConfigNamespace: ""
    dataDir: /vector-data-dir
    enabled: false
    existingConfigMaps:
        - vl-config
    resources: {}
    role: Agent
    service:
        enabled: false

vector.customConfigNamespace (string, default: "") - Forces custom configuration creation in a given namespace even if vector.enabled is false
vector.enabled (bool, default: false) - Enable deployment of vector
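
As an example of combining these parameters, the sketch below exposes the server through an Ingress and enables the bundled dashboards; the hostname and ingress class name are placeholders for your environment:

dashboards:
  enabled: true
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - name: logs.example.com
        path:
          - /
        port: http
YAML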