Cluster mode in VictoriaLogs provides horizontal scaling across many nodes when a single-node VictoriaLogs reaches the vertical scalability limits of a single host. If you can run single-node VictoriaLogs on a host with more CPU / RAM / storage space / storage IO, prefer doing that over switching to cluster mode, since a single-node VictoriaLogs instance has the following advantages over cluster mode:
- It is easier to configure, manage and troubleshoot, since it consists of a single self-contained component.
- It provides better performance and capacity on the same hardware, since it doesn’t need to transfer data over the network between cluster components.
The migration path from a single-node VictoriaLogs to cluster mode is easy: just upgrade the single-node VictoriaLogs executable to the latest available release and add it to the list of vlstorage nodes passed via the -storageNode command-line flag to the vlinsert and vlselect components of the cluster. See cluster architecture for more details about VictoriaLogs cluster components.
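For example, a minimal sketch of such a migration, where the existing single-node instance keeps listening on its default port 9428 and the host name single-node-host is illustrative:

```sh
# The existing single-node instance keeps running unchanged and now acts as a vlstorage node
./victoria-logs-prod -httpListenAddr=:9428 -storageDataPath=victoria-logs-data &

# New vlinsert and vlselect nodes reference it (plus any additional vlstorage nodes) via -storageNode
./victoria-logs-prod -httpListenAddr=:9481 -storageNode=single-node-host:9428 &
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=single-node-host:9428 &
```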
See quick start guide on how to start working with VictoriaLogs cluster.
Architecture #
VictoriaLogs in cluster mode is composed of three main components: vlinsert, vlselect, and vlstorage.
Ingestion flow:
```mermaid
sequenceDiagram
    participant LS as Log Sources
    participant VI as vlinsert
    participant VS1 as vlstorage-1
    participant VS2 as vlstorage-2
    Note over LS,VI: Log Ingestion Flow
    LS->>VI: Send logs via supported protocols
    VI->>VS1: POST /internal/insert (HTTP)
    VI->>VS2: POST /internal/insert (HTTP)
    Note right of VI: Distributes logs evenly<br/>across vlstorage nodes
```
Querying flow:
```mermaid
sequenceDiagram
    participant QC as Query Client
    participant VL as vlselect
    participant VS1 as vlstorage-1
    participant VS2 as vlstorage-2
    Note over QC,VL: Query Flow
    QC->>VL: Query via HTTP endpoints
    VL->>VS1: GET /internal/select/* (HTTP)
    VL->>VS2: GET /internal/select/* (HTTP)
    VS1-->>VL: Return local results
    VS2-->>VL: Return local results
    VL->>QC: Processed & aggregated results
```
- vlinsert handles log ingestion via all supported protocols.
  It distributes incoming logs evenly across vlstorage nodes, as specified by the -storageNode command-line flag.
- vlselect receives queries through all supported HTTP query endpoints.
  It fetches the required data from the configured vlstorage nodes, processes the queries, and returns the results.
- vlstorage performs two key roles:
  - It stores logs received from vlinsert in the directory defined by the -storageDataPath flag.
    See storage configuration docs for details.
  - It handles queries from vlselect by retrieving and transforming the requested data locally before returning results.
Each vlstorage node operates as a self-contained VictoriaLogs instance.
Refer to the
single-node and cluster mode duality
documentation for more information.
This design allows you to reuse existing single-node VictoriaLogs instances by listing them in the -storageNode flag for vlselect, enabling unified querying across all nodes.
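For example, a vlselect node providing unified querying across two pre-existing single-node instances could be started like this (host names and ports are illustrative):

```sh
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=single-node-1:9428,single-node-2:9428 &
```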
All VictoriaLogs components are horizontally scalable and can be deployed on hardware best suited to their respective workloads.
vlinsert and vlselect can be run on the same node, which allows the minimal cluster to consist of just one vlstorage node and one node acting as both vlinsert and vlselect.
However, for production environments, it is recommended to separate vlinsert and vlselect roles to avoid resource contention — for example, to prevent heavy queries from interfering with log ingestion.
Communication between vlinsert / vlselect and vlstorage is done via HTTP over the port specified by the -httpListenAddr flag:
- vlinsert sends data to the /internal/insert endpoint on vlstorage.
- vlselect sends queries to endpoints under /internal/select/ on vlstorage.
This HTTP-based communication model allows you to use reverse proxies for authorization, routing, and encryption between components.
Use of
vmauth
is recommended for managing access control.
See
Security and Load balancing docs
for details.
For advanced setups, refer to the multi-level cluster setup documentation.
High availability #
VictoriaLogs cluster provides high availability for the data ingestion path.
It continues to accept incoming logs if some of the vlstorage nodes are temporarily unavailable.
vlinsert evenly spreads new logs among the remaining available vlstorage nodes in this case, so newly ingested logs are properly stored and are available for querying
without any delays. This allows performing maintenance tasks for vlstorage nodes (such as upgrades, configuration updates, etc.) without worrying about data loss.
Make sure that the remaining vlstorage nodes have enough capacity for the increased data ingestion workload, in order to avoid availability problems.
VictoriaLogs cluster returns 502 Bad Gateway errors for
incoming queries
if some of the vlstorage nodes are unavailable. This guarantees consistent query responses
(i.e. all the stored logs are taken into account during the query) during maintenance tasks at vlstorage nodes. Note that all the newly incoming logs are properly stored
on the remaining vlstorage nodes (see the paragraph above), so they become available for querying immediately after the unavailable vlstorage nodes return to the cluster.
There are practical cases when it is preferable to return partial responses instead of 502 Bad Gateway errors if some of the vlstorage nodes are unavailable.
See
these docs
on how to achieve this.
In most real-world cases, vlstorage nodes become unavailable during planned maintenance such as upgrades, config changes, or rolling restarts.
These are typically infrequent (weekly or monthly) and brief (a few minutes) events.
A short period of query downtime during maintenance tasks is acceptable and fits well within most SLAs. For example, 43 minutes of downtime per month during maintenance tasks
provides ~99.9% cluster availability. In practice this is better than “magic” HA schemes with opaque auto-recovery: if these schemes fail,
it is impossible to debug and fix them in a timely manner, so a failure will likely result in a long outage, which violates SLAs.
A real HA scheme for both data ingestion and querying can be built only when copies of logs are sent to independent VictoriaLogs instances (or clusters) located in fully independent availability zones (datacenters).
If an AZ becomes unavailable, new logs continue to be written to the remaining AZ, while queries return full responses from the remaining AZ. When the AZ becomes available again, the pending buffered logs are written to it, so it can be used for querying full responses again. This HA scheme can be built with the help of vlagent for data replication and buffering, and vmauth for data querying:
```mermaid
flowchart TB
  subgraph haSolution["HA Solution"]
    direction TB
    subgraph ingestion["Ingestion Layer"]
      direction TB
      LS["Log Sources<br/>(Applications)"]
      VLAGENT["Log Collector<br/>• Buffering<br/>• Replication<br/>• Delivery Guarantees"]
      LS --> VLAGENT
    end
    subgraph storage["Storage Layer"]
      direction TB
      subgraph zoneA["Zone A"]
        VLA["VictoriaLogs Cluster A"]
      end
      subgraph zoneB["Zone B"]
        VLB["VictoriaLogs Cluster B"]
      end
      VLAGENT -->|"Replicate logs to<br/>Zone A cluster"| VLA
      VLAGENT -->|"Replicate logs to<br/>Zone B cluster"| VLB
    end
    subgraph query["Query Layer"]
      direction TB
      LB["Load Balancer<br/>(vmauth)<br/>• Health Checks<br/>• Failover<br/>• Query Distribution"]
      QC["Query Clients<br/>(Grafana, API)"]
      VLA -->|"Serve queries from<br/>Zone A cluster"| LB
      VLB -->|"Serve queries from<br/>Zone B cluster"| LB
      LB --> QC
    end
  end
style VLAGENT fill:#9bc7e4
style VLA fill:#ae9be4
style VLB fill:#ae9be4
style LB fill:#9bc7e4
style QC fill:#9fe49b
style LS fill:#9fe49b
```
- vlagent receives and replicates logs to two VictoriaLogs clusters. If one cluster becomes unavailable, vlagent continues sending logs to the remaining healthy cluster. It also buffers logs that cannot be delivered to the unavailable cluster. When the failed cluster becomes available again, vlagent sends it the buffered logs and then resumes sending new logs to it. This guarantees that both clusters have full copies of all the ingested logs.
- vmauth
routes query requests to healthy VictoriaLogs clusters.
If one cluster becomes unavailable, vmauth detects this and automatically redirects all query traffic to the remaining healthy cluster.
There is no magic coordination logic or consensus algorithms in this scheme. This simplifies managing and troubleshooting this HA scheme.
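A minimal sketch of the ingestion side of this scheme, assuming vlagent supports repeated -remoteWrite.url command-line flags pointing at the /internal/insert endpoint of each zone (host names, ports and the exact flag names are assumptions and should be verified against the vlagent docs):

```sh
# Replicate every ingested log to both zones; logs that cannot be delivered
# to an unavailable zone are buffered and re-sent once the zone recovers
./vlagent-prod \
  -remoteWrite.url=http://vlinsert-zone-a:9481/internal/insert \
  -remoteWrite.url=http://vlinsert-zone-b:9481/internal/insert &
```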
See also Security and Load balancing docs.
Single-node and cluster mode duality #
Every vlstorage node can be used as a single-node VictoriaLogs instance:
- It can accept logs via all the supported data ingestion protocols.
- It can accept select queries via all the supported HTTP querying endpoints.
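For example, a vlstorage node started with -httpListenAddr=:9491 can be used directly for both ingestion and querying via the same public endpoints shown in the quick start guide (the host name and the logs.jsonl file are illustrative):

```sh
# Ingest a JSON-lines file directly into the vlstorage node
curl -T logs.jsonl -X POST 'http://vlstorage-host:9491/insert/jsonline?_time_field=created_at'

# Query the logs stored on this node
curl 'http://vlstorage-host:9491/select/logsql/query' -d 'query=* | count()'
```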
A single-node VictoriaLogs instance can be used as vlstorage node in VictoriaLogs cluster:
- It accepts data ingestion requests from vlinsert via the /internal/insert HTTP endpoint at the TCP port specified via the -httpListenAddr command-line flag.
- It accepts queries from vlselect via the /internal/select/* HTTP endpoints at the TCP port specified via the -httpListenAddr command-line flag.
See also security docs.
Multi-level cluster setup #
- vlinsert can send the ingested logs to other vlinsert nodes if they are specified via the -storageNode command-line flag. This allows building multi-level data ingestion schemes, where a top-level vlinsert spreads the incoming logs evenly among multiple lower-level clusters of VictoriaLogs.
- vlselect can send queries to other vlselect nodes if they are specified via the -storageNode command-line flag. This allows building multi-level cluster schemes, where a top-level vlselect queries multiple lower-level clusters of VictoriaLogs.
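For example, a top-level vlinsert and vlselect pair in front of two lower-level clusters might be started as follows (host names and ports are illustrative):

```sh
# The top-level vlinsert spreads the ingested logs evenly among the lower-level clusters
# by pointing -storageNode at their vlinsert nodes
./victoria-logs-prod -httpListenAddr=:9481 -storageNode=vlinsert-cluster-a:9481,vlinsert-cluster-b:9481 &

# The top-level vlselect queries the lower-level clusters by pointing -storageNode at their vlselect nodes
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=vlselect-cluster-a:9471,vlselect-cluster-b:9471 &
```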
See
security docs
on how to protect communications between multiple levels of vlinsert and vlselect nodes.
Security #
All the VictoriaLogs cluster components must run in a protected internal network without direct access from the Internet.
vlstorage must not be accessible from the Internet. HTTP authorization proxies such as
vmauth
must be used in front of vlinsert and vlselect for authorizing access to these components from the Internet.
See Security and Load balancing docs.
It is possible to disallow access to /internal/insert and /internal/select/* endpoints at a single-node VictoriaLogs instance
by running it with -internalinsert.disable and -internalselect.disable command-line flags. Note that
vlagent
sends the collected logs to the /internal/insert endpoint, so it should be available for data ingestion if you use vlagent.
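For example, a standalone single-node instance that isn't part of any cluster and doesn't receive logs from vlagent can be hardened like this:

```sh
# Refuse requests to the cluster-internal /internal/insert and /internal/select/* endpoints
./victoria-logs-prod -storageDataPath=victoria-logs-data -internalinsert.disable -internalselect.disable &
```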
TLS #
By default, vlinsert and vlselect communicate with vlstorage via unencrypted HTTP. This is OK if all these components are located
in the same protected internal network. This isn’t OK if these components communicate over the Internet, since a third party can intercept or modify
the transferred data. It is recommended to switch to HTTPS in this case:
- Specify -tls, -tlsCertFile and -tlsKeyFile command-line flags at vlstorage, so it accepts incoming requests over HTTPS instead of HTTP at the corresponding -httpListenAddr:

  ```sh
  ./victoria-logs-prod -httpListenAddr=... -storageDataPath=... -tls -tlsCertFile=/path/to/certfile -tlsKeyFile=/path/to/keyfile
  ```

- Specify the -storageNode.tls command-line flag at vlinsert and vlselect, which communicate with vlstorage over untrusted networks such as the Internet:

  ```sh
  ./victoria-logs-prod -storageNode=... -storageNode.tls
  ```
It is also recommended to authorize HTTPS requests to vlstorage via Basic Auth:
- Specify -httpAuth.username and -httpAuth.password command-line flags at vlstorage, so it verifies the Basic Auth username + password in HTTPS requests received via -httpListenAddr:

  ```sh
  ./victoria-logs-prod -httpListenAddr=... -storageDataPath=... -tls -tlsCertFile=... -tlsKeyFile=... -httpAuth.username=... -httpAuth.password=...
  ```

- Specify -storageNode.username and -storageNode.password command-line flags at vlinsert and vlselect, which communicate with vlstorage over untrusted networks:

  ```sh
  ./victoria-logs-prod -storageNode=... -storageNode.tls -storageNode.username=... -storageNode.password=...
  ```
Another option is to use third-party HTTP proxies such as
vmauth
, nginx, etc. to authorize and encrypt communications
between VictoriaLogs cluster components over untrusted networks.
By default, all the components (vlinsert, vlselect, vlstorage) support all the HTTP endpoints including /insert/* and /select/*.
It is recommended to disable select endpoints on vlinsert and insert endpoints on vlselect:
```sh
# Disable select endpoints on vlinsert
./victoria-logs-prod -storageNode=... -select.disable

# Disable insert endpoints on vlselect
./victoria-logs-prod -storageNode=... -insert.disable
```

This helps prevent sending select requests to vlinsert nodes or insert requests to vlselect nodes in case of a misconfiguration in the authorization proxy
in front of the vlinsert and vlselect nodes.
See also mTLS.
mTLS #
Enterprise version of VictoriaLogs supports the ability to verify client TLS certificates at the vlstorage side for TLS connections established from vlinsert and vlselect nodes (aka mTLS).
See
TLS docs
for details on how to set up TLS communications between VictoriaLogs cluster nodes.
mTLS authentication can be enabled by passing the -mtls command-line flag to the vlstorage node in addition to the -tls command-line flag.
In this case it verifies TLS client certificates for connections from vlinsert and vlselect at the address specified via -httpListenAddr command-line flag.
The client TLS certificate must be specified at vlinsert and vlselect nodes via -storageNode.tlsCertFile and -storageNode.tlsKeyFile command-line flags.
By default, the system-wide root CA certificates
are used for verifying client TLS certificates.
The -mtlsCAFile command-line flag can be used at vlstorage for pointing to custom root CA certificates.
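Putting these flags together, a sketch of an mTLS setup might look like the following (certificate paths and addresses are illustrative; -mtls and -mtlsCAFile require the Enterprise version):

```sh
# vlstorage: accept HTTPS only and verify client certificates against a custom CA
./victoria-logs-prod -httpListenAddr=:9491 -storageDataPath=victoria-logs-data-1 \
  -tls -tlsCertFile=/path/to/server.crt -tlsKeyFile=/path/to/server.key \
  -mtls -mtlsCAFile=/path/to/ca.crt &

# vlinsert / vlselect: present a client certificate when connecting to vlstorage
./victoria-logs-prod -storageNode=vlstorage-host:9491 -storageNode.tls \
  -storageNode.tlsCertFile=/path/to/client.crt -storageNode.tlsKeyFile=/path/to/client.key &
```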
See also generic mTLS docs for VictoriaLogs.
Enterprise version of VictoriaLogs can be downloaded and evaluated for free from the releases page. See how to request a free trial license.
Quick start #
The following guide covers the following topics for a Linux host:
- How to download the VictoriaLogs executable.
- How to start a VictoriaLogs cluster, which consists of two vlstorage nodes, a single vlinsert node and a single vlselect node running on localhost according to the cluster architecture.
- How to ingest logs into the cluster.
- How to query the ingested logs.
Download and unpack the latest VictoriaLogs release:
```sh
curl -L -O https://github.com/VictoriaMetrics/VictoriaLogs/releases/download/v1.37.0/victoria-logs-linux-amd64-v1.37.0.tar.gz
tar xzf victoria-logs-linux-amd64-v1.37.0.tar.gz
```

Start the first vlstorage node, which accepts incoming requests at the port 9491 and stores the ingested logs in the victoria-logs-data-1 directory:
```sh
./victoria-logs-prod -httpListenAddr=:9491 -storageDataPath=victoria-logs-data-1 &
```

This command and all the following commands start cluster components as background processes.
Use jobs, fg, bg commands for manipulating the running background processes. Use the kill command and/or Ctrl+C to stop running processes when they are no longer needed.
See these docs
for details.
Start the second vlstorage node, which accepts incoming requests at the port 9492 and stores the ingested logs in the victoria-logs-data-2 directory:
```sh
./victoria-logs-prod -httpListenAddr=:9492 -storageDataPath=victoria-logs-data-2 &
```

Start the vlinsert node, which accepts logs at the port 9481 and spreads them evenly across the two vlstorage nodes started above:
```sh
./victoria-logs-prod -httpListenAddr=:9481 -storageNode=localhost:9491,localhost:9492 &
```

Start the vlselect node, which accepts incoming queries at the port 9471 and requests the needed data from the vlstorage nodes started above:
```sh
./victoria-logs-prod -httpListenAddr=:9471 -storageNode=localhost:9491,localhost:9492 &
```

Note that all the VictoriaLogs cluster components - vlstorage, vlinsert and vlselect - share the same executable - victoria-logs-prod.
Their roles depend on whether the -storageNode command-line flag is set - if this flag is set, then the executable runs in vlinsert and vlselect modes.
Otherwise, it runs in vlstorage mode, which is identical to a single-node VictoriaLogs mode.
Let’s ingest some logs (aka wide events) from GitHub Archive into the VictoriaLogs cluster with the following command:

```sh
curl -s https://data.gharchive.org/$(date -d '2 days ago' '+%Y-%m-%d')-10.json.gz \
  | curl -T - -X POST -H 'Content-Encoding: gzip' 'http://localhost:9481/insert/jsonline?_time_field=created_at&_stream_fields=type'
```

Let’s query the ingested logs via the /select/logsql/query HTTP endpoint.
For example, the following command returns the number of stored logs in the cluster:
```sh
curl http://localhost:9471/select/logsql/query -d 'query=* | count()'
```

See these docs for details on how to query logs from the command line.
Logs can also be explored and queried via the built-in Web UI.
Open http://localhost:9471/select/vmui/ in the web browser, select last 7 days time range in the top right corner and explore the ingested logs.
See
LogsQL docs
to familiarize yourself with the query language.
Every vlstorage node can be queried individually because it is equivalent to a single-node VictoriaLogs.
For example, the following command returns the number of stored logs at the first vlstorage node started above:
```sh
curl http://localhost:9491/select/logsql/query -d 'query=* | count()'
```

We recommend reading key concepts before you start working with VictoriaLogs.
See also security docs.
Performance tuning #
Cluster components of VictoriaLogs automatically adjust their settings for the best performance and the lowest resource usage on the given hardware, so in general there is no need to tune them. The following options can be used for achieving higher performance or lower resource usage on systems with constrained resources:
- vlinsert limits the number of concurrent requests to every vlstorage node. The default concurrency works great in most cases. Sometimes it can be increased via the -insert.concurrency command-line flag at vlinsert in order to achieve a higher data ingestion rate at the cost of higher RAM usage at vlinsert and vlstorage nodes.
- vlinsert compresses the data sent to vlstorage nodes in order to reduce network bandwidth usage at the cost of slightly higher CPU usage at vlinsert and vlstorage nodes. The compression can be disabled by passing the -insert.disableCompression command-line flag to vlinsert. This reduces CPU usage at vlinsert and vlstorage nodes at the cost of significantly higher network bandwidth usage.
- vlselect requests compressed data from vlstorage nodes in order to reduce network bandwidth usage at the cost of slightly higher CPU usage at vlselect and vlstorage nodes. The compression can be disabled by passing the -select.disableCompression command-line flag to vlselect. This reduces CPU usage at vlselect and vlstorage nodes at the cost of significantly higher network bandwidth usage.
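For example, a sketch of how these flags might be combined (the flag values and the choice of flags depend on the actual bottleneck):

```sh
# CPU-constrained nodes on a fast network: disable compression to save CPU
# at the cost of higher network bandwidth usage
./victoria-logs-prod -storageNode=... -insert.disableCompression &
./victoria-logs-prod -storageNode=... -select.disableCompression &

# Higher ingestion rate at the cost of higher RAM usage at vlinsert and vlstorage nodes
./victoria-logs-prod -storageNode=... -insert.concurrency=32 &
```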
Advanced usage #
Cluster components of VictoriaLogs provide various settings, which can be configured via command-line flags if needed. Default values for all the command-line flags work great in most cases, so it isn't recommended to tune them without a real need. See the list of supported command-line flags at VictoriaLogs.