VictoriaMetrics is a fast, cost-effective and scalable time series database. It can be used as a long-term remote storage for Prometheus.
It is recommended to use the single-node version instead of the cluster version for ingestion rates lower than a million data points per second. The single-node version scales perfectly with the number of CPU cores, RAM and available storage space. It is also easier to configure and operate compared to the cluster version, so think twice before choosing the cluster version.
Join our Slack or contact us with consulting and support questions.
VictoriaMetrics cluster consists of the following services:
- vmstorage - stores the data
- vminsert - proxies the ingested data to vmstorage shards using consistent hashing
- vmselect - performs incoming queries using the data from vmstorage
Each service may scale independently and may run on the most suitable hardware.
vmstorage nodes don't know about each other, don't communicate with each other and don't share any data. This is a shared-nothing architecture. It increases cluster availability and simplifies cluster maintenance and scaling.
VictoriaMetrics cluster supports multiple isolated tenants (aka namespaces). Tenants are identified by accountID or accountID:projectID, which are put inside request urls. See these docs for details. Some facts about tenants in VictoriaMetrics:
- Each accountID and projectID is identified by an arbitrary 32-bit integer in the range [0 .. 2^32). If projectID is missing, then it is automatically assigned to 0. It is expected that other information about tenants, such as auth tokens, tenant names, limits, accounting, etc., is stored in a separate relational database. This database must be managed by a separate service sitting in front of the VictoriaMetrics cluster, such as vmauth or vmgateway. Contact us if you need assistance with such a service.
- Tenants are automatically created when the first data point is written into the given tenant.
- Data for all the tenants is evenly spread among available vmstorage nodes. This guarantees even load among vmstorage nodes when different tenants have different amounts of data and different query loads.
- The database performance and resource usage don't depend on the number of tenants. They depend mostly on the total number of active time series across all the tenants. A time series is considered active if it received at least a single sample during the last hour or if it has been touched by queries during the last hour.
- VictoriaMetrics doesn't support querying multiple tenants in a single request.
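For example, with a hypothetical tenant identified by accountID=42 and projectID=7, the tenant shows up in request urls like these (host names are placeholders; the full URL formats are described below):

```
# Write to tenant 42:7 via the Prometheus remote write endpoint:
http://<vminsert>:8480/insert/42:7/prometheus/api/v1/write

# Query tenant 42:7:
http://<vmselect>:8481/select/42:7/prometheus/api/v1/query
```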
Compiled binaries for the cluster version are available in the assets section of the releases page. See the archives containing the word cluster.
Docker images for cluster version are available here:
Source code for the cluster version is available in the cluster branch.
There is no need to install Go on the host system, since binaries are built inside the official docker container for Go. This allows reproducible builds. So install docker and run the following command:

```
make vminsert-prod vmselect-prod vmstorage-prod
```
Production builds are statically linked binaries. They are put into the bin folder with -prod suffixes:

```
$ make vminsert-prod vmselect-prod vmstorage-prod
$ ls -1 bin
vminsert-prod
vmselect-prod
vmstorage-prod
```
Run make from the repository root. It should build vminsert, vmselect and vmstorage binaries and put them into the bin folder.
Run make package. It will build the following docker images locally:
<PKG_TAG> is an auto-generated image tag, which depends on the source code in the repository. The <PKG_TAG> may be set manually via PKG_TAG=foobar make package.
By default images are built on top of the alpine image in order to improve debuggability. It is possible to build an image on top of any other base image by setting it via the ROOT_IMAGE environment variable. For example, the following command builds images on top of the scratch image:

```
ROOT_IMAGE=scratch make package
```
A minimal cluster must contain the following nodes:
- a single vmstorage node with -retentionPeriod and -storageDataPath flags
- a single vminsert node with -storageNode <vmstorage_host>:8400
- a single vmselect node with -storageNode <vmstorage_host>:8401
It is recommended to run at least two nodes for each service for high availability purposes.
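As a minimal sketch, such a cluster can be started on a single host like this (binary paths, data path and retention are hypothetical; 8400 and 8401 are the default vmstorage ports for vminsert and vmselect connections):

```
# Start a single vmstorage node:
/path/to/vmstorage-prod -retentionPeriod=1 -storageDataPath=/var/lib/vmstorage

# Point a single vminsert and a single vmselect at it:
/path/to/vminsert-prod -storageNode=localhost:8400
/path/to/vmselect-prod -storageNode=localhost:8401
```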
An http load balancer such as vmauth or nginx must be put in front of vminsert and vmselect nodes:
- requests starting with /insert must be routed to port 8480 on vminsert nodes
- requests starting with /select must be routed to port 8481 on vmselect nodes

Ports may be altered by setting -httpListenAddr on the corresponding nodes.
It is recommended to set up monitoring for the cluster.
Each flag value can be set via environment variables according to the following rules:
- The -envflag.enable flag must be set.
- Each . char in a flag name must be substituted with _ (for example, -insert.maxQueueDuration <duration> will translate to insert_maxQueueDuration=<duration>).
- For repeating flags an alternative syntax can be used by joining the different values into one using , as separator (for example, -storageNode <nodeA> -storageNode <nodeB> will translate to storageNode=<nodeA>,<nodeB>).
- A prefix for environment variables can be set via -envflag.prefix. For instance, if -envflag.prefix=VM_, then env vars must be prepended with VM_.
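For instance, under these rules the following two invocations should be equivalent (binary path and node addresses are placeholders):

```
# Plain command-line flags:
/path/to/vminsert-prod -storageNode=<nodeA> -storageNode=<nodeB>

# The same flags passed via environment variables with the VM_ prefix:
VM_storageNode='<nodeA>,<nodeB>' /path/to/vminsert-prod -envflag.enable -envflag.prefix=VM_
```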
All the cluster components expose various metrics in Prometheus-compatible format at the /metrics page on the TCP port set via the -httpListenAddr command-line flag. By default the following TCP ports are used:
- 8480 for vminsert
- 8481 for vmselect
- 8482 for vmstorage
It is recommended to set up vmagent or Prometheus to scrape the /metrics pages from all the cluster components, so they can be monitored and analyzed with the official Grafana dashboard for VictoriaMetrics cluster or an alternative dashboard for VictoriaMetrics cluster.
It is recommended to set up alerts in vmalert or in Prometheus from this config.
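As a quick sanity check before wiring up vmagent or Prometheus, the /metrics pages can be fetched manually from the default ports (host names are placeholders):

```
curl http://vminsert:8480/metrics
curl http://vmselect:8481/metrics
curl http://vmstorage:8482/metrics
```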
URLs for data ingestion have the form http://<vminsert>:8480/insert/<accountID>/<suffix>, where:
- <accountID> is an arbitrary 32-bit integer identifying the namespace for data ingestion (aka tenant). It is possible to set it as accountID:projectID, where projectID is also an arbitrary 32-bit integer. If projectID isn't set, then it equals to 0.
- <suffix> may have the following values:
  - prometheus/api/v1/write - for inserting data with the Prometheus remote write API
  - influx/api/v2/write - for inserting data with the Influx line protocol
  - opentsdb/api/put - for accepting OpenTSDB HTTP /api/put requests. This handler is disabled by default. It is exposed on a distinct TCP address set via the -opentsdbHTTPListenAddr command-line flag. See these docs for details.
  - prometheus/api/v1/import - for importing data obtained via api/v1/export on vmselect (see below)
  - prometheus/api/v1/import/native - for importing data obtained via api/v1/export/native on vmselect (see below)
  - prometheus/api/v1/import/csv - for importing arbitrary CSV data. See these docs for details.
  - prometheus/api/v1/import/prometheus - for importing data in Prometheus text exposition format and in OpenMetrics format. See these docs for details.
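For example, a single sample in Prometheus text exposition format can be ingested into the hypothetical tenant 0 like this:

```
curl -d 'foo{bar="baz"} 123' 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/prometheus'
```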
URLs for querying have the form http://<vmselect>:8481/select/<accountID>/prometheus/<suffix>, where:
- <accountID> is an arbitrary number identifying the data namespace for the query (aka tenant)
- <suffix> may have the following values:
  - api/v1/query - performs a PromQL instant query
  - api/v1/query_range - performs a PromQL range query
  - api/v1/series - performs a series query
  - api/v1/labels - returns a list of label names
  - api/v1/label/<label_name>/values - returns values for the given <label_name> according to the API
  - federate - returns federated metrics
  - api/v1/export - exports raw data in JSON line format. See this article for details.
  - api/v1/export/native - exports raw data in native binary format. It may be imported into another VictoriaMetrics via prometheus/api/v1/import/native (see above).
  - api/v1/export/csv - exports data in CSV format. It may be imported into another VictoriaMetrics via prometheus/api/v1/import/csv (see above).
  - api/v1/series/count - returns the total number of series
  - api/v1/status/tsdb - for time series stats. See these docs for details.
  - api/v1/status/active_queries - for currently executed active queries. Note that every vmselect maintains an independent list of active queries, which is returned in the response.
  - api/v1/status/top_queries - for listing the most frequently executed queries and the queries taking the most duration
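For example, a PromQL instant query against the hypothetical tenant 0:

```
curl 'http://<vmselect>:8481/select/0/prometheus/api/v1/query?query=up'
```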
URLs for the Graphite Metrics API have the form http://<vmselect>:8481/select/<accountID>/graphite/<suffix>, where:
- <accountID> is an arbitrary number identifying the data namespace for the query (aka tenant)
- <suffix> may have the following values:
  - render - implements Graphite Render API. See these docs. This functionality is available in the Enterprise package.
  - metrics/find - searches Graphite metrics. See these docs.
  - metrics/expand - expands Graphite metrics. See these docs.
  - metrics/index.json - returns all the metric names. See these docs.
  - tags/tagSeries - registers time series. See these docs.
  - tags/tagMultiSeries - registers multiple time series. See these docs.
  - tags - returns tag names. See these docs.
  - tags/<tag_name> - returns tag values for the given <tag_name>. See these docs.
  - tags/findSeries - returns series matching the given expr. See these docs.
  - tags/autoComplete/tags - returns tags matching the given expr. See these docs.
  - tags/autoComplete/values - returns tag values matching the given expr. See these docs.
  - tags/delSeries - deletes series matching the given path. See these docs.
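For example, searching Graphite metrics in the hypothetical tenant 0 (the query arg follows the Graphite metrics/find API):

```
curl 'http://<vmselect>:8481/select/0/graphite/metrics/find?query=foo.*'
```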
URL for query stats across all tenants: http://<vmselect>:8481/api/v1/status/top_queries. It lists the most frequently executed queries and the queries taking the most duration.
URL for time series deletion: http://<vmselect>:8481/delete/<accountID>/prometheus/api/v1/admin/tsdb/delete_series?match[]=<timeseries_selector_for_delete>. Note that the delete_series handler should be used only in exceptional cases such as deletion of accidentally ingested incorrect time series. It shouldn't be used on a regular basis, since it carries non-zero overhead.
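For example, deleting an accidentally ingested series foo{bar="baz"} from the hypothetical tenant 0 (the series selector is url-encoded):

```
curl 'http://<vmselect>:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series?match[]=foo%7Bbar%3D%22baz%22%7D'
```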
vmstorage nodes provide the following HTTP endpoints on the 8482 port:
- /internal/force_merge - initiate forced compactions on the given vmstorage node
- /snapshot/create - create an instant snapshot, which can be used for backups in background. Snapshots are created in the <storageDataPath>/snapshots folder, where <storageDataPath> is the corresponding command-line flag value.
- /snapshot/list - list available snapshots
- /snapshot/delete?snapshot=<id> - delete the given snapshot
- /snapshot/delete_all - delete all the snapshots

Snapshots may be created independently on each vmstorage node. There is no need to synchronize snapshot creation across vmstorage nodes.
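For example, creating and then listing snapshots on a single vmstorage node (host name is a placeholder):

```
curl http://<vmstorage>:8482/snapshot/create
curl http://<vmstorage>:8482/snapshot/list
```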
Cluster performance and capacity scale with adding new nodes.
- vminsert and vmselect nodes are stateless and may be added / removed at any time. Do not forget to update the list of these nodes on the http load balancer. Adding more vminsert nodes scales the data ingestion rate. See this comment about ingestion rate scalability. Adding more vmselect nodes scales the select queries rate.
- vmstorage nodes own the ingested data, so they cannot be removed without data loss. Adding more vmstorage nodes scales cluster capacity.
Steps to add a vmstorage node (see the sketch after this list):
1. Start the new vmstorage node with the same -retentionPeriod as the existing nodes in the cluster.
2. Gradually restart all the vmselect nodes with a new -storageNode arg containing <new_vmstorage_host>:8401.
3. Gradually restart all the vminsert nodes with a new -storageNode arg containing <new_vmstorage_host>:8400.
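A minimal sketch of these steps, assuming a new node new-vmstorage is added next to an existing node old-vmstorage (host names, paths and retention are hypothetical):

```
# 1. Start the new vmstorage node with the same retention as the existing nodes:
/path/to/vmstorage-prod -retentionPeriod=12 -storageDataPath=/var/lib/vmstorage

# 2. Gradually restart every vmselect with the extended -storageNode list:
/path/to/vmselect-prod -storageNode=old-vmstorage:8401 -storageNode=new-vmstorage:8401

# 3. Gradually restart every vminsert with the extended -storageNode list:
/path/to/vminsert-prod -storageNode=old-vmstorage:8400 -storageNode=new-vmstorage:8400
```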
All the node types - vminsert, vmselect and vmstorage - may be updated via graceful shutdown. Send a SIGINT signal to the corresponding process, wait until it finishes and then start the new version with the new configs.
The cluster should remain in a working state if at least a single node of each type remains available during the update process. See the cluster availability section for details.
The cluster remains available if at least a single vmstorage node exists:
- vminsert re-routes incoming data from unavailable vmstorage nodes to healthy vmstorage nodes
- vmselect continues serving partial responses if at least a single vmstorage node is available. If consistency over availability is preferred, then either pass the -search.denyPartialResponse command-line flag to vmselect or pass the deny_partial_response=1 query arg in requests to vmselect.

vmselect doesn't serve partial responses for API handlers returning raw datapoints - the /api/v1/export* endpoints - since users usually expect this data to be complete.
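For example, a request that prefers consistency over availability by setting the per-request query arg:

```
curl 'http://<vmselect>:8481/select/0/prometheus/api/v1/query?query=up&deny_partial_response=1'
```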
Data replication can be used for increasing storage durability. See these docs for details.
Each instance type - vminsert, vmselect and vmstorage - can run on the most suitable hardware.

vminsert:
- The recommended total number of vCPU cores for all the vminsert instances can be calculated from the ingestion rate: vCPUs = ingestion_rate / 150K.
- The recommended number of vCPU cores per each vminsert instance should equal the number of vmstorage instances in the cluster.
- The amount of RAM per each vminsert instance should be 1GB or more. RAM is used as a buffer for spikes in ingestion rate. The maximum amount of used RAM per vminsert node can be tuned with the -memory.allowedPercent or -memory.allowedBytes command-line flags. For instance, -memory.allowedPercent=20 limits the maximum amount of used RAM to 20% of the available RAM on the host system.
- Sometimes the -rpc.disableCompression command-line flag on vminsert instances could increase ingestion capacity at the cost of higher network bandwidth usage between vminsert and vmstorage.
vmstorage:
- The recommended total number of vCPU cores for all the vmstorage instances can be calculated from the ingestion rate: vCPUs = ingestion_rate / 150K.
- The recommended total amount of RAM for all the vmstorage instances can be calculated from the number of active time series: RAM = 2 * active_time_series * 1KB. A time series is active if it received at least a single data point during the last hour or if it has been queried during the last hour. The required RAM per each vmstorage instance should be multiplied by -replicationFactor if replication is enabled. Additional RAM can be required for query processing. Calculated RAM requirements may differ from actual RAM requirements due to various factors.
- The recommended total amount of storage space for all the vmstorage instances can be calculated from the ingestion rate and retention: storage_space = ingestion_rate * retention_seconds.
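As a worked example of the formulas above, assume a hypothetical cluster ingesting 300K data points per second with 1M active time series and a 1-month retention (~2.6M seconds); the storage estimate follows the formula literally, i.e. roughly one byte per stored sample:

```
vCPUs for all vminsert instances:  300K / 150K     = 2
vCPUs for all vmstorage instances: 300K / 150K     = 2
RAM for all vmstorage instances:   2 * 1M * 1KB    = 2GB
Storage space:                     300K/s * 2.6M s ≈ 780GB
```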
vmselect:
The recommended hardware for vmselect instances highly depends on the type of queries. Lightweight queries over a small number of time series usually require a small number of vCPU cores and a small amount of RAM on vmselect, while heavy queries over a big number of time series (>10K) usually require a bigger number of vCPU cores and bigger amounts of RAM.
In general it is recommended to increase the number of vCPU cores and RAM per vmselect node for higher query performance, and to add new vmselect nodes only when the existing nodes become overloaded with the incoming query stream.
It is recommended to run all the components of a single cluster in the same subnetwork with high bandwidth, low latency and low error rates. This improves cluster performance and availability. It isn't recommended to spread components of a single cluster across multiple availability zones, since cross-AZ networks usually have lower bandwidth, higher latency and higher error rates compared to the network inside a single AZ.
If you need a multi-AZ setup, then it is recommended to run independent clusters in each AZ and to set up vmagent in front of these clusters, so it replicates incoming data into all the clusters. Then promxy could be used for querying the data from multiple clusters.
Another solution is to use a multi-level cluster setup, where the top level of vminsert nodes replicates data among the lower level of vminsert nodes located in different availability zones. These vminsert nodes then spread the data among vmstorage nodes in each AZ. See these docs for more details.
vminsert nodes can accept data from other vminsert nodes starting from v1.60.0 if the -clusternativeListenAddr command-line flag is set. For example, if vminsert is started with the -clusternativeListenAddr=:8400 command-line flag, then it can accept data from other vminsert nodes at TCP port 8400 in the same way as vmstorage nodes do. This allows chaining vminsert nodes and building multi-level cluster topologies with flexible configs. For example, the top level of vminsert nodes can replicate data among the second level of vminsert nodes located in distinct availability zones (AZ), while the second-level vminsert nodes can spread the data among vmstorage nodes located in the same AZ. Such a setup keeps the cluster available if some AZ becomes unavailable. The data from all the vmstorage nodes in all the AZs can be read via vmselect nodes, which are configured to query all the vmstorage nodes in all the availability zones (e.g. all the vmstorage addresses are passed via the -storageNode command-line flag to vmselect nodes). Additionally, -replicationFactor=k+1 must be passed to vmselect nodes, where k is the lowest number of vmstorage nodes in a single AZ. See replication docs for more details.
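A minimal sketch of such a topology with two AZs and two vmstorage nodes per AZ (host names and binary paths are hypothetical; the vmselect -replicationFactor follows the k+1 rule above with k=2):

```
# Top-level vminsert replicates ingested data among per-AZ vminsert nodes:
/path/to/vminsert-prod -replicationFactor=2 \
  -storageNode=vminsert-az1:8400 -storageNode=vminsert-az2:8400

# Second-level vminsert in AZ1 accepts the data on port 8400
# and spreads it among the local vmstorage nodes:
/path/to/vminsert-prod -clusternativeListenAddr=:8400 \
  -storageNode=vmstorage-az1-a:8400 -storageNode=vmstorage-az1-b:8400

# vmselect reads from the vmstorage nodes in all the AZs:
/path/to/vmselect-prod -replicationFactor=3 \
  -storageNode=vmstorage-az1-a:8401 -storageNode=vmstorage-az1-b:8401 \
  -storageNode=vmstorage-az2-a:8401 -storageNode=vmstorage-az2-b:8401
```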
Helm chart simplifies managing cluster version of VictoriaMetrics in Kubernetes. It is available in the helm-charts repository.
K8s operator simplifies managing VictoriaMetrics components in Kubernetes.
By default VictoriaMetrics offloads replication to the underlying storage pointed to by -storageDataPath.
The replication can be enabled by passing the -replicationFactor=N command-line flag to vminsert. This guarantees that all the data remains available for querying if up to N-1 vmstorage nodes are unavailable. The cluster must contain at least 2*N-1 vmstorage nodes, where N is the replication factor, in order to maintain the given replication factor for newly ingested data when N-1 storage nodes are lost. For example, when -replicationFactor=3 is passed to vminsert, then it replicates all the ingested data to 3 distinct vmstorage nodes, so up to 2 vmstorage nodes can be lost without data loss. The minimum number of vmstorage nodes should be equal to 2*3-1 = 5, so when 2 vmstorage nodes are lost, the remaining 3 vmstorage nodes can provide the -replicationFactor=3 for newly ingested data.
When the replication is enabled, the -dedup.minScrapeInterval=1ms command-line flag must be passed to vmselect nodes. The -replicationFactor=N flag at vmselect improves query performance when up to N-1 vmstorage nodes respond slowly and/or are temporarily unavailable. Sometimes -replicationFactor at vmselect nodes can result in partial responses. See this issue for details. The -dedup.minScrapeInterval=1ms flag de-duplicates replicated data during queries. It is OK if -dedup.minScrapeInterval exceeds 1ms when deduplication is used additionally to replication.
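A minimal sketch of -replicationFactor=3 with the minimum recommended 2*3-1 = 5 vmstorage nodes (host names and binary paths are hypothetical):

```
# vminsert replicates every ingested sample to 3 distinct vmstorage nodes:
/path/to/vminsert-prod -replicationFactor=3 \
  -storageNode=s1:8400 -storageNode=s2:8400 -storageNode=s3:8400 \
  -storageNode=s4:8400 -storageNode=s5:8400

# vmselect de-duplicates the replicated data and may skip up to 2 slow or unavailable nodes:
/path/to/vmselect-prod -dedup.minScrapeInterval=1ms -replicationFactor=3 \
  -storageNode=s1:8401 -storageNode=s2:8401 -storageNode=s3:8401 \
  -storageNode=s4:8401 -storageNode=s5:8401
```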
Note that replication doesn't protect from disaster, so it is recommended to perform regular backups. See these docs for details.
Note that replication increases resource usage - CPU, RAM, disk space, network bandwidth - by up to -replicationFactor times. So it may be worth offloading the replication to the underlying storage pointed to by -storageDataPath, such as a Google Compute Engine persistent disk, which is protected from data loss and data corruption. It also provides consistently high performance and may be resized without downtime. HDD-based persistent disks should be enough for the majority of use cases.
It is recommended to use durable replicated persistent volumes in Kubernetes.
It is recommended to perform periodic backups from instant snapshots in order to protect from user errors such as accidental data deletion.
The following steps must be performed on each vmstorage node for creating a backup:
1. Create an instant snapshot by navigating to the /snapshot/create HTTP handler. It will create the snapshot and return its name.
2. Archive the created snapshot from the <-storageDataPath>/snapshots/<snapshot_name> folder using vmbackup. The archival process doesn't interfere with vmstorage work, so it may be performed at any suitable time.
3. Delete unused snapshots via /snapshot/delete?snapshot=<snapshot_name> or /snapshot/delete_all in order to free up occupied storage space.

There is no need to synchronize backups among all the vmstorage nodes.
Restoring from backup:
1. Stop the vmstorage node with kill -INT.
2. Restore data from backup using vmrestore into the -storageDataPath directory.
3. Start the vmstorage node.
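A minimal sketch of the backup and restore flow on a single vmstorage node, assuming vmbackup and vmrestore with a GCS bucket (bucket, paths and snapshot name are placeholders):

```
# Create a snapshot and archive it with vmbackup:
curl http://<vmstorage>:8482/snapshot/create
/path/to/vmbackup -storageDataPath=/var/lib/vmstorage \
  -snapshotName=<snapshot_name> -dst=gs://<bucket>/<path>

# Restore it with vmrestore (the vmstorage node must be stopped first):
/path/to/vmrestore -src=gs://<bucket>/<path> -storageDataPath=/var/lib/vmstorage
```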
All the cluster components provide the following handlers for profiling:
- http://vminsert:8480/debug/pprof/heap for memory profile and http://vminsert:8480/debug/pprof/profile for CPU profile
- http://vmselect:8481/debug/pprof/heap for memory profile and http://vmselect:8481/debug/pprof/profile for CPU profile
- http://vmstorage:8482/debug/pprof/heap for memory profile and http://vmstorage:8482/debug/pprof/profile for CPU profile
Example command for collecting a CPU profile from vmstorage:

```
curl -s http://vmstorage:8482/debug/pprof/profile > cpu.pprof
```
Example command for collecting a memory profile from vminsert:

```
curl -s http://vminsert:8480/debug/pprof/heap > mem.pprof
```
We are open to third-party pull requests provided they follow the KISS design principle:
Adhering to the KISS principle simplifies the resulting code and architecture, so it can be reviewed, understood and verified by many people.
Due to the KISS principle, the cluster version of VictoriaMetrics doesn't have the following "features" popular in the distributed computing world:
Report bugs and propose new features here.
The zip archive contains three folders with different image orientations (main color and inverted versions). Files included in each folder: