Promtail, Grafana Agent and Grafana Alloy are the default log collectors for Grafana Loki. They can be configured to send the collected logs to VictoriaLogs according to the following docs.

Specify the `clients` section in the configuration file for sending the collected logs to VictoriaLogs:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push"
```

Substitute the `localhost:9428` address inside `clients` with the real TCP address of VictoriaLogs.
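
For reference, a complete minimal Promtail configuration might look as follows. This is a sketch: the `scrape_configs` job name, target and label values are illustrative placeholders.

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  # Send the collected logs to VictoriaLogs via the Loki push protocol.
  - url: "http://localhost:9428/insert/loki/api/v1/push"

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
```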

The ingested log entries can be queried according to these docs.
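
For a quick check, the ingested entries can be fetched via the `/select/logsql/query` endpoint. A minimal sketch, assuming VictoriaLogs runs at `localhost:9428`:

```sh
# Return up to 5 log entries ingested during the last 5 minutes.
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m | limit 5'
```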

## Parsing log message

VictoriaLogs automatically parses JSON strings in the log message into distinct log fields. This behavior can be disabled by passing the `-loki.disableMessageParsing` command-line flag to VictoriaLogs or by adding the `disable_message_parsing=1` query arg to the `/insert/loki/api/v1/push` URL in the log shipper config:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push?disable_message_parsing=1"
```

In this case the JSON with log fields is stored as a string in the `_msg` field, so it can be parsed later at query time with the `unpack_json` pipe. JSON parsing at query time can be slow and can consume a lot of additional CPU time and disk read IO bandwidth, so it is recommended to leave JSON message parsing enabled during data ingestion.
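
For example, a LogsQL query along the following lines (a sketch) unpacks the JSON stored in `_msg` at query time:

```logsql
_time:5m | unpack_json from _msg
```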

VictoriaLogs can add a given prefix to all the parsed log fields in order to minimize the possibility of clashes with log stream labels. This can be done via the `-loki.messageFieldsPrefix=<some_prefix>` command-line flag at VictoriaLogs or via the `message_fields_prefix=<some_prefix>` query arg at the `/insert/loki/api/v1/push` URL in the log shipper config. For example, the following config adds the `msg.` prefix to all the parsed log fields:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push?message_fields_prefix=msg."
```
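
To illustrate the effect (a hypothetical message, not real output): with this config a JSON log message is stored as prefixed fields, while the stream labels keep their original names.

```
{"level":"info","took":"1.2s"}  ->  msg.level="info", msg.took="1.2s"
```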

## Log stream fields

VictoriaLogs uses the log stream labels defined at the client side, e.g. at Promtail, Grafana Agent or Grafana Alloy. Sometimes the set of these fields needs to be overridden. This can be done via the `_stream_fields` query arg. For example, the following config instructs VictoriaLogs to use only the `instance` and `job` labels as log stream fields, while the remaining labels are stored as regular log fields:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job"
```
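
The labels themselves are defined on the client side. A minimal Promtail sketch, where the label values are illustrative placeholders:

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          # These two labels become the log stream fields,
          # since the clients url above passes _stream_fields=instance,job.
          job: app-logs
          instance: host-1
          __path__: /var/log/app/*.log
```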

See also these docs for details on other supported query args.

## Ignoring log fields

If some log fields must be skipped during data ingestion, then they can be listed in the `ignore_fields` parameter. For example, the following config instructs VictoriaLogs to ignore the `filename` and `stream` fields in the ingested logs:

```yaml
clients:
  - url: 'http://localhost:9428/insert/loki/api/v1/push?ignore_fields=filename,stream'
```

See also these docs for details on other supported query args.

## Time field

There is no need to specify the `_time_field` query arg for reading the log timestamp, since VictoriaLogs automatically extracts the timestamp from the ingested Loki data.

VictoriaLogs ignores the `_time` field in the collected logs and warns about it, so drop this field before ingesting logs into VictoriaLogs via the Loki protocol.
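
If `_time` reaches Promtail as a label, one possible way to drop it is the `labeldrop` pipeline stage. A sketch, assuming the field is exposed as a label at this point of the pipeline:

```yaml
scrape_configs:
  - job_name: app
    pipeline_stages:
      # Drop the _time label before the logs are pushed to VictoriaLogs,
      # assuming it is present as a label here.
      - labeldrop:
          - _time
    static_configs:
      - targets: [localhost]
        labels:
          __path__: /var/log/app/*.log
```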

## Multitenancy

By default the ingested logs are stored in the `(AccountID=0, ProjectID=0)` tenant. If the logs need to be stored in another tenant, then specify it via the `tenant_id` field in the Loki client configuration. The `tenant_id` must have the `AccountID:ProjectID` format, where `AccountID` and `ProjectID` are arbitrary uint32 numbers. For example, the following config instructs VictoriaLogs to store logs in the `(AccountID=12, ProjectID=34)` tenant:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push"
    tenant_id: "12:34"
```
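
Loki clients such as Promtail transmit `tenant_id` via the `X-Scope-OrgID` HTTP header, so the same ingestion can be reproduced with curl. A sketch using the Loki JSON push format; the stream label and message are placeholders:

```sh
# Push a single test log entry into the (AccountID=12, ProjectID=34) tenant.
# $(date +%s%N) produces a nanosecond timestamp (GNU date).
curl -H 'X-Scope-OrgID: 12:34' -H 'Content-Type: application/json' \
  -d '{"streams":[{"stream":{"job":"test"},"values":[["'"$(date +%s%N)"'","test message"]]}]}' \
  http://localhost:9428/insert/loki/api/v1/push
```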

## Debugging

It is recommended to verify whether the initial setup generates the needed log fields and uses the correct stream fields. This can be done by specifying the `debug` parameter and then inspecting the VictoriaLogs logs:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push?debug=1"
```
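
The resulting stream fields can also be inspected at query time. A minimal sketch:

```sh
# Show the stream labels and the message for recent entries.
curl http://localhost:9428/select/logsql/query \
  -d 'query=_time:5m | fields _stream, _msg | limit 5'
```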

See also data ingestion troubleshooting docs.