Promtail, Grafana Agent, and Grafana Alloy are the default log collectors for Grafana Loki. They can be configured to send the collected logs to VictoriaLogs according to the following docs.
Specify the clients section in the configuration file for sending the collected logs to VictoriaLogs:
clients:
- url: "http://localhost:9428/insert/loki/api/v1/push"
Substitute the localhost:9428 address inside clients with the real TCP address of VictoriaLogs.
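For a quick end-to-end check, the clients section above can be embedded into a minimal Promtail config. The server, positions and scrape_configs values below are illustrative assumptions; only the clients url must point at VictoriaLogs:

```yaml
# Minimal Promtail config sketch. Everything except the clients section
# is an illustrative assumption for scraping local log files.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push"

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
```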
VictoriaLogs automatically parses a JSON string from the log message into distinct log fields.
This behavior can be disabled by passing the -loki.disableMessageParsing command-line flag to VictoriaLogs or by adding the disable_message_parsing=1 query arg
to the /insert/loki/api/v1/push url in the log shipper config:
clients:
- url: "http://localhost:9428/insert/loki/api/v1/push?disable_message_parsing=1"
In this case the JSON with log fields is stored as a string in the _msg field, so it can be parsed later at query time with the unpack_json pipe.
JSON parsing at query time can be slow and can consume a lot of additional CPU time and disk read IO bandwidth. That's why it is
recommended to keep JSON message parsing enabled at data ingestion.
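The difference between the two modes can be sketched with a simplified Python model. This is an assumption-level illustration of the behavior described above, not the actual VictoriaLogs implementation:

```python
import json

def ingest_message(msg: str, parse_json: bool = True) -> dict:
    """Simplified model of how a Loki log message becomes log fields.

    With JSON message parsing enabled (the default), a JSON object in the
    message is split into distinct log fields. With parsing disabled
    (disable_message_parsing=1), the raw string is kept in the _msg field,
    to be unpacked later at query time with the unpack_json pipe.
    """
    if parse_json:
        try:
            fields = json.loads(msg)
            if isinstance(fields, dict):
                # Each top-level JSON key becomes a distinct log field.
                return {str(k): str(v) for k, v in fields.items()}
        except json.JSONDecodeError:
            pass
    # Non-JSON messages (or disabled parsing) land in the _msg field as-is.
    return {"_msg": msg}

line = '{"level": "error", "trace_id": "abc123"}'
print(ingest_message(line))                    # parsed into distinct fields
print(ingest_message(line, parse_json=False))  # kept as a string in _msg
```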
VictoriaLogs uses log stream labels defined at the client side,
e.g. at Promtail, Grafana Agent or Grafana Alloy. Sometimes it may be needed to override the set of these fields. This can be done via the _stream_fields
query arg. For example, the following config instructs using only the instance and job labels as log stream fields, while the other labels
are stored as usual log fields:
clients:
- url: "http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job"
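The effect of _stream_fields can be sketched as a simple partition of the client-side labels. The helper below is an illustrative assumption, not VictoriaLogs source code:

```python
def split_labels(labels: dict, stream_fields: set) -> tuple[dict, dict]:
    """Split client-side labels into log stream fields and usual log fields.

    Labels listed in _stream_fields identify the log stream; all other
    labels are stored as usual log fields (simplified model).
    """
    stream = {k: v for k, v in labels.items() if k in stream_fields}
    other = {k: v for k, v in labels.items() if k not in stream_fields}
    return stream, other

labels = {"instance": "host1:9100", "job": "node", "filename": "/var/log/syslog"}
stream, fields = split_labels(labels, {"instance", "job"})
print(stream)  # only instance and job identify the stream
print(fields)  # filename is stored as a usual log field
```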
It is recommended to verify whether the initial setup generates the needed log fields and uses the correct stream fields.
This can be done by specifying the debug parameter and then inspecting VictoriaLogs logs:
clients:
- url: "http://localhost:9428/insert/loki/api/v1/push?debug=1"
If some log fields must be skipped during data ingestion, they can be listed in the ignore_fields parameter.
For example, the following config instructs VictoriaLogs to ignore the filename and stream fields in the ingested logs:
clients:
- url: 'http://localhost:9428/insert/loki/api/v1/push?ignore_fields=filename,stream'
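The query args described above can also be combined in a single url using the standard & separator. For example, the following illustrative config combines the _stream_fields and ignore_fields args from this page:

```yaml
clients:
  - url: "http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job&ignore_fields=filename,stream"
```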
See also these docs for details on other supported query args.
There is no need to specify the _time_field query arg, since VictoriaLogs automatically extracts the timestamp from the ingested Loki data.
By default the ingested logs are stored in the (AccountID=0, ProjectID=0) tenant.
If you need to store logs in another tenant, specify the needed tenant via the tenant_id field in the Loki client configuration.
The tenant_id must have the AccountID:ProjectID format, where AccountID and ProjectID are arbitrary uint32 numbers.
For example, the following config instructs VictoriaLogs to store logs in the (AccountID=12, ProjectID=34) tenant:
clients:
- url: "http://localhost:9428/insert/loki/api/v1/push"
tenant_id: "12:34"
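The AccountID:ProjectID format can be checked with a small helper before rolling out a config. This is a hypothetical validation snippet, not part of VictoriaLogs or its clients:

```python
def parse_tenant_id(tenant_id: str) -> tuple[int, int]:
    """Parse a tenant_id in the AccountID:ProjectID format.

    Both parts must be uint32 numbers, i.e. in the range [0, 2**32).
    """
    account, _, project = tenant_id.partition(":")
    account_id, project_id = int(account), int(project)
    if not (0 <= account_id < 2**32 and 0 <= project_id < 2**32):
        raise ValueError("AccountID and ProjectID must be uint32 numbers")
    return account_id, project_id

print(parse_tenant_id("12:34"))  # (12, 34)
```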
The ingested log entries can be queried according to these docs.
See also data ingestion troubleshooting docs.