VictoriaLogs can accept logs from Vector via the following protocols:
- Elasticsearch - see these docs
- HTTP JSON - see these docs
## Elasticsearch
Specify the `elasticsearch` sink type in `vector.yaml` for sending the collected logs to VictoriaLogs:
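A minimal sketch of such a sink follows; the endpoint path is VictoriaLogs' Elasticsearch-compatible ingestion endpoint, while field names such as `host` and `container_name` are illustrative and should match the fields your inputs actually produce:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    compression: gzip
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
```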
Replace `your_input` with the name of the `inputs` section that collects logs. See these docs for details.
Substitute the `localhost:9428` address inside the `endpoints` section with the real TCP address of VictoriaLogs.
See these docs for details on the parameters specified in the `sinks.vlogs.query` section.
It is recommended to verify that the initial setup generates the needed log fields and uses the correct stream fields. This can be done by specifying the `debug` parameter in the `sinks.vlogs.query` section and then inspecting the logs ingested into VictoriaLogs:
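A sketch of the `debug` setting in context; the other `query` fields are carried over from the earlier example and remain illustrative:

```yaml
sinks:
  vlogs:
    # ... inputs, type, endpoints as in the config above ...
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
      debug: "1"
```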
If some log fields must be skipped during data ingestion, they can be listed in the `ignore_fields` parameter. For example, the following config instructs VictoriaLogs to ignore the `log.offset` and `event.original` fields in the ingested logs:
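A sketch of the `ignore_fields` setting, using the two field names mentioned above:

```yaml
sinks:
  vlogs:
    # ... inputs, type, endpoints as in the config above ...
    query:
      ignore_fields: "log.offset,event.original"
```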
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` tenant. If you need to store logs in another tenant, specify it via the `sinks.vlogs.request.headers` section. For example, the following `vector.yaml` config instructs Vector to store the data in the `(AccountID=12, ProjectID=34)` tenant:
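A sketch of the tenant headers; the rest of the sink config is elided and the `query` field values are illustrative:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    # ... mode, api_version, query as in the config above ...
    request:
      headers:
        AccountID: "12"
        ProjectID: "34"
```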
## HTTP
Vector can be configured with the `http` sink type for sending data to VictoriaLogs in the JSON stream API format:
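A minimal sketch of such a sink, targeting VictoriaLogs' JSON stream ingestion endpoint; the `host` and `container_name` stream fields are illustrative:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: http
    uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp
    encoding:
      codec: json
    framing:
      method: newline_delimited
    healthcheck:
      enabled: false
```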
Replace `your_input` with the name of the `inputs` section that collects logs. See these docs for details.
Substitute the `localhost:9428` address inside the `endpoints` section with the real TCP address of VictoriaLogs.
See these docs for details on the parameters specified in the query args of the `uri` (`_stream_fields`, `_msg_field` and `_time_field`).
It is recommended to verify that the initial setup generates the needed log fields and uses the correct stream fields. This can be done by specifying the `debug` query arg in the `uri`:
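A sketch of the same `uri` with `debug=1` appended; the other query args are carried over from the example above and remain illustrative:

```yaml
sinks:
  vlogs:
    # ... inputs, encoding, framing as in the config above ...
    type: http
    uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp&debug=1
```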
See also: