VictoriaLogs can accept logs from Vector via the following protocols:
- Elasticsearch - see these docs
- HTTP JSON - see these docs
## Elasticsearch

Specify the `elasticsearch` sink type in `vector.yaml` for sending the collected logs to VictoriaLogs:
```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    api_version: v8
    compression: gzip
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
```
Replace `your_input` with the name of the `inputs` section that collects logs. See these docs for details.
Substitute the `localhost:9428` address inside the `endpoints` section with the real TCP address of VictoriaLogs.
See these docs for details on the parameters specified in the `sinks.vlogs.query` section.
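For example, `your_input` could refer to a `sources` section that tails local log files. The following is a minimal sketch; the source name and file paths are illustrative assumptions:

```yaml
sources:
  # Hypothetical input: tail local log files; adjust `include` to your paths.
  your_input:
    type: file
    include:
      - /var/log/app/*.log
```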
It is recommended to verify that the initial setup generates the needed log fields and uses the correct stream fields.
This can be done by specifying the `debug` parameter in the `sinks.vlogs.query` section and then inspecting the VictoriaLogs logs:
```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    api_version: v8
    compression: gzip
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
      debug: "1"
```
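If the debug output shows that the needed values live under different field names, they can be renamed before the sink with a `remap` transform. The following is a hedged sketch; the transform name and field names are illustrative assumptions:

```yaml
transforms:
  # Hypothetical transform: move the `msg` field into `message`
  # so the `_msg_field: message` setting picks it up.
  normalize_fields:
    inputs:
      - your_input
    type: remap
    source: |
      .message = del(.msg)
sinks:
  vlogs:
    inputs:
      - normalize_fields
    # ... the rest of the sink config stays the same
```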
If some log fields must be skipped during data ingestion, they can be listed in the `ignore_fields` parameter.
For example, the following config instructs VictoriaLogs to ignore the `log.offset` and `event.original` fields in the ingested logs:
```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    api_version: v8
    compression: gzip
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
      ignore_fields: log.offset,event.original
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` tenant.
If you need to store logs in another tenant, specify the needed tenant via the `sinks.vlogs.request.headers` section.
For example, the following `vector.yaml` config instructs Vector to store the data in the `(AccountID=12, ProjectID=34)` tenant:
```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
    request:
      headers:
        AccountID: "12"
        ProjectID: "34"
```
## HTTP

Vector can be configured with the `http` sink type for sending data to VictoriaLogs via the JSON stream API:
```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: http
    uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp
    compression: gzip
    encoding:
      codec: json
    framing:
      method: newline_delimited
    healthcheck:
      enabled: false
```
Replace `your_input` with the name of the `inputs` section that collects logs. See these docs for details.
Substitute the `localhost:9428` address inside the `uri` with the real TCP address of VictoriaLogs.
See these docs for details on the parameters specified in the query args of the `uri` (`_stream_fields`, `_msg_field` and `_time_field`).
It is recommended to verify that the initial setup generates the needed log fields and uses the correct stream fields.
This can be done by specifying the `debug` query arg in the `uri`:
```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: http
    uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp&debug=1
    compression: gzip
    encoding:
      codec: json
    framing:
      method: newline_delimited
    healthcheck:
      enabled: false
```
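Putting the pieces together, a minimal end-to-end `vector.yaml` for the HTTP route might look like the sketch below; the source name and file paths are illustrative assumptions, and the `host` and `container_name` stream fields are only populated when the chosen source actually emits them:

```yaml
sources:
  # Hypothetical input: tail local log files; adjust `include` to your paths.
  your_input:
    type: file
    include:
      - /var/log/app/*.log
sinks:
  vlogs:
    inputs:
      - your_input
    type: http
    uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp
    compression: gzip
    encoding:
      codec: json
    framing:
      method: newline_delimited
    healthcheck:
      enabled: false
```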
See also: