VictoriaLogs supports the following Fluentd outputs:
## Loki
Specify the `loki` output section in `fluentd.conf` for sending the collected logs to VictoriaLogs:
```
<match **>
  @type loki
  url "http://localhost:9428/insert"
  <buffer>
    flush_interval 10s
    flush_at_shutdown true
  </buffer>
  custom_headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
  buffer_chunk_limit 1m
</match>
```
## HTTP
Specify the `http` output section in `fluentd.conf` for sending the collected logs to VictoriaLogs:
```
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
</match>
```
Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
See these docs for details on the query args specified in the `endpoint`.
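To make the headers and body shape concrete, here is a minimal Python sketch of the request that the `http` output above effectively sends to the `jsonline` endpoint: newline-delimited JSON records plus the `VL-*` headers. The sample log records are hypothetical; the URL and field names mirror the config above.

```python
import json

def build_jsonline_request(records, msg_field="log", time_field="time",
                           stream_fields="path"):
    """Sketch of the jsonline ingestion request: VL-* headers tell
    VictoriaLogs which record fields hold the message, the timestamp
    and the stream labels."""
    headers = {
        "VL-Msg-Field": msg_field,
        "VL-Time-Field": time_field,
        "VL-Stream-Fields": stream_fields,
    }
    # One JSON object per line; each line is a single log entry.
    body = "\n".join(json.dumps(r) for r in records) + "\n"
    return "http://localhost:9428/insert/jsonline", headers, body

url, headers, body = build_jsonline_request([
    {"log": "failed to connect", "time": "2024-01-01T00:00:00Z",
     "path": "/var/log/app.log"},
])
```

Posting `body` with those headers (for example via `urllib.request` or `curl`) should ingest one log entry into the stream identified by the `path` field.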
It is recommended to verify whether the initial setup generates the needed log fields
and uses the correct stream fields.
This can be done by specifying the `debug` parameter in the `endpoint`
and then inspecting VictoriaLogs logs:
```
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline?debug=1"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
</match>
```
If some log fields must be skipped during data ingestion, then they can be listed in the `ignore_fields` parameter.
For example, the following config instructs VictoriaLogs to ignore `log.offset`
and `event.original` fields in the ingested logs:
```
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline?ignore_fields=log.offset,event.original"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
</match>
```
If Fluentd sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compress gzip`
option.
This usually allows saving network bandwidth and costs by up to 5 times:
```
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline?ignore_fields=log.offset,event.original"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
  compress gzip
</match>
```
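The bandwidth saving comes from how well repetitive JSON log lines compress. A rough illustration with a made-up payload of similar log entries:

```python
import gzip
import json

# Build a newline-delimited JSON payload of 1000 similar log entries,
# roughly what Fluentd batches before sending.
lines = "\n".join(
    json.dumps({"log": f"request {i} served", "path": "/var/log/app.log"})
    for i in range(1000)
).encode()

compressed = gzip.compress(lines)
ratio = len(lines) / len(compressed)
# With `compress gzip`, Fluentd sends the compressed body and marks it
# with a Content-Encoding header so the receiver can decompress it.
```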
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` tenant.
If you need to store logs in another tenant, then specify the needed tenant via `header`
options.
For example, the following `fluentd.conf`
config instructs Fluentd to store the data in the `(AccountID=12, ProjectID=34)`
tenant:
```
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
  header AccountID 12
  header ProjectID 34
</match>
```
See also: