The relabeling cookbook provides practical examples and patterns for transforming your metrics data as it flows through VictoriaMetrics, helping you control what gets collected and how it’s labeled.

VictoriaMetrics and vmagent support Prometheus-style relabeling with extra features to enhance the functionality.

Relabeling Stages #

Relabeling in VictoriaMetrics happens in three main stages:

Service Discovery Relabeling #

Relabeling starts with relabel_configs in the Prometheus scrape config (-promscrape.config).

# global relabeling rules applied to all targets
global:
  relabel_configs:

# job-specific target relabeling applied only to targets within this job
scrape_configs:
  - job_name: "my-job"
    relabel_configs:

These rules are used during service discovery, before VictoriaMetrics begins scraping metrics from targets.

  • global.relabel_configs: applied to all discovered targets from all jobs.
  • scrape_configs[].relabel_configs: applied only to targets within the specified job.

The main purpose is to change or filter the list of discovered targets. You can add, remove, or update target labels—or drop targets completely.

Refer to the Service Discovery Relabeling Cheatsheet section for more examples.

Scraping Relabeling #

Once VictoriaMetrics has finished selecting the targets using relabel_configs, it starts scraping those endpoints. After scraping, you can apply metric_relabel_configs in the -promscrape.config file (global-level metric_relabel_configs is available since v1.106.0) to modify the scraped metrics:

# global metric relabeling applied to all metrics
global:
  metric_relabel_configs:

# job-specific metric relabeling applied only to metrics scraped within this job
scrape_configs:
  - job_name: "my-job"
    metric_relabel_configs:

This is the second stage, and it operates on individual metrics that were just scraped from the targets, not the targets themselves.

  • global.metric_relabel_configs: affects all scraped metrics from all jobs.
  • scrape_configs[].metric_relabel_configs: applies only to metrics scraped from the specific job.

This means you can filter or modify the scraped time series before VictoriaMetrics stores them in its time series database.

Refer to the Scraping Relabeling Cheatsheet section for more examples.

Remote Write Relabeling #

This step takes place after metric_relabel_configs are applied, right before metrics are sent to a storage destination specified by -remoteWrite.url in vmagent.

The main goal of this stage is to apply relabeling rules to all incoming metrics, no matter where they come from (push-based or pull-based sources). It includes two phases:

  • -remoteWrite.relabelConfig: This is applied to all metrics before they are sent to any remote storage destination.
  • -remoteWrite.urlRelabelConfig: This is applied to all metrics before they are sent to a specific remote storage destination.

This functionality is essential for routing and filtering data in different ways for multiple backends. For example:

  • Send only metrics with env=prod to a production VictoriaMetrics cluster (see Splitting data streams among multiple systems for how to configure this in vmagent).
  • Send only metrics with env=dev to a development cluster.
  • Send a subset of high-importance metrics to a Kafka topic for real-time analysis, while sending all metrics to long-term storage.
  • Remove certain labels only for a specific backend, while keeping them for others.
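
As an illustration of how the two phases fit together, the sketch below routes production metrics to one destination while cleaning up a label for all destinations. The file names and the env/pod_template_hash labels are illustrative assumptions:

```yaml
# relabel.yml - passed via -remoteWrite.relabelConfig;
# applied to all metrics before any destination:
# drop a noisy internal label everywhere
- action: labeldrop
  regex: "pod_template_hash"

# prod.yml - passed via -remoteWrite.urlRelabelConfig for the first
# -remoteWrite.url; keep only production metrics for that destination
- action: keep
  if: '{env="prod"}'
```

The N-th -remoteWrite.urlRelabelConfig value is applied to metrics sent to the N-th -remoteWrite.url, so per-destination filtering is configured by passing the flag once per URL.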

Relabeling Enhancements #

VictoriaMetrics relabeling is compatible with Prometheus relabeling and provides the following enhancements:

  • The replacement field: allows you to construct new label values
    by referencing existing ones using the {{label_name}} syntax.
    For example, if a metric has the labels {instance="host123", job="node_exporter"},
    this rule will create or update the fullname label with the value host123-node_exporter:

    - target_label: "fullname"
      replacement: "{{instance}}-{{job}}"

  • The if filter: applies the action only to samples that match one or more time series selectors. It supports a single selector or a list; if any selector matches, the action is applied.

    For example, the following relabeling rule keeps metrics matching the node_memory_MemAvailable_bytes{instance="host123"} series selector, while dropping the rest of the metrics:

    - if: 'node_memory_MemAvailable_bytes{instance="host123"}'
      action: keep

    This is equivalent to the following, less intuitive Prometheus-compatible rule:

    - action: keep
      source_labels: [__name__, instance]
      regex: "node_memory_MemAvailable_bytes;host123"

    The if option can include multiple filters. If any one of them matches a sample, the action will be applied. For example, the rule below adds the label team="infra" to all samples where job="api" OR instance="web-1":

    - target_label: team
      replacement: infra
      if:
        - '{job="api"}'
        - '{instance="web-1"}'

  • The regex option: can be split into multiple lines for better readability. VictoriaMetrics automatically combines them using | (OR). The two examples below are treated the same and match http_requests_total, node_memory_MemAvailable_bytes, or any metric starting with nginx_:

    - action: keep_metrics
      regex: "http_requests_total|node_memory_MemAvailable_bytes|nginx_.+"

    - action: keep_metrics
      regex:
        - "http_requests_total"
        - "node_memory_MemAvailable_bytes"
        - "nginx_.+"

Besides these enhancements, VictoriaMetrics also provides the following new actions:

  • replace_all action: Replaces all matches of regex in source_labels with replacement, and writes the result to target_label. Example: replace all dashes (-) with underscores (_) in metric names (e.g. http-request-latency → http_request_latency):

    - action: replace_all
      source_labels: ["__name__"]
      target_label: "__name__"
      regex: "-"
      replacement: "_"

  • labelmap_all action: allows you to create new labels by renaming existing ones based on a regex pattern match against the original label’s name. Example: replace - with _ in all label names (e.g. pod-label-region → pod_label_region):

    - action: labelmap_all
      regex: "-"
      replacement: "_"

  • keep_if_equal action: Keeps the entry only if all source_labels have the same value. Example: Keep targets where instance and host are equal:

    - action: keep_if_equal
      source_labels: ["instance", "host"]

  • drop_if_equal action: Drops the entry if all source_labels have the same value. Example: Drop targets where instance equals host:

    - action: drop_if_equal
      source_labels: ["instance", "host"]

  • keep_if_contains action: Keeps the entry if target_label contains all values from source_labels. Example: Keep if __meta_consul_tags contains the value of required_tag:

    - action: keep_if_contains
      target_label: __meta_consul_tags
      source_labels: [required_tag]

  • drop_if_contains action: Drops the entry if target_label contains all values from source_labels. Example: Drop if the __meta_consul_tags label value contains the value of the blocked_tag label:

    - action: drop_if_contains
      target_label: __meta_consul_tags
      source_labels: [blocked_tag]

  • keep_metrics action: Keeps metrics whose names match the regex. Example: Keep only http_requests_total and node_memory_Active_bytes metrics:

    - action: keep_metrics
      regex: "http_requests_total|node_memory_Active_bytes"

  • drop_metrics action: Drops metrics whose names match the regex. Example: Drop go_gc_duration_seconds and process_cpu_seconds_total metrics:

    - action: drop_metrics
      regex: "go_gc_duration_seconds|process_cpu_seconds_total"

  • graphite action: Applies Graphite-style relabeling rules to extract labels from metric names. See Graphite Relabeling for details.

Graphite Relabeling #

VictoriaMetrics components support action: graphite relabeling rules. These rules let you extract parts of Graphite-style metric names and turn them into Prometheus labels. The matching syntax is similar to glob matching in statsd_exporter.

You must set the __name__ label inside the labels section to define the new metric name. Otherwise, the original metric name remains unchanged.

For example, this rule transforms a Graphite-style metric like authservice.us-west-2.login.total into a Prometheus-style metric login_total{instance="us-west-2:8080", job="authservice"}:

- action: graphite
  match: "*.*.*.total"
  labels:
    __name__: "${3}_total"
    job: "$1"
    instance: "${2}:8080"

Key points about action: graphite relabeling:

  • The rule applies only to metrics that match the match pattern. Others are ignored.
  • * matches as many characters as possible until the next . or next match part. It can also match nothing if followed by a dot. E.g., match: "app*prod.requests" matches app42prod.requests, and 42 becomes available as $1 in the labels section.
  • $0 is the full original metric name.
  • Rules run in the order they appear in the config.

Using action: graphite is typically easier and faster than using action: replace for parsing Graphite-style metric names.
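
For comparison, expressing the same transformation as the example above with action: replace requires repeating a regular expression over the full metric name (a sketch):

```yaml
# equivalent replace-based rules for authservice.us-west-2.login.total
- source_labels: [__name__]
  regex: '([^.]+)\.([^.]+)\.([^.]+)\.total'
  target_label: job
  replacement: "$1"
- source_labels: [__name__]
  regex: '([^.]+)\.([^.]+)\.([^.]+)\.total'
  target_label: instance
  replacement: "$2:8080"
# rewrite __name__ last, so the rules above still match the original name
- source_labels: [__name__]
  regex: '([^.]+)\.([^.]+)\.([^.]+)\.total'
  target_label: __name__
  replacement: "${3}_total"
```

Each extracted label needs its own rule with the same regex, which is why the graphite action is both easier to read and faster to evaluate.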

Relabel Debugging #

vmagent and single-node VictoriaMetrics support debugging at both the target and metric levels.

Start by visiting http://vmagent:8429/targets for vmagent or http://victoriametrics:8428/targets for single-node VictoriaMetrics. You will see two types of targets:

  • Active Targets (/targets): These are the targets that vmagent is currently scraping. Target relabeling rules have already been applied.
  • Discovered Targets (/service-discovery): These are the targets found during service discovery, before any relabeling rules are applied. This includes targets that may later be dropped.

This option is not available when the component is started with the -promscrape.dropOriginalLabels flag.

How to use the /targets page?

The /targets page helps answer the following questions:

1. Why are some targets not being scraped?

  • The last error column shows the reason why a target is not being scraped.
  • Click the endpoint link to open the target URL in your browser.
  • Click the response link to view the response vmagent received from the target.

2. What labels does a specific target have?

The labels column shows the labels for each target. These labels are attached to all metrics scraped from that target.

You can click the label column of the target to see the original labels before any relabeling was applied.

This option is not available when the component is started with the -promscrape.dropOriginalLabels flag.

3. Why does a target have a certain set of labels?

Click the target link in the debug relabeling column. This opens a step-by-step view of how the relabeling rules were applied to the original labels.

This option is not available when the component is started with the -promscrape.dropOriginalLabels flag.

4. How are metric relabeling rules applied to scraped metrics?

Click the metrics link in the debug relabeling column. This shows how the metric relabeling rules were applied, step by step.

Each column on the page shows important details:

  • state: shows if the target is currently up or down.
  • scrapes: number of times the target was scraped.
  • errors: number of failed scrapes.
  • last scrape: when the last scrape happened.
  • last scrape size: size of the last scrape.
  • duration: time taken for the last scrape.
  • samples: number of metrics exposed by the target during the last scrape.

How to use the /service-discovery page?

This page shows all discovered targets.

This option is not available when the component is started with the -promscrape.dropOriginalLabels flag.

It helps answer the following questions:

1. Why are some targets dropped during service discovery or showing unexpected labels?

Click the debug link in the debug relabeling column for a dropped target. This opens a step-by-step view of how target relabeling rules were applied to that target’s original labels.

2. What were the original labels before relabeling?

The discovered labels column shows the original labels for each discovered target.

Relabeling Use Cases #

Service Discovery Relabeling Cheatsheet #

Target-level relabeling is applied during service discovery and affects which targets are scraped, their labels, and all the metrics scraped from them.


How to drop discovered targets #

To drop a particular discovered target, use the following options:

  • action: drop: drops scrape targets with labels matching the if series selector
  • action: keep: keeps scrape targets with labels matching the if series selector, while dropping all other targets

Here are examples of these options:

  • This config discovers pods in Kubernetes and drops all pods with names starting with the test- prefix:

    scrape_configs:
      - job_name: prod_pods_only
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - if: '{__meta_kubernetes_pod_name=~"test-.*"}'
            action: drop

  • This config keeps only pods with names starting with the backend- prefix:

    scrape_configs:
      - job_name: backend_pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - if: '{__meta_kubernetes_pod_name=~"backend-.*"}'
            action: keep

See also useful tips for target relabeling.


How to modify scrape URLs in targets #

URLs for scrape targets are composed of the following parts:

  • Scheme (e.g. http, https) is available during target relabeling in a special label - __scheme__. By default, it’s set to http but can be overridden either by specifying the scheme option at scrape_config level or by updating the __scheme__ label during relabeling.
  • Host and port (e.g. host12:3456) are available during target relabeling in a special label - __address__. Its value depends on the service discovery type. If the value needs to be modified, just update the __address__ label during relabeling.
    • The port part is optional. If it is missing, it is set automatically based on the scheme (80 for http, 443 for https).
    • The host:port part from the final __address__ label is automatically set as the instance label.
    • The __address__ label can contain the full scrape URL (e.g. http://host:port/metrics/path?query_args). In this case the __scheme__ and __metrics_path__ labels are ignored.
  • URL path (e.g. /metrics) is available during target relabeling in a special label - __metrics_path__. By default, it’s set to /metrics and can be overridden either by specifying the metrics_path option at scrape_config level or by updating the __metrics_path__ label during relabeling.
  • Query args (e.g. ?foo=bar&baz=xyz) are available during target relabeling in special labels with the __param_ prefix.
    • Take ?foo=bar&baz=xyz for example. There will be two special labels: __param_foo="bar" and __param_baz="xyz". The query args can be specified either via the params section at scrape_config or by updating/setting the corresponding __param_* labels during relabeling.

The resulting scrape URL looks like the following:

      <__scheme__> + "://" + <__address__> + <__metrics_path__> + <"?" + query_args_from_param_labels>
    

Given the scrape URL construction rules above, the following config discovers pod targets in Kubernetes and constructs a per-target scrape URL as https://<pod_name>/metrics/container?name=<container_name>:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    metrics_path: /metrics/container
    relabel_configs:
      - target_label: __scheme__
        replacement: https
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: __address__
      - source_labels: [__meta_kubernetes_pod_container_name]
        target_label: __param_name


How to remove labels from targets #

To remove some labels from targets discovered by the scrape job, use either:

  • action: labeldrop: drops labels with names matching the given regex option
  • action: labelkeep: drops labels with names not matching the given regex option

For example:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - action: labelmap
        regex: "__meta_kubernetes_pod_label_(.+)"
        replacement: "pod_label_$1"
      - action: labeldrop
        regex: "pod_label_team_.*"

The job above will:

  1. discover pods in Kubernetes
  2. extract pod-level labels (e.g. app.kubernetes.io/name and team)
  3. prefix them with pod_label_ and add them as labels to all scraped metrics
  4. drop all labels starting with pod_label_team_
  5. drop all labels starting with __ (this is done by default by VictoriaMetrics)

Note that:

  • Labels that start with __ are removed automatically after relabeling, so you don’t need to drop them with relabeling rules.
  • Do not remove instance and job labels, since this may result in duplicate scrape targets with identical sets of labels.
  • The regex option must match the whole label name from start to end, not just a part of it.

How to remove labels from a subset of targets #

To remove some labels from a subset of discovered targets while keeping the rest of the targets unchanged, use the if series selector with action: labeldrop or action: labelkeep relabeling rule.

As an illustration:

  • The job below discovers Kubernetes pods and removes any labels starting with pod_internal_, but only for targets matching the {__address__=~"pod123.+"} selector:

    scrape_configs:
      - job_name: k8s
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - action: labeldrop
            if: '{__address__=~"pod123.+"}'
            regex: "pod_internal_.*"


How to remove prefixes from target label names #

You can modify target labels, including removing prefixes from their names, using the action: labelmap option.

For example, Kubernetes service discovery automatically adds special __meta_kubernetes_pod_label_<labelname> labels for each pod-level label.

All labels with the prefix __ will be dropped automatically. To extract and keep only the <labelname> part of this special label, you can use action: labelmap combined with regex and replacement options:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - action: labelmap
        regex: "__meta_kubernetes_pod_label_(.+)"
        replacement: "$1"

The regex contains a capture group (.+). This capture group can be referenced inside the replacement option with the $N syntax, such as $1 for the first capture group.

This config will create a new label with the name extracted from the regex capture group (.+) for all metrics scraped from the discovered pods.

Note that:

  • The regex option must match the whole label name from start to end, not just a part of it.

How to extract label parts #

Relabeling allows extracting parts from label values and storing them into arbitrary labels. This is performed with:

  • source_labels: the label(s) whose values are used to compute the new value for target_label,
  • target_label: the label we want to modify or create,
  • replacement: the value that will be computed and assigned to the target_label,
  • regex: the regular expression to be applied to the value of source_labels.

Let’s take this case:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_name]
        regex: "[^/]+/(.+)"
        replacement: "team_$1"
        target_label: owner_team

The job above discovers pod targets in Kubernetes and performs these actions:

  1. Extracts the value of __meta_kubernetes_pod_container_name label (e.g. foo/bar),
  2. Matches it against the regex [^/]+/(.+),
  3. Computes the new value as team_$1, where $1 is the text captured by (.+),
  4. Stores the result in the owner_team label.

Note that:

  • The regex option must match the whole label value from start to end, not just a part of it.
  • If source_labels contains multiple labels, their values are joined with a ; separator (customized by the separator option) before being matched against the regex.

How to add labels to scrape targets #

To add or update labels on scrape targets during discovery, use these options:

  • target_label: specifies the label name to add or update
  • replacement: specifies the value to assign to this label

For example, this config adds an environment="production" label to all discovered pods in Kubernetes:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - target_label: "environment"
        replacement: "production"

If a label from the scrape configuration (target_label) conflicts with a label from the scraped metric (scrape-time label), the original scrape-time label is renamed by adding an exported_ prefix.

To avoid this renaming and instead let the scrape-time labels take priority (overriding target labels), set honor_labels: true in the scrape configuration.

For example, this config adds an environment="production" label to all discovered pods, but if a pod already exports an environment label, that value will override the target label:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    honor_labels: true
    relabel_configs:
      - target_label: "environment"
        replacement: "production"

See also useful tips for target relabeling.


How to copy labels in scrape targets #

Labels can be copied using the following options:

  • source_labels: specifies which labels to copy from
  • target_label: specifies the destination label to receive the value

The following config copies the __meta_kubernetes_pod_name label to the pod label for all discovered pods in Kubernetes:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod

If source_labels contains multiple labels, their values are joined with a ; delimiter by default. Use the separator option to change this delimiter.

For example, this config combines the pod name and container port into the host_port label for all discovered pod targets in Kubernetes:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels:
          [
            __meta_kubernetes_pod_name,
            __meta_kubernetes_pod_container_port_number,
          ]
        separator: ":"
        target_label: host_port


How to modify instance and job #

instance and job labels are automatically added by single-node VictoriaMetrics and vmagent for each discovered target.

  • The job label is set to the job_name value specified in the corresponding scrape_config.
  • The instance label is set to the host:port part of the __address__ label value after target-level relabeling. The __address__ label value depends on the type of service discovery and can be overridden during relabeling.

Modifying the instance and job labels works the same way as for other target labels, using the target_label and replacement options:

scrape_configs:
  - job_name: k8s
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - target_label: job
        replacement: kubernetes_pod_metrics

Note: All the target-level labels which are not prefixed with __ are automatically added to all the metrics scraped from targets.

Scraping Relabeling Cheatsheet #

Metric-level relabeling is applied after metrics are scraped (via metric_relabel_configs during scraping and via -remoteWrite.relabelConfig / -remoteWrite.urlRelabelConfig during remote write) and affects individual metrics.


How to remove labels from scraped metrics #

Removing labels from scraped metrics is a good idea to avoid high cardinality and high churn rate issues.

This can be done with either of the following actions:

  • action: labeldrop: drops labels with names matching the given regex option
  • action: labelkeep: drops labels with names not matching the given regex option

Let’s see this in action:

  • Remove labels with names starting with the kubernetes_ prefix from all scraped metrics:

    metric_relabel_configs:
      - action: labeldrop
        regex: "kubernetes_.*"

The regex option must match the whole label name from start to end, not just a part of it.

Note that:

  • Labels that start with __ are removed automatically after relabeling, so you don’t need to drop them with relabeling rules.

How to remove labels from a subset of metrics #

You can remove certain labels from some metrics without affecting other metrics by using the if parameter with labeldrop action. The if parameter is a series selector - it looks at the metric name and labels of each scraped time series.

For instance, the config below removes the cpu and mode labels, but only from the node_cpu_seconds_total metric where mode="idle":

metric_relabel_configs:
  - action: labeldrop
    if: 'node_cpu_seconds_total{mode="idle"}'
    regex: "cpu|mode"


How to add labels to scraped metrics #

You can add custom labels to scraped metrics using target_label to set the label name and the replacement field to set the label value. For example:

  • Add a region="us-east-1" label to all scraped metrics:

    metric_relabel_configs:
      - target_label: region
        replacement: us-east-1

  • Add a team="platform" label only for metrics from jobs that match web-.* and are not in the staging environment:

    metric_relabel_configs:
      - if: '{job=~"web-.*", environment!="staging"}'
        target_label: team
        replacement: platform


How to change label values in scraped metrics #

To change the label values of scraped metrics, we use the following fields:

  • target_label: the label we want to modify (if it exists) or create,
  • source_labels: the label(s) whose values are used to compute the new value for target_label,
  • replacement: the value that will be computed and assigned to the target_label.

Below are a few illustrations:

  • Add prod_ prefix to all values of the job label across all scraped metrics:

    metric_relabel_configs:
      - source_labels: [job]
        target_label: job
        replacement: prod_$1

  • Add prod_ prefix to job label values only for metrics matching {job=~"api-service-.*",env!="dev"}:

    metric_relabel_configs:
      - if: '{job=~"api-service-.*",env!="dev"}'
        source_labels: [job]
        target_label: job
        replacement: prod_$1


How to rename scraped metrics #

The metric name is actually the value of a special label called __name__ (see Key Concepts). So renaming a metric is performed in the same way as changing a label value. Let’s take some examples:

  • Rename node_cpu_seconds_total to vm_node_cpu_seconds_total across all the scraped metrics:

    metric_relabel_configs:
      - if: "node_cpu_seconds_total"
        replacement: vm_node_cpu_seconds_total
        target_label: __name__

  • Rename all metrics starting with http_ to start with web_ instead (e.g. http_requests_total → web_requests_total):

    metric_relabel_configs:
      - source_labels: [__name__]
        regex: "http_(.*)"
        replacement: web_$1
        target_label: __name__

  • Replace all dashes (-) in metric names with underscores (_) (e.g. nginx-ingress-latency → nginx_ingress_latency):

    metric_relabel_configs:
      - source_labels: [__name__]
        action: replace_all
        regex: "-"
        replacement: "_"
        target_label: __name__


How to drop metrics during scrape #

All examples above work at the label level: adding, dropping, or changing label values of scraped metrics. You can also drop entire metrics. This is especially beneficial for metrics that cause high cardinality or high churn rate issues.

Instead of the labeldrop or labelkeep actions, use the drop or keep actions in the metric_relabel_configs section.

For example, the following config drops all metrics with names starting with container_:

metric_relabel_configs:
  - if: '{__name__=~"container_.*"}'
    action: drop

Note that the relabeling config is specified under the metric_relabel_configs section instead of the relabel_configs section. They serve different purposes:

  • The scrape_configs[].relabel_configs apply before scraping, modifying or filtering targets. Any changes here affect all metrics from that target.
  • The scrape_configs[].metric_relabel_configs apply after scraping, modifying or filtering individual metrics.
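
The two stages can appear side by side in a single scrape job. A sketch (the job name and patterns are illustrative):

```yaml
scrape_configs:
  - job_name: my-app
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:            # before scraping: filter targets
      - if: '{__meta_kubernetes_pod_name=~"my-app-.*"}'
        action: keep
    metric_relabel_configs:     # after scraping: filter individual metrics
      - action: drop_metrics
        regex: "go_gc_duration_seconds.*"
```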

Useful tips for target relabeling #

  • Target relabeling can be debugged by clicking the debug link for a target on the http://vmagent:8429/targets or http://vmagent:8429/service-discovery pages. See Relabel Debug in vmagent.
  • Special labels with the __ prefix are automatically added when discovering targets and removed after relabeling:
    • Meta-labels starting with the __meta_ prefix. The specific sets of labels for each supported service discovery option are listed in Prometheus Service Discovery.
    • Additional labels with the __ prefix other than __meta_ labels, such as __scheme__ or __address__. It is common practice to store temporary labels with names starting with __ during target relabeling.
  • All target-level labels are automatically added to all metrics scraped from targets.
  • The list of discovered scrape targets with all discovered meta-labels is available on the http://vmagent:8429/service-discovery page for vmagent and on the http://victoriametrics:8428/service-discovery page for single-node VictoriaMetrics.
  • The list of active targets with the final set of target-labels after relabeling is available on the http://vmagent:8429/targets page for vmagent and on the http://victoriametrics:8428/targets page for single-node VictoriaMetrics.

Useful tips for metric relabeling #

  • Metric relabeling can be debugged on the http://vmagent:8429/metric-relabel-debug page. See these docs.
  • All labels that start with the __ prefix are automatically removed from metrics after relabeling. It is common practice to store temporary labels with names starting with __ during metrics relabeling.
  • All target-level labels are automatically added to all metrics scraped from targets, making them available during metrics relabeling.
  • If too many labels are removed, different metrics might become identical. This can lead to duplicate time series with conflicting values, which is usually a problem.
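
As an illustration, the sketch below drops the cpu label from node_cpu_seconds_total; all per-CPU series of that metric then collapse into duplicates that differ only in their values:

```yaml
# hypothetical example of a rule that creates duplicate series:
# node_cpu_seconds_total{cpu="0",mode="idle"} and
# node_cpu_seconds_total{cpu="1",mode="idle"} become indistinguishable
metric_relabel_configs:
  - action: labeldrop
    if: "node_cpu_seconds_total"
    regex: "cpu"
```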