vector: vector_processed_events_total metric missing from HTTP sink

Vector Version

vector:0.11.1-debian

Vector Configuration File

# metrics
[sources.p2-metrics]
type = "internal_metrics"

### I have several file sources and add_fields transforms here which I have omitted from this example (a hypothetical sketch of their shape follows the config below)  ###

## sinks
[sinks.to_dedup]
  type = "http"
#  inputs = ["sampler"]
  inputs = ["back01-oss-fields",
            "cnr01-ipv6-fields",
            "cnr02-ipv6-fields",
            "cnr01-ipv4-fields",
            "cnr02-ipv4-fields",
            "cnr01-autod-fields",
            "cnr02-autod-fields"
           ]
  uri = "http://127.0.0.1:9001"
  encoding.codec = "json"
  healthcheck = false
  buffer.type = "disk"
  buffer.max_size = 104900000   # disk buffer size in bytes (~100 MB)
  request.concurrency = "adaptive"

[sinks.p2-to-prometheus]
  type = "prometheus_exporter"
  inputs = ["p2-metrics"]
  address = "0.0.0.0:9501"
  default_namespace = "p2-ingress1"
  buckets = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]
  quantiles = [0.5, 0.75, 0.9, 0.95, 0.99]
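
For completeness, the omitted file sources and add_fields transforms that feed the to_dedup sink would look roughly like the following. This is only a hypothetical sketch; the actual file paths and added fields are not part of the original report.

[sources.back01-oss]
  type = "file"
  include = ["/var/log/back01/*.log"]   # hypothetical path

[transforms.back01-oss-fields]
  type = "add_fields"
  inputs = ["back01-oss"]
  fields.origin = "back01-oss"          # hypothetical static field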

Debug Output

Debug output: https://gist.github.com/hreidar/dcb9506c8a0a0501110991d9827df32a
Metrics output: https://gist.github.com/hreidar/29889bca10c9aa2516038a191e32ba5f

Expected Behavior

I should have vector_processed_events_total for the HTTP sink.

Actual Behavior

vector_processed_events_total for the HTTP sink is not in my metrics output.

Additional Context

The example here is from a Vector container running on a VM, managed by docker-compose. I also have a similar container running in a pod on Kubernetes, and I see the same behavior there: no vector_processed_events_total for the HTTP sink.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 15 (6 by maintainers)

Most upvoted comments

It is up and running now and I’m seeing vector_processed_events_total for the HTTP sink.

vector_processed_events_total{component_kind="sink",component_name="to_dedup",component_type="http"} 24 1610728157995

Just tried the nightly and I get an error using the config I had working with 0.11.1:

ERROR vector::cli: Configuration error: unknown variant `prometheus_exporter`, expected one of `aws_cloudwatch_logs`, `aws_cloudwatch_metrics`, `aws_kinesis_firehose`, `aws_kinesis_streams`, `aws_s3`, `azure_monitor_logs`, `blackhole`, `clickhouse`, `console`, `datadog_logs`, `datadog_metrics`, `elasticsearch`, `file`, `gcp_cloud_storage`, `gcp_pubsub`, `gcp_stackdriver_logs`, `honeycomb`, `http`, `humio_logs`, `influxdb_logs`, `influxdb_metrics`, `kafka`, `logdna`, `loki`, `new_relic_logs`, `papertrail`, `prometheus`, `pulsar`, `sematext`, `socket`, `splunk_hec`, `statsd`, `vector` for key `sinks.p2-to-prometheus`
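
The error message itself lists `prometheus` rather than `prometheus_exporter` among the accepted sink types for that build, so a plausible workaround on that nightly (assuming the remaining options are unchanged) would be to keep the older sink name:

[sinks.p2-to-prometheus]
  type = "prometheus"          # older sink name, per the accepted variants in the error above
  inputs = ["p2-metrics"]
  address = "0.0.0.0:9501"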