vector: input net bandwidth > output net bandwidth causes vector OOM
Vector Version
vector 0.15.0 (x86_64-unknown-linux-gnu b2992c9 2021-06-30)
(Nightly)
Vector Configuration File
data_dir = "/var/vector/data"
[sources.fluent_source]
type = "fluent"
address = "0.0.0.0:24224"
[sinks.elasticsearch-fluent]
# General
type = "elasticsearch"
inputs = ["fluent_source"]
compression = "none" # optional
endpoint = "http://10.1.0.9:9200/"
index = "{{ viaq_index_name }}-%Y-%m-%d"
healthcheck.enabled = true
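A minimal mitigation sketch (not part of the original config; it uses the sink buffer options documented for Vector around this version, with illustrative values): cap the elasticsearch sink's buffer so backpressure propagates to the fluent source instead of events accumulating unbounded in RAM, or spill to disk.

[sinks.elasticsearch-fluent.buffer]
type = "memory"          # bounded in-memory buffer
max_events = 500         # illustrative cap, not tuned
when_full = "block"      # push backpressure upstream instead of buffering more

# Alternative: a disk buffer (uses data_dir), trading RAM for disk I/O
# [sinks.elasticsearch-fluent.buffer]
# type = "disk"
# max_size = 104857600   # bytes (~100 MiB), illustrative
# when_full = "block"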
Debug Output
https://gist.github.com/AntonSmolkov/10a10cb1749d16418f71419f31cc6209
Expected Behavior
Fluent source does not cause OOM
Actual Behavior
Fluent source causes OOM. Vector eats all of the machine's memory (8 GiB in my case):
[Mon Jul 12 13:10:23 2021] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/vector.service,task=vector,pid=1082674,uid=1001
[Mon Jul 12 13:10:23 2021] Out of memory: Killed process 1082674 (vector) total-vm:7494848kB, anon-rss:6939664kB, file-rss:0kB, shmem-rss:0kB, UID:1001 pgtables:14032kB oom_score_adj:0
[Mon Jul 12 13:10:23 2021] oom_reaper: reaped process 1082674 (vector), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Example Data
Additional Context
~200 logs per second from OpenShift.
Vector runs on an Azure Standard_D2_v3 instance with a 1 GiB/sec outbound traffic limit. Inbound traffic is unlimited and is effectively 40 GiB/sec (link speed).
Fluentd under the same circumstances works fine and consumes only 1.5 GiB of RAM.
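To see where the memory goes while the source is under load, one diagnostic sketch (assumed, not from the original report) is to expose Vector's own runtime metrics via the internal_metrics source alongside the existing pipeline:

[sources.vector_internal]
type = "internal_metrics"      # Vector's own runtime/buffer metrics

[sinks.vector_metrics]
type = "prometheus"            # named "prometheus_exporter" in later releases
inputs = ["vector_internal"]
address = "0.0.0.0:9598"       # assumed scrape address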
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 1
- Comments: 28 (15 by maintainers)
@jszwedko You’re right.
Reproduced with log_to_metric + a prometheus sink, without elasticsearch at all. Absolutely the same effect.
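For reference, a minimal sketch of that reproduction (assumed shape, since the comment does not include the config): fluent source -> log_to_metric -> prometheus sink, with no elasticsearch sink at all. Transform, metric, and sink names here are hypothetical.

[sources.fluent_source]
type = "fluent"
address = "0.0.0.0:24224"

[transforms.events_to_metric]
type = "log_to_metric"
inputs = ["fluent_source"]

  [[transforms.events_to_metric.metrics]]
  type = "counter"
  field = "message"                 # count events that carry a message field
  name = "fluent_events_total"      # hypothetical metric name

[sinks.prom_out]
type = "prometheus"                 # "prometheus_exporter" in newer releases
inputs = ["events_to_metric"]
address = "0.0.0.0:9598"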