prometheus: Unable to set the storage retention via config file
Proposal
I’d expect to be able to set the storage retention via the config file. I’m running Prometheus via Docker, so having to rebuild and re-run the image just to get the right CLI arguments into the command is a pain. In contrast, I can easily update the config file, copy it into the container, and SIGHUP it.
Environment: Docker, latest Prometheus image
- System information:
pi@raspberrypi:~/code/monitoring/prometheus $ uname -srm
Linux 4.19.57-v7l+ armv7l
- Prometheus version:
pi@raspberrypi:~/code/monitoring/prometheus $ docker exec prometheus-rpi /bin/prometheus --version
prometheus, version 2.11.1 (branch: HEAD, revision: e5b22494857deca4b806f74f6e3a6ee30c251763)
build user: root@d94406f2bb6f
build date: 20190710-14:43:39
go version: go1.12.7
- Prometheus configuration file:
pi@raspberrypi:~/code/monitoring/prometheus $ cat prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

storage.tsdb.retention.time: 150d # i expected this to work

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'rpione'
    static_configs:
      - targets: ['192.168.0.11:9100']
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 6
- Comments: 25 (8 by maintainers)
I think you should make it possible to set all configuration options in the config file, even those that are not hot-reloadable. That’s simply what all other software does.
Alternatively, if you are using Compose, you can specify the command line for the container; here is an example:
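A minimal sketch, assuming the stock flag values from the upstream Dockerfile (the same ones quoted in the docker run command further down) plus an arbitrary 90d retention:

```yaml
version: "3"
services:
  prometheus:
    image: prom/prometheus:v2.11.1
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    # Overriding `command` replaces the image's CMD, so every default flag
    # has to be repeated alongside the retention flag you actually want.
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus"
      - "--web.console.libraries=/usr/share/prometheus/console_libraries"
      - "--web.console.templates=/usr/share/prometheus/consoles"
      - "--storage.tsdb.retention.time=90d"

volumes:
  prometheus-data:
```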
I think we need to put first things first here: for me, hot-reload convenience is secondary, while the ability to set and track all the settings in one place is primary.
As a user of the Docker container, to add command line parameters to the invocation I also have to duplicate the settings that were already provided to the container in the `CMD` entry of the Dockerfile: https://github.com/prometheus/prometheus/blob/master/Dockerfile#L26
That’s why I wish this setting - and every setting - could be specified in the config file.
Think about this situation: when I change the Docker image version, I must check the Dockerfile to make sure I have added all of its default arguments.
Dear folks, there are plenty of community channels documented here: https://prometheus.io/community/. A closed GitHub issue is really not the right place for a discussion of something that might seem easy but actually is not.
Throughout the Prometheus project (which is more than just this repo), we try to follow the general idea of providing one and only one way of configuring one thing. Also, we strictly follow the rule that configuration that updates upon a SIGHUP is in the config file, and configuration that requires a restart to change is in command line flags.
Nobody claims that this is the only or the best way to go, and obviously, there are other projects and other people that hold more or less strongly differing opinions. All of this can be discussed. However, one does not simply change two basic principles of the whole project in a GitHub issue. Those overarching things are discussed via the community channels, see above.
Hi there o/
Two rules would prevent us from doing that:
- configuration that can be reloaded on SIGHUP lives in the config file, while configuration that requires a restart lives in command line flags;
- there should be one and only one way to configure a given thing.
If you still have questions please refer to the prometheus-users mailing list.
As Julius already wrote, you should be able to do:
docker run quay.io/prometheus/prometheus --storage.tsdb.retention.time=12d --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.console.libraries=/usr/share/prometheus/console_libraries --web.console.templates=/usr/share/prometheus/consoles
Just make sure to pass all the flags specified in the upstream Dockerfile.
Changing the retention on the fly would require stopping most of the components (rule engine, query engine, scrapers …), closing/re-opening the local TSDB, and finally starting the components again. It would complicate the code a lot IMO.
@beorn7 - I get it. I really do; I’ve been part of the Drupal, Ansible, and Kubernetes communities, and all three (like Prometheus and countless other popular OSS community-based projects) have the same issue with GitHub PRs and Issues becoming a huge unmaintainable tangle.
And all have different solutions (attrition, aggressive closing/pruning, etc.).
But I would like to point out that the community page you linked to does state this:
What is this if not a feature request?
And I typically like to continue the conversation on closed issues, since that centralizes all the discussion in one place (otherwise you have to start cross-linking all over the place). But if an issue goes off the rails, it can be closed (or users no longer interested can unsubscribe).
Anyways, sorry for the side rant, feel free to hide this comment (and maybe the two above) as ‘off-topic’ 😉
We recently added a runtime-reloadable config for the storage (exemplars), and I am thinking about having more runtime-reloadable options for the storage. Time/size retention feels like the natural next thing to support in the config file, but having the same config in two places is not something we try to do, so that’s something to think about here.
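For context, a rough sketch of what that existing reloadable storage block looks like in the config file; the retention keys are purely hypothetical and are shown only to illustrate where such an option could conceivably live (retention is still configured via the --storage.tsdb.retention.* flags today):

```yaml
storage:
  exemplars:
    # Reloadable at runtime today (picked up on SIGHUP / config reload):
    max_exemplars: 100000
  # Hypothetical only -- NOT valid Prometheus configuration; sketch of where a
  # reloadable retention setting could sit if it were added:
  # tsdb:
  #   retention_time: 150d
  #   retention_size: 50GB
```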
Consider making these consumable via environment variables in addition to command args. Command args are more buried in k8s land, where env variables are first-class. Then I wouldn’t have to keep track of which args were already supplied in the image’s entrypoint; I could just set my env var and move on.
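To illustrate the point, here is roughly what this looks like in a Kubernetes pod spec today: every default flag from the image has to be repeated in args just to add the retention flag, whereas the env block sketches the kind of knob being asked for (the variable name is hypothetical; Prometheus does not read such variables):

```yaml
containers:
  - name: prometheus
    image: prom/prometheus:v2.11.1
    # Today: the image's default flags must be repeated here in full,
    # only to append the one retention flag that actually matters.
    args:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus"
      - "--web.console.libraries=/usr/share/prometheus/console_libraries"
      - "--web.console.templates=/usr/share/prometheus/consoles"
      - "--storage.tsdb.retention.time=150d"
    # Wished for (hypothetical variable name, not supported by Prometheus):
    # env:
    #   - name: PROMETHEUS_STORAGE_TSDB_RETENTION_TIME
    #     value: "150d"
```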
With more and more NASes supporting Docker containers and the like, this becomes more pressing, because on some of them one cannot edit the command line of a container at all, or they lose the custom arguments when the NAS restarts.