node_exporter: BTRFS stats missing

Host operating system: output of uname -a

5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux

node_exporter version: output of node_exporter --version

node_exporter, version 1.5.0 (branch: HEAD, revision: 1b48970ffcf5630534fb00bb0687d73c66d1c959)
  build user:       root@6e7732a7b81b
  build date:       20221129-18:59:09
  go version:       go1.19.3
  platform:         linux/amd64

node_exporter command line flags

--web.systemd-socket --collector.systemd --collector.textfile.directory=/usr/local/var/lib/prometheus-node-exporter/textfile

node_exporter log output

ts=2023-03-10T15:25:20.138Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
ts=2023-03-10T15:25:20.138Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
ts=2023-03-10T15:25:20.138Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
ts=2023-03-10T15:25:20.138Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=.+
ts=2023-03-10T15:25:20.138Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
ts=2023-03-10T15:25:20.139Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
ts=2023-03-10T15:25:20.139Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=arp
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=bcache
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=bonding
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=btrfs
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=conntrack
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=cpu
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=cpufreq
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=diskstats
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=dmi
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=edac
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=entropy
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=fibrechannel
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=filefd
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=filesystem
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=hwmon
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=infiniband
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=ipvs
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=loadavg
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=mdadm
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=meminfo
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=netclass
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=netdev
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=netstat
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=nfs
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=nfsd
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=nvme
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=os
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=powersupplyclass
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=pressure
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=rapl
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=schedstat
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=selinux
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=sockstat
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=softnet
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=stat
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=systemd
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=tapestats
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=textfile
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=thermal_zone
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=time
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=timex
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=udp_queues
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=uname
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=vmstat
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=xfs
ts=2023-03-10T15:25:20.139Z caller=node_exporter.go:117 level=info collector=zfs
ts=2023-03-10T15:25:20.139Z caller=tls_config.go:207 level=info msg="Listening on systemd activated listeners instead of port listeners."
ts=2023-03-10T15:25:20.139Z caller=tls_config.go:232 level=info msg="Listening on" address=/run/prometheus-node-exporter/prometheus-node-exporter.sock
ts=2023-03-10T15:25:20.139Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=/run/prometheus-node-exporter/prometheus-node-exporter.sock

Are you running node_exporter in Docker?

No

What did you do that produced an error?

No error but BTRFS stats are missing.

What did you expect to see?

The missing stats from #2193, such as:

node_btrfs_device_size_bytes
node_btrfs_device_unused_bytes
node_btrfs_device_errors_total{type="corruption"}
node_btrfs_device_errors_total{type="flush"}
node_btrfs_device_errors_total{type="generation"}
node_btrfs_device_errors_total{type="read"}
node_btrfs_device_errors_total{type="write"}

What did you see instead?

The following BTRFS metrics appear:

# HELP node_btrfs_allocation_ratio Data allocation ratio for a layout/data type
# TYPE node_btrfs_allocation_ratio gauge
node_btrfs_allocation_ratio{block_group_type="data",mode="raid1",uuid="REDACTED"} 2
node_btrfs_allocation_ratio{block_group_type="metadata",mode="raid1",uuid="REDACTED"} 2
node_btrfs_allocation_ratio{block_group_type="system",mode="raid1",uuid="REDACTED"} 2
# HELP node_btrfs_device_size_bytes Size of a device that is part of the filesystem.
# TYPE node_btrfs_device_size_bytes gauge
node_btrfs_device_size_bytes{device="nvme0n1p2",uuid="REDACTED"} 1.26472945664e+11
node_btrfs_device_size_bytes{device="nvme1n1p2",uuid="REDACTED"} 1.26472945664e+11
# HELP node_btrfs_global_rsv_size_bytes Size of global reserve.
# TYPE node_btrfs_global_rsv_size_bytes gauge
node_btrfs_global_rsv_size_bytes{uuid="REDACTED"} 7.6660736e+07
# HELP node_btrfs_info Filesystem information
# TYPE node_btrfs_info gauge
node_btrfs_info{label="",uuid="REDACTED"} 1
# HELP node_btrfs_reserved_bytes Amount of space reserved for a data type
# TYPE node_btrfs_reserved_bytes gauge
node_btrfs_reserved_bytes{block_group_type="data",uuid="REDACTED"} 0
node_btrfs_reserved_bytes{block_group_type="metadata",uuid="REDACTED"} 147456
node_btrfs_reserved_bytes{block_group_type="system",uuid="REDACTED"} 0
# HELP node_btrfs_size_bytes Amount of space allocated for a layout/data type
# TYPE node_btrfs_size_bytes gauge
node_btrfs_size_bytes{block_group_type="data",mode="raid1",uuid="REDACTED"} 1.24290859008e+11
node_btrfs_size_bytes{block_group_type="metadata",mode="raid1",uuid="REDACTED"} 2.147483648e+09
node_btrfs_size_bytes{block_group_type="system",mode="raid1",uuid="REDACTED"} 3.3554432e+07
# HELP node_btrfs_used_bytes Amount of used space by a layout/data type
# TYPE node_btrfs_used_bytes gauge
node_btrfs_used_bytes{block_group_type="data",mode="raid1",uuid="REDACTED"} 3.9303360512e+10
node_btrfs_used_bytes{block_group_type="metadata",mode="raid1",uuid="REDACTED"} 1.72376064e+08
node_btrfs_used_bytes{block_group_type="system",mode="raid1",uuid="REDACTED"} 49152
...
# HELP node_scrape_collector_duration_seconds node_exporter: Duration of a collector scrape.
# TYPE node_scrape_collector_duration_seconds gauge
node_scrape_collector_duration_seconds{collector="btrfs"} 0.000405861
...
# HELP node_scrape_collector_success node_exporter: Whether a collector succeeded.
# TYPE node_scrape_collector_success gauge
node_scrape_collector_success{collector="btrfs"} 1

I also see the BTRFS volume in the node_filesystem metrics:

# HELP node_filesystem_avail_bytes Filesystem space available to non-root users in bytes.
# TYPE node_filesystem_avail_bytes gauge
node_filesystem_avail_bytes{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 8.4987498496e+10
# HELP node_filesystem_device_error Whether an error occurred while getting statistics for the given device.
# TYPE node_filesystem_device_error gauge
node_filesystem_device_error{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 0
# HELP node_filesystem_files Filesystem total file nodes.
# TYPE node_filesystem_files gauge
node_filesystem_files{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 0
# HELP node_filesystem_files_free Filesystem total free file nodes.
# TYPE node_filesystem_files_free gauge
node_filesystem_files_free{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 0
# HELP node_filesystem_free_bytes Filesystem free space in bytes.
# TYPE node_filesystem_free_bytes gauge
node_filesystem_free_bytes{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 8.69204992e+10
# HELP node_filesystem_readonly Filesystem read-only status.
# TYPE node_filesystem_readonly gauge
node_filesystem_readonly{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 0
# HELP node_filesystem_size_bytes Filesystem size in bytes.
# TYPE node_filesystem_size_bytes gauge
node_filesystem_size_bytes{device="/dev/nvme0n1p2",fstype="btrfs",mountpoint="/"} 1.26472945664e+11

Output of some BTRFS stats (not through node_exporter):

# btrfs device stats /
[/dev/nvme0n1p2].write_io_errs    0
[/dev/nvme0n1p2].read_io_errs     0
[/dev/nvme0n1p2].flush_io_errs    0
[/dev/nvme0n1p2].corruption_errs  0
[/dev/nvme0n1p2].generation_errs  0
[/dev/nvme1n1p2].write_io_errs    0
[/dev/nvme1n1p2].read_io_errs     0
[/dev/nvme1n1p2].flush_io_errs    0
[/dev/nvme1n1p2].corruption_errs  1
[/dev/nvme1n1p2].generation_errs  0

# btrfs device stats /dev/nvme0n1p2
[/dev/nvme0n1p2].write_io_errs    0
[/dev/nvme0n1p2].read_io_errs     0
[/dev/nvme0n1p2].flush_io_errs    0
[/dev/nvme0n1p2].corruption_errs  0
[/dev/nvme0n1p2].generation_errs  0

# btrfs device stats /dev/nvme1n1p2 
[/dev/nvme1n1p2].write_io_errs    0
[/dev/nvme1n1p2].read_io_errs     0
[/dev/nvme1n1p2].flush_io_errs    0
[/dev/nvme1n1p2].corruption_errs  1
[/dev/nvme1n1p2].generation_errs  0

Note that /dev/nvme0n1p2 and /dev/nvme1n1p2 form a BTRFS RAID1 array.

NOTE: Apologies for the initially confusing issue; I accidentally hit Enter too early!

About this issue

  • State: closed
  • Created a year ago
  • Comments: 15 (15 by maintainers)

Most upvoted comments

Oh nice! Thanks, I’ll give that a try over the next few days.

On 11 Mar 2023 at 21:56, Perry Naseck wrote:

Further info:

O_NOATIME (since Linux 2.6.8)
      Do not update the file last access time (st_atime in the
      inode) when the file is [read(2)](https://man7.org/linux/man-pages/man2/read.2.html).

      This flag can be employed only if one of the following
      conditions is true:

      *  The effective UID of the process matches the owner UID
         of the file.

      *  The calling process has the CAP_FOWNER capability in
         its user namespace and the owner UID of the file has a
         mapping in the namespace.

https://man7.org/linux/man-pages/man2/openat.2.html

So falling back to opening without O_NOATIME seems to be the best option.
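The fallback described above can be sketched as follows. This is a minimal illustration of the pattern, not node_exporter's actual Go implementation; it assumes a Linux Python build where `os.O_NOATIME` is defined:

```python
import os

def open_ro_noatime_fallback(path):
    """Open path read-only, preferring O_NOATIME but falling back.

    O_NOATIME is only permitted when the caller owns the file or has
    CAP_FOWNER, so an unprivileged exporter reading root-owned files
    gets EPERM and must retry the open without the flag.
    """
    noatime = getattr(os, "O_NOATIME", 0)  # Linux-only flag; 0 elsewhere
    if noatime:
        try:
            return os.open(path, os.O_RDONLY | noatime)
        except PermissionError:
            pass  # not the owner and no CAP_FOWNER: retry plainly
    return os.open(path, os.O_RDONLY)
```

The cost of the fallback is only that the file's atime gets updated, which is harmless for the sysfs/device files involved here.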

(https://github.com/prometheus/node_exporter/issues/2632#issuecomment-1465031188)