influxdb: Derivative of nulls is not null

System info:

influxdb 0.13.0

Steps to reproduce:

  1. Use a measurement with some gaps in it (e.g. several hours of no data)
  2. Run derivative or non_negative_derivative over the mean/median/mode (etc.) of those values, grouping by a short time interval and filling with fill(null)

Expected behavior:

The derivative function should emit nulls for intervals where all of its inputs are null.
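For reference, here is a sketch of what the fill(null) query shown under "Actual behavior" should return, reconstructed from the rows it actually does return: the empty 1h intervals would appear as rows with no value (the long run of null rows between the two populated spans is abridged with ... here).

time            non_negative_derivative
----            -----------------------
1471622400000000000 510468.42365956306
1471626000000000000 2.2326707869478226e+07
1471629600000000000
...
1471690800000000000
1471694400000000000 4.2967470870584756e+07
1471708800000000000 9.949401068170846e+06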

Actual behavior:

The derivative function behaves the same as if fill(none) had been used, and suppresses records when its inputs are all null.

> SELECT non_negative_derivative(mean("value")) FROM "disk_read" WHERE "host" = 'kirisame' AND "instance" = 'sda' AND time >= now() - 24h GROUP BY "instance", time(1h) fill(none)
name: disk_read
tags: instance=sda
time            non_negative_derivative
----            -----------------------
1471622400000000000 510468.42365956306
1471626000000000000 2.2326707869478226e+07
1471694400000000000 4.2967470870584756e+07
1471708800000000000 9.949401068170846e+06

> SELECT non_negative_derivative(mean("value")) FROM "disk_read" WHERE "host" = 'kirisame' AND "instance" = 'sda' AND time >= now() - 24h GROUP BY "instance", time(1h) fill(null)
name: disk_read
tags: instance=sda
time            non_negative_derivative
----            -----------------------
1471622400000000000 510468.42365956306
1471626000000000000 2.2326707869478226e+07
1471694400000000000 4.2967470870584756e+07
1471708800000000000 9.949401068170846e+06

Additional info:

This looks like it was previously reported in #780, but the fix missed these functions.

In practice, the impact of this is incorrect Grafana graphs (network usage, in our case; screenshot omitted).


Most upvoted comments

@chooko I ran into the same issue, and the way we solved it was by multiplying by count()/count():

SELECT non_negative_derivative(min("bytes"), 1s) * count("bytes") / count("bytes") * 8 FROM "net" WHERE "id" = 'id' AND time > now() - 24h GROUP BY time(60s) fill(null)
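If I'm reading the trick right (my interpretation, not verified against the InfluxDB source): with fill(null), count("bytes") emits a row for every interval, null where the interval is empty, so count("bytes") / count("bytes") evaluates to 1 wherever data exists and null wherever it doesn't. Multiplying by it forces the derivative onto the full interval grid, with null propagating into the empty buckets. Applied to the disk_read query from the original report, the same pattern would be (an untested sketch):

SELECT non_negative_derivative(mean("value")) * count("value") / count("value") FROM "disk_read" WHERE "host" = 'kirisame' AND "instance" = 'sda' AND time >= now() - 24h GROUP BY "instance", time(1h) fill(null)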

I’ve got the same issue. If both points fed to the derivative are null, I’d like it to output null (or maybe even one point? I’m less sure about that), so that I see a gap in my graph for that period. Is that incorrect behavior for a derivative function? Do we need something else?

e.g., I’d rather see the gap like:

select mean("bytes_recv") FROM "net" where time > now() - 24h AND "host" = 'xxxx' GROUP BY time(2h)
name: net
---------
time            mean
2016-08-31T16:00:00Z    2.967001493183443e+11
2016-08-31T18:00:00Z    2.971375809658354e+11
2016-08-31T20:00:00Z    2.9758520667994586e+11
2016-08-31T22:00:00Z    2.979088292255e+11
2016-09-01T00:00:00Z    2.980935009536105e+11
2016-09-01T02:00:00Z
2016-09-01T04:00:00Z
2016-09-01T06:00:00Z
2016-09-01T08:00:00Z    2.9952670532007214e+11
2016-09-01T10:00:00Z    2.998899557329299e+11
2016-09-01T12:00:00Z    3.0023459804905347e+11
2016-09-01T14:00:00Z    3.006757222235958e+11
2016-09-01T16:00:00Z    3.009559445886804e+11

Instead of skipping over the null output:

> select non_negative_derivative(mean("bytes_recv")) FROM "net" where time > now() - 24h AND "host" = 'xxxx' GROUP BY time(2h)
name: net
---------
time            non_negative_derivative
2016-08-31T16:00:00Z    4.2512097606988525e+08
2016-08-31T18:00:00Z    4.64121579711792e+08
2016-08-31T20:00:00Z    4.4762571411047363e+08
2016-08-31T22:00:00Z    3.236225455541382e+08
2016-09-01T00:00:00Z    1.8467172811047363e+08
2016-09-01T08:00:00Z    3.583010916154175e+08
2016-09-01T10:00:00Z    3.6325041285772705e+08
2016-09-01T12:00:00Z    3.446423161235962e+08
2016-09-01T14:00:00Z    4.411241745423584e+08
2016-09-01T16:00:00Z    2.798738290499878e+08

I’m seeing this same issue. Any “fix” yet?

As I don’t seem to be able to re-open someone else’s issues: I don’t think this should be closed. You could fairly easily provide an option to the function and offer both behaviors.

@amit-meshbey, nice hack! It works for me.

I think the InfluxDB team should be ashamed that this workaround is necessary. I hope a cleaner fix appears sooner rather than later.