caffeine: LoadingCache no longer respects maximumSize in 2.6.1

This seems pretty serious: since upgrading from 2.6.0 to 2.6.1, we've noticed that the maximum size of a cache is no longer respected (for a synchronous loading cache). Attached is a graph of the stats we report from a cache that has a maximum size of 100k elements.

[Screenshot: graph of cache size over time, 2018-03-01]

As the graph shows, before 2.6.1 was deployed the cache stayed at its maximum size of 100k; after 2.6.1 it climbed into the millions (almost 200M). The size reported is simply cache.estimatedSize().

The code is as vanilla as it gets:

 Caffeine.newBuilder()
         .maximumSize(100000)
         .build(key -> {...});
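A minimal, self-contained sketch of the reported setup (the class name MaxSizeDemo and the trivial loader are illustrative, and this assumes Caffeine 2.x on the classpath). It uses a same-thread executor and an explicit cleanUp() so the size bound can be checked deterministically:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

public class MaxSizeDemo {

    // Builds a cache bounded at 1,000 entries, loads 10,000 keys,
    // and returns the estimated size after maintenance has run.
    public static long boundedSize() {
        LoadingCache<Integer, Integer> cache = Caffeine.newBuilder()
                .maximumSize(1_000)
                .executor(Runnable::run) // run maintenance on the calling thread
                .build(key -> key * 2);  // stand-in for the real loader

        for (int i = 0; i < 10_000; i++) {
            cache.get(i);
        }
        cache.cleanUp(); // drain any pending maintenance work

        return cache.estimatedSize();
    }

    public static void main(String[] args) {
        // With the bound respected, the size stays at or below the maximum.
        System.out.println(boundedSize() <= 1_000);
    }
}
```

On an affected 2.6.1 build the returned size would exceed the bound; on a fixed build it stays at or below 1,000.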

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 28 (15 by maintainers)

Most upvoted comments

Released 2.7

Added the prompt rescheduling of maintenance. I'm still working on 2.7 but haven't had the time to wrap up those tests and changes.

You can call cache.cleanUp() if you need to retrigger it manually. Or, in your example, the same-thread executor shouldn't be a performance problem, so you can just use that.

For testing, it is easier not to deal with asynchronous logic and to use a same-thread executor instead. If you use dependency injection, it is simple to set up.

For production, the cost isn't that bad. But if you use an expensive removal listener, the caller is penalized. To provide a small and consistent response time, it can be beneficial to defer the work to an executor. Since it's available, we take advantage of it.
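A sketch of deferring removal-listener work to a background executor (the class name ListenerDemo is illustrative, the evicted-key queue stands in for expensive work such as archiving, and Caffeine 2.x is assumed on the classpath):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ListenerDemo {

    // Evicts entries on a background executor so cache callers keep a
    // small, consistent response time, and returns how many were evicted.
    public static int evictedCount() throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        ConcurrentLinkedQueue<String> evicted = new ConcurrentLinkedQueue<>();

        Cache<String, String> cache = Caffeine.newBuilder()
                .maximumSize(10)
                .executor(executor) // maintenance and listener run off-caller
                .removalListener((String key, String value, RemovalCause cause) ->
                        evicted.add(key)) // stand-in for expensive work
                .build();

        for (int i = 0; i < 100; i++) {
            cache.put("k" + i, "v" + i);
        }
        cache.cleanUp(); // schedule the pending eviction work

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        return evicted.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // With a bound of 10 and 100 writes, at least 90 entries are evicted.
        System.out.println(evictedCount() >= 90);
    }
}
```

The caller only pays for the put(); the listener's cost lands on the executor thread.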

I think it wouldn't hurt to retrigger the maintenance work more aggressively when it detects that a new run is needed after one completes, rather than deferring to the next cache read or write. That won't change much in production, but it avoids confusing cases like the one you saw.