node-lru-cache: `max` option is not strictly followed, throwing `RangeError: Map maximum size exceeded` when adding lots of items
Even after specifying the `max` option, the cache keeps adding items to its internal map, and (most likely) when the V8 map size limit of 2^24 entries is exceeded, the following error is thrown:
```
RangeError: Map maximum size exceeded
    at Map.set (<anonymous>)
    at LRUCache.set (/myworkdir/node_modules/lru-cache/dist/commonjs/index.js:862:26)
    <snip>
```
Sample script
```js
const { LRUCache } = require('lru-cache');

const elements = 10_000_000;
const iterations = 5;

const cache = new LRUCache({
  max: elements,
});

function getRandomCacheKey() {
  return Math.random().toString(36).substring(2, 8); // simulate low hit-rate caching
}

function getRandomFloatValue() {
  return Math.random() * 100000;
}

let cacheHits = 0;
for (let i = 1; i <= iterations; i++) {
  for (let e = 1; e <= elements; e++) {
    try {
      cache.set(getRandomCacheKey(), getRandomFloatValue());
    } catch (error) {
      console.error(error);
      console.log('Iteration', i, '\tElement', e, '\tCache hits', cacheHits);
      process.exit(1);
    }
    if (cache.get(getRandomCacheKey())) {
      cacheHits++;
    }
  }
}
```
About this issue
- Original URL
- State: closed
- Created 3 months ago
- Reactions: 1
- Comments: 22 (11 by maintainers)
Closing this issue, because there’s no evidence that lru-cache has a bug, and the spurious throw is a bug in V8 and isn’t really worth working around.
Not surprising. Bun has (as of writing this, in spring of 2024) built-in Windows support now, but given how much work was poured into making it go fast on POSIX systems, and how different Windows' performance characteristics are, that's just Node having a head start of well over a decade. I'm guessing Bun will eventually catch up, no doubt.
This bug is a crazy good find, actually. Exposing completely different issues in V8 and JavaScriptCore (or maybe just Bun somehow), with one reproduction case. Not bad!
Confirmed, this is a V8 bug: https://issues.chromium.org/issues/335269877
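For anyone who wants to see the limit in isolation, here is a minimal sketch that reproduces the error with a bare `Map`, no lru-cache involved. It assumes the reported limit of roughly 2^24 entries, and may need a larger heap to get there (e.g. `node --max-old-space-size=4096`):

```js
// repro-map-limit.js — minimal repro of the V8 Map entry limit,
// independent of lru-cache. Assumes the limit is around 2**24 entries,
// as reported in the issue above.
const m = new Map();
try {
  // Keep inserting until V8 refuses to grow the Map.
  for (let i = 0; ; i++) {
    m.set(i, i);
  }
} catch (err) {
  // Expected: RangeError: Map maximum size exceeded
  console.error(String(err), '— entries at failure:', m.size);
}
```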
In the meantime… idk. Can you refactor to have a cache with a `max` lower than `2**23 + 2`? I'm not sure how to even go about working around this, tbh. Would have to think about it. Maybe there's some data structure we could use to keep the `keyMap` below that limit (one idea is sketched below), but I'm not sure how to do that without impacting performance.

I can't find anything in this library that's incorrect, per se. I mean, deleting items from a Map object should prevent it from raising `RangeError: Map maximum size exceeded`, because doing so keeps the size below the Map maximum.

If you're observing a memory leak, then that might be worth investigating. It could be that V8 is leaking memory in some way, leading to this error being raised, but I don't see any evidence of that. I suspect you're seeing the issue in other caching libraries because using a Map somewhere in the implementation of a cache (or even just using a Map as the cache) is a pretty common approach, so if you're caching 10M things, there's a good chance you'll run into this no matter what cache implementation you use.
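To make the "some data structure" idea concrete: one option is to shard the key map across several `Map` instances so that no single `Map` approaches the V8 limit. This is a hypothetical sketch only (`ShardedMap` and `SHARD_COUNT` are illustrative names, not lru-cache internals):

```js
// Hypothetical workaround sketch — NOT how lru-cache is implemented.
// Spreads entries across several Maps so each stays well under V8's limit.
const SHARD_COUNT = 8; // illustrative; pick so max / SHARD_COUNT stays small

class ShardedMap {
  constructor(shardCount = SHARD_COUNT) {
    this.shards = Array.from({ length: shardCount }, () => new Map());
  }

  // Cheap string hash to pick a shard; any stable hash would do.
  _shard(key) {
    const s = String(key);
    let h = 0;
    for (let i = 0; i < s.length; i++) {
      h = (h * 31 + s.charCodeAt(i)) | 0;
    }
    return this.shards[Math.abs(h) % this.shards.length];
  }

  get(key) { return this._shard(key).get(key); }
  set(key, value) { this._shard(key).set(key, value); return this; }
  has(key) { return this._shard(key).has(key); }
  delete(key) { return this._shard(key).delete(key); }

  get size() {
    return this.shards.reduce((n, m) => n + m.size, 0);
  }
}
```

The per-operation hashing is exactly the performance impact the comment above worries about, so this is a sketch of the trade-off, not a drop-in fix.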
At this point, it seems like the best course of action is just to not allow the cache to be bigger than `2**23+2`, but that seems pretty restrictive. (If that restriction is acceptable, it's easy to enforce from user code; see the sketch below.)
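A minimal sketch of that clamp, assuming the `2**23 + 2` threshold quoted in the discussion above (the `SAFE_MAX` constant name is illustrative, not a documented lru-cache limit):

```js
const { LRUCache } = require('lru-cache');

// Illustrative constant: stay strictly below the 2**23 + 2 threshold
// quoted above. Treat this bound as an assumption from the discussion.
const SAFE_MAX = 2 ** 23 + 1;

const requested = 10_000_000;
const cache = new LRUCache({
  max: Math.min(requested, SAFE_MAX),
});
```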