toolkit: Cache Stuck

Describe the bug All my actions are running, but they all fail to cache with the same error:

Post job cleanup.
Unable to reserve cache with key Linux-modules-0c1e633988f0463573ae79cec4e4e741153037a6a270a901c8bf040b3d674a4f, another job may be creating this cache.

There must be some sort of race condition, and the cache is stuck, because they all have the same precache log:

Run actions/cache@v2
  with:
    path: **/node_modules
    key: Linux-modules-0c1e633988f0463573ae79cec4e4e741153037a6a270a901c8bf040b3d674a4f
Cache not found for input keys: Linux-modules-0c1e633988f0463573ae79cec4e4e741153037a6a270a901c8bf040b3d674a4f

I imagine that if I changed my yarn lock file, which the cache is based on, this issue would resolve itself.

Expected behavior That the cache always works, and that when it gets stuck like this, it fixes itself automatically.

Additional context

  • My GitHub runner is running inside a Docker container. It is not delegating jobs to a container; the runner itself is installed and running inside one. I don’t think this impacts things, as my cache normally works.
  • I don’t know how to reproduce this. Normally the cache works. There is clearly just some sort of race condition where it can get stuck.

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Reactions: 52
  • Comments: 19 (1 by maintainers)


Most upvoted comments

I think I am hitting this on two workflows, as rust-cache uses this underneath. Is there any way to view/delete the caches if they are in a ‘funky’ state?

Manually overriding cache keys and doing a commit seems like a bit of a whack-a-mole approach. Being able to manually delete the cache from a UI (or an API) feels like a cleaner workaround until the root cause is fixed.
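
GitHub has since added cache-management REST endpoints that allow exactly this. A rough sketch with Octokit follows (OWNER/REPO and the stuck key are placeholders, the token needs write access to Actions on the repo, and the routes should be double-checked against the current REST docs):

// Rough sketch: list the repository's cache entries and delete the stuck key
// via the cache-management REST endpoints. OWNER, REPO, the key and the token
// are placeholders.
import { Octokit } from "@octokit/core";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// See which cache entries currently exist.
const { data } = await octokit.request("GET /repos/{owner}/{repo}/actions/caches", {
  owner: "OWNER",
  repo: "REPO",
});
console.log(data.actions_caches.map((c) => c.key));

// Delete every entry that matches the stuck key.
await octokit.request("DELETE /repos/{owner}/{repo}/actions/caches?key={key}", {
  owner: "OWNER",
  repo: "REPO",
  key: "STUCK-CACHE-KEY",
});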

I think we encountered this problem by manually cancelling a run that had begun caching with a certain key before it completed the caching step. It seemed to never release the key, and subsequent runs that try to use the same key fail to reserve it:

Run actions/cache@v2
Cache not found for input keys: <key>

Post Run actions/cache@v2
Post job cleanup.
Unable to reserve cache with key <key>, another job may be creating this cache.

Our only workaround was to change the key.
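
For what it’s worth, a minimal sketch of what “changing the key” can look like without editing the lock file: salt the key with a bump-able value (the CACHE_VERSION name here is made up), so bumping it abandons the stuck entry.

// Sketch (not from this thread): derive the key from the lock file as usual,
// but include a manually bumpable CACHE_VERSION so a stuck key can be abandoned
// by changing one value instead of touching yarn.lock.
import * as glob from "@actions/glob";

async function buildModulesKey() {
  const hash = await glob.hashFiles("**/yarn.lock");
  const version = process.env.CACHE_VERSION || "v1"; // bump to "v2" to force a fresh key
  return `${process.env.RUNNER_OS}-modules-${version}-${hash}`;
}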

I had a relative path in the list of files to cache. But it seems that relative paths are not supported. Removing it fixed the error.
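
In case it helps, a small sketch (mine, not from the thread) of resolving relative entries against the workspace before handing them to the cache toolkit; the helper name is made up:

// Sketch: make every cache path absolute by resolving it against GITHUB_WORKSPACE,
// since relative entries seemed to trip up the save step.
import * as path from "path";

function toWorkspacePaths(paths) {
  const workspace = process.env.GITHUB_WORKSPACE || process.cwd();
  return paths.map((p) => (path.isAbsolute(p) ? p : path.resolve(workspace, p)));
}

// e.g. toWorkspacePaths(["node_modules", "/tmp/tool-cache"])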

I ran into this issue and think I figured out the most common cause. It seems like there are two causes for this:

  1. Transient upload issues (bad file descriptors, failed chunks, 503s, etc.)
  2. Trying to save the cache for a key that was already saved (and restored)

If you’re running into this problem and don’t have any errors uploading, I’m guessing the root cause is the last one.

Basically I compared what this cache library was doing with how the official actions/cache action actually uses it and here’s the key part: https://github.com/actions/cache/blob/611465405cc89ab98caca2c42504d9289fb5f22e/src/save.ts#L39-L54

The official action does not try to save the cache if there was previously an exact key match on a cache hit. If you naively do a restore cache + save cache (like I tried), you’ll run into this error every time there’s a cache hit (meaning you’re trying to save a cache key that is already cached). Ideally saveCache would handle this atomically, but since it doesn’t, we have to replicate that behaviour.

So unfortunately the solution is to replicate all the logic within https://github.com/actions/cache/blob/611465405cc89ab98caca2c42504d9289fb5f22e/src/save.ts.

Here’s a utility function to wrap a cacheable function (like calling exec.exec('some command')) which works for me:

import * as cache from "@actions/cache";
import * as core from "@actions/core";
import * as exec from "@actions/exec";
import * as glob from "@actions/glob";

// Same check the official action performs before deciding whether to save.
function isExactCacheKeyMatch(key, cacheKey) {
  return !!(
    cacheKey &&
    cacheKey.localeCompare(key, undefined, { sensitivity: "accent" }) === 0
  );
}

async function withCache(cacheable, paths, baseKey, hashPattern) {
  const keyPrefix = `${process.env.RUNNER_OS}-${baseKey}-`;
  const hash = await glob.hashFiles(hashPattern);
  const primaryKey = `${keyPrefix}${hash}`;
  const restoreKeys = [keyPrefix];

  const cacheKey = await cache.restoreCache(paths, primaryKey, restoreKeys);

  if (!cacheKey) {
    core.info(`Cache not found for keys: ${[primaryKey, ...restoreKeys].join(", ")}`);
  } else {
    core.info(`Cache restored from key: ${cacheKey}`);
  }

  await cacheable();

  // Only save when the restore was not an exact hit on the primary key;
  // saving a key that already exists is what triggers "Unable to reserve cache".
  if (isExactCacheKeyMatch(primaryKey, cacheKey)) {
    core.info(`Cache hit occurred on the primary key ${primaryKey}, not saving cache.`);
    return;
  }

  await cache.saveCache(paths, primaryKey);
}

await withCache(async () => {
  await exec.exec('npm install');
}, ['node_modules'], 'npm', '**/package.json');

You might want to customize the arguments and how the keys are built (maybe accept a list of restore keys too).
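
For example, a hypothetical variant that takes explicit restore keys (reusing the cache and isExactCacheKeyMatch pieces from above) could look like:

// Hypothetical variant: the caller supplies the primary key and any restore keys
// directly instead of having the helper derive them.
async function withCacheKeys(cacheable, paths, primaryKey, restoreKeys = []) {
  const cacheKey = await cache.restoreCache(paths, primaryKey, restoreKeys);
  await cacheable();
  if (!isExactCacheKeyMatch(primaryKey, cacheKey)) {
    await cache.saveCache(paths, primaryKey);
  }
}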

Faced the same on macOS matrix.

Error: uploadChunk (start: 0, end: 15553027) failed: Cache service responded with 503

Though everything worked on the very next run.

Post job cleanup.
/usr/bin/tar --posix --use-compress-program zstd -T0 -cf cache.tzst -P -C /home/runner/work/***/*** --files-from manifest.txt
Warning: uploadChunk (start: 33554432, end: 67108863) failed: Cache service responded with 503
/home/runner/work/_actions/actions/cache/v2/dist/save/index.js:4043
                        throw new Error(`Cache upload failed because file read failed with ${error.message}`);
                        ^

Error: Cache upload failed because file read failed with EBADF: bad file descriptor, read
    at ReadStream.<anonymous> (/home/runner/work/_actions/actions/cache/v2/dist/save/index.js:4043:31)
    at ReadStream.emit (events.js:210:5)
    at internal/fs/streams.js:167:12
    at FSReqCallback.wrapper [as oncomplete] (fs.js:470:5)

After rerunning the workflow, the error is gone.
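
If these transient failures keep breaking custom actions, one option (just a sketch, not something from this thread; it reuses the imports from the withCache example above) is to treat a failed save as a warning, so the job still succeeds and the cache simply stays cold until the next run:

// Sketch: swallow transient save failures (503s, EBADF, reserve conflicts)
// instead of letting them fail the job.
try {
  await cache.saveCache(paths, primaryKey);
} catch (error) {
  core.warning(`Cache save failed, continuing without a cache: ${error.message}`);
}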

Same here, or sometimes a Warning: uploadChunk (start: 67108864, end: 100663295) failed: Cache service responded with 503

I’ve run into the same issue. Each time it starts with Cache not found for input keys: windows-master-N002 and ends with Unable to reserve cache with key windows-master-N002, another job may be creating this cache., so no cache ends up actually getting created.