act: ACTIONS_RUNTIME_URL, ACTIONS_RUNTIME_TOKEN and ACTIONS_CACHE_URL environment variables are missing.

GitHub Actions exposes undocumented environment variables named ACTIONS_RUNTIME_URL, ACTIONS_RUNTIME_TOKEN and ACTIONS_CACHE_URL. The actions/cache action relies on these to construct the URL it uses to store caches: https://github.com/actions/toolkit/blob/1cc56db0ff126f4d65aeb83798852e02a2c180c3/packages/cache/src/internal/cacheHttpClient.ts#L33-L47

They all get populated over here

This is the cause of #285; #169 is related.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 73
  • Comments: 58 (23 by maintainers)

Most upvoted comments

If you are using nektos/act with actions/upload-artifact and actions/download-artifact, an artifact server is already implemented in act (though this issue doesn’t mention how to run it).

Just use --artifact-server-path option.

It is used like this: --artifact-server-path /tmp/artifacts (don’t forget to mkdir /tmp/artifacts).
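
Putting the two together, a minimal invocation looks like this (the job name is illustrative):

mkdir -p /tmp/artifacts
act -j build --artifact-server-path /tmp/artifacts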

The issue still persists, AFAIK.

It works! 🎉


I took @simonhkswan’s sample Dockerfile 🙏🏻 and pushed to ghcr.io/jefuller/artifact-server:latest.

Then I added to docker-compose.yml:

  artifact-server:
    image: ghcr.io/jefuller/artifact-server:latest
    environment:
      AUTH_KEY: foo
    ports:
      - "8080:8080

Then I added (per @econchick 🙏🏻 ) to .actrc:

--env ACTIONS_CACHE_URL=http://localhost:8080/
--env ACTIONS_RUNTIME_URL=http://localhost:8080/
--env ACTIONS_RUNTIME_TOKEN=foo
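
Since act reads .actrc automatically, a plain run then picks these flags up (job name illustrative):

act -j build-dev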

Sample of upload:

[Main/Build dev]   💬  ::debug::URL is http://localhost:8080/_apis/pipelines/workflows/1/artifacts?api-version=6.0-preview&artifactName=dev
[Main/Build dev]   💬  ::debug::Artifact dev has been successfully uploaded, total size in bytes: 23217
| Finished uploading artifact dev. Reported size is 7573 bytes. There were 0 items that failed to upload
| Artifact dev has been successfully uploaded!
[Main/Build dev]   ✅  Success - actions/upload-artifact@v2

and download:

[Main/Deploy to dev] ⭐  Run actions/download-artifact@v2
[...]
[Main/Deploy to dev]   💬  ::debug::Artifact Url: http://localhost:8080/_apis/pipelines/workflows/1/artifacts?api-version=6.0-preview
| Directory structure has been setup for the artifact
[Main/Deploy to dev]   💬  ::debug::Download file concurrency is set to 2
| Total number of files that will be downloaded: 11
[...]
[Main/Deploy to dev]   ⚙  ::set-output:: download-path=/workspace/.ansible/files/web
| Artifact download has finished successfully
[Main/Deploy to dev]   ✅  Success - actions/download-artifact@v2

I’ve built a very quick and simple Express server that simulates the upload and download endpoints of an artifact server: https://github.com/anthonykawa/artifact-server. If you want to use it to help build this into act, feel free. It only works if you specify the name and path inputs when uploading and downloading.

@joshmgross Thank you very much for the response! 😄 Unfortunately, that is what I was afraid of.

So my idea was that act would spin up a small HTTP server available to its containers and act as a transparent proxy. I might be able to draft something up whenever I get some more free time.

Thanks a lot @anthonykawa. I will take a look at it soon, when I get time to work on artefact support.

👋 @catthehacker is correct, these variables are for internal APIs. If you wanted to manually set these values, you’d also likely need to simulate an internal service to support what these APIs are used for (like caching and artifacts).

Are these values dynamically set?

@jshwi Those environment variables are only available to GitHub Actions runners, whether hosted by GitHub or self-hosted.

Can they be manually set and read by act?

Probably.

If so what to?

No one knows; if it were that easy, it would have been fixed already.

Perhaps the folks from GitHub (@joshmgross from https://github.com/actions/cache or @TingluoHuang from https://github.com/actions/runner, top committers) could chime in here and give us some clues on how to solve this issue. I presume it won’t be as easy as obtaining and hardcoding those values, since they are most likely undocumented for a reason, and the reason is that they are supposed to work only on the GitHub Actions service (but please, prove me wrong). There is an artefacts API, https://docs.github.com/en/rest/reference/actions#artifacts, which could be used to replace the artefact actions, but I’ve not found anything regarding cache.

Did anything come of the plan to submit a PR for this?

Since then, someone added a CLI flag to control this, and you always have to override ACTIONS_RUNTIME_URL manually, so just add that flag.

So you can just use the latest release; [::0] is the IPv6 equivalent of 0.0.0.0.

ACTIONS_RUNTIME_URL=http://host.docker.internal:4322/ act --artifact-server-addr "[::0]" --artifact-server-port 4322 --artifact-server-path out

I have built (this year) a whole act-like clone which has a built-in artifact and cache server. My clone defines all these variables and is able to use the official self-hosted runners. Its source code is incompatible with act, because I used the same language as the official runner to reduce code rewriting and maintenance time. My first goal wasn’t to run workflows locally like act, but on a Raspberry Pi with a GitHub-like server (https://github.com/go-gitea/gitea).

Do I have to run some kind of server or something? Is there a solution provided with act?

Please read the thread.

Update since playing with ⬆️

If you run upload-artifact multiple times, it will just keep ‘adding’ to the artifact.

One hacky workaround is to docker exec rm -r '/usr/src/app/1' in the artifact-server container, but a better solution is to set GITHUB_RUN_ID (which I found here), e.g.:

 act -j build-dev --env GITHUB_RUN_ID=$(date '+%s')

@catthehacker would you accept a PR to set that ⬆️ to a random (or incrementing) number on each run? I presume it changes on every run on GitHub Actions.

@ChristopherHX indeed, that was an oversight on my part. I must admit I glossed over the error message and did not notice it had changed.

Anyways, I did as you asked, and everything worked. 👏 🎉

[🏗 Continuous Integration/run-integration-tests   ]   💬  ::debug::Artifact Url: http://host.docker.internal:34567/_apis/pipelines/workflows/1/artifacts?api-version=6.0-preview
[🏗 Continuous Integration/run-integration-tests   ]   💬  ::debug::URL is http://host.docker.internal:34567/_apis/pipelines/workflows/1/artifacts?api-version=6.0-preview&artifactName=code-coverage-data
[🏗 Continuous Integration/run-integration-tests   ]   💬  ::debug::Artifact code-coverage-data has been successfully uploaded, total size in bytes: 71935
| Artifact has been finalized. All files have been successfully uploaded!
| 
| The raw size of all the files that were specified for upload is 71935 bytes
| The size of all the files that were uploaded is 2457 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage
| 
| Note: The size of downloaded zips can differ significantly from the reported size. For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads 
| 
| Artifact code-coverage-data has been successfully uploaded!

If you’re following this thread looking for an actions/cache solution so you can develop GHA workflows locally and get faster feedback when test-running changes to your workflow YAML files, see https://github.com/nektos/act/issues/285#issuecomment-987550101

@simonhkswan @chevdor I got @anthonykawa’s server to work!

With the Dockerfile, I added ENV AUTH_KEY=foo (can be whatever). When running the docker image, I dropped the -d flag so I could see any logs.
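
A sketch of that build-and-run step (the image tag is illustrative; host port 80 matches the http://localhost/ URLs below, and 8080 is what the server’s Dockerfile elsewhere in this thread exposes):

docker build -t artifact-server .
docker run -p 80:8080 artifact-server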

Then in another terminal, I ran act like so:

$ act -j my_job \
  --env ACTIONS_CACHE_URL=http://localhost/ \
  --env ACTIONS_RUNTIME_URL=http://localhost/ \
  --env ACTIONS_RUNTIME_TOKEN=foo

where ACTIONS_RUNTIME_TOKEN is the same as the AUTH_KEY set in the dockerized server, and the http:// is needed for both URLs.

I am testing the option from @anthonykawa (PRs incoming) and it seems that act does not properly honor the config:

::debug::Resource Url: localhost:8080_apis/artifactcache/cache?keys=Linux-ubuntu-latest-cargo-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855&version=00e54f95df51f6341273eea43863fcdc3856075880e81a8467ad4870f1a07f31
[Quick check/rust_fmt-1]   💬  ::debug::getCacheEntry - Attempt 1 of 2 failed with error: connect ECONNREFUSED 127.0.0.1:80

I did set an env var defined as ACTIONS_CACHE_URL=localhost:8080. We can see in the logs that it is picked up (::debug::Resource Url: localhost:8080_…), but then act suddenly complains about :80 not being reachable…
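
Based on later comments in this thread, both the scheme and a trailing slash appear to be required; without them the client falls back to port 80 and mangles the path (hence localhost:8080_apis in the log above). A hedged retry:

--env ACTIONS_CACHE_URL=http://localhost:8080/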

@chevdor did you have any more luck with this? I found that if I ran the server from @anthonykawa in a simple Docker container

FROM node:16
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "index.js" ]

And ran

docker run -p 80:8080 --network bridge -d artifact-server

with ACTIONS_CACHE_URL=localhost/ (I’m not sure how much Docker likes ‘:’ in environment variables, but we can just let it default to port 80).

Then it would try to connect, but get a 400:

::debug::Resource Url: localhost/_apis/...
::debug::getCacheEntry - Attempt 1 of 2 failed with error: Failed request: (400)

… and now I’m stuck at this point.

@viotti I see, Docker changed the network stack in Docker Desktop on macOS M1.

You get ECONNREFUSED instead of EHOSTUNREACH, a different issue.

The problem is that the artifact server only listens on exactly one outbound IPv4 address, but Docker is calling from another one. In that case, edit the code of act: https://github.com/nektos/act/blob/7754ba7fcf3f80b76bb4d7cf1edbe5935c8a6bdc/pkg/artifacts/server.go#L279

ip := "0.0.0.0"

Run go build, and you will have your patched act binary in the root folder. I will create a PR if it works.
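
In other words (a sketch; the artifact path is illustrative):

go build    # run in the root of the nektos/act checkout
./act --artifact-server-path /tmp/artifacts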

Add if: ${{ !env.ACT }} to all steps that are actions/cache, run: docker save ... and run: docker load ...

Q1) How can this be optimized so that act skips the build when the image is unchanged and already available, given that actions/cache@v2 cannot be used?

Q2) How can the built images be used in a second test job under act?

Because you are using the buildx action, which creates a new builder for each run, and each run’s cache is exclusive to that builder. If the buildx builder is the same on each run, the cache will be re-used.
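
One hedged way to get a stable builder (the name is illustrative) is to create it once and reuse it on every run:

docker buildx create --name act-builder --use    # one-time setup; later builds reuse this builder's cache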

I believe @simonhkswan and @chevdor are expecting actions/cache@v2 to work with @anthonykawa’s server (judging by the request URLs in their execution logs), but the actions that actually work are actions/upload-artifact@v2 and actions/download-artifact@v2.

It works as it should. If you need to change it, you can use the environment variable.

@catthehacker Thanks for your valuable reply.

Adding if: ${{ !env.ACT }} to the uses: actions/cache@v2 entries skips this unsupported action, which saves the time otherwise lost to the timeout.

You’re right: caching works fine for a normal docker build command; I was not aware of this. I’ll check how to configure the buildx builder to reuse the same cache every time and investigate this further.

docker save and docker load can be used within the same job, and that works. However, actions/cache@v2 was being used to share the images between jobs. Is an alternative available (without pushing to a registry), for example a volume on the act build container?

Did someone get this up and running in combination with actions/cache@v2?

There is no plan to support that action at all.

I’m running out of ideas for sharing Docker images between jobs with act. Any suggestions?

What do you mean? All your images should be stored on your Docker host, there is no need to cache them.

Any news about this problem? Came here from #285