act: ACTIONS_RUNTIME_URL, ACTIONS_RUNTIME_TOKEN and ACTIONS_CACHE_URL environment variables are missing.
GitHub Actions has undocumented environment variables named `ACTIONS_RUNTIME_URL`, `ACTIONS_RUNTIME_TOKEN` and `ACTIONS_CACHE_URL`. The `actions/cache` action relies on these to build the URL it talks to when storing caches. They all get populated over here:
https://github.com/actions/toolkit/blob/1cc56db0ff126f4d65aeb83798852e02a2c180c3/packages/cache/src/internal/cacheHttpClient.ts#L33-L47
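A quick way to see which of these variables a given runner actually exposes is a debug step like the hypothetical one below (not from the thread; the `grep` pattern is illustrative). On a GitHub-hosted runner it should list all three; under plain act it should print none:

```yaml
# Hypothetical debug step: list the undocumented ACTIONS_* runtime variables.
- name: Show ACTIONS_* variables
  run: env | grep '^ACTIONS_' || echo 'no ACTIONS_* variables set'
```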
About this issue
- State: closed
- Created 4 years ago
- Reactions: 73
- Comments: 58 (23 by maintainers)
Commits related to this issue
- act action now failing at uploading atrifacts https://github.com/nektos/act/issues/329 — committed to encointer/encointer-node by brenzi 3 years ago
- Add conditional to local run https://github.com/nektos/act issue open : https://github.com/nektos/act/issues/329 — committed to atk4/login by abbadon1334 3 years ago
- Align release workflow 2.4.0 (#73) * Align with atk4/core workflow * Add Behat test workflow from Atk4/Ui * Remove js compilation from Behat tests * Update composer.json * Add phpstan.neo... — committed to atk4/login by abbadon1334 3 years ago
- Fix ACT issue and cleanup * Refactor npm setup into two mutually exclusive steps to workaround https://github.com/nektos/act/issues/329. * Cleanup CI yaml file a bit, and introduce some comments. — committed to mangrovedao/mangrove-archive by dontrolle 3 years ago
- Add Dockerfile From https://github.com/nektos/act/issues/329#issuecomment-923782708 — committed to JEFuller/artifact-server by davetapley 3 years ago
If you are using nektos/act with `actions/upload-artifact` and `actions/download-artifact`, an artifact server is already implemented in act (though it’s not mentioned in this issue how to run it). Just use the `--artifact-server-path` option. It is used like this: `--artifact-server-path /tmp/artifacts` (don’t forget to `mkdir /tmp/artifacts`).

The issue is still persistent afaik.
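A minimal sketch of the `--artifact-server-path` usage described above (the `command -v` guard is only there so the snippet is safe to run on a machine without act installed):

```shell
# Create the storage directory first; act will not do it for you.
mkdir -p /tmp/artifacts

# Run the workflow with the built-in artifact server enabled.
if command -v act >/dev/null 2>&1; then
  act --artifact-server-path /tmp/artifacts
fi
```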
It works! 🎉
I took @simonhkswan’s sample `Dockerfile` 🙏🏻 and pushed it to `ghcr.io/jefuller/artifact-server:latest`.

Then I added to `docker-compose.yml`:

Then added (per @econchick 🙏🏻) to `.actrc`:

Sample of upload:

and download:
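A hypothetical `docker-compose.yml` service for that image might look like this; the port mapping and the `AUTH_KEY` value are assumptions, not taken from the thread:

```yaml
# Hypothetical compose service; AUTH_KEY and the port mapping are assumptions.
services:
  artifact-server:
    image: ghcr.io/jefuller/artifact-server:latest
    environment:
      - AUTH_KEY=foo
    ports:
      - "8080:8080"
```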
I’ve built a very quick and simple express server that will simulate the download and upload endpoints of an artifact server here https://github.com/anthonykawa/artifact-server. If you want to use it to help understand building this into act, feel free. It only works if you specify the name and path inputs when uploading and downloading.
@joshmgross Thank you very much for the response! 😄 Unfortunately that is what I was afraid of. So my idea was that act would spin up a small HTTP server available to its containers and work as a transparent proxy. I might be able to draft something up whenever I get some more free time.

Thanks a lot @anthonykawa. Will take a look at it soon when I get time to develop artefact support.
👋 @catthehacker is correct, these variables are for internal APIs. If you wanted to manually set these values, you’d also likely need to simulate an internal service to support what these APIs are used for (like caching and artifacts).
@jshwi Those environment variables are only available to GitHub Actions runners, whether it be a runner hosted by GitHub or a self-hosted one.
Probably.
No one knows, if it was that easy it would be fixed already.
Perhaps the folks from GitHub (@joshmgross from https://github.com/actions/cache or @TingluoHuang from https://github.com/actions/runner, top committers) could chime in here and maybe give us some clues on how we could solve this issue. I presume it won’t be as easy as obtaining and hardcoding those values, since they are most likely undocumented for a reason, and the reason is that they are supposed to work only on the GitHub Actions service (but please, prove me wrong). There is an artefacts API, https://docs.github.com/en/rest/reference/actions#artifacts, which could be used to replace the artefact actions, but I’ve not found anything regarding cache.
Since then someone else added a CLI flag to control this, and you always have to override ACTIONS_RUNTIME_URL manually, so just add that flag. So you can just use the latest release. `[::0]` is the IPv6 variant of `0.0.0.0`.

I have built (this year) a whole act-like clone which has a built-in artifact and cache server. My clone defines all these variables and is able to use the official self-hosted runners. The source code is incompatible with act, because I used the same language as the official runner to reduce rewriting code and maintenance time. My first goal wasn’t to run them locally like act, but on a Raspberry Pi with a GitHub-like server (https://github.com/go-gitea/gitea).

Please read the thread.
Update since playing with ⬆️

If you run `upload-artifact` multiple times it will just keep ‘adding’ to the artifact. One hacky workaround is to `docker exec` a `rm -r '/usr/src/app/1'` in `artifact-server`, but a better solution is to set `GITHUB_RUN_ID` (which I found here), e.g.:

@catthehacker would you accept a PR to set that ⬆️ to a random (or incrementing) number each run? I presume when running on GitHub Actions it changes each run.
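One way to sketch that workaround from the shell; the timestamp-based id is my assumption, any unique number would do:

```shell
# Give each local run a fresh run id so uploads don't pile into one artifact.
# Using a timestamp here is an assumption; any unique value works.
run_id=$(date +%s)
echo "GITHUB_RUN_ID=$run_id"

# Guarded so the snippet is safe where act is not installed.
if command -v act >/dev/null 2>&1; then
  act --env GITHUB_RUN_ID="$run_id"
fi
```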
@ChristopherHX indeed, that was an oversight on my part. I must admit I glossed over the error message and did not notice it had changed.
Anyways, I did as you asked, and everything worked. 👏 🎉
If you’re following this thread for a solution to `actions/cache` when developing GHA workflows locally (so that you can get faster feedback when you test-run your changes to GHA workflow yaml files), see https://github.com/nektos/act/issues/285#issuecomment-987550101

@simonhkswan @chevdor I got @anthonykawa’s server to work!
With the Dockerfile, I added `ENV AUTH_KEY=foo` (can be whatever). When running the docker image, I dropped the `-d` flag so I could see any logs. Then in another terminal, I ran act like so, where `ACTIONS_RUNTIME_TOKEN` is the same as the `AUTH_KEY` set in the dockerized server, and the `http://` is needed for both URLs.

@chevdor did you have any more luck with this? I found if I used the server from @anthonykawa in a simple Docker container, and ran with `ACTIONS_CACHE_URL=localhost/` (I’m not sure how much docker likes ‘:’ placed in environment variables, but we can just let it default to port 80), then it would try to connect but get a 400 … and now I’m stuck at this point.
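For reference, an invocation along the lines both comments describe might look like this; every concrete value (host, port, token) is an assumption, and the token must match the `AUTH_KEY` configured in the server container:

```shell
# Hypothetical: point act at a locally running artifact server.
# Host, port and token values are assumptions.
export ACTIONS_RUNTIME_URL=http://localhost:8080/
export ACTIONS_CACHE_URL=http://localhost:8080/
export ACTIONS_RUNTIME_TOKEN=foo
echo "runtime URL: $ACTIONS_RUNTIME_URL"

# Guarded so the snippet is safe where act is not installed.
if command -v act >/dev/null 2>&1; then
  act --env ACTIONS_RUNTIME_URL="$ACTIONS_RUNTIME_URL" \
      --env ACTIONS_CACHE_URL="$ACTIONS_CACHE_URL" \
      --env ACTIONS_RUNTIME_TOKEN="$ACTIONS_RUNTIME_TOKEN"
fi
```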
@viotti I see, docker changed the network stack in Docker Desktop macOS M1. You get `ECONNREFUSED` instead of `EHOSTUNREACH`, a different issue. The problem is that the artifact server only listens on exactly one outbound IPv4 address, but docker is calling from another one. In this case edit the code of act: https://github.com/nektos/act/blob/7754ba7fcf3f80b76bb4d7cf1edbe5935c8a6bdc/pkg/artifacts/server.go#L279 then `go build`, and you have your `act` in the root folder. I will create a PR if it works.

Add `if: ${{ !env.ACT }}` to all steps that are `actions/cache`, `run: docker save ...` and `run: docker load ...`.

Because you are using the buildx action, which creates a new builder for each run, and each run’s cache is exclusive to that builder. If the buildx builder is the same each run, the cache will be re-used.

I believe @simonhkswan and @chevdor are expecting `actions/cache@v2` to work with @anthonykawa’s server (by what I can see in the request URLs in their execution logs), but actually the actions that do work are `actions/upload-artifact@v2` and `actions/download-artifact@v2`.

It works as it should. If you need to change it, you can use the envvar
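A sketch of the `if: ${{ !env.ACT }}` guard suggested above; the step name, cache path and key are illustrative. act sets the `ACT` environment variable, so the condition is false when running locally:

```yaml
# Skip unsupported cache steps when the workflow runs under act.
- name: Cache dependencies
  if: ${{ !env.ACT }}
  uses: actions/cache@v2
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
```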
@catthehacker Thanks for your valuable reply.

Adding `if: ${{ !env.ACT }}` to the `uses: actions/cache@v2` entries skips this unsupported action, which saves time on the timeout. You’re right: a cached build works fine for a normal `docker build` command; I was not aware of this. I’ll check how to configure the buildx builder to reuse the same cache every time, and investigate this further. `docker save` and `docker load` can be used within the same job and work. However, `actions/cache@v2` was used to share the images between jobs. Is an alternative available (without pushing to a registry)? For example by using a volume of the act build container?

There is no plan to support that action at all.
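The save/load pattern mentioned above, within a single job, might be sketched like this; the image name and tarball path are illustrative (sharing the image across jobs would still need a cache or a registry):

```yaml
# Hypothetical steps: persist an image to a tarball and restore it later
# in the SAME job.
- name: Build image
  run: docker build -t myimage:ci .
- name: Save image to tarball
  run: docker save myimage:ci -o /tmp/myimage.tar
- name: Load image from tarball
  run: docker load -i /tmp/myimage.tar
```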
What do you mean? All your images should be stored on your Docker host, there is no need to cache them.
Any news about this problem? Came here from #285