configure-aws-credentials: Using `aws-actions/configure-aws-credentials@v1-node16` is broken (bundle is missing dep `aws-sdk`)
Describe the bug
node:internal/modules/cjs/loader:936
  throw err;
  ^

Error: Cannot find module 'aws-sdk'
Require stack:
- /home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js
    at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
    at Function.Module._load (node:internal/modules/cjs/loader:778:27)
    at Module.require (node:internal/modules/cjs/loader:1005:19)
    at require (node:internal/modules/cjs/helpers:102:18)
    at Object.159 (/home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js:2694:33)
    at __nccwpck_require__ (/home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js:2809:43)
    at /home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js:2830:13
    at /home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js:3210:3
    at Object.<anonymous> (/home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js:3213:12)
    at Module._compile (node:internal/modules/cjs/loader:1105:14) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [
    '/home/runner/work/_actions/aws-actions/configure-aws-credentials/v1-node16/dist/index.js'
  ]
}
Expected Behavior
Continue working as-is
Current Behavior
The action fails immediately with `MODULE_NOT_FOUND` (see the stack trace above).
Reproduction Steps
Use the branch at a19d8471766ca743361afb0250984ad3b9f923c9.
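A minimal workflow along these lines reproduces it; everything other than the action reference (trigger, runner, region) is an illustrative placeholder, since the crash happens at module load before any input is read:

```yaml
name: repro
on: push
jobs:
  reproduce:
    runs-on: ubuntu-latest
    steps:
      # Reference the action at the broken commit on the v1-node16 branch
      - uses: aws-actions/configure-aws-credentials@a19d8471766ca743361afb0250984ad3b9f923c9
        with:
          aws-region: us-east-1  # placeholder; the action crashes before reading inputs
```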
Possible Solution
Revert the branch back to 567d4149d67f15f52b09796bea6573fc32952783.
It appears you haven’t used ncc to rebuild, or you used it but didn’t install the packages first?
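For reference, a rebuild would normally install dependencies before bundling, roughly like this (the `@vercel/ncc` invocation and `index.js` entry point are assumptions about the repo layout, not taken from its actual package scripts):

```yaml
# Illustrative build step; commit the regenerated dist/ afterwards
- name: Rebuild bundled dist
  run: |
    npm ci                                  # install dependencies first
    npx @vercel/ncc build index.js -o dist  # inline them (including aws-sdk) into dist/index.js
```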
Additional Information/Context
No response
About this issue
- State: closed
- Created a year ago
- Reactions: 75
- Comments: 23 (4 by maintainers)
OK, thanks for reporting, everyone. I’m pushing a revert commit right now.
Commit 186395a pushed to v1-node16, reverting to 5f64152
Can we please revert the broken merge, instead of us having to change our pipelines in 20 repos?
For any readers, you can work around this in the meantime by changing the action reference in your workflow, as shown below.
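That is, replace the floating tag with a pin to the last known-good commit (the one named in the Possible Solution above); the exact snippet may differ, but the gist is:

```yaml
# Before: floating tag, currently pointing at the broken build
- uses: aws-actions/configure-aws-credentials@v1-node16

# After: pin to the last working commit until the revert lands
- uses: aws-actions/configure-aws-credentials@567d4149d67f15f52b09796bea6573fc32952783
```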
Oh fun, so this is why all of our builds on a Friday evening just broke.
Oh good, it’s not something I did for once. 😅
Thank you @dhermes, but I think many of us would like to avoid the “workaround” since pinned workflows won’t get updates from the floating tag… Perhaps this should be reverted on this branch/tag for the time being, since I’m sure a lot of community Actions jobs are depending on it.
It shouldn’t have been merged in the first place if the build failed its own checks. 🔥
@farvour You’re right here, and I’m not at all satisfied with the situation we have right now with Node 16. The configuration we have to make sure Dependabot is getting security updates done in both places is working, but it’s too prone to mistakes when we are moving features from `master` to `v1-node16`. This branch is getting removed - or at least having its references moved - when we release v2.
@mkellerman-relativity The typical deployment mindset at AWS in general is revert first, ask questions later, so yes! It’s absolutely not my intent to require folks to pin their versions to a hash, because at the very least you then have to stay on top of keeping your workflow files up to date. What this means for us, though, is that we need to implement release tags for the v1-node16 series and honestly look at sunsetting the Node 12 version sooner than I expected. I’ll get some input from the team on tagging the Node 16 series next week. Pinging @peterwoodworth for discussion on this later.
We’re also still working on a version 2 of configure-aws-credentials that will put all the version confusion to rest - and let us be much less prone to deployment failures like this in the future. In the meantime, thanks to all for the quick report and for your patience while I undid our mistake.
I was not necessarily looking for an excuse to sign off on a Friday evening, but I’ll take this as a sign 👋
@mkellerman-relativity Unfortunately, I think the psychological effect of the workaround and it being Friday has led the issue to be downplayed. While I understand the workaround’s effectiveness, it never should have needed to be invoked in the first place, and as you stated, it’s not an “easy workaround” if it has to be applied in a LOT of repositories.
Closing this as fixed for now. Please open up another issue if you still see problems.
@kellertk ping! Your assistance is required. Hope you haven’t left for the cottage yet! 🍹
Our corporate account workflows are also affected.
I’m also affected. I’m commenting so I can follow this thread. Thank you for the workaround.
Very funny to me! Hahahahahaha, I was going crazy looking for the reason our builds were failing, and this is it.
@dhermes Thanks for the workaround for the moment
Thank you very much @dhermes for this!
I feel the pain. Thanks for the workaround.
FYI @kellertk . I will close duplicate issue #671 which I reported at almost the same time as this one 😅