kaniko: AWS error reported when building images inside Kubernetes without direct Internet access

Actual behavior When running Kaniko inside a Kubernetes cluster that doesn't have direct internet access (private registry only), it seems to take an unusually long time, 45 to 60 seconds (a timeout?), before giving the following error:

E0525 12:58:00.462141 15 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors

After the error, the image build proceeds as expected.
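If the stall is the aws-sdk-go default credential chain probing an unreachable EC2 metadata endpoint (which the error message suggests), one possible mitigation is the SDK's documented AWS_EC2_METADATA_DISABLED environment variable. A minimal sketch, with illustrative pod and container names:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      env:
        # Tell aws-sdk-go not to probe the EC2 metadata endpoint, so the
        # credential chain fails fast instead of waiting for a timeout.
        - name: AWS_EC2_METADATA_DISABLED
          value: "true"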

Expected behavior Not spending time on something unused…

To Reproduce Start an image build in a Kubernetes cluster with no direct internet access.

Additional Information

  • Dockerfile: Any
  • Build Context: Any
  • Kaniko Image: Last week's debug image: sha256:23ad72c3a3133b6745dacc0178100113dc263f422582f4cddf92a3712f5bbb5d
  • [ ] Please check if this is a new feature you are proposing
  • [ ] Please check if the build works in docker but not in kaniko
  • [ ] Please check if this error is seen when you use the --cache flag
  • [ ] Please check if your dockerfile is a multistage dockerfile

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 18
  • Comments: 15

Most upvoted comments

We’ve somewhat recently changed how credentials can be provided for GCR/ECR/ACR, so as not to depend on K8s’s unsupported cred helpers. If you’re interested, this might fix your issues (but then again, it might not!) and in any case, getting recent feedback on whether this is still an issue would be helpful.

You can try out a recent commit-tagged Kaniko image to see if this is fixed:

gcr.io/kaniko-project/executor:343f78408c891ef7a85bab1ecbf2dd69367a58bc
gcr.io/kaniko-project/executor:343f78408c891ef7a85bab1ecbf2dd69367a58bc-debug
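For reference, a minimal pod sketch pinning one of those tags; the --context and --destination values below are placeholders, not taken from this issue:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-test
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:343f78408c891ef7a85bab1ecbf2dd69367a58bc
      args:
        - "--context=git://github.com/example/repo.git"            # placeholder build context
        - "--destination=registry.example.com/example/app:latest"  # placeholder private registry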

Let me know if that fixes your issue 🤞

I’m having this issue and my Pod has external internet access.

That’s correct, the non-debug image doesn’t include a shell.
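For anyone who does need a shell, one option (a sketch; the pod name is illustrative, and /busybox/sh is the BusyBox shell the -debug image ships) is to override the entrypoint:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-shell
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:343f78408c891ef7a85bab1ecbf2dd69367a58bc-debug
      command: ["/busybox/sh"]   # -debug images include a BusyBox shell
      stdin: true
      tty: true

Then kubectl attach -it kaniko-shell drops you into the shell.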

Closing this since it sounds like the issue has been fixed at some point.

Running kaniko on a k3s cluster on a GCE VM, I was eventually able to work around this particular issue by adding a host alias to the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
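  # Map the GCE metadata hostname to its link-local IP so lookups of
  # metadata.google.internal resolve even without the GCE DNS defaults.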
  hostAliases:
    - ip: "169.254.169.254"
      hostnames:
        - metadata.google.internal
  containers:
    - name: kaniko
...