portainer: Cannot deploy from external environment - http: TLS handshake error - status_code=403
Bug description
When trying to deploy a stack from a separate (non-local) agent environment, I receive a multitude of errors. The UI says: failed to deploy a stack: listing workers for Build: failed to list workers: Unavailable: connection closed before server preface received
Expected behavior: My stack deploys just as it does in the local environment.
Portainer Logs
External:
2022/12/05 23:05:15 http: TLS handshake error from <local-env-ip>:37718: EOF
Local:
2022/12/05 11:05PM ERR github.com/portainer/portainer/api@v0.0.0-20221011053128-b0938875dc5c/http/client/client.go:94 > unexpected status code | status_code=403
Steps to reproduce the issue:
- Go to a non-local environment
- Go to stacks
- Add a new stack
- Click deploy
- Observe error after a moment
Technical details:
- Portainer version: 2.16.2 EE
- Docker version (managed by Portainer): 20.10.12 (API: 1.41)
- Kubernetes version (managed by Portainer): N/A
- Platform (windows/linux): Linux x86_64 (Ubuntu 22.04.1 LTS) on both hosts
- Command used to start Portainer: the documented agent command for the agent environment; https://workbin.dev/?id=1670281882574271 for the local environment
- Browser:
- Use Case (delete as appropriate): Using Portainer at Home.
- Have you reviewed our technical documentation and knowledge base? Yes
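For reference, the agent in this setup appears to have been started with the standard documented command; the version tag and container name below match the `docker ps` output shared later in the thread, and the flags follow Portainer's agent deployment docs:

```shell
# Standard Portainer agent deployment (per Portainer docs); the 2.16.2 tag,
# port 9001, and container name match the reporter's environment.
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:2.16.2
```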
About this issue
- Original URL
- State: open
- Created 2 years ago
- Comments: 26 (2 by maintainers)
Just an update, this still occurs in 2.18.1
Facing the same issue, we shifted half our business to portainer - added in a few servers as external docker nodes only to find I can’t deploy any of my 90+ git repos to any remote server. This became a nightmare after portainer was pitched in as a perfect solution for all our deployments.
Bumping this request and subscribing to updates on this one.
@ooliver1
I wanted to follow up on this request. I figured out what the issue is. It is related to using
build: .
in the docker-compose.yml
on Agent. I am forwarding this to Product for review and am logging an internal request. I will update you as I learn more.
Thanks!
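To illustrate, a minimal compose file of this shape reproduces the failure when deployed through an agent environment (the service name, image, and port below are illustrative, not from the reporter's actual stack):

```yaml
# Hypothetical minimal docker-compose.yml exhibiting the failing pattern:
# the relative "build: ." directive is what breaks agent deployments.
services:
  app:
    build: .          # relative build context -- fails via agent
    ports:
      - "8080:8080"
```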
I am in the same boat as @TsunayoshiSawada. Portainer seemed like a good solution for all projects until 30 minutes ago. Since portainer claims it can deploy stacks to agents, I didn’t expect this to NOT work.
Internal Ref: EE-4758
I can confirm that this is related to the build directive. It also fails if a stack containing a build directive is created directly in Portainer. It fails on all environments connected via an agent.
Update:
This issue is currently being reviewed by Product. I do not yet have a timeline or release.
Thanks!
Yeah, the issue I personally find is not any sort of relative volumes or whatever, but the fact that the error is completely unrelated as it returns 403 without details.
Definitely not a stale issue. I literally ran into this last night and have spent the entire night fighting with Portainer BE. It wouldn’t deploy from a git repo without selecting the relative paths option. Specifically I was trying to deploy stable-diffusion-webui-docker via Portainer instead of shell. Without selecting that option it wouldn’t deploy giving errors about the context or not being able to find the build directory where the Dockerfile was.
After selecting it, it seemed to deploy and lulled me into a false sense of security. The stack even worked to an extent.
And when I went to investigate, I learned that I could no longer issue commands to the agent or the server that was controlling the agent. They were giving 408s, connection closed by peer, timeouts, etc.
I was unable to stop the service containers that were running inside the stack, detach the git repo, stop the stack, or delete it via portainer. I wasn’t able to modify or manipulate images or running containers. It would always time out. This is across reboots, as well as restarting Docker. These steps were done to both the server and the agent machine. I was also unable to remove the offending agent’s Environment either. Same result as any other attempt to make any changes.
Other than this portainer was quite responsive navigating around and doing anything that didn’t require more than clicking through the interface.
I ended up having to remove all of the containers, images, and volumes related to that stack from shell. At which point Portainer showed they were gone but still would not let me detach the git repo, stop the stack (which it said was still running) or delete it. I confirmed that this behavior was still being exhibited on the server as well as the agent. Unable to commit any changes, pull any images, basically anything except for browse through the interface. I went ahead and tried restarting docker as well as the servers again and still same issue.
I was eventually able to regain control by purging the portainer agent. I restarted the docker services on both ends again as well to be thorough.
And I was STILL unable to use the server or remove the Environment. Once Portainer finally updated to show that Environment was unreachable, I was finally able to remove it from the server.
After, I deployed a new agent and both server and agent have seemed fine since. Also, I was able to deploy the stack but I went ahead and cloned the repo and built the images first and then used them instead of building the stack during deployment.
I know it’s not much info, but since last night when I was beating my head, I’ve been subscribed to this issue. When I saw the bot comment this morning, the wound was still a little tender and I knew others were still having issues as well so rather than just bump, I wanted to try to give some info. Or at least a decent story…
This issue has been marked as stale as it has not had recent activity, it will be closed if no further activity occurs in the next 7 days. If you believe that it has been incorrectly labelled as stale, leave a comment and the label will be removed.
The issue is that Portainer does not support deploying a stack with relative build paths. For instance, OP's docker-compose.yml contains the relative build path "build: .".
Per Portainer's docs, building images while deploying from a git repo is not yet supported [1].
There are two workarounds:
- Build the images separately and reference them by name in the stack [2].
- Clone the repo and run docker compose manually.
The second workaround raises the following warning: "This stack was created outside of Portainer. Control over this stack is limited". You can still stop/start/restart the containers and view logs, but you lose access to many Portainer features.
Hopefully we can see improvements made to stack deployments via git in the near future.
1: https://portal.portainer.io/knowledge/can-i-build-an-image-while-deploying-a-stack/application-from-git 2: https://docs.portainer.io/user/docker/images/build
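The second workaround can be sketched as follows (the repo URL and directory name are placeholders, not from this thread):

```shell
# Hypothetical sketch of the "run docker compose manually" workaround:
# clone the repo on the target host so the build context exists locally,
# build there, then bring the stack up outside of Portainer.
git clone https://example.com/your/repo.git mystack   # placeholder URL
cd mystack
docker compose build      # build images where the relative context resolves
docker compose up -d      # Portainer will show this stack with limited control
```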
Those that are running into issues when deploying stacks that contain
build
directives with relative paths, can you confirm whether the same issue occurs when you are deploying with the Relative path volumes option enabled? Note this is a BE-only feature. I've been doing some testing around this and want to confirm what I'm seeing.
I'm working on a GitHub Action that will deploy Portainer stacks using the API as opposed to the Git integration, which clearly doesn't work as expected.
It’s still work in progress but I’ll drop it here once it’s done 😃
Thank you @tamarahenson for the reply! #7240 seems to be unrelated, #7254 has the same error but the underlying logs include connection refused.
As I have mentioned before, this works on the
local
environment perfectly, but I would like this on the separate host.
94e81479b960 portainer/agent:2.16.2 "./agent" 56 minutes ago Up 56 minutes 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp portainer_agent