skaffold: Port forwarding does not work when pods are recreated

I’m building a simple Go app and trying to use Skaffold for local development with skaffold dev --port-forward.

However, it only works the first time: when files change and the pods are recreated, Skaffold fails to open the ports again. I even tried v0.33.0 and the latest version 2f2a6f4; neither works.

I checked issues #1594 and #1815 and tried v0.29.0, which works as I expected.

I noticed that with v0.33.0, every time I change files, Skaffold starts a new pod and then exits (deletes?) the old one, whereas with v0.29.0 only the new pod starts and there is no exit log. So my guess is that since v0.29.0 the old pod somehow blocks port forwarding.

Expected behavior

Port forwarding is re-established correctly, just as v0.29.0 did.

Actual behavior

Port forwarding only succeeds the first time.

Information

  • Skaffold version: v0.33.0
  • Operating system: macOS 10.14.5
  • Contents of skaffold.yaml:
apiVersion: skaffold/v1beta10
kind: Config
build:
  artifacts:
    - image: api
      context: ./api/
deploy:
  kubectl:
    manifests:
      - ./manifests/*
profiles:
  - name: minikube
    activation:
      - kubeContext: minikube
        command: dev
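
For reference, the `./manifests/*` glob above matches a service plus a deployment along these lines. This is a hypothetical sketch, not my exact manifests; the `api` names and port 8080 are placeholders (Skaffold rewrites the `image:` field with the built tag):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: api
          ports:
            - containerPort: 8080
```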

Steps to reproduce the behavior

  1. Use minikube and v0.33.0; check out a minimal app with only a service and a deployment.
  2. Run skaffold dev --port-forward; port forwarding works the first time.
  3. Change the source code to trigger pod recreation.
  4. In the logs, Skaffold starts a new pod with a different name from the old one; the old pod is deleted after the new one starts listening on the port.
  5. Stop Skaffold and rerun the same skaffold dev --port-forward command; it works again.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 6
  • Comments: 32 (22 by maintainers)

Most upvoted comments

Hey @cmoad @demisx I found some bugs in port forwarding and just opened #2477, once it’s merged can you let me know if it fixes your issue?

@demisx thanks for the info, and glad to hear you’re seeing the issue fixed with the latest commits! this has obviously been a pain for us recently so any progress is good progress 😃 since this particular issue seems to be resolved I’m gonna close this issue, but if you are still seeing the other problems you reported here, please open them up in separate issues so we can track them 👍

@japsen I think this issue should be fixed with #2359 – the healthcheck will wait until deployments have stabilized, and then port forwarding will start, so the pods should be ready by the time port forwarding starts.

cc @tejal29

I’ve been testing with the latest bleeding-edge version 4ba3d06 today and all of the issues reported earlier still exist. Port forwarding actually seems to have gotten worse: I can’t start skaffold dev with all ports forwarded properly. Something is missing each time, and I have to forward manually.

@priyawadhwa I too have this issue.

Environment

  • OS: Windows
  • Environment: Minikube (v1.2.0 with VirtualBox)
  • K8s versions tried: 1.12.8 and 1.15.0 (with corresponding kubectl versions)
  • Skaffold versions: 0.34.0 and 0.34.1 (installed using Chocolatey)

Observed Behaviour

  • skaffold dev --port-forward successfully opens ports for the initial build.
  • When I make a change, the hot reload is successful.
  • A connection to the new pod is never established:
    • despite the logs often reporting success
    • occasionally the logs report a new port # assigned (and still no actual connectivity)

Steps to Reproduce

I have recreated this using several of the sample applications; most recently I modified the nodejs sample. I made two changes:

  1. I swapped out nodemon src/index.js for node src/index.js
  2. I used this skaffold.yaml
apiVersion: skaffold/v1beta13
kind: Config
build:
  artifacts:
  - image: gcr.io/k8s-skaffold/node-example
    context: backend
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
  3. I ran skaffold dev --port-forward -v info, which successfully opened port 3000 locally
  4. I tested http://localhost:3000 in my browser several times and it worked as expected
  5. I modified the contents of src/index.js, which prompted a reload
  6. The reload was successful, but the port was reassigned to 3001
  7. I attempted to browse http://localhost:3001 and got nothing
  8. I attempted to browse http://localhost:3000 again and got nothing
  9. Looking at the console output, I noticed the following errors, which happened exactly when I tried steps 7 & 8:

time="2019-07-29T18:50:20+10:00" level=info msg="retrying kubectl port-forward due to error: E0729 18:50:18.953052 13220 portforward.go:400] an error occurred forwarding 3001 -> 3000: error forwarding port 3000 to pod 7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e, uid : container not running (7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e)\n"
Port forwarded service/node from remote port 3000 to local port 3001
time="2019-07-29T18:50:46+10:00" level=info msg="retrying kubectl port-forward due to error: E0729 18:50:44.778788 11392 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e, uid : Error: No such container: 7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e\n"
Port forwarded service/node from remote port 3000 to local port 3000

  10. Trying the requests again did not bear any fruit

You can find my full console output below; let me know if I can provide anything else that might help:

PS C:\dev\src\github\skaffold\integration\examples\nodejs> skaffold dev --port-forward -v info
time="2019-07-29T18:46:35+10:00" level=info msg="starting gRPC server on port 50051"
time="2019-07-29T18:46:35+10:00" level=info msg="starting gRPC HTTP server on port 50052"
time="2019-07-29T18:46:35+10:00" level=info msg="Skaffold &{Version:v0.34.1 ConfigVersion:skaffold/v1beta13 GitVersion: GitCommit:a1efe8cc46e7584ad71c2f140cbfb94c1b4d82ff GitTreeState:clean BuildDate:2019-07-25T22:35:35Z GoVersion:go1.12 Compiler:gc Platform:windows/amd64}"
time="2019-07-29T18:46:35+10:00" level=info msg="no config entry found for kube-context minikube"
time="2019-07-29T18:46:35+10:00" level=info msg="Using kubectl context: minikube"
time="2019-07-29T18:46:35+10:00" level=info msg="no config entry found for kube-context minikube"
time="2019-07-29T18:46:35+10:00" level=info msg="no config entry found for kube-context minikube"
time="2019-07-29T18:46:37+10:00" level=info msg="no config entry found for kube-context minikube"
Generating tags…

  • gcr.io/k8s-skaffold/node-example -> gcr.io/k8s-skaffold/node-example:v0.34.1-9-gc1ff25a6-dirty
Tags generated in 258.6111ms
Starting build…
Found [minikube] context, using local docker daemon.
Building [gcr.io/k8s-skaffold/node-example]…
Sending build context to Docker daemon 101.4kB
Step 1/7 : FROM node:10.15.3-alpine
10.15.3-alpine: Pulling from library/node
e7c96db7181b: Pulling fs layer
df9eac31dfef: Pulling fs layer
0a20433d95a4: Pulling fs layer
0a20433d95a4: Verifying Checksum
0a20433d95a4: Download complete
e7c96db7181b: Download complete
e7c96db7181b: Pull complete
df9eac31dfef: Download complete
df9eac31dfef: Pull complete
0a20433d95a4: Pull complete
Digest: sha256:aa28f3b6b4087b3f289bebaca8d3fb82b93137ae739aa67df3a04892d521958e
Status: Downloaded newer image for node:10.15.3-alpine
 ---> 56bc3a1ed035
Step 2/7 : WORKDIR /app
 ---> Running in a622f22a0d29
Removing intermediate container a622f22a0d29
 ---> 9da07585ad13
Step 3/7 : EXPOSE 3000
 ---> Running in bf1bedd6afab
Removing intermediate container bf1bedd6afab
 ---> fb8606f40237
Step 4/7 : CMD ["npm", "run", "dev"]
 ---> Running in 0466be1bbd7e
Removing intermediate container 0466be1bbd7e
 ---> af409cbc06b9
Step 5/7 : COPY package* ./
 ---> 8a927883b9dc
Step 6/7 : RUN npm install
 ---> Running in 17de82aeef89

> nodemon@1.18.7 postinstall /app/node_modules/nodemon
> node bin/postinstall || exit 0

Love nodemon? You can now support the project via the open collective:

https://opencollective.com/nodemon/donate

npm WARN backend@1.0.0 No description
npm WARN backend@1.0.0 No repository field.
npm WARN backend@1.0.0 No license field.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.4 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.4: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

added 265 packages from 161 contributors and audited 2359 packages in 14.22s
found 34 high severity vulnerabilities
  run npm audit fix to fix them, or npm audit for details
Removing intermediate container 17de82aeef89
 ---> 9027af0f5d5e
Step 7/7 : COPY . .
 ---> 48a60d092e62
Successfully built 48a60d092e62
Successfully tagged gcr.io/k8s-skaffold/node-example:v0.34.1-9-gc1ff25a6-dirty
Build complete in 59.2691226s
Starting test…
Test complete in 3.7613ms
Starting deploy…
kubectl client version: 1.15
service/node created
deployment.apps/node created
Deploy complete in 4.3388013s
Watching for changes…
time="2019-07-29T18:47:48+10:00" level=info msg="Stream logs from pod: node-5d46645698-dnfqs container: node"
[node-5d46645698-dnfqs node]
[node-5d46645698-dnfqs node] > backend@1.0.0 dev /app
[node-5d46645698-dnfqs node] > node src/index.js
[node-5d46645698-dnfqs node]
[node-5d46645698-dnfqs node] Example app listening on port 3000!
Port forwarded service/node from remote port 3000 to local port 3000
time="2019-07-29T18:49:29+10:00" level=info msg="files modified: [backend\src\index.js]"
Generating tags…

  • gcr.io/k8s-skaffold/node-example -> gcr.io/k8s-skaffold/node-example:v0.34.1-9-gc1ff25a6-dirty
Tags generated in 413.9461ms
Starting build…
Found [minikube] context, using local docker daemon.
Building [gcr.io/k8s-skaffold/node-example]…
Sending build context to Docker daemon 101.4kB
Step 1/7 : FROM node:10.15.3-alpine
 ---> 56bc3a1ed035
Step 2/7 : WORKDIR /app
 ---> Using cache
 ---> 9da07585ad13
Step 3/7 : EXPOSE 3000
 ---> Using cache
 ---> fb8606f40237
Step 4/7 : CMD ["npm", "run", "dev"]
 ---> Using cache
 ---> af409cbc06b9
Step 5/7 : COPY package* ./
 ---> Using cache
 ---> 8a927883b9dc
Step 6/7 : RUN npm install
 ---> Using cache
 ---> 9027af0f5d5e
Step 7/7 : COPY . .
 ---> 5bdac18ecf60
Successfully built 5bdac18ecf60
Successfully tagged gcr.io/k8s-skaffold/node-example:v0.34.1-9-gc1ff25a6-dirty
Build complete in 594.7379ms
Starting test…
Test complete in 2.9599ms
Starting deploy…
kubectl client version: 1.15
deployment.apps/node configured
Deploy complete in 2.9984198s
Watching for changes…
Port forwarded service/node from remote port 3000 to local port 3001
time="2019-07-29T18:49:36+10:00" level=info msg="Stream logs from pod: node-68c49d6464-b2pbj container: node"
[node-68c49d6464-b2pbj node]
[node-68c49d6464-b2pbj node] > backend@1.0.0 dev /app
[node-68c49d6464-b2pbj node] > node src/index.js
[node-68c49d6464-b2pbj node]
[node-68c49d6464-b2pbj node] Example app listening on port 3000!
[node-5d46645698-dnfqs node] <Container was Terminated>
time="2019-07-29T18:50:20+10:00" level=info msg="retrying kubectl port-forward due to error: E0729 18:50:18.953052 13220 portforward.go:400] an error occurred forwarding 3001 -> 3000: error forwarding port 3000 to pod 7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e, uid : container not running (7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e)\n"
Port forwarded service/node from remote port 3000 to local port 3001
time="2019-07-29T18:50:46+10:00" level=info msg="retrying kubectl port-forward due to error: E0729 18:50:44.778788 11392 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e, uid : Error: No such container: 7a27efad03e8f0807a1c40dde824e7809a65f6e089068df5719eb3bb9eac4e7e\n"
Port forwarded service/node from remote port 3000 to local port 3000
Cleaning up…
PS C:\dev\src\github\skaffold\integration\examples\nodejs>
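
While waiting on a fix, a blunt stopgap is to supervise the forward yourself and restart it whenever the tunnel drops. A minimal sketch in Go (the app's language), assuming `kubectl` is on PATH; the service name and ports are taken from the logs above, and the `attempts` parameter is a hypothetical knob that exists only so the loop can terminate:

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// runForward runs the given command up to `attempts` times, restarting it
// whenever it exits -- which is what happens when the target pod is deleted
// and kubectl's tunnel drops. It returns the error from the final run.
func runForward(attempts int, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(name, args...)
		if err = cmd.Run(); err != nil {
			log.Printf("forward exited (%v); restarting", err)
			time.Sleep(time.Second)
		}
	}
	return err
}

func main() {
	// Keep forwarding service/node until interrupted (a large attempt count
	// stands in for "forever").
	if err := runForward(1<<30, "kubectl", "port-forward", "service/node", "3000:3000"); err != nil {
		log.Fatal(err)
	}
}
```

This sidesteps Skaffold's own forwarder entirely, so it is only a diagnostic aid, not a recommendation.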

@priyawadhwa On first run I can see the services getting successfully forwarded:

Port Forwarding service/abbott-v2 3001 -> 3001
Port Forwarding service/zeppo-v2 5000 -> 5000

However, if the underlying pods change during a deployment update, the service port forward becomes unresponsive.

# first run of skaffold dev
% curl http://localhost:3001/health
ok

# after deployment update from a file change
% curl http://localhost:3001/health
curl: (52) Empty reply from server

It looks like the service is still forwarding to the pods correctly inside the minikube VM:

# minikube ssh, then:
$ curl http://10.96.144.148:3001/health
ok

One more test to confirm the tunnel is dropped:

# first run
% sudo netstat -ln | grep 3001
Password:
tcp6       0      0  ::1.52273                                     ::1.3001                                      TIME_WAIT

# after update
% sudo netstat -ln | grep 3001
(no output)