kind: Kind local registry: Error response from daemon: endpoint with name kind-registry already exists in network kind

Hello,

I ran the script from the page https://kind.sigs.k8s.io/docs/user/local-registry/; here is the console output:

usr@mcbook  ~  sh kind-local-registry.sh
Creating cluster "kind-registry" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind-registry"
You can now use your cluster with:

kubectl cluster-info --context kind-kind-registry

Have a nice day! 👋
Error response from daemon: endpoint with name kind-registry already exists in network kind
configmap/local-registry-hosting created

And it gives me the error: Error response from daemon: endpoint with name kind-registry already exists in network kind. What should I do to handle this properly? I’ve searched Google for this error but had no luck.

I’ve walked through https://github.com/kubernetes-sigs/kind/issues/1213, but there is no mention of my issue there.

If I’ve missed something, or any other details are needed from my side, please let me know.

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 28 (28 by maintainers)

Most upvoted comments

If you ask me, I doubt that adding a simple echo would increase the complexity or purity of the script.

I wasn’t saying that adding an echo would increase the complexity notably. I was saying that because the complexity is low, this note could just be inserted as a comment: if something goes wrong, there is not much to inspect when looking at the script, but the current comment is terse. In other words, the script is short and simple enough that having the note as a comment would not bury it.

I think echoing a note about a possibly benign log message on every run is log noise, and I would prohibit it in most scripts; that effort is better spent detecting and handling the issue, or documenting it.

The approach in https://github.com/kubernetes-sigs/kind/pull/2601 resolves this, anyhow, thankfully 🙏

One possible solution is proposed in #2601. Would love to hear feedback.
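
For reference, the change there roughly amounts to checking whether the registry container is already attached to the kind network before calling docker network connect. A minimal sketch of that guard, assuming the default container name kind-registry from the docs script:

# Only connect the registry to the cluster network if it is not already
# attached; inspect prints "null" when the container has no endpoint in "kind".
if [ "$(docker inspect -f '{{json .NetworkSettings.Networks.kind}}' kind-registry)" = 'null' ]; then
  docker network connect "kind" kind-registry
fi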

The registry isn’t part of the cluster, so when you delete the cluster the registry is not affected. This is the desired behavior in a lot of cases because you don’t want to have to rebuild that local cache every time.

This is also the reason there is a script to create the cluster with a registry, rather than this being possible with a plain kind create.

If your desired behavior is to have the registry removed along with the cluster, one option would be to create a remove-kind-with-registry.sh script that calls kind delete and then docker rm -f kind-registry, so you can do it all in one operation.
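
A minimal sketch of such a script, assuming the default cluster name kind and registry container name kind-registry from the docs script:

#!/bin/sh
set -o errexit

# Delete the kind cluster (this leaves the registry container untouched).
kind delete cluster --name kind

# Force-remove the registry container so nothing is left behind.
docker rm -f kind-registry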

I am able to reproduce that error message:

[screenshot: KindEndpointError]

I think things are working fine though. Have you confirmed it is not using the local registry after recreating your cluster? It may just be an error that can be ignored.
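
One quick way to check whether the registry is actually being used, assuming it is published on localhost:5000 as in the docs script:

# Push a throwaway image to the local registry...
docker pull busybox
docker tag busybox localhost:5000/busybox:test
docker push localhost:5000/busybox:test

# ...then have the cluster pull it back through the registry.
kubectl run registry-test --image=localhost:5000/busybox:test --restart=Never -- sleep 60
kubectl get pod registry-test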

Reopening though since we should probably understand why it is giving the error.

/reopen

Echoing the previous comment from Sean: I think you may be assuming that the registry runs inside the kind cluster, but the script automates the steps you find in the article you linked: it creates a cluster, runs a Docker container with a local registry, and makes them work together.
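
Concretely, the registry part of the script boils down to something like this: an ordinary registry:2 container on the host Docker engine, attached to the network that kind's nodes use (names and port assume the docs script's defaults):

# Run a local registry as a plain container, outside any cluster.
docker run -d --restart=always -p "127.0.0.1:5000:5000" --name kind-registry registry:2

# Attach it to the "kind" docker network so cluster nodes can
# reach it at http://kind-registry:5000.
docker network connect "kind" kind-registry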

Perhaps I was confused because I eventually ran two clusters: one for the private local registry and one for the application itself.

You should not be running a cluster just for the local registry, unless your intention is to have the registry pod managed by Kubernetes. Otherwise, the registry is just a container run on your local Docker engine, and you would have one cluster configured to pull from it. That is what the script provides.
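
The "configured to pull from it" part is a containerd mirror entry passed in the cluster config; roughly what the docs script does (registry name and port again assume its defaults):

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://kind-registry:5000"]
EOF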

Thanks Sean for the great answer. I think we can close it now.

/close