harbor: Broken replication of the "///" docker images
Hello Harbor team,
Starting with Harbor v2.1.0, replication of Docker images named <registry URL>/<namespace>/<project>/<image>:tag
is broken. Such images cannot be replicated.
In these cases the replication process also appears to drain all available CPU and RAM resources, effectively causing a DoS.
Harbor v2.0.3 does not have this issue.
Images named <registry URL>/<namespace>/<project>:tag
replicate without any issues.
It is also possible to push a <registry URL>/<namespace>/<project>/<image>:tag
image directly to Harbor without any issues.
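To illustrate, the difference between the working and the failing name forms comes down to the number of slash-separated segments in the repository path. A minimal Go sketch (a hypothetical helper for illustration only, not Harbor code) shows why logic that assumes exactly two segments would mishandle the nested form:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepository splits a repository path (registry URL and tag
// already stripped) into its slash-separated segments.
// Hypothetical helper for illustration; not part of Harbor.
func splitRepository(repo string) []string {
	return strings.Split(repo, "/")
}

func main() {
	// Two-segment name: replicates fine.
	fmt.Println(len(splitRepository("namespace/project"))) // 2
	// Three-segment name: the form that fails to replicate.
	fmt.Println(len(splitRepository("namespace/project/image"))) // 3
}
```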
Expected behavior and actual behavior:
This issue was encountered while setting up replication from the GitLab Container Registry (GLCR) to a Harbor registry.
Some of the Docker images stored in the GLCR have names of the following form:
<registry URL>/<namespace>/<project>/<image>:tag
E.g.,
image to replicate (GLCR): registry.example.com/group/project/image:latest
This image needs to be replicated to the following Harbor location: <Harbor registry URL>/<namespace>/<project>/<image>:tag
Expected behavior
Harbor successfully performs the pull-based replication of the Docker image <registry URL>/<namespace>/<project>/<image>:tag
to <Harbor registry URL>/<namespace>/<project>/<image>:tag
without any issues.
Actual behavior
Harbor fails the replication (Error) of the Docker image <registry URL>/<namespace>/<project>/<image>:tag
to <Harbor registry URL>/<namespace>/<project>/<image>:tag.
It also looks like this causes a DoS on the Harbor node, as all CPU and RAM resources are drained.
Steps to reproduce the problem:
- Create a GitLab project in some group and push into it a Docker image named <registry URL>/<namespace>/<project>/<image>:tag. I assume this might also apply to other Docker registries.
- Set up a replication rule in Harbor to replicate <namespace>/** from GitLab to Harbor.
- Run the replication.
- The replication will fail without the <registry URL>/<namespace>/<project>/<image>:tag image being replicated.
- If many projects are to be replicated, the replication of the <registry URL>/<namespace>/<project>/<image>:tag image will cause a DoS on the Harbor node.
Versions:
Please specify the versions of the following systems.
- harbor version: [v2.1.0]
Additional context:
Harbor error log
2020-10-26T14:08:16Z [ERROR] [/replication/transfer/image/transfer.go:175]: Get "https://<registry URL>/jwt/auth?scope=repository%!A(MISSING)<namespace>%!F(MISSING)<project>%!F(MISSING)<image>%!A(MISSING)pull&service=container_registry": dial tcp: lookup <registry URL> on 127.0.0.11:53: read udp 127.0.0.1:55739->127.0.0.11:53: i/o timeout
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 22 (13 by maintainers)
We are currently evaluating Harbor for replicating our Gitlab Container Registry which is hosted in an internal development environment to a Harbor registry running in a production environment.
We have many groups with subgroups, projects, and container registry repositories per project. In particular, we have a
docker
subgroup in GitLab which contains projects for all of our base images (name filter
mycompany/docker/**
).
The replication is only working for a small number of the expected images. I looked into the GitLab API requests that were performed by running Harbor locally and adding some debug logging. It seems the initial request is
https://gitlab.mycompany.com/api/v4/projects?search=docker&membership=1&per_page=50
which fetches projects matching a particular search expression (the second part of the filter path). This request does not return the correct set of results (in fact it returns more, and the results seem to be filtered later in the Harbor adapter).
After digging into the GitLab API I found the query parameter
search_namespaces
. After adding
&search_namespaces=1
to the request in the GitLab client (https://github.com/goharbor/harbor/blob/master/src/replication/adapter/gitlab/client.go#L103), it seems all expected projects (including some more outside of the desired group) were returned.
I don't know if this fixes the originally described issue, but it seems our case would be fixed by it. I'll create a PR for this change.
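The change described above amounts to adding search_namespaces to the project-search query. A minimal Go sketch of building such a request URL (the host and parameter values mirror the example above; this is an illustration of the query shape, not the actual Harbor client code):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildProjectSearchURL builds a GitLab projects-API search URL.
// When searchNamespaces is true, search_namespaces=1 is added so the
// search expression also matches namespace paths (e.g. a "docker"
// subgroup), not only project names.
func buildProjectSearchURL(base, search string, searchNamespaces bool) string {
	q := url.Values{}
	q.Set("search", search)
	q.Set("membership", "1")
	q.Set("per_page", "50")
	if searchNamespaces {
		q.Set("search_namespaces", "1")
	}
	// url.Values.Encode emits parameters sorted by key.
	return base + "/api/v4/projects?" + q.Encode()
}

func main() {
	fmt.Println(buildProjectSearchURL("https://gitlab.mycompany.com", "docker", true))
	// -> https://gitlab.mycompany.com/api/v4/projects?membership=1&per_page=50&search=docker&search_namespaces=1
}
```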
Hi @bitsf,
thank you for the quick feedback.
I’ll need to re-create the lab and run the tests again.
Will come back to you once it’s done.