gitea: Pushing a large amount of data via git-lfs results in HTTP 413 after some time
- Gitea version (or commit ref): 1.2.3
- Git version: 2.11.0 on the server, 2.15.0 on the client; git-lfs version 2.3.4 on client and server
- Operating system: Debian Stable
- Database (use [x]):
  - PostgreSQL
  - MySQL
  - MSSQL
  - SQLite
- Can you reproduce the bug at https://try.gitea.io:
  - Yes (provide example URL)
  - No
  - Not relevant
    - The try instance seems to use git-lfs (if enabled) via HTTPS and does not accept my login credentials.
- Log gist:
Description
I have problems pushing a very large repository (1.8 GB) with git-lfs to my Gitea instance. git pushes the first ~80 MB; after that, the counter of already-uploaded data jumps between odd values (it increases, decreases, then increases again, and so on), and after some time I get lots of LFS client errors, which are HTTP 413 status codes reported back from Gitea. The last few lines say that there is an authorization error for `info/lfs/objects/batch`, and the push process stops.

This is reproducible across several push attempts. I tried setting the log level to Debug, but the log file stays mostly empty.

So I am asking for help with setting up debug logging (I cannot find a list of valid log level options), and I want to report this issue in general.
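For reference, this is roughly the `[log]` section I have been experimenting with in `app.ini` (key names taken from the documentation as I understand it; the path is a placeholder, so treat this as a sketch rather than a known-good config):

```ini
; app.ini — logging section (sketch; adjust ROOT_PATH to the actual install)
[log]
MODE      = file
LEVEL     = Debug              ; e.g. Trace, Debug, Info, Warn, Error, Critical
ROOT_PATH = /var/lib/gitea/log
```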
I have configured a Bitbucket repository as a second remote, and it accepts the large push without problems, so I think this is a server-side issue.
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 32 (6 by maintainers)
This may be a reverse proxy upload size setting problem?
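If a reverse proxy such as nginx sits in front of Gitea, the 413 usually comes from its request body limit. A minimal nginx sketch (host, port and size are placeholders, not taken from this setup):

```nginx
# Reverse proxy in front of Gitea — raise the upload limit for LFS pushes
server {
    server_name git.example.com;           # placeholder host

    # nginx returns 413 when the request body exceeds this limit (default 1m);
    # 0 disables the check entirely.
    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:3000;  # Gitea's default HTTP port
    }
}
```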
Damn, it turned out to be just a strange ingress issue. I drilled down into the Nginx ingress controller and, for some reason, it was ignoring any new annotations I added to the ingress spec. Deleting the ingress and creating it again solved the issue! Thanks @theAkito and @sapk for your quick feedback!
UPDATE: Still, it doesn't work over HTTPS; however, the problem is now identified as an ingress problem.

@bthulu Perhaps you should open a new issue. It is working for most people, as far as I have seen (including myself). Maybe you have a different root issue.
Update: I was able to fix this issue with `git config http.version HTTP/1.1` (see the sketch below).

@ZitRos That is good to know, especially for future readers of this issue.
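For anyone wanting to apply the same workaround, this is the command as I ran it (the `--global` variant is optional and affects every repository on the machine):

```sh
# Force git (and therefore git-lfs transfers) to use HTTP/1.1 for this repo
git config http.version HTTP/1.1

# or apply it machine-wide
git config --global http.version HTTP/1.1

# verify the setting took effect
git config --get http.version
```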
UPDATE 2: Yay! It turns out to be a git-lfs problem: it doesn't handle 308 redirects properly, which results in "file already closed" errors. The latest git-lfs binary (which is not yet released!) works flawlessly!
UPDATE 3: Pushing via HTTP is not successful either; however, now it seems to fail because of the size of the objects (anything larger than roughly 48 MB).

@theAkito thanks! I keep trying 😄
@sapk thanks for the suggestions! I tried everything 😃
Actually, I don't think the timeout is the problem, as underneath the git CLI issues a lot of individual uploads which take ~15 s at most.
I even tried a direct connection to the k8s service using a `NodePort` (without ingress); no luck there either, but now it fails with 413 after ~1 min 30 s instead of 1 min 😃 Unfortunately, the git CLI seems to start over all the time, which doesn't let me upload all the objects even with these strange errors.

The Gitea pod logs say nothing about the bunch of 413s I see after executing `git push`, just a number of successful uploads.

To add even more detail: I am uploading ~1.5 GB of data, which is large JPEGs of 5-100 MB each.
UPDATE: By looking at the `git` CLI debug trace more closely, I figured out that it definitely fails to upload some files (not to mention it always tries `http` first and then gets a 308 to `https`). `55ac260c6a81ab0ddd71be1784f9c13b5aff2ccfe99510b0e24a461e3043627a` is a 178 kB file!

UPDATE 2: Wasting a bit more time, I figured out that this issue is related to the http→https redirect. Even the direct connection (to the Gitea server) returns an `http` URL (the external host as set up in settings), and then gets a 308 to `https` from the ingress. Uploading via https fails for some reason, and only for some files…

60 s seems to be the default timeout of nginx. You can try to adjust the relevant timeouts (like `proxy_read_timeout`): http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_timeout
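For example, something along these lines in the nginx config that proxies to Gitea (values are placeholders; all three directives default to 60 s, which matches the ~1 minute failures described above):

```nginx
location / {
    proxy_pass            http://localhost:3000;  # Gitea backend
    proxy_read_timeout    600s;   # waiting for a response from Gitea
    proxy_send_timeout    600s;   # sending the request body (the LFS upload)
    proxy_connect_timeout 60s;
}
```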
I’m having the same issue: after ~60sec of uploading to LFS it fails.
I am using Kubernetes with the nginx ingress controller (installed from Helm), and playing with `nginx.ingress.kubernetes.io/proxy-body-size` didn't help (roughly what I tried is sketched below).

Fixed it. Thanks, @liu-kan!
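For completeness, this is roughly what the annotation looked like on the ingress (a sketch assuming the standard nginx ingress controller; the resource name, host and service are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea                      # placeholder name
  annotations:
    # "0" disables the body-size check; a concrete size like "512m" also works
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
    - host: git.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-http   # placeholder service name
                port:
                  number: 3000
```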