gitea: PRs stuck on "Merge conflict checking is in progress. Try again in few moments."

Gitea Version

1.15.3

Git Version

2.17.1

Operating System

Ubuntu Server

How are you running Gitea?

I’m running Gitea in Docker, using the gitea/gitea:latest image.

Database

SQLite

Can you reproduce the bug on the Gitea demo site?

No

Log Gist

No response

Description

I’m not sure if it correlates with a recent update of Gitea, but all open PRs for a project are listed as “Merge conflict checking is in progress. Try again in few moments.” I’m also running Drone CI checks, but these are running and completing as expected.

I had been reviewing, updating, and merging PRs in the period leading up to this occurring.

Creating a new PR seems to work fine: Drone runs its check and the “This pull request can be merged automatically” message appears. I’ve tried closing and reopening one of the affected PRs, with no change. I’ve also tried checking out the branch, manually merging master into it and pushing (updating the PR); Drone runs and finishes, but the PR is still left in “Merge conflict checking is in progress”.

I’m not seeing any lingering processes in the “Monitoring” section of Site Administration. What would be the best logging configuration to help me drill down to a cause here, and is there a way to forcibly kick off the “merge conflict checking” process again for a PR?

I have also updated and restarted Gitea to see if that would help, to no avail.

Any assistance would be appreciated!

Cheers.

Screenshots

No response

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 72 (36 by maintainers)

Most upvoted comments

Feb 11 16:00:27 git gitea[3385540]: 2022/02/11 16:00:27 ...ueue_disk_channel.go:197:Run() [D] PersistableChannelUniqueQueue: pr_patch_checker Skipping running the empty level queue

And so we can see the problem. The queue is empty by Len but is not empty by Has…

I’m not sure how this can happen. I think your level queues are messed up.

Run:

gitea manager flush-queues

Wait for it to finish.

Shut down Gitea and delete the /data/queues/common folder.

Restart.
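
For a Docker deployment like the one described above, that sequence might look roughly like the sketch below. The container name gitea and the host path /srv/gitea for the /data volume are assumptions you would need to adjust for your setup.

```
# 1. Flush the queues inside the running container
#    (the official image's gitea wrapper should already point at /data/gitea/conf/app.ini)
docker exec -u git gitea gitea manager flush-queues

# 2. Stop the container and remove the persisted level-queue data
docker stop gitea
rm -rf /srv/gitea/queues/common   # host path mapped to /data/queues/common in the container

# 3. Start Gitea again; the queue directories should be recreated on startup
docker start gitea
```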

Yup, it looks like the value is stuck in the LevelDB, and I’m not certain why that is the case.

Do you see in your logs:

PersistableChannelUniqueQueue: pr_patch_checker-level Skipping running the empty level queue

or

LevelUniqueQueue: pr_patch_checker-level flushed so shutting down
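
A quick way to check which of those messages is present, assuming Gitea logs to a file such as /data/gitea/log/gitea.log (adjust the path, or grep the container output instead):

```
# Look for either queue message in the Gitea log (file path is an example)
grep -E "pr_patch_checker-level (Skipping running the empty level queue|flushed so shutting down)" \
  /data/gitea/log/gitea.log

# Or, if Gitea logs to the console in Docker:
docker logs gitea 2>&1 | grep pr_patch_checker
```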

NB: you can use <details> blocks to hide the long logs:

<details><summary> HEADER </summary>

```
text in code - the blank line above and below the block is important...
```

</details>

You could also do make release-linux and let xgo handle this.

(this probably means that my incantation above is not correct.)


I think this might be better?

TAGS="netgo osusergo bindata sqlite sqlite_unlock_notify sqlite_omit_load_extension" LDFLAGS='-extldflags=-static' make

Building on the server is also problematic, as the Node.js version in the distribution’s repository is too old.

I would suggest you build Gitea on the same OS version; then there will be no problem.

Just download an official Node.js build… or use backports or third-party sources.

https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.xz

https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-ubuntu-20-04
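
As a rough sketch of the “download an official one” route, using the tarball linked above (the version and the /usr/local prefix are just examples):

```
# Download and unpack an official Node.js build into /usr/local (example version/prefix)
curl -fsSLO https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.xz
sudo tar -xJf node-v16.14.0-linux-x64.tar.xz -C /usr/local --strip-components=1

# Check that the build toolchain now sees a recent Node.js
node --version
npm --version
```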

I bet this problem will be fixed by #18658 and its backport #18672, both of which are already merged. Please consider updating to the current 1.16 head: on Docker you can use the 1.16-dev tag, or download a build from https://dl.gitea.io/gitea/1.16 directly. It would be helpful to know if that solves the problem.

If you cannot upgrade, you could set:

[queue.pr_patch_checker]
WORKERS=1

This should ensure that there is always a worker available.

If you have a lot of PRs, you may want to consider changing the underlying queue type to redis or level.
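
As an illustrative sketch only (the app.ini path and the Redis connection string are assumptions, not something from this thread), switching that queue to Redis could look like:

```
# Append a redis-backed configuration for the PR patch checker queue to app.ini
# (path and connection string are examples; adjust them, and restart Gitea afterwards)
cat >> /data/gitea/conf/app.ini <<'EOF'
[queue.pr_patch_checker]
TYPE = redis
CONN_STR = redis://127.0.0.1:6379/0
WORKERS = 1
EOF
```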