cli: `create issue` fails with "GraphQL error: was submitted too quickly"

Describe the bug

Using a bash script to create several issues (with 3 seconds between each) fails intermittently, sometimes after 10, 50, or 150 issues, with “GraphQL error: was submitted too quickly”.

gh version 2.2.0 (2021-10-25) https://github.com/cli/cli/releases/tag/v2.2.0

Steps to reproduce the behavior

  1. Use a bash script create_issues.sh with content similar to this, in order to add a batch of issues (obviously use more entries/records/issues):
#!/bin/bash
gh issue create --label 'enhancement' --title 'Author `Carlsson, Stefan` has no kthid' --body '`Carlsson, Stefan` - should it be kthid `u1agqz5q` and/or orcid `NA` affiliated with 177 5956 874100 879223 882651 879234 882650 879225? - appears on PIDs
- [ ] [1049836](https://kth.diva-portal.org/smash/record.jsf?pid=diva2:1049836)
'
sleep 3
gh issue create --label 'enhancement' --title 'Author `Carminati, Barbara` has no kthid' --body '`Carminati, Barbara` - should it be kthid `NA` and/or orcid `0000-0002-7502-4731` affiliated with 177 879223 882650 879232? - appears on PIDs
- [ ] [1256570](https://kth.diva-portal.org/smash/record.jsf?pid=diva2:1256570)
'
sleep 3
gh issue create --label 'enhancement' --title 'Author `Carosio, Federico` has no kthid' --body '`Carosio, Federico` - should it be kthid `u1jimame` and/or orcid `NA` affiliated with 177 5923 5940 5948 5954 879224 879315 879340? - appears on PIDs
- [ ] [1291470](https://kth.diva-portal.org/smash/record.jsf?pid=diva2:1291470)
'
  2. Run the script; after 10, 50, or 150 issues have been created, the error appears (it is hard to predict when or why)

  3. See the error “GraphQL error: was submitted too quickly”

Expected vs actual behavior

Since I’m trying to “rate limit” myself by waiting 3 seconds between each new issue, I was not expecting to see the error.

I also ran gh api rate_limit when the error appeared, and its output doesn’t seem to indicate that I’m actually running into a rate limit, so I’m not sure that this is what is causing the error to be reported:

{
  "resources": {
    "core": {
      "limit": 5000,
      "used": 0,
      "remaining": 5000,
      "reset": 1637749811
    },
    "search": {
      "limit": 30,
      "used": 0,
      "remaining": 30,
      "reset": 1637746271
    },
    "graphql": {
      "limit": 5000,
      "used": 6,
      "remaining": 4994,
      "reset": 1637749488
    },
    "integration_manifest": {
      "limit": 5000,
      "used": 0,
      "remaining": 5000,
      "reset": 1637749811
    },
    "source_import": {
      "limit": 100,
      "used": 0,
      "remaining": 100,
      "reset": 1637746271
    },
    "code_scanning_upload": {
      "limit": 500,
      "used": 0,
      "remaining": 500,
      "reset": 1637749811
    },
    "actions_runner_registration": {
      "limit": 10000,
      "used": 0,
      "remaining": 10000,
      "reset": 1637749811
    },
    "scim": {
      "limit": 15000,
      "used": 0,
      "remaining": 15000,
      "reset": 1637749811
    }
  },
  "rate": {
    "limit": 5000,
    "used": 0,
    "remaining": 5000,
    "reset": 1637749811
  }
}

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 11
  • Comments: 32 (11 by maintainers)

Most upvoted comments

Closing this because the “was submitted too quickly” error is an intentional restriction on the platform to combat abuse by automated actors. Unfortunately, this rate limit is internal (since it can vary dynamically), and it also affects people who just want to legitimately create a lot of objects at once. The solution is to scan for this error message in your programs and retry creation after a delay of some minutes or up to an hour.
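
For anyone automating this today, that advice could look roughly like the bash sketch below; the create_with_retry helper name and the 15-minute delay are arbitrary choices, since the actual limits are undocumented.

#!/bin/bash
# Rough sketch: retry `gh issue create` whenever the "was submitted too quickly"
# message appears. The 15-minute delay is a guess, not an official figure.
create_with_retry() {
  local output
  until output=$(gh issue create "$@" 2>&1); do
    if grep -q "was submitted too quickly" <<<"$output"; then
      echo "Hit the creation rate limit, sleeping 15 minutes before retrying..." >&2
      sleep 900
    else
      echo "$output" >&2
      return 1
    fi
  done
  echo "$output"
}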

There should really be a batch request feature, or at least the ability to increase the request limit, because this is definitely a problem.

Okay, so I’ve spoken to people internally, and it looks like, in addition to the general API rate limit for queries (which you are not hitting), there is a content-creation rate limit for resources like Issues. This rate limit doesn’t appear to be exposed to API consumers via response headers, but you are allowed only a fixed number of issue creations per minute and, on top of that, a fixed number per hour. You seem to be hitting that, so the only thing I can suggest for now is to wait an hour when you hit this error.

Yes, absolutely agreed that this should be somehow communicated in the error message. I’ll follow up internally.

@mislav

So, … the reasonable thing is to throw an “unknown limits, should kinda work normally, but if it’s too much then implement more stuff yourself” at users? Worse, on the user side, this is either going to be addressed by doing exactly the same kind of busy-wait loop (so the same “hang so long”), or it won’t be addressed and we’d be hurting more collective skulls with puzzled “why is stuff broken” head-scratching.

The whole purpose of a tool like gh is to have a central place where functionality is implemented once, instead of home-cooked solutions that people need to implement and re-implement themselves. Furthermore, since GitHub maintains it, it would allow you to handle non-abusers in a central way. One example is implementing some backoff that won’t put too much load on the servers. Another is that, if there’s ever some way to demonstrate good behavior by <whatever> (like throwing a captcha), then that could be implemented here too.

Constructive proposal: (a) add some --maximum-time-I'm-willing-to-wait option with a possible value of infinity. (b) make a known error code just for this case, and then people can do things like

while ./gh issue create ...; [[ $? = 8 ]]; do sleep 1m; done

Hopefully it’s obvious why a specific exit code is needed to make that work, and why that’s much better than letting people grab the output and “parse” it.

Came here to add that trying to use automation to create issues for legit reasons in a reasonably large project is painful because of these limits. I’m coming from https://github.com/opensearch-project/.github/issues/121 where I’m creating PRs in 70+ repos.

This makes any issue migration pretty much impossible.

To avoid the per-hour rate limit you’d actually need to put a 24-second delay between POST requests (3600 seconds per hour ÷ 150 issues per hour = 24 seconds) 😕 The per-minute rate limit is low, but manageable. It’s really the per-hour rate limit that scuttles the usability here for me.

Thanks for the info!

If the “issues per minute” and “issues per hour” rate limits are documented somewhere, I can try to make sure that my bash script respects them. And it would be awesome if the error message mentioned the issue-creation rate limits when one runs into them, like I do.

Thanks by the way for the “gh” command, very useful!

Wishing for a “gh create issues” batch operation that takes a CSV with columns for title, body, and label and creates several issues in one transaction, but as long as I can avoid the rate limits, a custom bash script should do the job, I hope.
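
In the meantime, a rough sketch of what such a batch run could look like in bash, assuming a tab-separated file issues.tsv with one title/body/label row per issue (the 24-second sleep matches the per-hour spacing discussed above and is not an official figure):

#!/bin/bash
# Rough sketch: create one issue per row of issues.tsv (title<TAB>body<TAB>label).
# The 24-second sleep aims to stay under roughly 150 issues per hour.
while IFS=$'\t' read -r title body label; do
  gh issue create --title "$title" --body "$body" --label "$label"
  sleep 24
done < issues.tsv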

Closing this because the “was submitted too quickly” error is an intentional restriction on the platform to combat abuse by automated actors. Unfortunately, this rate limit is internal (since it can vary dynamically), and it also affects people who just want to legitimately create a lot of objects at once. The solution is to scan for this error message in your programs and retry creation after a delay of some minutes or up to an hour.

Maybe, and only maybe, it would be a good idea to implement internal retry logic in the gh CLI with exponential backoff, with the server side queueing the operation as a backpressure mechanism… but if every client program implements its own custom retry logic, in the end it would be worse, because the simplest solution is to retry until the operation succeeds, which hits the rate limit much more and, as a side effect, increases the load on the server-side system.
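
For what it’s worth, the kind of client-side exponential backoff being described can be approximated in a small wrapper script like this rough sketch; the 60-second base delay and the one-hour cap are arbitrary guesses.

#!/bin/bash
# Rough sketch: retry `gh issue create` with exponential backoff.
# Pass the usual `gh issue create` arguments to this script.
delay=60
until gh issue create "$@"; do
  echo "Creation failed, backing off for ${delay}s..." >&2
  sleep "$delay"
  delay=$(( delay * 2 ))
  if (( delay > 3600 )); then
    delay=3600
  fi
done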

It is indeed really annoying that these limits aren’t documented (and that they exist). I’m following all the guidelines on avoiding secondary rate limits, plus my own exponential backoff, but I’m still hitting these limits. GitHub really can’t handle 0.3 QPS of issue creation? It’s especially annoying that I can’t give GitHub a single batch query (which I just spent a while constructing) and just have GitHub figure it out.

FWIW, I’ve reverse-engineered the limits (as of right now) to be 20 issues per minute and 150 per hour. That’s just painfully low. I will not be able to accomplish my task here, which I’m only doing to work around another GitHub limitation (there’s no way to migrate an org-level project).

Constructive proposal: (a) add some --maximum-time-I'm-willing-to-wait option with a possible value of infinity. (b) make a known error code just for this case

I like both these proposals. We are already tracking (b) in https://github.com/cli/cli/issues/3292

Ah, triggering notifications! That makes much more sense as a reason for the really low rate limits. I was careful to ensure that these wouldn’t trigger any notifications (unless someone decided to watch my test repository for some reason), but of course it would be hard for GitHub to know that a priori. It does seem like it could know that by the time it sends the notifications, though. I actually wonder if the limit could be on notification triggering based on your actions; that seems generally much more useful.

The issue of these limits being undocumented is also pretty critical. I checked on the rate limits before doing this to make sure it was feasible and concluded from the documentation that it was. I wouldn’t have sunk hours into debugging this otherwise.

Thanks for the suggestion to contact support about getting a temporary exemption from the rate limits. That option is also something that would be helpful to document (although if the secondary limits had been documented I probably would’ve contacted support at the outset).

Anyway, sorry this is really off topic for the cli tool. I’m happy to direct these suggestions elsewhere. My experience with the community discussion forum is that no one ever responds and I’m just shouting into the void.

GitHub really can’t handle 0.3 QPS of issue creation?

In purely technical terms, GitHub’s infrastructure can of course handle this (and much more). The opaque limits were not put in place to combat DDoS (there are other systems for that), but to prevent other kinds of abuse via creation side effects: e.g. spamming a lot of notifications at once (since every issue or comment creation can @-mention people or teams).

Note that I am not on any of the platform teams, so I am not privy to the original decision-making behind this. I have, however, forwarded mine and others’ feedback to them about the frustration that the opaqueness of these limits causes.

which I’m only doing to work around another GitHub limitation (that there’s no way to migrate an org-level project).

It sucks that you’ve done all this work to prepare for a large migration between orgs, but that you’re hindered by rate limits that aren’t precisely documented. I don’t think there are any built-in tools for Project migration between orgs, but if you want to unblock your scripts, you can consider writing to Support about your predicament and asking to be exempt from these rate limits for a limited period (e.g. for 24h). Then you could run all your scripts at once.