deploy-pages: Deployment fails if the branch name contains special characters
Steps to reproduce:
- Create a new repository containing a static HTML file
- Add static.yml starter workflow with the branch rules removed
- Remove the branch protection rules from the github-pages environment in the repository settings
- Push a branch with a name containing non-ASCII characters (e.g. `büg`; see the encoding note after this list)
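For reference, this is what makes a name like `büg` special at the byte level: it contains a code point outside the ASCII range, so it takes two bytes in UTF-8 and must be percent-encoded if it ever ends up in a URL or header. This is an illustrative Node.js snippet, not code from the action:

```js
// Illustration only: the branch name "büg" contains the non-ASCII code point
// U+00FC, which becomes two bytes in UTF-8. Wherever the name is sent
// un-encoded, these bytes are a likely trigger for the failure.
const branch = 'büg';
console.log([...branch].map(c => c.codePointAt(0).toString(16))); // [ '62', 'fc', '67' ]
console.log(Buffer.from(branch, 'utf8'));                         // <Buffer 62 c3 bc 67>
console.log(encodeURIComponent(branch));                          // b%C3%BCg
```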
The action fails with an error message similar to the one in #158, without any indication of the source of the problem:
```
Error: Creating Pages deployment failed
Error: HttpError: invalid json response body at https://api.github.com/repos/oliverosz/gh-pages-test/pages/deployments reason: Unexpected end of JSON input
    at /home/runner/work/_actions/actions/deploy-pages/v2/webpack:/deploy-pages/node_modules/@octokit/request/dist-node/index.js:108:1
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at createPagesDeployment (/home/runner/work/_actions/actions/deploy-pages/v2/webpack:/deploy-pages/src/internal/api-client.js:116:1)
    at Deployment.create (/home/runner/work/_actions/actions/deploy-pages/v2/webpack:/deploy-pages/src/internal/deployment.js:58:1)
    at main (/home/runner/work/_actions/actions/deploy-pages/v2/webpack:/deploy-pages/src/index.js:30:1)
Error: TypeError: Converting circular structure to JSON
    --> starting at object with constructor 'TLSSocket'
    |     property '_httpMessage' -> object with constructor 'ClientRequest'
    --- property 'socket' closes the circle
```
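For context, the trace points at the action's createPagesDeployment helper, which issues an Octokit request to the Pages deployments endpoint shown in the error URL. The sketch below is a hand-written approximation, not the action's actual code: owner, repo, artifact URL, and token handling are placeholders, and the request parameters follow the public REST documentation for creating a Pages deployment, so they may not match exactly what deploy-pages v2 sends. It only shows how an empty or non-JSON response body from that endpoint surfaces as the opaque "Unexpected end of JSON input" error above.

```js
// Approximation only: calls the same endpoint the action uses
// (POST /repos/{owner}/{repo}/pages/deployments). Values are placeholders;
// parameter names follow the public REST docs and may differ from what
// deploy-pages v2 actually sends.
const { Octokit } = require('@octokit/rest');

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function createDeployment() {
  try {
    const response = await octokit.request(
      'POST /repos/{owner}/{repo}/pages/deployments',
      {
        owner: 'oliverosz',
        repo: 'gh-pages-test',
        artifact_url: 'https://example.invalid/artifact.zip', // placeholder
        environment: 'github-pages',
        pages_build_version: process.env.GITHUB_SHA || 'test', // per the REST docs
        oidc_token: process.env.OIDC_TOKEN || '<oidc-token>',  // placeholder
      }
    );
    console.log('Deployment created:', response.data);
  } catch (error) {
    // If the API responds with an empty or non-JSON body, the JSON parse step
    // inside the request stack fails and the real status/reason is hidden,
    // which matches the "invalid json response body ... Unexpected end of
    // JSON input" message in the log above.
    console.error('Creating Pages deployment failed:', error.status, error.message);
  }
}

createDeployment();
```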
About this issue
- Original URL
- State: open
- Created a year ago
- Reactions: 4
- Comments: 18 (2 by maintainers)
Just for reassurance: we’ve filed an internal bug to investigate this one further. In the meantime, please try to avoid branch names with non-ASCII characters as a temporary workaround.
Thank you for the report and your patience! 🙇
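A rough pre-flight check along these lines could enforce that workaround until the underlying bug is fixed. This is a hypothetical sketch rather than anything deploy-pages provides; it relies on the standard GITHUB_REF_NAME variable that Actions sets to the branch or tag name of the run.

```js
// Hypothetical pre-flight check (not part of the action): fail the job early
// with a clear message when the ref name contains characters outside printable
// ASCII, instead of hitting the opaque JSON-parse error at deployment time.
const ref = process.env.GITHUB_REF_NAME || '';
if (!/^[\x20-\x7E]+$/.test(ref)) {
  console.error(`Branch or tag name "${ref}" contains non-ASCII characters; ` +
    'deploy-pages may fail until it is renamed.');
  process.exit(1);
}
```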
Yes, I can confirm the issue does not happen on branches whose names contain only ASCII characters. I used the exact string you provided in the simple test repo we used to isolate and reproduce the issue.
For everyone’s information: I doubled the RAM and increased the CPU cores on the appliance VMs during downtime, and this issue stopped happening. I suspect that either a magical reboot resolved a bug, or there is a memory pressure issue with one of the backend services that handles this API endpoint.