nebari: [bug] 500 : Internal Server Error when starting the server

Describe the bug


I’ve deployed a new QHub cluster on AWS (my first time). The cluster is available and I can log in. But when I try to start the server (“Start My Server” blue button), I get this error in big red letters: 500 : Internal Server Error

I’ve tried this twice now with the same outcome, after clearing out all AWS resources from the first try.

How can we help?

  • What are you trying to achieve?

A functioning JupyterLab instance in a QHub cluster on AWS, set up by following the step-by-step instructions in the documentation.

  • How can we reproduce the problem?

Not sure, but I can describe all the steps I took.

  • What is the expected behaviour?
  • And what is currently happening?

See above: clicking the blue “Start My Server” button returns 500 : Internal Server Error in big red letters instead of launching JupyterLab.

  • Any error messages?

500 : Internal Server Error

  • If helpful, please add any screenshots and relevant links to the description.

I’m not sure this will add anything, but I posted some earlier issues at https://github.com/Quansight/qhub/discussions/625 (and got help on those upstream steps, thanks!)

Your environment

Describe the environment in which you are experiencing the bug.

Include your conda version (conda --version), Kubernetes version, and any other relevant details.

Installed qhub from conda-forge

$ conda --version
conda 4.10.1

$ qhub --version
0.3.11

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 15 (6 by maintainers)

Most upvoted comments

Ah, I see. That’s somewhat expected: when you delete resources manually, it’s hard to be sure everything was destroyed cleanly. We recommend using the destroy command instead:

qhub destroy --config qhub-config.yaml

Yeah, that’s exactly what I’m doing; sorry, I wrote “qhub destroy” as shorthand for the full command. The trouble is that because this command often ends in an error, AWS resources are left stranded and the next qhub deploy attempt fails. So I’ve found that the only way to get back to square one after qhub destroy --config qhub-config.yaml fails is to clear the resources manually.
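The teardown-and-redeploy cycle described above can be sketched roughly as follows. This is a hypothetical sequence, not from the thread itself: it assumes qhub-config.yaml sits in the working directory and AWS credentials are already configured; only the destroy and deploy subcommands with --config are confirmed by the discussion.

```shell
# Tear down the deployment declared in qhub-config.yaml.
# If this exits non-zero, some AWS resources may be left stranded and
# have to be removed by hand in the AWS console before redeploying.
qhub destroy --config qhub-config.yaml

# Once the previous resources are actually gone, redeploy from the
# same configuration file.
qhub deploy --config qhub-config.yaml
```

When destroy fails partway, a fresh deploy against the half-deleted state is what produces the stranded-resource conflicts the reporter describes.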

By the way I am happy to get on a quick call to help you get started, feel free to let us know in here: https://gitter.im/Quansight/qhub

Thank you! I’ll take you up on it early next week.