llm-vscode: Client is not running

That’s what I get instead of autocompletion, whenever I type: [screenshot]

Quite cryptic TBH.

Runtime status:

Uncaught Errors (9)
write EPIPE
Client is not running and can't be stopped. It's current state is: starting
write EPIPE
write EPIPE
Client is not running and can't be stopped. It's current state is: starting
Pending response rejected since connection got disposed
Client is not running and can't be stopped. It's current state is: starting
Client is not running and can't be stopped. It's current state is: startFailed
Client is not running and can't be stopped. It's current state is: startFailed

Using Windows 11 with my local LM Studio server.
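For context, pointing the extension at a local server instead of the default endpoint is done through VS Code settings. The setting names below are assumptions based on a typical llm-vscode setup (verify against the extension README for your version); the port is LM Studio's default:

```json
{
  // Assumed setting names – check the llm-vscode README for the exact keys
  // in your extension version before copying this.
  "llm.backend": "openai",
  "llm.url": "http://localhost:1234/v1",
  "llm.modelId": "your-local-model"
}
```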

About this issue

  • Original URL
  • State: open
  • Created 8 months ago
  • Comments: 21 (8 by maintainers)

Most upvoted comments

I have very similar logs to @NicolasAG’s. Interestingly, this only happens when running in a dev container, not when running locally. Unfortunately, I took quite a lot of pain to move all my workflow to dev containers for portability, and this breaks that, so it’s not usable for me right now. Would love to help debug.

It does not run the model locally; it queries an API, which is by default our Inference API but can be any API you choose.
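To make "queries an API" concrete: by default the request goes to the Hugging Face Inference API, which takes a JSON body with the code context as "inputs" plus generation parameters; swapping in a local server mostly means swapping the URL. The model name and parameter values below are illustrative assumptions, and the actual network call is left commented out:

```python
import json

# Default-style endpoint (a Hugging Face Inference API model URL);
# a local server would replace this URL.
url = "https://api-inference.huggingface.co/models/bigcode/starcoder"

payload = {
    "inputs": "def fibonacci(n):",  # code context around the cursor
    "parameters": {"max_new_tokens": 60, "temperature": 0.2},
}
body = json.dumps(payload)

# The extension's server would POST this with an Authorization header, e.g.:
# urllib.request.urlopen(urllib.request.Request(
#     url, body.encode(), {"Content-Type": "application/json"}))
print(body)
```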

There were some major changes recently: we now have https://github.com/huggingface/llm-ls running with the extension. Your error says the connection between VSCode and llm-ls is broken for some reason. I work on Apple silicon macOS and have had no issues running the server, and I’m pretty sure it should also work on x86 macOS. My initial guess is that you’re using VSCode on a remote directory, but I’m not sure that’s the root cause here.
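The "write EPIPE" entries in the log fit that explanation: the language client keeps writing JSON-RPC to the server's stdin after the server process has died, and the OS reports EPIPE on the closed pipe. A minimal sketch (the child here is a stand-in that exits immediately; in the extension's case it would be the llm-ls binary):

```python
import contextlib
import subprocess
import sys

# Stand-in "server" process that exits right away, like a crashing llm-ls.
proc = subprocess.Popen(
    [sys.executable, "-c", "pass"],
    stdin=subprocess.PIPE,
)
proc.wait()  # server is gone; the pipe's read end is now closed

try:
    proc.stdin.write(b"Content-Length: 2\r\n\r\n{}")
    proc.stdin.flush()  # the actual os.write happens here
except BrokenPipeError as err:
    # This is what surfaces in the extension host as "write EPIPE"
    print("write EPIPE:", err)
finally:
    with contextlib.suppress(BrokenPipeError):
        proc.stdin.close()
```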

Same “Client is not running” issue here, in a local Ubuntu environment.

This issue is stale because it has been open for 30 days with no activity.

Is there any fix planned for this?

I tried downgrading the extension to the reported working version, and now I get the following error: “data did not match any variant of untagged enum TokenizerConfig” (a deserialization error, presumably because the tokenizer settings don’t match the format that extension version expects).

[screenshot]