continue: can't change model and create session

Before submitting your bug report

Relevant environment info

- OS:
  - llama.cpp model server: CentOS 7
  - Continue client (with VS Code): Windows Server 2019, Windows 10 (tried multiple client devices)
- Continue: newest on the VS Code Marketplace, published 2023/5/28 04:17:59 (UTC+8)

Description

As in the reproduce steps below: once the Continue extension is installed, I cannot create a session, and I cannot change the model no matter how I edit config.py (I am confident my edits are correct).

To reproduce

  1. Install the extension in VS Code.
  2. Modify config.py correctly (see the screenshot, and the sketch after these steps).
  3. Click the "+" button to create a new session.
  4. Choose "llama.cpp".


  5. Select the model preset "CodeLlama Instruct".


And nothing changed! There is still only one model to choose from at the bottom, and the loading circle just keeps spinning (screenshot).
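For reference, the kind of config.py edit described in step 2 would plausibly look like the sketch below. This is a minimal sketch, not the reporter's actual file: the ContinueConfig/Models/GGML names are inferred from the continuedev module paths visible in the tracebacks later in this thread, and the address and port are placeholders.

```python
# ~/.continue/config.py -- minimal sketch, assuming the continuedev API
# inferred from the tracebacks below (continuedev/core/..., continuedev/libs/llm/ggml.py).
from continuedev.core.config import ContinueConfig
from continuedev.core.models import Models
from continuedev.libs.llm.ggml import GGML

config = ContinueConfig(
    models=Models(
        # Point the default model at the llama.cpp server; the address and
        # port are placeholders for the reporter's actual CentOS host.
        default=GGML(server_url="http://192.168.1.100:48080"),
    ),
)
```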

Log output

______________________
Continue log

In fact, continue.log has not updated at all since my first install (last entry: 2023-10-19 17:25:27,420):

  File "server\continuedev\libs\util\create_async_task.py", line 21, in callback

  File "asyncio\futures.py", line 201, in result

  File "asyncio\tasks.py", line 232, in __step

  File "server\continuedev\core\context.py", line 288, in load_index

  File "server\continuedev\server\meilisearch_server.py", line 146, in start_meilisearch

  File "server\continuedev\server\meilisearch_server.py", line 84, in ensure_meilisearch_installed

  File "server\continuedev\server\meilisearch_server.py", line 37, in download_meilisearch

  File "server\continuedev\server\meilisearch_server.py", line 21, in download_file

  File "aiohttp\client_reqrep.py", line 1043, in read

  File "aiohttp\streams.py", line 375, in read

  File "aiohttp\streams.py", line 397, in readany

  File "aiohttp\streams.py", line 303, in _wait

  File "aiohttp\helpers.py", line 725, in __exit__

asyncio.exceptions.TimeoutError

[2023-10-19 17:06:16,526] [WARNING] Error loading meilisearch index: 
[2023-10-19 17:24:23,491] [WARNING] Error sending IDE message, websocket probably closed: Unexpected ASGI message 'websocket.send', after sending 'websocket.close'.
[2023-10-19 17:24:26,393] [WARNING] Error sending IDE message, websocket probably closed: Unexpected ASGI message 'websocket.send', after sending 'websocket.close'.
[2023-10-19 17:24:31,818] [WARNING] Error sending IDE message, websocket probably closed: Unexpected ASGI message 'websocket.send', after sending 'websocket.close'.
[2023-10-19 17:25:27,420] [DEBUG] IDE websocket disconnected
[2023-10-19 17:25:27,420] [DEBUG] Closing ide websocket
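The frames above suggest the server timed out while downloading the Meilisearch binary (plausible if the CentOS host cannot reach the internet directly). A minimal sketch of what a download_file helper like the one named in the frames might do; this is reconstructed from the frame names alone, not the actual meilisearch_server.py source:

```python
import aiohttp

async def download_file(url: str, filename: str) -> None:
    # Sketch reconstructed from the traceback frames above; the real
    # continuedev.server.meilisearch_server.download_file may differ.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            # resp.read() is where aiohttp raised asyncio.TimeoutError in the
            # log above (aiohttp/client_reqrep.py, line 1043).
            data = await resp.read()
    with open(filename, "wb") as f:
        f.write(data)
```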



________________________________________
VS Code log
(developer-tools logs, like a browser's F12 console)

[Extension Host] Websocket error occurred:  {}
I @ console.ts:137
$logExtensionHostMessage @ mainThreadConsole.ts:39
_doInvokeHandler @ rpcProtocol.ts:455
_invokeHandler @ rpcProtocol.ts:440
_receiveRequest @ rpcProtocol.ts:370
_receiveOneMessage @ rpcProtocol.ts:296
(anonymous) @ rpcProtocol.ts:161
invoke @ event.ts:626
deliver @ event.ts:828
fire @ event.ts:789
fire @ ipc.net.ts:646
_receiveMessage @ ipc.net.ts:988
(anonymous) @ ipc.net.ts:852
invoke @ event.ts:626
deliver @ event.ts:828
fire @ event.ts:789
acceptChunk @ ipc.net.ts:390
(anonymous) @ ipc.net.ts:346
E @ ipc.net.ts:87
emit @ node:events:526
addChunk @ node:internal/streams/readable:315
readableAddChunk @ node:internal/streams/readable:289
Readable.push @ node:internal/streams/readable:228
onStreamRead @ node:internal/stream_base_commons:190
console.ts:137 [Extension Host] Websocket connection closed:  {}

(as shown in the screenshot)

About this issue

  • Original URL
  • State: closed
  • Created 8 months ago
  • Reactions: 1
  • Comments: 41 (18 by maintainers)

Most upvoted comments

Sorry about the new error! If you open the VS Code Command Palette with the keyboard shortcut command+shift+P, then type "Toggle Developer Tools", a window will open on the right. Clicking "Console" at the top will then show the logs. If you could share a screenshot of this, I'll try to have it fixed first thing tomorrow.

As for Ollama, it seems clear that this is an error on our side, so I'll do some testing here on my own to try and reproduce.

By the way, sir!

Some good news from just a few minutes ago: I used another computer and ran Continue successfully.

That proves it is not a problem with Continue itself. I did not change any config, so the credit should go to your update on 2023-11-3 15:28:21 (UTC+8); otherwise it could only be put down to occult metaphysics or a "bad connection" (接触不良 in Chinese, i.e. a loose contact).

Haha, we finally did it!

Really, thank you for your help! It helped a lot. Wish you a good life!

@Xingeqwd I finally have some potentially good news! We have removed the need for the separate Python server, which means that you should be able to just download the extension (pre-release version 0.7.x) and it will work without changing any URL.

And yes, @zba, you are right about this. That was a suggestion made in haste on my part.

Oh, sorry for the late reply, sir! I must not have noticed GitHub's notification email (or my reply was eaten by GitHub).

You are so great; you are our hero.

I want to express the meaning of "你辛苦了!" to you ("thank you for your hard work"), but I can't find a good way to say it in English.

And by the way, I don't remember exactly when the client on my work-site computer returned to normal. It was several weeks ago, maybe November.

It must be thanks to one of your updates. I vaguely remember that I already came here to express my gratitude at the time (though that may be a misremembering).

Nowadays the program has become much easier to use, which should benefit all users. You are such a good person!

Wish you a good day and a good life. Haha, and it may be a little early to say it: wish you a happy new year in 2024!


@Xingeqwd Thanks so much for the error logs! I was able to find the problem and will publish the fix in our next version.

And really happy to hear that you found a way to make things work! Would you still want to run Continue on the other computer which was experiencing the problem, or do you think this is a good enough solution?


Sure thing, sir, I will certainly do that.

Here is a screenshot of the log in the developer tools (the most valuable part). (screenshot)

The three index.js files mentioned in the console log are attached below (although the error line numbers don't exactly match the exported files, the function and variable names are the same): index_js_longer.txt (1056), index_js_shorter.txt (40), index_js_med.txt (695)

If you need anything else, sir, please let me know! I will surely provide it.

Thank you, sir! You are really a good man; wish you a nice day!

Hello sir @sestinj, that should really be almost it now.

The client has already connected to the server, and the server managed to send requests to the model (mentioned above). The four types of API gave various responses: 502, 404, "invalid", etc.

The "invalid" error proves that the Continue server succeeded in sending a request to the model. It may also show that the Ollama API does not support Continue's "/chat/completions" request.

I think solving the GGML 502 or the OpenAI (chatbox) 404 problem is more feasible; I only need to solve one of them, and that should be enough to get our AI friend up and running. After solving any one of these problems, Continue should really work.
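One quick way to tell these cases apart is to post the same chat request to each backend directly and compare status codes. A minimal sketch, assuming OpenAI-style endpoints; all addresses and ports are placeholders for the reporter's servers:

```python
import requests

# Placeholder base URLs for the backends being compared; adjust to the real hosts.
BASES = {
    "llama-cpp-python": "http://192.168.1.100:48080/v1",
    "ollama": "http://192.168.1.100:11434/v1",
}
payload = {"messages": [{"role": "user", "content": "hello"}]}

for name, base in BASES.items():
    try:
        r = requests.post(f"{base}/chat/completions", json=payload, timeout=10)
        # 404 -> the path is not served (e.g. an API without /chat/completions);
        # 502 -> a proxy could not reach the upstream model process.
        print(name, r.status_code, r.text[:200])
    except requests.RequestException as e:
        print(name, "connection failed:", e)
```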


The logs are below.

The server says there is an "Error in the Continue server"; that corresponds to the 502 (and the 404). By the way, the "overloaded with requests" message belongs to the 404.

A screenshot of the extension with the four APIs is attached. (screenshot)

The 404 error looks like:


[2023-11-01 17:42:26,775] [DEBUG] Received GUI message {"messageType":"main_input","data":{"input":"hello"}}
[2023-11-01 17:42:26,797] [ERROR] Error while running step: 
Traceback (most recent call last):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 403, in handle_error_response
    error_data = resp["error"]

KeyError: 'error'



During handling of the above exception, another exception occurred:


Traceback (most recent call last):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/core/autopilot.py", line 448, in _run_singular_step
    observation = await step(self.continue_sdk)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/core/main.py", line 382, in __call__
    return await self.run(sdk)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/plugins/steps/chat.py", line 110, in run
    async for chunk in generator:

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/base.py", line 459, in stream_chat
    async for chunk in self._stream_chat(messages=messages, options=options):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/openai.py", line 157, in _stream_chat
    async for chunk in await openai.ChatCompletion.acreate(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 382, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 728, in _interpret_async_response
    self._interpret_response_line(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 405, in handle_error_response
    raise error.APIError(

openai.error.APIError: Invalid response object from API: '{"detail":"Not Found"}' (HTTP response code was 404)

OpenAI is overloaded with requests. Please try again.
[2023-11-01 17:42:26,798] [CRITICAL] Exception caught from async task: Traceback (most recent call last):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 403, in handle_error_response
    error_data = resp["error"]

KeyError: 'error'


During handling of the above exception, another exception occurred:


Traceback (most recent call last):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/util/create_async_task.py", line 21, in callback
    future.result()

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/core/autopilot.py", line 645, in create_title
    title = await self.continue_sdk.models.summarize.complete(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/base.py", line 406, in complete
    completion = await self._complete(prompt=prompt, options=options)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/openai.py", line 186, in _complete
    resp = await openai.ChatCompletion.acreate(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 382, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 728, in _interpret_async_response
    self._interpret_response_line(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/openai/api_requestor.py", line 405, in handle_error_response
    raise error.APIError(

openai.error.APIError: Invalid response object from API: '{"detail":"Not Found"}' (HTTP response code was 404)

[2023-11-01 17:42:26,800] [ERROR] Error while running step: 
Invalid response object from API: '{"detail":"Not Found"}' (HTTP response code was 404)
Error in the Continue server
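For context on the KeyError: 'error' at the top of this traceback: the openai 0.x SDK expects an OpenAI-style error body ({"error": {...}}), while a FastAPI-style server answers a wrong path with {"detail": "Not Found"}, so the SDK fails to parse the body and wraps it in the APIError above. The "OpenAI is overloaded with requests" line is just Continue's generic message for this failure. A tiny illustration of the mismatch (not the SDK's full code):

```python
# Minimal illustration of the mismatch behind KeyError: 'error' above.
resp = {"detail": "Not Found"}  # FastAPI-style 404 body actually returned

try:
    error_data = resp["error"]  # the openai 0.x SDK expects an OpenAI-style body
except KeyError:
    # This is the KeyError in the log; the SDK then falls back to raising
    # APIError("Invalid response object from API: ..."), as shown above.
    print("no 'error' key -> wrapped as openai.error.APIError")
```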

The 502 error looks like:


[2023-11-01 17:45:21,501] [DEBUG] Received GUI message {"messageType":"set_model_for_role_from_index","data":{"role":"*","index":1}}
[2023-11-01 17:45:25,203] [DEBUG] Received GUI message {"messageType":"main_input","data":{"input":"hello"}}
[2023-11-01 17:45:25,219] [CRITICAL] Exception caught from async task: Traceback (most recent call last):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/util/create_async_task.py", line 21, in callback
    future.result()

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/core/autopilot.py", line 645, in create_title
    title = await self.continue_sdk.models.summarize.complete(

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/base.py", line 406, in complete
    completion = await self._complete(prompt=prompt, options=options)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/ggml.py", line 230, in _complete
    async for chunk in self._raw_stream_complete(prompt, options):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/ggml.py", line 107, in _raw_stream_complete
    raise Exception(

Exception: Error calling /chat/completions endpoint: 502

[2023-11-01 17:45:25,220] [ERROR] Error while running step: 
Error calling /chat/completions endpoint: 502
Error in the Continue server
[2023-11-01 17:45:25,222] [ERROR] Error while running step: 
Traceback (most recent call last):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/core/autopilot.py", line 448, in _run_singular_step
    observation = await step(self.continue_sdk)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/core/main.py", line 382, in __call__
    return await self.run(sdk)

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/plugins/steps/chat.py", line 110, in run
    async for chunk in generator:

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/base.py", line 479, in stream_chat
    async for chunk in self._stream_complete(prompt=prompt, options=options):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/ggml.py", line 244, in _stream_complete
    async for chunk in self._raw_stream_complete(prompt, options):

  File "/root/anaconda3/envs/continue-dev/lib/python3.10/site-packages/continuedev/libs/llm/ggml.py", line 107, in _raw_stream_complete
    raise Exception(

Exception: Error calling /chat/completions endpoint: 502

Error calling /chat/completions endpoint: 502
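Reading the frames, the 502 is raised inside continuedev's GGML client whenever the backend answers /chat/completions with a non-200 status, roughly as sketched below. This is a reconstruction from the frame names, not the actual ggml.py source; the payload and path prefix are assumptions. A 502 here often means a proxy in front of the model server could not reach the upstream llama.cpp process, rather than a fault in Continue itself.

```python
import aiohttp

async def raw_stream_complete(server_url: str, prompt: str):
    # Reconstruction of GGML._raw_stream_complete from the traceback above;
    # the real continuedev/libs/llm/ggml.py may differ in payload and path.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"{server_url}/v1/chat/completions",  # path prefix is an assumption
            json={"messages": [{"role": "user", "content": prompt}], "stream": True},
        ) as resp:
            if resp.status != 200:
                # This matches the line-107 raise shown in the log above.
                raise Exception(f"Error calling /chat/completions endpoint: {resp.status}")
            async for line in resp.content:
                yield line.decode("utf-8", errors="ignore")
```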


I researched these for a while, but did not make any valuable progress and could not find the problem in the Continue server. The log level should already be the most verbose (DEBUG); is there an even more verbose level? Probably not, I think.

Sorry (and many thanks), sir, I am coming to bother you again 😢 I will also keep trying my best to solve this.

Also, thanks to anyone who visited this issue. To close, I wish everyone a good day and a good life again!

@Xingeqwd There is a log at ~/.continue/continue.log on the same computer where you are running the Continue server.

It might help to test whether you can make a curl request to the server from your local computer.

Also, if none of these things work I am making some changes this weekend to how the server connects, and there’s a chance this will help solve the problem
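If curl is awkward on the Windows clients, the same check can be done from Python. A minimal sketch; the host, port, and /health path are assumptions about the Continue server, so adjust them to the actual server settings:

```python
import requests

# Assumptions: the Continue server listens on port 65432 and exposes a health
# endpoint; substitute the real host, port, and path if they differ.
try:
    r = requests.get("http://192.168.1.100:65432/health", timeout=5)
    print("server reachable:", r.status_code, r.text[:100])
except requests.RequestException as e:
    print("Continue server unreachable:", e)
```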

Hello and good afternoon, sir,

I have some very valuable, very big progress: the client successfully connected to the Continue server, but the Continue server could not manage to find the model; it got stuck after I submitted a "hello". (screenshot)

I think there must be some mistake in the Continue server config file (in my edit). I have a Code Llama model served by llama-cpp-python listening on port 48080 on the server, and I edited the config file like this (I only added these two lines, because the original file also could not be used and got stuck). (screenshot)

(It seems there is no syntax error in the log; the program just does not recognize it.)

After hitting the submit button (on GitHub), I saw the Continue plugin become colorful. Haha, that is awesome, master! Just this last bit left.

I suddenly thought: maybe I should follow the instructions written in the extension? That should be the reason the server cannot find the model. I will try this tomorrow! Haha, I think we are very, very close to success, sir! Thank you for your help so far!

Yeah, I went and looked at the instructions (screenshot: https://github.com/continuedev/continue/assets/46107662/b97c2d37-0301-4a13-a0ac-3e8a29af03af).

It turns out that I should start the model server with the model parameter. Haha, that's for sure. Nice! I will try this tomorrow!


Okay, thank you a lot, sir! You are such a good man. I will try some other things and wait for the update.


@Xingeqwd great, let me know how it goes! Have a great day!

@Xingeqwd To help me better understand the situation, is this 192.168… server on the same computer as VS Code, or is this a remote server?

If it’s on a different computer, you will probably have to use https:// instead of http://

@sestinj Certainly, sir: the 192.168… server is a remote server (a different device). By the way, I also sent an HTTP request to the server a few minutes ago, and I likewise got "{"detail":"Not Found"}". You are right, sir; I will try the "https" approach (that's very likely to be right, because I forgot to try it before).

I will also try the solution of @yl3469 (congratulations on his success!).

Finally, thanks a lot to both masters, and I hope you both have a very good day today!

@yl3469 Glad you were able to solve this! Is there anything else that might have helped you to do so more quickly, for example better documentation, different placement of the settings, etc…?

Hi Nate,

Thanks so much for getting back quickly! Actually, I have already resolved the issue. So the keys are:

  1. Make sure that you have manually installed the server and clean up previous sessions (a cleanup sketch follows below)
  2. Restart VS Code and make sure the settings are right under the remote plugin
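A hedged sketch of the cleanup in step 1, assuming old sessions are stored under ~/.continue (only the ~/.continue/continue.log path is confirmed in this thread; the sessions subdirectory is an assumption):

```python
import shutil
from pathlib import Path

# Assumption: stale sessions live in ~/.continue/sessions; only ~/.continue
# itself is confirmed elsewhere in this thread.
sessions = Path.home() / ".continue" / "sessions"
if sessions.exists():
    shutil.rmtree(sessions)
    print("Removed", sessions)
else:
    print("No session directory found at", sessions)
```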

(screenshot) From: Nate Sesti, Monday, October 23, 2023, re: [continuedev/continue] can't change model and create session (Issue #570):

@yl3469 could you please share the curl request that you tried, and I'll test some things on my own to try and solve the problem?


@Xingeqwd Since the error looks related to Meilisearch, it might be helpful to try our pre-release version, where I’ve made a couple of related fixes.

(screenshot, 2023-10-20 11:27 AM)

@sestinj Thank you for replying, sir, and sorry for my late reply.

I forgot to say, days ago, that I had also tried the pre-release version at that time, with no obvious effect (I only found the model changed to "GPT-4"). A few minutes ago I tried the pre-release version again, and the problem still exists. (screenshot)

I think there must be many people with the same issue as me (because this circumstance is so weird). Finally, thank you, sir; wish you a good day.


Ahhh, I have been at this for two days (10.19-10.20). I can confirm that there is no problem with the firewall, the service is normal, and the connection is possible. I have even switched computers a few times, and I went through almost all of the first dozens of results on Google.

Sorry for the poor grammar; time is a little tight and I have to hurry off to do something, so this is a little rushed. Sorry, sorry; I will come back later to improve it.

Thanks a lot to everyone who visited; wish you a good day!