continue: can't change model and create session
Before submitting your bug report
- I believe this is a bug. I’ll try to join the Continue Discord for questions
- I’m not able to find an open issue that reports the same bug
- I’ve seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS:
  llama.cpp model server: CentOS 7
  Continue client (with VS Code): Windows Server 2019, Windows 10
  (tried multiple client machines)
- Continue:
  latest version from the VS Code Marketplace,
  published 2023/5/28 04:17:59 (UTC+8)
Description
(As the title says) I cannot create a session once the Continue extension is installed, and I cannot change the model no matter how I edit config.py (I am fairly sure my edits are correct).
To reproduce
- Install the extension in VS Code
- Edit config.py (correctly, as far as I can tell)
- Click the "+" button to create a new session
- Choose "llama.cpp"
- Select the model preset "Code Llama Instruct"
Nothing changes: only one model can still be chosen at the bottom, and the loading circle just keeps spinning.
Log output
______________________
continue.log
Honestly, continue.log has not updated at all since I first installed the extension (last entries at 2023-10-19 17:25:27,420):
File "server\continuedev\libs\util\create_async_task.py", line 21, in callback
File "asyncio\futures.py", line 201, in result
File "asyncio\tasks.py", line 232, in __step
File "server\continuedev\core\context.py", line 288, in load_index
File "server\continuedev\server\meilisearch_server.py", line 146, in start_meilisearch
File "server\continuedev\server\meilisearch_server.py", line 84, in ensure_meilisearch_installed
File "server\continuedev\server\meilisearch_server.py", line 37, in download_meilisearch
File "server\continuedev\server\meilisearch_server.py", line 21, in download_file
File "aiohttp\client_reqrep.py", line 1043, in read
File "aiohttp\streams.py", line 375, in read
File "aiohttp\streams.py", line 397, in readany
File "aiohttp\streams.py", line 303, in _wait
File "aiohttp\helpers.py", line 725, in __exit__
asyncio.exceptions.TimeoutError
[2023-10-19 17:06:16,526] [WARNING] Error loading meilisearch index:
[2023-10-19 17:24:23,491] [WARNING] Error sending IDE message, websocket probably closed: Unexpected ASGI message 'websocket.send', after sending 'websocket.close'.
[2023-10-19 17:24:26,393] [WARNING] Error sending IDE message, websocket probably closed: Unexpected ASGI message 'websocket.send', after sending 'websocket.close'.
[2023-10-19 17:24:31,818] [WARNING] Error sending IDE message, websocket probably closed: Unexpected ASGI message 'websocket.send', after sending 'websocket.close'.
[2023-10-19 17:25:27,420] [DEBUG] IDE websocket disconnected
[2023-10-19 17:25:27,420] [DEBUG] Closing ide websocket
________________________________________
VS Code log
(developer tools logs, like a browser's F12 console)
[Extension Host] Websocket error occurred: {}
I @ console.ts:137
$logExtensionHostMessage @ mainThreadConsole.ts:39
_doInvokeHandler @ rpcProtocol.ts:455
_invokeHandler @ rpcProtocol.ts:440
_receiveRequest @ rpcProtocol.ts:370
_receiveOneMessage @ rpcProtocol.ts:296
(anonymous) @ rpcProtocol.ts:161
invoke @ event.ts:626
deliver @ event.ts:828
fire @ event.ts:789
fire @ ipc.net.ts:646
_receiveMessage @ ipc.net.ts:988
(anonymous) @ ipc.net.ts:852
invoke @ event.ts:626
deliver @ event.ts:828
fire @ event.ts:789
acceptChunk @ ipc.net.ts:390
(anonymous) @ ipc.net.ts:346
E @ ipc.net.ts:87
emit @ node:events:526
addChunk @ node:internal/streams/readable:315
readableAddChunk @ node:internal/streams/readable:289
Readable.push @ node:internal/streams/readable:228
onStreamRead @ node:internal/stream_base_commons:190
console.ts:137 [Extension Host] Websocket connection closed: {}
(as shown in the screenshot)
About this issue
- Original URL
- State: closed
- Created 8 months ago
- Reactions: 1
- Comments: 41 (18 by maintainers)
By the way, sir!
Some good news from just a few minutes ago: I successfully used Continue on another computer.
That proves it is not a problem with Continue itself. I did not change any config, so this is probably thanks to your update on 2023-11-3, 15:28:21 (UTC+8). Otherwise it could only be put down to occult metaphysics or a "bad connection" (接触不良 in Chinese).
Haha, we finally did it!
Really, thanks for your help, it helped a lot. Wish you a good life!
Oh, sorry for the late reply, sir! I may have missed the GitHub notification email (or my reply was eaten by GitHub).
You are so great, you are our hero.
I want to express the meaning of "你辛苦了!" to you (roughly, "thank you for your hard work"), but I can't find a good way to say it in English.
And by the way, I don't remember exactly when the client on my work-site computer returned to normal; it was several weeks ago, maybe November.
It must be thanks to one of your updates. I vaguely remember coming here to express my gratitude around that time, but I may be misremembering.
Nowadays the program has become much easier to use, which should benefit all users. You are such a good person!
Wish you a good day and a good life, haha. And it might be a bit early to say, but happy new year for 2024!
@Xingeqwd I finally have potential good news! We have removed the need for the separate Python server, which means that you should be able to just download the extension (pre-release version 0.7.x) and it will work without changing any URL.
And yes, @zba, you are right about this. That was a suggestion I made in haste.
@Xingeqwd Thanks so much for the error logs! I was able to find the problem and will publish the fix in our next version.
And really happy to hear that you found a way to make things work! Would you still want to run Continue on the other computer which was experiencing the problem, or do you think this is a good enough solution?
Sure thing, sir, I will certainly do that.
Here is a screenshot of the log from the developer tools (the most valuable part).
The three index.js files mentioned in the console log are attached below (the error line numbers don't exactly match the exported files, but the function and variable names are the same): index_js_longer.txt (1056), index_js_shorter.txt (40), index_js_med.txt (695)
If you need anything else, sir, please let me know and I will provide it.
Thank you, sir! You are really a good man. Wish you a nice day!
Hello sir @sestinj, this time we should really be almost there.
The client has already connected to the server, and the server managed to send requests to the model (mentioned above). The four types of API I tried returned various responses: 502, 404, "invalid", etc.
The "invalid" error proves the Continue server did successfully send a request to the model; on the other hand, it may also mean the Ollama API does not support Continue's "/chat/completions" request.
I think solving either the GGML 502 or the OpenAI (chatbox) 404 problem is more feasible; I only need to solve one of them, and that should be enough to get our AI friend running.
After solving any one of these problems, Continue should really work.
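For reference, a hedged way to check which API path the model server actually answers, independent of Continue. The host, port, and candidate paths below are assumptions/placeholders, not a documented interface (48080 is the llama-cpp-python port mentioned later in this thread):

```python
# Hedged sketch: probe the model server directly and compare status codes.
# A 404 means the path does not exist; a 400/422 or 200 means it does.
import requests

BASE = "http://localhost:48080"  # placeholder; use the real model server host/port
PATHS = [
    "/v1/chat/completions",  # OpenAI-style chat endpoint
    "/v1/completions",       # OpenAI-style completion endpoint
    "/api/generate",         # Ollama-style endpoint, if Ollama were used instead
]
payload = {"prompt": "hello", "messages": [{"role": "user", "content": "hello"}]}

for path in PATHS:
    try:
        r = requests.post(BASE + path, json=payload, timeout=15)
        print(path, "->", r.status_code, r.text[:120])
    except requests.RequestException as exc:
        print(path, "-> request failed:", exc)
```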
The logs are below.
The server log says there was an error in the Continue server; that should be the 502 (and the 404). By the way, the "overloaded with requests" message is the 404.
Attached is a screenshot of the extension showing the four APIs.
The 404 error looks like:
The 502 error looks like:
I researched these for a while but did not make any real progress, and I could not find the problem in the Continue server. The log level should already be the most verbose (debug); is there a level that is even more verbose? Probably not, I think.
Sorry (and many thanks), sir, I am bothering you again 😢. I will also keep trying my best to solve this.
Also thanks to anyone who visits this issue. Finally, wish everyone a good day and a good life again!
Hello and good afternoon, sir,
I have made some very big progress:
The client successfully connected to the Continue server, but the Continue server could not find the model _____ and it got stuck after I submitted a "hello".
I think there must be some mistake in the Continue server config file (in my edit). I have a Code Llama model served by llama-cpp-python listening on port 48080 on the server, and I edited the config file like this (I only added these two lines, because the original file also could not be used and got stuck).
(There seems to be no syntax error reported in the log; the program just does not recognize it.)
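(The attachment showing the actual edit is not included here. For illustration only, here is a hedged sketch of what such a two-line addition might look like in the old Python-based config.py; the import paths and class names are assumptions about that era of Continue and may not match the installed version, and the URL is a placeholder for the llama-cpp-python server above.)

```python
# Hypothetical sketch only: import paths and class names are assumptions,
# not a confirmed Continue API; adjust to match the installed version.
from continuedev.core.config import ContinueConfig   # assumed import path
from continuedev.core.models import Models           # assumed import path
from continuedev.libs.llm.ggml import GGML            # assumed provider class

config = ContinueConfig(
    models=Models(
        # Point the default model at the llama-cpp-python server on port 48080.
        default=GGML(server_url="http://<model-server-ip>:48080"),
    ),
)
```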
After I hit the submit button (on GitHub), I saw the Continue plugin light up with color, haha. That is awesome, master! Just this last bit left.
I suddenly thought: maybe I should follow the instructions written in the extension? That should be the reason the server cannot find the model. I will try this tomorrow! Hah, I think we are very, very close to success, sir! Thank you for your help so far!
Yes, I went and looked at the instructions:
(https://github.com/continuedev/continue/assets/46107662/b97c2d37-0301-4a13-a0ac-3e8a29af03af)
It turns out that I should start the Continue server with the model parameter, haha, that's for sure. Nice! I will try this tomorrow!
Okay, thank you a lot, sir! You are such a good man. I will try to work on something else while waiting for the update.
@Xingeqwd There is a log at ~/.continue/continue.log on the same computer where you are running the Continue server. It might help to test whether you can accomplish a curl request to the server from your local computer.
Also, if none of these things work I am making some changes this weekend to how the server connects, and there’s a chance this will help solve the problem
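For reference, a minimal sketch of that curl-style check done in Python from the client machine. The URL and port below are assumptions about where the Continue server listens in this setup; substitute whatever the extension settings actually show:

```python
# Hedged connectivity check: any HTTP response at all (even a 404 such as
# {"detail": "Not Found"}) means the server process is reachable; a
# connection error or timeout means it is not.
import requests

CONTINUE_SERVER_URL = "http://localhost:65432/"  # placeholder; take the real URL from the extension settings

try:
    resp = requests.get(CONTINUE_SERVER_URL, timeout=5)
    print("Reachable:", resp.status_code, resp.text[:100])
except requests.RequestException as exc:
    print("Could not reach the Continue server:", exc)
```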
@Xingeqwd great, let me know how it goes! Have a great day!
@sestinj Certainly, sir, the 192.168… server is a remote server (a different device). And by the way, I sent an HTTP request to the server a few minutes ago and got back {"detail":"Not Found"}. You are right, sir, I will try "https" (that is very likely to be the answer, because I forgot to try it before).
I will also try the solution from @yl3469 (congratulations on your success!).
Finally, thanks a lot to both of you masters, and I hope you both have a good day today!
@yl3469 Glad you were able to solve this! Is there anything else that might have helped you to do so more quickly, for example better documentation, different placement of the settings, etc…?
Hi Nate,
Thanks so much for getting back quickly! Actually, I have already resolved the issue. So the keys are:
(screenshot of the relevant settings attached)
> @yl3469 could you please share the curl request that you tried and I'll test some things on my own to try and solve the problem?
@sestinj Thank you for replying, sir, and sorry for my late reply.
I forgot to say days ago that I had also tried the pre-release version at that time, with no obvious effect (I only noticed the model changed to "GPT4"). A few minutes ago I tried the pre-release version again, and the problem still exists.
I think there must be many people with the same issue as me (because this circumstance is so weird). Finally, thank you, sir, wish you a good day.
@Xingeqwd Since the error looks related to Meilisearch, it might be helpful to try our pre-release version, where I’ve made a couple of related fixes.
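For context, the traceback in continue.log above ends in an aiohttp timeout while downloading Meilisearch, so one hedged check is whether the machine running the Continue server can reach an external download host at all. The URL below is only a placeholder, not necessarily the exact file Continue fetches:

```python
# Hedged sketch: reproduce the kind of aiohttp timeout shown in the traceback.
# The URL stands in for "some external host the server must download from";
# it is not the confirmed Meilisearch artifact Continue uses.
import asyncio
import aiohttp

async def check(url: str) -> None:
    timeout = aiohttp.ClientTimeout(total=30)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url) as resp:
            body = await resp.read()
            print(resp.status, len(body), "bytes received")

asyncio.run(check("https://github.com/meilisearch/meilisearch/releases"))
```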
Ahhh, I have been at this for two days (10.19-10.20). I can confirm that there is no problem with the firewall, the service is normal, and the connection is possible. I have even switched computers a few times, and I searched through almost all of the first few dozen results on Google.
Sorry for the poor grammar; time is a little tight and I have to hurry off to do something, so this is a bit rushed. Sorry, sorry, I will come back later to improve this post.
Thanks a lot to everyone, wish you a good day.