CudaText: Language server can't keep up with typing speed

Is this just normal overhead of the JSON-RPC connection? Or is it a limitation of CudaText's LSP implementation? The language server is clangd (I got it as part of the distribution from winlibs.com) and the lexer is C++. Is there anything I can do to tweak it? This is my lsp_c.json:

{
  "lexers": {
    "C++": "cpp",
    "C": "c"
  },
  "cmd_windows": [
      "clangd"
  ]
}

(I added the bin directory of mingw64 to PATH, so clangd is already on PATH.)

I got the sample from here: https://wiki.freepascal.org/CudaText_plugins#LSP_servers_for_C.2FC.2B.2B

It seems you didn't expect clangd to be used on Windows? The sample is for Unix (cmd_unix); I changed it to use it on Windows.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 48 (40 by maintainers)

Most upvoted comments

Yes, caching is good, but caching like his, which produced completely wrong and nonsensical suggestions, is not good at all. He should spend more time polishing it before even attempting to get it merged.

@jpgpng the cache branch is merged into the main branch. You can test it and tell me if it still gives you nonsense suggestions.

Note: there is a new option "hard_filter" (default 0); also, speed should be improved with the use_cache=1 option (default 1).

Rebased and improved: https://github.com/veksha/cuda_lsp/tree/autocomplete.cache.2 Now the cache works with the latest cuda_lsp, plus I made the following change:

  • ignore the server's "textEdit.range" data, so autocompletion at v|oid (completing to volatile) will replace the whole word.
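For illustration, here is a rough sketch of the "replace the whole word" behavior described above (the function names are hypothetical, not the actual cuda_lsp API): instead of honoring the server's textEdit.range, which may cover only the typed prefix, we compute the full identifier span around the caret and replace that.

```python
# Hypothetical sketch; not the real cuda_lsp code.

def word_span(line: str, caret: int) -> tuple:
    """Return (start, end) of the identifier the caret sits in."""
    start = caret
    while start > 0 and (line[start - 1].isalnum() or line[start - 1] == "_"):
        start -= 1
    end = caret
    while end < len(line) and (line[end].isalnum() or line[end] == "_"):
        end += 1
    return start, end

def apply_completion(line: str, caret: int, new_text: str) -> str:
    """Replace the whole word under the caret with the completion text."""
    s, e = word_span(line, caret)
    return line[:s] + new_text + line[e:]

# caret after "v" in "void": completing to "volatile" replaces the whole word,
# not just the "v" prefix
assert apply_completion("void x;", 1, "volatile") == "volatile x;"
```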

Let's test it for some time.

And I think I have to return to my cache-branch experiments, because I see in the spec:

  • “for speed clients should be able to filter an already received completion list if the user continues typing. Servers can opt out of this using a CompletionList and mark it as isIncomplete.”

clangd returns isIncomplete=True, so I will need to disable the cache for such servers.
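A minimal sketch of that rule (illustrative names, not the actual cuda_lsp code): per the LSP spec, a CompletionList with isIncomplete=True must be re-queried as the user types, so it is unsafe to filter it from a cache.

```python
# Hypothetical helper; the function name is illustrative.

def should_cache(completion_response) -> bool:
    """Return True if the completion result may be cached and
    filtered client-side as the user continues typing."""
    # Servers may return either a bare list of CompletionItems
    # or a CompletionList object with an isIncomplete flag.
    if isinstance(completion_response, dict):
        return not completion_response.get("isIncomplete", False)
    return True  # a plain item list is complete by definition

# clangd-style response: incomplete, so skip the cache
assert should_cache({"isIncomplete": True, "items": []}) is False
# complete list: safe to cache and filter client-side
assert should_cache({"isIncomplete": False, "items": []}) is True
```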

@Alexey-T @jpgpng

Try changing 250 to 10 for MAX_TIMER_TIME in the file cudatext\py\cuda_lsp\language.py (path on Windows):

# change this to 10
MAX_TIMER_TIME = 250    # ms
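To see why lowering this helps, here is a rough sketch of what such a timer interval does (illustrative only, not the actual cuda_lsp logic): requests fired within MAX_TIMER_TIME milliseconds of the previous one are coalesced, so a smaller value means completion requests reach the server sooner while you type.

```python
# Illustrative throttle sketch; class and method names are hypothetical.
import time

MAX_TIMER_TIME = 10  # ms; the cuda_lsp default was 250

class Throttle:
    """Allow at most one request per interval; extra requests are dropped."""
    def __init__(self, interval_ms):
        self.interval = interval_ms / 1000.0
        self.last = float("-inf")

    def ready(self) -> bool:
        now = time.monotonic()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

t = Throttle(MAX_TIMER_TIME)
assert t.ready() is True    # first keystroke fires a request immediately
assert t.ready() is False   # a keystroke right after is coalesced
time.sleep(0.02)
assert t.ready() is True    # after the interval, requests fire again
```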

https://user-images.githubusercontent.com/275333/187281129-58e14434-cea2-41ac-884c-3abc058da59d.mp4

Yes, caching is good but caching like his that suggested completely wrong and nonsense suggestions is not good at all.

What do you mean? How should it cache correctly? (And it's not polite at all to talk about me in the third person while I'm here 😆)

For the sake of nothing…

For the sake of [what seems like the illusion of] progress and active development… That's software development nowadays: every few weeks a new browser version… and this text editor's release pace is even more furious. Do you expect it to be stable?

Yes, so let's test it for some days, and then merge the PR.

@jpgpng @Alexey-T I have an experimental version of cuda_lsp in this branch: https://github.com/veksha/cuda_lsp/tree/autocomplete.cache.2 You can download it and try it. It will not speed up the first request (I haven't worked on that yet), but it will show the cached list while you type your keyword.

https://user-images.githubusercontent.com/275333/186895875-fcf7d2c6-1a7c-4b75-84cc-b44bc7e887a0.mp4