nextjs-ollama-llm-ui: bug: issues pulling model (CORS)

I’m running the ollama server on localhost:11434. I go to “Settings” and try to pull a model as no models are listed in the dropdown (even though tinydolphin is pulled and working on localhost). However, the app pukes on this as follows:

⨯ TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async globalThis.fetch (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:57569)
    at async POST (webpack-internal:///(rsc)/./src/app/api/model/route.ts:7:22)
    at async /home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:63809
    at async eU.execute (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:53964)
    at async eU.handle (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:65062)
    at async doRender (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1333:42)
    at async cacheEntry.responseCache.get.routeKind (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1555:28)
    at async DevServer.renderToResponseWithComponentsImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1463:28)
    at async DevServer.renderPageComponent (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1856:24)
    at async DevServer.renderToResponseImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1894:32)
    at async DevServer.pipeImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:911:25)
    at async NextNodeServer.handleCatchallRenderRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/next-server.js:271:17)
    at async DevServer.handleRequestImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:807:17)
    at async /home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/dev/next-dev-server.js:331:20
    at async Span.traceAsyncFn (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/trace/trace.js:151:20)
    at async DevServer.handleRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
    at async invokeRender (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:163:21)
    at async handleRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:342:24)
    at async requestHandlerImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:366:13)
    at async Server.requestListener (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/start-server.js:140:13) {
  cause: Error: connect ECONNREFUSED ::1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }
}
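For what it’s worth, the cause line in the trace (connect ECONNREFUSED ::1:11434) means the Next.js API route tried the IPv6 loopback, while Ollama by default binds only to 127.0.0.1. As a hedged, illustrative diagnostic (not part of the project; URLs are assumptions), a quick Node/TypeScript probe can show which loopback address the Ollama API is actually reachable on:

```ts
// Illustrative diagnostic (Node 18+), not part of nextjs-ollama-llm-ui:
// check which loopback address the Ollama API answers on. An ECONNREFUSED on
// ::1 with a working 127.0.0.1 suggests "localhost" resolved to IPv6 while
// Ollama is bound to IPv4 only.
async function probe(): Promise<void> {
  for (const base of ["http://127.0.0.1:11434", "http://[::1]:11434"]) {
    try {
      const res = await fetch(`${base}/api/tags`);
      console.log(`${base} -> HTTP ${res.status}`);
    } catch (err) {
      console.log(`${base} -> unreachable (${(err as Error).message})`);
    }
  }
}

probe();
```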

[Two screenshots attached, 2024-02-13.]

About this issue

  • Original URL
  • State: closed
  • Created 5 months ago
  • Comments: 31 (12 by maintainers)

Most upvoted comments

Thanks! I haven’t had a chance to go back, and my service provider was down last week. I’m finishing another project at the moment and will check this out later this week.

THANK YOU

On 28 Feb 2024, at 13:24, Jakob Hoeg Mørk @.***> wrote:

@tmattoneill https://github.com/tmattoneill hey, any progress here? There has also just been a pull request merged, which might fix the issue.


So, to summarize…

If you’re having this problem, it’s probably because the JS frontend cannot pull the list of models/tags from the Ollama API endpoint. You should be able to verify this by looking in the JS console immediately upon loading the UI (no need to add a nickname or pull a model). If you see a blocked request to /api/tags, then this CORS-related issue is almost certainly your problem. The origin (protocol + host + port) of the blocked request will be different from the origin in your browser’s location bar – that’s what makes the request “cross-origin”. Note that you can change the base URL of the blocked request by setting NEXT_PUBLIC_OLLAMA_URL in the environment where you run nextjs-ollama-llm-ui.
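For context, the blocked browser-side request looks roughly like the sketch below. This is a hedged illustration, not the app’s actual code; the listModels name is invented, though the /api/tags response shape matches Ollama’s documented API:

```ts
// Hypothetical sketch of the browser-side fetch that gets blocked by CORS.
// The real component code in nextjs-ollama-llm-ui may differ.
const OLLAMA_BASE =
  process.env.NEXT_PUBLIC_OLLAMA_URL ?? "http://localhost:11434";

export async function listModels(): Promise<string[]> {
  // If OLLAMA_BASE's protocol + host + port differ from window.location.origin,
  // the browser treats this as a cross-origin request and enforces CORS.
  const res = await fetch(`${OLLAMA_BASE}/api/tags`);
  if (!res.ok) {
    throw new Error(`Failed to fetch model tags: ${res.status}`);
  }
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name);
}
```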

If you’re being blocked by CORS rules, here’s how to fix it:

  1. Ensure that the protocol in your browser (for the JS frontend) and the protocol of the blocked request (for the API server, read from the JS console) are both HTTPS or both plain HTTP. In other words, if you’re using HTTPS for one, you must use it for both!
  2. Copy the origin – the protocol + host + port (e.g. http://ollama-ui.mydomain.org or http://127.0.0.1:3000) – from the URL in your browser and restart the Ollama API server with the env var OLLAMA_ORIGINS set to that value (see the diagnostic sketch after this list). You can use multiple values, separated by commas (I have not tried *).
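As a quick way to check step 2 before restarting anything, here’s a minimal Node/TypeScript sketch (not part of the project; OLLAMA_URL and UI_ORIGIN are placeholders you should replace with your own values) that asks the Ollama server whether it will allow a given browser origin:

```ts
// Diagnostic sketch (Node 18+): does the Ollama server allow this browser origin?
const OLLAMA_URL = "http://127.0.0.1:11434"; // where the Ollama API listens (assumed)
const UI_ORIGIN = "http://127.0.0.1:3000"; // the origin in your browser's location bar (assumed)

async function checkCors(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`, {
    headers: { Origin: UI_ORIGIN },
  });
  const allowed = res.headers.get("access-control-allow-origin");
  console.log(
    `HTTP ${res.status}, Access-Control-Allow-Origin: ${allowed ?? "(missing)"}`
  );
}

checkCors().catch((err) => console.error(err));
```

If the header comes back missing (or not matching your UI’s origin), the browser will block the cross-origin request, and restarting Ollama with OLLAMA_ORIGINS set as in step 2 should fix it.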

Correction: I do see the blocked request to /api/tags in Safari’s JS console.

Looks like it could be a CORS problem. I believe that if the origin that serves the web UI is not the same as the origin of the API server, CORS headers are required on the API server, or the browser will block the requests.

It’s possible that adding the header Access-Control-Allow-Origin: * to the Ollama API server’s responses may fix the problem.
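Purely to illustrate that suggestion (this is not the project’s fix, and setting OLLAMA_ORIGINS on the Ollama server as described above is the simpler route), one generic way to add such headers is a tiny reverse proxy in front of the Ollama API. All hosts, ports, and header values below are assumptions:

```ts
// Illustration only: a small CORS-adding reverse proxy in front of the Ollama API.
// Prefer OLLAMA_ORIGINS if it works for you; "*" is deliberately permissive here.
import { createServer, request } from "node:http";

const OLLAMA_HOST = "127.0.0.1"; // assumed Ollama address
const OLLAMA_PORT = 11434;
const CORS_HEADERS = {
  "access-control-allow-origin": "*",
  "access-control-allow-methods": "GET,POST,DELETE,OPTIONS",
  "access-control-allow-headers": "content-type,authorization",
};

createServer((req, res) => {
  if (req.method === "OPTIONS") {
    // Answer CORS preflight requests directly.
    res.writeHead(204, CORS_HEADERS);
    res.end();
    return;
  }
  const upstream = request(
    {
      host: OLLAMA_HOST,
      port: OLLAMA_PORT,
      path: req.url,
      method: req.method,
      // Rewrite the Host header so Ollama sees a loopback host it accepts.
      headers: { ...req.headers, host: `${OLLAMA_HOST}:${OLLAMA_PORT}` },
    },
    (upstreamRes) => {
      // Relay the Ollama response with the CORS headers added.
      res.writeHead(upstreamRes.statusCode ?? 502, {
        ...upstreamRes.headers,
        ...CORS_HEADERS,
      });
      upstreamRes.pipe(res);
    }
  );
  upstream.on("error", (err) => {
    res.writeHead(502);
    res.end(`Upstream error: ${err.message}`);
  });
  req.pipe(upstream);
}).listen(11435, () => {
  console.log("CORS proxy for Ollama listening on http://127.0.0.1:11435");
});
```

Pointing NEXT_PUBLIC_OLLAMA_URL at the proxy port (11435 in this sketch) instead of 11434 would then let the browser requests through, assuming the protocol rules above are also satisfied.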

Thanks for the feedback. I think with your setup the issue might indeed be related to CORS. By default, localhost and 0.0.0.0 should be allowed as origins. However, since you’re facing issues, have you tried setting OLLAMA_ORIGINS to your http://ollama.h.usky.io:11434? Check out the docs. Please report back to me if it fixed your issue.

No worries at all. Thanks for the feedback and link to the issue.

Well, it managed to fix itself, not sure how.

Hassle? Don’t be silly. It’s my pleasure. This will be SUPER powerful when I can separate the LLM processing and the UI. You should check out https://power.netmind.ai; they let you spin up a 2x 4090 GPU machine (200 GB HDD) for FREE during their beta.

So what I’m trying to do is:

  • Run the Next.js UI on my web server
  • Offload the LLM processing to the Ollama API on NetMind 4090 GPUs

It’ll then be SUPER fast. You should check them out and have a play. You can run a huge model ridiculously fast.

[Two screenshots attached, 2024-02-15.]

Same on a production website: I can pull models, but I can’t see them in the list.