nextjs-ollama-llm-ui: bug: issues pulling model (CORS)
I’m running the ollama server on localhost:11434. I go to “Settings” and try to pull a model as no models are listed in the dropdown (even though tinydolphin is pulled and working on localhost). However, the app pukes on this as follows:
⨯ TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async globalThis.fetch (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:57569)
    at async POST (webpack-internal:///(rsc)/./src/app/api/model/route.ts:7:22)
    at async /home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:63809
    at async eU.execute (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:53964)
    at async eU.handle (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:65062)
    at async doRender (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1333:42)
    at async cacheEntry.responseCache.get.routeKind (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1555:28)
    at async DevServer.renderToResponseWithComponentsImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1463:28)
    at async DevServer.renderPageComponent (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1856:24)
    at async DevServer.renderToResponseImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1894:32)
    at async DevServer.pipeImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:911:25)
    at async NextNodeServer.handleCatchallRenderRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/next-server.js:271:17)
    at async DevServer.handleRequestImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:807:17)
    at async /home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/dev/next-dev-server.js:331:20
    at async Span.traceAsyncFn (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/trace/trace.js:151:20)
    at async DevServer.handleRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
    at async invokeRender (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:163:21)
    at async handleRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:342:24)
    at async requestHandlerImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:366:13)
    at async Server.requestListener (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/start-server.js:140:13) {
  cause: Error: connect ECONNREFUSED ::1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }
}
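The important part is the cause at the bottom: `connect ECONNREFUSED ::1:11434`. Node resolved `localhost` to the IPv6 loopback `::1`, while Ollama by default listens on the IPv4 address `127.0.0.1:11434`, so this particular failure is a refused connection on the server side rather than CORS. As a rough sketch – not the project’s actual route code; the `OLLAMA_URL` env var name and request body here are illustrative, and `/api/pull` is the public Ollama endpoint for pulling a model – the handler behind `src/app/api/model/route.ts` does something like the following, and pointing it at `127.0.0.1` instead of `localhost` sidesteps the IPv6 resolution problem:

```ts
// Sketch only – not the repo's actual route. The env var name OLLAMA_URL and
// the body shape are illustrative; /api/pull is the Ollama endpoint for
// pulling a model.
export async function POST(req: Request): Promise<Response> {
  // Node 18+ may resolve "localhost" to ::1 (IPv6) first; if Ollama only
  // listens on 127.0.0.1:11434, that connection is refused – exactly the
  // ECONNREFUSED ::1:11434 in the trace above. Using the IPv4 loopback
  // address explicitly avoids the problem.
  const base = process.env.OLLAMA_URL ?? "http://127.0.0.1:11434";

  const { name } = await req.json(); // e.g. { name: "tinydolphin" }

  // Forward the pull request to the Ollama API and stream its response back.
  const upstream = await fetch(`${base}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });

  return new Response(upstream.body, { status: upstream.status });
}
```

If the app reads its Ollama base URL from an env var (the thread mentions `NEXT_PUBLIC_OLLAMA_URL` below), setting that to `http://127.0.0.1:11434` may have the same effect without touching code.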
About this issue
- State: closed
- Created 5 months ago
- Comments: 31 (12 by maintainers)
Thanks! I haven’t had a chance to go back, and my service provider was down last week. I’m finishing another project atm and will check this out later this week.
THANK YOU
So, to summarize…
If you’re having this problem, it’s probably because this JS frontend cannot pull the list of models/tags from the Ollama API endpoint. You should be able to verify this by looking in the JS Console immediately upon loading the UI (no need to add a nickname or pull a model).
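A quick way to run that check by hand – this is only a sketch of roughly what the frontend does, and the base URL below is an example that should match whatever your UI is configured to call – is to paste something like this into the browser’s JS console on the UI page:

```ts
// Example base URL – substitute the Ollama address your UI is configured with.
const base = "http://localhost:11434";

// GET /api/tags is the Ollama endpoint that lists locally pulled models.
fetch(`${base}/api/tags`)
  .then((res) => res.json())
  .then((data) => console.log("models:", data.models))
  // A "TypeError: Failed to fetch" together with a CORS message in the console
  // means the browser blocked the request as cross-origin.
  .catch((err) => console.error("blocked or unreachable:", err));
```

If the same URL works from the server itself but fails here, CORS (not connectivity) is the likely culprit.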
If you see a blocked request to `api/tags` in the console, then this CORS-related issue is almost certainly your problem. The URL scheme (protocol + host + port) of the blocked request will be different from the URL scheme in your browser’s location bar – that’s what makes the request “cross-origin”. Note that you can change the base location of the blocked request by setting `NEXT_PUBLIC_OLLAMA_URL` in the environment where you run `nextjs-ollama-llm-ui`.

If you’re being blocked by CORS rules, here’s how to fix it:
Take the URL scheme – e.g. `http://olllama-ui.mydomain.org` or `http://127.0.0.1:3000` – from the URL in your browser and restart the Ollama API server with the env var `OLLAMA_ORIGINS` set to that value. You can use multiple values, separated by commas (I have not tried `*`).

Thanks for the feedback. I think with your setup the issue might indeed be related to CORS. By default `localhost` and `0.0.0.0` should be allowed as origins. However, since you’re facing issues, have you tried setting `OLLAMA_ORIGINS` to your `http://ollama.h.usky.io:11434`? Check out the docs. Please report back to me if it fixed your issue.

Well, it managed to fix itself, not sure how.
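If you want to confirm that an `OLLAMA_ORIGINS` change actually took effect (rather than relying on things fixing themselves), one rough check – the URLs and origin below are placeholders, and this assumes Node 18+ with built-in fetch – is to send a request with an explicit `Origin` header and inspect the CORS response header:

```ts
// Rough check with placeholder URLs – adjust to your setup.
// Run with Node 18+, e.g. `npx tsx check-cors.ts` (file name is arbitrary).
const ollama = "http://127.0.0.1:11434"; // where the Ollama API listens
const uiOrigin = "http://127.0.0.1:3000"; // the origin your UI is served from

async function main() {
  // Send the Origin header a browser would send for a cross-origin request.
  const res = await fetch(`${ollama}/api/tags`, {
    headers: { Origin: uiOrigin },
  });

  // If OLLAMA_ORIGINS includes uiOrigin (or Ollama allows it by default),
  // this header should echo the origin back; if it is missing, the browser
  // will keep blocking the UI's requests.
  console.log("status:", res.status);
  console.log(
    "access-control-allow-origin:",
    res.headers.get("access-control-allow-origin"),
  );
}

main().catch(console.error);
```

If the `access-control-allow-origin` header comes back empty, the browser will keep blocking the UI’s requests even though direct server-to-server requests succeed.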
Hassle? Don’t be silly. It’s my pleasure. This will be SUPER powerful when I can separate the LLM processing and the UI. You should check out https://power.netmind.ai – they let you spin up a 2x 4090 GPU machine (200 GB HDD) for FREE during their beta.
So what I’m trying to do is run the LLM processing (Ollama) on one of those GPU machines and keep this UI separate, pointing it at the remote server.
It’ll then be SUPER fast. You should check them out and have a play. You can run a huge model ridiculously fast.
Same on a prod website: I can pull models, but can’t see them in the list.