LocalAI: Chatbot UI does not seem to be working

Hey guys, I love this project and am willing to contribute to it. To learn more about it, I need some help getting the Chatbot UI to work.

Following the example, here is my docker-compose.yaml:

version: '3.6'

services:

  api:
    image: quay.io/go-skynet/local-ai:latest
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai --threads 8" ]

  web-ui:
    image: ghcr.io/mckaywrigley/chatbot-ui:main
    restart: always
    ports:
      - 3000:3000
    environment:
      - 'OPENAI_API_KEY='
      - 'OPENAI_API_HOST=http://api:8080'
  • The Chatbot UI keeps on loading and throws an "unable to find model" message (the original post included a screenshot)

  • I am exposing Chatbot UI over the internet

Can you please guide me on what the values of OPENAI_API_KEY and OPENAI_API_HOST should be in this case? I am sure something is wrong in my config.

About this issue

  • State: open
  • Created a year ago
  • Reactions: 3
  • Comments: 16 (2 by maintainers)

Most upvoted comments

Chatbot UI doesn’t seem to be using custom models. My models endpoint returns {"object":"list","data":[{"id":"thebloke__wizardlm-13b-v1-0-uncensored-superhot-8k-ggml__wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_k_m.bin","object":"model"}]}, which I can query via the terminal; however, Chatbot UI does not display the available models and tries to use gpt-3.5-turbo.

My docker-compose.yaml:

version: '3.6'

services:
  api:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    image: quay.io/go-skynet/local-ai:master-cublas-cuda12
    tty: true # enable colorized logs
    restart: always # should this be on-failure ?
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models
    command: ["/usr/bin/local-ai" ]

  chatgpt:
    depends_on:
      api:
        condition: service_healthy
    image: ghcr.io/mckaywrigley/chatbot-ui:main
    ports:
      - 3000:3000
    environment:
      - 'OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXX'
      - 'OPENAI_API_HOST=http://api:8080'
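
Since chatbot-ui only lists the models it gets back from the API, it can help to check what LocalAI actually reports before touching the UI. A minimal TypeScript sketch (Node 18+ for the global fetch; assumes the api service is published on localhost:8080 as in the compose file above):

// check-models.ts - sanity-check LocalAI's OpenAI-compatible model list.
// Assumes Node 18+ (global fetch) and the api service on localhost:8080.
async function main() {
  const res = await fetch('http://localhost:8080/v1/models');
  if (!res.ok) {
    throw new Error(`GET /v1/models failed: ${res.status} ${res.statusText}`);
  }
  const body = (await res.json()) as { object: string; data: { id: string }[] };
  // These ids are what chatbot-ui has to recognize by name.
  console.log(body.data.map((m) => m.id));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

If the model ids show up here but not in the UI, the problem is on the chatbot-ui side (see the enum discussion below) rather than in LocalAI.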

⚠️⚠️⚠️⚠️⚠️

Hi! I’m a bot running with LocalAI (a crazy experiment of @mudler) - please be aware that I might hallucinate sometimes!

but… I can also be funny or helpful 😸 and I can generally give good tips or point you to places to look in the documentation or in the code, based on what you wrote in the issue.

Don’t engage in conversation with me, I don’t support (yet) replying!

⚠️⚠️⚠️⚠️⚠️

To get the chatbot UI to work, you need to update the OPENAI_API_KEY and OPENAI_API_HOST environment variables in the chatgpt service’s environment section. OPENAI_API_KEY should be set to your OpenAI API key, which you can obtain from the OpenAI website. OPENAI_API_HOST should be set to the IP address or FQDN of the externally managed LocalAI service, marked as <<LOCALAI_IP>> in the snippet below. Here’s an example of what your chatgpt service’s environment section should look like:

  chatgpt:
    environment:
      - 'OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXX'
      - 'OPENAI_API_HOST=http://<<LOCALAI_IP>>:8080'

Make sure to replace the <<LOCALAI_IP>> placeholder with the IP address or FQDN of your externally managed LocalAI service.


That is because chatbot-ui doesn’t define GGML_GPT4ALL_J in the OpenAIModelID enum.

I added the missing entries to chatbot-ui and it worked. (The original comment included screenshots of the code before and after the change.)
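
The screenshots did not survive, but a sketch of the kind of change being described follows. It assumes chatbot-ui's types/openai.ts of that era, with an OpenAIModelID enum and an OpenAIModels record over an OpenAIModel interface (id, name, maxLength, tokenLimit); the model id and the limits here are illustrative, not the exact values from the screenshots:

// types/openai.ts (chatbot-ui) - a sketch, not the exact code from the screenshots.
export interface OpenAIModel {
  id: string;
  name: string;
  maxLength: number; // maximum length of an input message, in characters
  tokenLimit: number;
}

export enum OpenAIModelID {
  GPT_3_5 = 'gpt-3.5-turbo',
  GPT_4 = 'gpt-4',
  // Added: the id must match the model name LocalAI reports on /v1/models.
  GGML_GPT4ALL_J = 'ggml-gpt4all-j',
}

export const OpenAIModels: Record<OpenAIModelID, OpenAIModel> = {
  [OpenAIModelID.GPT_3_5]: {
    id: OpenAIModelID.GPT_3_5,
    name: 'GPT-3.5',
    maxLength: 12000,
    tokenLimit: 4000,
  },
  [OpenAIModelID.GPT_4]: {
    id: OpenAIModelID.GPT_4,
    name: 'GPT-4',
    maxLength: 24000,
    tokenLimit: 8000,
  },
  // Added entry for the local model (limits are illustrative guesses):
  [OpenAIModelID.GGML_GPT4ALL_J]: {
    id: OpenAIModelID.GGML_GPT4ALL_J,
    name: 'GPT4All-J',
    maxLength: 12000,
    tokenLimit: 4000,
  },
};

With an entry whose id matches what the API returns, the UI can list the local model instead of silently falling back to gpt-3.5-turbo.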

I’m also unsuccessful with Chatbot-UI. I added all the .tmpl files to make sure the UI detects gpt4all as gpt-3.5-turbo, and it shows up when creating a new chat. I can also see that the UI’s call to https://{{ chat }}/api/models succeeds.

But trying to talk to the bot returns nothing.


The API itself works:

curl https://{{ api }}/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'
{"object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"message":{"role":"assistant","content":"I am doing well. How about you?"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

EDIT: I used the new docker-compose provided by @mudler yesterday (thanks! ♥️) and it now works! I think the issue was with the files provided in the manual installation or an error on my side when I copied them to my directory.