AGiXT: Docker: Backend dies with error code 132

Version: ghcr.io/josh-xt/agent-llm-backend:v1.0.7

Reproduction:

  • docker compose up --build
  • Navigate to web interface
  • Type a task into the “Provide agent with objective” field and submit
  • Backend unexpectedly exits

Logs:

agent-llm-frontend-1  | yarn run v1.22.19
agent-llm-frontend-1  | $ next start
agent-llm-frontend-1  | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
agent-llm-backend-1   | INFO:     Started server process [1]
agent-llm-backend-1   | INFO:     Waiting for application startup.
agent-llm-backend-1   | INFO:     Application startup complete.
agent-llm-backend-1   | INFO:     Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
agent-llm-backend-1   | playsound is relying on another python subprocess. Please use `pip install pygobject` if you want playsound to run more efficiently.
agent-llm-backend-1   | INFO:     172.26.0.1:54466 - "GET /api/agent HTTP/1.1" 200 OK
agent-llm-backend-1   | INFO:     172.26.0.1:54476 - "GET /api/agent/Agent-LLM HTTP/1.1" 200 OK
agent-llm-backend-1   | INFO:     172.26.0.1:54490 - "GET /api/agent/Agent-LLM/command HTTP/1.1" 200 OK
Downloading (…)e9125/.gitattributes: 100%|██████████| 1.18k/1.18k [00:00<00:00, 6.21MB/s]
Downloading (…)_Pooling/config.json: 100%|██████████| 190/190 [00:00<00:00, 442kB/s]
Downloading (…)7e55de9125/README.md: 100%|██████████| 10.6k/10.6k [00:00<00:00, 16.3MB/s]
Downloading (…)55de9125/config.json: 100%|██████████| 612/612 [00:00<00:00, 1.21MB/s]
Downloading (…)ce_transformers.json: 100%|██████████| 116/116 [00:00<00:00, 239kB/s]
Downloading (…)125/data_config.json: 100%|██████████| 39.3k/39.3k [00:00<00:00, 8.81MB/s]
Downloading pytorch_model.bin: 100%|██████████| 90.9M/90.9M [00:01<00:00, 60.9MB/s]
Downloading (…)nce_bert_config.json: 100%|██████████| 53.0/53.0 [00:00<00:00, 81.2kB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 112/112 [00:00<00:00, 226kB/s]
Downloading (…)e9125/tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 9.72MB/s]
Downloading (…)okenizer_config.json: 100%|██████████| 350/350 [00:00<00:00, 673kB/s]
Downloading (…)9125/train_script.py: 100%|██████████| 13.2k/13.2k [00:00<00:00, 17.8MB/s]
Downloading (…)7e55de9125/vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 3.01MB/s]
Downloading (…)5de9125/modules.json: 100%|██████████| 349/349 [00:00<00:00, 1.54MB/s]
agent-llm-backend-1   | Using embedded DuckDB with persistence: data will be stored in: agents/Agent-LLM/memories
agent-llm-backend-1 exited with code 132
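
For reference, container exit codes above 128 are 128 plus the number of the fatal signal, so 132 means the process was killed by signal 4 (SIGILL, illegal instruction), which often points at a natively compiled dependency built for CPU instructions the host doesn't support. A quick way to decode it (container name taken from the logs above):

echo $((132 - 128))   # prints 4
kill -l 4             # prints ILL, i.e. SIGILL (illegal instruction)
docker inspect --format '{{.State.ExitCode}}' agent-llm-backend-1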

.env:

AGENT_NAME=Agent-LLM
WORKING_DIRECTORY=WORKSPACE
OBJECTIVE=Write an engaging tweet about AI.
INITIAL_TASK=Develop an initial task list.
AI_PROVIDER=oobabooga
AI_MODEL=vicuna
AI_TEMPERATURE=0.5
MAX_TOKENS=2040
AI_PROVIDER_URI=http://192.168.1.23:7860
COMMANDS_ENABLED=True
NO_MEMORY=False
USE_LONG_TERM_MEMORY_ONLY=False
USE_BRIAN_TTS=True
USE_MAC_OS_TTS=False

About this issue

  • State: closed
  • Created a year ago
  • Comments: 19 (9 by maintainers)

Most upvoted comments

I was having this problem; building the Docker images locally fixed it. Not sure why that worked, but I'd assume there's a problem with the CI images pushed to the registry.
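
For anyone who wants to try the same workaround, something like this forces a full local rebuild instead of using the registry image (assuming the service is named backend, as in the compose files below):

docker compose build --no-cache backend
docker compose up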

@Josh-XT

The --build flag isn't necessary if the services in the compose file point to Dockerfiles; Compose builds automatically the first time it sets up the stack. Adding --build just forces a rebuild even when the images already exist locally, so I would take it out of the README.

I can't tell whether the new images are working, as one of your latest commits is now causing a whole new error, but at least the error thrown is the same with the registry images and my locally built one from master.

As for why the registry images were failing, it's probably down to the cache and the fact that the backend image was 31 layers, which is nutty and can be problematic, especially on an image that's 5 GB.
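
To check the layer count and image size yourself (docker history prints one row per layer, so this is a rough count):

docker history -q ghcr.io/josh-xt/agent-llm-backend:v1.0.7 | wc -l
docker images ghcr.io/josh-xt/agent-llm-backend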

You can probably fix this either by using a scratch stage to squash all the layers together at the end, like so:

# Run FastAPI app with Uvicorn
FROM scratch AS uvicorn
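# a single COPY of the entire base-stage filesystem collapses all of its layers into one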
COPY --from=base / /
WORKDIR /app
COPY . .
EXPOSE 5000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "5000"]

or by combining all those RUN commands into one in the base image (each RUN creates a layer, and cleaning up the apt caches inside the same RUN that filled them is what actually shrinks the image):

RUN apt-get update && \
    apt-get install -y --no-install-recommends git build-essential g++ libgomp1 && \
    pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt && \
    pip install --force-reinstall --no-cache-dir hnswlib && \
    apt-get remove -y build-essential && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

Also I would update the docker-compose.yml to this:

version: "3.8"
services:
  frontend:
    image: ghcr.io/josh-xt/agent-llm-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    init: true
    ports:
      - "80:3000"
    environment:
      NEXT_PUBLIC_API_URI: ${NEXT_PUBLIC_API_URI:-http://backend:5000}
    env_file:
      - .env
    depends_on:
      - backend

  backend:
    image: ghcr.io/josh-xt/agent-llm-backend
    build:
      context: .
      dockerfile: Dockerfile-backend
    init: true
    ports:
      - "5000:5000"
    env_file:
      - .env
    # Optional persistent data
    volumes: 
      - ./data/agents:/app/agents:rw
      - ./data/workspace:/app/WORKSPACE:rw

# Some users have network problems on macOS, and Windows struggles with Docker DNS;
# a named Docker network can help reduce these issues
networks:
  default:
    name: agent-llm
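
If you adapt this, docker compose config will render the merged file and flag indentation mistakes before you bring the stack up:

docker compose config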

Now most people can just run:

docker compose up -d

to pull the latest images from the registry, or add --build to build locally.

Thanks @ceramicwhite. Building locally worked for me. It’s not prematurely exiting anymore, though I’m getting 500 errors, which I suspect might be a different problem.

My docker-compose.yml for anyone interested in building locally:

version: "3.8"
services:
  frontend:
    build: ./frontend
    ports:
      - "80:3000"
    environment:
      NEXT_PUBLIC_API_URI: http://backend:5000
    env_file:
      - .env
    depends_on:
      - backend
  backend:
    build:
      context: .
      dockerfile: Dockerfile-backend
    ports:
      - "5000:5000"
    env_file:
      - .env

Clone the repo, create your .env file, and run docker compose up --build
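
If you're also hitting the 500s, tailing the backend logs usually shows the traceback behind them:

docker compose logs -f backend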

Same here, using the gpt-4 model.

The last output before it dies is:

2023-04-25 18:06:18 Using embedded DuckDB with persistence: data will be stored in: agents/Agent-LLM/memories