LlamaEdge: bug: Failed to run start script

Summary

bash <(curl -sSfL 'https://code.flows.network/webhook/iwYN1SdN3AmPgR5ao5Gt/run-llm.sh')

error:

sed: /Users/user/.bash_profile: in-place editing only works for regular files
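This sed error on macOS usually means the target is not a regular file — most often `~/.bash_profile` is a symlink (e.g. managed by a dotfiles repo), which BSD sed refuses to edit in place. A possible workaround, sketched below with a throwaway directory standing in for the real profile (all paths hypothetical), is to run sed into a temp file and write the result back through the symlink instead of using `sed -i`:

```shell
#!/bin/sh
# Simulate a symlinked ~/.bash_profile in a throwaway directory.
set -e
dir=$(mktemp -d)
printf 'export OLD_VAR=1\n' > "$dir/profile.real"
ln -s "$dir/profile.real" "$dir/.bash_profile"   # symlink, like a dotfiles setup

profile="$dir/.bash_profile"

# Workaround: sed to a temp file, then write back through the symlink.
tmp=$(mktemp)
sed 's/OLD_VAR/NEW_VAR/' "$profile" > "$tmp"
cat "$tmp" > "$profile"    # the redirect follows the symlink; no in-place edit
rm -f "$tmp"

grep NEW_VAR "$profile"
```

This keeps the symlink intact (a plain `mv "$tmp" "$profile"` would replace the link with a regular file).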

Reproduction steps

  1. Run bash <(curl -sSfL 'https://code.flows.network/webhook/iwYN1SdN3AmPgR5ao5Gt/run-llm.sh')
  2. Press Enter
  3. Press 1
  4. Press Enter

Screenshots

[Screenshot attached: 2024-01-31 at 13:05:42]

Any logs you want to share for showing the specific issue

No response

Model Information

None

Operating system information

macOS 14.2.1 (23C71)

ARCH

Not sure

CPU Information

2.6 GHz 6-Core Intel Core i7

Memory Size

32GB

GPU Information

Radeon Pro 560X 4 GB Intel UHD Graphics 630 1536 MB

VRAM Size

Not sure

About this issue

  • Original URL
  • State: closed
  • Created 5 months ago
  • Comments: 15 (9 by maintainers)

Most upvoted comments

Thanks @liebkne! This issue helps us diagnose potential problems with the Intel MacBook models.

I also noticed that your GPU memory is a little small for running some LLM models. If you are trying to execute a larger model, you may need to append --n-gpu-layers <a small value> to the execution command and run it yourself:

[+] Will run the following command to start CLI Chat:

        wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-chat.wasm --prompt-template llama-2-chat --n-gpu-layers <a smaller value>

@liebkne I don’t have an Intel MacBook with me right now, so I will use a GitHub CI runner to reproduce this issue. I will get back to you once I have the result. Thanks!