# Local LLM Guide with Ollama Server
## 0. Install and Start Ollama

Linux: run the following command (inside a conda environment with CUDA set up, if you want GPU support):

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows or macOS: download the installer from https://ollama.com/download.

Then start the server:

```bash
ollama serve
```
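To confirm the server is up, you can query Ollama's REST API, which listens on port 11434 by default (a quick sanity check, not a required step):

```bash
# List the models available locally. An empty "models" array is fine at
# this point -- it just means the server is running with nothing pulled yet.
curl http://localhost:11434/api/tags
```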
## 1. Install Models

Ollama model names can be found in the model library at https://ollama.com/library. For a small example, you can use the codellama:7b model; bigger models will generally perform better.

```bash
ollama pull codellama:7b
```
You can check which models you have downloaded like this:

```bash
~$ ollama list
NAME                               ID              SIZE    MODIFIED
llama2:latest                      78e26419b446    3.8 GB  6 weeks ago
mistral:7b-instruct-v0.2-q4_K_M    eb14864c7427    4.4 GB  2 weeks ago
starcoder2:latest                  f67ae0f64584    1.7 GB  19 hours ago
```
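Once a model is pulled, you can sanity-check it from the CLI before wiring it into OpenDevin (the prompt below is just an arbitrary example):

```bash
# Send a one-off prompt to the model. Ollama loads the model into memory
# on first use, so the first response may take a while.
ollama run codellama:7b "Write a Python function that reverses a string."
```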
## 2. Start OpenDevin

Use the instructions in README.md to start OpenDevin using Docker. When running `docker run`, add the following environment variables using `-e`:

```bash
LLM_API_KEY="ollama"
LLM_BASE_URL="http://localhost:11434"
```
For example:

```bash
# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_DIR=$(pwd)/workspace

docker run \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://localhost:11434" \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
    -v $WORKSPACE_DIR:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    ghcr.io/opendevin/opendevin:main
```
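Note that `localhost` inside the OpenDevin container refers to the container itself, not your machine, so the container may not be able to reach an Ollama server running on the host at that address. If you hit connection errors, one common workaround (an assumption on our part, not part of the original setup) is to point the base URL at the Docker host instead:

```bash
# host.docker.internal resolves to the host machine on macOS/Windows;
# on Linux (Docker 20.10+), --add-host maps it to the host gateway.
docker run \
    --add-host host.docker.internal:host-gateway \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
    -v $WORKSPACE_DIR:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    ghcr.io/opendevin/opendevin:main
```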
You should now be able to connect to http://localhost:3000/.
## 3. Select your Model

In the OpenDevin UI, click the Settings wheel in the bottom-left corner. Then, in the Model input, enter codellama:7b (or the name of the model you pulled earlier, exactly as it appears in `ollama list`) and click Save.

And now you're ready to go!