feat: LocalLLM docs without docker (#1269)

* feat: start Devin without Docker locally

* chore: make consistent model choices

* chore: more detailed explanation for using a litellm server as a workaround

* chore: simplify PR
Season 2024-04-25 00:47:20 +08:00 committed by GitHub
parent 236b7bf6ea
commit ab3e18667b

@@ -26,6 +26,7 @@ starcoder2:latest f67ae0f64584 1.7 GB 19 hours ago
## 2. Start OpenDevin
### 2.1 Docker
Use the instructions in [README.md](/README.md) to start OpenDevin using Docker.
But when running `docker run`, you'll need to add a few more arguments:
@@ -54,6 +55,22 @@ docker run \
You should now be able to connect to `http://localhost:3000/`
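Before connecting OpenDevin to the local model, it can help to confirm the Ollama server is actually up. A quick check, assuming Ollama's default address `http://localhost:11434`:

```shell
# Sanity check: confirm the local Ollama server is reachable
# before pointing OpenDevin at it (default port 11434 assumed).
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is reachable"
else
  echo "Ollama is not reachable; start it with: ollama serve"
fi
```

Note that when OpenDevin runs inside a container, `localhost` refers to the container itself, not the host machine, so the base URL passed to the container may need to point at the host's address instead.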
### 2.2 Build from Source
Use the instructions in [Development.md](/Development.md) to build OpenDevin.
Make sure `config.toml` exists by running `make setup-config`, which will create one for you. In `config.toml`, enter the following:
```
LLM_MODEL="ollama/codellama:7b"
LLM_API_KEY="ollama"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_BASE="./workspace"
WORKSPACE_DIR="$(pwd)/workspace"
```
Replace `LLM_MODEL` with the model of your choice if needed.
Done! You can now start OpenDevin with `make run`, without Docker. You should then be able to connect to `http://localhost:3000/`.
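Putting the build-from-source steps together, the flow looks like this (a sketch; it assumes you are in the OpenDevin repo root and have Ollama installed):

```
ollama pull codellama:7b   # fetch the model named in config.toml
make setup-config          # creates config.toml if it does not exist
# edit config.toml as shown above, then:
make run                   # OpenDevin is served at http://localhost:3000/
```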
## 3. Select your Model
In the OpenDevin UI, click on the Settings wheel in the bottom-left corner.