Added local ollama models (#2433)

* added local ollama models

* add ollama_base_url config

* Update listen.py

* add docs

* Update opendevin/server/listen.py

Co-authored-by: Graham Neubig <neubig@gmail.com>

* lint

---------

Co-authored-by: Graham Neubig <neubig@gmail.com>
மனோஜ்குமார் பழனிச்சாமி
2024-07-04 21:26:26 +05:30
committed by GitHub
parent 6853cbb4f6
commit 688bd2a8fc
3 changed files with 19 additions and 1 deletion


@@ -35,8 +35,11 @@ But when running `docker run`, you'll need to add a few more arguments:
--add-host host.docker.internal:host-gateway \
-e LLM_API_KEY="ollama" \
-e LLM_BASE_URL="http://host.docker.internal:11434" \
-e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" \
```
`LLM_OLLAMA_BASE_URL` is optional. If set, it is used to list the models installed in Ollama so they can be shown in the UI.
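For context, a UI that honors `LLM_OLLAMA_BASE_URL` would typically query Ollama's `GET /api/tags` endpoint, which returns the locally installed models. A minimal sketch, assuming the standard Ollama REST API (the helper names here are illustrative and not part of this commit):

```python
import json
import urllib.request


def model_names(tags_response: dict) -> list[str]:
    # Ollama's /api/tags responds with
    # {"models": [{"name": "llama3:latest", ...}, ...]}
    return [m["name"] for m in tags_response.get("models", [])]


def list_installed_models(base_url: str) -> list[str]:
    # base_url would be the LLM_OLLAMA_BASE_URL value,
    # e.g. "http://host.docker.internal:11434"
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))
```

If the variable is unset, the UI simply has no model list to display, and you must type the model name yourself.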
For example:
```bash