doc: Small fixes to documentation (#1783)

mamoodi 2024-05-14 13:36:53 -04:00 committed by GitHub
parent dcb5d1ce0a
commit 1d8402a14a
5 changed files with 10 additions and 10 deletions


@@ -34,7 +34,7 @@ Now we have both Slack workspace for the collaboration on building OpenDevin and
- [Slack workspace](https://join.slack.com/t/opendevin/shared_invite/zt-2ggtwn3k5-PvAA2LUmqGHVZ~XzGq~ILw)
- [Discord server](https://discord.gg/ESHStjSjD4)
-If you would love to contribute, feel free to join our community (note that now there is no need to fill in the [form](https://forms.gle/758d5p6Ve8r2nxxq6)). Let's simplify software engineering together!
+If you would love to contribute, feel free to join our community. Let's simplify software engineering together!
🐚 **Code less, make more with OpenDevin.**


@@ -60,7 +60,7 @@ Explore the codebase of OpenDevin on [GitHub](https://github.com/OpenDevin/OpenD
The easiest way to run OpenDevin is inside a Docker container.
-To start the app, run these commands, replacing `$(pwd)/workspace` with the path to the code you want OpenDevin to work with.
+To start the app, run these commands, replacing `$(pwd)/workspace` with the directory you want OpenDevin to work with.
```
# Your OpenAI API key, or any other LLM API key
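# (Assumed continuation; the diff truncates this block. The variable name is
# inferred from the comment above, and the value is a placeholder.)
export LLM_API_KEY="your-api-key"
```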


@@ -44,4 +44,4 @@ are actively working on building better open source models!
Some LLMs have rate limits and may require retries. OpenDevin will automatically retry requests if it receives a 429 error or API connection error.
You can set `LLM_NUM_RETRIES`, `LLM_RETRY_MIN_WAIT`, `LLM_RETRY_MAX_WAIT` environment variables to control the number of retries and the time between retries.
-By default, `LLM_NUM_RETRIES` is 5 and `LLM_RETRY_MIN_WAIT`, `LLM_RETRY_MAX_WAIT` are 3 seconds and respectively 60 seconds.
+By default, `LLM_NUM_RETRIES` is 5 and `LLM_RETRY_MIN_WAIT`, `LLM_RETRY_MAX_WAIT` are 3 seconds and 60 seconds respectively.
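For example, a more patient retry policy could be set like this (illustrative values only; the variable names come from the paragraph above):

```bash
export LLM_NUM_RETRIES=8        # retry up to 8 times on 429s or API connection errors
export LLM_RETRY_MIN_WAIT=5     # initial wait between retries, in seconds
export LLM_RETRY_MAX_WAIT=120   # maximum wait between retries, in seconds
```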


@@ -1,9 +1,9 @@
# Local LLM with Ollama
Ensure that you have the Ollama server up and running.
-For detailed startup instructions, refer to the [here](https://github.com/ollama/ollama)
+For detailed startup instructions, refer to [here](https://github.com/ollama/ollama)
-This guide assumes you've started ollama with `ollama serve`. If you're running ollama differently (e.g. inside docker), the instructions might need to be modified. Please note that if you're running wsl the default ollama configuration blocks requests from docker containers. See [here](#4-configuring-the-ollama-service-wsl).
+This guide assumes you've started ollama with `ollama serve`. If you're running ollama differently (e.g. inside docker), the instructions might need to be modified. Please note that if you're running WSL the default ollama configuration blocks requests from docker containers. See [here](#configuring-the-ollama-service-wsl).
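As a quick sanity check that the server is up before continuing, you can query Ollama's HTTP API (a minimal sketch; this endpoint is part of Ollama's standard API, not something introduced by this change):

```bash
# Should return a JSON list of locally installed models.
curl http://localhost:11434/api/tags
```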
## Pull Models
@@ -87,7 +87,7 @@ And now you're ready to go!
## Configuring the ollama service (WSL)
-The default configuration for ollama in wsl only serves localhost. This means you can't reach it from a docker container. eg. it wont work with OpenDevin. First let's test that ollama is running correctly.
+The default configuration for ollama in WSL only serves localhost. This means you can't reach it from a docker container. eg. it wont work with OpenDevin. First let's test that ollama is running correctly.
```bash
ollama list # get list of installed models
@@ -96,7 +96,7 @@ curl http://localhost:11434/api/generate -d '{"model":"[NAME]","prompt":"hi"}'
#ex. curl http://localhost:11434/api/generate -d '{"model":"codellama","prompt":"hi"}' #the tag is optional if there is only one
```
-Once that is done test that it allows "outside" requests, like those from inside a docker container.
+Once that is done, test that it allows "outside" requests, like those from inside a docker container.
```bash
docker ps # get list of running docker containers, for most accurate test choose the open devin sandbox container.
```
@@ -106,7 +106,7 @@ docker exec [CONTAINER ID] curl http://host.docker.internal:11434/api/generate -
## Fixing it
-Now let's make it work, edit /etc/systemd/system/ollama.service with sudo privileges. (Path may vary depending on linux flavor)
+Now let's make it work. Edit /etc/systemd/system/ollama.service with sudo privileges. (Path may vary depending on linux flavor)
```bash
sudo vi /etc/systemd/system/ollama.service
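# (Assumed fix; the diff truncates here. The usual change is to add, under
# the [Service] section of ollama.service:
#   Environment="OLLAMA_HOST=0.0.0.0"
# so ollama listens on all interfaces, then reload and restart the service.)
sudo systemctl daemon-reload
sudo systemctl restart ollama
```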


@@ -7,7 +7,7 @@ Please be sure to run all commands inside your WSL terminal.
### Failed to create opendevin user
-If you encounter the following error during setup: `Exception: Failed to create opendevin user in sandbox: b'useradd: UID 0 is not unique\n'`
+If you encounter the following error during setup: `Exception: Failed to create opendevin user in sandbox: b'useradd: UID 0 is not unique\n'`.
You can resolve it by running:
```bash
export SANDBOX_USER_ID=1000
```
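If your WSL user's UID is not 1000, you can check it first and export that value instead (an assumption about intent; the docs shown here only give the literal command above):

```bash
id -u   # prints your UID; set SANDBOX_USER_ID to this value
```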
@@ -20,7 +20,7 @@ If you face issues running Poetry even after installing it during the build proc
### NoneType object has no attribute 'request'
-If you experiencing issues related to networking, such as `NoneType object has no attribute 'request'` when executing `make run`, you may need to configure your WSL2 networking settings. Follow these steps:
+If you are experiencing issues related to networking, such as `NoneType object has no attribute 'request'` when executing `make run`, you may need to configure your WSL2 networking settings. Follow these steps:
- Open or create the `.wslconfig` file located at `C:\Users\%username%\.wslconfig` on your Windows host machine.
- Add the following configuration to the `.wslconfig` file:
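One commonly suggested configuration for this kind of WSL2 networking issue (an assumption; the snippet itself is not shown in this excerpt, and mirrored networking requires a recent WSL release) is:

```
[wsl2]
# Assumed setting: mirrored networking makes localhost behave the same
# inside WSL2 and on the Windows host, avoiding many connection failures.
networkingMode=mirrored
```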