simplify readme (#366)

* simplify readme

* Update config.toml.template

* Update vite.config.ts (#372)

* Update vite.config.ts

* Update frontend/vite.config.ts

---------

Co-authored-by: Robert Brennan <accounts@rbren.io>

* remove old langchains infra

* remove refs to OPENAI_API_KEY

* simplify opendevin readme

---------

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Robert Brennan 2024-03-30 10:15:20 -04:00 committed by GitHub
parent 11ed011b11
commit 6bd566d780
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
7 changed files with 18 additions and 100 deletions


@@ -23,26 +23,18 @@ OpenDevin is still a work in progress. But you can run the alpha version to see
* [NodeJS](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) >= 14.8
### Installation
First, make sure Docker is running:
```bash
docker ps # this should exit successfully
```
Then pull our latest image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox)
First, pull our latest sandbox image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox)
```bash
docker pull ghcr.io/opendevin/sandbox
```
Then copy `config.toml.template` to `config.toml`. Add an API key to `config.toml`.
(See below for how to use different models.)
Then copy `config.toml.template` to `config.toml`. Add an OpenAI API key to `config.toml`,
or see below for how to use different models.
```toml
OPENAI_API_KEY="..."
WORKSPACE_DIR="..."
LLM_API_KEY="sk-..."
```
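As a concrete sketch of the copy-and-edit step (the template contents mirror `config.toml.template` from this commit; the key value here is a placeholder, not a real key):

```shell
# Sketch of setting up config.toml; the template body below mirrors
# config.toml.template from this commit, and "sk-placeholder" is illustrative.
cat > config.toml.template <<'EOF'
# This is a template. Run `cp config.toml.template config.toml` to use it.
LLM_API_KEY="<YOUR OPENAI API KEY>"
WORKSPACE_DIR="./workspace"
EOF
cp config.toml.template config.toml
sed -i 's|<YOUR OPENAI API KEY>|sk-placeholder|' config.toml
grep LLM_API_KEY config.toml
```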
Next, start the backend.
We manage python packages and the virtual environment with `pipenv`.
Make sure you have python >= 3.10.
Next, start the backend:
```bash
python -m pip install pipenv
python -m pipenv install -v
@@ -56,6 +48,7 @@ cd frontend
npm install
npm start
```
You'll see OpenDevin running at localhost:3001
### Picking a Model
We use LiteLLM, so you can run OpenDevin with any foundation model, including OpenAI, Claude, and Gemini.
@@ -79,20 +72,6 @@ And you can customize which embeddings are used for the vector database storage:
LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
```
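Putting the model settings together, a `config.toml` using these keys might look like the following (the model strings are illustrative; any model name LiteLLM supports should work):

```toml
LLM_API_KEY="sk-..."             # API key for your chosen provider
LLM_MODEL="gpt-4-0125-preview"   # any LiteLLM-supported model string
LLM_EMBEDDING_MODEL="local"      # "llama2", "openai", "azureopenai", or "local"
WORKSPACE_DIR="./workspace"
```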
### Running the app
You should be able to run the backend now
```bash
uvicorn opendevin.server.listen:app --port 3000
```
Then in a second terminal:
```bash
cd frontend
npm install
npm run start -- --port 3001
```
You'll see OpenDevin running at localhost:3001
### Running on the Command Line
You can run OpenDevin from your command line:
```bash


@@ -1,20 +0,0 @@
FROM python:3.12-bookworm
ENV OPENAI_API_KEY=""
ENV OPENAI_MODEL="gpt-4-0125-preview"
RUN git config --global user.email "devin@opendevin.com"
RUN git config --global user.name "Devin Abierto"
RUN apt-get update
RUN apt-get install -y git sudo curl
WORKDIR /app
COPY requirements.txt ./requirements.txt
RUN python -m pip install -r requirements.txt
WORKDIR /workspace
CMD ["python", "/app/opendevin/main.py", "/workspace"]


@@ -1,19 +0,0 @@
#!/bin/bash
set -eo pipefail
rm -rf `pwd`/workspace
mkdir -p `pwd`/workspace
pushd agenthub/langchains_agent
docker build -t control-loop .
popd
docker run \
-e DEBUG=$DEBUG \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
-u `id -u`:`id -g` \
-v `pwd`/workspace:/workspace \
-v `pwd`:/app:ro \
-e PYTHONPATH=/app \
control-loop \
python /app/opendevin/main.py -d /workspace -t "${1}"


@@ -1,4 +1,4 @@
# This is a template. Run `cp config.toml.template config.toml` to use it.
OPENAI_API_KEY="<YOUR OPENAI API KEY>"
LLM_API_KEY="<YOUR OPENAI API KEY>"
WORKSPACE_DIR="./workspace"


@@ -7,7 +7,6 @@ export default defineConfig({
base: "",
plugins: [react(), viteTsconfigPaths()],
server: {
// this sets a default port to 3000
port: 3001,
},
});


@@ -2,9 +2,16 @@
This is a Python package that contains all the shared abstractions (e.g., Agent) and components (e.g., sandbox, web browser, search API, selenium).
## Sandbox component
See the [main README](../README.md) for instructions on how to run OpenDevin from the command line.
Run the docker-based sandbox interactive:
## Sandbox Image
```bash
docker build -f opendevin/sandbox/Dockerfile -t opendevin/sandbox:v0.1 .
```
## Sandbox Runner
Run the docker-based interactive sandbox:
```bash
mkdir workspace
@@ -17,31 +24,3 @@ Example screenshot:
<img width="868" alt="image" src="https://github.com/OpenDevin/OpenDevin/assets/38853559/8dedcdee-437a-4469-870f-be29ca2b7c32">
## How to run
1. Build the sandbox image locally. If you want to use a specific image tag, also update the corresponding variable in the code; the default tag is `latest`.
```bash
docker build -f opendevin/sandbox/Dockerfile -t opendevin/sandbox:v0.1 .
```
Or you can pull the latest image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox):
```bash
docker pull ghcr.io/opendevin/sandbox
```
2. Set `OPENAI_API_KEY` (see [here](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety) for API key safety best practices), and choose the model you want; the default is `gpt-4-0125-preview`.
```bash
export OPENAI_API_KEY=xxxxxxx
```
3. Install the required packages.
```bash
pip install -r requirements.txt
```
If you still hit an error like `ModuleNotFoundError: No module named 'agenthub'`, add the repository root to the `PYTHONPATH` environment variable.
4. Run the following command to start:
```bash
PYTHONPATH=`pwd` python ./opendevin/main.py -d ./workspace -t "write a bash script that prints hello world"
```
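The `PYTHONPATH` workaround mentioned above can be sketched as follows (run from the repository root; the exact path depends on your checkout):

```shell
# Export the repo root so Python can resolve top-level packages like `agenthub`
export PYTHONPATH="$(pwd)"
echo "PYTHONPATH=$PYTHONPATH"
```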


@@ -7,7 +7,7 @@ This is a WebSocket server that executes tasks using an agent.
Create a `.env` file with the contents
```sh
OPENAI_API_KEY=<YOUR OPENAI API KEY>
LLM_API_KEY=<YOUR OPENAI API KEY>
```
Install requirements:
@@ -36,7 +36,7 @@ websocat ws://127.0.0.1:3000/ws
## Supported Environment Variables
```sh
OPENAI_API_KEY=sk-... # Your OpenAI API Key
LLM_API_KEY=sk-... # Your OpenAI API Key
LLM_MODEL=gpt-4-0125-preview # Default model for the agent to use
WORKSPACE_DIR=/path/to/your/workspace # Default path to the agent's workspace
```
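For example, a `.env` file combining the variables above (all values are placeholders):

```shell
# Write a .env with placeholder values for the variables listed above
cat > .env <<'EOF'
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4-0125-preview
WORKSPACE_DIR=./workspace
EOF
cat .env
```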