mirror of
https://github.com/OpenHands/OpenHands.git
synced 2026-03-22 05:37:20 +08:00
Add Docker DOOD setup (#1023)
* simplified get
* resolved merge conflicts
* removed default param for get
* add dood setup
* add readme
* better build process
* multi-stage build
* revert makefile
* rm entrypoint.sh
* adjust ssh box for docker
* update readme
* update readme
* fix hostname
* change workspace setting
* add workspace_mount_base
* fixes for workspace dir
* clean up frontend
* refactor dockerfile
* try download.py
* change docker order a bit
* remove workspace_dir from frontend settings
* fix merge issues
* Update opendevin/config.py
* remove relpath logic from server
* rename workspace_mount_base to workspace_base
* remove workspace dir plumbing for now
* delint
* delint
* move workspace base dir
* remove refs to workspace_dir
* factor out constant
* fix local directory usage
* dont require dir
* fix docs
* fix arg parsing for task
* implement WORKSPACE_MOUNT_PATH
* fix workspace dir
* fix ports
* fix merge issues
* add makefile
* revert settingsService
* fix string
* Add address
* Update Dockerfile
* Update local_box.py
* fix lint
* move to port 3000

Co-authored-by: மனோஜ்குமார் பழனிச்சாமி <smartmanoj42857@gmail.com>
Co-authored-by: enyst <engel.nyst@gmail.com>
.dockerignore (new file, 4 lines)
@@ -0,0 +1,4 @@
frontend/node_modules
config.toml
.envrc
.env
Development.md (new file, 66 lines)
@@ -0,0 +1,66 @@
# Development Guide
This guide is for people working on OpenDevin and editing the source code.

## Start the server for development

### 1. Requirements
* Linux, Mac OS, or [WSL on Windows](https://learn.microsoft.com/en-us/windows/wsl/install)
* [Docker](https://docs.docker.com/engine/install/) (for those on MacOS, make sure to allow the default Docker socket to be used from advanced settings!)
* [Python](https://www.python.org/downloads/) >= 3.11
* [NodeJS](https://nodejs.org/en/download/package-manager) >= 18.17.1
* [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) >= 1.8

Make sure you have all these dependencies installed before moving on to `make build`.

### 2. Build and Set Up the Environment

- **Build the Project:** Begin by building the project, which includes setting up the environment and installing dependencies. This step ensures that OpenDevin is ready to run smoothly on your system.
```bash
make build
```

### 3. Configuring the Language Model

OpenDevin supports a diverse array of Language Models (LMs) through the powerful [litellm](https://docs.litellm.ai) library. By default, we've chosen the mighty GPT-4 from OpenAI as our go-to model, but the world is your oyster! You can unleash the potential of Anthropic's suave Claude, the enigmatic Llama, or any other LM that piques your interest.

To configure the LM of your choice, follow these steps:

1. **Using the Makefile: The Effortless Approach**
   With a single command, you can have a smooth LM setup for your OpenDevin experience. Simply run:
   ```bash
   make setup-config
   ```
   This command will prompt you to enter the LLM API key and model name, ensuring that OpenDevin is tailored to your specific needs.

**Note on Alternative Models:**
Some alternative models may prove more challenging to tame than others. Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest. And if you've already mastered the art of wielding a model other than OpenAI's GPT, we encourage you to [share your setup instructions with us](https://github.com/OpenDevin/OpenDevin/issues/417).

For a full list of the LM providers and models available, please consult the [litellm documentation](https://docs.litellm.ai/docs/providers).

There is also [documentation for running with local models using ollama](./docs/documentation/LOCAL_LLM_GUIDE.md).

### 4. Run the Application

- **Run the Application:** Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
```bash
make run
```

### 5. Individual Server Startup

- **Start the Backend Server:** If you prefer, you can start the backend server independently to focus on backend-related tasks or configurations.
```bash
make start-backend
```

- **Start the Frontend Server:** Similarly, you can start the frontend server on its own to work on frontend-related components or interface enhancements.
```bash
make start-frontend
```

### 6. Help

- **Get Some Help:** Need assistance or information on available targets and commands? The help command provides all the necessary guidance to ensure a smooth experience with OpenDevin.
```bash
make help
```
README.md (91 lines changed)
@@ -121,76 +121,51 @@ After completing the MVP, the team will focus on research in various areas, incl
 
 Getting started with the OpenDevin project is incredibly easy. Follow these simple steps to set up and run OpenDevin on your system:
 
-### 1. Requirements
-* Linux, Mac OS, or [WSL on Windows](https://learn.microsoft.com/en-us/windows/wsl/install)
-* [Docker](https://docs.docker.com/engine/install/) (For those on MacOS, make sure to allow the default Docker socket to be used from advanced settings!)
-* [Python](https://www.python.org/downloads/) >= 3.11
-* [NodeJS](https://nodejs.org/en/download/package-manager) >= 18.17.1
-* [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) >= 1.8
-
-Make sure you have all these dependencies installed before moving on to `make build`.
-
-### 2. Build and Setup The Environment
-
-- **Build the Project:** Begin by building the project, which includes setting up the environment and installing dependencies. This step ensures that OpenDevin is ready to run smoothly on your system.
-```bash
-make build
-```
-
-### 3. Configuring the Language Model
-
-OpenDevin supports a diverse array of Language Models (LMs) through the powerful [litellm](https://docs.litellm.ai) library. By default, we've chosen the mighty GPT-4 from OpenAI as our go-to model, but the world is your oyster! You can unleash the potential of Anthropic's suave Claude, the enigmatic Llama, or any other LM that piques your interest.
-
-To configure the LM of your choice, follow these steps:
-
-1. **Using the Makefile: The Effortless Approach**
-   With a single command, you can have a smooth LM setup for your OpenDevin experience. Simply run:
-   ```bash
-   make setup-config
-   ```
-   This command will prompt you to enter the LLM API key and model name, ensuring that OpenDevin is tailored to your specific needs.
+The easiest way to run OpenDevin is inside a Docker container.
+You can run:
+```bash
+# Your OpenAI API key, or any other LLM API key
+export LLM_API_KEY="sk-..."
+
+# The directory you want OpenDevin to modify. MUST be an absolute path!
+export WORKSPACE_DIR=$(pwd)/workspace
+
+docker build -t opendevin-app -f container/Dockerfile .
+
+docker run \
+    -e LLM_API_KEY \
+    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
+    -v $WORKSPACE_DIR:/opt/workspace_base \
+    -v /var/run/docker.sock:/var/run/docker.sock \
+    -p 3000:3000 \
+    opendevin-app
+```
+Replace `$(pwd)/workspace` with the path to the code you want OpenDevin to work with.
+
+You can find opendevin running at `http://localhost:3000`.
+
+See [Development.md](Development.md) for instructions on running OpenDevin without Docker.
+
+## 🤖 LLM Backends
+OpenDevin can work with any LLM backend.
+For a full list of the LM providers and models available, please consult the
+[litellm documentation](https://docs.litellm.ai/docs/providers).
+
+The following environment variables might be necessary for some LLMs:
+* `LLM_API_KEY`
+* `LLM_BASE_URL`
+* `LLM_EMBEDDING_MODEL`
+* `LLM_DEPLOYMENT_NAME`
+* `LLM_API_VERSION`
 
 **Note on Alternative Models:**
-Some alternative models may prove more challenging to tame than others. Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest. And if you've already mastered the art of wielding a model other than OpenAI's GPT, we encourage you to [share your setup instructions with us](https://github.com/OpenDevin/OpenDevin/issues/417).
-
-For a full list of the LM providers and models available, please consult the [litellm documentation](https://docs.litellm.ai/docs/providers).
+Some alternative models may prove more challenging to tame than others.
+Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest.
+And if you've already mastered the art of wielding a model other than OpenAI's GPT,
+we encourage you to [share your setup instructions with us](https://github.com/OpenDevin/OpenDevin/issues/417).
 
 There is also [documentation for running with local models using ollama](./docs/documentation/LOCAL_LLM_GUIDE.md).
 
+We are working on a [guide for running OpenDevin with Azure](./docs/documentation/AZURE_LLM_GUIDE.md).
+
-### 4. Run the Application
-
-- **Run the Application:** Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
-```bash
-make run
-```
-
-### 5. Individual Server Startup
-
-- **Start the Backend Server:** If you prefer, you can start the backend server independently to focus on backend-related tasks or configurations.
-```bash
-make start-backend
-```
-
-- **Start the Frontend Server:** Similarly, you can start the frontend server on its own to work on frontend-related components or interface enhancements.
-```bash
-make start-frontend
-```
-
-### 6. Help
-
-- **Get Some Help:** Need assistance or information on available targets and commands? The help command provides all the necessary guidance to ensure a smooth experience with OpenDevin.
-```bash
-make help
-```
-
 <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
   <a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
     ↑ Back to Top ↑
   </a>
 </p>
 
 ## ⭐️ Research Strategy
 
 Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:
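A note on why the `docker run` above passes both `-v $WORKSPACE_DIR:/opt/workspace_base` and `-e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR`: with `/var/run/docker.sock` mounted (Docker-outside-of-Docker), any sandbox container the app launches is a *sibling* created by the host daemon, so its bind-mount source must be a host path, not a path inside the app container. A sketch of that translation, with a hypothetical helper name not part of this commit:

```python
# Map a path seen inside the app container back to the host path that a
# sibling sandbox container would need to bind-mount. APP_MOUNT matches the
# /opt/workspace_base target used in the README's docker run command.
import os

APP_MOUNT = "/opt/workspace_base"  # where the workspace appears inside the app container

def host_path_for(app_path: str, host_workspace: str) -> str:
    """Translate an in-app path to the equivalent host path for DOOD mounts."""
    rel = os.path.relpath(app_path, APP_MOUNT)
    return os.path.normpath(os.path.join(host_workspace, rel))
```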
container/Dockerfile (new file, 35 lines)
@@ -0,0 +1,35 @@
FROM node:21.7.2-bookworm-slim as frontend-builder

WORKDIR /app

COPY ./frontend/package.json frontend/package-lock.json ./
RUN npm install

COPY ./frontend ./
RUN npm run build


FROM python:3.12-slim as runtime

WORKDIR /app
ENV PYTHONPATH '/app'
ENV RUN_AS_DEVIN=false
ENV USE_HOST_NETWORK=false
ENV SSH_HOSTNAME=host.docker.internal
ENV WORKSPACE_BASE=/opt/workspace_base
RUN mkdir -p $WORKSPACE_BASE

RUN apt-get update -y \
    && apt-get install -y curl make git build-essential \
    && python3 -m pip install poetry --break-system-packages

COPY ./pyproject.toml ./poetry.lock ./
RUN poetry install --without evaluation

COPY ./opendevin ./opendevin
COPY ./agenthub ./agenthub
RUN poetry run python opendevin/download.py # No-op to download assets

COPY --from=frontend-builder /app/dist ./frontend/dist

CMD ["poetry", "run", "uvicorn", "opendevin.server.listen:app", "--host", "0.0.0.0", "--port", "3000"]
container/Makefile (new file, 31 lines)
@@ -0,0 +1,31 @@
DOCKER_BUILD_REGISTRY=ghcr.io
DOCKER_BUILD_ORG=opendevin
DOCKER_BUILD_REPO=opendevin
DOCKER_BUILD_TAG=v0.2
FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):$(DOCKER_BUILD_TAG)

LATEST_FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):latest

MAJOR_VERSION=$(shell echo $(DOCKER_BUILD_TAG) | cut -d. -f1)
MAJOR_FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):$(MAJOR_VERSION)
MINOR_VERSION=$(shell echo $(DOCKER_BUILD_TAG) | cut -d. -f1,2)
MINOR_FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):$(MINOR_VERSION)

# normally, for local build testing or development. use cross platform build for sharing images to others.
build:
	docker build -f Dockerfile -t ${FULL_IMAGE} -t ${LATEST_FULL_IMAGE} ..

push:
	docker push ${FULL_IMAGE} ${LATEST_FULL_IMAGE}

test:
	docker buildx build --platform linux/amd64 \
		-t ${FULL_IMAGE} -t ${LATEST_FULL_IMAGE} --load -f Dockerfile ..

# cross platform build, you may need to manually stop the buildx(buildkit) container
all:
	docker buildx build --platform linux/amd64,linux/arm64 \
		-t ${FULL_IMAGE} -t ${LATEST_FULL_IMAGE} -t ${MINOR_FULL_IMAGE} --push -f Dockerfile ..

get-full-image:
	@echo ${FULL_IMAGE}
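The Makefile derives major and minor image tags from `DOCKER_BUILD_TAG` with `cut -d. -f1` and `cut -d. -f1,2`. The same derivation, sketched in Python for clarity:

```python
# Mirror the Makefile's tag derivation: split on '.' and keep one or two fields.
def major(tag: str) -> str:
    return tag.split('.')[0]              # cut -d. -f1  -> e.g. "v0"

def minor(tag: str) -> str:
    return '.'.join(tag.split('.')[:2])   # cut -d. -f1,2 -> e.g. "v0.2"
```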
@@ -4,7 +4,6 @@ import {
   Autocomplete,
   AutocompleteItem,
   Button,
-  Input,
   Modal,
   ModalBody,
   ModalContent,
@@ -46,9 +45,6 @@ function InnerSettingModal({ isOpen, onClose }: Props): JSX.Element {
     settings[ArgConfigType.LLM_MODEL],
   );
   const [agent, setAgent] = useState(settings[ArgConfigType.AGENT]);
-  const [workspaceDirectory, setWorkspaceDirectory] = useState(
-    settings[ArgConfigType.WORKSPACE_DIR],
-  );
   const [language, setLanguage] = useState(settings[ArgConfigType.LANGUAGE]);
 
   const { t } = useTranslation();
@@ -78,7 +74,6 @@ function InnerSettingModal({ isOpen, onClose }: Props): JSX.Element {
       {
         [ArgConfigType.LLM_MODEL]: model ?? inputModel,
         [ArgConfigType.AGENT]: agent,
-        [ArgConfigType.WORKSPACE_DIR]: workspaceDirectory,
         [ArgConfigType.LANGUAGE]: language,
       },
       Object.fromEntries(
@@ -100,18 +95,6 @@ function InnerSettingModal({ isOpen, onClose }: Props): JSX.Element {
           {t(I18nKey.CONFIGURATION$MODAL_TITLE)}
         </ModalHeader>
         <ModalBody>
-          <Input
-            type="text"
-            label={t(
-              I18nKey.CONFIGURATION$OPENDEVIN_WORKSPACE_DIRECTORY_INPUT_LABEL,
-            )}
-            defaultValue={workspaceDirectory}
-            placeholder={t(
-              I18nKey.CONFIGURATION$OPENDEVIN_WORKSPACE_DIRECTORY_INPUT_PLACEHOLDER,
-            )}
-            onChange={(e) => setWorkspaceDirectory(e.target.value)}
-          />
-
           <Autocomplete
             defaultItems={supportedModels.map((v: string) => ({
               label: v,
@@ -9,8 +9,6 @@ export const settingsSlice = createSlice({
     [ArgConfigType.LLM_MODEL]:
       localStorage.getItem(ArgConfigType.LLM_MODEL) || "",
     [ArgConfigType.AGENT]: localStorage.getItem(ArgConfigType.AGENT) || "",
-    [ArgConfigType.WORKSPACE_DIR]:
-      localStorage.getItem(ArgConfigType.WORKSPACE_DIR) || "",
     [ArgConfigType.LANGUAGE]:
       localStorage.getItem(ArgConfigType.LANGUAGE) || "en",
   } as { [key: string]: string },
@@ -19,7 +19,7 @@ enum ArgConfigType {
 const SupportedList: string[] = [
   // ArgConfigType.LLM_API_KEY,
   // ArgConfigType.LLM_BASE_URL,
-  ArgConfigType.WORKSPACE_DIR,
+  // ArgConfigType.WORKSPACE_DIR,
   ArgConfigType.LLM_MODEL,
   // ArgConfigType.SANDBOX_CONTAINER_IMAGE,
   // ArgConfigType.RUN_AS_DEVIN,
@@ -24,7 +24,6 @@ export default defineConfig({
       "/api": {
         target: `http://${BACKEND_HOST}/`,
         changeOrigin: true,
-        rewrite: (path: string) => path.replace(/^\/api/, ""),
       },
       "/ws": {
         target: `ws://${BACKEND_HOST}/`,
@@ -3,18 +3,18 @@ from dataclasses import dataclass
 
 from opendevin.observation import FileReadObservation, FileWriteObservation
 from opendevin.schema import ActionType
+from opendevin import config
 
 from .base import ExecutableAction
 
 # This is the path where the workspace is mounted in the container
 # The LLM sometimes returns paths with this prefix, so we need to remove it
-PATH_PREFIX = '/workspace/'
+SANDBOX_PATH_PREFIX = '/workspace/'
 
 
-def resolve_path(base_path, file_path):
-    if file_path.startswith(PATH_PREFIX):
-        file_path = file_path[len(PATH_PREFIX):]
-    return os.path.join(base_path, file_path)
+def resolve_path(file_path):
+    if file_path.startswith(SANDBOX_PATH_PREFIX):
+        # Sometimes LLMs include the absolute path of the file inside the sandbox
+        file_path = file_path[len(SANDBOX_PATH_PREFIX):]
+    return os.path.join(config.get('WORKSPACE_BASE'), file_path)
 
 
 @dataclass
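The hunk above changes `resolve_path` to strip the in-sandbox `/workspace/` prefix and join onto the configured `WORKSPACE_BASE` rather than a caller-supplied base. A standalone sketch of that behavior, with the config lookup replaced by a plain argument so it runs outside OpenDevin:

```python
# Strip the sandbox mount prefix before joining onto the host-side base dir.
import os

SANDBOX_PATH_PREFIX = '/workspace/'

def resolve_path(file_path: str, workspace_base: str) -> str:
    # LLMs sometimes return absolute paths as seen inside the sandbox;
    # remove that prefix so the path resolves under the host workspace.
    if file_path.startswith(SANDBOX_PATH_PREFIX):
        file_path = file_path[len(SANDBOX_PATH_PREFIX):]
    return os.path.join(workspace_base, file_path)
```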
@@ -23,7 +23,7 @@ class FileReadAction(ExecutableAction):
     action: str = ActionType.READ
 
     def run(self, controller) -> FileReadObservation:
-        path = resolve_path(controller.workdir, self.path)
+        path = resolve_path(self.path)
         with open(path, 'r', encoding='utf-8') as file:
             return FileReadObservation(path=path, content=file.read())
 
@@ -39,7 +39,7 @@ class FileWriteAction(ExecutableAction):
     action: str = ActionType.WRITE
 
     def run(self, controller) -> FileWriteObservation:
-        whole_path = resolve_path(controller.workdir, self.path)
+        whole_path = resolve_path(self.path)
         with open(whole_path, 'w', encoding='utf-8') as file:
             file.write(self.content)
         return FileWriteObservation(content='', path=self.path)
 
@@ -1,6 +1,7 @@
 import copy
 import os
 
+import argparse
 import toml
 from dotenv import load_dotenv
 
@@ -11,7 +12,9 @@ load_dotenv()
 DEFAULT_CONFIG: dict = {
     ConfigType.LLM_API_KEY: None,
     ConfigType.LLM_BASE_URL: None,
-    ConfigType.WORKSPACE_DIR: os.path.join(os.getcwd(), 'workspace'),
+    ConfigType.WORKSPACE_BASE: os.getcwd(),
+    ConfigType.WORKSPACE_MOUNT_PATH: None,
+    ConfigType.WORKSPACE_MOUNT_REWRITE: None,
     ConfigType.LLM_MODEL: 'gpt-3.5-turbo-1106',
     ConfigType.SANDBOX_CONTAINER_IMAGE: 'ghcr.io/opendevin/sandbox',
     ConfigType.RUN_AS_DEVIN: 'true',
@@ -20,7 +23,6 @@ DEFAULT_CONFIG: dict = {
     ConfigType.LLM_API_VERSION: None,
     ConfigType.LLM_NUM_RETRIES: 6,
     ConfigType.LLM_COOLDOWN_TIME: 1,
-    ConfigType.DIRECTORY_REWRITE: '',
     ConfigType.MAX_ITERATIONS: 100,
     # GPT-4 pricing is $10 per 1M input tokens. Since tokenization happens on LLM side,
     # we cannot easily count number of tokens, but we can count characters.
@@ -28,6 +30,8 @@ DEFAULT_CONFIG: dict = {
     ConfigType.MAX_CHARS: 5_000_000,
     ConfigType.AGENT: 'MonologueAgent',
     ConfigType.SANDBOX_TYPE: 'ssh',
+    ConfigType.USE_HOST_NETWORK: 'false',
+    ConfigType.SSH_HOSTNAME: 'localhost',
     ConfigType.DISABLE_COLOR: 'false',
 }
 
@@ -45,13 +49,39 @@ for k, v in config.items():
         config[k] = tomlConfig[k]
 
 
+def parse_arguments():
+    parser = argparse.ArgumentParser(
+        description='Run an agent with a specific task')
+    parser.add_argument(
+        '-d',
+        '--directory',
+        type=str,
+        help='The working directory for the agent',
+    )
+    args, _ = parser.parse_known_args()
+    if args.directory:
+        config[ConfigType.WORKSPACE_BASE] = os.path.abspath(args.directory)
+        print(f"Setting workspace base to {config[ConfigType.WORKSPACE_BASE]}")
+
+
+parse_arguments()
+
+
+def finalize_config():
+    if config.get(ConfigType.WORKSPACE_MOUNT_REWRITE) and not config.get(ConfigType.WORKSPACE_MOUNT_PATH):
+        base = config.get(ConfigType.WORKSPACE_BASE) or os.getcwd()
+        parts = config[ConfigType.WORKSPACE_MOUNT_REWRITE].split(':')
+        config[ConfigType.WORKSPACE_MOUNT_PATH] = base.replace(parts[0], parts[1])
+
+
+finalize_config()
+
+
 def get(key: str, required: bool = False):
     """
     Get a key from the environment variables or config.toml or default configs.
     """
-    value = os.environ.get(key)
-    if not value:
-        value = config.get(key)
+    value = config.get(key)
     if not value and required:
         raise KeyError(f"Please set '{key}' in `config.toml` or `.env`.")
     return value
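Two pieces of logic in this config hunk can be illustrated standalone: the layered lookup (here assumed to resolve environment over `config.toml` over defaults, which is the precedence the old `get()` implemented explicitly), and the `WORKSPACE_MOUNT_REWRITE` prefix rewrite, which mirrors the commit's `parts[0]`/`parts[1]` split on `:`:

```python
# Layered config lookup and old:new path-prefix rewrite, as plain functions.
def lookup(key, env: dict, toml_cfg: dict, defaults: dict):
    """Later dicts win: defaults < config.toml < environment (assumed order)."""
    merged = {**defaults, **toml_cfg, **env}
    return merged.get(key)

def rewrite_mount_path(base: str, rewrite: str) -> str:
    """Apply an 'old:new' rewrite to a workspace base path."""
    old, new = rewrite.split(':')  # mirrors parts[0] / parts[1] in the commit
    return base.replace(old, new)
```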
@@ -73,14 +73,12 @@ class AgentController:
     id: str
     agent: Agent
     max_iterations: int
-    workdir: str
     command_manager: CommandManager
     callbacks: List[Callable]
 
     def __init__(
         self,
         agent: Agent,
-        workdir: str,
         sid: str = '',
         max_iterations: int = MAX_ITERATIONS,
         max_chars: int = MAX_CHARS,
@@ -90,10 +88,8 @@ class AgentController:
         self.id = sid
         self.agent = agent
         self.max_iterations = max_iterations
+        self.command_manager = CommandManager(self.id, container_image)
         self.max_chars = max_chars
-        self.workdir = workdir
-        self.command_manager = CommandManager(
-            self.id, workdir, container_image)
         self.callbacks = callbacks
 
     def update_state_for_step(self, i):
@@ -132,7 +128,6 @@ class AgentController:
         print('\n\n==============', flush=True)
         print('STEP', i, flush=True)
-        print_with_color(self.state.plan.main_goal, 'PLAN')
 
         if self.state.num_of_chars > self.max_chars:
             raise MaxCharsExceedError(
                 self.state.num_of_chars, self.max_chars)
 
@@ -8,26 +8,23 @@ from opendevin.schema import ConfigType
 class CommandManager:
     id: str
-    directory: str
     shell: Sandbox
 
     def __init__(
         self,
         sid: str,
-        directory: str,
         container_image: str | None = None,
     ):
-        self.directory = directory
         sandbox_type = config.get(ConfigType.SANDBOX_TYPE).lower()
         if sandbox_type == 'exec':
             self.shell = DockerExecBox(
-                sid=(sid or 'default'), workspace_dir=directory, container_image=container_image
+                sid=(sid or 'default'), container_image=container_image
             )
         elif sandbox_type == 'local':
-            self.shell = LocalBox(workspace_dir=directory)
+            self.shell = LocalBox()
         elif sandbox_type == 'ssh':
             self.shell = DockerSSHBox(
-                sid=(sid or 'default'), workspace_dir=directory, container_image=container_image
+                sid=(sid or 'default'), container_image=container_image
             )
         else:
             raise ValueError(f'Invalid sandbox type: {sandbox_type}')
 
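The `CommandManager` above selects a sandbox implementation from the `SANDBOX_TYPE` setting via an if/elif chain. The same dispatch, sketched with a lookup table and stand-in classes (not the real sandbox classes):

```python
# Table-driven version of the sandbox-type dispatch shown in CommandManager.
class ExecBox: ...
class LocalBox: ...
class SSHBox: ...

SANDBOXES = {'exec': ExecBox, 'local': LocalBox, 'ssh': SSHBox}

def make_sandbox(sandbox_type: str):
    try:
        return SANDBOXES[sandbox_type.lower()]()
    except KeyError:
        raise ValueError(f'Invalid sandbox type: {sandbox_type}')
```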
opendevin/download.py (new file, 2 lines)
@@ -0,0 +1,2 @@
# Run this file to trigger a model download
import agenthub  # noqa F401 (we import this to get the agents registered)
@@ -29,7 +29,6 @@ def parse_arguments():
     parser.add_argument(
         '-d',
         '--directory',
-        required=True,
         type=str,
         help='The working directory for the agent',
     )
@@ -70,7 +69,8 @@ def parse_arguments():
         type=int,
         help='The maximum number of characters to send to and receive from LLM per task',
     )
-    return parser.parse_args()
+    args, _ = parser.parse_known_args()
+    return args
 
 
 async def main():
@@ -80,12 +80,11 @@ async def main():
     # Determine the task source
     if args.file:
         task = read_task_from_file(args.file)
+    elif args.task:
+        task = args.task
     elif not sys.stdin.isatty():
         task = read_task_from_stdin()
-    else:
-        task = args.task
 
     if not task:
         raise ValueError(
             'No task provided. Please specify a task through -t, -f.')
 
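The reordered branches above give the precedence: task file first, then the `-t` argument, then piped stdin. The same precedence as a small function (name and signature hypothetical, for illustration only):

```python
# Task-source precedence matching the reordered branches: file > -t arg > stdin.
def choose_task(file_text, task_arg, stdin_text):
    if file_text:
        return file_text
    if task_arg:
        return task_arg
    if stdin_text:
        return stdin_text
    raise ValueError('No task provided. Please specify a task through -t, -f.')
```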
@@ -96,7 +95,7 @@ async def main():
     AgentCls: Type[Agent] = Agent.get_cls(args.agent_cls)
     agent = AgentCls(llm=llm)
     controller = AgentController(
-        agent=agent, workdir=args.directory, max_iterations=args.max_iterations, max_chars=args.max_chars
+        agent=agent, max_iterations=args.max_iterations, max_chars=args.max_chars
     )
 
     await controller.start_loop(task)
 
@@ -18,9 +18,8 @@ from opendevin.exceptions import SandboxInvalidBackgroundCommandError
 InputType = namedtuple('InputType', ['content'])
 OutputType = namedtuple('OutputType', ['content'])
 
-# helpful for docker-in-docker scenarios
-DIRECTORY_REWRITE = config.get(ConfigType.DIRECTORY_REWRITE)
 CONTAINER_IMAGE = config.get(ConfigType.SANDBOX_CONTAINER_IMAGE)
+SANDBOX_WORKSPACE_DIR = '/workspace'
 
 # FIXME: On some containers, the devin user doesn't have enough permission, e.g. to install packages
 # How do we make this more flexible?
@@ -45,7 +44,6 @@ class DockerExecBox(Sandbox):
 
     def __init__(
         self,
-        workspace_dir: str | None = None,
         container_image: str | None = None,
         timeout: int = 120,
         sid: str | None = None,
@@ -59,21 +57,6 @@ class DockerExecBox(Sandbox):
             raise ex
 
         self.instance_id = sid if sid is not None else str(uuid.uuid4())
-        if workspace_dir is not None:
-            os.makedirs(workspace_dir, exist_ok=True)
-            # expand to absolute path
-            self.workspace_dir = os.path.abspath(workspace_dir)
-        else:
-            self.workspace_dir = os.getcwd()
-            logger.info(
-                'workspace unspecified, using current directory: %s', workspace_dir)
-        if DIRECTORY_REWRITE != '':
-            parts = DIRECTORY_REWRITE.split(':')
-            self.workspace_dir = self.workspace_dir.replace(parts[0], parts[1])
-            logger.info('Rewriting workspace directory to: %s',
-                        self.workspace_dir)
-        else:
-            logger.info('Using workspace directory: %s', self.workspace_dir)
 
         # TODO: this timeout is actually essential - need a better way to set it
         # if it is too short, the container may still waiting for previous
@@ -98,7 +81,7 @@ class DockerExecBox(Sandbox):
         ]
         for cmd in cmds:
             exit_code, logs = self.container.exec_run(
-                ['/bin/bash', '-c', cmd], workdir='/workspace'
+                ['/bin/bash', '-c', cmd], workdir=SANDBOX_WORKSPACE_DIR
             )
             if exit_code != 0:
                 raise Exception(f'Failed to setup devin user: {logs}')
@@ -118,7 +101,7 @@ class DockerExecBox(Sandbox):
     def execute(self, cmd: str) -> Tuple[int, str]:
         # TODO: each execute is not stateful! We need to keep track of the current working directory
         def run_command(container, command):
-            return container.exec_run(command, workdir='/workspace')
+            return container.exec_run(command, workdir=SANDBOX_WORKSPACE_DIR)
 
         # Use ThreadPoolExecutor to control command and set timeout
         with concurrent.futures.ThreadPoolExecutor() as executor:
@@ -133,13 +116,13 @@ class DockerExecBox(Sandbox):
                 pid = self.get_pid(cmd)
                 if pid is not None:
                     self.container.exec_run(
-                        f'kill -9 {pid}', workdir='/workspace')
+                        f'kill -9 {pid}', workdir=SANDBOX_WORKSPACE_DIR)
                 return -1, f'Command: "{cmd}" timed out'
         return exit_code, logs.decode('utf-8')
 
     def execute_in_background(self, cmd: str) -> BackgroundCommand:
         result = self.container.exec_run(
-            self.get_exec_cmd(cmd), socket=True, workdir='/workspace'
+            self.get_exec_cmd(cmd), socket=True, workdir=SANDBOX_WORKSPACE_DIR
         )
         result.output._sock.setblocking(0)
         pid = self.get_pid(cmd)
@@ -165,7 +148,7 @@ class DockerExecBox(Sandbox):
         bg_cmd = self.background_commands[id]
         if bg_cmd.pid is not None:
             self.container.exec_run(
-                f'kill -9 {bg_cmd.pid}', workdir='/workspace')
+                f'kill -9 {bg_cmd.pid}', workdir=SANDBOX_WORKSPACE_DIR)
         bg_cmd.result.output.close()
         self.background_commands.pop(id)
         return bg_cmd
@@ -206,15 +189,16 @@ class DockerExecBox(Sandbox):
 
         try:
             # start the container
+            mount_dir = config.get('WORKSPACE_MOUNT_PATH')
             self.container = self.docker_client.containers.run(
                 self.container_image,
                 command='tail -f /dev/null',
                 network_mode='host',
-                working_dir='/workspace',
+                working_dir=SANDBOX_WORKSPACE_DIR,
                 name=self.container_name,
                 detach=True,
-                volumes={self.workspace_dir: {
-                    'bind': '/workspace', 'mode': 'rw'}},
+                volumes={mount_dir: {
+                    'bind': SANDBOX_WORKSPACE_DIR, 'mode': 'rw'}},
             )
             logger.info('Container started')
         except Exception as ex:
@@ -231,7 +215,8 @@ class DockerExecBox(Sandbox):
                 break
             time.sleep(1)
             elapsed += 1
-            self.container = self.docker_client.containers.get(self.container_name)
+            self.container = self.docker_client.containers.get(
+                self.container_name)
             if elapsed > self.timeout:
                 break
         if self.container.status != 'running':
@@ -249,23 +234,8 @@ class DockerExecBox(Sandbox):
 
 
 if __name__ == '__main__':
-    import argparse
-
-    parser = argparse.ArgumentParser(
-        description='Interactive Docker container')
-    parser.add_argument(
-        '-d',
-        '--directory',
-        type=str,
-        default=None,
-        help='The directory to mount as the workspace in the Docker container.',
-    )
-    args = parser.parse_args()
-
     try:
-        exec_box = DockerExecBox(
-            workspace_dir=args.directory,
-        )
+        exec_box = DockerExecBox()
     except Exception as e:
         logger.exception('Failed to start Docker container: %s', e)
         sys.exit(1)
 
@@ -1,9 +1,8 @@
 import subprocess
 import atexit
-import os
-from typing import Tuple, Dict, Optional
+from typing import Tuple, Dict
 from opendevin.sandbox.sandbox import Sandbox, BackgroundCommand
 
+from opendevin import config
 
 # ===============================================================================
 # ** WARNING **
@@ -20,9 +19,9 @@ from opendevin.sandbox.sandbox import Sandbox, BackgroundCommand
 # DO NOT USE THIS SANDBOX IN A PRODUCTION ENVIRONMENT
 # ===============================================================================
 
 
 class LocalBox(Sandbox):
-    def __init__(self, workspace_dir: Optional[str] = None, timeout: int = 120):
-        self.workspace_dir = workspace_dir or os.getcwd()
+    def __init__(self, timeout: int = 120):
         self.timeout = timeout
         self.background_commands: Dict[int, BackgroundCommand] = {}
         self.cur_background_id = 0
@@ -32,7 +31,7 @@ class LocalBox(Sandbox):
|
||||
try:
|
||||
completed_process = subprocess.run(
|
||||
cmd, shell=True, text=True, capture_output=True,
|
||||
timeout=self.timeout, cwd=self.workspace_dir
|
||||
timeout=self.timeout, cwd=config.get('WORKSPACE_BASE')
|
||||
)
|
||||
return completed_process.returncode, completed_process.stdout
|
||||
except subprocess.TimeoutExpired:
|
||||
@@ -41,7 +40,7 @@ class LocalBox(Sandbox):
|
||||
def execute_in_background(self, cmd: str) -> BackgroundCommand:
|
||||
process = subprocess.Popen(
|
||||
cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
|
||||
text=True, cwd=self.workspace_dir
|
||||
text=True, cwd=config.get('WORKSPACE_BASE')
|
||||
)
|
||||
bg_cmd = BackgroundCommand(
|
||||
id=self.cur_background_id, command=cmd, result=process, pid=process.pid
|
||||
|
||||
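The LocalBox change above replaces a per-instance `workspace_dir` with a config lookup: every command now runs via `subprocess.run` with `cwd` set to the configured workspace and a timeout. A minimal standalone sketch of that execution pattern (here `WORKSPACE_BASE` is a hypothetical stand-in for the `config.get('WORKSPACE_BASE')` call, using a temp directory):

```python
import subprocess
import tempfile

# Stand-in for config.get('WORKSPACE_BASE'); the real value comes from
# OpenDevin's config module.
WORKSPACE_BASE = tempfile.mkdtemp()


def execute(cmd: str, timeout: int = 120) -> tuple:
    """Run `cmd` inside the workspace directory, mirroring LocalBox.execute."""
    try:
        completed = subprocess.run(
            cmd, shell=True, text=True, capture_output=True,
            timeout=timeout, cwd=WORKSPACE_BASE,
        )
        return completed.returncode, completed.stdout
    except subprocess.TimeoutExpired:
        # LocalBox reports a failure on timeout rather than raising
        return -1, 'Command timed out'


rc, out = execute('echo hello')
```

Note that `capture_output=True` with `text=True` yields `stdout` as a string, matching the `Tuple[int, str]` return shape the sandbox interface expects.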
@@ -20,10 +20,17 @@ from opendevin.exceptions import SandboxInvalidBackgroundCommandError
 InputType = namedtuple('InputType', ['content'])
 OutputType = namedtuple('OutputType', ['content'])
 
-# helpful for docker-in-docker scenarios
-DIRECTORY_REWRITE = config.get(ConfigType.DIRECTORY_REWRITE)
+SANDBOX_WORKSPACE_DIR = '/workspace'
+
 CONTAINER_IMAGE = config.get(ConfigType.SANDBOX_CONTAINER_IMAGE)
 
+SSH_HOSTNAME = config.get(ConfigType.SSH_HOSTNAME)
+
+USE_HOST_NETWORK = platform.system() == 'Linux'
+if config.get(ConfigType.USE_HOST_NETWORK) is not None:
+    USE_HOST_NETWORK = config.get(
+        ConfigType.USE_HOST_NETWORK).lower() != 'false'
+
 # FIXME: On some containers, the devin user doesn't have enough permission, e.g. to install packages
 # How do we make this more flexible?
 RUN_AS_DEVIN = config.get('RUN_AS_DEVIN').lower() != 'false'
@@ -50,7 +57,6 @@ class DockerSSHBox(Sandbox):
 
     def __init__(
         self,
-        workspace_dir: str | None = None,
         container_image: str | None = None,
         timeout: int = 120,
         sid: str | None = None,
@@ -64,21 +70,6 @@ class DockerSSHBox(Sandbox):
             raise ex
 
         self.instance_id = sid if sid is not None else str(uuid.uuid4())
-        if workspace_dir is not None:
-            os.makedirs(workspace_dir, exist_ok=True)
-            # expand to absolute path
-            self.workspace_dir = os.path.abspath(workspace_dir)
-        else:
-            self.workspace_dir = os.getcwd()
-            logger.info(
-                'workspace unspecified, using current directory: %s', workspace_dir)
-        if DIRECTORY_REWRITE != '':
-            parts = DIRECTORY_REWRITE.split(':')
-            self.workspace_dir = self.workspace_dir.replace(parts[0], parts[1])
-            logger.info('Rewriting workspace directory to: %s',
-                        self.workspace_dir)
-        else:
-            logger.info('Using workspace directory: %s', self.workspace_dir)
 
         # TODO: this timeout is actually essential - need a better way to set it
         # if it is too short, the container may still waiting for previous
@@ -106,7 +97,7 @@ class DockerSSHBox(Sandbox):
         exit_code, logs = self.container.exec_run(
             ['/bin/bash', '-c',
              r"echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers"],
-            workdir='/workspace',
+            workdir=SANDBOX_WORKSPACE_DIR,
         )
         if exit_code != 0:
             raise Exception(
@@ -115,54 +106,54 @@ class DockerSSHBox(Sandbox):
         # Check if the opendevin user exists
         exit_code, logs = self.container.exec_run(
             ['/bin/bash', '-c', 'id -u opendevin'],
-            workdir='/workspace',
+            workdir=SANDBOX_WORKSPACE_DIR,
         )
         if exit_code == 0:
             # User exists, delete it
             exit_code, logs = self.container.exec_run(
                 ['/bin/bash', '-c', 'userdel -r opendevin'],
-                workdir='/workspace',
+                workdir=SANDBOX_WORKSPACE_DIR,
             )
             if exit_code != 0:
                 raise Exception(
                     f'Failed to remove opendevin user in sandbox: {logs}')
 
-        # Create the opendevin user
-        exit_code, logs = self.container.exec_run(
-            ['/bin/bash', '-c',
-             f'useradd -rm -d /home/opendevin -s /bin/bash -g root -G sudo -u {USER_ID} opendevin'],
-            workdir='/workspace',
-        )
-        if exit_code != 0:
-            raise Exception(
-                f'Failed to create opendevin user in sandbox: {logs}')
-        exit_code, logs = self.container.exec_run(
-            ['/bin/bash', '-c',
-             f"echo 'opendevin:{self._ssh_password}' | chpasswd"],
-            workdir='/workspace',
-        )
-        if exit_code != 0:
-            raise Exception(f'Failed to set password in sandbox: {logs}')
-
-        if not RUN_AS_DEVIN:
+        if RUN_AS_DEVIN:
+            # Create the opendevin user
+            exit_code, logs = self.container.exec_run(
+                ['/bin/bash', '-c',
+                 f'useradd -rm -d /home/opendevin -s /bin/bash -g root -G sudo -u {USER_ID} opendevin'],
+                workdir=SANDBOX_WORKSPACE_DIR,
+            )
+            if exit_code != 0:
+                raise Exception(
+                    f'Failed to create opendevin user in sandbox: {logs}')
+            exit_code, logs = self.container.exec_run(
+                ['/bin/bash', '-c',
+                 f"echo 'opendevin:{self._ssh_password}' | chpasswd"],
+                workdir=SANDBOX_WORKSPACE_DIR,
+            )
+            if exit_code != 0:
+                raise Exception(f'Failed to set password in sandbox: {logs}')
+        else:
             exit_code, logs = self.container.exec_run(
                 # change password for root
                 ['/bin/bash', '-c',
                  f"echo 'root:{self._ssh_password}' | chpasswd"],
-                workdir='/workspace',
+                workdir=SANDBOX_WORKSPACE_DIR,
             )
             if exit_code != 0:
                 raise Exception(
                     f'Failed to set password for root in sandbox: {logs}')
         exit_code, logs = self.container.exec_run(
             ['/bin/bash', '-c', "echo 'opendevin-sandbox' > /etc/hostname"],
-            workdir='/workspace',
+            workdir=SANDBOX_WORKSPACE_DIR,
         )
 
     def start_ssh_session(self):
         # start ssh session at the background
         self.ssh = pxssh.pxssh()
-        hostname = 'localhost'
+        hostname = SSH_HOSTNAME
         if RUN_AS_DEVIN:
             username = 'opendevin'
         else:
@@ -217,7 +208,7 @@ class DockerSSHBox(Sandbox):
 
     def execute_in_background(self, cmd: str) -> BackgroundCommand:
         result = self.container.exec_run(
-            self.get_exec_cmd(cmd), socket=True, workdir='/workspace'
+            self.get_exec_cmd(cmd), socket=True, workdir=SANDBOX_WORKSPACE_DIR
         )
         result.output._sock.setblocking(0)
         pid = self.get_pid(cmd)
@@ -243,7 +234,7 @@ class DockerSSHBox(Sandbox):
         bg_cmd = self.background_commands[id]
         if bg_cmd.pid is not None:
             self.container.exec_run(
-                f'kill -9 {bg_cmd.pid}', workdir='/workspace')
+                f'kill -9 {bg_cmd.pid}', workdir=SANDBOX_WORKSPACE_DIR)
         bg_cmd.result.output.close()
         self.background_commands.pop(id)
         return bg_cmd
@@ -284,9 +275,9 @@ class DockerSSHBox(Sandbox):
 
         try:
             network_kwargs: Dict[str, Union[str, Dict[str, int]]] = {}
-            if platform.system() == 'Linux':
+            if USE_HOST_NETWORK:
                 network_kwargs['network_mode'] = 'host'
-            elif platform.system() == 'Darwin':
+            else:
                 # FIXME: This is a temporary workaround for Mac OS
                 network_kwargs['ports'] = {'2222/tcp': self._ssh_port}
                 logger.warning(
@@ -296,18 +287,24 @@ class DockerSSHBox(Sandbox):
                     )
                 )
 
+            mount_dir = config.get('WORKSPACE_MOUNT_PATH')
+            print('Mounting workspace directory: ', mount_dir)
             # start the container
             self.container = self.docker_client.containers.run(
                 self.container_image,
                 # allow root login
                 command="/usr/sbin/sshd -D -p 2222 -o 'PermitRootLogin=yes'",
                 **network_kwargs,
-                working_dir='/workspace',
+                working_dir=SANDBOX_WORKSPACE_DIR,
                 name=self.container_name,
                 hostname='opendevin_sandbox',
                 detach=True,
-                volumes={self.workspace_dir: {
-                    'bind': '/workspace', 'mode': 'rw'}},
+                volumes={
+                    mount_dir: {
+                        'bind': SANDBOX_WORKSPACE_DIR,
+                        'mode': 'rw'
+                    },
+                },
             )
             logger.info('Container started')
         except Exception as ex:
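The hunk above swaps the per-instance `self.workspace_dir` for the configured `WORKSPACE_MOUNT_PATH` when building docker-py's `volumes` argument: the host path is the dict key, and the value names the bind target inside the container plus the access mode. A dependency-free sketch of just that mapping (no Docker daemon needed; `mount_dir` stands in for the config lookup):

```python
# Bind target inside the sandbox container, as in ssh_box.py.
SANDBOX_WORKSPACE_DIR = '/workspace'


def build_volumes(mount_dir: str) -> dict:
    """Build the docker-py `volumes` kwarg that binds the host workspace
    read-write at /workspace inside the sandbox.

    `mount_dir` stands in for config.get('WORKSPACE_MOUNT_PATH').
    """
    return {
        mount_dir: {
            'bind': SANDBOX_WORKSPACE_DIR,
            'mode': 'rw',
        },
    }
```

Making the host path configurable separately from the workspace base is what enables the docker-outside-of-docker setup: when OpenDevin itself runs in a container, the bind source must be a path valid on the Docker host, not inside the OpenDevin container.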
@@ -345,23 +342,9 @@ class DockerSSHBox(Sandbox):
 
 
 if __name__ == '__main__':
-    import argparse
-
-    parser = argparse.ArgumentParser(
-        description='Interactive Docker container')
-    parser.add_argument(
-        '-d',
-        '--directory',
-        type=str,
-        default=None,
-        help='The directory to mount as the workspace in the Docker container.',
-    )
-    args = parser.parse_args()
-
     try:
-        ssh_box = DockerSSHBox(
-            workspace_dir=args.directory,
-        )
+        ssh_box = DockerSSHBox()
     except Exception as e:
         logger.exception('Failed to start Docker container: %s', e)
         sys.exit(1)
@@ -4,7 +4,9 @@ from enum import Enum
 class ConfigType(str, Enum):
     LLM_API_KEY = 'LLM_API_KEY'
     LLM_BASE_URL = 'LLM_BASE_URL'
-    WORKSPACE_DIR = 'WORKSPACE_DIR'
+    WORKSPACE_BASE = 'WORKSPACE_BASE'
+    WORKSPACE_MOUNT_PATH = 'WORKSPACE_MOUNT_PATH'
+    WORKSPACE_MOUNT_REWRITE = 'WORKSPACE_MOUNT_REWRITE'
     LLM_MODEL = 'LLM_MODEL'
     SANDBOX_CONTAINER_IMAGE = 'SANDBOX_CONTAINER_IMAGE'
     RUN_AS_DEVIN = 'RUN_AS_DEVIN'
@@ -13,9 +15,10 @@ class ConfigType(str, Enum):
     LLM_API_VERSION = 'LLM_API_VERSION'
     LLM_NUM_RETRIES = 'LLM_NUM_RETRIES'
     LLM_COOLDOWN_TIME = 'LLM_COOLDOWN_TIME'
-    DIRECTORY_REWRITE = 'DIRECTORY_REWRITE'
     MAX_ITERATIONS = 'MAX_ITERATIONS'
     MAX_CHARS = 'MAX_CHARS'
     AGENT = 'AGENT'
     SANDBOX_TYPE = 'SANDBOX_TYPE'
+    USE_HOST_NETWORK = 'USE_HOST_NETWORK'
+    SSH_HOSTNAME = 'SSH_HOSTNAME'
     DISABLE_COLOR = 'DISABLE_COLOR'
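Because `ConfigType` mixes `str` into `Enum`, each member compares equal to its string value, so members can be passed anywhere a plain key string is expected (e.g. `config.get(ConfigType.WORKSPACE_BASE)` and `config.get('WORKSPACE_BASE')` behave the same). A trimmed sketch showing only the keys this commit touches:

```python
from enum import Enum


# Trimmed stand-in for OpenDevin's ConfigType; only members relevant
# to this commit are shown.
class ConfigType(str, Enum):
    WORKSPACE_BASE = 'WORKSPACE_BASE'
    WORKSPACE_MOUNT_PATH = 'WORKSPACE_MOUNT_PATH'
    USE_HOST_NETWORK = 'USE_HOST_NETWORK'
    SSH_HOSTNAME = 'SSH_HOSTNAME'


# str inheritance makes members interchangeable with their raw values,
# including as dict keys:
settings = {'WORKSPACE_BASE': '/tmp/workspace'}
value = settings[ConfigType.WORKSPACE_BASE]
```

This is why renaming `WORKSPACE_DIR` to `WORKSPACE_BASE` in the enum is enough to retarget every `config.get(...)` call that uses the member rather than a raw string.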
@@ -25,7 +25,7 @@ websocat ws://127.0.0.1:3000/ws
 ```sh
 LLM_API_KEY=sk-... # Your OpenAI API Key
 LLM_MODEL=gpt-3.5-turbo-1106 # Default model for the agent to use
-WORKSPACE_DIR=/path/to/your/workspace # Default path to model's workspace
+WORKSPACE_BASE=/path/to/your/workspace # Default path to model's workspace
 ```
 
 ## API Schema
@@ -1,5 +1,4 @@
 import asyncio
-import os
 from typing import Optional
 
 from opendevin import config
@@ -105,7 +104,6 @@ class AgentUnit:
                 for key, value in start_event.get('args', {}).items()
                 if value != ''
             }  # remove empty values, prevent FE from sending empty strings
-            directory = self.get_arg_or_default(args, ConfigType.WORKSPACE_DIR)
             agent_cls = self.get_arg_or_default(args, ConfigType.AGENT)
             model = self.get_arg_or_default(args, ConfigType.LLM_MODEL)
             api_key = config.get(ConfigType.LLM_API_KEY)
@@ -115,18 +113,11 @@ class AgentUnit:
                 args, ConfigType.MAX_ITERATIONS)
             max_chars = self.get_arg_or_default(args, ConfigType.MAX_CHARS)
 
-            if not os.path.exists(directory):
-                logger.info(
-                    'Workspace directory %s does not exist. Creating it...', directory
-                )
-                os.makedirs(directory)
-            directory = os.path.relpath(directory, os.getcwd())
             llm = LLM(model=model, api_key=api_key, base_url=api_base)
             try:
                 self.controller = AgentController(
                     sid=self.sid,
                     agent=Agent.get_cls(agent_cls)(llm),
-                    workdir=directory,
                     max_iterations=int(max_iterations),
                     max_chars=int(max_chars),
                     container_image=container_image,
@@ -5,6 +5,8 @@ import litellm
 from fastapi import Depends, FastAPI, WebSocket
 from fastapi.middleware.cors import CORSMiddleware
 from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
+from fastapi.staticfiles import StaticFiles
+from fastapi.responses import RedirectResponse
 from starlette import status
 from starlette.responses import JSONResponse
@@ -41,7 +43,7 @@ async def websocket_endpoint(websocket: WebSocket):
     await session_manager.loop_recv(sid, agent_manager.dispatch)
 
 
-@app.get('/litellm-models')
+@app.get('/api/litellm-models')
 async def get_litellm_models():
     """
     Get all models supported by LiteLLM.
@@ -49,7 +51,7 @@ async def get_litellm_models():
     return list(set(litellm.model_list + list(litellm.model_cost.keys())))
 
 
-@app.get('/litellm-agents')
+@app.get('/api/litellm-agents')
 async def get_litellm_agents():
     """
     Get all agents supported by LiteLLM.
@@ -57,7 +59,7 @@ async def get_litellm_agents():
     return Agent.list_agents()
 
 
-@app.get('/auth')
+@app.get('/api/auth')
 async def get_token(
     credentials: HTTPAuthorizationCredentials = Depends(security_scheme),
 ):
@@ -72,7 +74,7 @@ async def get_token(
     )
 
 
-@app.get('/messages')
+@app.get('/api/messages')
 async def get_messages(
     credentials: HTTPAuthorizationCredentials = Depends(security_scheme),
 ):
@@ -87,7 +89,7 @@ async def get_messages(
     )
 
 
-@app.get('/messages/total')
+@app.get('/api/messages/total')
 async def get_message_total(
     credentials: HTTPAuthorizationCredentials = Depends(security_scheme),
 ):
@@ -110,20 +112,28 @@ async def del_messages(
     )
 
 
-@app.get('/configurations')
+@app.get('/api/configurations')
 def read_default_model():
     return config.get_fe_config()
 
 
-@app.get('/refresh-files')
+@app.get('/api/refresh-files')
 def refresh_files():
     structure = files.get_folder_structure(
-        Path(str(config.get('WORKSPACE_DIR'))))
+        Path(str(config.get('WORKSPACE_BASE'))))
     return structure.to_dict()
 
 
-@app.get('/select-file')
+@app.get('/api/select-file')
 def select_file(file: str):
-    with open(Path(Path(str(config.get('WORKSPACE_DIR'))), file), 'r') as selected_file:
+    with open(Path(Path(str(config.get('WORKSPACE_BASE'))), file), 'r') as selected_file:
         content = selected_file.read()
     return {'code': content}
 
 
+@app.get('/')
+async def docs_redirect():
+    response = RedirectResponse(url='/index.html')
+    return response
+
+app.mount('/', StaticFiles(directory='./frontend/dist'), name='dist')