docs(docs): start implementing docs website (#1372)

* docs(docs): start implementing docs website

* update video url

* add autogenerated codebase docs for backend

* precommit

* update links

* fix config and video

* gh actions

* rename

* workdirs

* path

* path

* fix doc1

* redo markdown

* docs

* change main folder name

* simplify readme

* add back architecture

* Fix lint errors

* lint

* update poetry lock

---------

Co-authored-by: Jim Su <jimsu@protonmail.com>
This commit is contained in:
Alex Bäuerle 2024-04-29 10:00:51 -07:00 committed by GitHub
parent 46bd83678a
commit cd58194d2a
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
114 changed files with 19187 additions and 647 deletions

58
.github/workflows/deploy-docs.yml vendored Normal file

@@ -0,0 +1,58 @@
name: Deploy Docs to GitHub Pages
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
build:
name: Build Docusaurus
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: 18
cache: npm
cache-dependency-path: docs/package-lock.json
- name: Install dependencies
run: npm ci
- name: Build website
run: npm run build
- name: Upload Build Artifact
if: github.ref == 'refs/heads/main'
uses: actions/upload-pages-artifact@v3
with:
path: build
deploy:
name: Deploy to GitHub Pages
needs: build
if: github.ref == 'refs/heads/main'
# Grant GITHUB_TOKEN the permissions required to make a Pages deployment
permissions:
pages: write # to deploy to Pages
id-token: write # to verify the deployment originates from an appropriate source
# Deploy to the github-pages environment
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4

167
README.md

@@ -1,5 +1,3 @@
[English](README.md) | [中文](docs/README-zh.md)
<a name="readme-top"></a>
<!--
@@ -32,168 +30,17 @@
<!-- PROJECT LOGO -->
<div align="center">
<img src="./logo.png" alt="Logo" width="200" height="200">
<img src="./docs/static/img/logo.png" alt="Logo" width="200" height="200">
<h1 align="center">OpenDevin: Code Less, Make More</h1>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>🗂️ Table of Contents</summary>
<ol>
<li><a href="#-mission">🎯 Mission</a></li>
<li><a href="#-what-is-devin">🤔 What is Devin?</a></li>
<li><a href="#-why-opendevin">🐚 Why OpenDevin?</a></li>
<li><a href="#-project-status">🚧 Project Status</a></li>
<a href="#-get-started">🚀 Get Started</a>
<ul>
<li><a href="#1-requirements">1. Requirements</a></li>
<li><a href="#2-build-and-setup">2. Build and Setup</a></li>
<li><a href="#3-run-the-application">3. Run the Application</a></li>
<li><a href="#4-individual-server-startup">4. Individual Server Startup</a></li>
<li><a href="#5-help">5. Help</a></li>
</ul>
</li>
<li><a href="#%EF%B8%8F-research-strategy">⭐️ Research Strategy</a></li>
<li><a href="#-how-to-contribute">🤝 How to Contribute</a></li>
<li><a href="#-join-our-community">🤖 Join Our Community</a></li>
<li><a href="#%EF%B8%8F-built-with">🛠️ Built With</a></li>
<li><a href="#-license">📜 License</a></li>
</ol>
</details>
## 🎯 Mission
[Project Demo Video](https://github.com/OpenDevin/OpenDevin/assets/38853559/71a472cc-df34-430c-8b1d-4d7286c807c9)
[Project Demo Video](./docs/static/img/teaser.mp4)
Welcome to OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer who is capable of executing complex engineering tasks and collaborating actively with users on software development projects. This project aspires to replicate, enhance, and innovate upon Devin through the power of the open-source community.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🤔 What is Devin?
Devin represents a cutting-edge autonomous agent designed to navigate the complexities of software engineering. It leverages a combination of tools such as a shell, code editor, and web browser, showcasing the untapped potential of LLMs in software development. Our goal is to explore and expand upon Devin's capabilities, identifying both its strengths and areas for improvement, to guide the progress of open code models.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🐚 Why OpenDevin?
The OpenDevin project is born out of a desire to replicate, enhance, and innovate beyond the original Devin model. By engaging the open-source community, we aim to tackle the challenges faced by Code LLMs in practical scenarios, producing works that significantly contribute to the community and pave the way for future advancements.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🚧 Project Status
OpenDevin is currently a work in progress, but you can already run the alpha version to see the end-to-end system in action. The project team is actively working on the following key milestones:
- **UI**: Developing a user-friendly interface, including a chat interface, a shell demonstrating commands, and a web browser.
- **Architecture**: Building a stable agent framework with a robust backend that can read, write, and run simple commands.
- **Agent Capabilities**: Enhancing the agent's abilities to generate bash scripts, run tests, and perform other software engineering tasks.
- **Evaluation**: Establishing a minimal evaluation pipeline that is consistent with Devin's evaluation criteria.
After completing the MVP, the team will focus on research in various areas, including foundation models, specialist capabilities, evaluation, and agent studies.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## ⚠️ Caveats and Warnings
- OpenDevin is still an alpha project. It is changing very quickly and is unstable. We are working on getting a stable release out in the coming weeks.
- OpenDevin will issue many prompts to the LLM you configure. Most of these LLMs cost money--be sure to set spending limits and monitor usage.
- OpenDevin runs `bash` commands within a Docker sandbox, so it should not affect your machine. But your workspace directory will be attached to that sandbox, and files in the directory may be modified or deleted.
- Our default Agent is currently the MonologueAgent, which has limited capabilities, but is fairly stable. We're working on other Agent implementations, including [SWE Agent](https://swe-agent.com/). You can [read about our current set of agents here](./docs/Agents.md).
## 🚀 Get Started
The easiest way to run OpenDevin is inside a Docker container.
To start the app, run these commands, replacing `$(pwd)/workspace` with the path to the code you want OpenDevin to work with.
```bash
# Your OpenAI API key, or any other LLM API key
export LLM_API_KEY="sk-..."
# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace
docker run \
-e LLM_API_KEY \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal=host-gateway \
ghcr.io/opendevin/opendevin:0.4.0
```
You'll find opendevin running at `http://localhost:3000`.
If you want to use the (unstable!) bleeding edge, you can use `ghcr.io/opendevin/opendevin:main` as the image.
See [Development.md](Development.md) for instructions on running OpenDevin without Docker.
Having trouble? Check out our [Troubleshooting Guide](./docs/guides/Troubleshooting.md).
## 🤖 LLM Backends
OpenDevin can work with any LLM backend.
For a full list of the LLM providers and models available, please consult the
[litellm documentation](https://docs.litellm.ai/docs/providers).
The `LLM_MODEL` environment variable controls which model is used in programmatic interactions.
But when using the OpenDevin UI, you'll need to choose your model in the settings window (the gear
wheel on the bottom left).
The following environment variables might be necessary for some LLMs:
- `LLM_API_KEY`
- `LLM_BASE_URL`
- `LLM_EMBEDDING_MODEL`
- `LLM_EMBEDDING_DEPLOYMENT_NAME`
- `LLM_API_VERSION`
We have a few guides for running OpenDevin with specific model providers:
- [ollama](./docs/guides/LocalLLMs.md)
- [Azure](./docs/guides/AzureLLMs.md)
If you're using another provider, we encourage you to open a PR to share your setup!
**Note on Alternative Models:**
The best models are GPT-4 and Claude 3. Current local and open source models are
not nearly as powerful. When using an alternative model,
you may see long wait times between messages,
poor responses, or errors about malformed JSON. OpenDevin
can only be as powerful as the models driving it--fortunately folks on our team
are actively working on building better open source models!
**Note on API retries and rate limits:**
Some LLMs have rate limits and may require retries. OpenDevin will automatically retry requests if it receives a 429 error or API connection error.
You can set the `LLM_NUM_RETRIES`, `LLM_RETRY_MIN_WAIT`, and `LLM_RETRY_MAX_WAIT` environment variables to control the number of retries and the time between retries.
By default, `LLM_NUM_RETRIES` is 5, and `LLM_RETRY_MIN_WAIT` and `LLM_RETRY_MAX_WAIT` are 3 and 60 seconds respectively.
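As a rough illustration of the retry behavior described above (this is a sketch, not OpenDevin's actual retry code; the function name and the doubling schedule are assumptions, and only the defaults of 5 retries and 3 to 60 second waits come from the text):

```python
def retry_wait(attempt: int, min_wait: int = 3, max_wait: int = 60) -> int:
    """Illustrative exponential backoff: the wait doubles each attempt, capped at max_wait."""
    return min(max_wait, min_wait * (2 ** attempt))

# Waits for the documented default of 5 retries.
waits = [retry_wait(n) for n in range(5)]
print(waits)  # [3, 6, 12, 24, 48]
```

Capping the wait at `LLM_RETRY_MAX_WAIT` keeps a long outage from producing unbounded delays while still backing off under rate limits.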
## ⭐️ Research Strategy
Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:
1. **Core Technical Research:** Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
2. **Specialist Abilities:** Enhancing the effectiveness of core components through data curation, training methods, and more.
3. **Task Planning:** Developing capabilities for bug detection, codebase management, and optimization.
4. **Evaluation:** Establishing comprehensive evaluation metrics to better understand and improve our models.
To learn more and to use OpenDevin, check out our [documentation](https://opendevin.github.io/OpenDevin/).
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
@@ -230,14 +77,6 @@ If you would love to contribute, feel free to join our community (note that now
[![Star History Chart](https://api.star-history.com/svg?repos=OpenDevin/OpenDevin&type=Date)](https://star-history.com/#OpenDevin/OpenDevin&Date)
## 🛠️ Built With
OpenDevin is built using a combination of powerful frameworks and libraries, providing a robust foundation for its development. Here are the key technologies used in the project:
![FastAPI](https://img.shields.io/badge/FastAPI-black?style=for-the-badge) ![uvicorn](https://img.shields.io/badge/uvicorn-black?style=for-the-badge) ![LiteLLM](https://img.shields.io/badge/LiteLLM-black?style=for-the-badge) ![Docker](https://img.shields.io/badge/Docker-black?style=for-the-badge) ![Ruff](https://img.shields.io/badge/Ruff-black?style=for-the-badge) ![MyPy](https://img.shields.io/badge/MyPy-black?style=for-the-badge) ![LlamaIndex](https://img.shields.io/badge/LlamaIndex-black?style=for-the-badge) ![React](https://img.shields.io/badge/React-black?style=for-the-badge)
Please note that the selection of these technologies is in progress, and additional technologies may be added or existing ones may be removed as the project evolves. We strive to adopt the most suitable and efficient tools to enhance the capabilities of OpenDevin.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑


@@ -3,7 +3,9 @@ repos:
rev: v4.5.0
hooks:
- id: trailing-whitespace
exclude: docs/modules/python
- id: end-of-file-fixer
exclude: docs/modules/python
- id: check-yaml
- id: debug-statements
@@ -16,7 +18,6 @@ repos:
hooks:
- id: validate-pyproject
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.3.7
@@ -24,18 +25,24 @@ repos:
# Run the linter.
- id: ruff
entry: ruff check --config dev_config/python/ruff.toml opendevin/ agenthub/
types_or: [ python, pyi, jupyter ]
args: [ --fix ]
types_or: [python, pyi, jupyter]
args: [--fix]
# Run the formatter.
- id: ruff-format
entry: ruff check --config dev_config/python/ruff.toml opendevin/ agenthub/
types_or: [ python, pyi, jupyter ]
types_or: [python, pyi, jupyter]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.9.0
hooks:
- id: mypy
additional_dependencies: [types-requests, types-setuptools, types-pyyaml, types-toml]
additional_dependencies:
[types-requests, types-setuptools, types-pyyaml, types-toml]
entry: mypy --config-file dev_config/python/mypy.ini opendevin/ agenthub/
always_run: true
pass_filenames: false
- repo: https://github.com/NiklasRosenstein/pydoc-markdown
rev: develop
hooks:
- id: pydoc-markdown

20
docs/.gitignore vendored Normal file

@@ -0,0 +1,20 @@
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*


@@ -1,98 +0,0 @@
# Agents and Capabilities
## Monologue Agent:
### Description:
The Monologue Agent utilizes long and short term memory to complete tasks.
Long term memory is stored as a LongTermMemory object and the model uses it to search for examples from the past.
Short term memory is stored as a Monologue object and the model can condense it as necessary.
### Actions:
`Action`,
`NullAction`,
`CmdRunAction`,
`FileWriteAction`,
`FileReadAction`,
`AgentRecallAction`,
`BrowseURLAction`,
`GithubPushAction`,
`AgentThinkAction`
### Observations:
`Observation`,
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`
### Methods:
`__init__`: Initializes the agent with a long term memory, and an internal monologue
`_add_event`: Appends events to the agent's monologue and automatically condenses it into a summary if the monologue grows too long
`_initialize`: Utilizes the `INITIAL_THOUGHTS` list to give the agent a context for its capabilities and how to navigate the `/workspace`
`step`: Modifies the current state by adding the most recent actions and observations, then prompts the model to think about its next action to take.
`search_memory`: Uses `VectorIndexRetriever` to find related memories within the long term memory.
## Planner Agent:
### Description:
The planner agent utilizes a special prompting strategy to create long term plans for solving problems.
The agent is given its previous action-observation pairs, current task, and hint based on last action taken at every step.
### Actions:
`NullAction`,
`CmdRunAction`,
`CmdKillAction`,
`BrowseURLAction`,
`GithubPushAction`,
`FileReadAction`,
`FileWriteAction`,
`AgentRecallAction`,
`AgentThinkAction`,
`AgentFinishAction`,
`AgentSummarizeAction`,
`AddTaskAction`,
`ModifyTaskAction`,
### Observations:
`Observation`,
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`
### Methods:
`__init__`: Initializes an agent with `llm`
`step`: Checks whether the current step is completed and returns `AgentFinishAction` if so. Otherwise, creates a plan prompt, sends it to the model for inference, and adds the result as the next action.
`search_memory`: Not yet implemented
## CodeAct Agent:
### Description:
The Code Act Agent is a minimalist agent. The agent works by passing the model a list of action-observation pairs and prompting the model to take the next step.
### Actions:
`Action`,
`CmdRunAction`,
`AgentEchoAction`,
`AgentFinishAction`,
### Observations:
`CmdOutputObservation`,
`AgentMessageObservation`,
### Methods:
`__init__`: Initializes an agent with `llm` and a list of messages `List[Mapping[str, str]]`
`step`: First, gets messages from state and compiles them into a context list. Next, passes the context list with the prompt to get the next command to execute. Finally, executes the command if it is valid; otherwise returns `AgentEchoAction(INVALID_INPUT_MESSAGE)`
`search_memory`: Not yet implemented


@@ -1,253 +0,0 @@
> Warning: this README may be out of date. README.md should be treated as the source of truth. If you notice any discrepancies, please open a pull request to update this file.
[English](../README.md) | [中文](README-zh.md)
<a name="readme-top"></a>
<!--
*** Thanks for checking out the Best-README-Template. If you have a suggestion
*** that would make this better, please fork the repo and create a pull request
*** or simply open an issue with the tag "enhancement".
*** Don't forget to give the project a star!
*** Thanks again! Now go create something AMAZING! :D
-->
<!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->
<div align="center">
<a href="https://github.com/OpenDevin/OpenDevin/graphs/contributors"><img src="https://img.shields.io/github/contributors/opendevin/opendevin?style=for-the-badge" alt="Contributors"></a>
<a href="https://github.com/OpenDevin/OpenDevin/network/members"><img src="https://img.shields.io/github/forks/opendevin/opendevin?style=for-the-badge" alt="Forks"></a>
<a href="https://github.com/OpenDevin/OpenDevin/stargazers"><img src="https://img.shields.io/github/stars/opendevin/opendevin?style=for-the-badge" alt="Stargazers"></a>
<a href="https://github.com/OpenDevin/OpenDevin/issues"><img src="https://img.shields.io/github/issues/opendevin/opendevin?style=for-the-badge" alt="Issues"></a>
<a href="https://github.com/OpenDevin/OpenDevin/blob/main/LICENSE"><img src="https://img.shields.io/github/license/opendevin/opendevin?style=for-the-badge" alt="MIT License"></a>
</br>
<a href="https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw"><img src="https://img.shields.io/badge/Slack-Join%20Us-red?logo=slack&logoColor=white&style=for-the-badge" alt="Join our Slack community"></a>
<a href="https://discord.gg/mBuDGRzzES"><img src="https://img.shields.io/badge/Discord-Join%20Us-purple?logo=discord&logoColor=white&style=for-the-badge" alt="Join our Discord community"></a>
</div>
<!-- PROJECT LOGO -->
<div align="center">
<img src="../logo.png" alt="Logo" width="200" height="200">
<h1 align="center">OpenDevin: Code Less, Make More</h1>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>🗂️ Table of Contents</summary>
<ol>
<li><a href="#-mission">🎯 Mission</a></li>
<li><a href="#-what-is-devin">🤔 What is Devin?</a></li>
<li><a href="#-why-opendevin">🐚 Why OpenDevin?</a></li>
<li><a href="#-project-status">🚧 Project Status</a></li>
<a href="#-get-started">🚀 Get Started</a>
<ul>
<li><a href="#1-requirements">1. Requirements</a></li>
<li><a href="#2-build-and-setup">2. Build and Setup</a></li>
<li><a href="#3-run-the-application">3. Run the Application</a></li>
<li><a href="#4-individual-server-startup">4. Individual Server Startup</a></li>
<li><a href="#5-help">5. Help</a></li>
</ul>
</li>
<li><a href="#%EF%B8%8F-research-strategy">⭐️ Research Strategy</a></li>
<li><a href="#-how-to-contribute">🤝 How to Contribute</a></li>
<li><a href="#-join-our-community">🤖 Join Our Community</a></li>
<li><a href="#%EF%B8%8F-built-with">🛠️ Built With</a></li>
<li><a href="#-license">📜 License</a></li>
</ol>
</details>
## 🎯 Mission
[Project Demo Video](https://github.com/OpenDevin/OpenDevin/assets/38853559/71a472cc-df34-430c-8b1d-4d7286c807c9)
Welcome to OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. This project aspires to replicate, enhance, and innovate upon Devin through the power of the open-source community.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🤔 What is Devin?
Devin represents a cutting-edge autonomous agent designed to navigate the complexities of software engineering. It leverages a combination of tools such as a shell, code editor, and web browser, showcasing the untapped potential of LLMs in software development. Our goal is to explore and expand upon Devin's capabilities, identifying both its strengths and areas for improvement, to guide the progress of open code models.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🐚 Why OpenDevin?
The OpenDevin project is born out of a desire to replicate, enhance, and innovate beyond the original Devin model. By engaging the open-source community, we aim to tackle the challenges faced by Code LLMs in practical scenarios, producing works that significantly contribute to the community and pave the way for future advancements.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🚧 Project Status
OpenDevin is still a work in progress, but you can already run the alpha version to see the end-to-end system in action. The project team is actively working toward the following key milestones:
- **UI**: Developing a user-friendly interface, including a chat interface, a shell demonstrating commands, and a web browser.
- **Architecture**: Building a stable agent framework with a robust backend that can read, write, and run simple commands.
- **Agent Capabilities**: Enhancing the agent's abilities to generate bash scripts, run tests, and perform other software engineering tasks.
- **Evaluation**: Establishing a minimal evaluation pipeline consistent with Devin's evaluation criteria.
After completing the MVP, the team will focus on research in various areas, including foundation models, specialist capabilities, evaluation, and agent studies.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## ⚠️ Caveats and Warnings
- OpenDevin is still an alpha project. It is changing quickly and is unstable. We are working on getting a stable release out in the coming weeks.
- OpenDevin will issue many prompts to the LLM you configure. Most of these LLMs cost money; be sure to set spending limits and monitor usage.
- OpenDevin runs `bash` commands within a Docker sandbox, so it should not affect your machine. But your workspace directory will be attached to that sandbox, and files in the directory may be modified or deleted.
- Our default agent is currently the MonologueAgent, which has limited capabilities but is fairly stable. We're working on other agent implementations, including [SWE Agent](https://swe-agent.com/). You can [read about our current set of agents here](./docs/documentation/Agents.md).
## 🚀 Get Started
Getting started with OpenDevin is simple. Follow these steps to set up and run OpenDevin on your system:
The easiest way to run OpenDevin is inside a Docker container.
You can run:
```bash
# Your OpenAI API key, or any other LLM API key
export LLM_API_KEY="sk-..."
# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace
docker run \
-e LLM_API_KEY \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
ghcr.io/opendevin/opendevin:latest
```
`$(pwd)/workspace` 替换为您希望 OpenDevin 使用的代码路径。
您可以在 `http://localhost:3000` 找到正在运行的 OpenDevin。
请参阅[Development.md](Development.md)以获取在没有 Docker 的情况下运行 OpenDevin 的说明。
## 🤖 LLM 后端
OpenDevin 可以与任何 LLM 后端配合使用。
要获取提供的 LM 提供商和模型的完整列表,请参阅
[litellm 文档](https://docs.litellm.ai/docs/providers)。
`LLM_MODEL` 环境变量控制在编程交互中使用哪个模型,
但在 OpenDevin UI 中选择模型将覆盖此设置。
对于某些 LLM可能需要以下环境变量
- `LLM_API_KEY`
- `LLM_BASE_URL`
- `LLM_EMBEDDING_MODEL`
- `LLM_EMBEDDING_DEPLOYMENT_NAME`
- `LLM_API_VERSION`
**Note on Alternative Models:**
Some alternative models may prove more challenging than others.
Fear not, brave adventurer! We will soon publish LLM-specific documentation to guide you on your quest.
If you've already mastered the use of a model other than OpenAI's GPT,
we encourage you to [share your setup instructions with us](https://github.com/OpenDevin/OpenDevin/issues/417).
There is also [documentation for running local models with ollama](./docs/documentation/LOCAL_LLM_GUIDE.md).
## ⭐️ Research Strategy
Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:
1. **Core Technical Research:** Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
2. **Specialist Abilities:** Enhancing the effectiveness of core components through data curation, training methods, and more.
3. **Task Planning:** Developing capabilities for bug detection, codebase management, and optimization.
4. **Evaluation:** Establishing comprehensive evaluation metrics to better understand and improve our models.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🤝 How to Contribute
OpenDevin is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:
- **Code Contributions:** Help us develop the core functionality, the frontend interface, or sandboxing solutions.
- **Research and Evaluation:** Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
- **Feedback and Testing:** Use the OpenDevin toolset, report bugs, suggest features, or provide feedback on usability.
For details, please check [this document](./CONTRIBUTING.md).
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 🤖 Join Our Community
We now have both a Slack workspace for collaborating on building OpenDevin and a Discord server for discussing anything related to the project, LLMs, agents, and more.
- [Slack workspace](https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw)
- [Discord server](https://discord.gg/mBuDGRzzES)
If you would love to contribute, feel free to join our community (note that there is now no need to fill in a [form](https://forms.gle/758d5p6Ve8r2nxxq6)). Let's simplify software engineering together!
🐚 **Code less, make more with OpenDevin.**
[![Star History Chart](https://api.star-history.com/svg?repos=OpenDevin/OpenDevin&type=Date)](https://star-history.com/#OpenDevin/OpenDevin&Date)
## 🛠️ Built With
OpenDevin is built using a combination of powerful frameworks and libraries, providing a robust foundation for its development. Here are the key technologies used in the project:
![FastAPI](https://img.shields.io/badge/FastAPI-black?style=for-the-badge) ![uvicorn](https://img.shields.io/badge/uvicorn-black?style=for-the-badge) ![LiteLLM](https://img.shields.io/badge/LiteLLM-black?style=for-the-badge) ![Docker](https://img.shields.io/badge/Docker-black?style=for-the-badge) ![Ruff](https://img.shields.io/badge/Ruff-black?style=for-the-badge) ![MyPy](https://img.shields.io/badge/MyPy-black?style=for-the-badge) ![LlamaIndex](https://img.shields.io/badge/LlamaIndex-black?style=for-the-badge) ![React](https://img.shields.io/badge/React-black?style=for-the-badge)
Please note that the selection of these technologies is in progress, and additional technologies may be added or existing ones may be removed as the project evolves. We strive to adopt the most suitable and efficient tools to enhance the capabilities of OpenDevin.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
## 📜 License
Distributed under the MIT License. See [`LICENSE`](./LICENSE) for more information.
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>
[contributors-shield]: https://img.shields.io/github/contributors/opendevin/opendevin?style=for-the-badge
[contributors-url]: https://github.com/OpenDevin/OpenDevin/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/opendevin/opendevin?style=for-the-badge
[forks-url]: https://github.com/OpenDevin/OpenDevin/network/members
[stars-shield]: https://img.shields.io/github/stars/opendevin/opendevin?style=for-the-badge
[stars-url]: https://github.com/OpenDevin/OpenDevin/stargazers
[issues-shield]: https://img.shields.io/github/issues/opendevin/opendevin?style=for-the-badge
[issues-url]: https://github.com/OpenDevin/OpenDevin/issues
[license-shield]: https://img.shields.io/github/license/opendevin/opendevin?style=for-the-badge
[license-url]: https://github.com/OpenDevin/OpenDevin/blob/main/LICENSE

41
docs/README.md Normal file

@@ -0,0 +1,41 @@
# Website
This website is built using [Docusaurus](https://docusaurus.io/), a modern static website generator.
### Installation
```
$ yarn
```
### Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ yarn build
```
This command generates static content into the `build` directory, which can be served using any static content hosting service.
### Deployment
Using SSH:
```
$ USE_SSH=true yarn deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.


@@ -1,14 +0,0 @@
# System Architecture Overview
This is a high-level overview of the system architecture. The system is divided into two main components: the frontend and the backend. The frontend is responsible for handling user interactions and displaying the results. The backend is responsible for handling the business logic and executing the agents.
![system_architecture.svg](system_architecture.svg)
This overview is simplified to show the main components and their interactions. For a more detailed view of the backend architecture, see the [Backend Architecture](#backend-architecture) section.
# Backend Architecture
*__Disclaimer__: The backend architecture is a work in progress and is subject to change. The following diagram shows the current architecture of the backend based on the commit that is shown in the footer of the diagram.*
![backend_architecture.svg](backend_architecture.svg)


@@ -1,22 +0,0 @@
# Process for updating the backend architecture diagram
The generation of the backend architecture diagram is partially automated. The diagram is generated from the type hints in the code using the py2puml tool. The diagram is then manually reviewed, adjusted and exported to PNG and SVG.
## Prerequisites
- A running Python environment in which opendevin is executable (per the instructions in the README.md file in the root of the repository)
- [py2puml](https://github.com/lucsorel/py2puml) installed
## Steps
1. Autogenerate the diagram by running the following command from the root of the repository:
```py2puml opendevin opendevin > docs/architecture/backend_architecture.puml```
2. Open the generated file in a PlantUML editor, e.g. Visual Studio Code with the PlantUML extension or [PlantText](https://www.planttext.com/)
3. Review the generated PUML and make all necessary adjustments to the diagram (add missing parts, fix mistakes, improve positioning).
*py2puml creates the diagram based on the type hints in the code, so missing or incorrect type hints may result in an incomplete or incorrect diagram.*
4. Review the diff between the new and the previous diagram and manually check if the changes are correct.
*Make sure not to remove parts that were manually added to the diagram in the past and are still relevant.*
5. Add the commit hash of the commit that was used to generate the diagram to the diagram footer.
6. Export the diagram as PNG and SVG files and replace the existing diagrams in the `docs/architecture` directory. This can be done with a PlantUML editor (e.g. [PlantText](https://www.planttext.com/))

3
docs/babel.config.js Normal file

@@ -0,0 +1,3 @@
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};

128
docs/docusaurus.config.ts Normal file

@@ -0,0 +1,128 @@
import type * as Preset from "@docusaurus/preset-classic";
import type { Config } from "@docusaurus/types";
import { themes as prismThemes } from "prism-react-renderer";
const config: Config = {
title: "OpenDevin",
tagline: "Code Less, Make More",
favicon: "img/logo.png",
// Set the production url of your site here
url: "https://OpenDevin.github.io",
baseUrl: "/OpenDevin/",
// GitHub pages deployment config.
organizationName: "OpenDevin",
projectName: "OpenDevin",
trailingSlash: false,
onBrokenLinks: "throw",
onBrokenMarkdownLinks: "warn",
// Even if you don't use internationalization, you can use this field to set
// useful metadata like html lang. For example, if your site is Chinese, you
// may want to replace "en" with "zh-Hans".
i18n: {
defaultLocale: "en",
locales: ["en"],
},
presets: [
[
"classic",
{
docs: {
path: "modules",
routeBasePath: "modules",
sidebarPath: "./sidebars.ts",
exclude: [
// '**/_*.{js,jsx,ts,tsx,md,mdx}',
// '**/_*/**',
"**/*.test.{js,jsx,ts,tsx}",
"**/__tests__/**",
],
},
blog: {
showReadingTime: true,
},
theme: {
customCss: "./src/css/custom.css",
},
} satisfies Preset.Options,
],
],
themeConfig: {
image: "img/docusaurus.png",
navbar: {
title: "OpenDevin",
logo: {
alt: "OpenDevin",
src: "img/logo.png",
},
items: [
{
type: "docSidebar",
sidebarId: "docsSidebar",
position: "left",
label: "Docs",
},
{
type: "docSidebar",
sidebarId: "apiSidebar",
position: "left",
label: "Codebase",
},
{ to: "/faq", label: "FAQ", position: "left" },
{
href: "https://github.com/OpenDevin/OpenDevin",
label: "GitHub",
position: "right",
},
],
},
footer: {
style: "dark",
links: [
{
title: "OpenDevin",
items: [
{
label: "Docs",
to: "/modules/usage/intro",
},
],
},
{
title: "Community",
items: [
{
label: "Slack",
href: "https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw",
},
{
label: "Discord",
href: "https://discord.gg/mBuDGRzzES",
},
],
},
{
title: "More",
items: [
{
label: "GitHub",
href: "https://github.com/OpenDevin/OpenDevin",
},
],
},
],
copyright: `Copyright © ${new Date().getFullYear()} OpenDevin`,
},
prism: {
theme: prismThemes.oneLight,
darkTheme: prismThemes.oneDark,
},
} satisfies Preset.ThemeConfig,
};
export default config;

Binary file not shown.


View File

@ -0,0 +1,34 @@
---
sidebar_label: agent
title: agenthub.SWE_agent.agent
---
## SWEAgent Objects
```python
class SWEAgent(Agent)
```
An attempt to recreate swe_agent with output parsing, prompting style, and an Agent-Computer Interface (ACI).
SWE-agent includes ACI functions like 'goto', 'search_for', 'edit', 'scroll', and 'run'.
#### step
```python
def step(state: State) -> Action
```
SWE-Agent step:
1. Get context - past actions, custom commands, current step
2. Perform think-act - prompt model for action and reasoning
3. Catch errors - ensure model takes action (5 attempts max)
#### reset
```python
def reset() -> None
```
Resets the agent.

View File

@ -0,0 +1,34 @@
---
sidebar_label: parser
title: agenthub.SWE_agent.parser
---
#### get\_action\_from\_string
```python
def get_action_from_string(command_string: str,
path: str,
line: int,
thoughts: str = '') -> Action | None
```
Parses the command string to find which command the agent wants to run,
converts it into a proper Action, and returns it.
#### parse\_command
```python
def parse_command(input_str: str, path: str, line: int)
```
Parses a given string and separates the command (enclosed in triple backticks) from any accompanying text.
**Arguments**:
- `input_str` _str_ - The input string to be parsed.
**Returns**:
- `tuple` - A tuple containing the command and the accompanying text (if any).
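As a rough illustration of the parsing described above, a minimal `parse_command` might separate the fenced command from the accompanying text like this (a hypothetical simplification; the real parser also receives the current path and line):

```python
import re

def parse_command(input_str: str):
    """Split a response into (command, accompanying_text).

    The command is whatever sits inside the first triple-backtick fence;
    everything outside the fence is treated as accompanying text.
    """
    match = re.search(r"```(.*?)```", input_str, re.DOTALL)
    if match is None:
        return None, input_str.strip()
    command = match.group(1).strip()
    thoughts = (input_str[:match.start()] + input_str[match.end():]).strip()
    return command, thoughts

command, thoughts = parse_command("I will look around first.\n```search_for foo```")
```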

View File

@ -0,0 +1,50 @@
---
sidebar_label: codeact_agent
title: agenthub.codeact_agent.codeact_agent
---
## CodeActAgent Objects
```python
class CodeActAgent(Agent)
```
The Code Act Agent is a minimalist agent.
The agent works by passing the model a list of action-observation pairs and prompting the model to take the next step.
#### \_\_init\_\_
```python
def __init__(llm: LLM) -> None
```
Initializes a new instance of the CodeActAgent class.
**Arguments**:
- llm (LLM): The llm to be used by this agent
#### step
```python
def step(state: State) -> Action
```
Performs one step using the Code Act Agent.
This includes gathering info on previous steps and prompting the model to make a command to execute.
**Arguments**:
- state (State): used to get updated info and background commands
**Returns**:
- CmdRunAction(command) - command action to run
- AgentEchoAction(content=INVALID_INPUT_MESSAGE) - invalid command output
**Raises**:
- NotImplementedError - for actions other than CmdOutputObservation or AgentMessageObservation
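The action-observation prompting loop can be sketched roughly as follows (`format_history` is a hypothetical helper; the actual prompt format lives in the agent's prompt templates):

```python
def format_history(pairs):
    """Render past action-observation pairs into a transcript that is
    prepended to the model prompt, as the description above suggests."""
    lines = []
    for action, observation in pairs:
        lines.append(f"ACTION:\n{action}")
        lines.append(f"OBSERVATION:\n{observation}")
    return "\n".join(lines)

transcript = format_history([("ls", "README.md  opendevin/")])
```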

View File

@ -0,0 +1,45 @@
---
sidebar_label: agent
title: agenthub.delegator_agent.agent
---
## DelegatorAgent Objects
```python
class DelegatorAgent(Agent)
```
The Delegator Agent delegates tasks to specialized sub-agents.
At every step, the agent is given its previous action-observation pairs, the current task, and a hint based on the last action taken.
#### \_\_init\_\_
```python
def __init__(llm: LLM)
```
Initialize the Delegator Agent with an LLM
**Arguments**:
- llm (LLM): The llm to be used by this agent
#### step
```python
def step(state: State) -> Action
```
Checks whether the current step is completed; returns AgentFinishAction if so.
Otherwise, creates a plan prompt, sends it to the model for inference, and returns the result as the next action.
**Arguments**:
- state (State): The current state given the previous actions and observations
**Returns**:
- AgentFinishAction: If the last state was 'completed', 'verified', or 'abandoned'
- Action: The next action to take based on the LLM response

View File

@ -0,0 +1,15 @@
---
sidebar_label: agent
title: agenthub.dummy_agent.agent
---
Module for a Dummy agent.
## DummyAgent Objects
```python
class DummyAgent(Agent)
```
A dummy agent that does nothing but can be used in testing.

View File

@ -0,0 +1,30 @@
---
sidebar_label: agent
title: agenthub.micro.agent
---
#### my\_encoder
```python
def my_encoder(obj)
```
Encodes objects as dictionaries
**Arguments**:
- obj (Object): An object that will be converted
**Returns**:
- dict: If the object can be converted it is returned in dict format
#### to\_json
```python
def to_json(obj, **kwargs)
```
Serializes an object to a JSON string.

View File

@ -0,0 +1,62 @@
---
sidebar_label: agent
title: agenthub.monologue_agent.agent
---
## MonologueAgent Objects
```python
class MonologueAgent(Agent)
```
The Monologue Agent utilizes long and short term memory to complete tasks.
Long term memory is stored as a LongTermMemory object and the model uses it to search for examples from the past.
Short term memory is stored as a Monologue object and the model can condense it as necessary.
#### \_\_init\_\_
```python
def __init__(llm: LLM)
```
Initializes the Monologue Agent with an llm, monologue, and memory.
**Arguments**:
- llm (LLM): The llm to be used by this agent
#### step
```python
def step(state: State) -> Action
```
Modifies the current state by adding the most recent actions and observations, then prompts the model to think about its next action using the monologue, memory, and hint.
**Arguments**:
- state (State): The current state based on previous steps taken
**Returns**:
- Action: The next action to take based on LLM response
#### search\_memory
```python
def search_memory(query: str) -> List[str]
```
Uses VectorIndexRetriever to find related memories within the long term memory.
Uses search to produce top 10 results.
**Arguments**:
- query (str): The query that we want to find related memories for
**Returns**:
- List[str]: A list of top 10 text results that matched the query

View File

@ -0,0 +1,38 @@
---
sidebar_label: json
title: agenthub.monologue_agent.utils.json
---
#### my\_encoder
```python
def my_encoder(obj)
```
Encodes objects as dictionaries
**Arguments**:
- obj (Object): An object that will be converted
**Returns**:
- dict: If the object can be converted it is returned in dict format
#### dumps
```python
def dumps(obj, **kwargs)
```
Serializes an object to a JSON string.
#### loads
```python
def loads(s, **kwargs)
```
Creates a JSON object from a string.

View File

@ -0,0 +1,52 @@
---
sidebar_label: memory
title: agenthub.monologue_agent.utils.memory
---
## LongTermMemory Objects
```python
class LongTermMemory()
```
Responsible for storing information that the agent can call on later for better insights and context.
Uses chromadb to store and search through memories.
#### \_\_init\_\_
```python
def __init__()
```
Initialize the chromadb and set up ChromaVectorStore for later use.
#### add\_event
```python
def add_event(event: dict)
```
Adds a new event to the long term memory with a unique id.
**Arguments**:
- event (dict): The new event to be added to memory
#### search
```python
def search(query: str, k: int = 10)
```
Searches through the current memory using VectorIndexRetriever
**Arguments**:
- query (str): A query to match search results to
- k (int): Number of top results to return
**Returns**:
- List[str]: List of top k results found in current memory
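To make the search contract concrete, here is an in-memory stand-in for the chromadb-backed store described above. It ranks memories by naive word overlap instead of vector similarity, so it is only an illustration of the interface, not of the retrieval quality:

```python
class LongTermMemory:
    """An in-memory sketch of the store above: `search` ranks stored
    events by word overlap with the query rather than vector similarity."""

    def __init__(self):
        self.events = []

    def add_event(self, event: dict) -> None:
        self.events.append(event)

    def search(self, query: str, k: int = 10):
        query_words = set(query.lower().split())
        ranked = sorted(
            (str(e) for e in self.events),
            key=lambda text: -len(query_words & set(text.lower().split())),
        )
        return ranked[:k]

memory = LongTermMemory()
memory.add_event({"content": "fixed the parser bug"})
memory.add_event({"content": "updated the readme"})
```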

View File

@ -0,0 +1,80 @@
---
sidebar_label: monologue
title: agenthub.monologue_agent.utils.monologue
---
## Monologue Objects
```python
class Monologue()
```
The monologue is a representation of the agent's internal monologue where it can think.
The agent can use this monologue however it wants.
#### \_\_init\_\_
```python
def __init__()
```
Initialize the empty list of thoughts
#### add\_event
```python
def add_event(t: dict)
```
Adds an event to memory if it is a valid event.
**Arguments**:
- t (dict): The thought that we want to add to memory
**Raises**:
- AgentEventTypeError: If t is not a dict
#### get\_thoughts
```python
def get_thoughts()
```
Get the current thoughts of the agent.
**Returns**:
- List: The list of thoughts that the agent has.
#### get\_total\_length
```python
def get_total_length()
```
Gives the total number of characters in all thoughts
**Returns**:
- Int: Total number of chars in thoughts.
#### condense
```python
def condense(llm: LLM)
```
Attempts to condense the monologue using the LLM
**Arguments**:
- llm (LLM): llm to be used for summarization
**Raises**:
- Exception: The same exception raised by the LLM or while processing the response
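The shape of this short-term memory can be sketched as below; `condense` is stubbed to accept any summarizer callable in place of an LLM, so the example stays self-contained:

```python
class Monologue:
    """A sketch of the short-term memory above. The real condense()
    calls an LLM; here it takes any summarizer callable instead."""

    def __init__(self):
        self.thoughts = []

    def add_event(self, t: dict) -> None:
        if not isinstance(t, dict):
            # Stands in for AgentEventTypeError in the real code.
            raise TypeError("event must be a dict")
        self.thoughts.append(t)

    def get_total_length(self) -> int:
        # Character count across all thoughts: a cheap proxy for prompt size.
        return sum(len(str(t)) for t in self.thoughts)

    def condense(self, summarize) -> None:
        # Replace the full history with a single summarized thought.
        self.thoughts = [{"content": summarize(self.thoughts)}]

m = Monologue()
m.add_event({"content": "ran ls"})
m.add_event({"content": "read the README"})
m.condense(lambda thoughts: f"{len(thoughts)} earlier steps")
```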

View File

@ -0,0 +1,73 @@
---
sidebar_label: prompts
title: agenthub.monologue_agent.utils.prompts
---
#### get\_summarize\_monologue\_prompt
```python
def get_summarize_monologue_prompt(thoughts: List[dict])
```
Gets the prompt for summarizing the monologue
**Returns**:
- str: A formatted string with the current monologue within the prompt
#### get\_request\_action\_prompt
```python
def get_request_action_prompt(
task: str,
thoughts: List[dict],
background_commands_obs: List[CmdOutputObservation] = [])
```
Gets the action prompt formatted with appropriate values.
**Arguments**:
- task (str): The current task the agent is trying to accomplish
- thoughts (List[dict]): The agent's current thoughts
- background_commands_obs (List[CmdOutputObservation]): List of all observed background commands running
**Returns**:
- str: Formatted prompt string with hint, task, monologue, and background included
#### parse\_action\_response
```python
def parse_action_response(response: str) -> Action
```
Parses a string to find an action within it
**Arguments**:
- response (str): The string to be parsed
**Returns**:
- Action: The action that was found in the response string
#### parse\_summary\_response
```python
def parse_summary_response(response: str) -> List[dict]
```
Parses a summary of the monologue
**Arguments**:
- response (str): The response string to be parsed
**Returns**:
- List[dict]: The list of summaries output by the model
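A simplified `parse_action_response` might extract the first JSON object from the model output, assuming (hypothetically) that actions are serialized as JSON; the real function then maps the decoded dict onto an Action class:

```python
import json

def parse_action_response(response: str) -> dict:
    """Find the outermost JSON object in a model response and decode it."""
    start = response.find("{")
    end = response.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no action found in response")
    return json.loads(response[start:end + 1])

action = parse_action_response('Thinking out loud. {"action": "run", "args": {"command": "ls"}}')
```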

View File

@ -0,0 +1,45 @@
---
sidebar_label: agent
title: agenthub.planner_agent.agent
---
## PlannerAgent Objects
```python
class PlannerAgent(Agent)
```
The planner agent utilizes a special prompting strategy to create long term plans for solving problems.
The agent is given its previous action-observation pairs, current task, and hint based on last action taken at every step.
#### \_\_init\_\_
```python
def __init__(llm: LLM)
```
Initialize the Planner Agent with an LLM
**Arguments**:
- llm (LLM): The llm to be used by this agent
#### step
```python
def step(state: State) -> Action
```
Checks whether the current step is completed; returns AgentFinishAction if so.
Otherwise, creates a plan prompt, sends it to the model for inference, and returns the result as the next action.
**Arguments**:
- state (State): The current state given the previous actions and observations
**Returns**:
- AgentFinishAction: If the last state was 'completed', 'verified', or 'abandoned'
- Action: The next action to take based on the LLM response

View File

@ -0,0 +1,49 @@
---
sidebar_label: prompt
title: agenthub.planner_agent.prompt
---
#### get\_hint
```python
def get_hint(latest_action_id: str) -> str
```
Returns action type hint based on given action_id
#### get\_prompt
```python
def get_prompt(plan: Plan, history: List[Tuple[Action, Observation]]) -> str
```
Gets the prompt for the planner agent.
Formatted with the most recent action-observation pairs, current task, and hint based on last action
**Arguments**:
- plan (Plan): The original plan outlined by the user with LLM defined tasks
- history (List[Tuple[Action, Observation]]): List of corresponding action-observation pairs
**Returns**:
- str: The formatted string prompt with historical values
#### parse\_response
```python
def parse_response(response: str) -> Action
```
Parses the model output to find a valid action to take
**Arguments**:
- response (str): A response from the model that potentially contains an Action.
**Returns**:
- Action: A valid next action to perform from model output

View File

@ -0,0 +1,9 @@
---
sidebar_label: action
title: opendevin.action
---
#### ACTION\_TYPE\_TO\_CLASS

View File

@ -0,0 +1,14 @@
---
sidebar_label: base
title: opendevin.action.base
---
## NullAction Objects
```python
@dataclass
class NullAction(NotExecutableAction)
```
An action that does nothing.

View File

@ -0,0 +1,16 @@
---
sidebar_label: fileop
title: opendevin.action.fileop
---
## FileReadAction Objects
```python
@dataclass
class FileReadAction(ExecutableAction)
```
Reads a file from a given path.
Can be set to read specific lines using start and end.
Defaults to lines 0:-1 (the whole file).

View File

@ -0,0 +1,46 @@
---
sidebar_label: github
title: opendevin.action.github
---
## GitHubPushAction Objects
```python
@dataclass
class GitHubPushAction(ExecutableAction)
```
This pushes the current branch to github.
To use this, you need to set the GITHUB_TOKEN environment variable.
The agent will return a message with a URL that you can click to make a pull
request.
**Attributes**:
- `owner` - The owner of the source repo
- `repo` - The name of the source repo
- `branch` - The branch to push
- `action` - The action identifier
## GitHubSendPRAction Objects
```python
@dataclass
class GitHubSendPRAction(ExecutableAction)
```
An action to send a github PR.
To use this, you need to set the GITHUB_TOKEN environment variable.
**Attributes**:
- `owner` - The owner of the source repo
- `repo` - The name of the source repo
- `title` - The title of the PR
- `head` - The branch to send the PR from
- `head_repo` - The repo to send the PR from
- `base` - The branch to send the PR to
- `body` - The body of the PR

View File

@ -0,0 +1,14 @@
---
sidebar_label: tasks
title: opendevin.action.tasks
---
## TaskStateChangedAction Objects
```python
@dataclass
class TaskStateChangedAction(NotExecutableAction)
```
Fake action, just to notify the client that a task state has changed.

View File

@ -0,0 +1,121 @@
---
sidebar_label: agent
title: opendevin.agent
---
## Agent Objects
```python
class Agent(ABC)
```
This abstract base class is a general interface for an agent dedicated to
executing a specific instruction and allowing human interaction with the
agent during execution.
It tracks the execution status and maintains a history of interactions.
#### complete
```python
@property
def complete() -> bool
```
Indicates whether the current instruction execution is complete.
**Returns**:
- complete (bool): True if execution is complete; False otherwise.
#### step
```python
@abstractmethod
def step(state: 'State') -> 'Action'
```
Starts the execution of the assigned instruction. This method should
be implemented by subclasses to define the specific execution logic.
#### search\_memory
```python
@abstractmethod
def search_memory(query: str) -> List[str]
```
Searches the agent's memory for information relevant to the given query.
**Arguments**:
- query (str): The query to search for in the agent's memory.
**Returns**:
- response (List[str]): The list of memories relevant to the query.
#### reset
```python
def reset() -> None
```
Resets the agent's execution status and clears the history. This method can be used
to prepare the agent for restarting the instruction or cleaning up before destruction.
#### register
```python
@classmethod
def register(cls, name: str, agent_cls: Type['Agent'])
```
Registers an agent class in the registry.
**Arguments**:
- name (str): The name to register the class under.
- agent_cls (Type['Agent']): The class to register.
**Raises**:
- AgentAlreadyRegisteredError: If name already registered
#### get\_cls
```python
@classmethod
def get_cls(cls, name: str) -> Type['Agent']
```
Retrieves an agent class from the registry.
**Arguments**:
- name (str): The name of the class to retrieve
**Returns**:
- agent_cls (Type['Agent']): The class registered under the specified name.
**Raises**:
- AgentNotRegisteredError: If name not registered
#### list\_agents
```python
@classmethod
def list_agents(cls) -> list[str]
```
Retrieves the list of all agent names from the registry.
**Raises**:
- AgentNotRegisteredError: If no agent is registered
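The registry described above follows a common class-registry pattern. A minimal sketch (using built-in exceptions in place of the dedicated `AgentAlreadyRegisteredError`/`AgentNotRegisteredError` classes, and a hypothetical `EchoAgent` for illustration):

```python
from typing import Dict, Type

class Agent:
    """Sketch of the registry pattern above; the real code raises
    dedicated error classes rather than ValueError/KeyError."""
    _registry: Dict[str, Type["Agent"]] = {}

    @classmethod
    def register(cls, name: str, agent_cls: Type["Agent"]) -> None:
        if name in cls._registry:
            raise ValueError(f"agent {name!r} already registered")
        cls._registry[name] = agent_cls

    @classmethod
    def get_cls(cls, name: str) -> Type["Agent"]:
        if name not in cls._registry:
            raise KeyError(f"agent {name!r} not registered")
        return cls._registry[name]

    @classmethod
    def list_agents(cls):
        if not cls._registry:
            raise KeyError("no agent registered")
        return list(cls._registry)

class EchoAgent(Agent):
    """A trivial agent used only to exercise the registry."""

Agent.register("EchoAgent", EchoAgent)
```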

View File

@ -0,0 +1,13 @@
---
sidebar_label: config
title: opendevin.config
---
#### get
```python
def get(key: ConfigType, required: bool = False)
```
Get a key from the environment variables or config.toml or default configs.
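The lookup precedence can be sketched as follows (the `DEFAULTS` values and the `toml_config` dict are illustrative stand-ins, not the project's actual defaults):

```python
import os

DEFAULTS = {"LLM_MODEL": "gpt-4-demo", "WORKSPACE_DIR": "./workspace"}  # illustrative
toml_config = {}  # stand-in for values parsed from config.toml

def get(key: str, required: bool = False):
    """Resolve a key with the precedence described above:
    environment variable, then config.toml, then built-in default."""
    value = os.environ.get(key, toml_config.get(key, DEFAULTS.get(key)))
    if required and value is None:
        raise KeyError(f"config key {key!r} is required but not set")
    return value
```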

View File

@ -0,0 +1,36 @@
---
sidebar_label: agent_controller
title: opendevin.controller.agent_controller
---
## AgentController Objects
```python
class AgentController()
```
#### setup\_task
```python
async def setup_task(task: str, inputs: dict = {})
```
Sets up the agent controller with a task.
#### start
```python
async def start(task: str)
```
Starts the agent controller with a task.
If the task has been run before, it will continue from the last step.
#### get\_task\_state
```python
def get_task_state()
```
Returns the current state of the agent task.

View File

@ -0,0 +1,40 @@
---
sidebar_label: files
title: opendevin.files
---
## WorkspaceFile Objects
```python
class WorkspaceFile()
```
#### to\_dict
```python
def to_dict() -> Dict[str, Any]
```
Converts the File object to a dictionary.
**Returns**:
The dictionary representation of the File object.
#### get\_folder\_structure
```python
def get_folder_structure(workdir: Path) -> WorkspaceFile
```
Gets the folder structure of a directory.
**Arguments**:
- `workdir` - The directory path.
**Returns**:
The folder structure.
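A simplified recursive walk conveys the idea; this sketch returns nested dicts rather than `WorkspaceFile` objects:

```python
import tempfile
from pathlib import Path

def get_folder_structure(workdir: Path) -> dict:
    """Recursively describe a directory as nested dicts, a simplified
    stand-in for the WorkspaceFile tree described above."""
    return {
        "name": workdir.name,
        "children": [
            get_folder_structure(p) if p.is_dir() else {"name": p.name}
            for p in sorted(workdir.iterdir())
        ],
    }

root = Path(tempfile.mkdtemp())
(root / "src").mkdir()
(root / "src" / "main.py").write_text("print('hi')\n")
(root / "README.md").write_text("# demo\n")
tree = get_folder_structure(root)
```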

View File

@ -0,0 +1,56 @@
---
sidebar_label: llm
title: opendevin.llm.llm
---
## LLM Objects
```python
class LLM()
```
The LLM class represents a Language Model instance.
#### \_\_init\_\_
```python
def __init__(model=DEFAULT_MODEL_NAME,
api_key=DEFAULT_API_KEY,
base_url=DEFAULT_BASE_URL,
api_version=DEFAULT_API_VERSION,
num_retries=LLM_NUM_RETRIES,
retry_min_wait=LLM_RETRY_MIN_WAIT,
retry_max_wait=LLM_RETRY_MAX_WAIT,
llm_timeout=LLM_TIMEOUT,
llm_max_return_tokens=LLM_MAX_RETURN_TOKENS)
```
**Arguments**:
- `model` _str, optional_ - The name of the language model. Defaults to LLM_MODEL.
- `api_key` _str, optional_ - The API key for accessing the language model. Defaults to LLM_API_KEY.
- `base_url` _str, optional_ - The base URL for the language model API. Defaults to LLM_BASE_URL. Not necessary for OpenAI.
- `api_version` _str, optional_ - The version of the API to use. Defaults to LLM_API_VERSION. Not necessary for OpenAI.
- `num_retries` _int, optional_ - The number of retries for API calls. Defaults to LLM_NUM_RETRIES.
- `retry_min_wait` _int, optional_ - The minimum time to wait between retries in seconds. Defaults to LLM_RETRY_MIN_TIME.
- `retry_max_wait` _int, optional_ - The maximum time to wait between retries in seconds. Defaults to LLM_RETRY_MAX_TIME.
- `llm_timeout` _int, optional_ - The maximum time to wait for a response in seconds. Defaults to LLM_TIMEOUT.
- `llm_max_return_tokens` _int, optional_ - The maximum number of tokens to return. Defaults to LLM_MAX_RETURN_TOKENS.
**Attributes**:
- `model_name` _str_ - The name of the language model.
- `api_key` _str_ - The API key for accessing the language model.
- `base_url` _str_ - The base URL for the language model API.
- `api_version` _str_ - The version of the API to use.
#### completion
```python
@property
def completion()
```
Decorator for the litellm completion function.
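The retry parameters above suggest an exponential-backoff wrapper around the underlying litellm call. A rough sketch of that behavior (the real class uses the tenacity library; `retry_on_exception` and `flaky_completion` are hypothetical names):

```python
import time
from functools import wraps

def retry_on_exception(num_retries: int = 3, min_wait: float = 0.0, max_wait: float = 2.0):
    """Retry a callable with exponential backoff, as the completion
    decorator's num_retries/retry_min_wait/retry_max_wait imply."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            wait = min_wait
            for attempt in range(1, num_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == num_retries:
                        raise  # out of retries: surface the error
                    time.sleep(wait)
                    wait = min(max_wait, max(2 * wait, min_wait))
        return wrapper
    return decorator

calls = {"n": 0}

@retry_on_exception(num_retries=3)
def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "ok"
```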

View File

@ -0,0 +1,92 @@
---
sidebar_label: logger
title: opendevin.logger
---
#### get\_console\_handler
```python
def get_console_handler()
```
Returns a console handler for logging.
#### get\_file\_handler
```python
def get_file_handler()
```
Returns a file handler for logging.
#### log\_uncaught\_exceptions
```python
def log_uncaught_exceptions(ex_cls, ex, tb)
```
Logs uncaught exceptions along with the traceback.
**Arguments**:
- `ex_cls` _type_ - The type of the exception.
- `ex` _Exception_ - The exception instance.
- `tb` _traceback_ - The traceback object.
**Returns**:
None
## LlmFileHandler Objects
```python
class LlmFileHandler(logging.FileHandler)
```
LLM prompt and response logging.
#### \_\_init\_\_
```python
def __init__(filename, mode='a', encoding='utf-8', delay=False)
```
Initializes an instance of LlmFileHandler.
**Arguments**:
- `filename` _str_ - The name of the log file.
- `mode` _str, optional_ - The file mode. Defaults to 'a'.
- `encoding` _str, optional_ - The file encoding. Defaults to 'utf-8'.
- `delay` _bool, optional_ - Whether to delay file opening. Defaults to False.
#### emit
```python
def emit(record)
```
Emits a log record.
**Arguments**:
- `record` _logging.LogRecord_ - The log record to emit.
#### get\_llm\_prompt\_file\_handler
```python
def get_llm_prompt_file_handler()
```
Returns a file handler for LLM prompt logging.
#### get\_llm\_response\_file\_handler
```python
def get_llm_response_file_handler()
```
Returns a file handler for LLM response logging.

View File

@ -0,0 +1,29 @@
---
sidebar_label: main
title: opendevin.main
---
#### read\_task\_from\_file
```python
def read_task_from_file(file_path: str) -> str
```
Read task from the specified file.
#### read\_task\_from\_stdin
```python
def read_task_from_stdin() -> str
```
Read task from stdin.
#### main
```python
async def main(task_str: str = '')
```
Main coroutine to run the agent controller with task input flexibility.

View File

@ -0,0 +1,9 @@
---
sidebar_label: observation
title: opendevin.observation
---
#### OBSERVATION\_TYPE\_TO\_CLASS

View File

@ -0,0 +1,49 @@
---
sidebar_label: base
title: opendevin.observation.base
---
## Observation Objects
```python
@dataclass
class Observation()
```
This data class represents an observation of the environment.
#### to\_dict
```python
def to_dict() -> dict
```
Converts the observation to a dictionary and adds user message.
#### to\_memory
```python
def to_memory() -> dict
```
Converts the observation to a dictionary.
#### message
```python
@property
def message() -> str
```
Returns a message describing the observation.
## NullObservation Objects
```python
@dataclass
class NullObservation(Observation)
```
This data class represents a null observation.
This is used when the produced action is NOT executable.

View File

@ -0,0 +1,14 @@
---
sidebar_label: browse
title: opendevin.observation.browse
---
## BrowserOutputObservation Objects
```python
@dataclass
class BrowserOutputObservation(Observation)
```
This data class represents the output of a browser.

View File

@ -0,0 +1,15 @@
---
sidebar_label: delegate
title: opendevin.observation.delegate
---
## AgentDelegateObservation Objects
```python
@dataclass
class AgentDelegateObservation(Observation)
```
This data class represents a delegate observation.
This is used when the produced action is NOT executable.

View File

@ -0,0 +1,14 @@
---
sidebar_label: error
title: opendevin.observation.error
---
## AgentErrorObservation Objects
```python
@dataclass
class AgentErrorObservation(Observation)
```
This data class represents an error encountered by the agent.

View File

@ -0,0 +1,23 @@
---
sidebar_label: files
title: opendevin.observation.files
---
## FileReadObservation Objects
```python
@dataclass
class FileReadObservation(Observation)
```
This data class represents the content of a file.
## FileWriteObservation Objects
```python
@dataclass
class FileWriteObservation(Observation)
```
This data class represents a file write operation.

View File

@ -0,0 +1,23 @@
---
sidebar_label: message
title: opendevin.observation.message
---
## UserMessageObservation Objects
```python
@dataclass
class UserMessageObservation(Observation)
```
This data class represents a message sent by the user.
## AgentMessageObservation Objects
```python
@dataclass
class AgentMessageObservation(Observation)
```
This data class represents a message sent by the agent.

View File

@ -0,0 +1,14 @@
---
sidebar_label: recall
title: opendevin.observation.recall
---
## AgentRecallObservation Objects
```python
@dataclass
class AgentRecallObservation(Observation)
```
This data class represents a list of memories recalled by the agent.

View File

@ -0,0 +1,14 @@
---
sidebar_label: run
title: opendevin.observation.run
---
## CmdOutputObservation Objects
```python
@dataclass
class CmdOutputObservation(Observation)
```
This data class represents the output of a command.

View File

@ -0,0 +1,182 @@
---
sidebar_label: plan
title: opendevin.plan
---
## Task Objects
```python
class Task()
```
#### \_\_init\_\_
```python
def __init__(parent: 'Task | None',
goal: str,
state: str = OPEN_STATE,
subtasks: List = [])
```
Initializes a new instance of the Task class.
**Arguments**:
- `parent` - The parent task, or None if it is the root task.
- `goal` - The goal of the task.
- `state` - The initial state of the task.
- `subtasks` - A list of subtasks associated with this task.
#### to\_string
```python
def to_string(indent='')
```
Returns a string representation of the task and its subtasks.
**Arguments**:
- `indent` - The indentation string for formatting the output.
**Returns**:
A string representation of the task and its subtasks.
#### to\_dict
```python
def to_dict()
```
Returns a dictionary representation of the task.
**Returns**:
A dictionary containing the task's attributes.
#### set\_state
```python
def set_state(state)
```
Sets the state of the task and its subtasks.
**Arguments**:
- `state` - The new state of the task.
**Raises**:
- `PlanInvalidStateError` - If the provided state is invalid.
#### get\_current\_task
```python
def get_current_task() -> 'Task | None'
```
Retrieves the current task in progress.
**Returns**:
The current task in progress, or None if no task is in progress.
## Plan Objects
```python
class Plan()
```
Represents a plan consisting of tasks.
**Attributes**:
- `main_goal` - The main goal of the plan.
- `task` - The root task of the plan.
#### \_\_init\_\_
```python
def __init__(task: str)
```
Initializes a new instance of the Plan class.
**Arguments**:
- `task` - The main goal of the plan.
#### \_\_str\_\_
```python
def __str__()
```
Returns a string representation of the plan.
**Returns**:
A string representation of the plan.
#### get\_task\_by\_id
```python
def get_task_by_id(id: str) -> Task
```
Retrieves a task by its ID.
**Arguments**:
- `id` - The ID of the task.
**Returns**:
The task with the specified ID.
**Raises**:
- `ValueError` - If the provided task ID is invalid or does not exist.
#### add\_subtask
```python
def add_subtask(parent_id: str, goal: str, subtasks: List = [])
```
Adds a subtask to a parent task.
**Arguments**:
- `parent_id` - The ID of the parent task.
- `goal` - The goal of the subtask.
- `subtasks` - A list of subtasks associated with the new subtask.
#### set\_subtask\_state
```python
def set_subtask_state(id: str, state: str)
```
Sets the state of a subtask.
**Arguments**:
- `id` - The ID of the subtask.
- `state` - The new state of the subtask.
#### get\_current\_task
```python
def get_current_task()
```
Retrieves the current task in progress.
**Returns**:
The current task in progress, or None if no task is in progress.
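The "current task" semantics above can be sketched as a depth-first search for the first open leaf in the task tree (a simplified model; the real Task also tracks IDs, parents, and more states):

```python
OPEN_STATE = "open"
COMPLETED_STATE = "completed"

class Task:
    """Sketch of the task tree above: the task 'in progress' is the
    first open leaf found by depth-first search."""

    def __init__(self, goal, subtasks=None, state=OPEN_STATE):
        self.goal = goal
        self.state = state
        self.subtasks = subtasks or []

    def get_current_task(self):
        for sub in self.subtasks:
            found = sub.get_current_task()
            if found is not None:
                return found
        return self if self.state == OPEN_STATE else None

plan_root = Task("ship the feature", [
    Task("write the code", state=COMPLETED_STATE),
    Task("write the tests"),
])
```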

View File

@ -0,0 +1,19 @@
---
sidebar_label: sandbox
title: opendevin.sandbox.e2b.sandbox
---
## E2BBox Objects
```python
class E2BBox(Sandbox)
```
#### copy\_to
```python
def copy_to(host_src: str, sandbox_dest: str, recursive: bool = False)
```
Copies a local file or directory to the sandbox.

View File

@ -0,0 +1,16 @@
---
sidebar_label: jupyter
title: opendevin.sandbox.plugins.jupyter
---
## JupyterRequirement Objects
```python
@dataclass
class JupyterRequirement(PluginRequirement)
```
#### host\_src
The directory of this file (sandbox/plugins/jupyter)

View File

@ -0,0 +1,21 @@
---
sidebar_label: mixin
title: opendevin.sandbox.plugins.mixin
---
## PluginMixin Objects
```python
class PluginMixin()
```
Mixin for Sandbox to support plugins.
#### init\_plugins
```python
def init_plugins(requirements: List[PluginRequirement])
```
Loads the given plugins into the sandbox.

View File

@ -0,0 +1,14 @@
---
sidebar_label: requirement
title: opendevin.sandbox.plugins.requirement
---
## PluginRequirement Objects
```python
@dataclass
class PluginRequirement()
```
Requirement for a plugin.

View File

@ -0,0 +1,76 @@
---
sidebar_label: action
title: opendevin.schema.action
---
## ActionTypeSchema Objects
```python
class ActionTypeSchema(BaseModel)
```
#### INIT
Initializes the agent. Only sent by client.
#### START
Starts a new development task. Only sent by the client.
#### READ
Reads the content of a file.
#### WRITE
Writes the content to a file.
#### RUN
Runs a command.
#### KILL
Kills a background command.
#### BROWSE
Opens a web page.
#### RECALL
Searches long-term memory.
#### THINK
Allows the agent to make a plan, set a goal, or record thoughts.
#### DELEGATE
Delegates a task to another agent.
#### FINISH
If you're absolutely certain that you've completed your task and have tested your work,
use the finish action to stop working.
#### PAUSE
Pauses the task.
#### RESUME
Resumes the task.
#### STOP
Stops the task. Must send a start action to restart a new task.
#### PUSH
Push a branch to github.
#### SEND\_PR
Send a PR to github.

View File

@ -0,0 +1,35 @@
---
sidebar_label: observation
title: opendevin.schema.observation
---
## ObservationTypeSchema Objects
```python
class ObservationTypeSchema(BaseModel)
```
#### READ
The content of a file
#### BROWSE
The HTML content of a URL
#### RUN
The output of a command
#### RECALL
The result of a search
#### CHAT
A message from the user
#### DELEGATE
The result of a task delegated to another agent

View File

@ -0,0 +1,57 @@
---
sidebar_label: task
title: opendevin.schema.task
---
## TaskState Objects
```python
class TaskState(str, Enum)
```
#### INIT
Initial state of the task.
#### RUNNING
The task is running.
#### PAUSED
The task is paused.
#### STOPPED
The task is stopped.
#### FINISHED
The task is finished.
#### ERROR
An error occurred during the task.
## TaskStateAction Objects
```python
class TaskStateAction(str, Enum)
```
#### START
Starts the task.
#### PAUSE
Pauses the task.
#### RESUME
Resumes the task.
#### STOP
Stops the task.

View File

@ -0,0 +1,132 @@
---
sidebar_label: agent
title: opendevin.server.agent.agent
---
## AgentUnit Objects
```python
class AgentUnit()
```
Represents a session with an agent.
**Attributes**:
- `controller` - The AgentController instance for controlling the agent.
- `agent_task` - The task representing the agent's execution.
#### \_\_init\_\_
```python
def __init__(sid)
```
Initializes a new instance of the AgentUnit class.
#### send\_error
```python
async def send_error(message)
```
Sends an error message to the client.
**Arguments**:
- `message` - The error message to send.
#### send\_message
```python
async def send_message(message)
```
Sends a message to the client.
**Arguments**:
- `message` - The message to send.
#### send
```python
async def send(data)
```
Sends data to the client.
**Arguments**:
- `data` - The data to send.
#### dispatch
```python
async def dispatch(action: str | None, data: dict)
```
Dispatches actions to the agent from the client.
#### get\_arg\_or\_default
```python
def get_arg_or_default(_args: dict, key: ConfigType) -> str
```
Gets an argument from the args dictionary or the default value.
**Arguments**:
- `_args` - The args dictionary.
- `key` - The key to get.
**Returns**:
The value of the key or the default value.
#### create\_controller
```python
async def create_controller(start_event: dict)
```
Creates an AgentController instance.
**Arguments**:
- `start_event` - The start event data (optional).
#### start\_task
```python
async def start_task(start_event)
```
Starts a task for the agent.
**Arguments**:
- `start_event` - The start event data.
#### set\_task\_state
```python
async def set_task_state(new_state_action: TaskStateAction)
```
Sets the state of the agent task.
#### on\_agent\_event
```python
async def on_agent_event(event: Observation | Action)
```
Callback function for agent events.
**Arguments**:
- `event` - The agent event (Observation or Action).


@ -0,0 +1,31 @@
---
sidebar_label: manager
title: opendevin.server.agent.manager
---
## AgentManager Objects
```python
class AgentManager()
```
#### register\_agent
```python
def register_agent(sid: str)
```
Registers a new agent.
**Arguments**:
- `sid` - The session ID of the agent.
#### dispatch
```python
async def dispatch(sid: str, action: str | None, data: dict)
```
Dispatches actions to the agent from the client.
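A minimal sketch of how such a manager might keep a session-to-agent registry; `AgentUnit` here is a stub standing in for the real class:

```python
import asyncio


class AgentUnit:
    """Stub standing in for the real per-session agent wrapper."""

    def __init__(self, sid: str):
        self.sid = sid

    async def dispatch(self, action, data: dict):
        # The real AgentUnit routes the action to its AgentController.
        return (self.sid, action, data)


class AgentManager:
    def __init__(self):
        self.sid_to_agent: dict[str, AgentUnit] = {}

    def register_agent(self, sid: str) -> None:
        # Idempotent: re-registering a session keeps its existing agent.
        self.sid_to_agent.setdefault(sid, AgentUnit(sid))

    async def dispatch(self, sid: str, action, data: dict):
        agent = self.sid_to_agent.get(sid)
        if agent is None:
            raise KeyError(f'unknown session: {sid}')
        return await agent.dispatch(action, data)
```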


@ -0,0 +1,30 @@
---
sidebar_label: auth
title: opendevin.server.auth.auth
---
#### get\_sid\_from\_token
```python
def get_sid_from_token(token: str) -> str
```
Retrieves the session id from a JWT token.
**Arguments**:
- `token` _str_ - The JWT token from which the session id is to be extracted.
**Returns**:
- `str` - The session id if found and valid, otherwise an empty string.
#### sign\_token
```python
def sign_token(payload: Dict[str, object]) -> str
```
Signs a JWT token.
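A dependency-free sketch of how these two helpers can round-trip a session id; the real implementation likely uses a JWT library, and `JWT_SECRET` is a placeholder for a secret loaded from configuration:

```python
import base64
import hashlib
import hmac
import json

JWT_SECRET = 'replace-me'  # placeholder; the real server loads its secret from config


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()


def sign_token(payload: dict) -> str:
    """Sign a payload as an HS256 JWT (header.payload.signature)."""
    header = _b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f'{header}.{body}'.encode()
    signature = hmac.new(JWT_SECRET.encode(), signing_input, hashlib.sha256).digest()
    return f'{header}.{body}.{_b64url(signature)}'


def get_sid_from_token(token: str) -> str:
    """Return the 'sid' claim if the token verifies, else an empty string."""
    try:
        header, body, signature = token.split('.')
    except ValueError:
        return ''
    signing_input = f'{header}.{body}'.encode()
    expected = _b64url(hmac.new(JWT_SECRET.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        return ''
    padded = body + '=' * (-len(body) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(padded))
    sid = payload.get('sid', '')
    return sid if isinstance(sid, str) else ''
```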


@ -0,0 +1,34 @@
---
sidebar_label: listen
title: opendevin.server.listen
---
#### get\_litellm\_models
```python
@app.get('/api/litellm-models')
async def get_litellm_models()
```
Get all models supported by LiteLLM.
#### get\_agents
```python
@app.get('/api/agents')
async def get_agents()
```
Get all agents supported by OpenDevin.
#### get\_token
```python
@app.get('/api/auth')
async def get_token(
credentials: HTTPAuthorizationCredentials = Depends(security_scheme))
```
Generate a JWT for authentication when starting a WebSocket connection. This endpoint checks if valid credentials
are provided and uses them to get a session ID. If no valid credentials are provided, it generates a new session ID.


@ -0,0 +1,35 @@
---
sidebar_label: manager
title: opendevin.server.session.manager
---
## SessionManager Objects
```python
class SessionManager()
```
#### send
```python
async def send(sid: str, data: Dict[str, object]) -> bool
```
Sends data to the client.
#### send\_error
```python
async def send_error(sid: str, message: str) -> bool
```
Sends an error message to the client.
#### send\_message
```python
async def send_message(sid: str, message: str) -> bool
```
Sends a message to the client.
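A minimal sketch of these wrappers; the payload shapes and the in-memory `outbox` (standing in for per-session WebSockets) are illustrative assumptions:

```python
import asyncio


class SessionManager:
    def __init__(self):
        self.outbox: list = []  # stand-in for per-session WebSocket connections

    async def send(self, sid: str, data: dict) -> bool:
        # A real implementation would look up sid's socket and return
        # False when the connection is gone.
        self.outbox.append((sid, data))
        return True

    async def send_error(self, sid: str, message: str) -> bool:
        return await self.send(sid, {'error': True, 'message': message})

    async def send_message(self, sid: str, message: str) -> bool:
        return await self.send(sid, {'message': message})
```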


@ -0,0 +1,15 @@
---
sidebar_label: msg_stack
title: opendevin.server.session.msg_stack
---
## Message Objects
```python
class Message()
```
#### role
"user" | "assistant"


@ -0,0 +1,27 @@
---
sidebar_label: session
title: opendevin.server.session.session
---
## Session Objects
```python
class Session()
```
#### send\_error
```python
async def send_error(message: str) -> bool
```
Sends an error message to the client.
#### send\_message
```python
async def send_message(message: str) -> bool
```
Sends a message to the client.


@ -0,0 +1,13 @@
---
sidebar_label: system
title: opendevin.utils.system
---
#### find\_available\_tcp\_port
```python
def find_available_tcp_port() -> int
```
Finds an available TCP port; returns -1 if none is available.
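One common way to implement this is to bind to port 0 and let the OS pick a free port; a sketch, not necessarily the actual implementation:

```python
import socket


def find_available_tcp_port() -> int:
    """Ask the OS for a free TCP port; return -1 if none is available."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.bind(('127.0.0.1', 0))  # port 0 = let the kernel choose
            return sock.getsockname()[1]
    except OSError:
        return -1
```

Note the port is released as soon as the socket closes, so another process could grab it before the caller binds it.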


@ -0,0 +1,207 @@
{
"items": [
{
"items": [
{
"items": [
"python/agenthub/SWE_agent/agent",
"python/agenthub/SWE_agent/parser"
],
"label": "agenthub.SWE_agent",
"type": "category"
},
{
"items": [
"python/agenthub/codeact_agent/codeact_agent"
],
"label": "agenthub.codeact_agent",
"type": "category"
},
{
"items": [
"python/agenthub/delegator_agent/agent"
],
"label": "agenthub.delegator_agent",
"type": "category"
},
{
"items": [
"python/agenthub/dummy_agent/agent"
],
"label": "agenthub.dummy_agent",
"type": "category"
},
{
"items": [
"python/agenthub/micro/agent"
],
"label": "agenthub.micro",
"type": "category"
},
{
"items": [
{
"items": [
"python/agenthub/monologue_agent/utils/json",
"python/agenthub/monologue_agent/utils/memory",
"python/agenthub/monologue_agent/utils/monologue",
"python/agenthub/monologue_agent/utils/prompts"
],
"label": "agenthub.monologue_agent.utils",
"type": "category"
},
"python/agenthub/monologue_agent/agent"
],
"label": "agenthub.monologue_agent",
"type": "category"
},
{
"items": [
"python/agenthub/planner_agent/agent",
"python/agenthub/planner_agent/prompt"
],
"label": "agenthub.planner_agent",
"type": "category"
}
],
"label": "agenthub",
"type": "category"
},
{
"items": [
{
"items": [
"python/opendevin/action/__init__",
"python/opendevin/action/base",
"python/opendevin/action/fileop",
"python/opendevin/action/github",
"python/opendevin/action/tasks"
],
"label": "opendevin.action",
"type": "category"
},
{
"items": [
"python/opendevin/controller/agent_controller"
],
"label": "opendevin.controller",
"type": "category"
},
{
"items": [
"python/opendevin/llm/llm"
],
"label": "opendevin.llm",
"type": "category"
},
{
"items": [
"python/opendevin/observation/__init__",
"python/opendevin/observation/base",
"python/opendevin/observation/browse",
"python/opendevin/observation/delegate",
"python/opendevin/observation/error",
"python/opendevin/observation/files",
"python/opendevin/observation/message",
"python/opendevin/observation/recall",
"python/opendevin/observation/run"
],
"label": "opendevin.observation",
"type": "category"
},
{
"items": [
{
"items": [
"python/opendevin/sandbox/docker/process"
],
"label": "opendevin.sandbox.docker",
"type": "category"
},
{
"items": [
"python/opendevin/sandbox/e2b/sandbox"
],
"label": "opendevin.sandbox.e2b",
"type": "category"
},
{
"items": [
{
"items": [
"python/opendevin/sandbox/plugins/jupyter/__init__"
],
"label": "opendevin.sandbox.plugins.jupyter",
"type": "category"
},
"python/opendevin/sandbox/plugins/mixin",
"python/opendevin/sandbox/plugins/requirement"
],
"label": "opendevin.sandbox.plugins",
"type": "category"
}
],
"label": "opendevin.sandbox",
"type": "category"
},
{
"items": [
"python/opendevin/schema/action",
"python/opendevin/schema/observation",
"python/opendevin/schema/task"
],
"label": "opendevin.schema",
"type": "category"
},
{
"items": [
{
"items": [
"python/opendevin/server/agent/agent",
"python/opendevin/server/agent/manager"
],
"label": "opendevin.server.agent",
"type": "category"
},
{
"items": [
"python/opendevin/server/auth/auth"
],
"label": "opendevin.server.auth",
"type": "category"
},
{
"items": [
"python/opendevin/server/session/manager",
"python/opendevin/server/session/msg_stack",
"python/opendevin/server/session/session"
],
"label": "opendevin.server.session",
"type": "category"
},
"python/opendevin/server/listen"
],
"label": "opendevin.server",
"type": "category"
},
{
"items": [
"python/opendevin/utils/system"
],
"label": "opendevin.utils",
"type": "category"
},
"python/opendevin/agent",
"python/opendevin/config",
"python/opendevin/files",
"python/opendevin/logger",
"python/opendevin/main",
"python/opendevin/plan"
],
"label": "opendevin",
"type": "category"
}
],
"label": "Backend",
"type": "category"
}


@ -0,0 +1,53 @@
---
sidebar_position: 6
---
# 📚 Misc
## ⭐️ Research Strategy
Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:
1. **Core Technical Research:** Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
2. **Specialist Abilities:** Enhancing the effectiveness of core components through data curation, training methods, and more.
3. **Task Planning:** Developing capabilities for bug detection, codebase management, and optimization.
4. **Evaluation:** Establishing comprehensive evaluation metrics to better understand and improve our models.
## 🚧 Default Agent
- Our default Agent is currently the MonologueAgent, which has limited capabilities, but is fairly stable. We're working on other Agent implementations, including [SWE Agent](https://swe-agent.com/). You can [read about our current set of agents here](./agents).
## 🤝 How to Contribute
OpenDevin is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:
- **Code Contributions:** Help us develop the core functionalities, frontend interface, or sandboxing solutions.
- **Research and Evaluation:** Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
- **Feedback and Testing:** Use the OpenDevin toolset, report bugs, suggest features, or provide feedback on usability.
For details, please check [this document](https://github.com/OpenDevin/OpenDevin/blob/main/CONTRIBUTING.md).
## 🤖 Join Our Community
We have a Slack workspace for collaborating on building OpenDevin, and a Discord server for discussing anything related, e.g., this project, LLMs, agents, etc.
- [Slack workspace](https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw)
- [Discord server](https://discord.gg/mBuDGRzzES)
If you would like to contribute, feel free to join our community (note that there is no longer any need to fill in the [form](https://forms.gle/758d5p6Ve8r2nxxq6)). Let's simplify software engineering together!
🐚 **Code less, make more with OpenDevin.**
[![Star History Chart](https://api.star-history.com/svg?repos=OpenDevin/OpenDevin&type=Date)](https://star-history.com/#OpenDevin/OpenDevin&Date)
## 🛠️ Built With
OpenDevin is built using a combination of powerful frameworks and libraries, providing a robust foundation for its development. Here are the key technologies used in the project:
![FastAPI](https://img.shields.io/badge/FastAPI-black?style=for-the-badge) ![uvicorn](https://img.shields.io/badge/uvicorn-black?style=for-the-badge) ![LiteLLM](https://img.shields.io/badge/LiteLLM-black?style=for-the-badge) ![Docker](https://img.shields.io/badge/Docker-black?style=for-the-badge) ![Ruff](https://img.shields.io/badge/Ruff-black?style=for-the-badge) ![MyPy](https://img.shields.io/badge/MyPy-black?style=for-the-badge) ![LlamaIndex](https://img.shields.io/badge/LlamaIndex-black?style=for-the-badge) ![React](https://img.shields.io/badge/React-black?style=for-the-badge)
Please note that the selection of these technologies is in progress, and additional technologies may be added or existing ones may be removed as the project evolves. We strive to adopt the most suitable and efficient tools to enhance the capabilities of OpenDevin.
## 📜 License
Distributed under the MIT License. See [our license](https://github.com/OpenDevin/OpenDevin/blob/main/LICENSE) for more information.


@ -0,0 +1,110 @@
---
sidebar_position: 3
---
# 🧠 Agents and Capabilities
## Monologue Agent
### Description
The Monologue Agent utilizes long and short term memory to complete tasks.
Long term memory is stored as a LongTermMemory object and the model uses it to search for examples from the past.
Short term memory is stored as a Monologue object and the model can condense it as necessary.
### Actions
`Action`,
`NullAction`,
`CmdRunAction`,
`FileWriteAction`,
`FileReadAction`,
`AgentRecallAction`,
`BrowseURLAction`,
`GithubPushAction`,
`AgentThinkAction`
### Observations
`Observation`,
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`
### Methods
| Method | Description |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| `__init__` | Initializes the agent with a long term memory, and an internal monologue |
| `_add_event` | Appends events to the monologue of the agent and condenses with summary automatically if the monologue is too long |
| `_initialize` | Utilizes the `INITIAL_THOUGHTS` list to give the agent a context for its capabilities and how to navigate the `/workspace` |
| `step` | Modifies the current state by adding the most recent actions and observations, then prompts the model to think about its next action to take. |
| `search_memory` | Uses `VectorIndexRetriever` to find related memories within the long term memory. |
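The condensing behavior described for `_add_event` can be sketched as follows; the summarizer callback, threshold, and event shapes are illustrative assumptions (the real agent summarizes with an LLM and works with token limits):

```python
class Monologue:
    """Short-term memory that condenses its oldest events when it grows too long."""

    def __init__(self, summarize, max_events: int = 10):
        self.summarize = summarize      # in the real agent, an LLM summarization call
        self.max_events = max_events
        self.events: list = []

    def add_event(self, event: dict) -> None:
        self.events.append(event)
        if len(self.events) > self.max_events:
            self.condense()

    def condense(self) -> None:
        # Replace the oldest half of the monologue with a single summary event.
        half = len(self.events) // 2
        summary = self.summarize(self.events[:half])
        self.events = [{'action': 'summarize', 'summary': summary}] + self.events[half:]
```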
## Planner Agent
### Description
The planner agent utilizes a special prompting strategy to create long term plans for solving problems.
The agent is given its previous action-observation pairs, current task, and hint based on last action taken at every step.
### Actions
`NullAction`,
`CmdRunAction`,
`CmdKillAction`,
`BrowseURLAction`,
`GithubPushAction`,
`FileReadAction`,
`FileWriteAction`,
`AgentRecallAction`,
`AgentThinkAction`,
`AgentFinishAction`,
`AgentSummarizeAction`,
`AddTaskAction`,
`ModifyTaskAction`,
### Observations
`Observation`,
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`
### Methods
| Method | Description |
| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `__init__` | Initializes an agent with `llm` |
| `step` | Checks to see if current step is completed, returns `AgentFinishAction` if True. Otherwise, creates a plan prompt and sends to model for inference, adding the result as the next action. |
| `search_memory` | Not yet implemented |
## CodeAct Agent
### Description
The Code Act Agent is a minimalist agent. The agent works by passing the model a list of action-observation pairs and prompting the model to take the next step.
### Actions
`Action`,
`CmdRunAction`,
`AgentEchoAction`,
`AgentFinishAction`,
### Observations
`CmdOutputObservation`,
`AgentMessageObservation`,
### Methods
| Method | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `__init__` | Initializes an agent with `llm` and a list of messages `List[Mapping[str, str]]` |
| `step`          | First, gets messages from state and compiles them into a context list. Next, passes the context list with the prompt to get the next command to execute. Finally, executes the command if valid, else returns `AgentEchoAction(INVALID_INPUT_MESSAGE)` |
| `search_memory` | Not yet implemented |


@ -0,0 +1,51 @@
---
sidebar_position: 4
---
# 🏛️ System Architecture Overview
This is a high-level overview of the system architecture. The system is divided into two main components: the frontend and the backend. The frontend is responsible for handling user interactions and displaying the results. The backend is responsible for handling the business logic and executing the agents.
![system_architecture.svg](/img/system_architecture.svg)
This Overview is simplified to show the main components and their interactions. For a more detailed view of the backend architecture, see the [Backend Architecture](#backend-architecture) section.
# Backend Architecture
_**Disclaimer**: The backend architecture is a work in progress and is subject to change. The following diagram shows the current architecture of the backend based on the commit that is shown in the footer of the diagram._
![backend_architecture.svg](/img/backend_architecture.svg)
<details>
<summary>Updating this Diagram</summary>
<div>
The generation of the backend architecture diagram is partially automated.
The diagram is generated from the type hints in the code using the py2puml
tool. The diagram is then manually reviewed, adjusted and exported to PNG
and SVG.
## Prerequisites
- Running python environment in which opendevin is executable
(according to the instructions in the README.md file in the root of the repository)
- [py2puml](https://github.com/lucsorel/py2puml) installed
## Steps
1. Autogenerate the diagram by running the following command from the root of the repository:
`py2puml opendevin opendevin > docs/architecture/backend_architecture.puml`
2. Open the generated file in a PlantUML editor, e.g. Visual Studio Code with the PlantUML extension or [PlantText](https://www.planttext.com/)
3. Review the generated PUML and make all necessary adjustments to the diagram (add missing parts, fix mistakes, improve positioning).
_py2puml creates the diagram based on the type hints in the code, so missing or incorrect type hints may result in an incomplete or incorrect diagram._
4. Review the diff between the new and the previous diagram and manually check if the changes are correct.
_Make sure not to remove parts that were manually added to the diagram in the past and are still relevant._
5. Add the commit hash of the commit that was used to generate the diagram to the diagram footer.
6. Export the diagram as PNG and SVG files and replace the existing diagrams in the `docs/architecture` directory. This can be done with, e.g., [PlantText](https://www.planttext.com/).
</div>
</details>


@ -0,0 +1,114 @@
---
sidebar_position: 1
---
# 💻 OpenDevin
OpenDevin is an **autonomous AI software engineer** capable of executing complex engineering tasks and collaborating actively with users on software development projects.
This project is fully open-source, so you can use and modify it however you like.
:::tip
Explore the codebase of OpenDevin on [GitHub](https://github.com/OpenDevin/OpenDevin) or join one of our communities!
<a href="https://github.com/OpenDevin/OpenDevin/graphs/contributors">
<img
src="https://img.shields.io/github/contributors/opendevin/opendevin?style=for-the-badge"
alt="Contributors"
/>
</a>
<a href="https://github.com/OpenDevin/OpenDevin/network/members">
<img
src="https://img.shields.io/github/forks/opendevin/opendevin?style=for-the-badge"
alt="Forks"
/>
</a>
<a href="https://github.com/OpenDevin/OpenDevin/stargazers">
<img
src="https://img.shields.io/github/stars/opendevin/opendevin?style=for-the-badge"
alt="Stargazers"
/>
</a>
<a href="https://github.com/OpenDevin/OpenDevin/issues">
<img
src="https://img.shields.io/github/issues/opendevin/opendevin?style=for-the-badge"
alt="Issues"
/>
</a>
<br></br>
<a href="https://github.com/OpenDevin/OpenDevin/blob/main/LICENSE">
<img
src="https://img.shields.io/github/license/opendevin/opendevin?style=for-the-badge"
alt="MIT License"
/>
</a>
<br></br>
<a href="https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw">
<img
src="https://img.shields.io/badge/Slack-Join%20Us-red?logo=slack&logoColor=white&style=for-the-badge"
alt="Join our Slack community"
/>
</a>
<a href="https://discord.gg/mBuDGRzzES">
<img
src="https://img.shields.io/badge/Discord-Join%20Us-purple?logo=discord&logoColor=white&style=for-the-badge"
alt="Join our Discord community"
/>
</a>
:::
## 🛠️ Getting Started
The easiest way to run OpenDevin is inside a Docker container.
To start the app, run these commands, replacing `$(pwd)/workspace` with the path to the code you want OpenDevin to work with.
```
# Your OpenAI API key, or any other LLM API key
export LLM_API_KEY="sk-..."
```
```
# The directory you want OpenDevin to modify.
# MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace
```
:::warning
OpenDevin runs bash commands within a Docker sandbox, so it should not affect your machine. But your workspace directory will be attached to that sandbox, and files in the directory may be modified or deleted.
:::
```
docker run \
-e LLM_API_KEY \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal=host-gateway \
ghcr.io/opendevin/opendevin:0.4.0
```
You'll find OpenDevin running at [http://localhost:3000](http://localhost:3000).
:::tip
If you want to use the **(unstable!)** bleeding edge, you can use `ghcr.io/opendevin/opendevin:main` as the image (last line).
:::
See [Development.md](https://github.com/OpenDevin/OpenDevin/blob/main/Development.md) for instructions on running OpenDevin without Docker.
Having trouble? Check out our Troubleshooting Guide.
:::warning
OpenDevin is currently a work in progress, but you can already run the alpha version to see the end-to-end system in action.
:::
[contributors-shield]: https://img.shields.io/github/contributors/opendevin/opendevin?style=for-the-badge
[contributors-url]: https://github.com/OpenDevin/OpenDevin/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/opendevin/opendevin?style=for-the-badge
[forks-url]: https://github.com/OpenDevin/OpenDevin/network/members
[stars-shield]: https://img.shields.io/github/stars/opendevin/opendevin?style=for-the-badge
[stars-url]: https://github.com/OpenDevin/OpenDevin/stargazers
[issues-shield]: https://img.shields.io/github/issues/opendevin/opendevin?style=for-the-badge
[issues-url]: https://github.com/OpenDevin/OpenDevin/issues
[license-shield]: https://img.shields.io/github/license/opendevin/opendevin?style=for-the-badge
[license-url]: https://github.com/OpenDevin/OpenDevin/blob/main/LICENSE


@ -1,12 +1,13 @@
# Azure OpenAI LLM Guide
# Azure OpenAI LLM
# 1. Completion
## Completion
OpenDevin uses LiteLLM for completion calls. You can find their documentation on Azure [here](https://docs.litellm.ai/docs/providers/azure)
## azure openai configs
### Azure openai configs
When running the OpenDevin Docker image, you'll need to set the following environment variables using `-e`:
```
LLM_BASE_URL="<azure-api-base-url>" # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
LLM_API_KEY="<azure-api-key>"
@ -14,20 +15,21 @@ LLM_MODEL="azure/<your-gpt-deployment-name>"
LLM_API_VERSION = "<api-version>" # e.g. "2024-02-15-preview"
```
## Important Note:
:::note
You can find your ChatGPT deployment name on the deployments page in Azure. It may be the same as the chat model name (e.g. 'GPT4-1106-preview') by default, but it doesn't have to be. Run OpenDevin, and when you load it in the browser, go to Settings and set the model as above: "azure/&lt;your-actual-gpt-deployment-name&gt;". If it's not in the list, enter your own text and save it.
:::
# 2. Embeddings
## Embeddings
OpenDevin uses llama-index for embeddings. You can find their documentation on Azure [here](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/azure_openai/)
## azure openai configs
### Azure openai configs
The model used for Azure OpenAI embeddings is "text-embedding-ada-002".
You need the correct deployment name for this model in your Azure account.
When running OpenDevin in Docker, set the following environment variables using `-e`:
```
LLM_EMBEDDING_MODEL="azureopenai"
LLM_EMBEDDING_DEPLOYMENT_NAME = "<your-embedding-deployment-name>" # e.g. "TextEmbedding...<etc>"


@ -1,23 +1,25 @@
# Google Gemini/Vertex LLM Guide
# Google Gemini/Vertex LLM
# 1. Completion
## Completion
OpenDevin uses LiteLLM for completion calls. The following resources are relevant for using OpenDevin with Google's LLMs
- [Gemini - Google AI Studio](https://docs.litellm.ai/docs/providers/gemini)
- [VertexAI - Google Cloud Platform](https://docs.litellm.ai/docs/providers/vertex)
## Gemini - Google AI Studio Configs
### Gemini - Google AI Studio Configs
To use Gemini through Google AI Studio when running the OpenDevin Docker image, you'll need to set the following environment variables using `-e`:
```
GEMINI_API_KEY="<your-google-api-key>"
LLM_MODEL="gemini/gemini-1.5-pro"
```
## Vertex AI - Google Cloud Platform Configs
### Vertex AI - Google Cloud Platform Configs
To use Vertex AI through Google Cloud Platform when running the OpenDevin Docker image, you'll need to set the following environment variables using `-e`:
```
GOOGLE_APPLICATION_CREDENTIALS="<json-dump-of-gcp-service-account-json>"
VERTEXAI_PROJECT="<your-gcp-project-id>"


@ -0,0 +1,47 @@
---
sidebar_position: 2
---
# 🤖 LLM Backends
OpenDevin can work with any LLM backend.
For a full list of the LM providers and models available, please consult the
[litellm documentation](https://docs.litellm.ai/docs/providers).
:::warning
OpenDevin will issue many prompts to the LLM you configure. Most of these LLMs cost money--be sure to set spending limits and monitor usage.
:::
The `LLM_MODEL` environment variable controls which model is used in programmatic interactions.
But when using the OpenDevin UI, you'll need to choose your model in the settings window (the gear
wheel on the bottom left).
The following environment variables might be necessary for some LLMs:
- `LLM_API_KEY`
- `LLM_BASE_URL`
- `LLM_EMBEDDING_MODEL`
- `LLM_EMBEDDING_DEPLOYMENT_NAME`
- `LLM_API_VERSION`
We have a few guides for running OpenDevin with specific model providers:
- [ollama](llms/localLLMs)
- [Azure](llms/azureLLMs)
If you're using another provider, we encourage you to open a PR to share your setup!
## Note on Alternative Models
The best models are GPT-4 and Claude 3. Current local and open source models are
not nearly as powerful. When using an alternative model,
you may see long wait times between messages,
poor responses, or errors about malformed JSON. OpenDevin
can only be as powerful as the models driving it--fortunately folks on our team
are actively working on building better open source models!
## API retries and rate limits
Some LLMs have rate limits and may require retries. OpenDevin will automatically retry requests if it receives a 429 error or API connection error.
You can set `LLM_NUM_RETRIES`, `LLM_RETRY_MIN_WAIT`, `LLM_RETRY_MAX_WAIT` environment variables to control the number of retries and the time between retries.
By default, `LLM_NUM_RETRIES` is 5, and `LLM_RETRY_MIN_WAIT` and `LLM_RETRY_MAX_WAIT` are 3 and 60 seconds, respectively.
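The retry policy can be sketched with capped exponential backoff; this is an illustration of the behavior those variables control, not the actual implementation:

```python
import time


def call_with_retries(fn, num_retries=5, retry_min_wait=3, retry_max_wait=60):
    """Call fn, retrying on connection errors with capped exponential backoff."""
    for attempt in range(num_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == num_retries - 1:
                raise  # out of retries: surface the error to the caller
            wait = min(retry_max_wait, retry_min_wait * (2 ** attempt))
            time.sleep(wait)
```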


@ -1,11 +1,11 @@
# Local LLM Guide with Ollama server
# Local LLM with Ollama
Ensure that you have the Ollama server up and running.
For detailed startup instructions, refer to the documentation [here](https://github.com/ollama/ollama).
This guide assumes you've started Ollama with `ollama serve`. If you're running Ollama differently (e.g. inside Docker), the instructions might need to be modified. Please note that if you're running WSL, the default Ollama configuration blocks requests from Docker containers. See [here](#4-configuring-the-ollama-service-wsl).
## 1. Pull Models
## Pull Models
Ollama model names can be found [here](https://ollama.com/library). For a small example, you can use
the `codellama:7b` model. Bigger models will generally perform better.
@ -24,10 +24,11 @@ mistral:7b-instruct-v0.2-q4_K_M eb14864c7427 4.4 GB 2 weeks ago
starcoder2:latest f67ae0f64584 1.7 GB 19 hours ago
```
## 2. Start OpenDevin
## Start OpenDevin
### 2.1 Docker
Use the instructions in [README.md](/README.md) to start OpenDevin using Docker.
### Docker
Use the instructions [here](../intro) to start OpenDevin using Docker.
But when running `docker run`, you'll need to add a few more arguments:
```bash
@ -55,8 +56,9 @@ docker run \
You should now be able to connect to `http://localhost:3000/`
### 2.2 Build from Source
Use the instructions in [Development.md](/Development.md) to build OpenDevin.
### Build from Source
Use the instructions in [Development.md](https://github.com/OpenDevin/OpenDevin/blob/main/Development.md) to build OpenDevin.
Make sure `config.toml` is there by running `make setup-config`, which will create one for you. In `config.toml`, enter the following:
```
@ -67,11 +69,12 @@ LLM_BASE_URL="http://localhost:11434"
WORKSPACE_BASE="./workspace"
WORKSPACE_DIR="$(pwd)/workspace"
```
Replace `LLM_MODEL` with the model of your choice if needed.
Done! Now you can start OpenDevin with `make run` without Docker. You should now be able to connect to `http://localhost:3000/`
## 3. Select your Model
## Select your Model
In the OpenDevin UI, click on the Settings wheel in the bottom-left corner.
Then in the `Model` input, enter `ollama/codellama:7b`, or the name of the model you pulled earlier.
@ -79,7 +82,7 @@ If it doesnt show up in a dropdown, thats fine, just type it in. Click Sav
And now you're ready to go!
## 4. Configuring the ollama service (WSL)
## Configuring the ollama service (WSL)
The default configuration for Ollama in WSL only serves localhost, so it can't be reached from a Docker container; e.g., it won't work with OpenDevin. First, let's test that Ollama is running correctly.
@ -98,7 +101,7 @@ docker exec [CONTAINER ID] curl http://host.docker.internal:11434/api/generate -
#ex. docker exec cd9cc82f7a11 curl http://host.docker.internal:11434/api/generate -d '{"model":"codellama","prompt":"hi"}'
```
### Fixing it
## Fixing it
Now let's make it work. Edit `/etc/systemd/system/ollama.service` with sudo privileges. (The path may vary depending on the Linux distribution.)


@ -1,6 +1,8 @@
# Troubleshooting
---
sidebar_position: 5
---
> If you're running on Windows and having trouble, check out our [guide for Windows users](./Windows.md)
# 🚧 Troubleshooting
There are some error messages that get reported over and over by users.
We'll try and make the install process easier, and to make these error messages
@ -13,29 +15,35 @@ open an new issue--just comment there.
If you find more information or a workaround for one of these issues, please
open a PR to add details to this file.
## Unable to connect to docker
https://github.com/OpenDevin/OpenDevin/issues/1226
:::tip
If you're running on Windows and having trouble, check out our [guide for Windows users](troubleshooting/windows)
:::
## [Unable to connect to docker](https://github.com/OpenDevin/OpenDevin/issues/1226)
### Symptoms
```
Error creating controller. Please check Docker is running using docker ps
```
```
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
```
### Details
OpenDevin uses a docker container to do its work safely, without potentially breaking your machine.
### Workarounds
* Run `docker ps` to ensure that docker is running
* Make sure you don't need `sudo` to run docker [see here](https://www.baeldung.com/linux/docker-run-without-sudo)
- Run `docker ps` to ensure that docker is running
- Make sure you don't need `sudo` to run docker [see here](https://www.baeldung.com/linux/docker-run-without-sudo)
## Unable to connect to SSH box
https://github.com/OpenDevin/OpenDevin/issues/1156
## [Unable to connect to SSH box](https://github.com/OpenDevin/OpenDevin/issues/1156)
### Symptoms
```
self.shell = DockerSSHBox(
...
@ -43,19 +51,21 @@ pexpect.pxssh.ExceptionPxssh: Could not establish connection to host
```
### Details
By default, OpenDevin connects to a running container using SSH. On some machines,
especially Windows, this seems to fail.
### Workarounds
* Restart your computer (sometimes works?)
* Be sure to have the latest versions of WSL and Docker
* Try [this reinstallation guide](https://github.com/OpenDevin/OpenDevin/issues/1156#issuecomment-2064549427)
* Set `-e SANDBOX_TYPE=exec` to switch to the ExecBox docker container
## Unable to connect to LLM
https://github.com/OpenDevin/OpenDevin/issues/1208
- Restart your computer (sometimes works?)
- Be sure to have the latest versions of WSL and Docker
- Try [this reinstallation guide](https://github.com/OpenDevin/OpenDevin/issues/1156#issuecomment-2064549427)
- Set `-e SANDBOX_TYPE=exec` to switch to the ExecBox docker container
## [Unable to connect to LLM](https://github.com/OpenDevin/OpenDevin/issues/1208)
### Symptoms
```
File "/app/.venv/lib/python3.12/site-packages/openai/_exceptions.py", line 81, in __init__
super().__init__(message, response.request, body=body)
@ -64,10 +74,12 @@ AttributeError: 'NoneType' object has no attribute 'request'
```
### Details
This usually happens with local LLM setups, when OpenDevin can't connect to the LLM server.
See our guide for [local LLMs](./LocalLLMs.md) for more information.
See our guide for [local LLMs](llms/localLLMs) for more information.
### Workarounds
* Check your `LLM_BASE_URL`
* Check that ollama is running OK
* Make sure you're using `--add-host host.docker.internal=host-gateway` when running in docker
- Check your `LLM_BASE_URL`
- Check that ollama is running OK
- Make sure you're using `--add-host host.docker.internal=host-gateway` when running in docker
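Before digging further, it can help to confirm the LLM endpoint is reachable from the host at all. A quick sketch (the port and URL assume a local ollama on its default port — substitute your own `LLM_BASE_URL`):

```shell
# Sanity check: can we reach the LLM server at all?
# 11434 is ollama's default port -- an assumption; use your own LLM_BASE_URL.
export LLM_BASE_URL="http://localhost:11434"
curl -s "$LLM_BASE_URL/api/tags"   # ollama: lists installed models as JSON
```

Note that from inside the docker container, `host.docker.internal` takes the place of `localhost` — which is why the `--add-host` flag above matters.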


@ -1,31 +1,36 @@
# Notes for Windows and WSL Users
OpenDevin only supports Windows via [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
Please be sure to run all commands inside your WSL terminal.
## Troubleshooting
### Failed to create opendevin user
If you encounter the following error during setup: `Exception: Failed to create opendevin user in sandbox: b'useradd: UID 0 is not unique\n'`
You can resolve it by running:
```
export SANDBOX_USER_ID=1000
```
### Poetry Installation
If you face issues running Poetry even after installing it during the build process, you may need to add its binary path to your environment:
```
export PATH="$HOME/.local/bin:$PATH"
```
### NoneType object has no attribute 'request'
If you are experiencing networking issues, such as `NoneType object has no attribute 'request'` when executing `make run`, you may need to configure your WSL2 networking settings. Follow these steps:
- Open or create the `.wslconfig` file located at `C:\Users\%username%\.wslconfig` on your Windows host machine.
- Add the following configuration to the `.wslconfig` file:
```
[wsl2]
networkingMode=mirrored
localhostForwarding=true
```
- Save the `.wslconfig` file.
- Restart WSL2 completely by exiting any running WSL2 instances and executing the command `wsl --shutdown` in your command prompt or terminal.
- After restarting WSL, attempt to execute `make run` again. The networking issue should be resolved.

15363
docs/package-lock.json generated Normal file

File diff suppressed because it is too large

51
docs/package.json Normal file

@ -0,0 +1,51 @@
{
"name": "docs",
"version": "0.0.0",
"private": true,
"scripts": {
"docusaurus": "docusaurus",
"start": "docusaurus start",
"build": "docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",
"serve": "docusaurus serve",
"write-translations": "docusaurus write-translations",
"write-heading-ids": "docusaurus write-heading-ids",
"typecheck": "tsc"
},
"dependencies": {
"@docusaurus/core": "3.2.1",
"@docusaurus/preset-classic": "3.2.1",
"@mdx-js/react": "^3.0.0",
"autoprefixer": "^10.4.19",
"clsx": "^2.0.0",
"postcss": "^8.4.38",
"prism-react-renderer": "^2.3.0",
"react": "^18.0.0",
"react-dom": "^18.0.0",
"react-use": "^17.5.0",
"tailwindcss": "^3.4.3"
},
"devDependencies": {
"@docusaurus/module-type-aliases": "3.2.1",
"@docusaurus/tsconfig": "3.2.1",
"@docusaurus/types": "3.2.1",
"typescript": "~5.2.2"
},
"browserslist": {
"production": [
">0.5%",
"not dead",
"not op_mini all"
],
"development": [
"last 3 chrome version",
"last 3 firefox version",
"last 5 safari version"
]
},
"engines": {
"node": ">=18.0"
}
}
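With the scripts defined above, working on the docs site locally looks roughly like this (assumes Node 18+, per the `engines` field, and that you start from the repository root):

```shell
cd docs
npm ci          # install pinned dependencies from package-lock.json
npm run start   # dev server with hot reload
npm run build   # production build into docs/build
npm run serve   # preview the production build locally
```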

8
docs/sidebars.ts Normal file

@ -0,0 +1,8 @@
import type { SidebarsConfig } from "@docusaurus/plugin-content-docs";
const sidebars: SidebarsConfig = {
docsSidebar: [{ type: "autogenerated", dirName: "usage" }],
apiSidebar: [require("./modules/python/sidebar.json")],
};
export default sidebars;


@ -0,0 +1,47 @@
import Link from "@docusaurus/Link";
import { Header } from "@site/src/pages";
import { CodeBlock } from "./CodeBlock";
import styles from "./styles.module.css";
export function Code() {
const keyCode = `# Your OpenAI API key, or any other LLM API key
export LLM_API_KEY="sk-..."`;
const workspaceCode = `# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace`;
const dockerCode = `docker run \\
-e LLM_API_KEY \\
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \\
-v $WORKSPACE_BASE:/opt/workspace_base \\
-v /var/run/docker.sock:/var/run/docker.sock \\
-p 3000:3000 \\
--add-host host.docker.internal=host-gateway \\
ghcr.io/opendevin/opendevin:0.3.1`;
return (
<div className={styles.container}>
<div className={styles.innerContainer}>
<div className={styles.header}>
<Header
title="Getting Started"
summary="Getting Started"
description="Get started using OpenDevin in just a few lines of code"
></Header>
<div className={styles.buttons}>
<Link
className="button button--secondary button--lg"
to="/modules/usage/intro"
>
Learn More
</Link>
</div>
</div>
<br />
        <CodeBlock language="bash" code={keyCode} />
        <CodeBlock language="bash" code={workspaceCode} />
        <CodeBlock language="bash" code={dockerCode} />
</div>
</div>
);
}


@ -0,0 +1,63 @@
import { useColorMode } from "@docusaurus/theme-common";
import { Highlight, themes } from "prism-react-renderer";
import { useCopyToClipboard } from "react-use";
interface CodeBlockProps {
language: string;
code: string;
}
export function CodeBlock({ language, code }: CodeBlockProps) {
const [state, copyToClipboard] = useCopyToClipboard();
const { isDarkTheme } = useColorMode();
const copyCode = () => {
copyToClipboard(code);
};
return (
<div
style={{
position: "relative",
}}
>
<Highlight
theme={isDarkTheme ? themes.vsLight : themes.vsDark}
code={code}
language={language}
>
{({ style, tokens, getLineProps, getTokenProps }) => (
<pre style={style}>
{tokens.map((line, i) => (
<div key={i} {...getLineProps({ line })}>
<span
style={{
display: "inline-block",
width: "3em",
color: "var(--gray)",
}}
>
{i + 1}
</span>
{line.map((token, key) => (
<span key={key} {...getTokenProps({ token })} />
))}
</div>
))}
</pre>
)}
</Highlight>
<button
className="button button--secondary"
style={{
position: "absolute",
top: "10px",
right: "10px",
}}
onClick={copyCode}
>
{state.value ? "Copied!" : "Copy"}
</button>
</div>
);
}


@ -0,0 +1,26 @@
.container {
display: flex;
flex-direction: column;
padding-top: 25px;
padding-bottom: 25px;
width: 100%;
}
.innerContainer {
padding: 50px;
width: 100%;
max-width: 1300px;
padding-top: 30px;
margin: auto;
}
.header {
display: flex;
justify-content: space-between;
}
@media (max-width: 768px) {
.header {
flex-direction: column;
}
}


@ -0,0 +1,25 @@
import React from "react";
import styles from "./index.module.css";
export function Demo() {
const videoRef = React.useRef<HTMLVideoElement>(null);
return (
<div
style={{ paddingBottom: "30px", paddingTop: "20px", textAlign: "center" }}
>
<video
playsInline
autoPlay={true}
loop
className={styles.demo}
muted
        onMouseOver={() => { if (videoRef.current) videoRef.current.controls = true; }}
        onMouseOut={() => { if (videoRef.current) videoRef.current.controls = false; }}
ref={videoRef}
>
<source src="img/teaser.mp4" type="video/mp4"></source>
</video>
</div>
);
}


@ -0,0 +1,7 @@
.demo {
width: 100%;
padding: 30px;
max-width: 800px;
text-align: center;
border-radius: 40px;
}


@ -0,0 +1,28 @@
import Link from "@docusaurus/Link";
import useDocusaurusContext from "@docusaurus/useDocusaurusContext";
import Heading from "@theme/Heading";
import { Demo } from "../Demo/Demo";
import styles from "./index.module.css";
export function HomepageHeader() {
const { siteConfig } = useDocusaurusContext();
return (
<div className={styles.headerContainer}>
<div className={styles.header}>
<Heading as="h1" className="hero__title">
{siteConfig.title}
</Heading>
<p className="hero__subtitle">{siteConfig.tagline}</p>
<div className={styles.buttons}>
<Link
className="button button--secondary button--lg"
to="/modules/usage/intro"
>
Get Started
</Link>
</div>
</div>{" "}
<Demo />
</div>
);
}


@ -0,0 +1,37 @@
.headerContainer {
background: radial-gradient(circle, var(--secondary), var(--secondary-light));
background-size: 200% 200%;
animation: gradientAnimation 10s linear infinite;
display: flex;
justify-content: center;
}
@media only screen and (max-width: 600px) {
.headerContainer {
flex-direction: column;
}
}
@keyframes gradientAnimation {
0% {
background-position: left center;
}
50% {
background-position: right center;
}
100% {
background-position: left center;
}
}
.header {
max-width: 1300px;
color: white;
display: flex;
margin-left: 100px;
margin-right: 100px;
flex-direction: column;
align-items: center;
justify-content: center;
overflow: hidden;
padding: 70px 30px 30px;
}


@ -0,0 +1,19 @@
import styles from "./styles.module.css";
export function Welcome() {
return (
<div className={styles.container}>
<div className={styles.innerContainer}>
      <img src="img/logo.png" alt="OpenDevin logo" className={styles.sidebarImage} />
<p className={styles.welcomeText}>
Welcome to OpenDevin, an open-source project aiming to replicate
Devin, an autonomous AI software engineer who is capable of executing
complex engineering tasks and collaborating actively with users on
software development projects. This project aspires to replicate,
enhance, and innovate upon Devin through the power of the open-source
community.
</p>
</div>
</div>
);
}


@ -0,0 +1,27 @@
.container {
display: flex;
flex-direction: column;
padding-top: 25px;
padding-bottom: 25px;
width: 100%;
}
.innerContainer {
padding: 50px;
width: 100%;
max-width: 1300px;
padding-top: 30px;
margin: auto;
display: flex;
align-items: center;
}
.sidebarImage {
max-width: 400px;
padding-right: 30px;
}
.welcomeText {
text-align: justify;
font-size: larger;
}

36
docs/src/css/custom.css Normal file

@ -0,0 +1,36 @@
/**
* Any CSS included here will be global. The classic template
* bundles Infima by default. Infima is a CSS framework designed to
* work well for content-centric websites.
*/
/* You can override the default Infima variables here. */
:root {
--ifm-color-primary: #4465db;
--ifm-code-font-size: 95%;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
--secondary: #171717;
--secondary-dark: #0a0a0a;
--secondary-light: #737373;
}
/* For readability concerns, you should choose a lighter palette in dark mode. */
[data-theme="dark"] {
--ifm-color-primary: #4465db;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
--secondary: #737373;
--secondary-dark: #171717;
--secondary-light: #d4d4d4;
}
.footer--dark {
background-image: linear-gradient(
140deg,
var(--secondary) 20%,
var(--secondary-light) 100%
);
}
.a {
text-decoration: underline;
}

55
docs/src/pages/faq.tsx Normal file

@ -0,0 +1,55 @@
import Layout from "@theme/Layout";
export default function FAQ() {
return (
<Layout title="FAQ" description="Frequently Asked Questions">
<div
id="faq"
style={{
maxWidth: "900px",
margin: "0px auto",
padding: "40px",
textAlign: "justify",
}}
>
<h1 style={{ fontSize: "3rem" }}>Frequently Asked Questions</h1>
<h2 style={{ fontSize: "2rem" }}>Support</h2>
<h3>How can I report an issue with OpenDevin?</h3>
<p>
Please send us a message on our{" "}
<a href="https://discord.gg/mBuDGRzzES">Discord channel</a> or file a
bug on{" "}
<a href="https://github.com/OpenDevin/OpenDevin/issues">GitHub</a> if
you run into any issues!
</p>
<h2 style={{ fontSize: "2rem" }}>General</h2>
<h3>What is Devin?</h3>
<p>
<span style={{ fontWeight: "600", color: "var(--logo)" }}>Devin</span>{" "}
represents a cutting-edge autonomous agent designed to navigate the
complexities of software engineering. It leverages a combination of
tools such as a shell, code editor, and web browser, showcasing the
untapped potential of LLMs in software development. Our goal is to
explore and expand upon Devin's capabilities, identifying both its
strengths and areas for improvement, to guide the progress of open
code models.
</p>
<h3>Why OpenDevin?</h3>
<p>
The{" "}
<span style={{ fontWeight: "600", color: "var(--logo)" }}>
OpenDevin
</span>{" "}
project is born out of a desire to replicate, enhance, and innovate
beyond the original Devin model. By engaging the{" "}
<a href="https://github.com/OpenDevin/OpenDevin">
open-source community
</a>
, we aim to tackle the challenges faced by Code LLMs in practical
scenarios, producing works that significantly contribute to the
community and pave the way for future advancements.
</p>
</div>
</Layout>
);
}


@ -0,0 +1,23 @@
/**
* CSS files with the .module.css suffix will be treated as CSS modules
* and scoped locally.
*/
.heroBanner {
padding: 4rem 0;
text-align: center;
position: relative;
overflow: hidden;
}
@media screen and (max-width: 996px) {
.heroBanner {
padding: 2rem;
}
}
.buttons {
display: flex;
align-items: center;
justify-content: center;
}

33
docs/src/pages/index.tsx Normal file

@ -0,0 +1,33 @@
import useDocusaurusContext from "@docusaurus/useDocusaurusContext";
import Layout from "@theme/Layout";
import { Code } from "../components/Code/Code";
import { HomepageHeader } from "../components/HomepageHeader/HomepageHeader";
import { Welcome } from "../components/Welcome/Welcome";
export function Header({
  summary,
  description,
}: {
  title?: string;
  summary: string;
  description: string;
}): JSX.Element {
return (
<div>
<h2 style={{ fontSize: "40px" }}>{summary}</h2>
<h3 className="headerDescription">{description}</h3>
</div>
);
}
export default function Home(): JSX.Element {
const { siteConfig } = useDocusaurusContext();
return (
<Layout
title={`Hello from ${siteConfig.title}`}
description="AI-powered code generation for software engineering."
>
<div>
<HomepageHeader />
<div>
<Welcome />
<Code />
</div>
</div>
</Layout>
);
}

0
docs/static/.nojekyll vendored Normal file

Five image files were moved with no size changes (267 KiB, 113 KiB, 386 KiB, 42 KiB, and 14 KiB before and after).

BIN
docs/static/img/teaser.mp4 vendored Normal file

Binary file not shown.

7
docs/tsconfig.json Normal file

@ -0,0 +1,7 @@
{
// This file is not used in compilation. It is here just for a nice editor experience.
"extends": "@docusaurus/tsconfig",
"compilerOptions": {
"baseUrl": "."
}
}

Some files were not shown because too many files have changed in this diff.