# OpenDevin Server
This is a WebSocket server that executes tasks using an agent.
## Install
Follow the instructions in the base `README.md` to install dependencies and set up the project.
## Start the Server

```bash
uvicorn opendevin.server.listen:app --reload --port 3000
```
## Test the Server

You can use [`websocat`](https://github.com/vi/websocat) to test the server:

```
websocat ws://127.0.0.1:3000/ws
{"action": "start", "args": {"task": "write a bash script that prints hello"}}
```
## Supported Environment Variables

```bash
LLM_API_KEY=sk-...                      # Your OpenAI API Key
LLM_MODEL=gpt-3.5-turbo-1106            # Default model for the agent to use
WORKSPACE_BASE=/path/to/your/workspace  # Default path to model's workspace
```
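
These variables are read from the process environment when the server starts. As a rough sketch (assuming only that `uvicorn` is installed; the values below are placeholders), you could also set them programmatically before launching the same app:

```python
# Illustrative launcher sketch: set the supported variables for this process,
# then run the same app that `uvicorn ... --reload --port 3000` serves.
import os

import uvicorn

# Placeholder values only; substitute your real key, model, and workspace path.
os.environ.setdefault("LLM_API_KEY", "sk-...")
os.environ.setdefault("LLM_MODEL", "gpt-3.5-turbo-1106")
os.environ.setdefault("WORKSPACE_BASE", os.path.abspath("workspace"))

if __name__ == "__main__":
    # Equivalent to: uvicorn opendevin.server.listen:app --reload --port 3000
    uvicorn.run("opendevin.server.listen:app", port=3000, reload=True)
```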
## API Schema
There are two types of messages that can be sent to, or received from, the server:
- Actions
- Observations
### Actions
An action has three parts:
- `action`: The action to be taken
- `args`: The arguments for the action
- `message`: A friendly message that can be put in the chat log
There are several kinds of actions. Their arguments are listed below. This list may grow over time.
- `initialize` - initializes the agent. Only sent by the client.
  - `model` - the name of the model to use
  - `directory` - the path to the workspace
  - `agent_cls` - the class of the agent to use
- `start` - starts a new development task. Only sent by the client.
  - `task` - the task to start
- `read` - reads the content of a file.
  - `path` - the path of the file to read
- `write` - writes content to a file.
  - `path` - the path of the file to write
  - `content` - the content to write to the file
- `run` - runs a command.
  - `command` - the command to run
  - `background` - if true, run the command in the background
- `kill` - kills a background command.
  - `id` - the ID of the background command to kill
- `browse` - opens a web page.
  - `url` - the URL to open
- `recall` - searches long-term memory.
  - `query` - the query to search for
- `think` - allows the agent to make a plan, set a goal, or record thoughts.
  - `thought` - the thought to record
- `finish` - the agent signals that the task is completed.
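
Putting the pieces together, a complete `run` action message might look like this (an illustrative example assembled from the fields above, not captured server traffic):

```json
{
  "action": "run",
  "args": {
    "command": "ls -la",
    "background": false
  },
  "message": "Running the command `ls -la`"
}
```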
### Observations
An observation has four parts:
- `observation`: The observation type
- `content`: A string representing the observed data
- `extras`: Additional structured data
- `message`: A friendly message that can be put in the chat log
There are several kinds of observations. Their extras are listed below. This list may grow over time.
- `read` - the content of a file
  - `path` - the path of the file read
- `browse` - the HTML content of a URL
  - `url` - the URL opened
- `run` - the output of a command
  - `command` - the command run
  - `exit_code` - the exit code of the command
- `recall` - the result of a search
  - `query` - the query searched for
- `chat` - a message from the user
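
For example, the observation produced by a `run` action could take this shape (illustrative only; the `content` value stands in for the command's actual output):

```json
{
  "observation": "run",
  "content": "...",
  "extras": {
    "command": "ls -la",
    "exit_code": 0
  },
  "message": "Command `ls -la` finished with exit code 0"
}
```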