# Agent Framework Research
In this folder, there may exist multiple implementations of `Agent` that will be used by the framework,
for example `agenthub/monologue_agent`, `agenthub/metagpt_agent`, `agenthub/codeact_agent`, etc.
Contributors from different backgrounds and interests can choose to contribute to any (or all!) of these directions.
## Constructing an Agent
The abstraction for an agent can be found here.

Agents are run inside a loop. At each iteration, `agent.step()` is called with a
`State` input, and the agent must output an `Action`.

Every agent also has a `self.llm` which it can use to interact with the LLM configured by the user.
See the LiteLLM docs for `self.llm.completion`.
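The loop described above can be sketched roughly as follows. This is an illustrative stand-in, not code from this repository: `FakeLLM` and `EchoAgent` are hypothetical names, and a real `self.llm.completion` call goes through LiteLLM rather than a stub.

```python
# Illustrative sketch of the agent loop; FakeLLM and EchoAgent are
# hypothetical stand-ins, not real classes from this repository.

class FakeLLM:
    def completion(self, messages):
        # A real self.llm.completion call goes through LiteLLM;
        # here we just echo the last user message back.
        return {"choices": [{"message": {"content": messages[-1]["content"]}}]}

class EchoAgent:
    def __init__(self, llm):
        self.llm = llm

    def step(self, state):
        # Build a prompt from the state and ask the LLM what to do next.
        resp = self.llm.completion([{"role": "user", "content": state}])
        # A real agent would parse this response into an Action object.
        return resp["choices"][0]["message"]["content"]

agent = EchoAgent(FakeLLM())
action = agent.step("list files in the workspace")
```

A real implementation would construct the prompt from the full `State` history and parse the completion into one of the `Action` types listed below.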
## State
The state contains:

- A history of actions taken by the agent, as well as any observations (e.g. file content, command output) from those actions
- A list of actions/observations that have happened since the most recent step
- A `root_task`, which contains a plan of action
  - The agent can add and modify subtasks through the `AddTaskAction` and `ModifyTaskAction`
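A `root_task` can be pictured as a tree of subtasks. The sketch below is hypothetical: the real task class lives in `opendevin/controller/state` and differs in detail, but the add/modify operations correspond to `AddTaskAction` and `ModifyTaskAction`.

```python
# Hypothetical sketch of a root_task as a tree of subtasks; the real
# implementation in opendevin/controller/state differs in detail.

class Task:
    def __init__(self, goal, state="open"):
        self.goal = goal
        self.state = state
        self.subtasks = []

    def add_subtask(self, goal):
        # Roughly what AddTaskAction does: attach a new subtask to the plan.
        child = Task(goal)
        self.subtasks.append(child)
        return child

    def set_state(self, state):
        # Roughly what ModifyTaskAction does: change a subtask's state.
        self.state = state

root = Task("fix the failing test")
sub = root.add_subtask("reproduce the failure")
sub.set_state("completed")
```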
## Actions
Here is a list of available Actions, which can be returned by `agent.step()`:
- `CmdRunAction` - Runs a command inside a sandboxed terminal
- `CmdKillAction` - Kills a background command
- `IPythonRunCellAction` - Executes a block of Python code interactively (in a Jupyter notebook) and receives a `CmdOutputObservation`. Requires the `jupyter` plugin to be set up as a requirement.
- `FileReadAction` - Reads the content of a file
- `FileWriteAction` - Writes new content to a file
- `BrowseURLAction` - Gets the content of a URL
- `AgentRecallAction` - Searches memory (e.g. a vector database)
- `AddTaskAction` - Adds a subtask to the plan
- `ModifyTaskAction` - Changes the state of a subtask
- `AgentThinkAction` - A no-op that allows the agent to add plaintext to the history (as well as the chat log)
- `AgentTalkAction` - A no-op that allows the agent to add plaintext to the history and talk to the user
- `AgentFinishAction` - Stops the control loop, allowing the user/delegator agent to enter a new task
- `AgentRejectAction` - Stops the control loop, allowing the user/delegator agent to enter a new task
- `MessageAction` - Represents a message from an agent or the user
You can use `action.to_dict()` and `action_from_dict` to serialize and deserialize actions.
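Serialization of this kind is usually a dict with the action name plus its arguments, looked up in a registry on the way back. The sketch below is a hypothetical illustration of that pattern, not the repository's actual `to_dict`/`action_from_dict` implementation.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of dict-based action (de)serialization; the real
# to_dict / action_from_dict helpers in opendevin differ in detail.

ACTION_REGISTRY = {}

def register(cls):
    # Map the class name to the class so we can reconstruct it later.
    ACTION_REGISTRY[cls.__name__] = cls
    return cls

@register
@dataclass
class CmdRunAction:
    command: str

    def to_dict(self):
        # Record which action this is, plus its arguments.
        return {"action": type(self).__name__, "args": asdict(self)}

def action_from_dict(d):
    # Look up the class by name and rebuild it from its arguments.
    return ACTION_REGISTRY[d["action"]](**d["args"])

original = CmdRunAction(command="ls -la")
restored = action_from_dict(original.to_dict())
```

The round trip preserves the action: `restored` compares equal to `original` because dataclasses define field-wise equality.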
## Observations
There are also several types of Observations. These are typically available in the step following the corresponding Action. But they may also appear as a result of asynchronous events (e.g. a message from the user, logs from a command running in the background).
Here is a list of available Observations:
- `CmdOutputObservation`
- `BrowserOutputObservation`
- `FileReadObservation`
- `FileWriteObservation`
- `AgentRecallObservation`
- `ErrorObservation`
- `SuccessObservation`
You can use `observation.to_dict()` and `observation_from_dict` to serialize and deserialize observations.
## Interface
Every agent must implement the following methods:
### `step`

```python
def step(self, state: "State") -> "Action":
```
`step` moves the agent forward one step towards its goal. This usually means
sending a prompt to the LLM, then parsing the response into an `Action`.
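The parsing half of `step` can be sketched as below. The `<execute>` tag format and the tuple representation are purely illustrative assumptions; real agents in this repository each define their own prompt and parsing conventions.

```python
import re

# Hypothetical sketch of parsing an LLM response inside step().
# The <execute> tag convention here is an illustrative assumption.

def parse_response(text):
    # If the model wrapped a shell command in <execute> tags, run it;
    # otherwise treat the whole response as a "think" message.
    match = re.search(r"<execute>(.*?)</execute>", text, re.DOTALL)
    if match:
        return ("CmdRunAction", match.group(1).strip())
    return ("AgentThinkAction", text.strip())

action = parse_response("I will check the tests.\n<execute>pytest -x</execute>")
```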
### `search_memory`

```python
def search_memory(self, query: str) -> list[str]:
```
`search_memory` should return a list of events that match the query. This will be used
for the recall action.

You can optionally just return `[]` for this method, meaning the agent has no long-term memory.
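A minimal `search_memory` can be as simple as a substring match over stored events. This sketch assumes a hypothetical `MemoryAgent` with a plain list of strings; a production agent would more likely back this with a vector database, as the `AgentRecallAction` description suggests.

```python
# Hypothetical sketch of a naive search_memory: case-insensitive
# substring match over stored event strings.

class MemoryAgent:
    def __init__(self):
        self.events = []

    def remember(self, event):
        self.events.append(event)

    def search_memory(self, query):
        # Return every stored event containing the query text.
        return [e for e in self.events if query.lower() in e.lower()]

agent = MemoryAgent()
agent.remember("Ran `pytest` and saw 3 failures")
agent.remember("Opened README.md")
hits = agent.search_memory("pytest")
```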