* Updated documentation using ruff's autofix feature
* Updated pyproject.toml to include docstring validations
* Updated docstrings using ruff's autofix feature
* Deleted opendevin/runtime/utils/soource.py, keeping in sync with main
---------
Co-authored-by: Graham Neubig <neubig@gmail.com>
Currently, OpenDevin uses a global singleton LLM config and a global singleton agent config. This PR allows users to configure an LLM config for each agent. A hypothetically useful scenario is to use a cheaper LLM for repo exploration / code search, and a more powerful LLM to actually do the problem solving (CodeActAgent).
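As a rough sketch of the idea, per-agent LLM selection could look like the following; the class and field names here are illustrative assumptions, not the actual OpenDevin configuration API:

```python
# Illustrative sketch only: names and structure are assumptions, not OpenDevin's API.
from dataclasses import dataclass

@dataclass
class LLMConfig:
    model: str = 'gpt-4o'
    temperature: float = 0.0

@dataclass
class AgentConfig:
    llm_config: str = 'default'  # name of the LLM config this agent should use

# Named LLM configs; agents reference them by name instead of sharing one global config.
llm_configs = {
    'default': LLMConfig(model='gpt-4o'),
    'cheap': LLMConfig(model='gpt-3.5-turbo'),
}

agent_configs = {
    'CodeActAgent': AgentConfig(llm_config='default'),         # powerful model for problem solving
    'StudyRepoForTaskAgent': AgentConfig(llm_config='cheap'),   # cheaper model for repo exploration
}

def llm_config_for_agent(agent_name: str) -> LLMConfig:
    """Resolve the LLM config for an agent, falling back to the default."""
    agent_cfg = agent_configs.get(agent_name, AgentConfig())
    return llm_configs.get(agent_cfg.llm_config, llm_configs['default'])
```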
Partially solves #2075 (web GUI improvement is not the goal of this PR)
This PR fixes #1897. In addition, this PR fixes and tweaks a few micro-agents.
For the first time, I am able to use ManagerAgent to complete the test_write_simple_script and test_edits tasks, so this PR also adds ManagerAgent to the integration tests. test_write_simple_script involves delegation to CoderAgent, while test_edits involves delegation to TypoFixerAgent.
Also for the first time, I am able to use DelegateAgent to complete the same two tasks, so this PR adds DelegateAgent to the integration tests as well. It involves delegation to StudyRepoForTaskAgent, CoderAgent and VerifierAgent.
This PR is a blocker for #1735 and likely #1945.
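For context, here is a minimal, hypothetical sketch of the delegation flow these tests exercise; the action shape and the routing logic are assumptions for illustration, not the actual OpenDevin event API:

```python
# Hypothetical sketch of agent delegation; names and shapes are assumptions.
from dataclasses import dataclass, field

@dataclass
class AgentDelegateAction:
    agent: str                                   # name of the agent to delegate to
    inputs: dict = field(default_factory=dict)   # task and context for the delegate

def manager_step(task: str) -> AgentDelegateAction:
    """Pick a specialist agent for the task and delegate to it."""
    if 'typo' in task.lower():
        return AgentDelegateAction(agent='TypoFixerAgent', inputs={'task': task})
    return AgentDelegateAction(agent='CoderAgent', inputs={'task': task})

print(manager_step('Write a simple script that prints hello'))
```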
I was able to run a few SWE-Bench benchmark instances myself by following the documentation - it was great! In general the experience was smooth, thanks to @xingyaoww, @libowen2121 and the team! I made a few small enhancements and fixes to further improve the developer experience.
* Always use poetry run python (i.e., the python from poetry's virtual environment) rather than bare python or python3 in scripts, so the behavior stays consistent.
* Make AGENT configurable: an argument controls which agent to benchmark. To facilitate this, I removed the hardcoded CodeActAgent from run_infer.sh and added a VERSION attribute to all agents, since the benchmark needs to record the agent version (see the sketch after this list).
* Make EVAL_LIMIT configurable: an argument controls how many instances to benchmark, which is useful for debugging and development.
* Fix the 'eval_output_dir' not defined error in run_infer.py.
* Other enhancements to the README file and logs.
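A sketch of what the configurable knobs could look like in run_infer.py; the exact flag names and the VERSION value are assumptions, not the real benchmark code:

```python
# Hypothetical sketch of the AGENT / EVAL_LIMIT knobs; flag names are assumptions.
import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description='Run SWE-Bench inference')
    parser.add_argument('--agent-cls', default='CodeActAgent',
                        help='which agent class to benchmark (AGENT)')
    parser.add_argument('--eval-n-limit', type=int, default=None,
                        help='benchmark only the first N instances (EVAL_LIMIT)')
    return parser.parse_args()

# Each agent class carries a VERSION attribute so the benchmark can record
# which agent version produced the results.
class CodeActAgent:
    VERSION = '1.0'  # illustrative value
```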
I also noticed that a lot of the code in run_infer.py could be shared by other benchmarks, but since we only have one benchmark now, I think we should avoid over-engineering. A refactor and code dedup would be useful in the future once we have more benchmarks.
* Move MemoryCondenser, LongTermMemory, and json out of the monologue
* PlannerAgent and the micro-agents use the custom json.loads/dumps
* Move short-term history out of the monologue agent...
* Move the memory classes into their own package
* Add __init__
* Add TypoFixerAgent micro-agent to fix typos
* Improve parse_response to accurately extract the first complete JSON object (see the sketch after this list)
* Add tests for parse_response function handling complex scenarios
* Fix tests and logic to use action_from_dict
* Fix small formatting issues
* Remove screenshot from browser observation
* Refactor utils
* Allow only dict
* Fix screenshot not showing up in the frontend
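In the spirit of the parse_response improvement above, here is a minimal sketch of extracting the first complete JSON object from an LLM response; this is an assumption about the approach, not the actual implementation:

```python
import json

def parse_response(response: str) -> dict:
    """Extract the first complete JSON object embedded in an LLM response.

    Sketch only: the real parse_response may differ. raw_decode stops at the end
    of the first valid JSON value, so any trailing prose after the object is ignored.
    """
    decoder = json.JSONDecoder()
    start = response.find('{')
    while start != -1:
        try:
            obj, _ = decoder.raw_decode(response, start)
            if isinstance(obj, dict):  # only dicts are accepted ("allow only dict")
                return obj
        except json.JSONDecodeError:
            pass
        start = response.find('{', start + 1)
    raise ValueError('No complete JSON object found in response')
```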
---------
Co-authored-by: Robert Brennan <accounts@rbren.io>
* Fix micro-agent definitions
* Add tests for micro-agents
* Add to CI
* Revert "Add to CI"
This reverts commit 94f3b4e7c8408a1b0267f3847cbaefdcd995db05.
* Remove test artifacts for ManagerAgent
---------
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>