# CodeAct-based Agent Framework
This folder implements the CodeAct idea, which relies on an LLM to autonomously perform actions in a Bash shell. It demands more of the LLM itself: the model must be capable enough to carry out every step on its own rather than getting stuck in an infinite loop.
A minimal example can be found at `research/codeact/examples/run_flask_server_with_bash.py`:
```bash
mkdir workspace
PYTHONPATH=`pwd`:$PYTHONPATH python3 opendevin/main.py -d ./workspace -c CodeActAgent -t "Please write a flask app that returns 'Hello, World\!' at the root URL, then start the app on port 5000. python3 has already been installed for you."
```
Example: this prompts gpt-4-0125-preview to write a Flask server, install the flask library, and start the server.
Most things work as expected, except that at the end the model did not follow the instruction to stop the interaction by outputting `<execute> exit </execute>`.
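Detecting that termination signal amounts to checking whether the model's reply contains an `<execute>` block whose body is exactly `exit`. A hedged sketch of such a check (the function name is hypothetical, not from this codebase):

```python
import re


def is_finished(response: str) -> bool:
    """Return True if the reply signals completion with <execute> exit </execute>."""
    match = re.search(r"<execute>\s*(.*?)\s*</execute>", response, re.DOTALL)
    return match is not None and match.group(1) == "exit"
```

A check like this lets the driver loop terminate cleanly instead of waiting on a model that never stops.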
TODO: This should be fixable by either (1) including a complete in-context example like this, OR (2) collecting some interaction data like this and fine-tuning a model (like this, a more complex route).