Fix CodeAct paper link (#1784)

https://arxiv.org/abs/2402.13463 is RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models

https://arxiv.org/abs/2402.01030 is Executable Code Actions Elicit Better LLM Agents
Marshall Roch 2024-05-14 13:40:07 -04:00 committed by GitHub
parent 1d8402a14a
commit 64ee5d404d


@@ -8,7 +8,7 @@ sidebar_position: 3
 ### Description
-This agent implements the CodeAct idea ([paper](https://arxiv.org/abs/2402.13463), [tweet](https://twitter.com/xingyaow_/status/1754556835703751087)) that consolidates LLM agents **act**ions into a unified **code** action space for both _simplicity_ and _performance_ (see paper for more details).
+This agent implements the CodeAct idea ([paper](https://arxiv.org/abs/2402.01030), [tweet](https://twitter.com/xingyaow_/status/1754556835703751087)) that consolidates LLM agents **act**ions into a unified **code** action space for both _simplicity_ and _performance_ (see paper for more details).
 The conceptual idea is illustrated below. At each turn, the agent can: