Engel Nyst
c567c11267
Enable/disable function calling by user configuration ( #5992 )
...
Co-authored-by: co <yc5@tju.edu.cn>
Co-authored-by: Cheng Yang <93481273+young010101@users.noreply.github.com>
2025-01-03 01:40:49 +01:00
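The user-facing toggle this commit series adds can be sketched roughly as below. `LLMConfig`, the field name `native_tool_calling`, and the model list are illustrative assumptions for this sketch, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMConfig:
    model: str
    # Tri-state flag: True/False is an explicit user choice,
    # None means "auto-detect from the model name".
    native_tool_calling: Optional[bool] = None  # illustrative field name

# Models assumed (in this sketch) to support native function calling.
FUNCTION_CALLING_MODELS = {"claude-3-5-sonnet-20241022", "gpt-4o"}

def is_function_calling_active(config: LLMConfig) -> bool:
    # An explicit user setting always wins; otherwise fall back
    # to the known-supported model list.
    if config.native_tool_calling is not None:
        return config.native_tool_calling
    return config.model in FUNCTION_CALLING_MODELS
```

A user could thereby force prompt-based tool use even on a model that supports native function calling, or opt a new model in before it lands on the supported list.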
Robert Brennan
e628615094
Revert "feat(config): enable/disable LLM model tools/funcs usage by config" ( #5989 )
...
Co-authored-by: tofarr <tofarr@gmail.com>
2025-01-03 00:28:07 +01:00
Cheng Yang
8d627e52cb
feat(config): enable/disable LLM model tools/funcs usage by config ( #5576 )
...
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2025-01-02 21:20:37 +00:00
Engel Nyst
a2e9e206e8
Reset a failed tool call ( #5666 )
...
Co-authored-by: openhands <openhands@all-hands.dev>
2024-12-31 21:21:32 +01:00
Cheng Yang
c7a8dcf079
chore(log): better json parse ( #5581 )
2024-12-31 00:04:21 +08:00
Xingyao Wang
a021045dce
fix( #5818 ): Force use of string serializer for deepseek function calling ( #5824 )
...
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2024-12-26 20:45:39 +00:00
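The deepseek fix above turns on string serialization for tool-call arguments. In the OpenAI-style tool-call format, `function.arguments` must be a JSON string, while some providers hand back a dict; a minimal normalization sketch (not the project's actual code) looks like:

```python
import json
from typing import Any

def stringify_tool_arguments(tool_call: dict[str, Any]) -> dict[str, Any]:
    # OpenAI-style tool calls carry `arguments` as a JSON *string*;
    # normalize dict/list arguments so the payload round-trips cleanly.
    fn = tool_call.get("function", {})
    args = fn.get("arguments")
    if isinstance(args, (dict, list)):
        tool_call = {**tool_call, "function": {**fn, "arguments": json.dumps(args)}}
    return tool_call
```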
Engel Nyst
3297e4d5a8
Use litellm's modify params ( #5636 )
2024-12-17 21:32:49 +01:00
Engel Nyst
b295f5775c
Revert "Fix issue #5609 : Use litellm's modify_params with default True" ( #5631 )
2024-12-16 20:39:57 +00:00
OpenHands
09735c7869
Fix issue #5609 : Use litellm's modify_params with default True ( #5611 )
...
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2024-12-16 20:18:45 +01:00
Engel Nyst
590ebb6e47
Small fix and addition for token counting ( #5550 )
...
Co-authored-by: openhands <openhands@all-hands.dev>
2024-12-15 15:12:05 +01:00
Calvin Smith
7ef6fa666d
feat(eval): Response Latency Tracking ( #5588 )
...
Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: Calvin Smith <calvin@all-hands.dev>
2024-12-13 22:51:13 +01:00
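Response-latency tracking for evaluation amounts to timing each completion call; a bare-bones sketch of the idea (the PR's actual bookkeeping is more involved) is:

```python
import time
from typing import Any, Callable, Tuple

def timed_completion(fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Tuple[Any, float]:
    # Wrap an LLM completion call and record wall-clock latency in seconds.
    # monotonic() is immune to system clock adjustments.
    start = time.monotonic()
    result = fn(*args, **kwargs)
    latency = time.monotonic() - start
    return result, latency
```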
OpenHands
794408cd31
Fix issue #5383 : [Bug]: LLM Cost is added to the metrics twice ( #5396 )
...
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2024-12-04 21:32:08 +01:00
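The bug fixed here was LLM cost being added to the metrics twice. One way to guard against that class of bug is to deduplicate by response id when accumulating cost; this mechanism is an illustrative fix, not necessarily the one the PR used:

```python
class Metrics:
    # Minimal cost accounting that refuses to count the same
    # LLM response twice.
    def __init__(self) -> None:
        self.accumulated_cost = 0.0
        self._seen_response_ids: set[str] = set()

    def add_cost(self, response_id: str, cost: float) -> None:
        if response_id in self._seen_response_ids:
            return  # this response was already accounted for
        self._seen_response_ids.add(response_id)
        self.accumulated_cost += cost
```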
Robert Brennan
3ac57a61a7
fix issue where message is none ( #5307 )
2024-11-28 02:02:52 +01:00
Ryan H. Tran
9fab9ae8a6
Add fn call in response debug logging ( #5301 )
2024-11-27 20:29:35 +01:00
OpenHands
6184b9d7f4
Fix issue #4820 : [Bug]: litellm doesn't support function-calling models from OpenRouter, so CodeActAgent couldn't interact with the internet on its own without asking the browser agent for help ( #4822 )
...
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
2024-11-25 16:26:27 +00:00
niliy01
68d1e76ccd
fix: remove repeated completion assignment in llm.py ( #5167 )
2024-11-22 01:55:26 +01:00
Engel Nyst
d08886f30e
Fix non-function calls messages ( #5026 )
...
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
2024-11-21 18:18:49 +00:00
Cheng Yang
68e52a9c62
feat: add return type hints to LLM class methods ( #5173 )
2024-11-21 14:00:46 +01:00
young010101
746722e1b5
style: remove extra newline in LLM wrapper function ( #5149 )
2024-11-20 22:41:51 -05:00
Xingyao Wang
07f0d1ccb3
feat(llm): convert function call request for non-funcall OSS model ( #4711 )
...
Co-authored-by: Calvin Smith <email@cjsmith.io>
2024-11-15 00:40:09 +08:00
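Converting a function-call request for a model without native function calling means describing the tools inside the prompt and asking the model to emit structured output. A rough sketch of that idea (the project's actual conversion is considerably more elaborate):

```python
import json
from typing import Any

def tools_to_prompt(tools: list[dict[str, Any]]) -> str:
    # Render OpenAI-style tool definitions as plain prompt text for
    # models that lack native function calling, instructing the model
    # to reply with a JSON object instead of a tool_calls field.
    lines = [
        'You can use the following tools. '
        'Reply with JSON of the form {"name": ..., "arguments": ...}:'
    ]
    for tool in tools:
        fn = tool["function"]
        lines.append(f"- {fn['name']}: {fn.get('description', '')}")
        lines.append(f"  parameters: {json.dumps(fn.get('parameters', {}))}")
    return "\n".join(lines)
```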
Xingyao Wang
4405b109e3
Fix issue #4809 : [Bug]: Model does not support image upload when usin… ( #4810 )
...
Co-authored-by: openhands <openhands@all-hands.dev>
2024-11-07 02:28:16 +00:00
Xingyao Wang
24117143ae
feat(llm): add new haiku into func calling support model ( #4738 )
2024-11-04 22:38:00 +00:00
Robert Brennan
250fcbe62c
Various async fixes ( #4722 )
2024-11-04 10:08:09 -05:00
Xingyao Wang
1747b3d6b2
fix: prompt caching ( #4704 )
2024-11-02 07:21:21 -04:00
Engel Nyst
bde978cf0f
Fix Openrouter ( #4641 )
2024-10-30 18:31:24 +00:00
Xingyao Wang
2587220b12
fix(llm): fallback when model is out of function calling supported list ( #4617 )
...
Co-authored-by: openhands <openhands@all-hands.dev>
2024-10-31 01:54:50 +08:00
OpenHands
866ba6e3b2
Fix issue #4629 : [Bug]: Replace claude-3-5-sonnet-20240620 with claude-3-5-sonnet-20241022 ( #4631 )
...
Co-authored-by: Graham Neubig <neubig@gmail.com>
2024-10-30 17:16:04 +00:00
Robert Brennan
572b3ad682
safer model info access ( #4619 )
2024-10-29 17:44:51 -04:00
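"Safer model info access" reflects that provider metadata lookups can return `None` or lack keys, so every access should carry a default instead of indexing directly. An illustrative guard:

```python
from typing import Any, Optional

def get_model_info_safe(
    model_info: Optional[dict[str, Any]], key: str, default: Any = None
) -> Any:
    # Defensive lookup: model metadata may be missing entirely
    # or lack the requested key.
    if not model_info:
        return default
    return model_info.get(key, default)
```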
Robert Brennan
30eeaa641c
Major logging overhaul ( #4563 )
2024-10-29 07:30:50 +01:00
Xingyao Wang
ae13171194
feat(agent): CodeAct with function calling ( #4537 )
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: tobitege <10787084+tobitege@users.noreply.github.com>
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: tofarr <tofarr@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 11:06:33 +08:00
Xingyao Wang
7340b78962
feat(eval): rewrite log_completions to save completions to directory ( #4566 )
2024-10-25 16:36:11 +00:00
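Saving completions to a directory, as this eval change does, comes down to writing each raw completion as its own JSON file so runs can be replayed later. The file-naming scheme below is an assumption for illustration:

```python
import json
import time
import uuid
from pathlib import Path
from typing import Any

def save_completion(log_dir: str, completion: dict[str, Any]) -> Path:
    # Persist one completion per file; a millisecond timestamp plus a
    # short random suffix keeps names unique within a run.
    directory = Path(log_dir)
    directory.mkdir(parents=True, exist_ok=True)
    name = f"{int(time.time() * 1000)}_{uuid.uuid4().hex[:8]}.json"
    path = directory / name
    path.write_text(json.dumps(completion, indent=2))
    return path
```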
Xingyao Wang
dcd4b04f57
feat(llm): update prompt caching list to include new sonnet ( #4552 )
2024-10-25 20:36:35 +08:00
Xingyao Wang
da548d308c
[agent] LLM-based editing ( #3985 )
...
Co-authored-by: Tim O'Farrell <tofarr@gmail.com>
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: Robert Brennan <accounts@rbren.io>
Co-authored-by: Graham Neubig <neubig@gmail.com>
2024-10-22 04:51:44 +08:00
Graham Neubig
b660aa99b8
Fix issue #4480 : [Bug]: Being blocked by cloudflare results in futile retries ( #4482 )
...
Co-authored-by: openhands <openhands@all-hands.dev>
2024-10-18 13:04:34 -04:00
Engel Nyst
caa77cf7a6
Log cache hit/miss for deepseek ( #4343 )
2024-10-11 18:43:43 +02:00
Engel Nyst
8c32ef2234
Fix to use async variant of completion ( #4228 )
2024-10-06 05:10:36 +02:00
Engel Nyst
1abfd3b808
Retry on litellm's APIError, which includes 502 ( #4167 )
2024-10-03 01:54:49 +02:00
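Retrying on litellm's `APIError` (which covers 5xx responses such as 502) is done in the project with a retry decorator; this manual loop with exponential backoff is just a sketch of the idea, and the `APIError` class here is a stand-in for litellm's:

```python
import time
from typing import Any, Callable

class APIError(Exception):
    # Stand-in for litellm's APIError (covers 5xx responses like 502).
    pass

def completion_with_retry(
    fn: Callable[[], Any], retries: int = 3, base_delay: float = 0.0
) -> Any:
    # Retry transient API failures with exponential backoff;
    # re-raise once the retry budget is exhausted.
    for attempt in range(retries):
        try:
            return fn()
        except APIError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```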
Engel Nyst
e582806004
Vision and prompt caching fixes ( #4014 )
2024-09-28 14:37:29 +02:00
Engel Nyst
0a03c802f5
Refactor llm.py ( #4057 )
2024-09-26 17:44:18 +00:00
Engel Nyst
798aaeaef6
remove Exception in the agent ( #4054 )
2024-09-26 06:39:17 +02:00
Xingyao Wang
8ea2d61ff2
[llm] Add app name for OpenRouter ( #4010 )
2024-09-24 00:26:07 +02:00
Graham Neubig
73ded7de10
Make drop_params default in llm_config ( #4012 )
2024-09-23 16:57:10 -04:00
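litellm's `drop_params` option silently removes arguments a given provider does not accept instead of raising an error. A standalone filter mimicking that behavior against a fixed, purely illustrative allow-list:

```python
from typing import Any

# Illustrative allow-list; the real set is per-provider.
SUPPORTED_PARAMS = {"model", "messages", "temperature", "max_tokens"}

def drop_unsupported_params(kwargs: dict[str, Any]) -> dict[str, Any]:
    # Keep only arguments the (hypothetical) provider accepts,
    # discarding the rest instead of erroring.
    return {k: v for k, v in kwargs.items() if k in SUPPORTED_PARAMS}
```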
Xingyao Wang
714e46f29a
[eval] save eventstream & llm completions for SWE-Bench run_infer ( #3923 )
2024-09-22 04:39:13 +00:00
tobitege
01462e11d7
(fix) CodeActAgent/LLM: react on should_exit flag (user cancellation) ( #3968 )
2024-09-20 23:49:45 +02:00
Engel Nyst
8fdfece059
Refactor messages serialization ( #3832 )
...
Co-authored-by: Robert Brennan <accounts@rbren.io>
2024-09-18 23:48:58 +02:00
tofarr
ad0b549d8b
feat: Tighten up timeouts and interrupt conditions ( #3926 )
2024-09-18 20:50:42 +00:00
Engel Nyst
47f60b8275
Don't send gemini settings when the llm is not gemini ( #3940 )
2024-09-18 20:12:58 +00:00
tobitege
b4408b41c9
(feat) LLM class: add safety_settings for Gemini; improve max_output_tokens defaulting ( #3925 )
2024-09-18 11:51:23 -04:00
Cole Murray
97a03faf33
Add Handling of Cache Prompt When Formatting Messages ( #3773 )
...
* Add Handling of Cache Prompt When Formatting Messages
* Fix Value for Cache Control
* Fix Value for Cache Control
* Update openhands/core/message.py
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
* Fix lint error
* Serialize Messages if Prompt Caching Is Enabled
* Remove formatting message change
---------
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: tobitege <10787084+tobitege@users.noreply.github.com>
2024-09-10 16:34:41 +00:00
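Handling the cache prompt when formatting messages follows Anthropic-style prompt caching, where content blocks are marked with `cache_control: {"type": "ephemeral"}`. Which blocks to mark is a design choice; marking the last text block of the last message, as below, is only an assumption for this sketch:

```python
from typing import Any

def mark_cacheable(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # Tag the last content block of the last message as a cache
    # breakpoint (Anthropic "ephemeral" cache_control marker).
    if not messages:
        return messages
    content = messages[-1].get("content")
    if isinstance(content, list) and content:
        content[-1]["cache_control"] = {"type": "ephemeral"}
    return messages
```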
tobitege
5ffff742de
Regression fixes: LLM logging; client readiness (EventStreamRuntime) ( #3776 )
...
* Regression fixes: LLM logging; client readiness (EventStreamRuntime)
* fix llm.async_completion_wrapper bad edit in previous commit
* regen couple of mock files
* client: always log initialized status
2024-09-09 21:02:43 +02:00