139 Commits

Author / SHA1 / Message / Date
Rohit Malhotra
25d9cf2890
[Refactor]: Add LLMRegistry for llm services (#9589)
Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: Graham Neubig <neubig@gmail.com>
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2025-08-18 02:11:20 -04:00
Xingyao Wang
4830b9a67d
fix(llm): include gpt-5 in fn call models; set top_p default value to None (#10363)
2025-08-15 15:08:01 +00:00
Xingyao Wang
c2f46200c0
chore(lint): Apply comprehensive linting and formatting fixes (#10287)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-08-13 21:13:19 +02:00
Jesse
4e3a862571
Add llm disable stop word env var (#10274)
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
2025-08-13 03:52:11 +00:00
Xingyao Wang
1474c5bc1c
Support gpt-5-2025-08-07 and add it to OpenHands provider (#10172)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-08-08 16:05:51 +00:00
Kenny Dizi
3a629cdf08
Add support for model claude-opus-4-1-20250805 (#10120)
2025-08-07 18:48:34 +00:00
Graham Neubig
238ae611f6
Fix: Add APIConnectionError to LLM_RETRY_EXCEPTIONS to handle temporary API errors (#9818)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-08-05 16:38:41 -04:00
Xingyao Wang
5282770a4c
Fix litellm_proxy model info JSON parsing error handling (#10009)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-07-31 14:52:36 +00:00
Xingyao Wang
6cea73b6da
Add qwen-3-coder-480b to OpenHands provider (#9985)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-07-30 23:12:31 +08:00
Engel Nyst
a32a623078
perf(gemini): Apply Gemini 2.5 Pro performance optimizations from PR 9913 (#9925)
Co-authored-by: OpenHands-Claude <openhands@all-hands.dev>
2025-07-29 23:28:50 +00:00
Xingyao Wang
c8f9e6b9fc
feat(llm): add qwen to fn call supported model (#9929)
2025-07-27 04:53:55 +00:00
Regis David Souza Mesquita
0daaf21607
Add support for the groq hosted kimi-k2-instruct model in the functio… (#9759)
2025-07-21 15:14:09 +02:00
Ray Myers
bc8ef37192
fix - Avoid building debug log message when not logged (#9600)
2025-07-17 11:42:06 -05:00
Peter Hamilton
11c37d8d70
Update llm constants to match on unpinned claude-sonnet-4 (#9681)
2025-07-17 13:48:35 +02:00
Xingyao Wang
376dc21e34
(llm): Add Kimi K2 to function calling supported model (#9747)
2025-07-16 17:19:10 +00:00
Xingyao Wang
cd32b5508c
Add OpenAI o3 model support to verified models and OpenHands provider (#9720)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-07-15 18:19:44 +00:00
Xingyao Wang
6e25d4bbb6
Add OpenHands provider for LLM through OH Cloud (#9526)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-07-15 01:44:49 +08:00
OpenHands
8937b3fbfc
Fix issue #9655: [Bug]: CodeActAgent is incompatible with xAI Grok-4 due to hardcoded stop parameter (#9666)
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2025-07-11 15:31:11 +00:00
TakumaNakao
1be77faf94
Add gemini-2.5 to REASONING_EFFORT_SUPPORTED_MODELS (#9546)
2025-07-06 06:31:41 +00:00
Graham Neubig
17853cd5bd
Change default max_output_tokens to None and add comprehensive model tests (#9366)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-06-29 21:57:34 -04:00
Peter Hamilton
66b95adbc9
Fix: Retry on Bedrock ServiceUnavailableError (#9419)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-06-27 22:17:50 +02:00
Xingyao Wang
8e4a8a65f8
Revert "Simplify max_output_tokens handling in LLM classes" (#9364) 2025-06-25 20:01:23 +00:00
MXDI
6aad23d35c
feat: Add support for Mistral AI models with customizable safety sett… (#8802)
Co-authored-by: Mahdiglm <mahdiglm@users.noreply.github.com>
Co-authored-by: Engel Nyst <engel.nyst@gmail.com>
Co-authored-by: mamoodi <mamoodiha@gmail.com>
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2025-06-23 18:37:06 +00:00
Graham Neubig
1e33624951
Simplify max_output_tokens handling in LLM classes (#9296)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-06-23 06:48:45 -04:00
Xingyao Wang
078534c2ab
Fix httpx deprecation warning during LLM API calls (#9261)
Co-authored-by: Rohit Malhotra <rohitvinodmalhotra@gmail.com>
Co-authored-by: openhands <openhands@all-hands.dev>
2025-06-20 18:36:31 +00:00
Rohit Malhotra
2fd1fdcd7e
[Refactor, Fix]: Agent controller state/metrics management (#9012)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-06-16 11:24:13 -04:00
Engel Nyst
8ee85a45a2
Reduce more logs (#8712)
2025-05-27 16:05:04 +02:00
Xingyao Wang
926b425e12
add claude 4 to prompt caching and fn call list; do not add icl for devstral (#8642)
2025-05-22 19:10:00 +00:00
Engel Nyst
c26ef180f2
Fix unsupported MCP tools param (#8610)
2025-05-21 14:41:01 +00:00
Engel Nyst
e4586432ad
Add top_k (#8480)
2025-05-14 21:46:03 +02:00
Chase
e72153629d
fix #8424: change native_tool_calling semantics (#8463)
2025-05-12 19:21:51 -04:00
Polly
27c49471a8
Fix Bug #8425 - Enable prompt cache for OpenRouter claude-3.7-sonnet model (#8426)
2025-05-11 00:07:31 +02:00
Polly
90aab29bc0
Fix Issue #8413: max_output_tokens in openrouter/anthropic/claude-3.7-sonnet doesn't work correctly (#8415)
2025-05-10 08:29:39 +00:00
Graham Neubig
b5dbf81179
Fix typing issues in openhands/llm directory (#8377)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-05-09 18:26:59 +00:00
AutoLTX
3d68711ca3
Display context window usage status in UI (#8267)
2025-05-09 11:39:14 +08:00
Graham Neubig
4ff43d1d99
Add claude-sonnet-latest to supported models lists (#8365)
Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
2025-05-09 00:08:09 +00:00
Graham Neubig
d5a8d4251c
Fix missing comma in hosts (#8345)
2025-05-07 21:54:15 -04:00
Engel Nyst
9b1aaa53fe
Reset the reset (#8079)
2025-04-25 00:59:37 +02:00
Kenny Dizi
85e2b73eb4
[Feat] Support o3 and o4 mini (#7898)
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
2025-04-18 21:23:19 +00:00
Xingyao Wang
e69ae81ad2
Add GPT-4.1 to function calling list (#7866)
Co-authored-by: Juan Michelini <juan@juan.com.uy>
Co-authored-by: openhands <openhands@all-hands.dev>
2025-04-15 18:12:15 +02:00
NarwhalChen
513f7ab7e7
fix(llm): ensure base_url has protocol prefix for model info fetch when using LiteLLM (#7782)
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: sp.wack <83104063+amanape@users.noreply.github.com>
2025-04-10 20:10:06 +04:00
Graham Neubig
9b8a628395
Add more extensive typing to openhands/llm/ directory (#7727)
Co-authored-by: openhands <openhands@all-hands.dev>
2025-04-06 17:59:25 +00:00
Xingyao Wang
0ab9d97f2d
fix(llm): retry on InternalServerError and Timeout; handle Gemini returning len(choices) < 1 (#7713)
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: openhands <openhands@all-hands.dev>
2025-04-05 21:33:17 +00:00
Engel Nyst
d7f651a06c
Fix cached tokens (#7679)
Co-authored-by: OpenHands Bot <openhands@all-hands.dev>
2025-04-03 01:11:18 +00:00
Xingyao Wang
648c8ffb21
(llm): Support OpenHands LM (#7598)
Co-authored-by: mamoodi <mamoodiha@gmail.com>
2025-03-31 17:29:31 +00:00
tofarr
1230b229b5
Replace use of requests with httpx (#7354)
2025-03-26 13:37:10 +00:00
மனோஜ்குமார் பழனிச்சாமி
2518901e6e
feat: Support seed parameter (#7441)
2025-03-24 19:18:00 +01:00
Xingyao Wang
6f9ced1c23
[Observability] add metadata to track llm request for sessions (#7381)
Co-authored-by: Robert Brennan <accounts@rbren.io>
2025-03-23 04:20:38 +08:00
Engel Nyst
cc45f5d9c3
Add RecallActions and observations for retrieval of prompt extensions (#6909)
Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: Calvin Smith <email@cjsmith.io>
2025-03-15 21:48:37 +01:00
Robert Brennan
366fd7ab8a
Improve agent loop tracking and make concurrent limit configurable (#6945)
Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: Tim O'Farrell <tofarr@gmail.com>
2025-03-07 09:19:50 -07:00