Merge branch 'main' into hieptl/app-35
commit 306b115b98
16
.vscode/settings.json
vendored
@ -3,4 +3,20 @@
  "files.eol": "\n",
  "files.trimTrailingWhitespace": true,
  "files.insertFinalNewline": true,

  "python.defaultInterpreterPath": "./.venv/bin/python",
  "python.terminal.activateEnvironment": true,
  "python.analysis.autoImportCompletions": true,
  "python.analysis.autoSearchPaths": true,
  "python.analysis.extraPaths": [
    "./.venv/lib/python3.12/site-packages"
  ],
  "python.analysis.packageIndexDepths": [
    {
      "name": "openhands",
      "depth": 10,
      "includeAllSymbols": true
    }
  ],
  "python.analysis.stubPath": "./.venv/lib/python3.12/site-packages",
}
274
enterprise/enterprise_local/README.md
Normal file
@ -0,0 +1,274 @@
# Instructions for developing SaaS locally

You have a few options here, which are expanded on below:

- A simple local development setup, with live reloading for both OSS and this repo
- A more complex setup that includes Redis
- An even more complex setup that includes GitHub events

## Prerequisites

Before starting, make sure you have the following tools installed:

### Required for all options:

- [gcloud CLI](https://cloud.google.com/sdk/docs/install) - For authentication and secrets management
- [sops](https://github.com/mozilla/sops) - For secrets decryption
  - macOS: `brew install sops`
  - Linux: `sudo apt-get install sops` or download from GitHub releases
  - Windows: Install via Chocolatey `choco install sops` or download from GitHub releases

### Additional requirements for enabling GitHub webhook events

- make
- Python development tools (build-essential, python3-dev)
- [ngrok](https://ngrok.com/download) - For creating tunnels to localhost
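
If you want to confirm the required tools are available before going further, a quick check (assuming they are on your `PATH`):

```bash
gcloud --version
sops --version
```
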
## Option 1: Simple local development

This option allows you to modify both the OSS code and the code in this repo,
and see the changes in real time.

This option works best for most scenarios. The only thing it's missing is
the GitHub events webhook, which is not necessary for most development.

### 1. OpenHands location

The open source OpenHands repo should be cloned as a sibling directory,
in `../OpenHands`. This is hard-coded in the pyproject.toml (edit if necessary).

If you're doing this for the first time, you may need to run:

```
poetry update openhands-ai
```

### 2. Set up env

First, run this to retrieve the GitHub App secrets:

```
gcloud auth application-default login
gcloud config set project global-432717
local/decrypt_env.sh
```

Now run this to generate a `.env` file, which will be used to run SaaS locally:

```
python -m pip install PyYAML
export LITE_LLM_API_KEY=<your LLM API key>
python enterprise_local/convert_to_env.py
```

You'll also need to set up the runtime image, so that the dev server doesn't try to rebuild it:

```
export SANDBOX_RUNTIME_CONTAINER_IMAGE=ghcr.io/all-hands-ai/runtime:main-nikolaik
docker pull $SANDBOX_RUNTIME_CONTAINER_IMAGE
```

By default, the application logs in JSON; you can override this:

```
export LOG_PLAIN_TEXT=1
```

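
For reference, `enterprise_local/convert_to_env.py` (included in this change) writes the decrypted GitHub App and Keycloak secrets plus a handful of fixed settings, so the generated `.env` should look roughly like this (values elided):

```
GITHUB_APP_CLIENT_ID=<from github-app secret>
GITHUB_APP_CLIENT_SECRET=<from github-app secret>
GITHUB_APP_WEBHOOK_SECRET=<from github-app secret>
GITHUB_APP_PRIVATE_KEY=<from github-app secret>
KEYCLOAK_CLIENT_ID=<from keycloak-realm secret>
KEYCLOAK_CLIENT_SECRET=<from keycloak-realm secret>
KEYCLOAK_ADMIN_PASSWORD=<from keycloak-admin secret>
KEYCLOAK_SERVER_URL=https://auth.staging.all-hands.dev/
OPENHANDS_CONFIG_CLS=server.config.SaaSServerConfig
LITE_LLM_API_KEY=<your LLM API key>
LOCAL_DEPLOYMENT=true
DB_HOST=localhost
```
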
### 3. Start the OpenHands frontend

Start the frontend like you normally would in the open source OpenHands repo.

### 4. Start the SaaS backend

```
make build

make start-backend
```

You should have a server running on `localhost:3000`, similar to the open source backend.
OAuth should work properly.

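
As a quick sanity check that the backend is actually listening (any HTTP status code back means the server is up):

```bash
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000
```
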
## Option 2: With Redis

Follow all the steps above, then set up Redis:

```bash
docker run -p 6379:6379 --name openhands-redis -d redis
export REDIS_HOST=host.docker.internal # you may want this to be localhost
export REDIS_PORT=6379
```

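
To confirm the Redis container is reachable before starting the backend:

```bash
docker exec openhands-redis redis-cli ping   # should print PONG
```
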
## Option 3: Work with GitHub events

### 1. Set up the env file

(see above)

### 2. Build OSS OpenHands

Develop on [OpenHands](https://github.com/All-Hands-AI/OpenHands) locally. When ready, run the following inside the OpenHands repo (not the Deploy repo):

```
docker build -f containers/app/Dockerfile -t openhands .
```

### 3. Build SaaS OpenHands

Build the SaaS image locally inside the Deploy repo. Note that `openhands` is the name of the image built in Step 2:

```
docker build -t openhands-saas ./app/ --build-arg BASE="openhands"
```

### 4. Create a tunnel

Run this in a separate terminal:

```
ngrok http 3000
```

The output will include a line like:

```
Forwarding https://bc71-2603-7000-5000-1575-e4a6-697b-589e-5801.ngrok-free.app
```

Remember this URL, as it will be used in Steps 5 and 6.

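
If you prefer to grab the forwarding URL programmatically, ngrok serves a local inspection API (on `127.0.0.1:4040` by default); the `public_url` field of each tunnel is the URL you need:

```bash
curl -s http://127.0.0.1:4040/api/tunnels
```
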
### 5. Set up the staging GitHub App callback/webhook URLs

Using the URL found in Step 4, add another callback URL (`https://bc71-2603-7000-5000-1575-e4a6-697b-589e-5801.ngrok-free.app/oauth/github/callback`).

### 6. Run

This is the last step! Run SaaS OpenHands locally using:

```
docker run --env-file ./app/.env -p 3000:3000 openhands-saas
```

Note that `--env-file` is what injects the `.env` file created in Step 1.

Visit the tunnel domain found in Step 4 to run the app (`https://bc71-2603-7000-5000-1575-e4a6-697b-589e-5801.ngrok-free.app`).

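
To confirm the container is running and watch its startup logs from another terminal:

```bash
docker ps --filter ancestor=openhands-saas
docker logs -f $(docker ps -q --filter ancestor=openhands-saas)
```
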
### Local Debugging with VSCode

Local development requires running a version of OpenHands that is as similar as possible to the version running in the SaaS environment. Before running these steps, it is assumed you have a local development version of the OSS OpenHands project running.

#### Redis

A local Redis instance is required for clustered communication between server nodes. The standard Docker image will suffice.
`docker run -it -p 6379:6379 --name my-redis -d redis`

#### Postgres

A local Postgres instance is required. I used the official Docker image:
`docker run -p 5432:5432 --name my-postgres -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=openhands -d postgres`
Run the Alembic migrations:
`poetry run alembic upgrade head`

#### VSCode launch.json

The VSCode launch.json below sets up two servers to test clustering, running independently on localhost:3030 and localhost:3031. Running only the server on 3030 is usually sufficient unless tests of the clustered functionality are required. Secrets may be harvested directly from staging by connecting...
`kubectl exec --stdin --tty <POD_NAME> -n <NAMESPACE> -- /bin/bash`
And then invoking `printenv`. NOTE: _DO NOT DO THIS WITH PROD!!!_ (Hopefully by the time you read this, nobody will have access.)

```
|
||||
{
|
||||
"configurations": [
|
||||
{
|
||||
"name": "Python Debugger: Python File",
|
||||
"type": "debugpy",
|
||||
"request": "launch",
|
||||
"program": "${file}"
|
||||
},
|
||||
{
|
||||
"name": "OpenHands Deploy",
|
||||
"type": "debugpy",
|
||||
"request": "launch",
|
||||
"module": "uvicorn",
|
||||
"args": [
|
||||
"saas_server:app",
|
||||
"--reload",
|
||||
"--host",
|
||||
"0.0.0.0",
|
||||
"--port",
|
||||
"3030"
|
||||
],
|
||||
"env": {
|
||||
"DEBUG": "1",
|
||||
"FILE_STORE": "local",
|
||||
"REDIS_HOST": "localhost:6379",
|
||||
"OPENHANDS": "<YOUR LOCAL OSS OPENHANDS DIR>",
|
||||
"FRONTEND_DIRECTORY": "<YOUR LOCAL OSS OPENHANDS DIR>/frontend/build",
|
||||
"SANDBOX_RUNTIME_CONTAINER_IMAGE": "ghcr.io/all-hands-ai/runtime:main-nikolaik",
|
||||
"FILE_STORE_PATH": "<YOUR HOME DIRECTORY>>/.openhands-state",
|
||||
"OPENHANDS_CONFIG_CLS": "server.config.SaaSServerConfig",
|
||||
"GITHUB_APP_ID": "1062351",
|
||||
"GITHUB_APP_PRIVATE_KEY": "<GITHUB PRIVATE KEY>",
|
||||
"GITHUB_APP_CLIENT_ID": "Iv23lis7eUWDQHIq8US0",
|
||||
"GITHUB_APP_CLIENT_SECRET": "<GITHUB CLIENT SECRET>",
|
||||
"POSTHOG_CLIENT_KEY": "<POSTHOG CLIENT KEY>",
|
||||
"LITE_LLM_API_URL": "https://llm-proxy.staging.all-hands.dev",
|
||||
"LITE_LLM_TEAM_ID": "62ea39c4-8886-44f3-b7ce-07ed4fe42d2c",
|
||||
"LITE_LLM_API_KEY": "<LITE LLM API KEY>"
|
||||
},
|
||||
"justMyCode": false,
|
||||
"cwd": "${workspaceFolder}/app"
|
||||
},
|
||||
{
|
||||
"name": "OpenHands Deploy 2",
|
||||
"type": "debugpy",
|
||||
"request": "launch",
|
||||
"module": "uvicorn",
|
||||
"args": [
|
||||
"saas_server:app",
|
||||
"--reload",
|
||||
"--host",
|
||||
"0.0.0.0",
|
||||
"--port",
|
||||
"3031"
|
||||
],
|
||||
"env": {
|
||||
"DEBUG": "1",
|
||||
"FILE_STORE": "local",
|
||||
"REDIS_HOST": "localhost:6379",
|
||||
"OPENHANDS": "<YOUR LOCAL OSS OPENHANDS DIR>",
|
||||
"FRONTEND_DIRECTORY": "<YOUR LOCAL OSS OPENHANDS DIR>/frontend/build",
|
||||
"SANDBOX_RUNTIME_CONTAINER_IMAGE": "ghcr.io/all-hands-ai/runtime:main-nikolaik",
|
||||
"FILE_STORE_PATH": "<YOUR HOME DIRECTORY>>/.openhands-state",
|
||||
"OPENHANDS_CONFIG_CLS": "server.config.SaaSServerConfig",
|
||||
"GITHUB_APP_ID": "1062351",
|
||||
"GITHUB_APP_PRIVATE_KEY": "<GITHUB PRIVATE KEY>",
|
||||
"GITHUB_APP_CLIENT_ID": "Iv23lis7eUWDQHIq8US0",
|
||||
"GITHUB_APP_CLIENT_SECRET": "<GITHUB CLIENT SECRET>",
|
||||
"POSTHOG_CLIENT_KEY": "<POSTHOG CLIENT KEY>",
|
||||
"LITE_LLM_API_URL": "https://llm-proxy.staging.all-hands.dev",
|
||||
"LITE_LLM_TEAM_ID": "62ea39c4-8886-44f3-b7ce-07ed4fe42d2c",
|
||||
"LITE_LLM_API_KEY": "<LITE LLM API KEY>"
|
||||
},
|
||||
"justMyCode": false,
|
||||
"cwd": "${workspaceFolder}/app"
|
||||
},
|
||||
{
|
||||
"name": "Unit Tests",
|
||||
"type": "debugpy",
|
||||
"request": "launch",
|
||||
"module": "pytest",
|
||||
"args": [
|
||||
"./tests/unit",
|
||||
//"./tests/unit/test_clustered_conversation_manager.py",
|
||||
"--durations=0"
|
||||
],
|
||||
"env": {
|
||||
"DEBUG": "1"
|
||||
},
|
||||
"justMyCode": false,
|
||||
"cwd": "${workspaceFolder}/app"
|
||||
},
|
||||
// set working directory...
|
||||
]
|
||||
}
|
||||
```
|
||||
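
If you'd rather launch the same server from a terminal instead of the VSCode debugger, a rough shell equivalent of the first launch configuration above is sketched here; it assumes you run it from the repository root and that the remaining environment variables from the config (the OpenHands paths, GitHub App and LiteLLM secrets, and so on) are already exported:

```bash
cd app
DEBUG=1 FILE_STORE=local REDIS_HOST=localhost:6379 \
  python -m uvicorn saas_server:app --reload --host 0.0.0.0 --port 3030
```
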
127
enterprise/enterprise_local/convert_to_env.py
Normal file
@ -0,0 +1,127 @@
|
||||
import base64
|
||||
import os
|
||||
import sys
|
||||
|
||||
import yaml
|
||||
|
||||
|
||||
def convert_yaml_to_env(yaml_file, target_parameters, output_env_file, prefix):
|
||||
"""Converts a YAML file into .env file format for specified target parameters under 'stringData' and 'data'.
|
||||
|
||||
:param yaml_file: Path to the YAML file.
|
||||
:param target_parameters: List of keys to extract from the YAML file.
|
||||
:param output_env_file: Path to the output .env file.
|
||||
:param prefix: Prefix for environment variables.
|
||||
"""
|
||||
try:
|
||||
# Load the YAML file
|
||||
with open(yaml_file, 'r') as file:
|
||||
yaml_data = yaml.safe_load(file)
|
||||
|
||||
# Extract sections
|
||||
string_data = yaml_data.get('stringData', None)
|
||||
data = yaml_data.get('data', None)
|
||||
|
||||
if string_data:
|
||||
env_source = string_data
|
||||
process_base64 = False
|
||||
elif data:
|
||||
env_source = data
|
||||
process_base64 = True
|
||||
else:
|
||||
print(
|
||||
"Error: Neither 'stringData' nor 'data' section found in the YAML file."
|
||||
)
|
||||
return
|
||||
|
||||
env_lines = []
|
||||
|
||||
for param in target_parameters:
|
||||
if param in env_source:
|
||||
value = env_source[param]
|
||||
if process_base64:
|
||||
try:
|
||||
decoded_value = base64.b64decode(value).decode('utf-8')
|
||||
formatted_value = (
|
||||
decoded_value.replace('\n', '\\n')
|
||||
if '\n' in decoded_value
|
||||
else decoded_value
|
||||
)
|
||||
except Exception as decode_error:
|
||||
print(f"Error decoding base64 for '{param}': {decode_error}")
|
||||
continue
|
||||
else:
|
||||
formatted_value = (
|
||||
value.replace('\n', '\\n')
|
||||
if isinstance(value, str) and '\n' in value
|
||||
else value
|
||||
)
|
||||
|
||||
new_key = prefix + param.upper().replace('-', '_')
|
||||
env_lines.append(f'{new_key}={formatted_value}')
|
||||
else:
|
||||
print(
|
||||
f"Warning: Parameter '{param}' not found in the selected section."
|
||||
)
|
||||
|
||||
# Write to the .env file
|
||||
with open(output_env_file, 'a') as env_file:
|
||||
env_file.write('\n'.join(env_lines) + '\n')
|
||||
|
||||
except Exception as e:
|
||||
print(f'Error: {e}')
|
||||
|
||||
|
||||
lite_llm_api_key = os.getenv('LITE_LLM_API_KEY')
|
||||
if not lite_llm_api_key:
|
||||
print('Set the LITE_LLM_API_KEY environment variable to your API key')
|
||||
sys.exit(1)
|
||||
|
||||
yaml_file = 'github_decrypted.yaml'
|
||||
target_parameters = ['client-id', 'client-secret', 'webhook-secret', 'private-key']
|
||||
output_env_file = './enterprise/.env'
|
||||
|
||||
if os.path.exists(output_env_file):
|
||||
os.remove(output_env_file)
|
||||
convert_yaml_to_env(yaml_file, target_parameters, output_env_file, 'GITHUB_APP_')
|
||||
os.remove(yaml_file)
|
||||
|
||||
yaml_file = 'keycloak_realm_decrypted.yaml'
|
||||
target_parameters = ['client-id', 'client-secret', 'provider-name', 'realm-name']
|
||||
convert_yaml_to_env(yaml_file, target_parameters, output_env_file, 'KEYCLOAK_')
|
||||
os.remove(yaml_file)
|
||||
|
||||
yaml_file = 'keycloak_admin_decrypted.yaml'
|
||||
target_parameters = ['admin-password']
|
||||
convert_yaml_to_env(yaml_file, target_parameters, output_env_file, 'KEYCLOAK_')
|
||||
os.remove(yaml_file)
|
||||
|
||||
lines = []
|
||||
lines.append('KEYCLOAK_SERVER_URL=https://auth.staging.all-hands.dev/')
|
||||
lines.append('KEYCLOAK_SERVER_URL_EXT=https://auth.staging.all-hands.dev/')
|
||||
lines.append('OPENHANDS_CONFIG_CLS=server.config.SaaSServerConfig')
|
||||
lines.append(
|
||||
'OPENHANDS_GITHUB_SERVICE_CLS=integrations.github.github_service.SaaSGitHubService'
|
||||
)
|
||||
lines.append(
|
||||
'OPENHANDS_GITLAB_SERVICE_CLS=integrations.gitlab.gitlab_service.SaaSGitLabService'
|
||||
)
|
||||
lines.append(
|
||||
'OPENHANDS_BITBUCKET_SERVICE_CLS=integrations.bitbucket.bitbucket_service.SaaSBitBucketService'
|
||||
)
|
||||
lines.append(
|
||||
'OPENHANDS_CONVERSATION_VALIDATOR_CLS=storage.saas_conversation_validator.SaasConversationValidator'
|
||||
)
|
||||
lines.append('POSTHOG_CLIENT_KEY=test')
|
||||
lines.append('ENABLE_PROACTIVE_CONVERSATION_STARTERS=true')
|
||||
lines.append('MAX_CONCURRENT_CONVERSATIONS=10')
|
||||
lines.append('LITE_LLM_API_URL=https://llm-proxy.eval.all-hands.dev')
|
||||
lines.append('LITELLM_DEFAULT_MODEL=litellm_proxy/claude-sonnet-4-20250514')
|
||||
lines.append(f'LITE_LLM_API_KEY={lite_llm_api_key}')
|
||||
lines.append('LOCAL_DEPLOYMENT=true')
|
||||
lines.append('DB_HOST=localhost')
|
||||
|
||||
with open(output_env_file, 'a') as env_file:
|
||||
env_file.write('\n'.join(lines))
|
||||
|
||||
print(f'.env file created at: {output_env_file}')
|
||||
27
enterprise/enterprise_local/decrypt_env.sh
Normal file
@ -0,0 +1,27 @@
#!/bin/bash
set -euo pipefail

# Check if DEPLOY_DIR argument was provided
if [ $# -lt 1 ]; then
    echo "Usage: $0 <DEPLOY_DIR>"
    echo "Example: $0 /path/to/deploy"
    exit 1
fi

# Read the deploy dir argument and normalize the path (remove trailing slash)
DEPLOY_DIR="$1"
DEPLOY_DIR="${DEPLOY_DIR%/}"

# Function to decrypt and rename
decrypt_and_move() {
    local secret_path="$1"
    local output_name="$2"

    ${DEPLOY_DIR}/scripts/decrypt.sh "${DEPLOY_DIR}/${secret_path}"
    mv decrypted.yaml "${output_name}"
    echo "Moved decrypted.yaml to ${output_name}"
}

# Decrypt each secret file
decrypt_and_move "openhands/envs/feature/secrets/github-app.yaml" "github_decrypted.yaml"
decrypt_and_move "openhands/envs/staging/secrets/keycloak-realm.yaml" "keycloak_realm_decrypted.yaml"
decrypt_and_move "openhands/envs/staging/secrets/keycloak-admin.yaml" "keycloak_admin_decrypted.yaml"
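
A hypothetical invocation of this script, run from the `enterprise/` directory (the Deploy repo path is an assumption; adjust it to your checkout):

```bash
./enterprise_local/decrypt_env.sh /path/to/deploy
ls github_decrypted.yaml keycloak_realm_decrypted.yaml keycloak_admin_decrypted.yaml
```
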
@ -1,18 +1,47 @@
|
||||
from uuid import UUID
|
||||
|
||||
from experiments.constants import (
|
||||
ENABLE_EXPERIMENT_MANAGER,
|
||||
EXPERIMENT_SYSTEM_PROMPT_EXPERIMENT,
|
||||
)
|
||||
from experiments.experiment_versions import (
|
||||
handle_condenser_max_step_experiment,
|
||||
handle_system_prompt_experiment,
|
||||
)
|
||||
from experiments.experiment_versions._004_condenser_max_step_experiment import (
|
||||
handle_condenser_max_step_experiment__v1,
|
||||
)
|
||||
|
||||
from openhands.core.config.openhands_config import OpenHandsConfig
|
||||
from openhands.core.logger import openhands_logger as logger
|
||||
from openhands.experiments.experiment_manager import ExperimentManager
|
||||
from openhands.sdk import Agent
|
||||
from openhands.server.session.conversation_init_data import ConversationInitData
|
||||
|
||||
|
||||
class SaaSExperimentManager(ExperimentManager):
|
||||
@staticmethod
|
||||
def run_agent_variant_tests__v1(
|
||||
user_id: str | None, conversation_id: UUID, agent: Agent
|
||||
) -> Agent:
|
||||
if not ENABLE_EXPERIMENT_MANAGER:
|
||||
logger.info(
|
||||
'experiment_manager:run_conversation_variant_test:skipped',
|
||||
extra={'reason': 'experiment_manager_disabled'},
|
||||
)
|
||||
return agent
|
||||
|
||||
agent = handle_condenser_max_step_experiment__v1(
|
||||
user_id, conversation_id, agent
|
||||
)
|
||||
|
||||
if EXPERIMENT_SYSTEM_PROMPT_EXPERIMENT:
|
||||
agent = agent.model_copy(
|
||||
update={'system_prompt_filename': 'system_prompt_long_horizon.j2'}
|
||||
)
|
||||
|
||||
return agent
|
||||
|
||||
@staticmethod
|
||||
def run_conversation_variant_test(
|
||||
user_id, conversation_id, conversation_settings
|
||||
|
||||
@ -5,12 +5,18 @@ This module contains the handler for the condenser max step experiment that test
|
||||
different max_size values for the condenser configuration.
|
||||
"""
|
||||
|
||||
from uuid import UUID
|
||||
|
||||
import posthog
|
||||
from experiments.constants import EXPERIMENT_CONDENSER_MAX_STEP
|
||||
from server.constants import IS_FEATURE_ENV
|
||||
from storage.experiment_assignment_store import ExperimentAssignmentStore
|
||||
|
||||
from openhands.core.logger import openhands_logger as logger
|
||||
from openhands.sdk import Agent
|
||||
from openhands.sdk.context.condenser import (
|
||||
LLMSummarizingCondenser,
|
||||
)
|
||||
from openhands.server.session.conversation_init_data import ConversationInitData
|
||||
|
||||
|
||||
@ -190,3 +196,37 @@ def handle_condenser_max_step_experiment(
|
||||
return conversation_settings
|
||||
|
||||
return conversation_settings
|
||||
|
||||
|
||||
def handle_condenser_max_step_experiment__v1(
|
||||
user_id: str | None,
|
||||
conversation_id: UUID,
|
||||
agent: Agent,
|
||||
) -> Agent:
|
||||
enabled_variant = _get_condenser_max_step_variant(user_id, str(conversation_id))
|
||||
|
||||
if enabled_variant is None:
|
||||
return agent
|
||||
|
||||
if enabled_variant == 'control':
|
||||
condenser_max_size = 120
|
||||
elif enabled_variant == 'treatment':
|
||||
condenser_max_size = 80
|
||||
else:
|
||||
logger.error(
|
||||
'condenser_max_step_experiment:unknown_variant',
|
||||
extra={
|
||||
'user_id': user_id,
|
||||
'convo_id': conversation_id,
|
||||
'variant': enabled_variant,
|
||||
'reason': 'unknown variant; returning original conversation settings',
|
||||
},
|
||||
)
|
||||
return agent
|
||||
|
||||
condenser_llm = agent.llm.model_copy(update={'usage_id': 'condenser'})
|
||||
condenser = LLMSummarizingCondenser(
|
||||
llm=condenser_llm, max_size=condenser_max_size, keep_first=4
|
||||
)
|
||||
|
||||
return agent.model_copy(update={'condenser': condenser})
|
||||
|
||||
@ -14,6 +14,7 @@ from openhands.core.logger import openhands_logger as logger
|
||||
from openhands.core.schema.agent import AgentState
|
||||
from openhands.events.action import MessageAction
|
||||
from openhands.events.serialization.event import event_to_dict
|
||||
from openhands.integrations.provider import ProviderHandler
|
||||
from openhands.server.services.conversation_service import (
|
||||
create_new_conversation,
|
||||
setup_init_conversation_settings,
|
||||
@ -188,19 +189,27 @@ class SlackNewConversationView(SlackViewInterface):
|
||||
user_secrets = await self.saas_user_auth.get_user_secrets()
|
||||
user_instructions, conversation_instructions = self._get_instructions(jinja)
|
||||
|
||||
# Determine git provider from repository
|
||||
git_provider = None
|
||||
if self.selected_repo and provider_tokens:
|
||||
provider_handler = ProviderHandler(provider_tokens)
|
||||
repository = await provider_handler.verify_repo_provider(self.selected_repo)
|
||||
git_provider = repository.git_provider
|
||||
|
||||
agent_loop_info = await create_new_conversation(
|
||||
user_id=self.slack_to_openhands_user.keycloak_user_id,
|
||||
git_provider_tokens=provider_tokens,
|
||||
selected_repository=self.selected_repo,
|
||||
selected_branch=None,
|
||||
initial_user_msg=user_instructions,
|
||||
conversation_instructions=conversation_instructions
|
||||
if conversation_instructions
|
||||
else None,
|
||||
conversation_instructions=(
|
||||
conversation_instructions if conversation_instructions else None
|
||||
),
|
||||
image_urls=None,
|
||||
replay_json=None,
|
||||
conversation_trigger=ConversationTrigger.SLACK,
|
||||
custom_secrets=user_secrets.custom_secrets if user_secrets else None,
|
||||
git_provider=git_provider,
|
||||
)
|
||||
|
||||
self.conversation_id = agent_loop_info.conversation_id
|
||||
|
||||
24
enterprise/poetry.lock
generated
@ -5737,7 +5737,7 @@ llama = ["llama-index (>=0.12.29,<0.13.0)", "llama-index-core (>=0.12.29,<0.13.0
|
||||
|
||||
[[package]]
|
||||
name = "openhands-agent-server"
|
||||
version = "1.0.0a2"
|
||||
version = "1.0.0a3"
|
||||
description = "OpenHands Agent Server - REST/WebSocket interface for OpenHands AI Agent"
|
||||
optional = false
|
||||
python-versions = ">=3.12"
|
||||
@ -5759,8 +5759,8 @@ wsproto = ">=1.2.0"
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/All-Hands-AI/agent-sdk.git"
|
||||
reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
resolved_reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
resolved_reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
subdirectory = "openhands-agent-server"
|
||||
|
||||
[[package]]
|
||||
@ -5805,9 +5805,9 @@ memory-profiler = "^0.61.0"
|
||||
numpy = "*"
|
||||
openai = "1.99.9"
|
||||
openhands-aci = "0.3.2"
|
||||
openhands-agent-server = {git = "https://github.com/All-Hands-AI/agent-sdk.git", rev = "512399d896521aee3131eea4bb59087fb9dfa243", subdirectory = "openhands-agent-server"}
|
||||
openhands-sdk = {git = "https://github.com/All-Hands-AI/agent-sdk.git", rev = "512399d896521aee3131eea4bb59087fb9dfa243", subdirectory = "openhands-sdk"}
|
||||
openhands-tools = {git = "https://github.com/All-Hands-AI/agent-sdk.git", rev = "512399d896521aee3131eea4bb59087fb9dfa243", subdirectory = "openhands-tools"}
|
||||
openhands-agent-server = {git = "https://github.com/All-Hands-AI/agent-sdk.git", rev = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e", subdirectory = "openhands-agent-server"}
|
||||
openhands-sdk = {git = "https://github.com/All-Hands-AI/agent-sdk.git", rev = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e", subdirectory = "openhands-sdk"}
|
||||
openhands-tools = {git = "https://github.com/All-Hands-AI/agent-sdk.git", rev = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e", subdirectory = "openhands-tools"}
|
||||
opentelemetry-api = "^1.33.1"
|
||||
opentelemetry-exporter-otlp-proto-grpc = "^1.33.1"
|
||||
pathspec = "^0.12.1"
|
||||
@ -5863,7 +5863,7 @@ url = ".."
|
||||
|
||||
[[package]]
|
||||
name = "openhands-sdk"
|
||||
version = "1.0.0a2"
|
||||
version = "1.0.0a3"
|
||||
description = "OpenHands SDK - Core functionality for building AI agents"
|
||||
optional = false
|
||||
python-versions = ">=3.12"
|
||||
@ -5887,13 +5887,13 @@ boto3 = ["boto3 (>=1.35.0)"]
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/All-Hands-AI/agent-sdk.git"
|
||||
reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
resolved_reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
resolved_reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
subdirectory = "openhands-sdk"
|
||||
|
||||
[[package]]
|
||||
name = "openhands-tools"
|
||||
version = "1.0.0a2"
|
||||
version = "1.0.0a3"
|
||||
description = "OpenHands Tools - Runtime tools for AI agents"
|
||||
optional = false
|
||||
python-versions = ">=3.12"
|
||||
@ -5914,8 +5914,8 @@ pydantic = ">=2.11.7"
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/All-Hands-AI/agent-sdk.git"
|
||||
reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
resolved_reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
resolved_reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
subdirectory = "openhands-tools"
|
||||
|
||||
[[package]]
|
||||
|
||||
@ -293,11 +293,12 @@ class TokenManager:
|
||||
refresh_token_expires_at: int,
|
||||
) -> dict[str, str | int] | None:
|
||||
current_time = int(time.time())
|
||||
# expire access_token ten minutes before actual expiration
|
||||
# expire access_token four hours before actual expiration
|
||||
# This ensures tokens are refreshed on resume to have at least 4 hours validity
|
||||
access_expired = (
|
||||
False
|
||||
if access_token_expires_at == 0
|
||||
else access_token_expires_at < current_time + 600
|
||||
else access_token_expires_at < current_time + 14400
|
||||
)
|
||||
refresh_expired = (
|
||||
False
|
||||
|
||||
1
enterprise/tests/unit/experiments/__init__.py
Normal file
@ -0,0 +1 @@
|
||||
"""Unit tests for experiments module."""
|
||||
@ -0,0 +1,137 @@
|
||||
# tests/test_condenser_max_step_experiment_v1.py
|
||||
|
||||
from unittest.mock import patch
|
||||
from uuid import uuid4
|
||||
|
||||
from experiments.experiment_manager import SaaSExperimentManager
|
||||
|
||||
# SUT imports (update the module path if needed)
|
||||
from experiments.experiment_versions._004_condenser_max_step_experiment import (
|
||||
handle_condenser_max_step_experiment__v1,
|
||||
)
|
||||
from pydantic import SecretStr
|
||||
|
||||
from openhands.sdk import LLM, Agent
|
||||
from openhands.sdk.context.condenser import LLMSummarizingCondenser
|
||||
|
||||
|
||||
def make_agent() -> Agent:
|
||||
"""Build a minimal valid Agent."""
|
||||
llm = LLM(
|
||||
usage_id='primary-llm',
|
||||
model='provider/model',
|
||||
api_key=SecretStr('sk-test'),
|
||||
)
|
||||
return Agent(llm=llm)
|
||||
|
||||
|
||||
def _patch_variant(monkeypatch, return_value):
|
||||
"""Patch the internal variant getter to return a specific value."""
|
||||
monkeypatch.setattr(
|
||||
'experiments.experiment_versions._004_condenser_max_step_experiment._get_condenser_max_step_variant',
|
||||
lambda user_id, conv_id: return_value,
|
||||
raising=True,
|
||||
)
|
||||
|
||||
|
||||
def test_control_variant_sets_condenser_with_max_size_120(monkeypatch):
|
||||
_patch_variant(monkeypatch, 'control')
|
||||
agent = make_agent()
|
||||
conv_id = uuid4()
|
||||
|
||||
result = handle_condenser_max_step_experiment__v1('user-1', conv_id, agent)
|
||||
|
||||
# Should be a new Agent instance with a condenser installed
|
||||
assert result is not agent
|
||||
assert isinstance(result.condenser, LLMSummarizingCondenser)
|
||||
|
||||
# The condenser should have its own LLM (usage_id overridden to "condenser")
|
||||
assert result.condenser.llm.usage_id == 'condenser'
|
||||
# The original agent LLM remains unchanged
|
||||
assert agent.llm.usage_id == 'primary-llm'
|
||||
|
||||
# Control: max_size = 120, keep_first = 4
|
||||
assert result.condenser.max_size == 120
|
||||
assert result.condenser.keep_first == 4
|
||||
|
||||
|
||||
def test_treatment_variant_sets_condenser_with_max_size_80(monkeypatch):
|
||||
_patch_variant(monkeypatch, 'treatment')
|
||||
agent = make_agent()
|
||||
conv_id = uuid4()
|
||||
|
||||
result = handle_condenser_max_step_experiment__v1('user-2', conv_id, agent)
|
||||
|
||||
assert result is not agent
|
||||
assert isinstance(result.condenser, LLMSummarizingCondenser)
|
||||
assert result.condenser.llm.usage_id == 'condenser'
|
||||
assert result.condenser.max_size == 80
|
||||
assert result.condenser.keep_first == 4
|
||||
|
||||
|
||||
def test_none_variant_returns_original_agent_without_changes(monkeypatch):
|
||||
_patch_variant(monkeypatch, None)
|
||||
agent = make_agent()
|
||||
conv_id = uuid4()
|
||||
|
||||
result = handle_condenser_max_step_experiment__v1('user-3', conv_id, agent)
|
||||
|
||||
# No changes—same instance and no condenser attribute added
|
||||
assert result is agent
|
||||
assert getattr(result, 'condenser', None) is None
|
||||
|
||||
|
||||
def test_unknown_variant_returns_original_agent_without_changes(monkeypatch):
|
||||
_patch_variant(monkeypatch, 'weird-variant')
|
||||
agent = make_agent()
|
||||
conv_id = uuid4()
|
||||
|
||||
result = handle_condenser_max_step_experiment__v1('user-4', conv_id, agent)
|
||||
|
||||
assert result is agent
|
||||
assert getattr(result, 'condenser', None) is None
|
||||
|
||||
|
||||
@patch('experiments.experiment_manager.handle_condenser_max_step_experiment__v1')
|
||||
@patch('experiments.experiment_manager.ENABLE_EXPERIMENT_MANAGER', False)
|
||||
def test_run_agent_variant_tests_v1_noop_when_manager_disabled(
|
||||
mock_handle_condenser,
|
||||
):
|
||||
"""If ENABLE_EXPERIMENT_MANAGER is False, the method returns the exact same agent and does not call the handler."""
|
||||
agent = make_agent()
|
||||
conv_id = uuid4()
|
||||
|
||||
result = SaaSExperimentManager.run_agent_variant_tests__v1(
|
||||
user_id='user-123',
|
||||
conversation_id=conv_id,
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
# Same object returned (no copy)
|
||||
assert result is agent
|
||||
# Handler should not have been called
|
||||
mock_handle_condenser.assert_not_called()
|
||||
|
||||
|
||||
@patch('experiments.experiment_manager.ENABLE_EXPERIMENT_MANAGER', True)
|
||||
@patch('experiments.experiment_manager.EXPERIMENT_SYSTEM_PROMPT_EXPERIMENT', True)
|
||||
def test_run_agent_variant_tests_v1_calls_handler_and_sets_system_prompt(monkeypatch):
|
||||
"""When enabled, it should call the condenser experiment handler and set the long-horizon system prompt."""
|
||||
agent = make_agent()
|
||||
conv_id = uuid4()
|
||||
|
||||
_patch_variant(monkeypatch, 'treatment')
|
||||
|
||||
result: Agent = SaaSExperimentManager.run_agent_variant_tests__v1(
|
||||
user_id='user-abc',
|
||||
conversation_id=conv_id,
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
# Should be a different instance than the original (copied after handler runs)
|
||||
assert result is not agent
|
||||
assert result.system_prompt_filename == 'system_prompt_long_horizon.j2'
|
||||
|
||||
# The condenser returned by the handler must be preserved after the system-prompt override copy
|
||||
assert isinstance(result.condenser, LLMSummarizingCondenser)
|
||||
assert result.condenser.max_size == 80
|
||||
@ -105,10 +105,17 @@ describe("Content", () => {
|
||||
});
|
||||
});
|
||||
|
||||
});
|
||||
|
||||
describe("Advanced form", () => {
|
||||
it("should conditionally show security analyzer based on confirmation mode", async () => {
|
||||
renderLlmSettingsScreen();
|
||||
await screen.findByTestId("llm-settings-screen");
|
||||
|
||||
// Enable advanced mode first
|
||||
const advancedSwitch = screen.getByTestId("advanced-settings-switch");
|
||||
await userEvent.click(advancedSwitch);
|
||||
|
||||
const confirmation = screen.getByTestId(
|
||||
"enable-confirmation-mode-switch",
|
||||
);
|
||||
@ -135,9 +142,7 @@ describe("Content", () => {
|
||||
screen.queryByTestId("security-analyzer-input"),
|
||||
).not.toBeInTheDocument();
|
||||
});
|
||||
});
|
||||
|
||||
describe("Advanced form", () => {
|
||||
it("should render the advanced form if the switch is toggled", async () => {
|
||||
renderLlmSettingsScreen();
|
||||
await screen.findByTestId("llm-settings-screen");
|
||||
@ -615,7 +620,7 @@ describe("Form submission", () => {
|
||||
expect.objectContaining({
|
||||
llm_model: "openhands/claude-sonnet-4-20250514",
|
||||
llm_base_url: "",
|
||||
confirmation_mode: true, // Confirmation mode is now a basic setting, should be preserved
|
||||
confirmation_mode: false, // Confirmation mode is now an advanced setting, should be cleared when saving basic settings
|
||||
}),
|
||||
);
|
||||
});
|
||||
@ -776,9 +781,6 @@ describe("SaaS mode", () => {
|
||||
const modelInput = screen.getByTestId("llm-model-input");
|
||||
const apiKeyInput = screen.getByTestId("llm-api-key-input");
|
||||
const advancedSwitch = screen.getByTestId("advanced-settings-switch");
|
||||
const confirmationModeSwitch = screen.getByTestId(
|
||||
"enable-confirmation-mode-switch",
|
||||
);
|
||||
const submitButton = screen.getByTestId("submit-button");
|
||||
|
||||
// Inputs should be disabled
|
||||
@ -786,9 +788,13 @@ describe("SaaS mode", () => {
|
||||
expect(modelInput).toBeDisabled();
|
||||
expect(apiKeyInput).toBeDisabled();
|
||||
expect(advancedSwitch).toBeDisabled();
|
||||
expect(confirmationModeSwitch).toBeDisabled();
|
||||
expect(submitButton).toBeDisabled();
|
||||
|
||||
// Confirmation mode switch is in advanced view, so it's not visible in basic view
|
||||
expect(
|
||||
screen.queryByTestId("enable-confirmation-mode-switch"),
|
||||
).not.toBeInTheDocument();
|
||||
|
||||
// Try to interact with inputs - they should not respond
|
||||
await userEvent.click(providerInput);
|
||||
await userEvent.type(apiKeyInput, "test-key");
|
||||
@ -935,19 +941,17 @@ describe("SaaS mode", () => {
|
||||
renderLlmSettingsScreen();
|
||||
await screen.findByTestId("llm-settings-screen");
|
||||
|
||||
// Verify that form elements are disabled for unsubscribed users
|
||||
const confirmationModeSwitch = screen.getByTestId(
|
||||
"enable-confirmation-mode-switch",
|
||||
);
|
||||
// Verify that basic form elements are disabled for unsubscribed users
|
||||
const advancedSwitch = screen.getByTestId("advanced-settings-switch");
|
||||
const submitButton = screen.getByTestId("submit-button");
|
||||
|
||||
expect(confirmationModeSwitch).not.toBeChecked();
|
||||
expect(confirmationModeSwitch).toBeDisabled();
|
||||
expect(advancedSwitch).toBeDisabled();
|
||||
expect(submitButton).toBeDisabled();
|
||||
|
||||
// Try to click the disabled confirmation mode switch - it should not change state
|
||||
await userEvent.click(confirmationModeSwitch);
|
||||
expect(confirmationModeSwitch).not.toBeChecked(); // Should remain unchecked
|
||||
// Confirmation mode switch is in advanced view, which can't be accessed when form is disabled
|
||||
expect(
|
||||
screen.queryByTestId("enable-confirmation-mode-switch"),
|
||||
).not.toBeInTheDocument();
|
||||
|
||||
// Try to submit the form - button should remain disabled
|
||||
await userEvent.click(submitButton);
|
||||
@ -1107,14 +1111,17 @@ describe("SaaS mode", () => {
|
||||
const providerInput = screen.getByTestId("llm-provider-input");
|
||||
const modelInput = screen.getByTestId("llm-model-input");
|
||||
const apiKeyInput = screen.getByTestId("llm-api-key-input");
|
||||
const confirmationModeSwitch = screen.getByTestId(
|
||||
"enable-confirmation-mode-switch",
|
||||
);
|
||||
const advancedSwitch = screen.getByTestId("advanced-settings-switch");
|
||||
|
||||
expect(providerInput).toBeDisabled();
|
||||
expect(modelInput).toBeDisabled();
|
||||
expect(apiKeyInput).toBeDisabled();
|
||||
expect(confirmationModeSwitch).toBeDisabled();
|
||||
expect(advancedSwitch).toBeDisabled();
|
||||
|
||||
// Confirmation mode switch is in advanced view, which can't be accessed when form is disabled
|
||||
expect(
|
||||
screen.queryByTestId("enable-confirmation-mode-switch"),
|
||||
).not.toBeInTheDocument();
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@ -187,7 +187,7 @@ class ConversationService {
|
||||
static async getRuntimeId(
|
||||
conversationId: string,
|
||||
): Promise<{ runtime_id: string }> {
|
||||
const url = `${this.getConversationUrl(conversationId)}/config`;
|
||||
const url = `/api/conversations/${conversationId}/config`;
|
||||
const { data } = await openHands.get<{ runtime_id: string }>(url, {
|
||||
headers: this.getConversationHeaders(),
|
||||
});
|
||||
|
||||
@ -40,7 +40,12 @@ import { useConfig } from "#/hooks/query/use-config";
|
||||
import { validateFiles } from "#/utils/file-validation";
|
||||
import { useConversationStore } from "#/state/conversation-store";
|
||||
import ConfirmationModeEnabled from "./confirmation-mode-enabled";
|
||||
import { isV0Event, isV1Event } from "#/types/v1/type-guards";
|
||||
import {
|
||||
isV0Event,
|
||||
isV1Event,
|
||||
isSystemPromptEvent,
|
||||
isConversationStateUpdateEvent,
|
||||
} from "#/types/v1/type-guards";
|
||||
import { useActiveConversation } from "#/hooks/query/use-active-conversation";
|
||||
|
||||
function getEntryPoint(
|
||||
@ -111,7 +116,14 @@ export function ChatInterface() {
|
||||
event.source === "agent" &&
|
||||
event.action !== "system",
|
||||
) ||
|
||||
storeEvents.filter(isV1Event).some((event) => event.source === "agent"),
|
||||
storeEvents
|
||||
.filter(isV1Event)
|
||||
.some(
|
||||
(event) =>
|
||||
event.source === "agent" &&
|
||||
!isSystemPromptEvent(event) &&
|
||||
!isConversationStateUpdateEvent(event),
|
||||
),
|
||||
[storeEvents],
|
||||
);
|
||||
|
||||
|
||||
@ -5,6 +5,7 @@ import { GitControlBarPullButton } from "./git-control-bar-pull-button";
|
||||
import { GitControlBarPushButton } from "./git-control-bar-push-button";
|
||||
import { GitControlBarPrButton } from "./git-control-bar-pr-button";
|
||||
import { useActiveConversation } from "#/hooks/query/use-active-conversation";
|
||||
import { useTaskPolling } from "#/hooks/query/use-task-polling";
|
||||
import { Provider } from "#/types/settings";
|
||||
import { I18nKey } from "#/i18n/declaration";
|
||||
import { GitControlBarTooltipWrapper } from "./git-control-bar-tooltip-wrapper";
|
||||
@ -17,10 +18,16 @@ export function GitControlBar({ onSuggestionsClick }: GitControlBarProps) {
|
||||
const { t } = useTranslation();
|
||||
|
||||
const { data: conversation } = useActiveConversation();
|
||||
const { repositoryInfo } = useTaskPolling();
|
||||
|
||||
const selectedRepository = conversation?.selected_repository;
|
||||
const gitProvider = conversation?.git_provider as Provider;
|
||||
const selectedBranch = conversation?.selected_branch;
|
||||
// Priority: conversation data > task data
|
||||
// This ensures we show repository info immediately from task, then transition to conversation data
|
||||
const selectedRepository =
|
||||
conversation?.selected_repository || repositoryInfo?.selectedRepository;
|
||||
const gitProvider = (conversation?.git_provider ||
|
||||
repositoryInfo?.gitProvider) as Provider;
|
||||
const selectedBranch =
|
||||
conversation?.selected_branch || repositoryInfo?.selectedBranch;
|
||||
|
||||
const hasRepository = !!selectedRepository;
|
||||
|
||||
|
||||
@ -1,10 +1,6 @@
|
||||
import { useMutation, useQueryClient } from "@tanstack/react-query";
|
||||
import toast from "react-hot-toast";
|
||||
import { useTranslation } from "react-i18next";
|
||||
import { Provider } from "#/types/settings";
|
||||
import { useErrorMessageStore } from "#/stores/error-message-store";
|
||||
import { TOAST_OPTIONS } from "#/utils/custom-toast-handlers";
|
||||
import { I18nKey } from "#/i18n/declaration";
|
||||
import {
|
||||
getConversationVersionFromQueryCache,
|
||||
resumeV1ConversationSandbox,
|
||||
@ -25,7 +21,6 @@ import {
|
||||
* startConversation({ conversationId: "some-id", providers: [...] });
|
||||
*/
|
||||
export const useUnifiedResumeConversationSandbox = () => {
|
||||
const { t } = useTranslation();
|
||||
const queryClient = useQueryClient();
|
||||
const removeErrorMessage = useErrorMessageStore(
|
||||
(state) => state.removeErrorMessage,
|
||||
@ -53,8 +48,6 @@ export const useUnifiedResumeConversationSandbox = () => {
|
||||
return startV0Conversation(variables.conversationId, variables.providers);
|
||||
},
|
||||
onMutate: async () => {
|
||||
toast.loading(t(I18nKey.TOAST$STARTING_CONVERSATION), TOAST_OPTIONS);
|
||||
|
||||
await queryClient.cancelQueries({ queryKey: ["user", "conversations"] });
|
||||
const previousConversations = queryClient.getQueryData([
|
||||
"user",
|
||||
@ -64,9 +57,6 @@ export const useUnifiedResumeConversationSandbox = () => {
|
||||
return { previousConversations };
|
||||
},
|
||||
onError: (_, __, context) => {
|
||||
toast.dismiss();
|
||||
toast.error(t(I18nKey.TOAST$FAILED_TO_START_CONVERSATION), TOAST_OPTIONS);
|
||||
|
||||
if (context?.previousConversations) {
|
||||
queryClient.setQueryData(
|
||||
["user", "conversations"],
|
||||
@ -78,9 +68,6 @@ export const useUnifiedResumeConversationSandbox = () => {
|
||||
invalidateConversationQueries(queryClient, variables.conversationId);
|
||||
},
|
||||
onSuccess: (_, variables) => {
|
||||
toast.dismiss();
|
||||
toast.success(t(I18nKey.TOAST$CONVERSATION_STARTED), TOAST_OPTIONS);
|
||||
|
||||
// Clear error messages when starting/resuming conversation
|
||||
removeErrorMessage();
|
||||
|
||||
|
||||
@ -50,7 +50,10 @@ export const useUnifiedPauseConversationSandbox = () => {
|
||||
return stopV0Conversation(variables.conversationId);
|
||||
},
|
||||
onMutate: async () => {
|
||||
toast.loading(t(I18nKey.TOAST$STOPPING_CONVERSATION), TOAST_OPTIONS);
|
||||
const toastId = toast.loading(
|
||||
t(I18nKey.TOAST$STOPPING_CONVERSATION),
|
||||
TOAST_OPTIONS,
|
||||
);
|
||||
|
||||
await queryClient.cancelQueries({ queryKey: ["user", "conversations"] });
|
||||
const previousConversations = queryClient.getQueryData([
|
||||
@ -58,10 +61,12 @@ export const useUnifiedPauseConversationSandbox = () => {
|
||||
"conversations",
|
||||
]);
|
||||
|
||||
return { previousConversations };
|
||||
return { previousConversations, toastId };
|
||||
},
|
||||
onError: (_, __, context) => {
|
||||
toast.dismiss();
|
||||
if (context?.toastId) {
|
||||
toast.dismiss(context.toastId);
|
||||
}
|
||||
toast.error(t(I18nKey.TOAST$FAILED_TO_STOP_CONVERSATION), TOAST_OPTIONS);
|
||||
|
||||
if (context?.previousConversations) {
|
||||
@ -74,8 +79,10 @@ export const useUnifiedPauseConversationSandbox = () => {
|
||||
onSettled: (_, __, variables) => {
|
||||
invalidateConversationQueries(queryClient, variables.conversationId);
|
||||
},
|
||||
onSuccess: (_, variables) => {
|
||||
toast.dismiss();
|
||||
onSuccess: (_, variables, context) => {
|
||||
if (context?.toastId) {
|
||||
toast.dismiss(context.toastId);
|
||||
}
|
||||
toast.success(t(I18nKey.TOAST$CONVERSATION_STOPPED), TOAST_OPTIONS);
|
||||
|
||||
updateConversationStatusInCache(
|
||||
|
||||
@ -68,5 +68,11 @@ export const useTaskPolling = () => {
|
||||
taskDetail: taskQuery.data?.detail,
|
||||
taskError: taskQuery.error,
|
||||
isLoadingTask: taskQuery.isLoading,
|
||||
// Repository information from task request
|
||||
repositoryInfo: {
|
||||
selectedRepository: taskQuery.data?.request?.selected_repository,
|
||||
selectedBranch: taskQuery.data?.request?.selected_branch,
|
||||
gitProvider: taskQuery.data?.request?.git_provider,
|
||||
},
|
||||
};
|
||||
};
|
||||
|
||||
@ -927,9 +927,6 @@ export enum I18nKey {
|
||||
CONVERSATION$FAILED_TO_START_FROM_TASK = "CONVERSATION$FAILED_TO_START_FROM_TASK",
|
||||
CONVERSATION$NOT_EXIST_OR_NO_PERMISSION = "CONVERSATION$NOT_EXIST_OR_NO_PERMISSION",
|
||||
CONVERSATION$FAILED_TO_START_WITH_ERROR = "CONVERSATION$FAILED_TO_START_WITH_ERROR",
|
||||
TOAST$STARTING_CONVERSATION = "TOAST$STARTING_CONVERSATION",
|
||||
TOAST$FAILED_TO_START_CONVERSATION = "TOAST$FAILED_TO_START_CONVERSATION",
|
||||
TOAST$CONVERSATION_STARTED = "TOAST$CONVERSATION_STARTED",
|
||||
TOAST$STOPPING_CONVERSATION = "TOAST$STOPPING_CONVERSATION",
|
||||
TOAST$FAILED_TO_STOP_CONVERSATION = "TOAST$FAILED_TO_STOP_CONVERSATION",
|
||||
TOAST$CONVERSATION_STOPPED = "TOAST$CONVERSATION_STOPPED",
|
||||
|
||||
@ -14831,54 +14831,6 @@
|
||||
"de": "Konversation konnte nicht gestartet werden: {{error}}",
|
||||
"uk": "Не вдалося запустити розмову: {{error}}"
|
||||
},
|
||||
"TOAST$STARTING_CONVERSATION": {
|
||||
"en": "Starting conversation...",
|
||||
"ja": "会話を開始しています...",
|
||||
"zh-CN": "正在启动对话...",
|
||||
"zh-TW": "正在啟動對話...",
|
||||
"ko-KR": "대화 시작 중...",
|
||||
"no": "Starter samtale...",
|
||||
"it": "Avvio della conversazione...",
|
||||
"pt": "Iniciando conversa...",
|
||||
"es": "Iniciando conversación...",
|
||||
"ar": "بدء المحادثة...",
|
||||
"fr": "Démarrage de la conversation...",
|
||||
"tr": "Konuşma başlatılıyor...",
|
||||
"de": "Konversation wird gestartet...",
|
||||
"uk": "Запуск розмови..."
|
||||
},
|
||||
"TOAST$FAILED_TO_START_CONVERSATION": {
|
||||
"en": "Failed to start conversation",
|
||||
"ja": "会話の開始に失敗しました",
|
||||
"zh-CN": "启动对话失败",
|
||||
"zh-TW": "啟動對話失敗",
|
||||
"ko-KR": "대화 시작 실패",
|
||||
"no": "Kunne ikke starte samtale",
|
||||
"it": "Impossibile avviare la conversazione",
|
||||
"pt": "Falha ao iniciar conversa",
|
||||
"es": "No se pudo iniciar la conversación",
|
||||
"ar": "فشل بدء المحادثة",
|
||||
"fr": "Échec du démarrage de la conversation",
|
||||
"tr": "Konuşma başlatılamadı",
|
||||
"de": "Konversation konnte nicht gestartet werden",
|
||||
"uk": "Не вдалося запустити розмову"
|
||||
},
|
||||
"TOAST$CONVERSATION_STARTED": {
|
||||
"en": "Conversation started",
|
||||
"ja": "会話が開始されました",
|
||||
"zh-CN": "对话已启动",
|
||||
"zh-TW": "對話已啟動",
|
||||
"ko-KR": "대화가 시작되었습니다",
|
||||
"no": "Samtale startet",
|
||||
"it": "Conversazione avviata",
|
||||
"pt": "Conversa iniciada",
|
||||
"es": "Conversación iniciada",
|
||||
"ar": "بدأت المحادثة",
|
||||
"fr": "Conversation démarrée",
|
||||
"tr": "Konuşma başlatıldı",
|
||||
"de": "Konversation gestartet",
|
||||
"uk": "Розмову запущено"
|
||||
},
|
||||
"TOAST$STOPPING_CONVERSATION": {
|
||||
"en": "Stopping conversation...",
|
||||
"ja": "会話を停止しています...",
|
||||
|
||||
@ -30,12 +30,14 @@ import { WebSocketProviderWrapper } from "#/contexts/websocket-provider-wrapper"
|
||||
import { useErrorMessageStore } from "#/stores/error-message-store";
|
||||
import { useUnifiedResumeConversationSandbox } from "#/hooks/mutation/use-unified-start-conversation";
|
||||
import { I18nKey } from "#/i18n/declaration";
|
||||
import { useEventStore } from "#/stores/use-event-store";
|
||||
|
||||
function AppContent() {
|
||||
useConversationConfig();
|
||||
|
||||
const { t } = useTranslation();
|
||||
const { conversationId } = useConversationId();
|
||||
const clearEvents = useEventStore((state) => state.clearEvents);
|
||||
|
||||
// Handle both task IDs (task-{uuid}) and regular conversation IDs
|
||||
const { isTask, taskStatus, taskDetail } = useTaskPolling();
|
||||
@ -72,6 +74,7 @@ function AppContent() {
|
||||
resetConversationState();
|
||||
setCurrentAgentState(AgentState.LOADING);
|
||||
removeErrorMessage();
|
||||
clearEvents();
|
||||
|
||||
// Reset tracking ONLY if we're navigating to a DIFFERENT conversation
|
||||
// Don't reset on StrictMode remounts (conversationId is the same)
|
||||
@ -85,6 +88,7 @@ function AppContent() {
|
||||
resetConversationState,
|
||||
setCurrentAgentState,
|
||||
removeErrorMessage,
|
||||
clearEvents,
|
||||
]);
|
||||
|
||||
// 2. Task Error Display Effect
|
||||
@ -150,7 +154,7 @@ function AppContent() {
|
||||
t,
|
||||
]);
|
||||
|
||||
const isV1Conversation = conversation?.conversation_version === "V1";
|
||||
const isV0Conversation = conversation?.conversation_version === "V0";
|
||||
|
||||
const content = (
|
||||
<ConversationSubscriptionsProvider>
|
||||
@ -170,15 +174,11 @@ function AppContent() {
|
||||
</ConversationSubscriptionsProvider>
|
||||
);
|
||||
|
||||
// Wait for conversation data to load before rendering WebSocket provider
|
||||
// This prevents the provider from unmounting/remounting when version changes from 0 to 1
|
||||
if (!conversation) {
|
||||
return content;
|
||||
}
|
||||
|
||||
// Render WebSocket provider immediately to avoid mount/remount cycles
|
||||
// The providers internally handle waiting for conversation data to be ready
|
||||
return (
|
||||
<WebSocketProviderWrapper
|
||||
version={isV1Conversation ? 1 : 0}
|
||||
version={isV0Conversation ? 0 : 1}
|
||||
conversationId={conversationId}
|
||||
>
|
||||
{content}
|
||||
|
||||
@ -531,34 +531,6 @@ function LlmSettingsScreen() {
|
||||
linkText={t(I18nKey.SETTINGS$CLICK_FOR_INSTRUCTIONS)}
|
||||
href="https://docs.all-hands.dev/usage/local-setup#getting-an-api-key"
|
||||
/>
|
||||
|
||||
{config?.APP_MODE !== "saas" && (
|
||||
<SettingsInput
|
||||
testId="search-api-key-input"
|
||||
name="search-api-key-input"
|
||||
label={t(I18nKey.SETTINGS$SEARCH_API_KEY)}
|
||||
type="password"
|
||||
className="w-full max-w-[680px]"
|
||||
defaultValue={settings.SEARCH_API_KEY || ""}
|
||||
onChange={handleSearchApiKeyIsDirty}
|
||||
placeholder={t(I18nKey.API$TAVILY_KEY_EXAMPLE)}
|
||||
isDisabled={shouldShowUpgradeBanner}
|
||||
startContent={
|
||||
settings.SEARCH_API_KEY_SET && (
|
||||
<KeyStatusIcon isSet={settings.SEARCH_API_KEY_SET} />
|
||||
)
|
||||
}
|
||||
/>
|
||||
)}
|
||||
|
||||
{config?.APP_MODE !== "saas" && (
|
||||
<HelpLink
|
||||
testId="search-api-key-help-anchor"
|
||||
text={t(I18nKey.SETTINGS$SEARCH_API_KEY_OPTIONAL)}
|
||||
linkText={t(I18nKey.SETTINGS$SEARCH_API_KEY_INSTRUCTIONS)}
|
||||
href="https://tavily.com/"
|
||||
/>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
|
||||
@ -686,68 +658,68 @@ function LlmSettingsScreen() {
|
||||
>
|
||||
{t(I18nKey.SETTINGS$ENABLE_MEMORY_CONDENSATION)}
|
||||
</SettingsSwitch>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Confirmation mode and security analyzer - always visible */}
|
||||
<div className="flex items-center gap-2">
|
||||
<SettingsSwitch
|
||||
testId="enable-confirmation-mode-switch"
|
||||
name="enable-confirmation-mode-switch"
|
||||
onToggle={handleConfirmationModeIsDirty}
|
||||
defaultIsToggled={settings.CONFIRMATION_MODE}
|
||||
isBeta
|
||||
isDisabled={shouldShowUpgradeBanner}
|
||||
>
|
||||
{t(I18nKey.SETTINGS$CONFIRMATION_MODE)}
|
||||
</SettingsSwitch>
|
||||
<TooltipButton
|
||||
tooltip={t(I18nKey.SETTINGS$CONFIRMATION_MODE_TOOLTIP)}
|
||||
ariaLabel={t(I18nKey.SETTINGS$CONFIRMATION_MODE)}
|
||||
className="text-[#9099AC] hover:text-white cursor-help"
|
||||
>
|
||||
<QuestionCircleIcon width={16} height={16} />
|
||||
</TooltipButton>
|
||||
</div>
|
||||
|
||||
{confirmationModeEnabled && (
|
||||
<>
|
||||
<div className="w-full max-w-[680px]">
|
||||
<SettingsDropdownInput
|
||||
testId="security-analyzer-input"
|
||||
name="security-analyzer-display"
|
||||
label={t(I18nKey.SETTINGS$SECURITY_ANALYZER)}
|
||||
items={getSecurityAnalyzerOptions()}
|
||||
placeholder={t(
|
||||
I18nKey.SETTINGS$SECURITY_ANALYZER_PLACEHOLDER,
|
||||
)}
|
||||
selectedKey={selectedSecurityAnalyzer || "none"}
|
||||
isClearable={false}
|
||||
onSelectionChange={(key) => {
|
||||
const newValue = key?.toString() || "";
|
||||
setSelectedSecurityAnalyzer(newValue);
|
||||
handleSecurityAnalyzerIsDirty(newValue);
|
||||
}}
|
||||
onInputChange={(value) => {
|
||||
// Handle when input is cleared
|
||||
if (!value) {
|
||||
setSelectedSecurityAnalyzer("");
|
||||
handleSecurityAnalyzerIsDirty("");
|
||||
}
|
||||
}}
|
||||
wrapperClassName="w-full"
|
||||
/>
|
||||
{/* Hidden input to store the actual key value for form submission */}
|
||||
<input
|
||||
type="hidden"
|
||||
name="security-analyzer-input"
|
||||
value={selectedSecurityAnalyzer || ""}
|
||||
/>
|
||||
{/* Confirmation mode and security analyzer */}
|
||||
<div className="flex items-center gap-2">
|
||||
<SettingsSwitch
|
||||
testId="enable-confirmation-mode-switch"
|
||||
name="enable-confirmation-mode-switch"
|
||||
onToggle={handleConfirmationModeIsDirty}
|
||||
defaultIsToggled={settings.CONFIRMATION_MODE}
|
||||
isBeta
|
||||
isDisabled={shouldShowUpgradeBanner}
|
||||
>
|
||||
{t(I18nKey.SETTINGS$CONFIRMATION_MODE)}
|
||||
</SettingsSwitch>
|
||||
<TooltipButton
|
||||
tooltip={t(I18nKey.SETTINGS$CONFIRMATION_MODE_TOOLTIP)}
|
||||
ariaLabel={t(I18nKey.SETTINGS$CONFIRMATION_MODE)}
|
||||
className="text-[#9099AC] hover:text-white cursor-help"
|
||||
>
|
||||
<QuestionCircleIcon width={16} height={16} />
|
||||
</TooltipButton>
|
||||
</div>
|
||||
<p className="text-xs text-tertiary-alt max-w-[680px]">
|
||||
{t(I18nKey.SETTINGS$SECURITY_ANALYZER_DESCRIPTION)}
|
||||
</p>
|
||||
</>
|
||||
|
||||
{confirmationModeEnabled && (
|
||||
<>
|
||||
<div className="w-full max-w-[680px]">
|
||||
<SettingsDropdownInput
|
||||
testId="security-analyzer-input"
|
||||
name="security-analyzer-display"
|
||||
label={t(I18nKey.SETTINGS$SECURITY_ANALYZER)}
|
||||
items={getSecurityAnalyzerOptions()}
|
||||
placeholder={t(
|
||||
I18nKey.SETTINGS$SECURITY_ANALYZER_PLACEHOLDER,
|
||||
)}
|
||||
selectedKey={selectedSecurityAnalyzer || "none"}
|
||||
isClearable={false}
|
||||
onSelectionChange={(key) => {
|
||||
const newValue = key?.toString() || "";
|
||||
setSelectedSecurityAnalyzer(newValue);
|
||||
handleSecurityAnalyzerIsDirty(newValue);
|
||||
}}
|
||||
onInputChange={(value) => {
|
||||
// Handle when input is cleared
|
||||
if (!value) {
|
||||
setSelectedSecurityAnalyzer("");
|
||||
handleSecurityAnalyzerIsDirty("");
|
||||
}
|
||||
}}
|
||||
wrapperClassName="w-full"
|
||||
/>
|
||||
{/* Hidden input to store the actual key value for form submission */}
|
||||
<input
|
||||
type="hidden"
|
||||
name="security-analyzer-input"
|
||||
value={selectedSecurityAnalyzer || ""}
|
||||
/>
|
||||
</div>
|
||||
<p className="text-xs text-tertiary-alt max-w-[680px]">
|
||||
{t(I18nKey.SETTINGS$SECURITY_ANALYZER_DESCRIPTION)}
|
||||
</p>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
|
||||
@ -13,6 +13,7 @@ import {
|
||||
ConversationStateUpdateEventAgentStatus,
|
||||
ConversationStateUpdateEventFullState,
|
||||
} from "./core/events/conversation-state-event";
|
||||
import { SystemPromptEvent } from "./core/events/system-event";
|
||||
import type { OpenHandsParsedEvent } from "../core/index";
|
||||
|
||||
/**
|
||||
@ -108,6 +109,18 @@ export const isExecuteBashObservationEvent = (
|
||||
isObservationEvent(event) &&
|
||||
event.observation.kind === "ExecuteBashObservation";
|
||||
|
||||
/**
|
||||
* Type guard function to check if an event is a system prompt event
|
||||
*/
|
||||
export const isSystemPromptEvent = (
|
||||
event: OpenHandsEvent,
|
||||
): event is SystemPromptEvent =>
|
||||
event.source === "agent" &&
|
||||
"system_prompt" in event &&
|
||||
"tools" in event &&
|
||||
typeof event.system_prompt === "object" &&
|
||||
Array.isArray(event.tools);
|
||||
|
||||
/**
|
||||
* Type guard function to check if an event is a conversation state update event
|
||||
*/
|
||||
|
||||
@ -4,7 +4,7 @@ requires = [ "hatchling>=1.25" ]
|
||||
|
||||
[project]
|
||||
name = "openhands"
|
||||
version = "1.0.1"
|
||||
version = "1.0.2"
|
||||
description = "OpenHands CLI - Terminal User Interface for OpenHands AI Agent"
|
||||
readme = "README.md"
|
||||
license = { text = "MIT" }
|
||||
|
||||
@ -5,7 +5,7 @@ from dataclasses import dataclass
from datetime import datetime, timedelta
from time import time
from typing import AsyncGenerator, Sequence
from uuid import UUID
from uuid import UUID, uuid4

import httpx
from fastapi import Request
@ -52,6 +52,7 @@ from openhands.app_server.services.injector import InjectorState
from openhands.app_server.services.jwt_service import JwtService
from openhands.app_server.user.user_context import UserContext
from openhands.app_server.utils.async_remote_workspace import AsyncRemoteWorkspace
from openhands.experiments.experiment_manager import ExperimentManagerImpl
from openhands.integrations.provider import ProviderType
from openhands.sdk import LocalWorkspace
from openhands.sdk.conversation.secret_source import LookupSecret, StaticSecret
@ -205,7 +206,7 @@ class LiveStatusAppConversationService(GitAppConversationService):
response = await self.httpx_client.post(
f'{agent_server_url}/api/conversations',
json=start_conversation_request.model_dump(
context={'expose_secrets': True}
mode='json', context={'expose_secrets': True}
),
headers={'X-Session-API-Key': sandbox.session_api_key},
timeout=self.sandbox_startup_timeout,
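The `mode='json'` argument added above changes what `model_dump` emits before the body is handed to httpx. A minimal sketch (the model and field names are illustrative, not the project's):

```
from datetime import datetime, timezone
from uuid import UUID, uuid4

from pydantic import BaseModel


class ExamplePayload(BaseModel):
    conversation_id: UUID
    created_at: datetime


payload = ExamplePayload(conversation_id=uuid4(), created_at=datetime.now(timezone.utc))

# The default dump keeps Python objects (UUID, datetime) that json.dumps rejects.
print(payload.model_dump())
# mode='json' coerces them to JSON-safe strings, which is what httpx's json= expects.
print(payload.model_dump(mode='json'))
```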
@ -278,13 +279,15 @@ class LiveStatusAppConversationService(GitAppConversationService):

# Build app_conversation from info
result = [
self._build_conversation(
app_conversation_info,
sandboxes_by_id.get(app_conversation_info.sandbox_id),
conversation_info_by_id.get(app_conversation_info.id),
(
self._build_conversation(
app_conversation_info,
sandboxes_by_id.get(app_conversation_info.sandbox_id),
conversation_info_by_id.get(app_conversation_info.id),
)
if app_conversation_info
else None
)
if app_conversation_info
else None
for app_conversation_info in app_conversation_infos
]

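The reformatted comprehension above relies on a parenthesized conditional expression so that missing infos map to `None` instead of being filtered out, keeping results aligned with the inputs. A tiny self-contained illustration:

```
infos = [{'id': 1}, None, {'id': 3}]

# One result per input, in order; None marks entries that could not be built.
results = [(info['id'] * 10 if info else None) for info in infos]

assert results == [10, None, 30]
```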
@ -368,7 +371,6 @@ class LiveStatusAppConversationService(GitAppConversationService):
self, task: AppConversationStartTask
) -> AsyncGenerator[AppConversationStartTask, None]:
"""Wait for sandbox to start and return info."""

# Get the sandbox
if not task.request.sandbox_id:
sandbox = await self.sandbox_service.start_sandbox()
@ -458,20 +460,75 @@ class LiveStatusAppConversationService(GitAppConversationService):
model=user.llm_model,
base_url=user.llm_base_url,
api_key=user.llm_api_key,
service_id='agent',
usage_id='agent',
)
agent = get_default_agent(llm=llm)

conversation_id = uuid4()
agent = ExperimentManagerImpl.run_agent_variant_tests__v1(
user.id, conversation_id, agent
)

start_conversation_request = StartConversationRequest(
conversation_id=conversation_id,
agent=agent,
workspace=workspace,
confirmation_policy=AlwaysConfirm()
if user.confirmation_mode
else NeverConfirm(),
confirmation_policy=(
AlwaysConfirm() if user.confirmation_mode else NeverConfirm()
),
initial_message=initial_message,
secrets=secrets,
)
return start_conversation_request

async def update_agent_server_conversation_title(
self,
conversation_id: str,
new_title: str,
app_conversation_info: AppConversationInfo,
) -> None:
"""Update the conversation title in the agent-server.

Args:
conversation_id: The conversation ID as a string
new_title: The new title to set
app_conversation_info: The app conversation info containing sandbox_id
"""
# Get the sandbox info to find the agent-server URL
sandbox = await self.sandbox_service.get_sandbox(
app_conversation_info.sandbox_id
)
assert sandbox is not None, (
f'Sandbox {app_conversation_info.sandbox_id} not found for conversation {conversation_id}'
)
assert sandbox.exposed_urls is not None, (
f'Sandbox {app_conversation_info.sandbox_id} has no exposed URLs for conversation {conversation_id}'
)

# Use the existing method to get the agent-server URL
agent_server_url = self._get_agent_server_url(sandbox)

# Prepare the request
url = f'{agent_server_url.rstrip("/")}/api/conversations/{conversation_id}'
headers = {}
if sandbox.session_api_key:
headers['X-Session-API-Key'] = sandbox.session_api_key

payload = {'title': new_title}

# Make the PATCH request to the agent-server
response = await self.httpx_client.patch(
url,
json=payload,
headers=headers,
timeout=30.0,
)
response.raise_for_status()

_logger.info(
f'Successfully updated agent-server conversation {conversation_id} title to "{new_title}"'
)


class LiveStatusAppConversationServiceInjector(AppConversationServiceInjector):
sandbox_startup_timeout: int = Field(

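For reference, a standalone sketch of the PATCH call the new title-update method makes; the URL shape and `X-Session-API-Key` header come from the surrounding code, while the host and values are placeholders:

```
import asyncio

import httpx


async def patch_conversation_title(
    agent_server_url: str,
    conversation_id: str,
    new_title: str,
    session_api_key: str | None = None,
) -> None:
    headers = {}
    if session_api_key:
        headers['X-Session-API-Key'] = session_api_key
    async with httpx.AsyncClient() as client:
        response = await client.patch(
            f"{agent_server_url.rstrip('/')}/api/conversations/{conversation_id}",
            json={'title': new_title},
            headers=headers,
            timeout=30.0,
        )
        response.raise_for_status()


# Example with a placeholder host:
# asyncio.run(patch_conversation_title('http://localhost:8001', '1234', 'New title'))
```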
@ -104,7 +104,7 @@ class AppServerConfig(OpenHandsModel):
)

# Services
lifespan: AppLifespanService = Field(default_factory=_get_default_lifespan)
lifespan: AppLifespanService | None = Field(default_factory=_get_default_lifespan)


def config_from_env() -> AppServerConfig:
@ -291,7 +291,7 @@ def get_db_session(
return get_global_config().db_session.context(state, request)


def get_app_lifespan_service() -> AppLifespanService:
def get_app_lifespan_service() -> AppLifespanService | None:
config = get_global_config()
return config.lifespan

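Since `lifespan` can now be `None`, anything consuming `get_app_lifespan_service()` has to tolerate the missing-service case. A minimal sketch of the pydantic behaviour behind the change (the field type is a stand-in, not the real `AppLifespanService`):

```
from pydantic import BaseModel, Field


class ExampleConfig(BaseModel):
    # Optional field that still receives a default from a factory,
    # but can be explicitly disabled by passing None.
    lifespan: str | None = Field(default_factory=lambda: 'default-lifespan')


assert ExampleConfig().lifespan == 'default-lifespan'
assert ExampleConfig(lifespan=None).lifespan is None  # callers must handle this case
```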
@ -1,9 +1,11 @@
import os
from uuid import UUID

from pydantic import BaseModel

from openhands.core.config.openhands_config import OpenHandsConfig
from openhands.core.logger import openhands_logger as logger
from openhands.sdk import Agent
from openhands.server.session.conversation_init_data import ConversationInitData
from openhands.server.shared import file_store
from openhands.storage.locations import get_experiment_config_filename
@ -29,6 +31,12 @@ def load_experiment_config(conversation_id: str) -> ExperimentConfig | None:


class ExperimentManager:
@staticmethod
def run_agent_variant_tests__v1(
user_id: str | None, conversation_id: UUID, agent: Agent
) -> Agent:
return agent

@staticmethod
def run_conversation_variant_test(
user_id: str | None,

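The base `run_agent_variant_tests__v1` above is a deliberate no-op pass-through; a deployment-specific implementation is expected to override it and return either the same agent or a modified copy. A hedged sketch of what such an override might look like (the bucketing rule and variant filename are invented for illustration; `model_copy` and `system_prompt_filename` are the agent fields exercised by the tests later in this diff):

```
from uuid import UUID

from openhands.sdk import Agent


class ExampleExperimentManager(ExperimentManager):
    @staticmethod
    def run_agent_variant_tests__v1(
        user_id: str | None, conversation_id: UUID, agent: Agent
    ) -> Agent:
        # Illustrative bucketing: stable per conversation, roughly a 50/50 split.
        if user_id and conversation_id.int % 2 == 0:
            return agent.model_copy(
                update={'system_prompt_filename': 'variant_system_prompt.j2'}
            )
        return agent
```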
@ -4,6 +4,7 @@ import copy
import json
import os
import random
import shlex
import shutil
import string
import tempfile
@ -447,8 +448,12 @@ class Runtime(FileEditRuntimeMixin):
)
openhands_workspace_branch = f'openhands-workspace-{random_str}'

repo_path = self.workspace_root / dir_name
quoted_repo_path = shlex.quote(str(repo_path))
quoted_remote_repo_url = shlex.quote(remote_repo_url)

# Clone repository command
clone_command = f'git clone {remote_repo_url} {dir_name}'
clone_command = f'git clone {quoted_remote_repo_url} {quoted_repo_path}'

# Checkout to appropriate branch
checkout_command = (
@ -461,11 +466,35 @@ class Runtime(FileEditRuntimeMixin):
await call_sync_from_async(self.run_action, clone_action)

cd_checkout_action = CmdRunAction(
command=f'cd {dir_name} && {checkout_command}'
command=f'cd {quoted_repo_path} && {checkout_command}'
)
action = cd_checkout_action
self.log('info', f'Cloning repo: {selected_repository}')
await call_sync_from_async(self.run_action, action)

if remote_repo_url:
set_remote_action = CmdRunAction(
command=(
f'cd {quoted_repo_path} && '
f'git remote set-url origin {quoted_remote_repo_url}'
)
)
obs = await call_sync_from_async(self.run_action, set_remote_action)
if isinstance(obs, CmdOutputObservation) and obs.exit_code == 0:
self.log(
'info',
f'Set git remote origin to authenticated URL for {selected_repository}',
)
else:
self.log(
'warning',
(
'Failed to set git remote origin while ensuring fresh token '
f'for {selected_repository}: '
f'{obs.content if isinstance(obs, CmdOutputObservation) else "unknown error"}'
),
)

return dir_name

def maybe_run_setup_script(self):

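The `shlex.quote` calls introduced above are what keep the composed shell commands safe when the repo path or remote URL contains shell-sensitive characters. A quick standalone demonstration (paths and token are placeholders):

```
import shlex

repo_path = '/workspace/my repo (fork)'
remote_repo_url = 'https://x-access-token:TOKEN@github.com/owner/repo.git'

clone_command = f'git clone {shlex.quote(remote_repo_url)} {shlex.quote(repo_path)}'
print(clone_command)
# git clone https://x-access-token:TOKEN@github.com/owner/repo.git '/workspace/my repo (fork)'
# (the URL contains only shell-safe characters, so shlex leaves it unquoted;
#  the path with spaces and parentheses gets single-quoted)
```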
@ -1,7 +1,13 @@
|
||||
import uuid
|
||||
|
||||
from fastapi import APIRouter, Depends, HTTPException, Request, status
|
||||
from fastapi.responses import JSONResponse
|
||||
from pydantic import BaseModel
|
||||
|
||||
from openhands.app_server.app_conversation.app_conversation_service import (
|
||||
AppConversationService,
|
||||
)
|
||||
from openhands.app_server.config import depends_app_conversation_service
|
||||
from openhands.core.logger import openhands_logger as logger
|
||||
from openhands.events.action.message import MessageAction
|
||||
from openhands.events.event_filter import EventFilter
|
||||
@ -21,24 +27,116 @@ app = APIRouter(
|
||||
prefix='/api/conversations/{conversation_id}', dependencies=get_dependencies()
|
||||
)
|
||||
|
||||
# Dependency for app conversation service
|
||||
app_conversation_service_dependency = depends_app_conversation_service()
|
||||
|
||||
@app.get('/config')
|
||||
async def get_remote_runtime_config(
|
||||
conversation: ServerConversation = Depends(get_conversation),
|
||||
) -> JSONResponse:
|
||||
"""Retrieve the runtime configuration.
|
||||
|
||||
Currently, this is the session ID and runtime ID (if available).
|
||||
async def _is_v1_conversation(
|
||||
conversation_id: str, app_conversation_service: AppConversationService
|
||||
) -> bool:
|
||||
"""Check if the given conversation_id corresponds to a V1 conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: The conversation ID to check
|
||||
app_conversation_service: Service to query V1 conversations
|
||||
|
||||
Returns:
|
||||
True if this is a V1 conversation, False otherwise
|
||||
"""
|
||||
try:
|
||||
conversation_uuid = uuid.UUID(conversation_id)
|
||||
app_conversation = await app_conversation_service.get_app_conversation(
|
||||
conversation_uuid
|
||||
)
|
||||
return app_conversation is not None
|
||||
except (ValueError, TypeError):
|
||||
# Not a valid UUID, so it's not a V1 conversation
|
||||
return False
|
||||
except Exception:
|
||||
# Service error, assume it's not a V1 conversation
|
||||
return False
|
||||
|
||||
|
||||
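The `_is_v1_conversation` helper above leans on UUID parsing as its first filter: V1 conversation ids are UUIDs, so anything that fails to parse can be rejected without a service lookup. A self-contained illustration of that first step:

```
import uuid


def could_be_v1_id(conversation_id: str) -> bool:
    """First-pass filter only; a valid UUID still needs a service lookup to confirm."""
    try:
        uuid.UUID(conversation_id)
        return True
    except (ValueError, TypeError):
        return False


assert could_be_v1_id('123e4567-e89b-12d3-a456-426614174000') is True
assert could_be_v1_id('not-a-uuid') is False
```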
async def _get_v1_conversation_config(
|
||||
conversation_id: str, app_conversation_service: AppConversationService
|
||||
) -> dict[str, str | None]:
|
||||
"""Get configuration for a V1 conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: The conversation ID
|
||||
app_conversation_service: Service to query V1 conversations
|
||||
|
||||
Returns:
|
||||
Dictionary with runtime_id (sandbox_id) and session_id (conversation_id)
|
||||
"""
|
||||
conversation_uuid = uuid.UUID(conversation_id)
|
||||
app_conversation = await app_conversation_service.get_app_conversation(
|
||||
conversation_uuid
|
||||
)
|
||||
|
||||
if app_conversation is None:
|
||||
raise ValueError(f'V1 conversation {conversation_id} not found')
|
||||
|
||||
return {
|
||||
'runtime_id': app_conversation.sandbox_id,
|
||||
'session_id': conversation_id,
|
||||
}
|
||||
|
||||
|
||||
def _get_v0_conversation_config(
|
||||
conversation: ServerConversation,
|
||||
) -> dict[str, str | None]:
|
||||
"""Get configuration for a V0 conversation.
|
||||
|
||||
Args:
|
||||
conversation: The server conversation object
|
||||
|
||||
Returns:
|
||||
Dictionary with runtime_id and session_id from the runtime
|
||||
"""
|
||||
runtime = conversation.runtime
|
||||
runtime_id = runtime.runtime_id if hasattr(runtime, 'runtime_id') else None
|
||||
session_id = runtime.sid if hasattr(runtime, 'sid') else None
|
||||
return JSONResponse(
|
||||
content={
|
||||
'runtime_id': runtime_id,
|
||||
'session_id': session_id,
|
||||
}
|
||||
)
|
||||
|
||||
return {
|
||||
'runtime_id': runtime_id,
|
||||
'session_id': session_id,
|
||||
}
|
||||
|
||||
|
||||
@app.get('/config')
|
||||
async def get_remote_runtime_config(
|
||||
conversation_id: str,
|
||||
app_conversation_service: AppConversationService = app_conversation_service_dependency,
|
||||
user_id: str | None = Depends(get_user_id),
|
||||
) -> JSONResponse:
|
||||
"""Retrieve the runtime configuration.
|
||||
|
||||
For V0 conversations: returns runtime_id and session_id from the runtime.
|
||||
For V1 conversations: returns sandbox_id as runtime_id and conversation_id as session_id.
|
||||
"""
|
||||
# Check if this is a V1 conversation first
|
||||
if await _is_v1_conversation(conversation_id, app_conversation_service):
|
||||
# This is a V1 conversation
|
||||
config = await _get_v1_conversation_config(
|
||||
conversation_id, app_conversation_service
|
||||
)
|
||||
else:
|
||||
# V0 conversation - get the conversation and use the existing logic
|
||||
conversation = await conversation_manager.attach_to_conversation(
|
||||
conversation_id, user_id
|
||||
)
|
||||
if not conversation:
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_404_NOT_FOUND,
|
||||
detail=f'Conversation {conversation_id} not found',
|
||||
)
|
||||
try:
|
||||
config = _get_v0_conversation_config(conversation)
|
||||
finally:
|
||||
await conversation_manager.detach_from_conversation(conversation)
|
||||
|
||||
return JSONResponse(content=config)
|
||||
|
||||
|
||||
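Putting the reworked `/config` route together, here is a small client-side sketch of fetching the config; the route prefix matches the router defined above, while the host is a placeholder:

```
import httpx


async def fetch_runtime_config(base_url: str, conversation_id: str) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f'{base_url}/api/conversations/{conversation_id}/config'
        )
        response.raise_for_status()
        # V1 conversations: runtime_id is the sandbox id, session_id is the conversation id.
        # V0 conversations: both values come from the attached runtime.
        return response.json()
```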
@app.get('/vscode-url')
|
||||
@ -279,12 +377,14 @@ async def get_microagents(
|
||||
content=r_agent.content,
|
||||
triggers=[],
|
||||
inputs=r_agent.metadata.inputs,
|
||||
tools=[
|
||||
server.name
|
||||
for server in r_agent.metadata.mcp_tools.stdio_servers
|
||||
]
|
||||
if r_agent.metadata.mcp_tools
|
||||
else [],
|
||||
tools=(
|
||||
[
|
||||
server.name
|
||||
for server in r_agent.metadata.mcp_tools.stdio_servers
|
||||
]
|
||||
if r_agent.metadata.mcp_tools
|
||||
else []
|
||||
),
|
||||
)
|
||||
)
|
||||
|
||||
@ -297,12 +397,14 @@ async def get_microagents(
|
||||
content=k_agent.content,
|
||||
triggers=k_agent.triggers,
|
||||
inputs=k_agent.metadata.inputs,
|
||||
tools=[
|
||||
server.name
|
||||
for server in k_agent.metadata.mcp_tools.stdio_servers
|
||||
]
|
||||
if k_agent.metadata.mcp_tools
|
||||
else [],
|
||||
tools=(
|
||||
[
|
||||
server.name
|
||||
for server in k_agent.metadata.mcp_tools.stdio_servers
|
||||
]
|
||||
if k_agent.metadata.mcp_tools
|
||||
else []
|
||||
),
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
@ -12,6 +12,9 @@ from fastapi.responses import JSONResponse
|
||||
from jinja2 import Environment, FileSystemLoader
|
||||
from pydantic import BaseModel, ConfigDict, Field
|
||||
|
||||
from openhands.app_server.app_conversation.app_conversation_info_service import (
|
||||
AppConversationInfoService,
|
||||
)
|
||||
from openhands.app_server.app_conversation.app_conversation_models import (
|
||||
AppConversation,
|
||||
)
|
||||
@ -19,6 +22,7 @@ from openhands.app_server.app_conversation.app_conversation_service import (
|
||||
AppConversationService,
|
||||
)
|
||||
from openhands.app_server.config import (
|
||||
depends_app_conversation_info_service,
|
||||
depends_app_conversation_service,
|
||||
)
|
||||
from openhands.core.config.llm_config import LLMConfig
|
||||
@ -90,6 +94,7 @@ from openhands.utils.conversation_summary import get_default_conversation_title
|
||||
|
||||
app = APIRouter(prefix='/api', dependencies=get_dependencies())
|
||||
app_conversation_service_dependency = depends_app_conversation_service()
|
||||
app_conversation_info_service_dependency = depends_app_conversation_info_service()
|
||||
|
||||
|
||||
def _filter_conversations_by_age(
|
||||
@ -759,23 +764,201 @@ class UpdateConversationRequest(BaseModel):
|
||||
model_config = ConfigDict(extra='forbid')
|
||||
|
||||
|
||||
async def _update_v1_conversation(
|
||||
conversation_uuid: uuid.UUID,
|
||||
new_title: str,
|
||||
user_id: str | None,
|
||||
app_conversation_info_service: AppConversationInfoService,
|
||||
app_conversation_service: AppConversationService,
|
||||
) -> JSONResponse | bool:
|
||||
"""Update a V1 conversation title.
|
||||
|
||||
Args:
|
||||
conversation_uuid: The conversation ID as a UUID
|
||||
new_title: The new title to set
|
||||
user_id: The authenticated user ID
|
||||
app_conversation_info_service: The app conversation info service
|
||||
app_conversation_service: The app conversation service for agent-server communication
|
||||
|
||||
Returns:
|
||||
JSONResponse on error, True on success
|
||||
"""
|
||||
conversation_id = str(conversation_uuid)
|
||||
logger.info(
|
||||
f'Updating V1 conversation {conversation_uuid}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
|
||||
# Get the V1 conversation info
|
||||
app_conversation_info = (
|
||||
await app_conversation_info_service.get_app_conversation_info(conversation_uuid)
|
||||
)
|
||||
|
||||
if not app_conversation_info:
|
||||
# Not a V1 conversation
|
||||
return None
|
||||
|
||||
# Validate that the user owns this conversation
|
||||
if user_id and app_conversation_info.created_by_user_id != user_id:
|
||||
logger.warning(
|
||||
f'User {user_id} attempted to update V1 conversation {conversation_uuid} owned by {app_conversation_info.created_by_user_id}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
return JSONResponse(
|
||||
content={
|
||||
'status': 'error',
|
||||
'message': 'Permission denied: You can only update your own conversations',
|
||||
'msg_id': 'AUTHORIZATION$PERMISSION_DENIED',
|
||||
},
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
)
|
||||
|
||||
# Update the title and timestamp
|
||||
original_title = app_conversation_info.title
|
||||
app_conversation_info.title = new_title
|
||||
app_conversation_info.updated_at = datetime.now(timezone.utc)
|
||||
|
||||
# Save the updated conversation info
|
||||
try:
|
||||
await app_conversation_info_service.save_app_conversation_info(
|
||||
app_conversation_info
|
||||
)
|
||||
except AssertionError:
|
||||
# This happens when user doesn't own the conversation
|
||||
logger.warning(
|
||||
f'User {user_id} attempted to update V1 conversation {conversation_uuid} - permission denied',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
return JSONResponse(
|
||||
content={
|
||||
'status': 'error',
|
||||
'message': 'Permission denied: You can only update your own conversations',
|
||||
'msg_id': 'AUTHORIZATION$PERMISSION_DENIED',
|
||||
},
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
)
|
||||
|
||||
# Try to update the agent-server as well
|
||||
try:
|
||||
if hasattr(app_conversation_service, 'update_agent_server_conversation_title'):
|
||||
await app_conversation_service.update_agent_server_conversation_title(
|
||||
conversation_id=conversation_id,
|
||||
new_title=new_title,
|
||||
app_conversation_info=app_conversation_info,
|
||||
)
|
||||
except Exception as e:
|
||||
# Log the error but don't fail the database update
|
||||
logger.warning(
|
||||
f'Failed to update agent-server for conversation {conversation_uuid}: {e}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
|
||||
logger.info(
|
||||
f'Successfully updated V1 conversation {conversation_uuid} title from "{original_title}" to "{app_conversation_info.title}"',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
|
||||
return True
|
||||
|
||||
|
||||
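A note on the return convention used by `_update_v1_conversation`: the annotation reads `JSONResponse | bool`, but the "not a V1 conversation" branch returns `None`, so callers effectively see three cases (arguably `JSONResponse | bool | None` describes it better). A tiny self-contained sketch of the dispatch the endpoint performs on that result:

```
def dispatch_update_result(result):
    # None  -> not a V1 conversation, fall back to the V0 update path
    # True  -> updated successfully
    # other -> an error response (e.g. a 403 JSONResponse) returned unchanged
    if result is None:
        return 'fall back to V0'
    if result is True:
        return 'updated'
    return result


assert dispatch_update_result(None) == 'fall back to V0'
assert dispatch_update_result(True) == 'updated'
```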
async def _update_v0_conversation(
|
||||
conversation_id: str,
|
||||
new_title: str,
|
||||
user_id: str | None,
|
||||
conversation_store: ConversationStore,
|
||||
) -> JSONResponse | bool:
|
||||
"""Update a V0 conversation title.
|
||||
|
||||
Args:
|
||||
conversation_id: The conversation ID
|
||||
new_title: The new title to set
|
||||
user_id: The authenticated user ID
|
||||
conversation_store: The conversation store
|
||||
|
||||
Returns:
|
||||
JSONResponse on error, True on success
|
||||
|
||||
Raises:
|
||||
FileNotFoundError: If the conversation is not found
|
||||
"""
|
||||
logger.info(
|
||||
f'Updating V0 conversation {conversation_id}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
|
||||
# Get the existing conversation metadata
|
||||
metadata = await conversation_store.get_metadata(conversation_id)
|
||||
|
||||
# Validate that the user owns this conversation
|
||||
if user_id and metadata.user_id != user_id:
|
||||
logger.warning(
|
||||
f'User {user_id} attempted to update conversation {conversation_id} owned by {metadata.user_id}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
return JSONResponse(
|
||||
content={
|
||||
'status': 'error',
|
||||
'message': 'Permission denied: You can only update your own conversations',
|
||||
'msg_id': 'AUTHORIZATION$PERMISSION_DENIED',
|
||||
},
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
)
|
||||
|
||||
# Update the conversation metadata
|
||||
original_title = metadata.title
|
||||
metadata.title = new_title
|
||||
metadata.last_updated_at = datetime.now(timezone.utc)
|
||||
|
||||
# Save the updated metadata
|
||||
await conversation_store.save_metadata(metadata)
|
||||
|
||||
# Emit a status update to connected clients about the title change
|
||||
try:
|
||||
status_update_dict = {
|
||||
'status_update': True,
|
||||
'type': 'info',
|
||||
'message': conversation_id,
|
||||
'conversation_title': metadata.title,
|
||||
}
|
||||
await conversation_manager.sio.emit(
|
||||
'oh_event',
|
||||
status_update_dict,
|
||||
to=f'room:{conversation_id}',
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f'Error emitting title update event: {e}')
|
||||
# Don't fail the update if we can't emit the event
|
||||
|
||||
logger.info(
|
||||
f'Successfully updated conversation {conversation_id} title from "{original_title}" to "{metadata.title}"',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
|
||||
return True
|
||||
|
||||
|
||||
@app.patch('/conversations/{conversation_id}')
|
||||
async def update_conversation(
|
||||
data: UpdateConversationRequest,
|
||||
conversation_id: str = Depends(validate_conversation_id),
|
||||
user_id: str | None = Depends(get_user_id),
|
||||
conversation_store: ConversationStore = Depends(get_conversation_store),
|
||||
app_conversation_info_service: AppConversationInfoService = app_conversation_info_service_dependency,
|
||||
app_conversation_service: AppConversationService = app_conversation_service_dependency,
|
||||
) -> bool:
|
||||
"""Update conversation metadata.
|
||||
|
||||
This endpoint allows updating conversation details like title.
|
||||
Only the conversation owner can update the conversation.
|
||||
Supports both V0 and V1 conversations.
|
||||
|
||||
Args:
|
||||
conversation_id: The ID of the conversation to update
|
||||
data: The conversation update data (title, etc.)
|
||||
user_id: The authenticated user ID
|
||||
conversation_store: The conversation store dependency
|
||||
app_conversation_info_service: The app conversation info service for V1 conversations
|
||||
app_conversation_service: The app conversation service for agent-server communication
|
||||
|
||||
Returns:
|
||||
bool: True if the conversation was updated successfully
|
||||
@ -788,57 +971,41 @@ async def update_conversation(
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
|
||||
new_title = data.title.strip()
|
||||
|
||||
# Try to handle as V1 conversation first
|
||||
try:
|
||||
# Get the existing conversation metadata
|
||||
metadata = await conversation_store.get_metadata(conversation_id)
|
||||
|
||||
# Validate that the user owns this conversation
|
||||
if user_id and metadata.user_id != user_id:
|
||||
logger.warning(
|
||||
f'User {user_id} attempted to update conversation {conversation_id} owned by {metadata.user_id}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
return JSONResponse(
|
||||
content={
|
||||
'status': 'error',
|
||||
'message': 'Permission denied: You can only update your own conversations',
|
||||
'msg_id': 'AUTHORIZATION$PERMISSION_DENIED',
|
||||
},
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
)
|
||||
|
||||
# Update the conversation metadata
|
||||
original_title = metadata.title
|
||||
metadata.title = data.title.strip()
|
||||
metadata.last_updated_at = datetime.now(timezone.utc)
|
||||
|
||||
# Save the updated metadata
|
||||
await conversation_store.save_metadata(metadata)
|
||||
|
||||
# Emit a status update to connected clients about the title change
|
||||
try:
|
||||
status_update_dict = {
|
||||
'status_update': True,
|
||||
'type': 'info',
|
||||
'message': conversation_id,
|
||||
'conversation_title': metadata.title,
|
||||
}
|
||||
await conversation_manager.sio.emit(
|
||||
'oh_event',
|
||||
status_update_dict,
|
||||
to=f'room:{conversation_id}',
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f'Error emitting title update event: {e}')
|
||||
# Don't fail the update if we can't emit the event
|
||||
|
||||
logger.info(
|
||||
f'Successfully updated conversation {conversation_id} title from "{original_title}" to "{metadata.title}"',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
conversation_uuid = uuid.UUID(conversation_id)
|
||||
result = await _update_v1_conversation(
|
||||
conversation_uuid=conversation_uuid,
|
||||
new_title=new_title,
|
||||
user_id=user_id,
|
||||
app_conversation_info_service=app_conversation_info_service,
|
||||
app_conversation_service=app_conversation_service,
|
||||
)
|
||||
|
||||
return True
|
||||
# If result is not None, it's a V1 conversation (either success or error)
|
||||
if result is not None:
|
||||
return result
|
||||
|
||||
except (ValueError, TypeError):
|
||||
# Not a valid UUID, fall through to V0 logic
|
||||
pass
|
||||
except Exception as e:
|
||||
logger.warning(
|
||||
f'Error checking V1 conversation {conversation_id}: {str(e)}',
|
||||
extra={'session_id': conversation_id, 'user_id': user_id},
|
||||
)
|
||||
# Fall through to V0 logic
|
||||
|
||||
# Handle as V0 conversation
|
||||
try:
|
||||
return await _update_v0_conversation(
|
||||
conversation_id=conversation_id,
|
||||
new_title=new_title,
|
||||
user_id=user_id,
|
||||
conversation_store=conversation_store,
|
||||
)
|
||||
except FileNotFoundError:
|
||||
logger.warning(
|
||||
f'Conversation {conversation_id} not found for update',
|
||||
|
||||
23
poetry.lock
generated
23
poetry.lock
generated
@ -5711,8 +5711,11 @@ files = [
|
||||
{file = "lxml-5.4.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7ce1a171ec325192c6a636b64c94418e71a1964f56d002cc28122fceff0b6121"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:795f61bcaf8770e1b37eec24edf9771b307df3af74d1d6f27d812e15a9ff3872"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:29f451a4b614a7b5b6c2e043d7b64a15bd8304d7e767055e8ab68387a8cacf4e"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:891f7f991a68d20c75cb13c5c9142b2a3f9eb161f1f12a9489c82172d1f133c0"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4aa412a82e460571fad592d0f93ce9935a20090029ba08eca05c614f99b0cc92"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:ac7ba71f9561cd7d7b55e1ea5511543c0282e2b6450f122672a2694621d63b7e"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:c5d32f5284012deaccd37da1e2cd42f081feaa76981f0eaa474351b68df813c5"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:ce31158630a6ac85bddd6b830cffd46085ff90498b397bd0a259f59d27a12188"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:31e63621e073e04697c1b2d23fcb89991790eef370ec37ce4d5d469f40924ed6"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-win32.whl", hash = "sha256:be2ba4c3c5b7900246a8f866580700ef0d538f2ca32535e991027bdaba944063"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:09846782b1ef650b321484ad429217f5154da4d6e786636c38e434fa32e94e49"},
|
||||
@ -7272,7 +7275,7 @@ llama = ["llama-index (>=0.12.29,<0.13.0)", "llama-index-core (>=0.12.29,<0.13.0
|
||||
|
||||
[[package]]
|
||||
name = "openhands-agent-server"
|
||||
version = "1.0.0a2"
|
||||
version = "1.0.0a3"
|
||||
description = "OpenHands Agent Server - REST/WebSocket interface for OpenHands AI Agent"
|
||||
optional = false
|
||||
python-versions = ">=3.12"
|
||||
@ -7294,13 +7297,13 @@ wsproto = ">=1.2.0"
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/All-Hands-AI/agent-sdk.git"
|
||||
reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
resolved_reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
resolved_reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
subdirectory = "openhands-agent-server"
|
||||
|
||||
[[package]]
|
||||
name = "openhands-sdk"
|
||||
version = "1.0.0a2"
|
||||
version = "1.0.0a3"
|
||||
description = "OpenHands SDK - Core functionality for building AI agents"
|
||||
optional = false
|
||||
python-versions = ">=3.12"
|
||||
@ -7324,13 +7327,13 @@ boto3 = ["boto3 (>=1.35.0)"]
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/All-Hands-AI/agent-sdk.git"
|
||||
reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
resolved_reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
resolved_reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
subdirectory = "openhands-sdk"
|
||||
|
||||
[[package]]
|
||||
name = "openhands-tools"
|
||||
version = "1.0.0a2"
|
||||
version = "1.0.0a3"
|
||||
description = "OpenHands Tools - Runtime tools for AI agents"
|
||||
optional = false
|
||||
python-versions = ">=3.12"
|
||||
@ -7351,8 +7354,8 @@ pydantic = ">=2.11.7"
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/All-Hands-AI/agent-sdk.git"
|
||||
reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
resolved_reference = "512399d896521aee3131eea4bb59087fb9dfa243"
|
||||
reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
resolved_reference = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e"
|
||||
subdirectory = "openhands-tools"
|
||||
|
||||
[[package]]
|
||||
@ -16521,4 +16524,4 @@ third-party-runtimes = ["daytona", "e2b-code-interpreter", "modal", "runloop-api
|
||||
[metadata]
|
||||
lock-version = "2.1"
|
||||
python-versions = "^3.12,<3.14"
|
||||
content-hash = "03639ad9782d05163b25c507e7232d797572902ee57408bf999b72c21e3adf5e"
|
||||
content-hash = "fd68ed845befeb646ee910db46f1ef9c5a1fd2e6d1ac6189c04864e0665f66ed"
|
||||
|
||||
@ -113,10 +113,9 @@ e2b-code-interpreter = { version = "^2.0.0", optional = true }
|
||||
pybase62 = "^1.0.0"
|
||||
|
||||
# V1 dependencies
|
||||
openhands-agent-server = { git = "https://github.com/All-Hands-AI/agent-sdk.git", subdirectory = "openhands-agent-server", rev = "512399d896521aee3131eea4bb59087fb9dfa243" }
|
||||
openhands-sdk = { git = "https://github.com/All-Hands-AI/agent-sdk.git", subdirectory = "openhands-sdk", rev = "512399d896521aee3131eea4bb59087fb9dfa243" }
|
||||
# This refuses to install
|
||||
openhands-tools = { git = "https://github.com/All-Hands-AI/agent-sdk.git", subdirectory = "openhands-tools", rev = "512399d896521aee3131eea4bb59087fb9dfa243" }
|
||||
openhands-agent-server = { git = "https://github.com/All-Hands-AI/agent-sdk.git", subdirectory = "openhands-agent-server", rev = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e" }
|
||||
openhands-sdk = { git = "https://github.com/All-Hands-AI/agent-sdk.git", subdirectory = "openhands-sdk", rev = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e" }
|
||||
openhands-tools = { git = "https://github.com/All-Hands-AI/agent-sdk.git", subdirectory = "openhands-tools", rev = "8d8134ca5a87cc3e90e3ff968327a7f4c961e22e" }
|
||||
python-jose = { version = ">=3.3", extras = [ "cryptography" ] }
|
||||
sqlalchemy = { extras = [ "asyncio" ], version = "^2.0.40" }
|
||||
pg8000 = "^1.31.5"
|
||||
|
||||
0
tests/unit/experiments/__init__.py
Normal file
0
tests/unit/experiments/__init__.py
Normal file
215
tests/unit/experiments/test_experiment_manager.py
Normal file
215
tests/unit/experiments/test_experiment_manager.py
Normal file
@ -0,0 +1,215 @@
|
||||
"""Unit tests for ExperimentManager class, focusing on the v1 agent method."""
|
||||
|
||||
from types import SimpleNamespace
|
||||
from unittest.mock import Mock, patch
|
||||
from uuid import UUID, uuid4
|
||||
|
||||
import pytest
|
||||
|
||||
from openhands.app_server.app_conversation.live_status_app_conversation_service import (
|
||||
LiveStatusAppConversationService,
|
||||
)
|
||||
from openhands.experiments.experiment_manager import ExperimentManager
|
||||
from openhands.sdk import Agent
|
||||
from openhands.sdk.llm import LLM
|
||||
|
||||
|
||||
class TestExperimentManager:
|
||||
"""Test cases for ExperimentManager class."""
|
||||
|
||||
def setup_method(self):
|
||||
"""Set up test fixtures."""
|
||||
self.user_id = 'test_user_123'
|
||||
self.conversation_id = uuid4()
|
||||
|
||||
# Create a mock LLM
|
||||
self.mock_llm = Mock(spec=LLM)
|
||||
self.mock_llm.model = 'gpt-4'
|
||||
self.mock_llm.usage_id = 'agent'
|
||||
|
||||
# Create a mock Agent
|
||||
self.mock_agent = Mock(spec=Agent)
|
||||
self.mock_agent.llm = self.mock_llm
|
||||
self.mock_agent.system_prompt_filename = 'default_system_prompt.j2'
|
||||
self.mock_agent.model_copy = Mock(return_value=self.mock_agent)
|
||||
|
||||
def test_run_agent_variant_tests__v1_returns_agent_unchanged(self):
|
||||
"""Test that the base ExperimentManager returns the agent unchanged."""
|
||||
result = ExperimentManager.run_agent_variant_tests__v1(
|
||||
self.user_id, self.conversation_id, self.mock_agent
|
||||
)
|
||||
|
||||
assert result is self.mock_agent
|
||||
assert result == self.mock_agent
|
||||
|
||||
def test_run_agent_variant_tests__v1_with_none_user_id(self):
|
||||
"""Test that the method works with None user_id."""
|
||||
# Act
|
||||
result = ExperimentManager.run_agent_variant_tests__v1(
|
||||
None, self.conversation_id, self.mock_agent
|
||||
)
|
||||
|
||||
# Assert
|
||||
assert result is self.mock_agent
|
||||
|
||||
def test_run_agent_variant_tests__v1_with_different_conversation_ids(self):
|
||||
"""Test that the method works with different conversation IDs."""
|
||||
conversation_id_1 = uuid4()
|
||||
conversation_id_2 = uuid4()
|
||||
|
||||
# Act
|
||||
result_1 = ExperimentManager.run_agent_variant_tests__v1(
|
||||
self.user_id, conversation_id_1, self.mock_agent
|
||||
)
|
||||
result_2 = ExperimentManager.run_agent_variant_tests__v1(
|
||||
self.user_id, conversation_id_2, self.mock_agent
|
||||
)
|
||||
|
||||
# Assert
|
||||
assert result_1 is self.mock_agent
|
||||
assert result_2 is self.mock_agent
|
||||
|
||||
|
||||
class TestExperimentManagerIntegration:
|
||||
"""Integration tests for ExperimentManager with start_app_conversation."""
|
||||
|
||||
def setup_method(self):
|
||||
"""Set up test fixtures."""
|
||||
self.user_id = 'test_user_123'
|
||||
self.conversation_id = uuid4()
|
||||
|
||||
# Create a mock LLM
|
||||
self.mock_llm = Mock(spec=LLM)
|
||||
self.mock_llm.model = 'gpt-4'
|
||||
self.mock_llm.usage_id = 'agent'
|
||||
|
||||
# Create a mock Agent
|
||||
self.mock_agent = Mock(spec=Agent)
|
||||
self.mock_agent.llm = self.mock_llm
|
||||
self.mock_agent.system_prompt_filename = 'default_system_prompt.j2'
|
||||
self.mock_agent.model_copy = Mock(return_value=self.mock_agent)
|
||||
|
||||
@patch('openhands.experiments.experiment_manager.ExperimentManagerImpl')
|
||||
def test_start_app_conversation_calls_experiment_manager_v1(
|
||||
self, mock_experiment_manager_impl
|
||||
):
|
||||
"""Test that start_app_conversation calls the experiment manager v1 method with correct parameters."""
|
||||
# Arrange
|
||||
mock_experiment_manager_impl.run_agent_variant_tests__v1.return_value = (
|
||||
self.mock_agent
|
||||
)
|
||||
|
||||
# Create a mock service instance
|
||||
mock_service = Mock(spec=LiveStatusAppConversationService)
|
||||
|
||||
# Mock the _build_start_conversation_request_for_user method to simulate the call
|
||||
with patch.object(mock_service, '_build_start_conversation_request_for_user'):
|
||||
# Simulate the part of the code that calls the experiment manager
|
||||
from uuid import uuid4
|
||||
|
||||
conversation_id = uuid4()
|
||||
|
||||
# This simulates the call that happens in the actual service
|
||||
result_agent = mock_experiment_manager_impl.run_agent_variant_tests__v1(
|
||||
self.user_id, conversation_id, self.mock_agent
|
||||
)
|
||||
|
||||
# Assert
|
||||
mock_experiment_manager_impl.run_agent_variant_tests__v1.assert_called_once_with(
|
||||
self.user_id, conversation_id, self.mock_agent
|
||||
)
|
||||
assert result_agent == self.mock_agent
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_experiment_manager_called_with_correct_parameters_in_context__noop_pass_through(
|
||||
self,
|
||||
):
|
||||
"""
|
||||
Use the real LiveStatusAppConversationService to build a StartConversationRequest,
|
||||
and verify ExperimentManagerImpl.run_agent_variant_tests__v1:
|
||||
- is called exactly once with the (user_id, generated conversation_id, agent)
|
||||
- returns the *same* agent instance (no copy/mutation)
|
||||
- does not tweak agent fields (LLM, system prompt, etc.)
|
||||
"""
|
||||
# --- Arrange: fixed UUID to assert call parameters deterministically
|
||||
fixed_conversation_id = UUID('00000000-0000-0000-0000-000000000001')
|
||||
|
||||
# Create a stable Agent (and LLM) we can identity-check later
|
||||
mock_llm = Mock(spec=LLM)
|
||||
mock_llm.model = 'gpt-4'
|
||||
mock_llm.usage_id = 'agent'
|
||||
|
||||
mock_agent = Mock(spec=Agent)
|
||||
mock_agent.llm = mock_llm
|
||||
mock_agent.system_prompt_filename = 'default_system_prompt.j2'
|
||||
|
||||
# Minimal, real-ish user context used by the service
|
||||
class DummyUserContext:
|
||||
async def get_user_info(self):
|
||||
# confirmation_mode=False -> NeverConfirm()
|
||||
return SimpleNamespace(
|
||||
id='test_user_123',
|
||||
llm_model='gpt-4',
|
||||
llm_base_url=None,
|
||||
llm_api_key=None,
|
||||
confirmation_mode=False,
|
||||
)
|
||||
|
||||
async def get_secrets(self):
|
||||
return {}
|
||||
|
||||
async def get_latest_token(self, provider):
|
||||
return None
|
||||
|
||||
async def get_user_id(self):
|
||||
return 'test_user_123'
|
||||
|
||||
user_context = DummyUserContext()
|
||||
|
||||
# The service requires a lot of deps, but for this test we won't exercise them.
|
||||
app_conversation_info_service = Mock()
|
||||
app_conversation_start_task_service = Mock()
|
||||
sandbox_service = Mock()
|
||||
sandbox_spec_service = Mock()
|
||||
jwt_service = Mock()
|
||||
httpx_client = Mock()
|
||||
|
||||
service = LiveStatusAppConversationService(
|
||||
init_git_in_empty_workspace=False,
|
||||
user_context=user_context,
|
||||
app_conversation_info_service=app_conversation_info_service,
|
||||
app_conversation_start_task_service=app_conversation_start_task_service,
|
||||
sandbox_service=sandbox_service,
|
||||
sandbox_spec_service=sandbox_spec_service,
|
||||
jwt_service=jwt_service,
|
||||
sandbox_startup_timeout=30,
|
||||
sandbox_startup_poll_frequency=1,
|
||||
httpx_client=httpx_client,
|
||||
web_url=None,
|
||||
access_token_hard_timeout=None,
|
||||
)
|
||||
|
||||
# Patch the pieces invoked by the service
|
||||
with (
|
||||
patch(
|
||||
'openhands.app_server.app_conversation.live_status_app_conversation_service.get_default_agent',
|
||||
return_value=mock_agent,
|
||||
),
|
||||
patch(
|
||||
'openhands.app_server.app_conversation.live_status_app_conversation_service.uuid4',
|
||||
return_value=fixed_conversation_id,
|
||||
),
|
||||
):
|
||||
# --- Act: build the start request
|
||||
start_req = await service._build_start_conversation_request_for_user(
|
||||
initial_message=None,
|
||||
git_provider=None, # Keep secrets path simple
|
||||
working_dir='/tmp/project', # Arbitrary path
|
||||
)
|
||||
|
||||
# The agent in the StartConversationRequest is the *same* object we provided
|
||||
assert start_req.agent is mock_agent
|
||||
|
||||
# No tweaks to agent fields by the experiment manager (noop)
|
||||
assert start_req.agent.llm is mock_llm
|
||||
assert start_req.agent.system_prompt_filename == 'default_system_prompt.j2'
|
||||
@ -8,7 +8,11 @@ from openhands.core.config import OpenHandsConfig
|
||||
from openhands.core.config.mcp_config import MCPConfig, MCPStdioServerConfig
|
||||
from openhands.events.action import Action
|
||||
from openhands.events.action.commands import CmdRunAction
|
||||
from openhands.events.observation import NullObservation, Observation
|
||||
from openhands.events.observation import (
|
||||
CmdOutputObservation,
|
||||
NullObservation,
|
||||
Observation,
|
||||
)
|
||||
from openhands.events.stream import EventStream
|
||||
from openhands.integrations.provider import ProviderHandler, ProviderToken, ProviderType
|
||||
from openhands.integrations.service_types import AuthenticationError, Repository
|
||||
@ -73,6 +77,36 @@ class MockRuntime(Runtime):
|
||||
|
||||
def run_action(self, action: Action) -> Observation:
|
||||
self.run_action_calls.append(action)
|
||||
# Return a mock git remote URL for git remote get-url commands
|
||||
# Use an OLD token to simulate token refresh scenario
|
||||
if (
|
||||
isinstance(action, CmdRunAction)
|
||||
and 'git remote get-url origin' in action.command
|
||||
):
|
||||
# Extract provider from previous clone command
|
||||
if len(self.run_action_calls) > 0:
|
||||
clone_cmd = (
|
||||
self.run_action_calls[0].command if self.run_action_calls else ''
|
||||
)
|
||||
if 'github.com' in clone_cmd:
|
||||
mock_url = 'https://old_github_token@github.com/owner/repo.git'
|
||||
elif 'gitlab.com' in clone_cmd:
|
||||
mock_url = (
|
||||
'https://oauth2:old_gitlab_token@gitlab.com/owner/repo.git'
|
||||
)
|
||||
else:
|
||||
mock_url = 'https://github.com/owner/repo.git'
|
||||
return CmdOutputObservation(
|
||||
content=mock_url, command_id=-1, command='', exit_code=0
|
||||
)
|
||||
# Return success for git remote set-url commands
|
||||
if (
|
||||
isinstance(action, CmdRunAction)
|
||||
and 'git remote set-url origin' in action.command
|
||||
):
|
||||
return CmdOutputObservation(
|
||||
content='', command_id=-1, command='', exit_code=0
|
||||
)
|
||||
return NullObservation(content='')
|
||||
|
||||
def call_tool_mcp(self, action):
|
||||
@ -330,22 +364,29 @@ async def test_clone_or_init_repo_github_with_token(temp_dir, monkeypatch):
|
||||
|
||||
result = await runtime.clone_or_init_repo(git_provider_tokens, 'owner/repo', None)
|
||||
|
||||
# Verify that git clone and checkout were called as separate commands
|
||||
assert len(runtime.run_action_calls) == 2
|
||||
# Verify that git clone, checkout, and git remote URL update were called
|
||||
assert len(runtime.run_action_calls) == 3 # clone, checkout, set-url
|
||||
assert isinstance(runtime.run_action_calls[0], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[1], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[2], CmdRunAction)
|
||||
|
||||
# Check that the first command is the git clone with the correct URL format with token
|
||||
clone_cmd = runtime.run_action_calls[0].command
|
||||
assert (
|
||||
f'git clone https://{github_token}@github.com/owner/repo.git repo' in clone_cmd
|
||||
)
|
||||
assert f'https://{github_token}@github.com/owner/repo.git' in clone_cmd
|
||||
expected_repo_path = str(runtime.workspace_root / 'repo')
|
||||
assert expected_repo_path in clone_cmd
|
||||
|
||||
# Check that the second command is the checkout
|
||||
checkout_cmd = runtime.run_action_calls[1].command
|
||||
assert 'cd repo' in checkout_cmd
|
||||
assert f'cd {expected_repo_path}' in checkout_cmd
|
||||
assert 'git checkout -b openhands-workspace-' in checkout_cmd
|
||||
|
||||
# Check that the third command sets the remote URL immediately after clone
|
||||
set_url_cmd = runtime.run_action_calls[2].command
|
||||
assert f'cd {expected_repo_path}' in set_url_cmd
|
||||
assert 'git remote set-url origin' in set_url_cmd
|
||||
assert github_token in set_url_cmd
|
||||
|
||||
assert result == 'repo'
|
||||
|
||||
|
||||
@ -363,20 +404,28 @@ async def test_clone_or_init_repo_github_no_token(temp_dir, monkeypatch):
|
||||
mock_repo_and_patch(monkeypatch, provider=ProviderType.GITHUB)
|
||||
result = await runtime.clone_or_init_repo(None, 'owner/repo', None)
|
||||
|
||||
# Verify that git clone and checkout were called as separate commands
|
||||
assert len(runtime.run_action_calls) == 2
|
||||
# Verify that git clone, checkout, and remote update were called
|
||||
assert len(runtime.run_action_calls) == 3 # clone, checkout, set-url
|
||||
assert isinstance(runtime.run_action_calls[0], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[1], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[2], CmdRunAction)
|
||||
|
||||
# Check that the first command is the git clone with the correct URL format without token
|
||||
clone_cmd = runtime.run_action_calls[0].command
|
||||
assert 'git clone https://github.com/owner/repo.git repo' in clone_cmd
|
||||
expected_repo_path = str(runtime.workspace_root / 'repo')
|
||||
assert 'git clone https://github.com/owner/repo.git' in clone_cmd
|
||||
assert expected_repo_path in clone_cmd
|
||||
|
||||
# Check that the second command is the checkout
|
||||
checkout_cmd = runtime.run_action_calls[1].command
|
||||
assert 'cd repo' in checkout_cmd
|
||||
assert f'cd {expected_repo_path}' in checkout_cmd
|
||||
assert 'git checkout -b openhands-workspace-' in checkout_cmd
|
||||
|
||||
# Check that the third command sets the remote URL after clone
|
||||
set_url_cmd = runtime.run_action_calls[2].command
|
||||
assert f'cd {expected_repo_path}' in set_url_cmd
|
||||
assert 'git remote set-url origin' in set_url_cmd
|
||||
|
||||
assert result == 'repo'
|
||||
|
||||
|
||||
@ -403,23 +452,29 @@ async def test_clone_or_init_repo_gitlab_with_token(temp_dir, monkeypatch):
|
||||
|
||||
result = await runtime.clone_or_init_repo(git_provider_tokens, 'owner/repo', None)
|
||||
|
||||
# Verify that git clone and checkout were called as separate commands
|
||||
assert len(runtime.run_action_calls) == 2
|
||||
# Verify that git clone, checkout, and git remote URL update were called
|
||||
assert len(runtime.run_action_calls) == 3 # clone, checkout, set-url
|
||||
assert isinstance(runtime.run_action_calls[0], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[1], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[2], CmdRunAction)
|
||||
|
||||
# Check that the first command is the git clone with the correct URL format with token
|
||||
clone_cmd = runtime.run_action_calls[0].command
|
||||
assert (
|
||||
f'git clone https://oauth2:{gitlab_token}@gitlab.com/owner/repo.git repo'
|
||||
in clone_cmd
|
||||
)
|
||||
expected_repo_path = str(runtime.workspace_root / 'repo')
|
||||
assert f'https://oauth2:{gitlab_token}@gitlab.com/owner/repo.git' in clone_cmd
|
||||
assert expected_repo_path in clone_cmd
|
||||
|
||||
# Check that the second command is the checkout
|
||||
checkout_cmd = runtime.run_action_calls[1].command
|
||||
assert 'cd repo' in checkout_cmd
|
||||
assert f'cd {expected_repo_path}' in checkout_cmd
|
||||
assert 'git checkout -b openhands-workspace-' in checkout_cmd
|
||||
|
||||
# Check that the third command sets the remote URL immediately after clone
|
||||
set_url_cmd = runtime.run_action_calls[2].command
|
||||
assert f'cd {expected_repo_path}' in set_url_cmd
|
||||
assert 'git remote set-url origin' in set_url_cmd
|
||||
assert gitlab_token in set_url_cmd
|
||||
|
||||
assert result == 'repo'
|
||||
|
||||
|
||||
@ -437,18 +492,24 @@ async def test_clone_or_init_repo_with_branch(temp_dir, monkeypatch):
|
||||
mock_repo_and_patch(monkeypatch, provider=ProviderType.GITHUB)
|
||||
result = await runtime.clone_or_init_repo(None, 'owner/repo', 'feature-branch')
|
||||
|
||||
# Verify that git clone and checkout were called as separate commands
|
||||
assert len(runtime.run_action_calls) == 2
|
||||
# Verify that git clone, checkout, and remote update were called
|
||||
assert len(runtime.run_action_calls) == 3 # clone, checkout, set-url
|
||||
assert isinstance(runtime.run_action_calls[0], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[1], CmdRunAction)
|
||||
assert isinstance(runtime.run_action_calls[2], CmdRunAction)
|
||||
|
||||
# Check that the first command is the git clone
|
||||
clone_cmd = runtime.run_action_calls[0].command
|
||||
expected_repo_path = str(runtime.workspace_root / 'repo')
|
||||
assert 'git clone https://github.com/owner/repo.git' in clone_cmd
|
||||
assert expected_repo_path in clone_cmd
|
||||
|
||||
# Check that the second command contains the correct branch checkout
|
||||
checkout_cmd = runtime.run_action_calls[1].command
|
||||
assert 'git clone https://github.com/owner/repo.git repo' in clone_cmd
|
||||
assert 'cd repo' in checkout_cmd
|
||||
assert f'cd {expected_repo_path}' in checkout_cmd
|
||||
assert 'git checkout feature-branch' in checkout_cmd
|
||||
set_url_cmd = runtime.run_action_calls[2].command
|
||||
assert f'cd {expected_repo_path}' in set_url_cmd
|
||||
assert 'git remote set-url origin' in set_url_cmd
|
||||
assert 'git checkout -b' not in checkout_cmd # Should not create a new branch
|
||||
assert result == 'repo'
|
||||
|
||||
@ -1,11 +1,18 @@
|
||||
import json
|
||||
from datetime import datetime, timezone
|
||||
from unittest.mock import AsyncMock, MagicMock, patch
|
||||
from uuid import uuid4
|
||||
|
||||
import pytest
|
||||
from fastapi import status
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
from openhands.app_server.app_conversation.app_conversation_info_service import (
|
||||
AppConversationInfoService,
|
||||
)
|
||||
from openhands.app_server.app_conversation.app_conversation_models import (
|
||||
AppConversationInfo,
|
||||
)
|
||||
from openhands.microagent.microagent import KnowledgeMicroagent, RepoMicroagent
|
||||
from openhands.microagent.types import MicroagentMetadata, MicroagentType
|
||||
from openhands.server.routes.conversation import (
|
||||
@ -625,6 +632,392 @@ async def test_update_conversation_no_user_id_no_metadata_user_id():
|
||||
mock_conversation_store.save_metadata.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.update_conversation
|
||||
@pytest.mark.asyncio
|
||||
async def test_update_v1_conversation_success():
|
||||
"""Test successful V1 conversation update."""
|
||||
# Mock data
|
||||
conversation_uuid = uuid4()
|
||||
conversation_id = str(conversation_uuid)
|
||||
user_id = 'test_user_456'
|
||||
original_title = 'Original V1 Title'
|
||||
new_title = 'Updated V1 Title'
|
||||
|
||||
# Create mock V1 conversation info
|
||||
mock_app_conversation_info = AppConversationInfo(
|
||||
id=conversation_uuid,
|
||||
created_by_user_id=user_id,
|
||||
sandbox_id='test_sandbox_123',
|
||||
title=original_title,
|
||||
created_at=datetime.now(timezone.utc),
|
||||
updated_at=datetime.now(timezone.utc),
|
||||
)
|
||||
|
||||
# Create mock app conversation info service
|
||||
mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)
|
||||
mock_app_conversation_info_service.get_app_conversation_info = AsyncMock(
|
||||
return_value=mock_app_conversation_info
|
||||
)
|
||||
mock_app_conversation_info_service.save_app_conversation_info = AsyncMock(
|
||||
return_value=mock_app_conversation_info
|
||||
)
|
||||
|
||||
# Create mock conversation store (won't be used for V1)
|
||||
mock_conversation_store = MagicMock(spec=ConversationStore)
|
||||
|
||||
# Create update request
|
||||
update_request = UpdateConversationRequest(title=new_title)
|
||||
|
||||
# Call the function
|
||||
result = await update_conversation(
|
||||
conversation_id=conversation_id,
|
||||
data=update_request,
|
||||
user_id=user_id,
|
||||
conversation_store=mock_conversation_store,
|
||||
app_conversation_info_service=mock_app_conversation_info_service,
|
||||
)
|
||||
|
||||
# Verify the result
|
||||
assert result is True
|
||||
|
||||
# Verify V1 service was called
|
||||
mock_app_conversation_info_service.get_app_conversation_info.assert_called_once_with(
|
||||
conversation_uuid
|
||||
)
|
||||
mock_app_conversation_info_service.save_app_conversation_info.assert_called_once()
|
||||
|
||||
# Verify the conversation store was NOT called (V1 doesn't use it)
|
||||
mock_conversation_store.get_metadata.assert_not_called()
|
||||
|
||||
# Verify the saved info has updated title
|
||||
saved_info = (
|
||||
mock_app_conversation_info_service.save_app_conversation_info.call_args[0][0]
|
||||
)
|
||||
    assert saved_info.title == new_title.strip()
    assert saved_info.updated_at is not None


@pytest.mark.update_conversation
@pytest.mark.asyncio
async def test_update_v1_conversation_not_found():
    """Test V1 conversation update when conversation doesn't exist."""
    conversation_uuid = uuid4()
    conversation_id = str(conversation_uuid)
    user_id = 'test_user_456'

    # Create mock app conversation info service that returns None
    mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)
    mock_app_conversation_info_service.get_app_conversation_info = AsyncMock(
        return_value=None
    )

    # Create mock conversation store that also raises FileNotFoundError
    mock_conversation_store = MagicMock(spec=ConversationStore)
    mock_conversation_store.get_metadata = AsyncMock(side_effect=FileNotFoundError())

    # Create update request
    update_request = UpdateConversationRequest(title='New Title')

    # Call the function
    result = await update_conversation(
        conversation_id=conversation_id,
        data=update_request,
        user_id=user_id,
        conversation_store=mock_conversation_store,
        app_conversation_info_service=mock_app_conversation_info_service,
    )

    # Verify the result is a 404 error response
    assert isinstance(result, JSONResponse)
    assert result.status_code == status.HTTP_404_NOT_FOUND

    # Parse the JSON content
    content = json.loads(result.body)
    assert content['status'] == 'error'
    assert content['message'] == 'Conversation not found'
    assert content['msg_id'] == 'CONVERSATION$NOT_FOUND'


@pytest.mark.update_conversation
@pytest.mark.asyncio
async def test_update_v1_conversation_permission_denied():
    """Test V1 conversation update when user doesn't own the conversation."""
    conversation_uuid = uuid4()
    conversation_id = str(conversation_uuid)
    user_id = 'test_user_456'
    owner_id = 'different_user_789'

    # Create mock V1 conversation info owned by different user
    mock_app_conversation_info = AppConversationInfo(
        id=conversation_uuid,
        created_by_user_id=owner_id,
        sandbox_id='test_sandbox_123',
        title='Original Title',
        created_at=datetime.now(timezone.utc),
        updated_at=datetime.now(timezone.utc),
    )

    # Create mock app conversation info service
    mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)
    mock_app_conversation_info_service.get_app_conversation_info = AsyncMock(
        return_value=mock_app_conversation_info
    )

    # Create mock conversation store (won't be used)
    mock_conversation_store = MagicMock(spec=ConversationStore)

    # Create update request
    update_request = UpdateConversationRequest(title='New Title')

    # Call the function
    result = await update_conversation(
        conversation_id=conversation_id,
        data=update_request,
        user_id=user_id,
        conversation_store=mock_conversation_store,
        app_conversation_info_service=mock_app_conversation_info_service,
    )

    # Verify the result is a 403 error response
    assert isinstance(result, JSONResponse)
    assert result.status_code == status.HTTP_403_FORBIDDEN

    # Parse the JSON content
    content = json.loads(result.body)
    assert content['status'] == 'error'
    assert (
        content['message']
        == 'Permission denied: You can only update your own conversations'
    )
    assert content['msg_id'] == 'AUTHORIZATION$PERMISSION_DENIED'

    # Verify save was NOT called
    mock_app_conversation_info_service.save_app_conversation_info.assert_not_called()


@pytest.mark.update_conversation
@pytest.mark.asyncio
async def test_update_v1_conversation_save_assertion_error():
    """Test V1 conversation update when save raises AssertionError (permission check)."""
    conversation_uuid = uuid4()
    conversation_id = str(conversation_uuid)
    user_id = 'test_user_456'

    # Create mock V1 conversation info
    mock_app_conversation_info = AppConversationInfo(
        id=conversation_uuid,
        created_by_user_id=user_id,
        sandbox_id='test_sandbox_123',
        title='Original Title',
        created_at=datetime.now(timezone.utc),
        updated_at=datetime.now(timezone.utc),
    )

    # Create mock app conversation info service
    mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)
    mock_app_conversation_info_service.get_app_conversation_info = AsyncMock(
        return_value=mock_app_conversation_info
    )
    # Simulate AssertionError on save (permission check in service)
    mock_app_conversation_info_service.save_app_conversation_info = AsyncMock(
        side_effect=AssertionError('User does not own conversation')
    )

    # Create mock conversation store (won't be used)
    mock_conversation_store = MagicMock(spec=ConversationStore)

    # Create update request
    update_request = UpdateConversationRequest(title='New Title')

    # Call the function
    result = await update_conversation(
        conversation_id=conversation_id,
        data=update_request,
        user_id=user_id,
        conversation_store=mock_conversation_store,
        app_conversation_info_service=mock_app_conversation_info_service,
    )

    # Verify the result is a 403 error response
    assert isinstance(result, JSONResponse)
    assert result.status_code == status.HTTP_403_FORBIDDEN

    # Parse the JSON content
    content = json.loads(result.body)
    assert content['status'] == 'error'
    assert (
        content['message']
        == 'Permission denied: You can only update your own conversations'
    )
    assert content['msg_id'] == 'AUTHORIZATION$PERMISSION_DENIED'


@pytest.mark.update_conversation
@pytest.mark.asyncio
async def test_update_v1_conversation_title_whitespace_trimming():
    """Test that V1 conversation title is properly trimmed of whitespace."""
    conversation_uuid = uuid4()
    conversation_id = str(conversation_uuid)
    user_id = 'test_user_456'
    title_with_whitespace = ' Trimmed V1 Title '
    expected_title = 'Trimmed V1 Title'

    # Create mock V1 conversation info
    mock_app_conversation_info = AppConversationInfo(
        id=conversation_uuid,
        created_by_user_id=user_id,
        sandbox_id='test_sandbox_123',
        title='Original Title',
        created_at=datetime.now(timezone.utc),
        updated_at=datetime.now(timezone.utc),
    )

    # Create mock app conversation info service
    mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)
    mock_app_conversation_info_service.get_app_conversation_info = AsyncMock(
        return_value=mock_app_conversation_info
    )
    mock_app_conversation_info_service.save_app_conversation_info = AsyncMock(
        return_value=mock_app_conversation_info
    )

    # Create mock conversation store (won't be used)
    mock_conversation_store = MagicMock(spec=ConversationStore)

    # Create update request with whitespace
    update_request = UpdateConversationRequest(title=title_with_whitespace)

    # Call the function
    result = await update_conversation(
        conversation_id=conversation_id,
        data=update_request,
        user_id=user_id,
        conversation_store=mock_conversation_store,
        app_conversation_info_service=mock_app_conversation_info_service,
    )

    # Verify the result
    assert result is True

    # Verify the saved info has trimmed title
    saved_info = (
        mock_app_conversation_info_service.save_app_conversation_info.call_args[0][0]
    )
    assert saved_info.title == expected_title


@pytest.mark.update_conversation
@pytest.mark.asyncio
async def test_update_v1_conversation_invalid_uuid_falls_back_to_v0():
    """Test that invalid UUID conversation_id falls back to V0 logic."""
    conversation_id = 'not_a_valid_uuid_123'
    user_id = 'test_user_456'
    new_title = 'Updated Title'

    # Create mock V0 metadata
    mock_metadata = ConversationMetadata(
        conversation_id=conversation_id,
        user_id=user_id,
        title='Original Title',
        selected_repository=None,
        last_updated_at=datetime.now(timezone.utc),
    )

    # Create mock conversation store for V0
    mock_conversation_store = MagicMock(spec=ConversationStore)
    mock_conversation_store.get_metadata = AsyncMock(return_value=mock_metadata)
    mock_conversation_store.save_metadata = AsyncMock()

    # Create mock app conversation info service (won't be called)
    mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)

    # Create update request
    update_request = UpdateConversationRequest(title=new_title)

    # Mock the conversation manager socket
    mock_sio = AsyncMock()

    with patch(
        'openhands.server.routes.manage_conversations.conversation_manager'
    ) as mock_manager:
        mock_manager.sio = mock_sio

        # Call the function
        result = await update_conversation(
            conversation_id=conversation_id,
            data=update_request,
            user_id=user_id,
            conversation_store=mock_conversation_store,
            app_conversation_info_service=mock_app_conversation_info_service,
        )

        # Verify the result is successful
        assert result is True

        # Verify V0 store was used, not V1 service
        mock_conversation_store.get_metadata.assert_called_once_with(conversation_id)
        mock_conversation_store.save_metadata.assert_called_once()
        mock_app_conversation_info_service.get_app_conversation_info.assert_not_called()


@pytest.mark.update_conversation
@pytest.mark.asyncio
async def test_update_v1_conversation_no_socket_emission():
    """Test that V1 conversation update does NOT emit socket.io events."""
    conversation_uuid = uuid4()
    conversation_id = str(conversation_uuid)
    user_id = 'test_user_456'
    new_title = 'Updated V1 Title'

    # Create mock V1 conversation info
    mock_app_conversation_info = AppConversationInfo(
        id=conversation_uuid,
        created_by_user_id=user_id,
        sandbox_id='test_sandbox_123',
        title='Original Title',
        created_at=datetime.now(timezone.utc),
        updated_at=datetime.now(timezone.utc),
    )

    # Create mock app conversation info service
    mock_app_conversation_info_service = MagicMock(spec=AppConversationInfoService)
    mock_app_conversation_info_service.get_app_conversation_info = AsyncMock(
        return_value=mock_app_conversation_info
    )
    mock_app_conversation_info_service.save_app_conversation_info = AsyncMock(
        return_value=mock_app_conversation_info
    )

    # Create mock conversation store (won't be used)
    mock_conversation_store = MagicMock(spec=ConversationStore)

    # Create update request
    update_request = UpdateConversationRequest(title=new_title)

    # Mock the conversation manager socket
    mock_sio = AsyncMock()

    with patch(
        'openhands.server.routes.manage_conversations.conversation_manager'
    ) as mock_manager:
        mock_manager.sio = mock_sio

        # Call the function
        result = await update_conversation(
            conversation_id=conversation_id,
            data=update_request,
            user_id=user_id,
            conversation_store=mock_conversation_store,
            app_conversation_info_service=mock_app_conversation_info_service,
        )

        # Verify the result is successful
        assert result is True

        # Verify socket.io was NOT called for V1 conversation
        mock_sio.emit.assert_not_called()


@pytest.mark.asyncio
async def test_add_message_success():
    """Test successful message addition to conversation."""

@@ -14,7 +14,9 @@ from openhands.integrations.provider import (
     ProviderToken,
     ProviderType,
 )
-from openhands.server.routes.secrets import app as secrets_app
+from openhands.server.routes.secrets import (
+    app as secrets_app,
+)
 from openhands.storage import get_file_store
 from openhands.storage.data_models.user_secrets import UserSecrets
 from openhands.storage.secrets.file_secrets_store import FileSecretsStore