refactor: update web demo, owl agent running (#261)

Wendong-Fan
2025-03-15 13:30:16 +08:00
committed by GitHub
27 changed files with 3505 additions and 2357 deletions

@@ -104,6 +104,7 @@ Our vision is to revolutionize how AI agents collaborate to solve real-world tas
</p>
</div>
+- **[2025.03.15]**: Restructured the web-based user interface architecture for improved system stability; optimized OWL Agent execution mechanisms for enhanced efficiency and performance; integrated Baidu search engine into SearchToolkit.
- **[2025.03.12]**: Added Bocha search in SearchToolkit, integrated Volcano Engine model platform, and enhanced Azure and OpenAI Compatible models with structured output and tool calling.
- **[2025.03.11]**: We added MCPToolkit, FileWriteToolkit, and TerminalToolkit to enhance OWL agents with MCP tool calling, file writing capabilities, and terminal command execution.
- **[2025.03.09]**: We added a web-based user interface that makes it easier to interact with the system.
@@ -119,7 +120,7 @@ https://private-user-images.githubusercontent.com/55657767/420212194-e813fc05-13
# ✨️ Core Features
-- **Real-time Information Retrieval**: Leverage Wikipedia, Google Search, and other online sources for up-to-date information.
+- **Online Search**: Support for multiple search engines (including Wikipedia, Google, DuckDuckGo, Baidu, Bocha, etc.) for real-time information retrieval and knowledge acquisition.
- **Multimodal Processing**: Support for handling internet or local videos, images, and audio data.
- **Browser Automation**: Utilize the Playwright framework for simulating browser interactions, including scrolling, clicking, input handling, downloading, navigation, and more.
- **Document Parsing**: Extract content from Word, Excel, PDF, and PowerPoint files, converting them into text or Markdown format.
@@ -224,7 +225,7 @@ OWL requires various API keys to interact with different services. The `owl/.env
2. **Configure Your API Keys**:
Open the `.env` file in your preferred text editor and insert your API keys in the corresponding fields.
-> **Note**: For the minimal example (`run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
+> **Note**: For the minimal example (`examples/run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
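The `.env` file uses plain `KEY=VALUE` lines with `#` comments. A minimal sketch of how such a file is parsed (the project itself uses `python-dotenv`; this only illustrates the format the template expects, including optional surrounding quotes):

```python
# Minimal .env parser sketch -- illustrative only; the real project
# loads these values with python-dotenv's load_dotenv().
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, value = line.split("=", 1)
        value = value.strip()
        # Strip optional surrounding quotes around the value
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
            value = value[1:-1]
        env[key.strip()] = value
    return env

print(parse_env('OPENAI_API_KEY="sk-test"\n# comment\n'))
# → {'OPENAI_API_KEY': 'sk-test'}
```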
### Option 2: Setting Environment Variables Directly
@@ -275,7 +276,7 @@ cd .. && source .venv/bin/activate && cd owl
playwright install-deps
#run example demo script
-xvfb-python run.py
+xvfb-python examples/run.py
# Option 2: Build and run using the provided scripts
cd .container
@@ -299,17 +300,17 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude
npx @wonderwhy-er/desktop-commander setup
# Run the MCP example
-python owl/run_mcp.py
+python examples/run_mcp.py
```
-This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `owl/run_mcp.py` for the full implementation.
+This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `examples/run_mcp.py` for the full implementation.
## Basic Usage
After installation and setting up your environment variables, you can start using OWL right away:
```bash
-python owl/run.py
+python examples/run.py
```
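The example scripts changed in this diff all share one pattern: build a role-playing society for a question with `construct_society`, then drive it with `run_society`, which returns an answer, the chat history, and a token count. A minimal sketch of that flow, using stand-in stubs instead of the real `camel`/`owl.utils` classes (everything except the two function names is illustrative):

```python
# Stand-in stubs mirroring the construct_society/run_society pattern
# used by the example scripts. The real RolePlaying comes from
# camel.societies and run_society from owl.utils; these are mocks.
class RolePlaying:
    def __init__(self, task_prompt: str):
        self.task_prompt = task_prompt

def construct_society(question: str) -> RolePlaying:
    # The real scripts also wire up models and toolkits here.
    return RolePlaying(task_prompt=question)

def run_society(society: RolePlaying):
    # The real run_society drives the user/assistant loop; this stub
    # only demonstrates the (answer, chat_history, token_count) shape.
    answer = f"Answer for: {society.task_prompt}"
    chat_history = [{"role": "user", "content": society.task_prompt}]
    token_count = {"prompt": 0, "completion": 0}
    return answer, chat_history, token_count

society = construct_society("What is CAMEL-AI?")
answer, chat_history, token_count = run_society(society)
print(answer)  # → Answer for: What is CAMEL-AI?
```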
## Running with Different Models
@@ -330,28 +331,28 @@ OWL supports various LLM backends, though capabilities may vary depending on the
```bash
# Run with Qwen model
-python owl/run_qwen_zh.py
+python examples/run_qwen_zh.py
# Run with Deepseek model
-python owl/run_deepseek_zh.py
+python examples/run_deepseek_zh.py
# Run with other OpenAI-compatible models
-python owl/run_openai_compatiable_model.py
+python examples/run_openai_compatiable_model.py
# Run with Azure OpenAI
-python owl/run_azure_openai.py
+python examples/run_azure_openai.py
# Run with Ollama
-python owl/run_ollama.py
+python examples/run_ollama.py
```
For a simpler version that only requires an LLM API key, you can try our minimal example:
```bash
-python owl/run_mini.py
+python examples/run_mini.py
```
-You can run OWL agent with your own task by modifying the `run.py` script:
+You can run OWL agent with your own task by modifying the `examples/run.py` script:
```python
# Define your own task
@@ -393,7 +394,7 @@ Here are some tasks you can try with OWL:
OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources:
-Try our comprehensive MCP example in `owl/run_mcp.py` to see these capabilities in action!
+Try our comprehensive MCP example in `examples/run_mcp.py` to see these capabilities in action!
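MCP clients usually learn which servers to launch from a small JSON config. The fragment below shows the common `mcpServers` convention for the desktop-commander server installed above; the exact file name and field names are assumptions based on typical MCP client configs, not taken from this diff:

```json
{
  "mcpServers": {
    "desktop-commander": {
      "command": "npx",
      "args": ["-y", "@wonderwhy-er/desktop-commander"]
    }
  }
}
```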
## Available Toolkits
@@ -464,10 +465,10 @@ OWL includes an intuitive web-based user interface that makes it easier to inter
```bash
# Start the Chinese version
-python run_app_zh.py
+python owl/webapp_zh.py
# Start the English version
-python run_app.py
+python owl/webapp.py
```
## Features
@@ -545,7 +546,7 @@ Join us ([*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.o
Join us for further discussions!
<!-- ![](./assets/community.png) -->
-![](./assets/community_8.jpg)
+![](./assets/community.jpg)
# ❓ FAQ

@@ -104,6 +104,7 @@
</p>
</div>
+- **[2025.03.15]**: 重构网页用户界面提升系统稳定性优化OWL Agent的运行机制提高执行效率与性能在SearchToolkit中整合百度搜索引擎
- **[2025.03.12]**: 在SearchToolkit中添加了Bocha搜索功能集成了火山引擎模型平台并更新了Azure和OpenAI Compatible模型的结构化输出和工具调用能力。
- **[2025.03.11]**: 我们添加了 MCPToolkit、FileWriteToolkit 和 TerminalToolkit增强了 OWL Agent 的 MCP模型上下文协议集成、文件写入能力和终端命令执行功能。MCP 作为一个通用协议层,标准化了 AI 模型与各种数据源和工具的交互方式。
- **[2025.03.09]**: 我们添加了基于网页的用户界面,使系统交互变得更加简便。
@@ -118,7 +119,7 @@ https://private-user-images.githubusercontent.com/55657767/420212194-e813fc05-13
# ✨️ 核心功能
- **在线搜索**使用维基百科、谷歌搜索等,进行实时信息检索
- **在线搜索**支持多种搜索引擎包括维基百科、Google、DuckDuckGo、百度、博查等实现实时信息检索与知识获取
- **多模态处理**:支持互联网或本地视频、图片、语音处理
- **浏览器操作**借助Playwright框架开发浏览器模拟交互支持页面滚动、点击、输入、下载、历史回退等功能
- **文件解析**word、excel、PDF、PowerPoint信息提取内容转文本/Markdown
@@ -219,7 +220,7 @@ OWL 需要各种 API 密钥来与不同的服务进行交互。`owl/.env_templat
2. **配置你的 API 密钥**
在你喜欢的文本编辑器中打开 `.env` 文件,并在相应字段中插入你的 API 密钥。
-> **注意**:对于最小示例(`run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。
+> **注意**:对于最小示例(`examples/run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。
### 选项 2直接设置环境变量
@@ -269,7 +270,7 @@ cd .. && source .venv/bin/activate && cd owl
playwright install-deps
#运行例子演示脚本
-xvfb-python run.py
+xvfb-python examples/run.py
# 选项2使用提供的脚本构建和运行
cd .container
@@ -293,23 +294,23 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude
npx @wonderwhy-er/desktop-commander setup
# 运行 MCP 示例
-python owl/run_mcp.py
+python examples/run_mcp.py
```
-这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `owl/run_mcp.py` 了解完整实现。
+这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `examples/run_mcp.py` 了解完整实现。
## 基本用法
运行以下示例:
```bash
-python owl/run.py
+python examples/run.py
```
我们还提供了一个最小化示例只需配置LLM的API密钥即可运行
```bash
-python owl/run_mini.py
+python examples/run_mini.py
```
## 使用不同的模型
@@ -330,22 +331,22 @@ OWL 支持多种 LLM 后端,但功能可能因模型的工具调用和多模
```bash
# 使用 Qwen 模型运行
-python owl/run_qwen_zh.py
+python examples/run_qwen_zh.py
# 使用 Deepseek 模型运行
-python owl/run_deepseek_zh.py
+python examples/run_deepseek_zh.py
# 使用其他 OpenAI 兼容模型运行
-python owl/run_openai_compatiable_model.py
+python examples/run_openai_compatiable_model.py
# 使用 Azure OpenAI模型运行
-python owl/run_azure_openai.py
+python examples/run_azure_openai.py
# 使用 Ollama 运行
-python owl/run_ollama.py
+python examples/run_ollama.py
```
-你可以通过修改 `run.py` 脚本来运行自己的任务:
+你可以通过修改 `examples/run.py` 脚本来运行自己的任务:
```python
# Define your own task
@@ -383,7 +384,7 @@ OWL 将自动调用与文档相关的工具来处理文件并提取答案。
OWL 的 MCP 集成为 AI 模型与各种工具和数据源的交互提供了标准化的方式。
-查看我们的综合示例 `owl/run_mcp.py` 来体验这些功能!
+查看我们的综合示例 `examples/run_mcp.py` 来体验这些功能!
## 可用工具包
@@ -452,10 +453,10 @@ OWL 现在包含一个基于网页的用户界面,使与系统交互变得更
```bash
# 中文版本
-python run_app_zh.py
+python owl/webapp_zh.py
# 英文版本
-python run_app.py
+python owl/webapp.py
```
网页界面提供以下功能:
@@ -479,7 +480,7 @@ git checkout gaia58.18
2. 运行评估脚本:
```bash
-python run_gaia_roleplaying.py
+python examples/run_gaia_roleplaying.py
```
# ⏱️ 未来计划
@@ -531,7 +532,7 @@ python run_gaia_roleplaying.py
加入我们,参与更多讨论!
<!-- ![](./assets/community.png) -->
-![](./assets/community_8.jpg)
+![](./assets/community.jpg)
<!-- ![](./assets/meetup.jpg) -->
# ❓ 常见问题

@@ -25,22 +25,23 @@ from camel.toolkits import (
)
from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
+from camel.societies import RolePlaying
-from utils import OwlRolePlaying, run_society, DocumentProcessingToolkit
+from owl.utils import run_society, DocumentProcessingToolkit
load_dotenv()
set_log_level(level="DEBUG")
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the question.
+RolePlaying: A configured society of agents ready to address the question.
"""
# Create models for different components
@@ -112,7 +113,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -25,7 +25,7 @@ from camel.toolkits import (
)
from camel.types import ModelPlatformType
-from utils import OwlRolePlaying, run_society
+from owl.utils import OwlRolePlaying, run_society
from camel.logger import set_log_level

@@ -31,7 +31,9 @@ from camel.toolkits import (
from camel.types import ModelPlatformType, ModelType
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
from camel.logger import set_log_level
@@ -40,14 +42,14 @@ set_log_level(level="DEBUG")
load_dotenv()
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the question.
+RolePlaying: A configured society of agents ready to address the question.
"""
# Create models for different components
@@ -84,7 +86,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -32,7 +32,7 @@ from camel.toolkits import (
from camel.types import ModelPlatformType, ModelType
from camel.configs import ChatGPTConfig
-from utils import GAIABenchmark
+from owl.utils import GAIABenchmark
from camel.logger import set_log_level
set_log_level(level="DEBUG")

@@ -102,7 +102,7 @@ from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
from camel.toolkits import MCPToolkit
-from utils.enhanced_role_playing import OwlRolePlaying, arun_society
+from owl.utils.enhanced_role_playing import OwlRolePlaying, arun_society
load_dotenv()

@@ -22,20 +22,22 @@ from camel.toolkits import (
from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
load_dotenv()
set_log_level(level="DEBUG")
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the
+RolePlaying: A configured society of agents ready to address the
question.
"""
@@ -86,7 +88,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -25,7 +25,9 @@ from camel.toolkits import (
)
from camel.types import ModelPlatformType
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
from camel.logger import set_log_level
@@ -34,14 +36,14 @@ set_log_level(level="DEBUG")
load_dotenv()
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the question.
+RolePlaying: A configured society of agents ready to address the question.
"""
# Create models for different components
@@ -105,7 +107,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -25,8 +25,8 @@ from camel.toolkits import (
)
from camel.types import ModelPlatformType
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
from camel.logger import set_log_level
set_log_level(level="DEBUG")
@@ -34,14 +34,14 @@ set_log_level(level="DEBUG")
load_dotenv()
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the question.
+RolePlaying: A configured society of agents ready to address the question.
"""
# Create models for different components
@@ -110,7 +110,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -22,7 +22,9 @@ from camel.models import ModelFactory
from camel.toolkits import BrowserToolkit, SearchToolkit, FileWriteToolkit
from camel.types import ModelPlatformType, ModelType
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
from camel.logger import set_log_level
@@ -31,7 +33,7 @@ set_log_level(level="DEBUG")
load_dotenv()
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct the society based on the question."""
user_role_name = "user"
@@ -82,7 +84,7 @@ def construct_society(question: str) -> OwlRolePlaying:
"with_task_specify": False,
}
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name=user_role_name,
user_agent_kwargs=user_agent_kwargs,

@@ -28,8 +28,9 @@ from camel.toolkits import (
FileWriteToolkit,
)
from camel.types import ModelPlatformType, ModelType
+from camel.societies import RolePlaying
-from utils import OwlRolePlaying, run_society, DocumentProcessingToolkit
+from owl.utils import run_society, DocumentProcessingToolkit
from camel.logger import set_log_level
@@ -38,7 +39,7 @@ set_log_level(level="DEBUG")
load_dotenv()
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
"""
Construct a society of agents based on the given question.
@@ -46,7 +47,7 @@ def construct_society(question: str) -> OwlRolePlaying:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the question.
+RolePlaying: A configured society of agents ready to address the question.
"""
# Create models for different components
@@ -118,7 +119,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -23,7 +23,8 @@ from camel.toolkits import (
from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
load_dotenv()
set_log_level(level="DEBUG")
@@ -31,14 +32,14 @@ set_log_level(level="DEBUG")
base_dir = os.path.dirname(os.path.abspath(__file__))
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the
+RolePlaying: A configured society of agents ready to address the
question.
"""
@@ -90,7 +91,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -23,7 +23,8 @@ from camel.toolkits import (
from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
-from utils import OwlRolePlaying, run_society
+from owl.utils import run_society
+from camel.societies import RolePlaying
load_dotenv()
set_log_level(level="DEBUG")
@@ -33,14 +34,14 @@ set_log_level(level="DEBUG")
base_dir = os.path.dirname(os.path.abspath(__file__))
-def construct_society(question: str) -> OwlRolePlaying:
+def construct_society(question: str) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
Returns:
-OwlRolePlaying: A configured society of agents ready to address the
+RolePlaying: A configured society of agents ready to address the
question.
"""
@@ -92,7 +93,7 @@ def construct_society(question: str) -> OwlRolePlaying:
}
# Create and return the society
-society = OwlRolePlaying(
+society = RolePlaying(
**task_kwargs,
user_role_name="user",
user_agent_kwargs=user_agent_kwargs,

@@ -1,7 +1,10 @@
-# MODEL & API (See https://docs.camel-ai.org/key_modules/models.html#)
+#===========================================
+# MODEL & API
+# (See https://docs.camel-ai.org/key_modules/models.html#)
+#===========================================
-# OPENAI API
-# OPENAI_API_KEY= ""
+# OPENAI API (https://platform.openai.com/api-keys)
+OPENAI_API_KEY='Your_Key'
# OPENAI_API_BASE_URL=""
# Azure OpenAI API
@@ -12,25 +15,22 @@
# Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
-# QWEN_API_KEY=""
+QWEN_API_KEY='Your_Key'
# DeepSeek API (https://platform.deepseek.com/api_keys)
-# DEEPSEEK_API_KEY=""
+DEEPSEEK_API_KEY='Your_Key'
#===========================================
# Tools & Services API
#===========================================
-# Google Search API (https://developers.google.com/custom-search/v1/overview)
-GOOGLE_API_KEY=""
-SEARCH_ENGINE_ID=""
-# Hugging Face API (https://huggingface.co/join)
-HF_TOKEN=""
+# Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3)
+GOOGLE_API_KEY='Your_Key'
+SEARCH_ENGINE_ID='Your_ID'
# Chunkr API (https://chunkr.ai/)
-CHUNKR_API_KEY=""
+CHUNKR_API_KEY='Your_Key'
# Firecrawl API (https://www.firecrawl.dev/)
-FIRECRAWL_API_KEY=""
+FIRECRAWL_API_KEY='Your_Key'
#FIRECRAWL_API_URL="https://api.firecrawl.dev"

@@ -1,921 +0,0 @@
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
import os
import sys
import gradio as gr
import subprocess
import threading
import time
from datetime import datetime
import queue
from pathlib import Path
import json
import signal
import dotenv
# 设置日志队列
log_queue: queue.Queue[str] = queue.Queue()
# 当前运行的进程
current_process = None
process_lock = threading.Lock()
# 脚本选项
SCRIPTS = {
"Qwen Mini (中文)": "run_qwen_mini_zh.py",
"Qwen (中文)": "run_qwen_zh.py",
"Mini": "run_mini.py",
"DeepSeek (中文)": "run_deepseek_zh.py",
"Default": "run.py",
"GAIA Roleplaying": "run_gaia_roleplaying.py",
"OpenAI Compatible": "run_openai_compatiable_model.py",
"Azure OpenAI": "run_azure_openai.py",
"Ollama": "run_ollama.py",
"Terminal": "run_terminal_zh.py",
}
# 脚本描述
SCRIPT_DESCRIPTIONS = {
"Qwen Mini (中文)": "使用阿里云Qwen模型的中文版本适合中文问答和任务",
"Qwen (中文)": "使用阿里云Qwen模型支持多种工具和功能",
"Mini": "轻量级版本使用OpenAI GPT-4o模型",
"DeepSeek (中文)": "使用DeepSeek模型适合非多模态任务",
"Default": "默认OWL实现使用OpenAI GPT-4o模型和全套工具",
"GAIA Roleplaying": "GAIA基准测试实现用于评估模型能力",
"OpenAI Compatible": "使用兼容OpenAI API的第三方模型支持自定义API端点",
"Azure OpenAI": "使用Azure OpenAI API",
"Ollama": "使用Ollama API",
"Terminal": "使用本地终端执行python文件",
}
# 环境变量分组
ENV_GROUPS = {
"模型API": [
{
"name": "OPENAI_API_KEY",
"label": "OpenAI API密钥",
"type": "password",
"required": False,
"help": "OpenAI API密钥用于访问GPT模型。获取方式https://platform.openai.com/api-keys",
},
{
"name": "OPENAI_API_BASE_URL",
"label": "OpenAI API基础URL",
"type": "text",
"required": False,
"help": "OpenAI API的基础URL可选。如果使用代理或自定义端点请设置此项。",
},
{
"name": "AZURE_OPENAI_KEY",
"label": "Azure OpenAI API密钥",
"type": "password",
"required": False,
"help": "Azure OpenAI API密钥用于访问Azure部署的GPT模型",
},
{
"name": "AZURE_OPENAI_ENDPOINT",
"label": "Azure OpenAI端点",
"type": "text",
"required": False,
"help": "Azure OpenAI服务的端点URL",
},
{
"name": "AZURE_DEPLOYMENT_NAME",
"label": "Azure OpenAI部署名称",
"type": "text",
"required": False,
"help": "Azure OpenAI服务的部署名称",
},
{
"name": "AZURE_OPENAI_VERSION",
"label": "Azure OpenAI API版本",
"type": "text",
"required": False,
"help": "Azure OpenAI API版本例如2023-12-01-preview",
},
{
"name": "QWEN_API_KEY",
"label": "阿里云Qwen API密钥",
"type": "password",
"required": False,
"help": "阿里云Qwen API密钥用于访问Qwen模型。获取方式https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key",
},
{
"name": "DEEPSEEK_API_KEY",
"label": "DeepSeek API密钥",
"type": "password",
"required": False,
"help": "DeepSeek API密钥用于访问DeepSeek模型。获取方式https://platform.deepseek.com/api_keys",
},
],
"搜索工具": [
{
"name": "GOOGLE_API_KEY",
"label": "Google API密钥",
"type": "password",
"required": False,
"help": "Google搜索API密钥用于网络搜索功能。获取方式https://developers.google.com/custom-search/v1/overview",
},
{
"name": "SEARCH_ENGINE_ID",
"label": "搜索引擎ID",
"type": "text",
"required": False,
"help": "Google自定义搜索引擎ID与Google API密钥配合使用。获取方式https://developers.google.com/custom-search/v1/overview",
},
],
"其他工具": [
{
"name": "HF_TOKEN",
"label": "Hugging Face令牌",
"type": "password",
"required": False,
"help": "Hugging Face API令牌用于访问Hugging Face模型和数据集。获取方式https://huggingface.co/join",
},
{
"name": "CHUNKR_API_KEY",
"label": "Chunkr API密钥",
"type": "password",
"required": False,
"help": "Chunkr API密钥用于文档处理功能。获取方式https://chunkr.ai/",
},
{
"name": "FIRECRAWL_API_KEY",
"label": "Firecrawl API密钥",
"type": "password",
"required": False,
"help": "Firecrawl API密钥用于网页爬取功能。获取方式https://www.firecrawl.dev/",
},
],
"自定义环境变量": [], # 用户自定义的环境变量将存储在这里
}
def get_script_info(script_name):
"""获取脚本的详细信息"""
return SCRIPT_DESCRIPTIONS.get(script_name, "无描述信息")
def load_env_vars():
"""加载环境变量"""
env_vars = {}
# 尝试从.env文件加载
dotenv.load_dotenv()
# 获取所有环境变量
for group in ENV_GROUPS.values():
for var in group:
env_vars[var["name"]] = os.environ.get(var["name"], "")
# 加载.env文件中可能存在的其他环境变量
if Path(".env").exists():
try:
with open(".env", "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
try:
key, value = line.split("=", 1)
key = key.strip()
value = value.strip()
# 处理引号包裹的值
if (value.startswith('"') and value.endswith('"')) or (
value.startswith("'") and value.endswith("'")
):
value = value[1:-1] # 移除首尾的引号
# 检查是否是已知的环境变量
known_var = False
for group in ENV_GROUPS.values():
if any(var["name"] == key for var in group):
known_var = True
break
# 如果不是已知的环境变量,添加到自定义环境变量组
if not known_var and key not in env_vars:
ENV_GROUPS["自定义环境变量"].append(
{
"name": key,
"label": key,
"type": "text",
"required": False,
"help": "用户自定义环境变量",
}
)
env_vars[key] = value
except Exception as e:
print(f"解析环境变量行时出错: {line}, 错误: {str(e)}")
except Exception as e:
print(f"加载.env文件时出错: {str(e)}")
return env_vars
def save_env_vars(env_vars):
"""保存环境变量到.env文件"""
# 读取现有的.env文件内容
env_path = Path(".env")
existing_content = {}
if env_path.exists():
try:
with open(env_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
try:
key, value = line.split("=", 1)
existing_content[key.strip()] = value.strip()
except Exception as e:
print(f"解析环境变量行时出错: {line}, 错误: {str(e)}")
except Exception as e:
print(f"读取.env文件时出错: {str(e)}")
# 更新环境变量
for key, value in env_vars.items():
if value is not None: # 允许空字符串值但不允许None
# 确保值是字符串形式
value = str(value) # 确保值是字符串
# 检查值是否已经被引号包裹
if (value.startswith('"') and value.endswith('"')) or (
value.startswith("'") and value.endswith("'")
):
# 已经被引号包裹,保持原样
existing_content[key] = value
# 更新环境变量时移除引号
os.environ[key] = value[1:-1]
else:
# 没有被引号包裹,添加双引号
# 用双引号包裹值,确保特殊字符被正确处理
quoted_value = f'"{value}"'
existing_content[key] = quoted_value
# 同时更新当前进程的环境变量(使用未引用的值)
os.environ[key] = value
# 写入.env文件
try:
with open(env_path, "w", encoding="utf-8") as f:
for key, value in existing_content.items():
f.write(f"{key}={value}\n")
except Exception as e:
print(f"写入.env文件时出错: {str(e)}")
return f"❌ 保存环境变量失败: {str(e)}"
return "✅ 环境变量已保存"
def add_custom_env_var(name, value, var_type):
"""添加自定义环境变量"""
if not name:
return "❌ 环境变量名不能为空", None
# 检查是否已存在同名环境变量
for group in ENV_GROUPS.values():
if any(var["name"] == name for var in group):
return f"❌ 环境变量 {name} 已存在", None
# 添加到自定义环境变量组
ENV_GROUPS["自定义环境变量"].append(
{
"name": name,
"label": name,
"type": var_type,
"required": False,
"help": "用户自定义环境变量",
}
)
# 保存环境变量
env_vars = {name: value}
save_env_vars(env_vars)
# 返回成功消息和更新后的环境变量组
return f"✅ 已添加环境变量 {name}", ENV_GROUPS["自定义环境变量"]
def update_custom_env_var(name, value, var_type):
"""更改自定义环境变量"""
if not name:
return "❌ 环境变量名不能为空", None
# 检查环境变量是否存在于自定义环境变量组中
found = False
for i, var in enumerate(ENV_GROUPS["自定义环境变量"]):
if var["name"] == name:
# 更新类型
ENV_GROUPS["自定义环境变量"][i]["type"] = var_type
found = True
break
if not found:
return f"❌ 自定义环境变量 {name} 不存在", None
# 保存环境变量值
env_vars = {name: value}
save_env_vars(env_vars)
# 返回成功消息和更新后的环境变量组
return f"✅ 已更新环境变量 {name}", ENV_GROUPS["自定义环境变量"]
def delete_custom_env_var(name):
"""删除自定义环境变量"""
if not name:
return "❌ 环境变量名不能为空", None
# 检查环境变量是否存在于自定义环境变量组中
found = False
for i, var in enumerate(ENV_GROUPS["自定义环境变量"]):
if var["name"] == name:
# 从自定义环境变量组中删除
del ENV_GROUPS["自定义环境变量"][i]
found = True
break
if not found:
return f"❌ 自定义环境变量 {name} 不存在", None
# 从.env文件中删除该环境变量
env_path = Path(".env")
if env_path.exists():
try:
with open(env_path, "r", encoding="utf-8") as f:
lines = f.readlines()
with open(env_path, "w", encoding="utf-8") as f:
for line in lines:
try:
# 更精确地匹配环境变量行
line_stripped = line.strip()
# 检查是否为注释行或空行
if not line_stripped or line_stripped.startswith("#"):
f.write(line) # 保留注释行和空行
continue
# 检查是否包含等号
if "=" not in line_stripped:
f.write(line) # 保留不包含等号的行
continue
# 提取变量名并检查是否与要删除的变量匹配
var_name = line_stripped.split("=", 1)[0].strip()
if var_name != name:
f.write(line) # 保留不匹配的变量
except Exception as e:
print(f"处理.env文件行时出错: {line}, 错误: {str(e)}")
# 出错时保留原行
f.write(line)
except Exception as e:
print(f"删除环境变量时出错: {str(e)}")
return f"❌ 删除环境变量失败: {str(e)}", None
# 从当前进程的环境变量中删除
if name in os.environ:
del os.environ[name]
# 返回成功消息和更新后的环境变量组
return f"✅ 已删除环境变量 {name}", ENV_GROUPS["自定义环境变量"]
def terminate_process():
"""终止当前运行的进程"""
global current_process
with process_lock:
if current_process is not None and current_process.poll() is None:
try:
# 在Windows上使用taskkill强制终止进程树
if os.name == "nt":
# 获取进程ID
pid = current_process.pid
# 使用taskkill命令终止进程及其子进程 - 避免使用shell=True以提高安全性
try:
subprocess.run(
["taskkill", "/F", "/T", "/PID", str(pid)], check=False
)
except subprocess.SubprocessError as e:
log_queue.put(f"终止进程时出错: {str(e)}\n")
return f"❌ 终止进程时出错: {str(e)}"
else:
# 在Unix上使用SIGTERM和SIGKILL
current_process.terminate()
try:
current_process.wait(timeout=3)
except subprocess.TimeoutExpired:
current_process.kill()
# 等待进程终止
try:
current_process.wait(timeout=2)
except subprocess.TimeoutExpired:
pass # 已经尝试强制终止,忽略超时
log_queue.put("进程已终止\n")
return "✅ 进程已终止"
except Exception as e:
log_queue.put(f"终止进程时出错: {str(e)}\n")
return f"❌ 终止进程时出错: {str(e)}"
else:
return "❌ 没有正在运行的进程"
def run_script(script_dropdown, question, progress=gr.Progress()):
"""运行选定的脚本并返回输出"""
global current_process
script_name = SCRIPTS.get(script_dropdown)
if not script_name:
return "❌ 无效的脚本选择", "", "", "", None
if not question.strip():
return "请输入问题!", "", "", "", None
# 清空日志队列
while not log_queue.empty():
log_queue.get()
# 创建日志目录
log_dir = Path("logs")
log_dir.mkdir(exist_ok=True)
# 创建带时间戳的日志文件
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
log_file = log_dir / f"{script_name.replace('.py', '')}_{timestamp}.log"
# 构建命令
# 获取当前脚本所在的基础路径
base_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
cmd = [
sys.executable,
os.path.join(base_path, "owl", "script_adapter.py"),
os.path.join(base_path, "owl", script_name),
]
# 创建环境变量副本并添加问题
env = os.environ.copy()
# 确保问题是字符串类型
if not isinstance(question, str):
question = str(question)
# 保留换行符,但确保是有效的字符串
env["OWL_QUESTION"] = question
# 启动进程
with process_lock:
current_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
env=env,
encoding="utf-8",
)
# 创建线程来读取输出
def read_output():
try:
# 使用唯一的时间戳确保日志文件名不重复
timestamp_unique = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
unique_log_file = (
log_dir / f"{script_name.replace('.py', '')}_{timestamp_unique}.log"
)
# 使用这个唯一的文件名写入日志
with open(unique_log_file, "w", encoding="utf-8") as f:
# 更新全局日志文件路径
nonlocal log_file
log_file = unique_log_file
for line in iter(current_process.stdout.readline, ""):
if line:
# 写入日志文件
f.write(line)
f.flush()
# 添加到队列
log_queue.put(line)
except Exception as e:
log_queue.put(f"读取输出时出错: {str(e)}\n")
# 启动读取线程
threading.Thread(target=read_output, daemon=True).start()
# 收集日志
logs = []
progress(0, desc="正在运行...")
# 等待进程完成或超时
start_time = time.time()
timeout = 1800 # 30分钟超时
while current_process.poll() is None:
# 检查是否超时
if time.time() - start_time > timeout:
with process_lock:
if current_process.poll() is None:
if os.name == "nt":
current_process.send_signal(signal.CTRL_BREAK_EVENT)
else:
current_process.terminate()
log_queue.put("执行超时,已终止进程\n")
break
# 从队列获取日志
while not log_queue.empty():
log = log_queue.get()
logs.append(log)
# 更新进度
elapsed = time.time() - start_time
progress(min(elapsed / 300, 0.99), desc="正在运行...")
# 短暂休眠以减少CPU使用
time.sleep(0.1)
# 每秒更新一次日志显示
yield (
status_message(current_process),
extract_answer(logs),
"".join(logs),
str(log_file),
None,
)
# 获取剩余日志
while not log_queue.empty():
logs.append(log_queue.get())
# 提取聊天历史(如果有)
chat_history = extract_chat_history(logs)
# 返回最终状态和日志
return (
status_message(current_process),
extract_answer(logs),
"".join(logs),
str(log_file),
chat_history,
)
def status_message(process):
"""根据进程状态返回状态消息"""
if process.poll() is None:
return "⏳ 正在运行..."
elif process.returncode == 0:
return "✅ 执行成功"
else:
return f"❌ 执行失败 (返回码: {process.returncode})"
def extract_answer(logs):
"""Extract the answer from logs"""
answer = ""
for log in logs:
if "Answer:" in log:
answer = log.split("Answer:", 1)[1].strip()
break
return answer
def extract_chat_history(logs):
"""Try to extract chat history from logs"""
try:
chat_json_str = ""
capture_json = False
for log in logs:
if "chat_history" in log:
# Start capturing JSON
start_idx = log.find("[")
if start_idx != -1:
capture_json = True
chat_json_str = log[start_idx:]
elif capture_json:
# Continue capturing JSON until the matching closing bracket is found
chat_json_str += log
if "]" in log:
# Found the closing bracket, try to parse the JSON
end_idx = chat_json_str.rfind("]") + 1
if end_idx > 0:
try:
# Clean up possible extra text
json_str = chat_json_str[:end_idx].strip()
chat_data = json.loads(json_str)
# Format for use with the Gradio chat component
formatted_chat = []
for msg in chat_data:
if "role" in msg and "content" in msg:
role = "用户" if msg["role"] == "user" else "助手"
formatted_chat.append([role, msg["content"]])
return formatted_chat
except json.JSONDecodeError:
# If parsing fails, continue capturing
pass
except Exception:
# Other errors: stop capturing
capture_json = False
except Exception:
pass
return None
def create_ui():
"""Create the Gradio interface"""
# Load environment variables
env_vars = load_env_vars()
with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue")) as app:
gr.Markdown(
"""
# 🦉 OWL 智能助手运行平台
选择一个模型并输入您的问题,系统将运行相应的脚本并显示结果。
"""
)
with gr.Tabs():
with gr.TabItem("运行模式"):
with gr.Row():
with gr.Column(scale=1):
# Ensure the default value is a key that exists in SCRIPTS
default_script = list(SCRIPTS.keys())[0] if SCRIPTS else None
script_dropdown = gr.Dropdown(
choices=list(SCRIPTS.keys()),
value=default_script,
label="选择模式",
)
script_info = gr.Textbox(
value=get_script_info(default_script)
if default_script
else "",
label="模型描述",
interactive=False,
)
script_dropdown.change(
fn=lambda x: get_script_info(x),
inputs=script_dropdown,
outputs=script_info,
)
question_input = gr.Textbox(
lines=8,
placeholder="请输入您的问题...",
label="问题",
elem_id="question_input",
show_copy_button=True,
)
gr.Markdown(
"""
> **注意**: 您输入的问题将替换脚本中的默认问题。系统会自动处理问题的替换,确保您的问题被正确使用。
> 支持多行输入,换行将被保留。
"""
)
with gr.Row():
run_button = gr.Button("运行", variant="primary")
stop_button = gr.Button("终止", variant="stop")
with gr.Column(scale=2):
with gr.Tabs():
with gr.TabItem("结果"):
status_output = gr.Textbox(label="状态")
answer_output = gr.Textbox(label="回答", lines=10)
log_file_output = gr.Textbox(label="日志文件路径")
with gr.TabItem("运行日志"):
log_output = gr.Textbox(label="完整日志", lines=25)
with gr.TabItem("聊天历史"):
chat_output = gr.Chatbot(label="对话历史")
# Example questions
examples = [
[
"Qwen Mini (中文)",
"浏览亚马逊并找出一款对程序员有吸引力的产品。请提供产品名称和价格",
],
[
"DeepSeek (中文)",
"请分析GitHub上CAMEL-AI项目的最新统计数据。找出该项目的星标数量、贡献者数量和最近的活跃度。然后创建一个简单的Excel表格来展示这些数据并生成一个柱状图来可视化这些指标。最后总结CAMEL项目的受欢迎程度和发展趋势。",
],
[
"Default",
"Navigate to Amazon.com and identify one product that is attractive to coders. Please provide me with the product name and price. No need to verify your answer.",
],
]
gr.Examples(examples=examples, inputs=[script_dropdown, question_input])
with gr.TabItem("环境变量配置"):
env_inputs = {}
save_status = gr.Textbox(label="保存状态", interactive=False)
# Section for adding custom environment variables
with gr.Accordion("添加自定义环境变量", open=True):
with gr.Row():
new_var_name = gr.Textbox(
label="环境变量名", placeholder="例如MY_CUSTOM_API_KEY"
)
new_var_value = gr.Textbox(
label="环境变量值", placeholder="输入值"
)
new_var_type = gr.Dropdown(
choices=["text", "password"], value="text", label="类型"
)
add_var_button = gr.Button("添加环境变量", variant="primary")
add_var_status = gr.Textbox(label="添加状态", interactive=False)
# List of custom environment variables
custom_vars_list = gr.JSON(
value=ENV_GROUPS["自定义环境变量"],
label="已添加的自定义环境变量",
visible=len(ENV_GROUPS["自定义环境变量"]) > 0,
)
# Section for updating and deleting custom environment variables
with gr.Accordion(
"更改或删除自定义环境变量",
open=True,
visible=len(ENV_GROUPS["自定义环境变量"]) > 0,
) as update_delete_accordion:
with gr.Row():
# Create a dropdown listing all custom environment variables
custom_var_dropdown = gr.Dropdown(
choices=[
var["name"] for var in ENV_GROUPS["自定义环境变量"]
],
label="选择环境变量",
interactive=True,
)
update_var_value = gr.Textbox(
label="新的环境变量值", placeholder="输入新值"
)
update_var_type = gr.Dropdown(
choices=["text", "password"], value="text", label="类型"
)
with gr.Row():
update_var_button = gr.Button("更新环境变量", variant="primary")
delete_var_button = gr.Button("删除环境变量", variant="stop")
update_var_status = gr.Textbox(label="操作状态", interactive=False)
# Click event for the add-variable button
add_var_button.click(
fn=add_custom_env_var,
inputs=[new_var_name, new_var_value, new_var_type],
outputs=[add_var_status, custom_vars_list],
).then(
fn=lambda vars: {"visible": len(vars) > 0},
inputs=[custom_vars_list],
outputs=[update_delete_accordion],
)
# Click event for the update-variable button
update_var_button.click(
fn=update_custom_env_var,
inputs=[custom_var_dropdown, update_var_value, update_var_type],
outputs=[update_var_status, custom_vars_list],
)
# Click event for the delete-variable button
delete_var_button.click(
fn=delete_custom_env_var,
inputs=[custom_var_dropdown],
outputs=[update_var_status, custom_vars_list],
).then(
fn=lambda vars: {"visible": len(vars) > 0},
inputs=[custom_vars_list],
outputs=[update_delete_accordion],
)
# When the custom variable list is updated, refresh the dropdown options
custom_vars_list.change(
fn=lambda vars: {
"choices": [var["name"] for var in vars],
"value": None,
},
inputs=[custom_vars_list],
outputs=[custom_var_dropdown],
)
# Existing environment variable configuration
for group_name, vars in ENV_GROUPS.items():
if (
group_name != "自定义环境变量" or len(vars) > 0
):  # Only show non-empty custom environment variable groups
with gr.Accordion(
group_name, open=(group_name != "自定义环境变量")
):
for var in vars:
# Add help information
gr.Markdown(f"**{var['help']}**")
if var["type"] == "password":
env_inputs[var["name"]] = gr.Textbox(
value=env_vars.get(var["name"], ""),
label=var["label"],
placeholder=f"请输入{var['label']}",
type="password",
)
else:
env_inputs[var["name"]] = gr.Textbox(
value=env_vars.get(var["name"], ""),
label=var["label"],
placeholder=f"请输入{var['label']}",
)
save_button = gr.Button("保存环境变量", variant="primary")
# Save environment variables
save_inputs = [
env_inputs[var_name]
for group in ENV_GROUPS.values()
for var in group
for var_name in [var["name"]]
if var_name in env_inputs
]
save_button.click(
fn=lambda *values: save_env_vars(
dict(
zip(
[
var["name"]
for group in ENV_GROUPS.values()
for var in group
if var["name"] in env_inputs
],
values,
)
)
),
inputs=save_inputs,
outputs=save_status,
)
# Run the script
run_button.click(
fn=run_script,
inputs=[script_dropdown, question_input],
outputs=[
status_output,
answer_output,
log_output,
log_file_output,
chat_output,
],
show_progress=True,
)
# Terminate execution
stop_button.click(fn=terminate_process, inputs=[], outputs=[status_output])
# Add footer
gr.Markdown(
"""
### 📝 使用说明
- 选择一个模型并输入您的问题
- 点击"运行"按钮开始执行
- 如需终止运行,点击"终止"按钮
- 在"结果"标签页查看执行状态和回答
- 在"运行日志"标签页查看完整日志
- 在"聊天历史"标签页查看对话历史(如果有)
- 在"环境变量配置"标签页配置API密钥和其他环境变量
- 您可以添加自定义环境变量,满足特殊需求
### ⚠️ 注意事项
- 运行某些模型可能需要API密钥请确保在"环境变量配置"标签页中设置了相应的环境变量
- 某些脚本可能需要较长时间运行,请耐心等待
- 如果运行超过30分钟进程将自动终止
- 您输入的问题将替换脚本中的默认问题,确保问题与所选模型兼容
"""
)
return app
if __name__ == "__main__":
# Create and launch the application
app = create_ui()
app.queue().launch(share=True)
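The app above streams a child script's stdout to the UI through a `queue.Queue` filled by a daemon thread (see `read_output` and the polling loop in `run_script`). That pattern can be sketched in isolation with only the standard library; the `python -c` payload below is merely a stand-in for an OWL script:

```python
import queue
import subprocess
import sys
import threading


def stream_command(cmd):
    """Run cmd, pushing each stdout line through a queue; return all lines."""
    log_queue = queue.Queue()
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1,  # line-buffered, so lines arrive as they are printed
    )

    def read_output():
        # Ends when stdout reaches EOF, i.e. when the child exits
        for line in iter(proc.stdout.readline, ""):
            log_queue.put(line)

    reader = threading.Thread(target=read_output, daemon=True)
    reader.start()

    proc.wait()             # the real app polls with a timeout instead
    reader.join(timeout=5)  # reader exits once stdout hits EOF

    lines = []
    while not log_queue.empty():
        lines.append(log_queue.get())
    return lines


if __name__ == "__main__":
    out = stream_command([sys.executable, "-c", "print('Answer: 42')"])
    print("".join(out), end="")
```

The real `run_script` loops on `poll()` with a 30-minute timeout and yields partial logs to Gradio rather than blocking in `wait()`, but either way the queue is what decouples the blocking `readline` from the UI loop.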


@@ -1,948 +0,0 @@
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
import os
import sys
import gradio as gr
import subprocess
import threading
import time
from datetime import datetime
import queue
from pathlib import Path
import json
import signal
import dotenv
# Set up log queue
log_queue: queue.Queue[str] = queue.Queue()
# Currently running process
current_process = None
process_lock = threading.Lock()
# Script options
SCRIPTS = {
"Qwen Mini (Chinese)": "run_qwen_mini_zh.py",
"Qwen (Chinese)": "run_qwen_zh.py",
"Mini": "run_mini.py",
"DeepSeek (Chinese)": "run_deepseek_zh.py",
"Default": "run.py",
"GAIA Roleplaying": "run_gaia_roleplaying.py",
"OpenAI Compatible": "run_openai_compatiable_model.py",
"Azure OpenAI": "run_azure_openai.py",
"Ollama": "run_ollama.py",
"Terminal": "run_terminal.py",
}
# Script descriptions
SCRIPT_DESCRIPTIONS = {
"Qwen Mini (Chinese)": "Uses the Chinese version of Alibaba Cloud's Qwen model, suitable for Chinese Q&A and tasks",
"Qwen (Chinese)": "Uses Alibaba Cloud's Qwen model, supports various tools and functions",
"Mini": "Lightweight version, uses OpenAI GPT-4o model",
"DeepSeek (Chinese)": "Uses DeepSeek model, suitable for non-multimodal tasks",
"Default": "Default OWL implementation, uses OpenAI GPT-4o model and full set of tools",
"GAIA Roleplaying": "GAIA benchmark implementation, used to evaluate model capabilities",
"OpenAI Compatible": "Uses third-party models compatible with OpenAI API, supports custom API endpoints",
"Azure OpenAI": "Uses Azure OpenAI API",
"Ollama": "Uses Ollama API",
"Terminal": "Uses local terminal to execute python files",
}
# Environment variable groups
ENV_GROUPS = {
"Model API": [
{
"name": "OPENAI_API_KEY",
"label": "OpenAI API Key",
"type": "password",
"required": False,
"help": "OpenAI API key for accessing GPT models. Get it from: https://platform.openai.com/api-keys",
},
{
"name": "OPENAI_API_BASE_URL",
"label": "OpenAI API Base URL",
"type": "text",
"required": False,
"help": "Base URL for OpenAI API, optional. Set this if using a proxy or custom endpoint.",
},
{
"name": "AZURE_OPENAI_KEY",
"label": "Azure OpenAI API Key",
"type": "password",
"required": False,
"help": "Azure OpenAI API key for accessing Azure deployed GPT models. Get it from: https://portal.azure.com/",
},
{
"name": "AZURE_OPENAI_ENDPOINT",
"label": "Azure OpenAI Endpoint",
"type": "text",
"required": False,
"help": "Azure OpenAI service endpoint URL",
},
{
"name": "AZURE_DEPLOYMENT_NAME",
"label": "Azure OpenAI Deployment Name",
"type": "text",
"required": False,
"help": "Azure OpenAI service deployment name",
},
{
"name": "AZURE_OPENAI_VERSION",
"label": "Azure OpenAI API Version",
"type": "text",
"required": False,
"help": "Azure OpenAI API version, e.g. 2023-12-01-preview",
},
{
"name": "QWEN_API_KEY",
"label": "Alibaba Cloud Qwen API Key",
"type": "password",
"required": False,
"help": "Alibaba Cloud Qwen API key for accessing Qwen models. Get it from: https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key",
},
{
"name": "DEEPSEEK_API_KEY",
"label": "DeepSeek API Key",
"type": "password",
"required": False,
"help": "DeepSeek API key for accessing DeepSeek models. Get it from: https://platform.deepseek.com/api_keys",
},
],
"Search Tools": [
{
"name": "GOOGLE_API_KEY",
"label": "Google API Key",
"type": "password",
"required": False,
"help": "Google Search API key for web search functionality. Get it from: https://developers.google.com/custom-search/v1/overview",
},
{
"name": "SEARCH_ENGINE_ID",
"label": "Search Engine ID",
"type": "text",
"required": False,
"help": "Google Custom Search Engine ID, used with Google API key. Get it from: https://developers.google.com/custom-search/v1/overview",
},
],
"Other Tools": [
{
"name": "HF_TOKEN",
"label": "Hugging Face Token",
"type": "password",
"required": False,
"help": "Hugging Face API token for accessing Hugging Face models and datasets. Get it from: https://huggingface.co/join",
},
{
"name": "CHUNKR_API_KEY",
"label": "Chunkr API Key",
"type": "password",
"required": False,
"help": "Chunkr API key for document processing functionality. Get it from: https://chunkr.ai/",
},
{
"name": "FIRECRAWL_API_KEY",
"label": "Firecrawl API Key",
"type": "password",
"required": False,
"help": "Firecrawl API key for web crawling functionality. Get it from: https://www.firecrawl.dev/",
},
],
"Custom Environment Variables": [], # User-defined environment variables will be stored here
}
def get_script_info(script_name):
"""Get detailed information about the script"""
return SCRIPT_DESCRIPTIONS.get(script_name, "No description available")
def load_env_vars():
"""Load environment variables"""
env_vars = {}
# Try to load from .env file
dotenv.load_dotenv()
# Get all environment variables
for group in ENV_GROUPS.values():
for var in group:
env_vars[var["name"]] = os.environ.get(var["name"], "")
# Load other environment variables that may exist in the .env file
if Path(".env").exists():
try:
with open(".env", "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
try:
key, value = line.split("=", 1)
key = key.strip()
value = value.strip()
# Handle quoted values
if (value.startswith('"') and value.endswith('"')) or (
value.startswith("'") and value.endswith("'")
):
value = value[
1:-1
] # Remove quotes at the beginning and end
# Check if it's a known environment variable
known_var = False
for group in ENV_GROUPS.values():
if any(var["name"] == key for var in group):
known_var = True
break
# If it's not a known environment variable, add it to the custom environment variables group
if not known_var and key not in env_vars:
ENV_GROUPS["Custom Environment Variables"].append(
{
"name": key,
"label": key,
"type": "text",
"required": False,
"help": "User-defined environment variable",
}
)
env_vars[key] = value
except Exception as e:
print(
f"Error parsing environment variable line: {line}, error: {str(e)}"
)
except Exception as e:
print(f"Error loading .env file: {str(e)}")
return env_vars
def save_env_vars(env_vars):
"""Save environment variables to .env file"""
# Read existing .env file content
env_path = Path(".env")
existing_content = {}
if env_path.exists():
try:
with open(env_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
try:
key, value = line.split("=", 1)
existing_content[key.strip()] = value.strip()
except Exception as e:
print(
f"Error parsing environment variable line: {line}, error: {str(e)}"
)
except Exception as e:
print(f"Error reading .env file: {str(e)}")
# Update environment variables
for key, value in env_vars.items():
if value is not None: # Allow empty string values, but not None
# Ensure the value is a string
value = str(value) # Ensure the value is a string
# Check if the value is already wrapped in quotes
if (value.startswith('"') and value.endswith('"')) or (
value.startswith("'") and value.endswith("'")
):
# Already wrapped in quotes, keep as is
existing_content[key] = value
# Update environment variable by removing quotes
os.environ[key] = value[1:-1]
else:
# Not wrapped in quotes, add double quotes
# Wrap the value in double quotes to ensure special characters are handled correctly
quoted_value = f'"{value}"'
existing_content[key] = quoted_value
# Also update the environment variable for the current process (using the unquoted value)
os.environ[key] = value
# Write to .env file
try:
with open(env_path, "w", encoding="utf-8") as f:
for key, value in existing_content.items():
f.write(f"{key}={value}\n")
except Exception as e:
print(f"Error writing to .env file: {str(e)}")
return f"❌ Failed to save environment variables: {str(e)}"
return "✅ Environment variables saved"
def add_custom_env_var(name, value, var_type):
"""Add custom environment variable"""
if not name:
return "❌ Environment variable name cannot be empty", None
# Check if an environment variable with the same name already exists
for group in ENV_GROUPS.values():
if any(var["name"] == name for var in group):
return f"❌ Environment variable {name} already exists", None
# Add to custom environment variables group
ENV_GROUPS["Custom Environment Variables"].append(
{
"name": name,
"label": name,
"type": var_type,
"required": False,
"help": "User-defined environment variable",
}
)
# Save environment variables
env_vars = {name: value}
save_env_vars(env_vars)
# Return success message and updated environment variable group
return f"✅ Added environment variable {name}", ENV_GROUPS[
"Custom Environment Variables"
]
def update_custom_env_var(name, value, var_type):
"""Update custom environment variable"""
if not name:
return "❌ Environment variable name cannot be empty", None
# Check if the environment variable exists in the custom environment variables group
found = False
for i, var in enumerate(ENV_GROUPS["Custom Environment Variables"]):
if var["name"] == name:
# Update type
ENV_GROUPS["Custom Environment Variables"][i]["type"] = var_type
found = True
break
if not found:
return f"❌ Custom environment variable {name} does not exist", None
# Save environment variable value
env_vars = {name: value}
save_env_vars(env_vars)
# Return success message and updated environment variable group
return f"✅ Updated environment variable {name}", ENV_GROUPS[
"Custom Environment Variables"
]
def delete_custom_env_var(name):
"""Delete custom environment variable"""
if not name:
return "❌ Environment variable name cannot be empty", None
# Check if the environment variable exists in the custom environment variables group
found = False
for i, var in enumerate(ENV_GROUPS["Custom Environment Variables"]):
if var["name"] == name:
# Delete from custom environment variables group
del ENV_GROUPS["Custom Environment Variables"][i]
found = True
break
if not found:
return f"❌ Custom environment variable {name} does not exist", None
# Delete the environment variable from .env file
env_path = Path(".env")
if env_path.exists():
try:
with open(env_path, "r", encoding="utf-8") as f:
lines = f.readlines()
with open(env_path, "w", encoding="utf-8") as f:
for line in lines:
try:
# More precisely match environment variable lines
line_stripped = line.strip()
# Check if it's a comment line or empty line
if not line_stripped or line_stripped.startswith("#"):
f.write(line) # Keep comment lines and empty lines
continue
# Check if it contains an equals sign
if "=" not in line_stripped:
f.write(line) # Keep lines without equals sign
continue
# Extract variable name and check if it matches the variable to be deleted
var_name = line_stripped.split("=", 1)[0].strip()
if var_name != name:
f.write(line) # Keep variables that don't match
except Exception as e:
print(
f"Error processing .env file line: {line}, error: {str(e)}"
)
# Keep the original line when an error occurs
f.write(line)
except Exception as e:
print(f"Error deleting environment variable: {str(e)}")
return f"❌ Failed to delete environment variable: {str(e)}", None
# Delete from current process environment variables
if name in os.environ:
del os.environ[name]
# Return success message and updated environment variable group
return f"✅ Deleted environment variable {name}", ENV_GROUPS[
"Custom Environment Variables"
]
def terminate_process():
"""Terminate the currently running process"""
global current_process
with process_lock:
if current_process is not None and current_process.poll() is None:
try:
# On Windows, use taskkill to forcibly terminate the process tree
if os.name == "nt":
# Get process ID
pid = current_process.pid
# Use taskkill command to terminate the process and its children - avoid using shell=True for better security
try:
subprocess.run(
["taskkill", "/F", "/T", "/PID", str(pid)], check=False
)
except subprocess.SubprocessError as e:
log_queue.put(f"Error terminating process: {str(e)}\n")
return f"❌ Error terminating process: {str(e)}"
else:
# On Unix, use SIGTERM and SIGKILL
current_process.terminate()
try:
current_process.wait(timeout=3)
except subprocess.TimeoutExpired:
current_process.kill()
# Wait for process to terminate
try:
current_process.wait(timeout=2)
except subprocess.TimeoutExpired:
pass # Already tried to force terminate, ignore timeout
log_queue.put("Process terminated\n")
return "✅ Process terminated"
except Exception as e:
log_queue.put(f"Error terminating process: {str(e)}\n")
return f"❌ Error terminating process: {str(e)}"
else:
return "❌ No process is currently running"
def run_script(script_dropdown, question, progress=gr.Progress()):
"""Run the selected script and return the output"""
global current_process
script_name = SCRIPTS.get(script_dropdown)
if not script_name:
return "❌ Invalid script selection", "", "", "", None
if not question.strip():
return "Please enter a question!", "", "", "", None
# Clear the log queue
while not log_queue.empty():
log_queue.get()
# Create log directory
log_dir = Path("logs")
log_dir.mkdir(exist_ok=True)
# Create log file with timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
log_file = log_dir / f"{script_name.replace('.py', '')}_{timestamp}.log"
# Build command
base_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
cmd = [
sys.executable,
os.path.join(base_path, "owl", "script_adapter.py"),
os.path.join(base_path, "owl", script_name),
]
# Create a copy of environment variables and add the question
env = os.environ.copy()
# Ensure question is a string type
if not isinstance(question, str):
question = str(question)
# Preserve newlines, but ensure it's a valid string
env["OWL_QUESTION"] = question
# Start the process
with process_lock:
current_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
env=env,
encoding="utf-8",
)
# Create thread to read output
def read_output():
try:
# Use a unique timestamp to ensure log filename is not duplicated
timestamp_unique = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
unique_log_file = (
log_dir / f"{script_name.replace('.py', '')}_{timestamp_unique}.log"
)
# Use this unique filename to write logs
with open(unique_log_file, "w", encoding="utf-8") as f:
# Update global log file path
nonlocal log_file
log_file = unique_log_file
for line in iter(current_process.stdout.readline, ""):
if line:
# Write to log file
f.write(line)
f.flush()
# Add to queue
log_queue.put(line)
except Exception as e:
log_queue.put(f"Error reading output: {str(e)}\n")
# Start the reading thread
threading.Thread(target=read_output, daemon=True).start()
# Collect logs
logs = []
progress(0, desc="Running...")
# Wait for process to complete or timeout
start_time = time.time()
timeout = 1800 # 30 minutes timeout
while current_process.poll() is None:
# Check if timeout
if time.time() - start_time > timeout:
with process_lock:
if current_process.poll() is None:
if os.name == "nt":
current_process.send_signal(signal.CTRL_BREAK_EVENT)
else:
current_process.terminate()
log_queue.put("Execution timeout, process terminated\n")
break
# Get logs from queue
while not log_queue.empty():
log = log_queue.get()
logs.append(log)
# Update progress
elapsed = time.time() - start_time
progress(min(elapsed / 300, 0.99), desc="Running...")
# Short sleep to reduce CPU usage
time.sleep(0.1)
# Update log display once per second
yield (
status_message(current_process),
extract_answer(logs),
"".join(logs),
str(log_file),
None,
)
# Get remaining logs
while not log_queue.empty():
logs.append(log_queue.get())
# Extract chat history (if any)
chat_history = extract_chat_history(logs)
# Return final status and logs
return (
status_message(current_process),
extract_answer(logs),
"".join(logs),
str(log_file),
chat_history,
)
def status_message(process):
"""Return status message based on process status"""
if process.poll() is None:
return "⏳ Running..."
elif process.returncode == 0:
return "✅ Execution successful"
else:
return f"❌ Execution failed (return code: {process.returncode})"
def extract_answer(logs):
"""Extract answer from logs"""
answer = ""
for log in logs:
if "Answer:" in log:
answer = log.split("Answer:", 1)[1].strip()
break
return answer
def extract_chat_history(logs):
"""Try to extract chat history from logs"""
try:
chat_json_str = ""
capture_json = False
for log in logs:
if "chat_history" in log:
# Start capturing JSON
start_idx = log.find("[")
if start_idx != -1:
capture_json = True
chat_json_str = log[start_idx:]
elif capture_json:
# Continue capturing JSON until finding the matching closing bracket
chat_json_str += log
if "]" in log:
# Found closing bracket, try to parse JSON
end_idx = chat_json_str.rfind("]") + 1
if end_idx > 0:
try:
# Clean up possible extra text
json_str = chat_json_str[:end_idx].strip()
chat_data = json.loads(json_str)
# Format for use with Gradio chat component
formatted_chat = []
for msg in chat_data:
if "role" in msg and "content" in msg:
role = (
"User" if msg["role"] == "user" else "Assistant"
)
formatted_chat.append([role, msg["content"]])
return formatted_chat
except json.JSONDecodeError:
# If parsing fails, continue capturing
pass
except Exception:
# Other errors, stop capturing
capture_json = False
except Exception:
pass
return None
def create_ui():
"""Create Gradio interface"""
# Load environment variables
env_vars = load_env_vars()
with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue")) as app:
gr.Markdown(
"""
# 🦉 OWL Intelligent Assistant Platform
Select a model and enter your question, the system will run the corresponding script and display the results.
"""
)
with gr.Tabs():
with gr.TabItem("Run Mode"):
with gr.Row():
with gr.Column(scale=1):
# Ensure default value is a key that exists in SCRIPTS
default_script = list(SCRIPTS.keys())[0] if SCRIPTS else None
script_dropdown = gr.Dropdown(
choices=list(SCRIPTS.keys()),
value=default_script,
label="Select Mode",
)
script_info = gr.Textbox(
value=get_script_info(default_script)
if default_script
else "",
label="Model Description",
interactive=False,
)
script_dropdown.change(
fn=lambda x: get_script_info(x),
inputs=script_dropdown,
outputs=script_info,
)
question_input = gr.Textbox(
lines=8,
placeholder="Please enter your question...",
label="Question",
elem_id="question_input",
show_copy_button=True,
)
gr.Markdown(
"""
> **Note**: Your question will replace the default question in the script. The system will automatically handle the replacement, ensuring your question is used correctly.
> Multi-line input is supported, line breaks will be preserved.
"""
)
with gr.Row():
run_button = gr.Button("Run", variant="primary")
stop_button = gr.Button("Stop", variant="stop")
with gr.Column(scale=2):
with gr.Tabs():
with gr.TabItem("Results"):
status_output = gr.Textbox(label="Status")
answer_output = gr.Textbox(label="Answer", lines=10)
log_file_output = gr.Textbox(label="Log File Path")
with gr.TabItem("Run Logs"):
log_output = gr.Textbox(label="Complete Logs", lines=25)
with gr.TabItem("Chat History"):
chat_output = gr.Chatbot(label="Conversation History")
# Example questions
examples = [
[
"Qwen Mini (Chinese)",
"Browse Amazon and find a product that is attractive to programmers. Please provide the product name and price.",
],
[
"DeepSeek (Chinese)",
"Please analyze the latest statistics of the CAMEL-AI project on GitHub. Find out the number of stars, number of contributors, and recent activity of the project. Then, create a simple Excel spreadsheet to display this data and generate a bar chart to visualize these metrics. Finally, summarize the popularity and development trends of the CAMEL project.",
],
[
"Default",
"Navigate to Amazon.com and identify one product that is attractive to coders. Please provide me with the product name and price. No need to verify your answer.",
],
]
gr.Examples(examples=examples, inputs=[script_dropdown, question_input])
with gr.TabItem("Environment Variable Configuration"):
env_inputs = {}
save_status = gr.Textbox(label="Save Status", interactive=False)
# Add custom environment variables section
with gr.Accordion("Add Custom Environment Variables", open=True):
with gr.Row():
new_var_name = gr.Textbox(
label="Environment Variable Name",
placeholder="Example: MY_CUSTOM_API_KEY",
)
new_var_value = gr.Textbox(
label="Environment Variable Value",
placeholder="Enter value",
)
new_var_type = gr.Dropdown(
choices=["text", "password"], value="text", label="Type"
)
add_var_button = gr.Button(
"Add Environment Variable", variant="primary"
)
add_var_status = gr.Textbox(label="Add Status", interactive=False)
# Custom environment variables list
custom_vars_list = gr.JSON(
value=ENV_GROUPS["Custom Environment Variables"],
label="Added Custom Environment Variables",
visible=len(ENV_GROUPS["Custom Environment Variables"]) > 0,
)
# Update and delete custom environment variables section
with gr.Accordion(
"Update or Delete Custom Environment Variables",
open=True,
visible=len(ENV_GROUPS["Custom Environment Variables"]) > 0,
) as update_delete_accordion:
with gr.Row():
# Create dropdown menu to display all custom environment variables
custom_var_dropdown = gr.Dropdown(
choices=[
var["name"]
for var in ENV_GROUPS["Custom Environment Variables"]
],
label="Select Environment Variable",
interactive=True,
)
update_var_value = gr.Textbox(
label="New Environment Variable Value",
placeholder="Enter new value",
)
update_var_type = gr.Dropdown(
choices=["text", "password"], value="text", label="Type"
)
with gr.Row():
update_var_button = gr.Button(
"Update Environment Variable", variant="primary"
)
delete_var_button = gr.Button(
"Delete Environment Variable", variant="stop"
)
update_var_status = gr.Textbox(
label="Operation Status", interactive=False
)
# Add environment variable button click event
add_var_button.click(
fn=add_custom_env_var,
inputs=[new_var_name, new_var_value, new_var_type],
outputs=[add_var_status, custom_vars_list],
).then(
fn=lambda vars: {"visible": len(vars) > 0},
inputs=[custom_vars_list],
outputs=[update_delete_accordion],
)
# Update environment variable button click event
update_var_button.click(
fn=update_custom_env_var,
inputs=[custom_var_dropdown, update_var_value, update_var_type],
outputs=[update_var_status, custom_vars_list],
)
# Delete environment variable button click event
delete_var_button.click(
fn=delete_custom_env_var,
inputs=[custom_var_dropdown],
outputs=[update_var_status, custom_vars_list],
).then(
fn=lambda vars: {"visible": len(vars) > 0},
inputs=[custom_vars_list],
outputs=[update_delete_accordion],
)
# When custom environment variables list is updated, update dropdown menu options
custom_vars_list.change(
fn=lambda vars: {
"choices": [var["name"] for var in vars],
"value": None,
},
inputs=[custom_vars_list],
outputs=[custom_var_dropdown],
)
# Existing environment variable configuration
for group_name, vars in ENV_GROUPS.items():
if (
group_name != "Custom Environment Variables" or len(vars) > 0
): # Only show non-empty custom environment variable groups
with gr.Accordion(
group_name,
open=(group_name != "Custom Environment Variables"),
):
for var in vars:
# Add help information
gr.Markdown(f"**{var['help']}**")
if var["type"] == "password":
env_inputs[var["name"]] = gr.Textbox(
value=env_vars.get(var["name"], ""),
label=var["label"],
placeholder=f"Please enter {var['label']}",
type="password",
)
else:
env_inputs[var["name"]] = gr.Textbox(
value=env_vars.get(var["name"], ""),
label=var["label"],
placeholder=f"Please enter {var['label']}",
)
save_button = gr.Button("Save Environment Variables", variant="primary")
# Save environment variables
save_inputs = [
env_inputs[var_name]
for group in ENV_GROUPS.values()
for var in group
for var_name in [var["name"]]
if var_name in env_inputs
]
save_button.click(
fn=lambda *values: save_env_vars(
dict(
zip(
[
var["name"]
for group in ENV_GROUPS.values()
for var in group
if var["name"] in env_inputs
],
values,
)
)
),
inputs=save_inputs,
outputs=save_status,
)
# Run script
run_button.click(
fn=run_script,
inputs=[script_dropdown, question_input],
outputs=[
status_output,
answer_output,
log_output,
log_file_output,
chat_output,
],
show_progress=True,
)
# Terminate execution
stop_button.click(fn=terminate_process, inputs=[], outputs=[status_output])
# Add footer
gr.Markdown(
"""
### 📝 Instructions
- Select a model and enter your question
- Click the "Run" button to start execution
- To stop execution, click the "Stop" button
- View execution status and answers in the "Results" tab
- View complete logs in the "Run Logs" tab
- View conversation history in the "Chat History" tab (if available)
- Configure API keys and other environment variables in the "Environment Variable Configuration" tab
- You can add custom environment variables to meet special requirements
### ⚠️ Notes
    - Running some models may require API keys; make sure you have set the corresponding environment variables in the "Environment Variable Configuration" tab
    - Some scripts may take a long time to run; please be patient
    - If execution exceeds 30 minutes, the process will terminate automatically
    - Your question will replace the default question in the script; make sure it is compatible with the selected model
"""
)
return app
if __name__ == "__main__":
# Create and launch the application
app = create_ui()
app.queue().launch(share=True)
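The notes above promise an automatic cut-off after 30 minutes of execution. A minimal sketch of how such a limit can be enforced around a script launch with `subprocess.run` (the `run_with_timeout` helper is hypothetical, not part of the codebase):

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_seconds=30 * 60):
    """Run a command, returning (-1, message) if it exceeds the timeout."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout_seconds
        )
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child once the timeout elapses
        return -1, "Process terminated: execution time limit exceeded"

code, out = run_with_timeout([sys.executable, "-c", "print('ok')"], timeout_seconds=60)
```

`subprocess.run` raises `TimeoutExpired` after killing the child, so the caller only has to map that exception to a status message.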


@@ -1,267 +0,0 @@
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
import os
import sys
import importlib.util
import re
from pathlib import Path
import traceback
def load_module_from_path(module_name, file_path):
    """Load a Python module from a file path."""
    try:
        spec = importlib.util.spec_from_file_location(module_name, file_path)
        if spec is None:
            print(f"Error: could not create a module spec from {file_path}")
            return None
        module = importlib.util.module_from_spec(spec)
        sys.modules[module_name] = module
        spec.loader.exec_module(module)
        return module
    except Exception as e:
        print(f"Error while loading module: {e}")
        traceback.print_exc()
        return None
def run_script_with_env_question(script_name):
    """Run a script using the question from the environment."""
    # Read the question from the environment variable
    question = os.environ.get("OWL_QUESTION")
    if not question:
        print("Error: the OWL_QUESTION environment variable is not set")
        sys.exit(1)
    # Script path
    script_path = Path(script_name).resolve()
    if not script_path.exists():
        print(f"Error: script {script_path} does not exist")
        sys.exit(1)
    # Build the temporary file path
    temp_script_path = script_path.with_name(f"temp_{script_path.name}")
    try:
        # Read the script content
        try:
            with open(script_path, "r", encoding="utf-8") as f:
                content = f.read()
        except Exception as e:
            print(f"Error reading the script file: {e}")
            sys.exit(1)
        # Check whether the script defines a main() function
        has_main = re.search(r"def\s+main\s*\(\s*\)\s*:", content) is not None
        # Escape special characters in the question
        escaped_question = (
            question.replace("\\", "\\\\")
            .replace('"', '\\"')
            .replace("'", "\\'")
            .replace("\n", "\\n")  # escape newlines
            .replace("\r", "\\r")  # escape carriage returns
        )
        # Find every question assignment in the script - an improved regex
        # that matches single-line and multi-line string assignments
        question_assignments = re.findall(
            r'question\s*=\s*(?:["\'].*?["\']|""".*?"""|\'\'\'.*?\'\'\'|\(.*?\))',
            content,
            re.DOTALL,
        )
        print(f"Found {len(question_assignments)} question assignment(s) in the script")
        # Modify the script content, replacing every question assignment
        modified_content = content
        # If the script assigns question anywhere, replace every assignment
        if question_assignments:
            for assignment in question_assignments:
                modified_content = modified_content.replace(
                    assignment, f'question = "{escaped_question}"'
                )
            print(f"Replaced every question assignment in the script with: {question}")
        else:
            # No question assignment found; try inserting one before main()
            if has_main:
                main_match = re.search(r"def\s+main\s*\(\s*\)\s*:", content)
                if main_match:
                    insert_pos = main_match.start()
                    modified_content = (
                        content[:insert_pos]
                        + f'\n# Question provided by the user\nquestion = "{escaped_question}"\n\n'
                        + content[insert_pos:]
                    )
                    print(f"Inserted the question before main(): {question}")
            else:
                # Without a main() function, insert at the top of the file
                modified_content = (
                    f'# Question provided by the user\nquestion = "{escaped_question}"\n\n'
                    + content
                )
                print(f"Inserted the question at the top of the file: {question}")
        # Add monkey-patch code so construct_society always receives the user's question
        monkey_patch_code = f"""
# Make sure construct_society uses the user's question
original_construct_society = globals().get('construct_society')
if original_construct_society:
    def patched_construct_society(*args, **kwargs):
        # Ignore the passed-in arguments and always use the user's question
        return original_construct_society("{escaped_question}")
    # Replace the original function
    globals()['construct_society'] = patched_construct_society
    print("Patched construct_society to always use the user's question")
"""
        # Append the monkey-patch code at the end of the file
        modified_content += monkey_patch_code
        # If the script never invokes main(), append the call
        if has_main and "__main__" not in content:
            modified_content += """
# Make sure main() is called
if __name__ == "__main__":
    main()
"""
            print("Appended the main() call")
        # If the script defines but never runs construct_society, append the calls
        if (
            "construct_society" in content
            and "run_society" in content
            and "Answer:" not in content
        ):
            modified_content += f"""
# Make sure construct_society and run_society are executed
if "construct_society" in globals() and "run_society" in globals():
    try:
        society = construct_society("{escaped_question}")
        from utils import run_society
        answer, chat_history, token_count = run_society(society)
        print(f"Answer: {{answer}}")
    except Exception as e:
        print(f"Runtime error: {{e}}")
        import traceback
        traceback.print_exc()
"""
            print("Appended the construct_society and run_society calls")
        # Execute the modified script
        try:
            # Add the script directory to sys.path
            script_dir = script_path.parent
            if str(script_dir) not in sys.path:
                sys.path.insert(0, str(script_dir))
            # Create the temporary file
            try:
                with open(temp_script_path, "w", encoding="utf-8") as f:
                    f.write(modified_content)
                print(f"Created temporary script file: {temp_script_path}")
            except Exception as e:
                print(f"Error creating the temporary script file: {e}")
                sys.exit(1)
            try:
                # Execute the temporary script directly
                print("Starting script execution...")
                # If a main() function exists, load the module and call it
                if has_main:
                    # Load the temporary module
                    module_name = f"temp_{script_path.stem}"
                    module = load_module_from_path(module_name, temp_script_path)
                    if module is None:
                        print(f"Error: could not load module {module_name}")
                        sys.exit(1)
                    # Make sure the module's question variable holds the user's question
                    setattr(module, "question", question)
                    # If the module defines construct_society, patch it
                    if hasattr(module, "construct_society"):
                        original_func = module.construct_society

                        def patched_func(*args, **kwargs):
                            return original_func(question)

                        module.construct_society = patched_func
                        print("Patched construct_society at the module level")
                    # Call main()
                    if hasattr(module, "main"):
                        print("Calling main()...")
                        module.main()
                    else:
                        print(f"Error: script {script_path} has no main() function")
                        sys.exit(1)
                else:
                    # No main() function: execute the modified script directly
                    print("Executing script content directly...")
                    # Execute the script in a safer way
                    with open(temp_script_path, "r", encoding="utf-8") as f:
                        script_code = f.read()
                    # Build a restricted global namespace
                    safe_globals = {
                        "__file__": str(temp_script_path),
                        "__name__": "__main__",
                    }
                    # Add the built-ins
                    safe_globals.update(
                        {k: v for k, v in globals().items() if k in ["__builtins__"]}
                    )
                    # Run the script
                    exec(script_code, safe_globals)
            except Exception as e:
                print(f"Error while executing the script: {e}")
                traceback.print_exc()
                sys.exit(1)
        except Exception as e:
            print(f"Error while processing the script: {e}")
            traceback.print_exc()
            sys.exit(1)
    except Exception as e:
        print(f"Error while processing the script: {e}")
        traceback.print_exc()
        sys.exit(1)
    finally:
        # Remove the temporary file
        if temp_script_path.exists():
            try:
                temp_script_path.unlink()
                print(f"Removed temporary script file: {temp_script_path}")
            except Exception as e:
                print(f"Error removing the temporary script file: {e}")


if __name__ == "__main__":
    # Check the command-line arguments
    if len(sys.argv) < 2:
        print("Usage: python script_adapter.py <script_path>")
        sys.exit(1)
    # Run the specified script
    run_script_with_env_question(sys.argv[1])
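The deleted adapter above hinges on the improved `question` regex. A small, self-contained demonstration of how that pattern locates and rewrites an assignment (the sample script text is invented for illustration):

```python
import re

# The same pattern script_adapter.py used to locate `question = ...` assignments:
# single-line strings, triple-quoted strings, and parenthesized expressions.
QUESTION_RE = r'question\s*=\s*(?:["\'].*?["\']|""".*?"""|\'\'\'.*?\'\'\'|\(.*?\))'

sample = 'question = "old task"\nprint(question)\n'
# findall returns whole matches because the group is non-capturing
matches = re.findall(QUESTION_RE, sample, re.DOTALL)
replaced = sample.replace(matches[0], 'question = "new task"')
```

`re.DOTALL` lets `.` cross line breaks, which is what allows the pattern to swallow multi-line triple-quoted assignments in one match.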


@@ -450,11 +450,21 @@ def run_society(
"""
input_msg = society.init_chat(init_prompt)
for _round in range(round_limit):
# Check if previous user response had TASK_DONE before getting next assistant response
if _round > 0 and (
"TASK_DONE" in input_msg.content or "任务已完成" in input_msg.content
):
break
assistant_response, user_response = society.step(input_msg)
overall_completion_token_count += (
assistant_response.info["usage"]["completion_tokens"]
+ user_response.info["usage"]["completion_tokens"]
)
overall_prompt_token_count += (
assistant_response.info["usage"]["prompt_tokens"]
+ user_response.info["usage"]["prompt_tokens"]
)
# convert tool call to dict
tool_call_records: List[dict] = []
@@ -530,10 +540,12 @@ async def arun_society(
f"Round #{_round} assistant_response:\n {assistant_response.msgs[0].content}"
)
# Check other termination conditions
if (
assistant_response.terminated
or user_response.terminated
or "TASK_DONE" in user_response.msg.content
or "任务已完成" in user_response.msg.content
):
break
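The hunks above move the TASK_DONE check ahead of the next `society.step` call, so no extra round runs after the user signals completion. A minimal sketch of that control flow with a stubbed society (these names are illustrative stand-ins, not the real CAMEL classes):

```python
class StubSociety:
    """Stand-in for a RolePlaying society; the user signals TASK_DONE on its second step."""

    def __init__(self):
        self.steps = 0

    def step(self, input_msg):
        self.steps += 1
        user_reply = "TASK_DONE" if self.steps >= 2 else "please continue"
        return "assistant reply", user_reply


def run_rounds(society, init_msg, round_limit=15):
    """Replicate the early-exit order above: check the previous user turn before stepping again."""
    input_msg = init_msg
    for _round in range(round_limit):
        # Exit before stepping again if the previous user turn finished the task
        if _round > 0 and "TASK_DONE" in input_msg:
            break
        _assistant_msg, input_msg = society.step(input_msg)
    return society.steps


society = StubSociety()
rounds_taken = run_rounds(society, "start the task")
```

Because the check runs at the top of the loop, the society stops after exactly the step in which TASK_DONE appeared instead of taking one wasted extra step.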

1306
owl/webapp.py Normal file

File diff suppressed because it is too large

804
owl/webapp_backup.py Normal file

@@ -0,0 +1,804 @@
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Import from the correct module path
from owl.utils import run_society
import os
import gradio as gr
from typing import Tuple, List, Dict
import importlib
from dotenv import load_dotenv, set_key, find_dotenv, unset_key
os.environ["PYTHONIOENCODING"] = "utf-8"
# Enhanced CSS with navigation bar and additional styling
custom_css = """
:root {
--primary-color: #4a89dc;
--secondary-color: #5d9cec;
--accent-color: #7baaf7;
--light-bg: #f8f9fa;
--border-color: #e4e9f0;
--text-muted: #8a9aae;
}
.container {
max-width: 1200px;
margin: 0 auto;
}
.navbar {
display: flex;
justify-content: space-between;
align-items: center;
padding: 15px 30px;
background: linear-gradient(90deg, var(--primary-color), var(--secondary-color));
color: white;
border-radius: 10px 10px 0 0;
margin-bottom: 0;
box-shadow: 0 2px 10px rgba(74, 137, 220, 0.15);
}
.navbar-logo {
display: flex;
align-items: center;
gap: 10px;
font-size: 1.5em;
font-weight: bold;
}
.navbar-menu {
display: flex;
gap: 20px;
}
/* Navbar styles moved to a more specific section below */
.header {
text-align: center;
margin-bottom: 20px;
background: linear-gradient(180deg, var(--secondary-color), var(--accent-color));
color: white;
padding: 40px 20px;
border-radius: 0 0 10px 10px;
box-shadow: 0 4px 6px rgba(93, 156, 236, 0.12);
}
.module-info {
background-color: var(--light-bg);
border-left: 5px solid var(--primary-color);
padding: 10px 15px;
margin-top: 10px;
border-radius: 5px;
font-size: 0.9em;
}
.answer-box {
background-color: var(--light-bg);
border-left: 5px solid var(--secondary-color);
padding: 15px;
margin-bottom: 20px;
border-radius: 5px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05);
}
.token-count {
background-color: #e9ecef;
padding: 10px;
border-radius: 5px;
text-align: center;
font-weight: bold;
margin-bottom: 20px;
}
.chat-container {
border: 1px solid var(--border-color);
border-radius: 5px;
max-height: 500px;
overflow-y: auto;
margin-bottom: 20px;
}
.footer {
text-align: center;
margin-top: 20px;
color: var(--text-muted);
font-size: 0.9em;
padding: 20px;
border-top: 1px solid var(--border-color);
}
.features-section {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 20px;
margin: 20px 0;
}
@media (max-width: 1200px) {
.features-section {
grid-template-columns: repeat(2, 1fr);
}
}
@media (max-width: 768px) {
.features-section {
grid-template-columns: 1fr;
}
}
.feature-card {
background-color: white;
border-radius: 8px;
padding: 20px;
box-shadow: 0 2px 8px rgba(74, 137, 220, 0.08);
transition: transform 0.3s, box-shadow 0.3s;
height: 100%;
display: flex;
flex-direction: column;
border: 1px solid rgba(228, 233, 240, 0.6);
}
.feature-card:hover {
transform: translateY(-5px);
box-shadow: 0 5px 15px rgba(74, 137, 220, 0.15);
border-color: rgba(93, 156, 236, 0.3);
}
.feature-icon {
font-size: 2em;
color: var(--primary-color);
margin-bottom: 10px;
text-shadow: 0 1px 2px rgba(74, 137, 220, 0.1);
}
.feature-card h3 {
margin-top: 10px;
margin-bottom: 10px;
}
.feature-card p {
flex-grow: 1;
font-size: 0.95em;
line-height: 1.5;
}
/* Navbar link styles - ensuring consistent colors */
.navbar-menu a {
color: #ffffff !important;
text-decoration: none;
padding: 5px 10px;
border-radius: 5px;
transition: background-color 0.3s, color 0.3s;
font-weight: 500;
text-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.navbar-menu a:hover {
background-color: rgba(255, 255, 255, 0.15);
color: #ffffff !important;
}
/* Improved button and input styles */
button.primary {
background: linear-gradient(90deg, var(--primary-color), var(--secondary-color));
transition: all 0.3s;
}
button.primary:hover {
background: linear-gradient(90deg, var(--secondary-color), var(--primary-color));
transform: translateY(-2px);
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.15);
}
.env-section {
background-color: var(--light-bg);
border-radius: 8px;
padding: 20px;
margin: 20px 0;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05);
}
.env-table {
width: 100%;
border-collapse: collapse;
margin-top: 15px;
}
.env-table th, .env-table td {
padding: 10px;
border: 1px solid var(--border-color);
}
.env-table th {
background-color: var(--primary-color);
color: white;
text-align: left;
}
.env-table tr:nth-child(even) {
background-color: rgba(0, 0, 0, 0.02);
}
.env-actions {
display: flex;
gap: 10px;
}
.env-var-input {
margin-bottom: 15px;
}
.env-save-status {
margin-top: 15px;
padding: 10px;
border-radius: 5px;
}
.success {
background-color: #d4edda;
color: #155724;
border: 1px solid #c3e6cb;
}
.error {
background-color: #f8d7da;
color: #721c24;
border: 1px solid #f5c6cb;
}
"""
# Dictionary containing module descriptions
MODULE_DESCRIPTIONS = {
"run": "默认模式使用OpenAI模型的默认的智能体协作模式适合大多数任务。",
"run_mini": "使用使用OpenAI模型最小化配置处理任务",
"run_deepseek_zh": "使用deepseek模型处理中文任务",
"run_terminal_zh": "终端模式可执行命令行操作支持网络搜索、文件处理等功能。适合需要系统交互的任务使用OpenAI模型",
"run_gaia_roleplaying": "GAIA基准测试实现用于评估Agent能力",
"run_openai_compatiable_model": "使用openai兼容模型处理任务",
"run_ollama": "使用本地ollama模型处理任务",
"run_qwen_mini_zh": "使用qwen模型最小化配置处理任务",
"run_qwen_zh": "使用qwen模型处理任务",
}
# Default environment variable template
DEFAULT_ENV_TEMPLATE = """# MODEL & API (See https://docs.camel-ai.org/key_modules/models.html#)
# OPENAI API
# OPENAI_API_KEY= ""
# OPENAI_API_BASE_URL=""
# Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
# QWEN_API_KEY=""
# DeepSeek API (https://platform.deepseek.com/api_keys)
# DEEPSEEK_API_KEY=""
#===========================================
# Tools & Services API
#===========================================
# Google Search API (https://developers.google.com/custom-search/v1/overview)
GOOGLE_API_KEY=""
SEARCH_ENGINE_ID=""
# Hugging Face API (https://huggingface.co/join)
HF_TOKEN=""
# Chunkr API (https://chunkr.ai/)
CHUNKR_API_KEY=""
# Firecrawl API (https://www.firecrawl.dev/)
FIRECRAWL_API_KEY=""
#FIRECRAWL_API_URL="https://api.firecrawl.dev"
"""
def format_chat_history(chat_history: List[Dict[str, str]]) -> List[List[str]]:
"""将聊天历史格式化为Gradio聊天组件可接受的格式
Args:
chat_history: 原始聊天历史
Returns:
List[List[str]]: 格式化后的聊天历史
"""
formatted_history = []
for message in chat_history:
user_msg = message.get("user", "")
assistant_msg = message.get("assistant", "")
if user_msg:
formatted_history.append([user_msg, None])
if assistant_msg and formatted_history:
formatted_history[-1][1] = assistant_msg
elif assistant_msg:
formatted_history.append([None, assistant_msg])
return formatted_history
def validate_input(question: str) -> bool:
"""验证用户输入是否有效
Args:
question: 用户问题
Returns:
bool: 输入是否有效
"""
# 检查输入是否为空或只包含空格
if not question or question.strip() == "":
return False
return True
def run_owl(
question: str, example_module: str
) -> Tuple[str, List[List[str]], str, str]:
"""运行OWL系统并返回结果
Args:
question: 用户问题
example_module: 要导入的示例模块名(如 "run_terminal_zh""run_deep"
Returns:
Tuple[...]: 回答、聊天历史、令牌计数、状态
"""
# 验证输入
if not validate_input(question):
return ("请输入有效的问题", [], "0", "❌ 错误: 输入无效")
try:
        # Make sure the environment variables are loaded
load_dotenv(find_dotenv(), override=True)
        # Check that the module is listed in MODULE_DESCRIPTIONS
if example_module not in MODULE_DESCRIPTIONS:
return (
f"所选模块 '{example_module}' 不受支持",
[],
"0",
"❌ 错误: 不支持的模块",
)
        # Dynamically import the target module
module_path = f"owl.examples.{example_module}"
try:
module = importlib.import_module(module_path)
except ImportError as ie:
return (
f"无法导入模块: {module_path}",
[],
"0",
f"❌ 错误: 模块 {example_module} 不存在或无法加载 - {str(ie)}",
)
except Exception as e:
return (f"导入模块时发生错误: {module_path}", [], "0", f"❌ 错误: {str(e)}")
        # Check that the module defines construct_society
if not hasattr(module, "construct_society"):
return (
f"模块 {module_path} 中未找到 construct_society 函数",
[],
"0",
"❌ 错误: 模块接口不兼容",
)
        # Build the society simulation
try:
society = module.construct_society(question)
except Exception as e:
return (
f"构建社会模拟时发生错误: {str(e)}",
[],
"0",
f"❌ 错误: 构建失败 - {str(e)}",
)
        # Run the society simulation
try:
answer, chat_history, token_info = run_society(society)
except Exception as e:
return (
f"运行社会模拟时发生错误: {str(e)}",
[],
"0",
f"❌ 错误: 运行失败 - {str(e)}",
)
        # Format the chat history
try:
formatted_chat_history = format_chat_history(chat_history)
except Exception:
            # If formatting fails, continue with an empty history
formatted_chat_history = []
        # Safely read the token counts
if not isinstance(token_info, dict):
token_info = {}
completion_tokens = token_info.get("completion_token_count", 0)
prompt_tokens = token_info.get("prompt_token_count", 0)
total_tokens = completion_tokens + prompt_tokens
return (
answer,
formatted_chat_history,
f"完成令牌: {completion_tokens:,} | 提示令牌: {prompt_tokens:,} | 总计: {total_tokens:,}",
"✅ 成功完成",
)
except Exception as e:
return (f"发生错误: {str(e)}", [], "0", f"❌ 错误: {str(e)}")
def update_module_description(module_name: str) -> str:
"""返回所选模块的描述"""
return MODULE_DESCRIPTIONS.get(module_name, "无可用描述")
# Environment variable management
def init_env_file():
"""初始化.env文件如果不存在"""
dotenv_path = find_dotenv()
if not dotenv_path:
with open(".env", "w") as f:
f.write(DEFAULT_ENV_TEMPLATE)
dotenv_path = find_dotenv()
return dotenv_path
def load_env_vars():
"""加载环境变量并返回字典格式"""
dotenv_path = init_env_file()
load_dotenv(dotenv_path, override=True)
env_vars = {}
with open(dotenv_path, "r") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#"):
if "=" in line:
key, value = line.split("=", 1)
env_vars[key.strip()] = value.strip().strip("\"'")
return env_vars
def save_env_vars(env_vars):
"""保存环境变量到.env文件"""
try:
dotenv_path = init_env_file()
        # Save each environment variable
        for key, value in env_vars.items():
            if key and key.strip():  # make sure the key is not empty
                set_key(dotenv_path, key.strip(), value.strip())
        # Reload the environment variables so the changes take effect
load_dotenv(dotenv_path, override=True)
return True, "环境变量已成功保存!"
except Exception as e:
return False, f"保存环境变量时出错: {str(e)}"
def add_env_var(key, value):
"""添加或更新单个环境变量"""
try:
if not key or not key.strip():
return False, "变量名不能为空"
dotenv_path = init_env_file()
set_key(dotenv_path, key.strip(), value.strip())
load_dotenv(dotenv_path, override=True)
return True, f"环境变量 {key} 已成功添加/更新!"
except Exception as e:
return False, f"添加环境变量时出错: {str(e)}"
def delete_env_var(key):
"""删除环境变量"""
try:
if not key or not key.strip():
return False, "变量名不能为空"
dotenv_path = init_env_file()
unset_key(dotenv_path, key.strip())
        # Also remove it from the current process environment
if key in os.environ:
del os.environ[key]
return True, f"环境变量 {key} 已成功删除!"
except Exception as e:
return False, f"删除环境变量时出错: {str(e)}"
def mask_sensitive_value(key: str, value: str) -> str:
"""对敏感信息进行掩码处理
Args:
key: 环境变量名
value: 环境变量值
Returns:
str: 处理后的值
"""
# 定义需要掩码的敏感关键词
sensitive_keywords = ["key", "token", "secret", "password", "api"]
    # Check for sensitive keywords (case-insensitive)
is_sensitive = any(keyword in key.lower() for keyword in sensitive_keywords)
if is_sensitive and value:
        # Mask sensitive values that are non-empty
return "*" * 8
return value
def update_env_table():
"""更新环境变量表格显示,对敏感信息进行掩码处理"""
env_vars = load_env_vars()
    # Mask sensitive values
masked_env_vars = [[k, mask_sensitive_value(k, v)] for k, v in env_vars.items()]
return masked_env_vars
def create_ui():
"""创建增强版Gradio界面"""
with gr.Blocks(css=custom_css, theme=gr.themes.Soft(primary_hue="blue")) as app:
with gr.Column(elem_classes="container"):
gr.HTML("""
<div class="navbar">
<div class="navbar-logo">
🦉 OWL 多智能体协作系统
</div>
<div class="navbar-menu">
<a href="#home">首页</a>
<a href="#env-settings">环境设置</a>
<a href="https://github.com/camel-ai/owl/blob/main/README.md#-community">加入交流群</a>
<a href="https://github.com/camel-ai/owl/blob/main/README.md">OWL文档</a>
<a href="https://github.com/camel-ai/camel">CAMEL框架</a>
<a href="https://camel-ai.org">CAMEL-AI官网</a>
</div>
</div>
<div class="header" id="home">
<p>我们的愿景是彻底改变AI代理协作解决现实世界任务的方式。通过利用动态代理交互OWL能够在多个领域实现更自然、高效和稳健的任务自动化。</p>
</div>
""")
with gr.Row(elem_id="features"):
gr.HTML("""
<div class="features-section">
<div class="feature-card">
<div class="feature-icon">🔍</div>
<h3>实时信息检索</h3>
<p>利用维基百科、谷歌搜索和其他在线资源获取最新信息。</p>
</div>
<div class="feature-card">
<div class="feature-icon">📹</div>
<h3>多模态处理</h3>
<p>支持处理互联网或本地的视频、图像和音频数据。</p>
</div>
<div class="feature-card">
<div class="feature-icon">🌐</div>
<h3>浏览器自动化</h3>
<p>使用Playwright框架模拟浏览器交互实现网页操作自动化。</p>
</div>
<div class="feature-card">
<div class="feature-icon">📄</div>
<h3>文档解析</h3>
<p>从各种文档格式中提取内容,并转换为易于处理的格式。</p>
</div>
<div class="feature-card">
<div class="feature-icon">💻</div>
<h3>代码执行</h3>
<p>使用解释器编写和运行Python代码实现自动化数据处理。</p>
</div>
<div class="feature-card">
<div class="feature-icon">🧰</div>
<h3>内置工具包</h3>
<p>提供丰富的工具包,支持搜索、数据分析、代码执行等多种功能。</p>
</div>
<div class="feature-card">
<div class="feature-icon">🔑</div>
<h3>环境变量管理</h3>
<p>便捷管理API密钥和环境配置安全存储敏感信息。</p>
</div>
</div>
""")
with gr.Row():
with gr.Column(scale=2):
question_input = gr.Textbox(
lines=5,
placeholder="请输入您的问题...",
label="问题",
elem_id="question_input",
show_copy_button=True,
)
                    # Enhanced module selection dropdown,
                    # limited to the modules defined in MODULE_DESCRIPTIONS
module_dropdown = gr.Dropdown(
choices=list(MODULE_DESCRIPTIONS.keys()),
value="run_terminal_zh",
label="选择功能模块",
interactive=True,
)
                    # Module description textbox
module_description = gr.Textbox(
value=MODULE_DESCRIPTIONS["run_terminal_zh"],
label="模块描述",
interactive=False,
elem_classes="module-info",
)
run_button = gr.Button(
"运行", variant="primary", elem_classes="primary"
)
with gr.Column(scale=1):
gr.Markdown("""
### 使用指南
1. **选择适合的模块**:根据您的任务需求选择合适的功能模块
2. **详细描述您的需求**:在输入框中清晰描述您的问题或任务
3. **启动智能处理**:点击"运行"按钮开始多智能体协作处理
4. **查看结果**:在下方标签页查看回答和完整对话历史
> **高级提示**: 对于复杂任务,可以尝试指定具体步骤和预期结果
""")
status_output = gr.Textbox(label="状态", interactive=False)
with gr.Tabs():
with gr.TabItem("回答"):
answer_output = gr.Textbox(
label="回答", lines=10, elem_classes="answer-box"
)
with gr.TabItem("对话历史"):
chat_output = gr.Chatbot(
label="完整对话记录", elem_classes="chat-container", height=500
)
token_count_output = gr.Textbox(
label="令牌计数", interactive=False, elem_classes="token-count"
)
            # Example questions
examples = [
"打开百度搜索总结一下camel-ai的camel框架的github star、fork数目等并把数字用plot包写成python文件保存到本地用本地终端执行python文件显示图出来给我",
"请分析GitHub上CAMEL-AI项目的最新统计数据。找出该项目的星标数量、贡献者数量和最近的活跃度。",
"浏览亚马逊并找出一款对程序员有吸引力的产品。请提供产品名称和价格",
"写一个hello world的python文件保存到本地",
]
gr.Examples(examples=examples, inputs=question_input)
            # New: environment variable management tab
with gr.TabItem("环境变量管理", id="env-settings"):
gr.Markdown("""
## 环境变量管理
在此处设置模型API密钥和其他服务凭证。这些信息将保存在本地的`.env`文件中确保您的API密钥安全存储且不会上传到网络。
""")
                # Environment variable table
env_table = gr.Dataframe(
headers=["变量名", ""],
datatype=["str", "str"],
row_count=10,
col_count=(2, "fixed"),
value=update_env_table,
label="当前环境变量",
interactive=False,
)
with gr.Row():
with gr.Column(scale=1):
new_env_key = gr.Textbox(
label="变量名", placeholder="例如: OPENAI_API_KEY"
)
with gr.Column(scale=2):
new_env_value = gr.Textbox(
label="", placeholder="输入API密钥或其他配置值"
)
with gr.Row():
add_env_button = gr.Button("添加/更新变量", variant="primary")
refresh_button = gr.Button("刷新变量列表")
delete_env_button = gr.Button("删除选定变量", variant="stop")
env_status = gr.Textbox(label="状态", interactive=False)
                # Variable selector (used for deletion)
env_var_to_delete = gr.Dropdown(
choices=[], label="选择要删除的变量", interactive=True
)
                # Refresh the options of the deletion selector
def update_delete_dropdown():
env_vars = load_env_vars()
return gr.Dropdown.update(choices=list(env_vars.keys()))
                # Wire up the event handlers
add_env_button.click(
fn=lambda k, v: add_env_var(k, v),
inputs=[new_env_key, new_env_value],
outputs=[env_status],
).then(fn=update_env_table, outputs=[env_table]).then(
fn=update_delete_dropdown, outputs=[env_var_to_delete]
).then(
fn=lambda: ("", ""), # 修改为返回两个空字符串的元组
outputs=[new_env_key, new_env_value],
)
refresh_button.click(fn=update_env_table, outputs=[env_table]).then(
fn=update_delete_dropdown, outputs=[env_var_to_delete]
)
delete_env_button.click(
fn=lambda k: delete_env_var(k),
inputs=[env_var_to_delete],
outputs=[env_status],
).then(fn=update_env_table, outputs=[env_table]).then(
fn=update_delete_dropdown, outputs=[env_var_to_delete]
)
gr.HTML("""
<div class="footer" id="about">
<h3>关于 OWL 多智能体协作系统</h3>
<p>OWL 是一个基于CAMEL框架开发的先进多智能体协作系统旨在通过智能体协作解决复杂问题。</p>
<p>© 2025 CAMEL-AI.org. 基于Apache License 2.0开源协议</p>
<p><a href="https://github.com/camel-ai/owl" target="_blank">GitHub</a></p>
</div>
""")
        # Set up event handling
run_button.click(
fn=run_owl,
inputs=[question_input, module_dropdown],
outputs=[answer_output, chat_output, token_count_output, status_output],
)
        # Update the description when the module selection changes
module_dropdown.change(
fn=update_module_description,
inputs=module_dropdown,
outputs=module_description,
)
return app
# Main entry point
def main():
try:
        # Initialize the .env file if it does not exist
init_env_file()
app = create_ui()
app.launch(share=False)
except Exception as e:
print(f"启动应用程序时发生错误: {str(e)}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
main()
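`load_env_vars` in the backup webapp parses the `.env` file by hand: it skips blanks and comments, splits on the first `=`, and strips surrounding quotes. That parsing rule in isolation (a sketch, not the actual helper):

```python
def parse_env_line(line):
    """Parse one .env line the way load_env_vars does; returns (key, value) or None."""
    line = line.strip()
    # Skip blank lines, comments, and lines without an assignment
    if not line or line.startswith("#") or "=" not in line:
        return None
    # Split only on the first '=' so values may themselves contain '='
    key, value = line.split("=", 1)
    # Strip surrounding double or single quotes from the value
    return key.strip(), value.strip().strip("\"'")
```

Splitting on the first `=` only is what keeps values such as base URLs with query strings intact.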

1276
owl/webapp_zh.py Normal file

File diff suppressed because it is too large


@@ -21,7 +21,7 @@ keywords = [
"learning-systems"
]
dependencies = [
"camel-ai[all]==0.2.29",
"camel-ai[all]==0.2.30",
"chunkr-ai>=0.0.41",
"docx2markdown>=0.1.1",
"gradio>=3.50.2",


@@ -1,4 +1,4 @@
camel-ai[all]==0.2.29
camel-ai[all]==0.2.30
chunkr-ai>=0.0.41
docx2markdown>=0.1.1
gradio>=3.50.2
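With the pin bumped to `camel-ai[all]==0.2.30` in both `pyproject.toml` and `requirements.txt`, an installed environment can silently drift from the pin. A hedged sketch of checking an installed version against an exact pin at runtime (the `check_pin` helper is hypothetical, not part of the repo):

```python
from importlib import metadata

def check_pin(package: str, pinned: str) -> bool:
    """Return True only if the installed version exactly matches the pin;
    False if it differs or the package is not installed."""
    try:
        return metadata.version(package) == pinned
    except metadata.PackageNotFoundError:
        return False
```

For example, `check_pin("camel-ai", "0.2.30")` would confirm the environment matches this commit's pin, assuming the package is installed.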


@@ -1,65 +0,0 @@
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
OWL Intelligent Assistant Platform Launch Script
"""
import os
import sys
from pathlib import Path
os.environ["PYTHONIOENCODING"] = "utf-8"
def main():
"""Main function to launch the OWL Intelligent Assistant Platform"""
# Ensure the current directory is the project root
project_root = Path(__file__).resolve().parent
os.chdir(project_root)
# Create log directory
log_dir = project_root / "logs"
log_dir.mkdir(exist_ok=True)
# Add project root to Python path
sys.path.insert(0, str(project_root))
try:
from owl.app_en import create_ui
# Create and launch the application
app = create_ui()
app.queue().launch(share=False)
except ImportError as e:
print(
f"Error: Unable to import necessary modules. Please ensure all dependencies are installed: {e}"
)
print(
"Tip: Run 'pip install -r requirements.txt --use-pep517' to install all dependencies"
)
sys.exit(1)
except Exception as e:
print(f"Error occurred while starting the application: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,63 +0,0 @@
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
OWL 智能助手运行平台启动脚本
"""
import os
import sys
from pathlib import Path
os.environ["PYTHONIOENCODING"] = "utf-8"
def main():
"""主函数启动OWL智能助手运行平台"""
# 确保当前目录是项目根目录
project_root = Path(__file__).resolve().parent
os.chdir(project_root)
    # Create the log directory
log_dir = project_root / "logs"
log_dir.mkdir(exist_ok=True)
    # Import and run the application
sys.path.insert(0, str(project_root))
try:
from owl.app import create_ui
        # Create and launch the application
app = create_ui()
app.queue().launch(share=False)
except ImportError as e:
print(f"错误: 无法导入必要的模块。请确保已安装所有依赖项: {e}")
print(
"提示: 运行 'pip install -r requirements.txt --use-pep517' 安装所有依赖项"
)
sys.exit(1)
except Exception as e:
print(f"启动应用程序时出错: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

8
uv.lock generated

@@ -482,7 +482,7 @@ wheels = [
[[package]]
name = "camel-ai"
version = "0.2.29"
version = "0.2.30"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama" },
@@ -499,9 +499,9 @@ dependencies = [
{ name = "pyyaml" },
{ name = "tiktoken" },
]
sdist = { url = "https://files.pythonhosted.org/packages/00/f8/fdb2478ec3b61f78af2a8a8ab0b575e795a015e89c2c058cee61d63a3951/camel_ai-0.2.29.tar.gz", hash = "sha256:b077885ea7a1fd6b4d53dd77e83b6b4c2ded96e43ced6a2f4bd51a434a29bbdb", size = 440795 }
sdist = { url = "https://files.pythonhosted.org/packages/ef/86/57cbcae86d2d60dab0aad31b5302525c75f45ff5edc3c3819a378fa9e12c/camel_ai-0.2.30.tar.gz", hash = "sha256:e1639376e70e9cf1477eca88d1bdc1813855cbd1db683528e1f93027b6aa0b0a", size = 442842 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2b/c4/4c0c388464d4c8f8ec7704d39459883e0769268b566a82245f545b09f703/camel_ai-0.2.29-py3-none-any.whl", hash = "sha256:812143a204e364703be40066101c0cf34769bc589dac81373444acc6bab8fe7b", size = 746424 },
{ url = "https://files.pythonhosted.org/packages/85/fe/8f1d17896aedbc9e0dfa1bff40d560e5a6808d9b727e04c293be6be5954f/camel_ai-0.2.30-py3-none-any.whl", hash = "sha256:e09eec860331cdb4da4e49f46f5d45345a81820c5847556fdf9e7827dd9bbfa9", size = 752672 },
]
[package.optional-dependencies]
@@ -3622,7 +3622,7 @@ dependencies = [
[package.metadata]
requires-dist = [
{ name = "camel-ai", extras = ["all"], specifier = "==0.2.29" },
{ name = "camel-ai", extras = ["all"], specifier = "==0.2.30" },
{ name = "chunkr-ai", specifier = ">=0.0.41" },
{ name = "docx2markdown", specifier = ">=0.1.1" },
{ name = "gradio", specifier = ">=3.50.2" },