Mirror of https://github.com/camel-ai/owl.git (synced 2026-03-22 05:57:17 +08:00)

Commit: update wendong
README.md (32 lines changed)
@@ -224,7 +224,7 @@ OWL requires various API keys to interact with different services. The `owl/.env

 2. **Configure Your API Keys**:
    Open the `.env` file in your preferred text editor and insert your API keys in the corresponding fields.

-> **Note**: For the minimal example (`run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
+> **Note**: For the minimal example (`examples/run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).

 ### Option 2: Setting Environment Variables Directly
@@ -275,7 +275,7 @@ cd .. && source .venv/bin/activate && cd owl
 playwright install-deps

 #run example demo script
-xvfb-python run.py
+xvfb-python examples/run.py

 # Option 2: Build and run using the provided scripts
 cd .container
@@ -299,17 +299,17 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude
 npx @wonderwhy-er/desktop-commander setup

 # Run the MCP example
-python owl/run_mcp.py
+python examples/run_mcp.py
 ```

-This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `owl/run_mcp.py` for the full implementation.
+This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `examples/run_mcp.py` for the full implementation.

 ## Basic Usage

 After installation and setting up your environment variables, you can start using OWL right away:

 ```bash
-python owl/run.py
+python examples/run.py
 ```

 ## Running with Different Models
@@ -330,28 +330,28 @@ OWL supports various LLM backends, though capabilities may vary depending on the

 ```bash
 # Run with Qwen model
-python owl/examples/run_qwen_zh.py
+python examples/run_qwen_zh.py

 # Run with Deepseek model
-python owl/examples/run_deepseek_zh.py
+python examples/run_deepseek_zh.py

 # Run with other OpenAI-compatible models
-python owl/examples/run_openai_compatiable_model.py
+python examples/run_openai_compatiable_model.py

 # Run with Azure OpenAI
-python owl/run_azure_openai.py
+python examples/run_azure_openai.py

 # Run with Ollama
-python owl/examples/run_ollama.py
+python examples/run_ollama.py
 ```

 For a simpler version that only requires an LLM API key, you can try our minimal example:

 ```bash
-python owl/examples/run_mini.py
+python examples/run_mini.py
 ```

-You can run OWL agent with your own task by modifying the `run.py` script:
+You can run OWL agent with your own task by modifying the `examples/run.py` script:

 ```python
 # Define your own task
@@ -393,7 +393,7 @@ Here are some tasks you can try with OWL:

 OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources:

-Try our comprehensive MCP example in `owl/run_mcp.py` to see these capabilities in action!
+Try our comprehensive MCP example in `examples/run_mcp.py` to see these capabilities in action!

 ## Available Toolkits
@@ -464,10 +464,10 @@ OWL includes an intuitive web-based user interface that makes it easier to inter

 ```bash
 # Start the Chinese version
-python owl/webapp_zh.py
+python examples/webapp_zh.py

 # Start the English version
-python owl/webapp.py
+python examples/webapp.py
 ```

 ## Features
@@ -545,7 +545,7 @@ Join us ([*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.o

 Join us for further discussions!
-<!--  -->
+
+

 # ❓ FAQ
README_zh.md (30 lines changed)
@@ -219,7 +219,7 @@ OWL 需要各种 API 密钥来与不同的服务进行交互。`owl/.env_templat

 2. **配置你的 API 密钥**:
    在你喜欢的文本编辑器中打开 `.env` 文件,并在相应字段中插入你的 API 密钥。

-> **注意**:对于最小示例(`run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。
+> **注意**:对于最小示例(`examples/run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。

 ### 选项 2:直接设置环境变量
@@ -269,7 +269,7 @@ cd .. && source .venv/bin/activate && cd owl
 playwright install-deps

 #运行例子演示脚本
-xvfb-python run.py
+xvfb-python examples/run.py

 # 选项2:使用提供的脚本构建和运行
 cd .container
@@ -293,23 +293,23 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude
 npx @wonderwhy-er/desktop-commander setup

 # 运行 MCP 示例
-python owl/run_mcp.py
+python examples/run_mcp.py
 ```

-这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `owl/run_mcp.py` 了解完整实现。
+这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `examples/run_mcp.py` 了解完整实现。

 ## 基本用法

 运行以下示例:

 ```bash
-python owl/run.py
+python examples/run.py
 ```

 我们还提供了一个最小化示例,只需配置LLM的API密钥即可运行:

 ```bash
-python owl/run_mini.py
+python examples/run_mini.py
 ```

 ## 使用不同的模型
@@ -330,22 +330,22 @@ OWL 支持多种 LLM 后端,但功能可能因模型的工具调用和多模

 ```bash
 # 使用 Qwen 模型运行
-python owl/examples/run_qwen_zh.py
+python examples/run_qwen_zh.py

 # 使用 Deepseek 模型运行
-python owl/examples/run_deepseek_zh.py
+python examples/run_deepseek_zh.py

 # 使用其他 OpenAI 兼容模型运行
-python owl/examples/run_openai_compatiable_model.py
+python examples/run_openai_compatiable_model.py

 # 使用 Azure OpenAI模型运行
-python owl/run_azure_openai.py
+python examples/run_azure_openai.py

 # 使用 Ollama 运行
-python owl/examples/run_ollama.py
+python examples/run_ollama.py
 ```

-你可以通过修改 `run.py` 脚本来运行自己的任务:
+你可以通过修改 `examples/run.py` 脚本来运行自己的任务:

 ```python
 # Define your own task
@@ -383,7 +383,7 @@ OWL 将自动调用与文档相关的工具来处理文件并提取答案。

 OWL 的 MCP 集成为 AI 模型与各种工具和数据源的交互提供了标准化的方式。

-查看我们的综合示例 `owl/run_mcp.py` 来体验这些功能!
+查看我们的综合示例 `examples/run_mcp.py` 来体验这些功能!

 ## 可用工具包
@@ -479,7 +479,7 @@ git checkout gaia58.18

 2. 运行评估脚本:
 ```bash
-python run_gaia_roleplaying.py
+python examples/run_gaia_roleplaying.py
 ```

 # ⏱️ 未来计划
@@ -531,7 +531,7 @@ python run_gaia_roleplaying.py

 加入我们,参与更多讨论!
-<!--  -->
+
+
+<!--  -->

 # ❓ 常见问题
@@ -25,7 +25,6 @@ from camel.logger import set_log_level

 from owl.utils import run_society
-from camel.societies import RolePlaying
 import os

 load_dotenv()
 set_log_level(level="DEBUG")
@@ -4,7 +4,7 @@
 #===========================================

 # OPENAI API (https://platform.openai.com/api-keys)
-# OPENAI_API_KEY= ""
+OPENAI_API_KEY='Your_Key'
 # OPENAI_API_BASE_URL=""

 # Azure OpenAI API
@@ -15,22 +15,22 @@


 # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
-# QWEN_API_KEY=""
+QWEN_API_KEY='Your_Key'

 # DeepSeek API (https://platform.deepseek.com/api_keys)
-# DEEPSEEK_API_KEY=""
+DEEPSEEK_API_KEY='Your_Key'

 #===========================================
 # Tools & Services API
 #===========================================

-# Google Search API (https://developers.google.com/custom-search/v1/overview)
-# GOOGLE_API_KEY=""
-# SEARCH_ENGINE_ID=""
+# Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3)
+GOOGLE_API_KEY='Your_Key'
+SEARCH_ENGINE_ID='Your_ID'

 # Chunkr API (https://chunkr.ai/)
-# CHUNKR_API_KEY=""
+CHUNKR_API_KEY='Your_Key'

 # Firecrawl API (https://www.firecrawl.dev/)
-#FIRECRAWL_API_KEY=""
+FIRECRAWL_API_KEY='Your_Key'
 #FIRECRAWL_API_URL="https://api.firecrawl.dev"
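The template above is meant to be copied to `.env` and loaded at startup (the script diff above calls `load_dotenv()` for this). A minimal stand-in parser sketches what that loading amounts to; this is a simplified assumption, as the real `python-dotenv` handles quoting, exports, and interpolation more carefully:

```python
import os
import tempfile

def load_env_file(path):
    """Parse KEY='value' lines and export them, skipping comments --
    a simplified stand-in for python-dotenv's load_dotenv()."""
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip("'\"")

# Exercise the parser on a throwaway file shaped like the template above.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# OPENAI API\nOPENAI_API_KEY='Your_Key'\n")
    env_path = fh.name

load_env_file(env_path)
print(os.environ["OPENAI_API_KEY"])  # Your_Key
```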
@@ -461,6 +461,10 @@ def run_society(
             assistant_response.info["usage"]["completion_tokens"]
+            + user_response.info["usage"]["completion_tokens"]
         )
         overall_prompt_token_count += (
             assistant_response.info["usage"]["prompt_tokens"]
+            + user_response.info["usage"]["prompt_tokens"]
         )

+        # convert tool call to dict
+        tool_call_records: List[dict] = []
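The hunk above starts counting the user agent's tokens alongside the assistant's. The accumulation pattern can be sketched with plain dicts standing in for the real CAMEL response objects; the numbers below are hypothetical:

```python
def accumulate_usage(infos):
    """Sum completion and prompt tokens across agent response infos,
    mirroring the running totals kept in run_society above."""
    completion_total = 0
    prompt_total = 0
    for info in infos:
        usage = info["usage"]
        completion_total += usage["completion_tokens"]
        prompt_total += usage["prompt_tokens"]
    return completion_total, prompt_total

# Hypothetical per-turn usage for the assistant and user agents.
assistant_info = {"usage": {"completion_tokens": 120, "prompt_tokens": 800}}
user_info = {"usage": {"completion_tokens": 40, "prompt_tokens": 650}}
print(accumulate_usage([assistant_info, user_info]))  # (160, 1450)
```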
owl/webapp.py (new file, 1316 lines; diff suppressed because it is too large)
owl/webapp_zh.py (114 lines changed)
@@ -151,7 +151,7 @@ def get_latest_logs(max_lines=100, queue_source=None):

     # 如果仍然没有日志,返回提示信息
     if not logs:
-        return "暂无对话记录。"
+        return "初始化运行中..."

     # 过滤日志,只保留 camel.agents.chat_agent - INFO 的日志
     filtered_logs = []
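The fallback message changed above sits just before the log filter named in the comment. A hedged sketch of that filtering step; the marker string is taken from the comment, and the sample lines are made up:

```python
def filter_chat_logs(lines):
    """Keep only chat-agent INFO records, matching the marker named in
    the comment above; everything else is dropped."""
    marker = "camel.agents.chat_agent - INFO"
    return [line for line in lines if marker in line]

sample = [
    "camel.agents.chat_agent - INFO: user message received",
    "camel.toolkits.search - DEBUG: issuing query",
    "camel.agents.chat_agent - INFO: assistant reply sent",
]
print(filter_chat_logs(sample))
```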
@@ -242,87 +242,49 @@ MODULE_DESCRIPTIONS = {
     "run": "默认模式:使用OpenAI模型的默认的智能体协作模式,适合大多数任务。",
     "run_mini": "使用使用OpenAI模型最小化配置处理任务",
     "run_deepseek_zh": "使用deepseek模型处理中文任务",
     "run_terminal_zh": "终端模式:可执行命令行操作,支持网络搜索、文件处理等功能。适合需要系统交互的任务,使用OpenAI模型",
     "run_gaia_roleplaying": "GAIA基准测试实现,用于评估Agent能力",
     "run_openai_compatiable_model": "使用openai兼容模型处理任务",
     "run_ollama": "使用本地ollama模型处理任务",
     "run_qwen_mini_zh": "使用qwen模型最小化配置处理任务",
     "run_qwen_zh": "使用qwen模型处理任务",
 }

-# API帮助信息
-API_HELP_INFO = {
-    "OPENAI_API_KEY": {
-        "name": "OpenAI API",
-        "desc": "OpenAI API密钥,用于访问GPT系列模型",
-        "url": "https://platform.openai.com/api-keys",
-    },
-    "QWEN_API_KEY": {
-        "name": "通义千问 API",
-        "desc": "阿里云通义千问API密钥",
-        "url": "https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key",
-    },
-    "DEEPSEEK_API_KEY": {
-        "name": "DeepSeek API",
-        "desc": "DeepSeek API密钥",
-        "url": "https://platform.deepseek.com/api_keys",
-    },
-    "GOOGLE_API_KEY": {
-        "name": "Google Search API",
-        "desc": "Google自定义搜索API密钥",
-        "url": "https://developers.google.com/custom-search/v1/overview",
-    },
-    "SEARCH_ENGINE_ID": {
-        "name": "Google Search Engine ID",
-        "desc": "Google自定义搜索引擎ID",
-        "url": "https://developers.google.com/custom-search/v1/overview",
-    },
-    "HF_TOKEN": {
-        "name": "Hugging Face API",
-        "desc": "Hugging Face API令牌",
-        "url": "https://huggingface.co/join",
-    },
-    "CHUNKR_API_KEY": {
-        "name": "Chunkr API",
-        "desc": "Chunkr API密钥",
-        "url": "https://chunkr.ai/",
-    },
-    "FIRECRAWL_API_KEY": {
-        "name": "Firecrawl API",
-        "desc": "Firecrawl API密钥",
-        "url": "https://www.firecrawl.dev/",
-    },
-}
-
 # 默认环境变量模板
-DEFAULT_ENV_TEMPLATE = """# MODEL & API (See https://docs.camel-ai.org/key_modules/models.html#)
+DEFAULT_ENV_TEMPLATE = """#===========================================
+# MODEL & API
+# (See https://docs.camel-ai.org/key_modules/models.html#)
+#===========================================

-# OPENAI API
-# OPENAI_API_KEY= ""
+# OPENAI API (https://platform.openai.com/api-keys)
+OPENAI_API_KEY='Your_Key'
 # OPENAI_API_BASE_URL=""

 # Azure OpenAI API
 # AZURE_OPENAI_BASE_URL=""
 # AZURE_API_VERSION=""
 # AZURE_OPENAI_API_KEY=""
 # AZURE_DEPLOYMENT_NAME=""


 # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
-# QWEN_API_KEY=""
+QWEN_API_KEY='Your_Key'

 # DeepSeek API (https://platform.deepseek.com/api_keys)
-# DEEPSEEK_API_KEY=""
+DEEPSEEK_API_KEY='Your_Key'

 #===========================================
 # Tools & Services API
 #===========================================

-# Google Search API (https://developers.google.com/custom-search/v1/overview)
-GOOGLE_API_KEY=""
-SEARCH_ENGINE_ID=""
-
-# Hugging Face API (https://huggingface.co/join)
-HF_TOKEN=""
+# Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3)
+GOOGLE_API_KEY='Your_Key'
+SEARCH_ENGINE_ID='Your_ID'

 # Chunkr API (https://chunkr.ai/)
-CHUNKR_API_KEY=""
+CHUNKR_API_KEY='Your_Key'

 # Firecrawl API (https://www.firecrawl.dev/)
-FIRECRAWL_API_KEY=""
+FIRECRAWL_API_KEY='Your_Key'
 #FIRECRAWL_API_URL="https://api.firecrawl.dev"
 """
@@ -357,7 +319,7 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
     # 验证输入
     if not validate_input(question):
         logging.warning("用户提交了无效的输入")
-        return ("请输入有效的问题", "0", "❌ 错误: 输入无效")
+        return ("请输入有效的问题", "0", "❌ 错误: 输入问题无效")

     try:
         # 确保环境变量已加载
@@ -374,7 +336,7 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
         )

         # 动态导入目标模块
-        module_path = f"owl.examples.{example_module}"
+        module_path = f"examples.{example_module}"
         try:
             logging.info(f"正在导入模块: {module_path}")
             module = importlib.import_module(module_path)
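The one-line change above reroutes the dynamic import from `owl.examples.*` to `examples.*`. A minimal sketch of this `importlib` pattern; the package prefix and error handling mirror the hunk, while the module names are illustrative:

```python
import importlib

def load_example(example_module: str):
    """Dynamically import an example module by dotted path, as run_owl
    does above; raises with context if the module is missing."""
    module_path = f"examples.{example_module}"
    try:
        return importlib.import_module(module_path)
    except ImportError as exc:
        raise RuntimeError(f"could not import {module_path}") from exc

# e.g. load_example("run_qwen_zh") would resolve examples/run_qwen_zh.py
```

One benefit of this pattern is that the web UI can offer any script in `examples/` as a dropdown choice without hard-coding imports.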
@@ -452,8 +414,6 @@ def update_module_description(module_name: str) -> str:
     return MODULE_DESCRIPTIONS.get(module_name, "无可用描述")


-# 环境变量管理功能
-
 # 存储前端配置的环境变量
 WEB_FRONTEND_ENV_VARS: dict[str, str] = {}
@@ -646,7 +606,9 @@ def get_api_guide(key: str) -> str:
     elif "deepseek" in key_lower:
         return "https://platform.deepseek.com/api_keys"
     elif "google" in key_lower:
-        return "https://developers.google.com/custom-search/v1/overview"
+        return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3"
+    elif "search_engine_id" in key_lower:
+        return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3"
     elif "chunkr" in key_lower:
         return "https://chunkr.ai/"
     elif "firecrawl" in key_lower:
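The hunk above points both Google-related keys at the same guide URL. Since `get_api_guide` dispatches on substrings of the variable name, the two branches could also be collapsed into one; a simplified stand-in covering only the keys touched by this hunk, not the full function:

```python
# Simplified sketch of the keyword dispatch in get_api_guide above.
CODA_GUIDE = ("https://coda.io/@jon-dallas/google-image-search-pack-example/"
              "search-engine-id-and-google-api-key-3")

def get_api_guide(key: str) -> str:
    key_lower = key.lower()
    if "google" in key_lower or "search_engine_id" in key_lower:
        return CODA_GUIDE  # both keys share one guide page
    elif "chunkr" in key_lower:
        return "https://chunkr.ai/"
    elif "firecrawl" in key_lower:
        return "https://www.firecrawl.dev/"
    return ""

print(get_api_guide("SEARCH_ENGINE_ID") == get_api_guide("GOOGLE_API_KEY"))  # True
```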
@@ -701,11 +663,11 @@ def save_env_table_changes(data):

         # 遍历DataFrame的每一行
         for index, row in data.iterrows():
-            # 使用列名或索引访问数据
-            if len(columns) >= 3:
-                # 如果有列名,使用列名访问
-                key = row.iloc[1] if hasattr(row, "iloc") else row[1]
-                value = row.iloc[2] if hasattr(row, "iloc") else row[2]
+            # 使用列名访问数据
+            # 获取变量名和值 (第0列是变量名,第1列是值)
+            key = row[0] if isinstance(row, pd.Series) else row.iloc[0]
+            value = row[1] if isinstance(row, pd.Series) else row.iloc[1]

             # 检查是否为空行或已删除的变量
             if key and str(key).strip():  # 如果键名不为空,则添加或更新
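The rewrite above reads the variable name from column 0 and its value from column 1 by position. A small illustration of that row-access pattern with a throwaway table; the contents are made up, and `iloc` is used throughout for positional access:

```python
import pandas as pd

# Hypothetical env-var table: column 0 holds names, column 1 holds values.
data = pd.DataFrame([["OPENAI_API_KEY", "sk-test"], ["", "orphan"]])

env = {}
for _, row in data.iterrows():
    # each row is a pd.Series; iloc gives positional access regardless
    # of how the columns are labeled
    key, value = row.iloc[0], row.iloc[1]
    if key and str(key).strip():  # skip blank / deleted rows
        env[str(key).strip()] = str(value)

print(env)  # {'OPENAI_API_KEY': 'sk-test'}
```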
@@ -812,6 +774,9 @@ def create_ui():
         """处理问题并实时更新日志"""
         global CURRENT_PROCESS

+        # 清空日志文件
+        clear_log_file()
+
         # 创建一个后台线程来处理问题
         result_queue = queue.Queue()
@@ -874,6 +839,8 @@ def create_ui():
             # 🦉 OWL 多智能体协作系统

             基于CAMEL框架开发的先进多智能体协作系统,旨在通过智能体协作解决复杂问题。
+            可以通过修改本地脚本自定义模型和工具。
+            本网页应用目前处于测试阶段,仅供演示和测试使用,尚未推荐用于生产环境。
             """
         )
@@ -1082,6 +1049,7 @@ def create_ui():
                     label="问题",
                     elem_id="question_input",
                     show_copy_button=True,
+                    value="打开百度搜索,总结一下camel-ai的camel框架的github star、fork数目等,并把数字用plot包写成python文件保存到本地,并运行生成的python文件。",
                 )

                 # 增强版模块选择下拉菜单
@@ -1141,7 +1109,7 @@ def create_ui():
             gr.Markdown("""
             ## 环境变量管理

-            在此处设置模型API密钥和其他服务凭证。这些信息将保存在本地的`.env`文件中,确保您的API密钥安全存储且不会上传到网络。
+            在此处设置模型API密钥和其他服务凭证。这些信息将保存在本地的`.env`文件中,确保您的API密钥安全存储且不会上传到网络。正确设置API密钥对于OWL系统的功能至关重要, 可以按找工具需求灵活配置环境变量。
             """)

             # 主要内容分为两列布局
@@ -1150,12 +1118,9 @@ def create_ui():
         with gr.Column(scale=3):
             with gr.Box(elem_classes="env-controls"):
                 # 环境变量表格 - 设置为可交互以直接编辑
-                gr.Markdown("### 环境变量管理")
                 gr.Markdown("""
-                管理您的API密钥和其他环境变量。正确设置API密钥对于OWL系统的功能至关重要。
-
                 <div style="background-color: #e7f3fe; border-left: 6px solid #2196F3; padding: 10px; margin: 15px 0; border-radius: 4px;">
-                <strong>提示:</strong> 请确保正确设置API密钥以确保系统功能正常
+                <strong>提示:</strong> 请确保运行cp .env_template .env创建本地.env文件,根据运行模块灵活配置所需环境变量
                 </div>
                 """)
@@ -1186,7 +1151,6 @@ def create_ui():
                     <li><strong>删除变量</strong>: 清空变量名即可删除该行</li>
                     <li><strong>获取API密钥</strong>: 点击"获取指南"列中的链接获取相应API密钥</li>
                     </ul>
-                    <strong>注意</strong>: 所有API密钥都安全地存储在本地,不会上传到网络
                     </div>
                     """,
                     elem_classes="env-instructions",
@@ -1221,7 +1185,7 @@ def create_ui():

                 # 示例问题
                 examples = [
-                    "打开百度搜索,总结一下camel-ai的camel框架的github star、fork数目等,并把数字用plot包写成python文件保存到本地,用本地终端执行python文件显示图出来给我",
+                    "打开百度搜索,总结一下camel-ai的camel框架的github star、fork数目等,并把数字用plot包写成python文件保存到本地,并运行生成的python文件。",
                     "浏览亚马逊并找出一款对程序员有吸引力的产品。请提供产品名称和价格",
                     "写一个hello world的python文件,保存到本地",
                 ]