diff --git a/README.md b/README.md index d26b1ac..8e563a7 100644 --- a/README.md +++ b/README.md @@ -224,7 +224,7 @@ OWL requires various API keys to interact with different services. The `owl/.env 2. **Configure Your API Keys**: Open the `.env` file in your preferred text editor and insert your API keys in the corresponding fields. - > **Note**: For the minimal example (`run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`). + > **Note**: For the minimal example (`examples/run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`). ### Option 2: Setting Environment Variables Directly @@ -275,7 +275,7 @@ cd .. && source .venv/bin/activate && cd owl playwright install-deps #run example demo script -xvfb-python run.py +xvfb-python examples/run.py # Option 2: Build and run using the provided scripts cd .container @@ -299,17 +299,17 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude npx @wonderwhy-er/desktop-commander setup # Run the MCP example -python owl/run_mcp.py +python examples/run_mcp.py ``` -This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `owl/run_mcp.py` for the full implementation. +This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `examples/run_mcp.py` for the full implementation. 
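For context on the MCP setup this hunk documents: MCP clients are typically pointed at servers through a small JSON configuration. A sketch of such an entry for the desktop-commander server installed above (illustrative only — the exact file name and schema that `examples/run_mcp.py` reads may differ) could look like:

```json
{
  "mcpServers": {
    "desktop-commander": {
      "command": "npx",
      "args": ["-y", "@wonderwhy-er/desktop-commander"]
    }
  }
}
```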
## Basic Usage After installation and setting up your environment variables, you can start using OWL right away: ```bash -python owl/run.py +python examples/run.py ``` ## Running with Different Models @@ -330,28 +330,28 @@ OWL supports various LLM backends, though capabilities may vary depending on the ```bash # Run with Qwen model -python owl/examples/run_qwen_zh.py +python examples/run_qwen_zh.py # Run with Deepseek model -python owl/examples/run_deepseek_zh.py +python examples/run_deepseek_zh.py # Run with other OpenAI-compatible models -python owl/examples/run_openai_compatiable_model.py +python examples/run_openai_compatiable_model.py # Run with Azure OpenAI -python owl/run_azure_openai.py +python examples/run_azure_openai.py # Run with Ollama -python owl/examples/run_ollama.py +python examples/run_ollama.py ``` For a simpler version that only requires an LLM API key, you can try our minimal example: ```bash -python owl/examples/run_mini.py +python examples/run_mini.py ``` -You can run OWL agent with your own task by modifying the `run.py` script: +You can run the OWL agent with your own task by modifying the `examples/run.py` script: ```python # Define your own task @@ -393,7 +393,7 @@ Here are some tasks you can try with OWL: OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources: -Try our comprehensive MCP example in `owl/run_mcp.py` to see these capabilities in action! +Try our comprehensive MCP example in `examples/run_mcp.py` to see these capabilities in action! ## Available Toolkits @@ -464,10 +464,10 @@ OWL includes an intuitive web-based user interface that makes it easier to inter ```bash # Start the Chinese version -python owl/webapp_zh.py +python examples/webapp_zh.py # Start the English version -python owl/webapp.py +python examples/webapp.py ``` ## Features @@ -545,7 +545,7 @@ Join us ([*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.o Join us for further discussions!
- + # ❓ FAQ diff --git a/README_zh.md b/README_zh.md index da0622f..b15c114 100644 --- a/README_zh.md +++ b/README_zh.md @@ -219,7 +219,7 @@ OWL 需要各种 API 密钥来与不同的服务进行交互。`owl/.env_templat 2. **配置你的 API 密钥**: 在你喜欢的文本编辑器中打开 `.env` 文件,并在相应字段中插入你的 API 密钥。 - > **注意**:对于最小示例(`run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。 + > **注意**:对于最小示例(`examples/run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。 ### 选项 2:直接设置环境变量 @@ -269,7 +269,7 @@ cd .. && source .venv/bin/activate && cd owl playwright install-deps #运行例子演示脚本 -xvfb-python run.py +xvfb-python examples/run.py # 选项2:使用提供的脚本构建和运行 cd .container @@ -293,23 +293,23 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude npx @wonderwhy-er/desktop-commander setup # 运行 MCP 示例 -python owl/run_mcp.py +python examples/run_mcp.py ``` -这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `owl/run_mcp.py` 了解完整实现。 +这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `examples/run_mcp.py` 了解完整实现。 ## 基本用法 运行以下示例: ```bash -python owl/run.py +python examples/run.py ``` 我们还提供了一个最小化示例,只需配置LLM的API密钥即可运行: ```bash -python owl/run_mini.py +python examples/run_mini.py ``` ## 使用不同的模型 @@ -330,22 +330,22 @@ OWL 支持多种 LLM 后端,但功能可能因模型的工具调用和多模 ```bash # 使用 Qwen 模型运行 -python owl/examples/run_qwen_zh.py +python examples/run_qwen_zh.py # 使用 Deepseek 模型运行 -python owl/examples/run_deepseek_zh.py +python examples/run_deepseek_zh.py # 使用其他 OpenAI 兼容模型运行 -python owl/examples/run_openai_compatiable_model.py +python examples/run_openai_compatiable_model.py # 使用 Azure OpenAI模型运行 -python owl/run_azure_openai.py +python examples/run_azure_openai.py # 使用 Ollama 运行 -python owl/examples/run_ollama.py +python examples/run_ollama.py ``` -你可以通过修改 `run.py` 脚本来运行自己的任务: +你可以通过修改 `examples/run.py` 脚本来运行自己的任务: ```python # Define your own task @@ -383,7 +383,7 @@ OWL 将自动调用与文档相关的工具来处理文件并提取答案。 OWL 的 MCP 集成为 AI 模型与各种工具和数据源的交互提供了标准化的方式。 -查看我们的综合示例 `owl/run_mcp.py` 来体验这些功能! +查看我们的综合示例 `examples/run_mcp.py` 来体验这些功能! 
## 可用工具包 @@ -479,7 +479,7 @@ git checkout gaia58.18 2. 运行评估脚本: ```bash -python run_gaia_roleplaying.py +python examples/run_gaia_roleplaying.py ``` # ⏱️ 未来计划 @@ -531,7 +531,7 @@ python run_gaia_roleplaying.py 加入我们,参与更多讨论! - + # ❓ 常见问题 diff --git a/owl/examples/run.py b/examples/run.py similarity index 100% rename from owl/examples/run.py rename to examples/run.py diff --git a/owl/run_azure_openai.py b/examples/run_azure_openai.py similarity index 100% rename from owl/run_azure_openai.py rename to examples/run_azure_openai.py diff --git a/owl/examples/run_deepseek_zh.py b/examples/run_deepseek_zh.py similarity index 100% rename from owl/examples/run_deepseek_zh.py rename to examples/run_deepseek_zh.py diff --git a/owl/examples/run_gaia_roleplaying.py b/examples/run_gaia_roleplaying.py similarity index 100% rename from owl/examples/run_gaia_roleplaying.py rename to examples/run_gaia_roleplaying.py diff --git a/owl/run_mcp.py b/examples/run_mcp.py similarity index 100% rename from owl/run_mcp.py rename to examples/run_mcp.py diff --git a/owl/examples/run_mini.py b/examples/run_mini.py similarity index 100% rename from owl/examples/run_mini.py rename to examples/run_mini.py diff --git a/owl/examples/run_ollama.py b/examples/run_ollama.py similarity index 100% rename from owl/examples/run_ollama.py rename to examples/run_ollama.py diff --git a/owl/examples/run_openai_compatiable_model.py b/examples/run_openai_compatiable_model.py similarity index 100% rename from owl/examples/run_openai_compatiable_model.py rename to examples/run_openai_compatiable_model.py diff --git a/owl/examples/run_qwen_mini_zh.py b/examples/run_qwen_mini_zh.py similarity index 100% rename from owl/examples/run_qwen_mini_zh.py rename to examples/run_qwen_mini_zh.py diff --git a/owl/examples/run_qwen_zh.py b/examples/run_qwen_zh.py similarity index 100% rename from owl/examples/run_qwen_zh.py rename to examples/run_qwen_zh.py diff --git a/owl/examples/run_terminal.py b/examples/run_terminal.py 
similarity index 100% rename from owl/examples/run_terminal.py rename to examples/run_terminal.py diff --git a/owl/examples/run_terminal_zh.py b/examples/run_terminal_zh.py similarity index 99% rename from owl/examples/run_terminal_zh.py rename to examples/run_terminal_zh.py index 2174bd1..f0a290d 100644 --- a/owl/examples/run_terminal_zh.py +++ b/examples/run_terminal_zh.py @@ -25,7 +25,6 @@ from camel.logger import set_log_level from owl.utils import run_society from camel.societies import RolePlaying -import os load_dotenv() set_log_level(level="DEBUG") diff --git a/owl/.env_template b/owl/.env_template index 9e4328f..c30f21e 100644 --- a/owl/.env_template +++ b/owl/.env_template @@ -4,7 +4,7 @@ #=========================================== # OPENAI API (https://platform.openai.com/api-keys) -# OPENAI_API_KEY= "" +OPENAI_API_KEY='Your_Key' # OPENAI_API_BASE_URL="" # Azure OpenAI API @@ -15,22 +15,22 @@ # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key) -# QWEN_API_KEY="" +QWEN_API_KEY='Your_Key' # DeepSeek API (https://platform.deepseek.com/api_keys) -# DEEPSEEK_API_KEY="" +DEEPSEEK_API_KEY='Your_Key' #=========================================== # Tools & Services API #=========================================== -# Google Search API (https://developers.google.com/custom-search/v1/overview) -# GOOGLE_API_KEY="" -# SEARCH_ENGINE_ID="" +# Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3) +GOOGLE_API_KEY='Your_Key' +SEARCH_ENGINE_ID='Your_ID' # Chunkr API (https://chunkr.ai/) -# CHUNKR_API_KEY="" +CHUNKR_API_KEY='Your_Key' # Firecrawl API (https://www.firecrawl.dev/) -#FIRECRAWL_API_KEY="" +FIRECRAWL_API_KEY='Your_Key' #FIRECRAWL_API_URL="https://api.firecrawl.dev" \ No newline at end of file diff --git a/owl/utils/enhanced_role_playing.py b/owl/utils/enhanced_role_playing.py index d50b337..76bf757 100644 --- a/owl/utils/enhanced_role_playing.py +++ 
b/owl/utils/enhanced_role_playing.py @@ -461,6 +461,10 @@ def run_society( assistant_response.info["usage"]["completion_tokens"] + user_response.info["usage"]["completion_tokens"] ) + overall_prompt_token_count += ( + assistant_response.info["usage"]["prompt_tokens"] + + user_response.info["usage"]["prompt_tokens"] + ) # convert tool call to dict tool_call_records: List[dict] = [] diff --git a/owl/webapp.py b/owl/webapp.py new file mode 100644 index 0000000..52b26c6 --- /dev/null +++ b/owl/webapp.py @@ -0,0 +1,1316 @@ +# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. ========= +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. 
========= +# Import from the correct module path +from owl.utils import run_society +import os +import gradio as gr +import time +import json +import logging +import datetime +from typing import Tuple +import importlib +from dotenv import load_dotenv, set_key, find_dotenv, unset_key +import threading +import queue +import re # For regular expression operations + +os.environ["PYTHONIOENCODING"] = "utf-8" + + +# Configure logging system +def setup_logging(): + """Configure logging system to output logs to file, memory queue, and console""" + # Create logs directory (if it doesn't exist) + logs_dir = os.path.join(os.path.dirname(__file__), "logs") + os.makedirs(logs_dir, exist_ok=True) + + # Generate log filename (using current date) + current_date = datetime.datetime.now().strftime("%Y-%m-%d") + log_file = os.path.join(logs_dir, f"gradio_log_{current_date}.txt") + + # Configure root logger (captures all logs) + root_logger = logging.getLogger() + + # Clear existing handlers to avoid duplicate logs + for handler in root_logger.handlers[:]: + root_logger.removeHandler(handler) + + root_logger.setLevel(logging.INFO) + + # Create file handler + file_handler = logging.FileHandler(log_file, encoding="utf-8", mode="a") + file_handler.setLevel(logging.INFO) + + # Create console handler + console_handler = logging.StreamHandler() + console_handler.setLevel(logging.INFO) + + # Create formatter + formatter = logging.Formatter( + "%(asctime)s - %(name)s - %(levelname)s - %(message)s" + ) + file_handler.setFormatter(formatter) + console_handler.setFormatter(formatter) + + # Add handlers to root logger + root_logger.addHandler(file_handler) + root_logger.addHandler(console_handler) + + logging.info("Logging system initialized, log file: %s", log_file) + return log_file + + +# Global variables +LOG_FILE = None +LOG_QUEUE: queue.Queue = queue.Queue() # Log queue +STOP_LOG_THREAD = threading.Event() +CURRENT_PROCESS = None # Used to track the currently running process +STOP_REQUESTED 
= threading.Event() # Used to mark if stop was requested + + +# Log reading and updating functions +def log_reader_thread(log_file): + """Background thread that continuously reads the log file and adds new lines to the queue""" + try: + with open(log_file, "r", encoding="utf-8") as f: + # Move to the end of file + f.seek(0, 2) + + while not STOP_LOG_THREAD.is_set(): + line = f.readline() + if line: + LOG_QUEUE.put(line) # Add to conversation record queue + else: + # No new lines, wait for a short time + time.sleep(0.1) + except Exception as e: + logging.error(f"Log reader thread error: {str(e)}") + + +def get_latest_logs(max_lines=100, queue_source=None): + """Get the latest log lines from the queue, or read directly from the file if the queue is empty + + Args: + max_lines: Maximum number of lines to return + queue_source: Specify which queue to use, default is LOG_QUEUE + + Returns: + str: Log content + """ + logs = [] + log_queue = queue_source if queue_source else LOG_QUEUE + + # Create a temporary queue to store logs so we can process them without removing them from the original queue + temp_queue = queue.Queue() + temp_logs = [] + + try: + # Try to get all available log lines from the queue + while not log_queue.empty() and len(temp_logs) < max_lines: + log = log_queue.get_nowait() + temp_logs.append(log) + temp_queue.put(log) # Put the log back into the temporary queue + except queue.Empty: + pass + + # Process conversation records + logs = temp_logs + + # If there are no new logs or not enough logs, try to read the last few lines directly from the file + if len(logs) < max_lines and LOG_FILE and os.path.exists(LOG_FILE): + try: + with open(LOG_FILE, "r", encoding="utf-8") as f: + all_lines = f.readlines() + # If there are already some logs in the queue, only read the remaining needed lines + remaining_lines = max_lines - len(logs) + file_logs = ( + all_lines[-remaining_lines:] + if len(all_lines) > remaining_lines + else all_lines + ) + + # Add file logs 
before queue logs + logs = file_logs + logs + except Exception as e: + error_msg = f"Error reading log file: {str(e)}" + logging.error(error_msg) + if not logs: # Only add error message if there are no logs + logs = [error_msg] + + # If there are still no logs, return a prompt message + if not logs: + return "Initialization in progress..." + + # Filter logs, only keep logs with 'camel.agents.chat_agent - INFO' + filtered_logs = [] + for log in logs: + if "camel.agents.chat_agent - INFO" in log: + filtered_logs.append(log) + + # If there are no logs after filtering, return a prompt message + if not filtered_logs: + return "No conversation records yet." + + # Process log content, extract the latest user and assistant messages + simplified_logs = [] + + # Use a set to track messages that have already been processed, to avoid duplicates + processed_messages = set() + + def process_message(role, content): + # Create a unique identifier to track this message + msg_id = f"{role}:{content}" + if msg_id in processed_messages: + return None + + processed_messages.add(msg_id) + content = content.replace("\\n", "\n") + lines = [line.strip() for line in content.split("\n")] + content = "\n".join(lines) + + return f"[{role.title()} Agent]: {content}" + + for log in filtered_logs: + formatted_messages = [] + # Try to extract the message array + messages_match = re.search( + r"Model (.*?), index (\d+), processed these messages: (\[.*\])", log + ) + + if messages_match: + try: + messages = json.loads(messages_match.group(3)) + for msg in messages: + if msg.get("role") in ["user", "assistant"]: + formatted_msg = process_message( + msg.get("role"), msg.get("content", "") + ) + if formatted_msg: + formatted_messages.append(formatted_msg) + except json.JSONDecodeError: + pass + + # If JSON parsing fails or no message array is found, try to extract conversation content directly + if not formatted_messages: + user_pattern = re.compile(r"\{'role': 'user', 'content': '(.*?)'\}") + assistant_pattern = re.compile( + r"\{'role': 'assistant',
'content': '(.*?)'\}" + ) + + for content in user_pattern.findall(log): + formatted_msg = process_message("user", content) + if formatted_msg: + formatted_messages.append(formatted_msg) + + for content in assistant_pattern.findall(log): + formatted_msg = process_message("assistant", content) + if formatted_msg: + formatted_messages.append(formatted_msg) + + if formatted_messages: + simplified_logs.append("\n\n".join(formatted_messages)) + + # Format log output, ensure appropriate separation between each conversation record + formatted_logs = [] + for i, log in enumerate(simplified_logs): + # Remove excess whitespace characters from beginning and end + log = log.strip() + + formatted_logs.append(log) + + # Ensure each conversation record ends with a newline + if not log.endswith("\n"): + formatted_logs.append("\n") + + return "".join(formatted_logs) + + +# Dictionary containing module descriptions +MODULE_DESCRIPTIONS = { + "run": "Default mode: Using OpenAI model's default agent collaboration mode, suitable for most tasks.", + "run_mini": "Using OpenAI model with minimal configuration to process tasks", + "run_deepseek_zh": "Using deepseek model to process Chinese tasks", + "run_openai_compatiable_model": "Using openai compatible model to process tasks", + "run_ollama": "Using local ollama model to process tasks", + "run_qwen_mini_zh": "Using qwen model with minimal configuration to process tasks", + "run_qwen_zh": "Using qwen model to process tasks", +} + + +# Default environment variable template +DEFAULT_ENV_TEMPLATE = """#=========================================== +# MODEL & API +# (See https://docs.camel-ai.org/key_modules/models.html#) +#=========================================== + +# OPENAI API (https://platform.openai.com/api-keys) +OPENAI_API_KEY='Your_Key' +# OPENAI_API_BASE_URL="" + +# Azure OpenAI API +# AZURE_OPENAI_BASE_URL="" +# AZURE_API_VERSION="" +# AZURE_OPENAI_API_KEY="" +# AZURE_DEPLOYMENT_NAME="" + + +# Qwen API 
(https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key) +QWEN_API_KEY='Your_Key' + +# DeepSeek API (https://platform.deepseek.com/api_keys) +DEEPSEEK_API_KEY='Your_Key' + +#=========================================== +# Tools & Services API +#=========================================== + +# Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3) +GOOGLE_API_KEY='Your_Key' +SEARCH_ENGINE_ID='Your_ID' + +# Chunkr API (https://chunkr.ai/) +CHUNKR_API_KEY='Your_Key' + +# Firecrawl API (https://www.firecrawl.dev/) +FIRECRAWL_API_KEY='Your_Key' +#FIRECRAWL_API_URL="https://api.firecrawl.dev" +""" + + +def validate_input(question: str) -> bool: + """Validate if user input is valid + + Args: + question: User question + + Returns: + bool: Whether the input is valid + """ + # Check if input is empty or contains only spaces + if not question or question.strip() == "": + return False + return True + + +def run_owl(question: str, example_module: str) -> Tuple[str, str, str]: + """Run the OWL system and return results + + Args: + question: User question + example_module: Example module name to import (e.g., "run_terminal_zh" or "run_deep") + + Returns: + Tuple[...]: Answer, token count, status + """ + global CURRENT_PROCESS + + # Validate input + if not validate_input(question): + logging.warning("User submitted invalid input") + return ( + "Please enter a valid question", + "0", + "❌ Error: Invalid input question", + ) + + try: + # Ensure environment variables are loaded + load_dotenv(find_dotenv(), override=True) + logging.info( + f"Processing question: '{question}', using module: {example_module}" + ) + + # Check if the module is in MODULE_DESCRIPTIONS + if example_module not in MODULE_DESCRIPTIONS: + logging.error(f"User selected an unsupported module: {example_module}") + return ( + f"Selected module '{example_module}' is not supported", + "0", + "❌ Error: Unsupported module", + ) + + # 
Dynamically import target module + module_path = f"examples.{example_module}" + try: + logging.info(f"Importing module: {module_path}") + module = importlib.import_module(module_path) + except ImportError as ie: + logging.error(f"Unable to import module {module_path}: {str(ie)}") + return ( + f"Unable to import module: {module_path}", + "0", + f"❌ Error: Module {example_module} does not exist or cannot be loaded - {str(ie)}", + ) + except Exception as e: + logging.error( + f"Error occurred while importing module {module_path}: {str(e)}" + ) + return ( + f"Error occurred while importing module: {module_path}", + "0", + f"❌ Error: {str(e)}", + ) + + # Check if it contains the construct_society function + if not hasattr(module, "construct_society"): + logging.error( + f"construct_society function not found in module {module_path}" + ) + return ( + f"construct_society function not found in module {module_path}", + "0", + "❌ Error: Module interface incompatible", + ) + + # Build society simulation + try: + logging.info("Building society simulation...") + society = module.construct_society(question) + + except Exception as e: + logging.error(f"Error occurred while building society simulation: {str(e)}") + return ( + f"Error occurred while building society simulation: {str(e)}", + "0", + f"❌ Error: Build failed - {str(e)}", + ) + + # Run society simulation + try: + logging.info("Running society simulation...") + answer, chat_history, token_info = run_society(society) + logging.info("Society simulation completed") + except Exception as e: + logging.error(f"Error occurred while running society simulation: {str(e)}") + return ( + f"Error occurred while running society simulation: {str(e)}", + "0", + f"❌ Error: Run failed - {str(e)}", + ) + + # Safely get token count + if not isinstance(token_info, dict): + token_info = {} + + completion_tokens = token_info.get("completion_token_count", 0) + prompt_tokens = token_info.get("prompt_token_count", 0) + total_tokens = 
completion_tokens + prompt_tokens + + logging.info( + f"Processing completed, token usage: completion={completion_tokens}, prompt={prompt_tokens}, total={total_tokens}" + ) + + return ( + answer, + f"Completion tokens: {completion_tokens:,} | Prompt tokens: {prompt_tokens:,} | Total: {total_tokens:,}", + "✅ Successfully completed", + ) + + except Exception as e: + logging.error( + f"Uncaught error occurred while processing the question: {str(e)}" + ) + return (f"Error occurred: {str(e)}", "0", f"❌ Error: {str(e)}") + + +def update_module_description(module_name: str) -> str: + """Return the description of the selected module""" + return MODULE_DESCRIPTIONS.get(module_name, "No description available") + + +# Store environment variables configured from the frontend +WEB_FRONTEND_ENV_VARS: dict[str, str] = {} + + +def init_env_file(): + """Initialize .env file if it doesn't exist""" + dotenv_path = find_dotenv() + if not dotenv_path: + with open(".env", "w") as f: + f.write(DEFAULT_ENV_TEMPLATE) + dotenv_path = find_dotenv() + return dotenv_path + + +def load_env_vars(): + """Load environment variables and return as dictionary format + + Returns: + dict: Environment variable dictionary, each value is a tuple containing value and source (value, source) + """ + dotenv_path = init_env_file() + load_dotenv(dotenv_path, override=True) + + # Read environment variables from .env file + env_file_vars = {} + with open(dotenv_path, "r") as f: + for line in f: + line = line.strip() + if line and not line.startswith("#"): + if "=" in line: + key, value = line.split("=", 1) + env_file_vars[key.strip()] = value.strip().strip("\"'") + + # Get from system environment variables + system_env_vars = { + k: v + for k, v in os.environ.items() + if k not in env_file_vars and k not in WEB_FRONTEND_ENV_VARS + } + + # Merge environment variables and mark sources + env_vars = {} + + # Add system environment variables (lowest priority) + for key, value in system_env_vars.items(): + 
env_vars[key] = (value, "System") + + # Add .env file environment variables (medium priority) + for key, value in env_file_vars.items(): + env_vars[key] = (value, ".env file") + + # Add frontend configured environment variables (highest priority) + for key, value in WEB_FRONTEND_ENV_VARS.items(): + env_vars[key] = (value, "Frontend configuration") + # Ensure operating system environment variables are also updated + os.environ[key] = value + + return env_vars + + +def save_env_vars(env_vars): + """Save environment variables to .env file + + Args: + env_vars: Dictionary, keys are environment variable names, values can be strings or (value, source) tuples + """ + try: + dotenv_path = init_env_file() + + # Save each environment variable + for key, value_data in env_vars.items(): + if key and key.strip(): # Ensure key is not empty + # Handle case where value might be a tuple + if isinstance(value_data, tuple): + value = value_data[0] + else: + value = value_data + + set_key(dotenv_path, key.strip(), value.strip()) + + # Reload environment variables to ensure they take effect + load_dotenv(dotenv_path, override=True) + + return True, "Environment variables have been successfully saved!" 
+ except Exception as e: + return False, f"Error saving environment variables: {str(e)}" + + +def add_env_var(key, value, from_frontend=True): + """Add or update a single environment variable + + Args: + key: Environment variable name + value: Environment variable value + from_frontend: Whether it's from frontend configuration, default is True + """ + try: + if not key or not key.strip(): + return False, "Variable name cannot be empty" + + key = key.strip() + value = value.strip() + + # If from frontend, add to frontend environment variable dictionary + if from_frontend: + WEB_FRONTEND_ENV_VARS[key] = value + # Directly update system environment variables + os.environ[key] = value + + # Also update .env file + dotenv_path = init_env_file() + set_key(dotenv_path, key, value) + load_dotenv(dotenv_path, override=True) + + return True, f"Environment variable {key} has been successfully added/updated!" + except Exception as e: + return False, f"Error adding environment variable: {str(e)}" + + +def delete_env_var(key): + """Delete environment variable""" + try: + if not key or not key.strip(): + return False, "Variable name cannot be empty" + + key = key.strip() + + # Delete from .env file + dotenv_path = init_env_file() + unset_key(dotenv_path, key) + + # Delete from frontend environment variable dictionary + if key in WEB_FRONTEND_ENV_VARS: + del WEB_FRONTEND_ENV_VARS[key] + + # Also delete from current process environment + if key in os.environ: + del os.environ[key] + + return True, f"Environment variable {key} has been successfully deleted!" 
+ except Exception as e: + return False, f"Error deleting environment variable: {str(e)}" + + +def is_api_related(key: str) -> bool: + """Determine if an environment variable is API-related + + Args: + key: Environment variable name + + Returns: + bool: Whether it's API-related + """ + # API-related keywords + api_keywords = [ + "api", + "key", + "token", + "secret", + "password", + "openai", + "qwen", + "deepseek", + "google", + "search", + "hf", + "hugging", + "chunkr", + "firecrawl", + ] + + # Check if it contains API-related keywords (case insensitive) + return any(keyword in key.lower() for keyword in api_keywords) + + +def get_api_guide(key: str) -> str: + """Return the corresponding API guide based on the environment variable name + + Args: + key: Environment variable name + + Returns: + str: API guide link or description + """ + key_lower = key.lower() + if "openai" in key_lower: + return "https://platform.openai.com/api-keys" + elif "qwen" in key_lower or "dashscope" in key_lower: + return "https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key" + elif "deepseek" in key_lower: + return "https://platform.deepseek.com/api_keys" + elif "google" in key_lower: + return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3" + elif "search_engine_id" in key_lower: + return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3" + elif "chunkr" in key_lower: + return "https://chunkr.ai/" + elif "firecrawl" in key_lower: + return "https://www.firecrawl.dev/" + else: + return "" + + +def update_env_table(): + """Update environment variable table display, only showing API-related environment variables""" + env_vars = load_env_vars() + # Filter out API-related environment variables + api_env_vars = {k: v for k, v in env_vars.items() if is_api_related(k)} + # Convert to list format to meet Gradio Dataframe requirements + # Format: [Variable name, Variable value, 
Guide link] + result = [] + for k, v in api_env_vars.items(): + guide = get_api_guide(k) + # If there's a guide link, create a clickable link + guide_link = ( + f"<a href='{guide}' target='_blank'>🔗 Get</a>" + if guide + else "" + ) + result.append([k, v[0], guide_link]) + return result + + +def save_env_table_changes(data): + """Save changes to the environment variable table + + Args: + data: Dataframe data, possibly a pandas DataFrame object + + Returns: + str: Operation status information, containing HTML-formatted status message + """ + try: + logging.info( + f"Starting to process environment variable table data, type: {type(data)}" + ) + + # Get all current environment variables + current_env_vars = load_env_vars() + processed_keys = set() # Record processed keys to detect deleted variables + + # Handle pandas DataFrame objects + import pandas as pd + + if isinstance(data, pd.DataFrame): + # Get column name information + columns = data.columns.tolist() + logging.info(f"DataFrame column names: {columns}") + + # Iterate through each row of the DataFrame + for index, row in data.iterrows(): + # Access the data by column name + if len(columns) >= 3: + # Get variable name and value (column 0 is name, column 1 is value) + key = row[0] if isinstance(row, pd.Series) else row.iloc[0] + value = row[1] if isinstance(row, pd.Series) else row.iloc[1] + + # Check if it's an empty row or deleted variable + if ( + key and str(key).strip() + ): # If key name is not empty, add or update + logging.info( + f"Processing environment variable: {key} = {value}" + ) + add_env_var(key, str(value)) + processed_keys.add(key) + # Handle other formats + elif isinstance(data, dict): + logging.info(f"Dictionary format data keys: {list(data.keys())}") + # If it's a dictionary, try the likely keys in turn + if "data" in data: + rows = data["data"] + elif "values" in data: + rows = data["values"] + elif "value" in data: + rows = data["value"] + else: + # Try using the dictionary itself as row data + rows = [] + for key, value in data.items(): + if key not in ["headers", "types", "columns"]: + rows.append([key, value]) + + if isinstance(rows,
list): + for row in rows: + if isinstance(row, list) and len(row) >= 2: + key, value = row[0], row[1] + if key and str(key).strip(): + add_env_var(key, str(value)) + processed_keys.add(key) + elif isinstance(data, list): + # List format + for row in data: + if isinstance(row, list) and len(row) >= 2: + key, value = row[0], row[1] + if key and str(key).strip(): + add_env_var(key, str(value)) + processed_keys.add(key) + else: + logging.error(f"Unknown data format: {type(data)}") + return f"❌ Save failed: Unknown data format {type(data)}" + + # Process deleted variables - check if there are variables in current environment not appearing in the table + api_related_keys = {k for k in current_env_vars.keys() if is_api_related(k)} + keys_to_delete = api_related_keys - processed_keys + + # Delete variables no longer in the table + for key in keys_to_delete: + logging.info(f"Deleting environment variable: {key}") + delete_env_var(key) + + return "✅ Environment variables have been successfully saved" + except Exception as e: + import traceback + + error_details = traceback.format_exc() + logging.error(f"Error saving environment variables: {str(e)}\n{error_details}") + return f"❌ Save failed: {str(e)}" + + +def get_env_var_value(key): + """Get the actual value of an environment variable + + Priority: Frontend configuration > .env file > System environment variables + """ + # Check frontend configured environment variables + if key in WEB_FRONTEND_ENV_VARS: + return WEB_FRONTEND_ENV_VARS[key] + + # Check system environment variables (including those loaded from .env) + return os.environ.get(key, "") + + +def create_ui(): + """Create enhanced Gradio interface""" + + # Define conversation record update function + def update_logs2(): + """Get the latest conversation records and return them to the frontend for display""" + return get_latest_logs(100, LOG_QUEUE) + + def clear_log_file(): + """Clear log file content""" + try: + if LOG_FILE and os.path.exists(LOG_FILE): + # Clear log file
content instead of deleting the file + open(LOG_FILE, "w").close() + logging.info("Log file has been cleared") + # Clear log queue + while not LOG_QUEUE.empty(): + try: + LOG_QUEUE.get_nowait() + except queue.Empty: + break + return "" + else: + return "" + except Exception as e: + logging.error(f"Error clearing log file: {str(e)}") + return "" + + # Create a real-time log update function + def process_with_live_logs(question, module_name): + """Process questions and update logs in real-time""" + global CURRENT_PROCESS + + # Clear log file + clear_log_file() + + # Create a background thread to process the question + result_queue = queue.Queue() + + def process_in_background(): + try: + result = run_owl(question, module_name) + result_queue.put(result) + except Exception as e: + result_queue.put( + (f"Error occurred: {str(e)}", "0", f"❌ Error: {str(e)}") + ) + + # Start background processing thread + bg_thread = threading.Thread(target=process_in_background) + CURRENT_PROCESS = bg_thread # Record current process + bg_thread.start() + + # While waiting for processing to complete, update logs once per second + while bg_thread.is_alive(): + # Update conversation record display + logs2 = get_latest_logs(100, LOG_QUEUE) + + # Always update status + yield ( + "0", + " Processing...", + logs2, + ) + + time.sleep(1) + + # Processing complete, get results + if not result_queue.empty(): + result = result_queue.get() + answer, token_count, status = result + + # Final update of conversation record + logs2 = get_latest_logs(100, LOG_QUEUE) + + # Set different indicators based on status + if "Error" in status: + status_with_indicator = ( + f" {status}" + ) + else: + status_with_indicator = ( + f" {status}" + ) + + yield token_count, status_with_indicator, logs2 + else: + logs2 = get_latest_logs(100, LOG_QUEUE) + yield ( + "0", + " Terminated", + logs2, + ) + + with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue")) as app: + gr.Markdown( + """ + # 🦉 OWL Multi-Agent Collaboration 
System
+
+        An advanced multi-agent collaboration system built on the CAMEL framework, designed to solve complex problems through agent collaboration.
+        Models and tools can be customized by modifying local scripts.
+        This web app is currently in beta development. It is provided for demonstration and testing purposes only and is not yet recommended for production use.
+        """
+    )
+
+    # Add custom CSS
+    gr.HTML("""
+    """)
+
+    with gr.Row():
+        with gr.Column(scale=1):
+            question_input = gr.Textbox(
+                lines=5,
+                placeholder="Please enter your question...",
+                label="Question",
+                elem_id="question_input",
+                show_copy_button=True,
+                value="Open Baidu search, summarize the github stars, fork counts, etc. of camel-ai's camel framework, and write the numbers into a python file using the plot package, save it locally, and run the generated python file.",
+            )
+
+            # Enhanced module selection dropdown
+            # Only includes modules defined in MODULE_DESCRIPTIONS
+            module_dropdown = gr.Dropdown(
+                choices=list(MODULE_DESCRIPTIONS.keys()),
+                value="run_qwen_zh",
+                label="Select Function Module",
+                interactive=True,
+            )
+
+            # Module description text box
+            module_description = gr.Textbox(
+                value=MODULE_DESCRIPTIONS["run_qwen_zh"],
+                label="Module Description",
+                interactive=False,
+                elem_classes="module-info",
+            )
+
+            with gr.Row():
+                run_button = gr.Button(
+                    "Run", variant="primary", elem_classes="primary"
+                )
+
+            status_output = gr.HTML(
+                value=" Ready",
+                label="Status",
+            )
+            token_count_output = gr.Textbox(
+                label="Token Count", interactive=False, elem_classes="token-count"
+            )
+
+        with gr.Tabs():  # Set conversation record as the default selected tab
+            with gr.TabItem("Conversation Record"):
+                # Add conversation record display area
+                log_display2 = gr.Textbox(
+                    label="Conversation Record",
+                    lines=25,
+                    max_lines=100,
+                    interactive=False,
+                    autoscroll=True,
+                    show_copy_button=True,
+                    elem_classes="log-display",
+                    container=True,
+                    value="",
+                )
+
+                with gr.Row():
+                    
refresh_logs_button2 = gr.Button("Refresh Record")
+                    auto_refresh_checkbox2 = gr.Checkbox(
+                        label="Auto Refresh", value=True, interactive=True
+                    )
+                    clear_logs_button2 = gr.Button(
+                        "Clear Record", variant="secondary"
+                    )
+
+            with gr.TabItem("Environment Variable Management", id="env-settings"):
+                with gr.Box(elem_classes="env-manager-container"):
+                    gr.Markdown("""
+                    ## Environment Variable Management
+
+                    Set model API keys and other service credentials here. This information is saved to a local `.env` file, so your API keys are stored securely and never uploaded to the network. Correctly setting API keys is essential for the OWL system to function. Environment variables can be configured flexibly according to tool requirements.
+                    """)
+
+                    # Main content divided into a two-column layout
+                    with gr.Row():
+                        # Left column: Environment variable management controls
+                        with gr.Column(scale=3):
+                            with gr.Box(elem_classes="env-controls"):
+                                # Environment variable table - set to interactive for direct editing
+                                gr.Markdown("""
+