Mirror of https://github.com/camel-ai/owl.git (synced 2025-12-26 10:07:51 +08:00)
Merge branch 'main' of https://github.com/camel-ai/owl
This commit is contained in:
commit fa2237e5ce
@@ -28,8 +28,8 @@
[Community](https://github.com/camel-ai/owl#community) |
[Installation](#️-installation) |
[Examples](https://github.com/camel-ai/owl/tree/main/owl) |
[Paper](https://arxiv.org/abs/2303.17760) |
[Technical Report](https://hypnotic-mind-6bd.notion.site/OWL-Optimized-Workforce-Learning-for-General-Multi-Agent-Assistance-in-Real-World-Task-Automation-1d4004aeb21380158749c7f84b20643f) |
[Paper](https://github.com/camel-ai/owl/blob/main/assets/OWL_Technical_Report.pdf) |
<!-- [Technical Report](https://hypnotic-mind-6bd.notion.site/OWL-Optimized-Workforce-Learning-for-General-Multi-Agent-Assistance-in-Real-World-Task-Automation-1d4004aeb21380158749c7f84b20643f) | -->
[Citation](https://github.com/camel-ai/owl#citation) |
[Contributing](https://github.com/camel-ai/owl/graphs/contributors) |
[CAMEL-AI](https://www.camel-ai.org/)
@@ -143,6 +143,7 @@ Our vision is to revolutionize how AI agents collaborate to solve real-world tasks
</p>
</div>

- **[2025.05.27]**: We released the technical report of OWL, including more details on the workforce (framework) and optimized workforce learning (training methodology). [paper](https://github.com/camel-ai/owl/blob/main/assets/OWL_Technical_Report.pdf).
- **[2025.05.18]**: We open-sourced an initial version for replicating the workforce experiments on GAIA [here](https://github.com/camel-ai/owl/tree/gaia69).
- **[2025.04.18]**: We uploaded OWL's new GAIA benchmark score of **69.09%**, ranking #1 among open-source frameworks. Check the technical report [here](https://hypnotic-mind-6bd.notion.site/OWL-Optimized-Workforce-Learning-for-General-Multi-Agent-Assistance-in-Real-World-Task-Automation-1d4004aeb21380158749c7f84b20643f).
- **[2025.03.27]**: Integrated the SearxNGToolkit for performing web searches using the SearxNG search engine.

BIN  assets/OWL_Technical_Report.pdf  Normal file
Binary file not shown.

115  community_usecase/Puppeteer MCP/README.md  Normal file

@@ -0,0 +1,115 @@

# 🤖 Puppeteer Task Runner (Streamlit + CAMEL-AI + MCP)

A Streamlit app powered by the [CAMEL-AI OWL framework](https://github.com/camel-ai/owl) and **MCP (Model Context Protocol)** that connects to a Puppeteer-based MCP server. It allows natural language task execution via autonomous agents, combining local tool access with browser automation.

---

## ✨ Features

- **Text-to-action UI**: Enter a task and let the agent figure out how to solve it.
- **OwlRolePlaying Agents**: Multi-agent system using CAMEL-AI to simulate human–AI collaboration.
- **MCP Integration**: Connects to Puppeteer MCP servers for real-world browser-based task execution (see the sketch below).
- **Error handling & logs**: Gracefully handles connection issues and provides debug logs.
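
Under the hood this maps onto a small amount of code. The sketch below condenses the flow from `demo.py` (added later in this commit); explicit model configuration is omitted for brevity, so treat it as orientation rather than a drop-in script:

```python
# Condensed from demo.py: connect to the Puppeteer MCP server, hand its tools
# to the assistant agent, and run the task through an OwlRolePlaying society.
import asyncio
from pathlib import Path

from camel.toolkits import MCPToolkit
from owl.utils.enhanced_role_playing import OwlRolePlaying, arun_society


async def answer_task(task: str) -> str:
    toolkit = MCPToolkit(
        config_path=str(Path(__file__).parent / "mcp_servers_config.json")
    )
    await toolkit.connect()
    try:
        society = OwlRolePlaying(
            task_prompt=task,
            with_task_specify=False,
            user_role_name="user",
            # demo.py also passes explicit OpenAI models via user_agent_kwargs /
            # assistant_agent_kwargs; they are omitted here for brevity.
            user_agent_kwargs={},
            assistant_role_name="assistant",
            assistant_agent_kwargs={"tools": toolkit.get_tools()},
        )
        answer, _chat_history, _token_count = await arun_society(society)
        return answer
    finally:
        await toolkit.disconnect()


if __name__ == "__main__":
    print(asyncio.run(answer_task("Search for the weather in Paris")))
```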
---

## 📋 Prerequisites

- Python >=3.10,<3.13
- Node.js & npm (for the MCP Puppeteer server plugin)
- A valid OpenAI API key set in your environment:

  ```bash
  export OPENAI_API_KEY="your_api_key_here"
  ```

---

## 🛠️ Setup

1. **Clone the repository**

   ```bash
   git clone https://github.com/camel-ai/owl.git
   cd "owl/community_usecase/Puppeteer MCP"
   ```

2. **Create a virtual environment**

   ```bash
   python -m venv venv
   source venv/bin/activate   # macOS/Linux
   venv\Scripts\activate      # Windows
   ```

3. **Install Python dependencies**

   ```bash
   pip install -r requirements.txt
   ```

---

## ⚙️ Configuration

1. **Environment Variables**

   Create a `.env` file in the root directory with:

   ```ini
   OPENAI_API_KEY=your_openai_key_here
   ```

2. **MCP Server Config**

   Ensure `mcp_servers_config.json` is present and contains:

   ```json
   {
     "mcpServers": {
       "puppeteer": {
         "command": "npx",
         "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
       }
     }
   }
   ```

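Before launching Streamlit, you can optionally sanity-check that Node can fetch and start the Puppeteer server by running the same command the config entry uses (a manual check only, assuming `npx` is on your PATH; the server sits waiting on stdio until you stop it with Ctrl+C):

```bash
npx -y @modelcontextprotocol/server-puppeteer
```
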
---

## 🚀 Running the App

Run the Streamlit app:

```bash
streamlit run demo.py
```

This will open the UI in your browser. Enter a natural language task (e.g., “Search for the weather in Paris”) and click **Run Task**.

---

## 🔧 Customization

- **Model config**: Change the model types in the `construct_society` function (see the sketch below).
- **Prompt behavior**: Adjust task wording, agent roles, or tool combinations as needed.
- **Error handling**: You can improve the exception output area for better UI display.
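
For example, to point the agents at different OpenAI models, you could edit the `ModelFactory.create` calls in `construct_society` in `demo.py` roughly as follows (a minimal sketch; `ModelType.GPT_4O_MINI` is used purely for illustration and its availability depends on your installed `camel-ai` version):

```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Sketch: swap the models used inside construct_society().
models = {
    "user": ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type=ModelType.GPT_4O_MINI,  # illustrative; use any model your camel-ai version supports
        model_config_dict={"temperature": 0},
    ),
    "assistant": ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type=ModelType.GPT_4O,
        model_config_dict={"temperature": 0},
    ),
}
```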
---

## 📂 Project Structure

```
Puppeteer-MCP/
├── demo.py                   # Streamlit frontend
├── mcp_servers_config.json   # MCP config
├── requirements.txt          # Python dependencies
└── .env                      # Secrets and keys
```

---

## 📚 References

- [CAMEL-AI OWL Framework](https://github.com/camel-ai/owl)
- [Anthropic MCP Protocol](https://docs.anthropic.com/en/docs/agents-and-tools/mcp)
- [Streamlit Docs](https://docs.streamlit.io/)
- [Puppeteer MCP Server (custom)](https://github.com/your-org/mcp-server-puppeteer)

---

*Let your agents browse and automate the web for you!*

112  community_usecase/Puppeteer MCP/demo.py  Normal file

@@ -0,0 +1,112 @@

import asyncio
from pathlib import Path

import streamlit as st
from dotenv import load_dotenv

from camel.models import ModelFactory
from camel.toolkits import FunctionTool
from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
from camel.toolkits import MCPToolkit, SearchToolkit
import sys

from owl.utils.enhanced_role_playing import OwlRolePlaying, arun_society
import logging

logging.basicConfig(level=logging.DEBUG)

# Load environment variables and set logger level
if sys.platform.startswith("win"):
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
load_dotenv()
set_log_level(level="DEBUG")


async def construct_society(task: str, tools: list[FunctionTool]) -> OwlRolePlaying:
    """
    Build a multi-agent OwlRolePlaying instance.
    """
    models = {
        "user": ModelFactory.create(
            model_platform=ModelPlatformType.OPENAI,
            model_type=ModelType.GPT_4O,
            model_config_dict={"temperature": 0},
        ),
        "assistant": ModelFactory.create(
            model_platform=ModelPlatformType.OPENAI,
            model_type=ModelType.GPT_4O,
            model_config_dict={"temperature": 0},
        ),
    }

    user_agent_kwargs = {"model": models["user"]}
    assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}
    task_kwargs = {
        "task_prompt": task,
        "with_task_specify": False,
    }

    society = OwlRolePlaying(
        **task_kwargs,
        user_role_name="user",
        user_agent_kwargs=user_agent_kwargs,
        assistant_role_name="assistant",
        assistant_agent_kwargs=assistant_agent_kwargs,
    )
    return society


async def run_task(task: str) -> str:
    """
    Connect to MCP servers, run the provided task, and return the answer.
    """
    # Construct the path to your MCP server config file.
    config_path = Path(__file__).parent / "mcp_servers_config.json"
    mcp_toolkit = MCPToolkit(config_path=str(config_path))
    answer = ""
    try:
        logging.debug("Connecting to MCP server...")
        await mcp_toolkit.connect()
        logging.debug("Connected to MCP server.")

        # Prepare all tools from the MCP toolkit and the web search toolkit
        tools = [*mcp_toolkit.get_tools(), SearchToolkit().search_duckduckgo]
        society = await construct_society(task, tools)
        answer, chat_history, token_count = await arun_society(society)
    except Exception as e:
        import traceback

        st.error(f"An error occurred: {e}")
        st.text(traceback.format_exc())
    finally:
        try:
            await mcp_toolkit.disconnect()
        except Exception as e:
            answer += f"\nError during disconnect: {e}"
    return answer


def main():
    st.title("OWL X Puppeteer MCP Server")

    # Get the task from the user
    task = st.text_input(
        "Enter your task",
        value="Please find the top articles from dev.to this week and go to each article and then summarize it. Please use MCP given to you",
    )

    if st.button("Run Task"):
        if not task.strip():
            st.error("Please enter a valid task.")
        else:
            with st.spinner("Processing the task..."):
                try:
                    # Create a new event loop for the current thread
                    new_loop = asyncio.new_event_loop()
                    asyncio.set_event_loop(new_loop)
                    result = new_loop.run_until_complete(run_task(task))
                except Exception as e:
                    st.error(f"An error occurred: {e}")
                    result = ""
                finally:
                    new_loop.close()
            st.success("Task completed!")
            st.write(result)


if __name__ == "__main__":
    main()

8  community_usecase/Puppeteer MCP/mcp_servers_config.json  Normal file

@@ -0,0 +1,8 @@

{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}

3  community_usecase/Puppeteer MCP/requirements.txt  Normal file

@@ -0,0 +1,3 @@

streamlit
camel-ai[all]
python-dotenv

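Note that `demo.py` also imports `owl.utils.enhanced_role_playing`, which the three packages above do not provide. If that import fails, one hedged workaround is to make the cloned repository root (the directory containing the `owl/` package) visible to Python; the path below is illustrative:

```bash
# Illustrative path; point this at your own clone of the repository.
export PYTHONPATH="/path/to/owl:$PYTHONPATH"
```
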
@@ -2,8 +2,10 @@
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@executeautomation/playwright-mcp-server"]
      "args": [
        "@playwright/mcp@latest"
      ]
    }
  }
}

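This hunk swaps the Playwright MCP server package from `@executeautomation/playwright-mcp-server` to `@playwright/mcp`. To check that the new package resolves and starts on your machine, you can invoke it by hand, assuming Node.js and `npx` are installed (it waits on stdio; stop it with Ctrl+C):

```bash
npx @playwright/mcp@latest
```
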
@@ -11,25 +11,83 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
"""MCP Multi-Agent System Example

This example demonstrates how to use MCP (Model Context Protocol) with CAMEL agents
for advanced information retrieval and processing tasks.

Environment Setup:
1. Configure the required dependencies of the owl library.
   Refer to: https://github.com/camel-ai/owl for the installation guide.

2. MCP Server Setup:

    2.1 MCP Playwright Service:
    ```bash
    # Install MCP service
    npm install -g @executeautomation/playwright-mcp-server
    npx playwright install-deps

    # Configure in mcp_servers_config.json:
    {
      "mcpServers": {
        "playwright": {
          "command": "npx",
          "args": ["-y", "@executeautomation/playwright-mcp-server"]
        }
      }
    }
    ```

    2.2 MCP Fetch Service (Optional - for better retrieval):
    ```bash
    # Install MCP service
    pip install mcp-server-fetch

    # Configure in mcp_servers_config.json:
    {
      "mcpServers": {
        "fetch": {
          "command": "python",
          "args": ["-m", "mcp_server_fetch"]
        }
      }
    }
    ```

Usage:
1. Ensure all MCP servers are properly configured in mcp_servers_config.json.
2. Run this script to create a multi-agent system that can:
   - Access and manipulate files through MCP Desktop Commander
   - Perform web automation tasks using Playwright
   - Process and generate information using Mistral
   - Fetch web content (if the fetch service is configured)
3. The system will execute the specified task while maintaining security through
   controlled access.

Note:
- All file operations are restricted to configured directories
- Supports asynchronous operations for efficient processing
"""

import asyncio
import sys
import pathlib
from pathlib import Path
from typing import List

from dotenv import load_dotenv

from camel.models import ModelFactory
from camel.toolkits import (
    AudioAnalysisToolkit,
    CodeExecutionToolkit,
    ExcelToolkit,
    ImageAnalysisToolkit,
    SearchToolkit,
    VideoAnalysisToolkit,
    BrowserToolkit,
    FileWriteToolkit,
)
from camel.toolkits import FunctionTool
from camel.types import ModelPlatformType, ModelType
from camel.logger import set_log_level
from camel.toolkits import MCPToolkit, FileWriteToolkit, CodeExecutionToolkit
from camel.societies import RolePlaying

from owl.utils import run_society, DocumentProcessingToolkit
from owl.utils.enhanced_role_playing import arun_society

import pathlib

base_dir = pathlib.Path(__file__).parent.parent
env_path = base_dir / "owl" / ".env"

@@ -38,21 +96,20 @@ load_dotenv(dotenv_path=str(env_path))
set_log_level(level="DEBUG")


def construct_society(question: str) -> RolePlaying:
    r"""Construct a society of agents based on Mistral model(s).
async def construct_society(
    question: str,
    tools: List[FunctionTool],
) -> RolePlaying:
    r"""Build a multi-agent RolePlaying instance.

    Args:
        question (str): The task or question to be addressed by the society.

    Returns:
        RolePlaying: A configured society of agents ready to address the question.
        question (str): The question to ask.
        tools (List[FunctionTool]): The MCP tools to use.
    """
    # Create models for different components
    models = {
        "user": ModelFactory.create(
            model_platform=ModelPlatformType.MISTRAL,
            model_type=ModelType.MISTRAL_LARGE,
            model_type=ModelType.MISTRAL_MEDIUM_3,
            model_config_dict={"temperature": 0},
        ),
        "assistant": ModelFactory.create(
@@ -60,63 +117,19 @@ def construct_society(question: str) -> RolePlaying:
            model_type=ModelType.MISTRAL_LARGE,
            model_config_dict={"temperature": 0},
        ),
        "browsing": ModelFactory.create(
            model_platform=ModelPlatformType.MISTRAL,
            model_type=ModelType.MISTRAL_LARGE,
            model_config_dict={"temperature": 0},
        ),
        "planning": ModelFactory.create(
            model_platform=ModelPlatformType.MISTRAL,
            model_type=ModelType.MISTRAL_LARGE,
            model_config_dict={"temperature": 0},
        ),
        "video": ModelFactory.create(
            model_platform=ModelPlatformType.MISTRAL,
            model_type=ModelType.MISTRAL_PIXTRAL_12B,
            model_config_dict={"temperature": 0},
        ),
        "image": ModelFactory.create(
            model_platform=ModelPlatformType.MISTRAL,
            model_type=ModelType.MISTRAL_PIXTRAL_12B,
            model_config_dict={"temperature": 0},
        ),
        "document": ModelFactory.create(
            model_platform=ModelPlatformType.MISTRAL,
            model_type=ModelType.MISTRAL_LARGE,
            model_config_dict={"temperature": 0},
        ),
    }

    # Configure toolkits
    tools = [
        *BrowserToolkit(
            headless=True,
            web_agent_model=models["browsing"],
            planning_agent_model=models["planning"],
        ).get_tools(),
        *VideoAnalysisToolkit(model=models["video"]).get_tools(),
        *AudioAnalysisToolkit().get_tools(),
        *CodeExecutionToolkit(sandbox="subprocess", verbose=True).get_tools(),
        *ImageAnalysisToolkit(model=models["image"]).get_tools(),
        SearchToolkit().search_duckduckgo,
        SearchToolkit().search_google,
        SearchToolkit().search_wiki,
        *ExcelToolkit().get_tools(),
        *DocumentProcessingToolkit(model=models["document"]).get_tools(),
        *FileWriteToolkit(output_dir="./").get_tools(),
    ]

    # Configure agent roles and parameters
    user_agent_kwargs = {"model": models["user"]}
    assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}
    assistant_agent_kwargs = {
        "model": models["assistant"],
        "tools": tools,
    }

    # Configure task parameters
    task_kwargs = {
        "task_prompt": question,
        "with_task_specify": False,
    }

    # Create and return the society
    society = RolePlaying(
        **task_kwargs,
        user_role_name="user",
@@ -124,25 +137,44 @@ def construct_society(question: str) -> RolePlaying:
        assistant_role_name="assistant",
        assistant_agent_kwargs=assistant_agent_kwargs,
    )

    return society


def main():
    r"""Main function to run the OWL system with an example question."""
    # Default research question
    default_task = "Open Brave search, summarize the github stars, fork counts, etc. of camel-ai's camel framework, and write the numbers into a python file using the plot package, save it locally, and run the generated python file. Note: You have been provided with the necessary tools to complete this task."
async def main():
    config_path = Path(__file__).parent / "mcp_servers_config.json"
    mcp_toolkit = MCPToolkit(config_path=str(config_path))

    # Override default task if command line argument is provided
    task = sys.argv[1] if len(sys.argv) > 1 else default_task
    try:
        await mcp_toolkit.connect()

        # Construct and run the society
        society = construct_society(task)
        answer, chat_history, token_count = run_society(society)
        # Default task
        default_task = (
            "Help me search the latest reports about smart city, "
            "summarize them and help me generate a PDF file. You have "
            "been provided with tools to do browser operation. Open "
            "browser to finish the task."
        )

        # Output the result
        print(f"\033[94mAnswer: {answer}\033[0m")
        # Override default task if command line argument is provided
        task = sys.argv[1] if len(sys.argv) > 1 else default_task

        # Connect to toolkits
        tools = [
            *mcp_toolkit.get_tools(),
            *FileWriteToolkit().get_tools(),
            *CodeExecutionToolkit().get_tools(),
        ]
        society = await construct_society(task, tools)
        answer, chat_history, token_count = await arun_society(society)
        print(f"\033[94mAnswer: {answer}\033[0m")

    finally:
        # Make sure to disconnect safely after all operations are completed.
        try:
            await mcp_toolkit.disconnect()
        except Exception:
            print("Disconnect failed")


if __name__ == "__main__":
    main()
    asyncio.run(main())

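Because the rewritten `main()` still reads an optional task from `sys.argv[1]`, the example can be pointed at a custom task from the shell. A hypothetical invocation (the script path is a placeholder, since the file name is not shown in this extract):

```bash
python path/to/this_example.py "Search the latest smart city reports and summarize them into a PDF"
```
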
@@ -28,6 +28,7 @@ dependencies = [
    "mcp-server-fetch==2025.1.17",
    "xmltodict>=0.14.2",
    "firecrawl>=2.5.3",
    "mistralai>=1.7.0",
]

[project.urls]

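Since `uv.lock` is regenerated in the same commit (next section), running `uv sync` after pulling this change should pick up the new dependency; if you manage the environment with pip instead, a hedged equivalent is:

```bash
pip install "mistralai>=1.7.0"
```
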
18  uv.lock  generated

@@ -1832,6 +1832,22 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979 },
]

[[package]]
name = "mistralai"
version = "1.7.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "eval-type-backport" },
    { name = "httpx" },
    { name = "pydantic" },
    { name = "python-dateutil" },
    { name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/98/d0/11e0116a02aa88701422ccc048185ed8834754f3b94140bfad09620c9d11/mistralai-1.7.0.tar.gz", hash = "sha256:94e3eb23c1d3ed398a95352062fd8c92993cc3754ed18e9a35b60aa3db0bd103", size = 141981 }
wheels = [
    { url = "https://files.pythonhosted.org/packages/60/77/eb7519ddfccb6428ac430129e7b42cc662e710cb719f82c0ffe79ab50859/mistralai-1.7.0-py3-none-any.whl", hash = "sha256:e0e75ab8508598d69ae19b14d9d7e905db6259a2de3cf9204946a27e9bf81c5d", size = 301483 },
]

[[package]]
name = "more-itertools"
version = "10.7.0"

@@ -2389,6 +2405,7 @@ dependencies = [
    { name = "gradio" },
    { name = "mcp-server-fetch" },
    { name = "mcp-simple-arxiv" },
    { name = "mistralai" },
    { name = "xmltodict" },
]

@@ -2400,6 +2417,7 @@ requires-dist = [
    { name = "gradio", specifier = ">=3.50.2" },
    { name = "mcp-server-fetch", specifier = "==2025.1.17" },
    { name = "mcp-simple-arxiv", specifier = "==0.2.2" },
    { name = "mistralai", specifier = ">=1.7.0" },
    { name = "xmltodict", specifier = ">=0.14.2" },
]