diff --git a/README.md b/README.md
index 2d7a4c3..43e4a7c 100644
--- a/README.md
+++ b/README.md
@@ -154,6 +154,21 @@ Run the following demo case:
 python owl/run.py
 ```
 
+## Running with Different Models
+
+OWL supports various LLM backends. You can use the following scripts to run with different models:
+
+```bash
+# Run with the Qwen model
+python owl/run_qwen.py
+
+# Run with the DeepSeek model
+python owl/run_deepseek.py
+
+# Run with other OpenAI-compatible models
+python owl/run_openai_compatiable_model.py
+```
+
 For a simpler version that only requires an LLM API key, you can try our minimal example:
 
 ```bash
@@ -169,7 +184,7 @@ question = "Task description here."
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
 
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 For uploading files, simply provide the file path along with your question:
@@ -180,8 +195,7 @@ question = "What is in the given DOCX file? Here is the file path: tmp/example.d
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
-
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 OWL will then automatically invoke document-related tools to process the file and extract the answer.
diff --git a/README_zh.md b/README_zh.md
index 6b1d536..fbad2d3 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -154,6 +154,21 @@ python owl/run.py
 python owl/run_mini.py
 ```
 
+## Using Different Models
+
+OWL supports various LLM backends. You can use the following scripts to run different models:
+
+```bash
+# Run with the Qwen model
+python owl/run_qwen.py
+
+# Run with the DeepSeek model
+python owl/run_deepseek.py
+
+# Run with other OpenAI-compatible models
+python owl/run_openai_compatiable_model.py
+```
+
 You can run your own tasks by modifying the `run.py` script:
 
 ```python
@@ -163,7 +178,7 @@ question = "Task description here."
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
 
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 When uploading files, simply provide the file path along with your question:
@@ -175,12 +190,11 @@ question = "What is in the given DOCX file? The file path is: tmp/e
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
 
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 OWL will then automatically invoke document-related tools to process the file and extract the answer.
-OWL will then automatically invoke document-related tools to process the file and extract the answer.
 
 You can try the following example tasks:
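
The `construct_society`/`run_society` calling pattern that both README snippets rely on can be sketched in a self-contained way. The stub bodies below are hypothetical stand-ins for OWL's real functions, which build and run a multi-agent society against a configured LLM backend and require API keys:

```python
# Self-contained sketch of the README's usage pattern. The stub bodies are
# hypothetical placeholders for OWL's real construct_society / run_society.

def construct_society(question: str) -> dict:
    # Real OWL: assembles a multi-agent society around the task description.
    return {"task": question}

def run_society(society: dict) -> tuple:
    # Real OWL: runs the agents and returns the answer, the full chat
    # history, and the token count.
    return f"(answer to: {society['task']})", [], 0

question = "Task description here."
society = construct_society(question)
answer, chat_history, token_count = run_society(society)

# The diff swaps logger.success(...) for a plain print at this step,
# removing the loguru dependency from the example:
print(f"Answer: {answer}")
```

Using plain `print` keeps the minimal example free of any logging setup; swapping back to `logger.success` only changes this final line.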