mirror of https://github.com/jina-ai/node-DeepResearch.git
synced 2026-03-22 07:29:35 +08:00
chore: add local ollama lmstudio support
@@ -34,7 +34,7 @@ npm install
 
 ## Usage
 
-We use Gemini/OpenAI/[LocalLLM] for reasoning, [Jina Reader](https://jina.ai/reader) for searching and reading webpages, you can get a free API key with 1M tokens from jina.ai.
+We use Gemini/OpenAI/[LocalLLM](#use-local-llm) for reasoning, [Jina Reader](https://jina.ai/reader) for searching and reading webpages, you can get a free API key with 1M tokens from jina.ai.
 
 ```bash
 export GEMINI_API_KEY=... # for gemini
@@ -88,9 +88,9 @@ npm run dev "what should be jina ai strategy for 2025?"
 If you use Ollama or LMStudio, you can redirect the reasoning request to your local LLM by setting the following environment variables:
 
 ```bash
-export LLM_PROVIDER=openai
-export OPENAI_BASE_URL=http://127.0.0.1:1234/v1
-export DEFAULT_MODEL_NAME=qwen2.5-7b
+export LLM_PROVIDER=openai # yes, that's right - for local llm we still use openai client
+export OPENAI_BASE_URL=http://127.0.0.1:1234/v1 # your local llm endpoint
+export DEFAULT_MODEL_NAME=qwen2.5-7b # your local llm model name
 ```
 
 Not every LLM works with our reasoning flow, but you can test it out.
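For context on how these variables fit together (not part of this commit): a minimal TypeScript sketch, assuming the official `openai` npm package and an OpenAI-compatible local server such as LMStudio (default port 1234) or Ollama. The prompt and fallback values below are illustrative placeholders, not the project's actual code.

```typescript
// Minimal sketch, NOT the project's code: shows how an OpenAI-compatible client
// can be pointed at a local Ollama/LMStudio server via the same env variables.
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL,            // e.g. http://127.0.0.1:1234/v1
  apiKey: process.env.OPENAI_API_KEY ?? 'local',   // local servers typically ignore the key
});

async function main() {
  const res = await client.chat.completions.create({
    model: process.env.DEFAULT_MODEL_NAME ?? 'qwen2.5-7b',  // your local model name
    messages: [{ role: 'user', content: 'Reply with a single word: ready' }],
  });
  console.log(res.choices[0].message.content);
}

main().catch(console.error);
```

Running a snippet like this is a quick way to check that the three exported variables point at a reachable server and a loaded model before starting the full reasoning flow.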