This commit is contained in:
David Zhang 2025-03-07 23:23:13 -08:00
parent 06f55caf09
commit 10a26896de


```bash
OPENAI_KEY="your_openai_key"
```
To use a local LLM, comment out `OPENAI_KEY` and instead uncomment `OPENAI_ENDPOINT` and `OPENAI_MODEL`:
- Set `OPENAI_ENDPOINT` to the address of your local server (e.g. `"http://localhost:1234/v1"`).
- Set `OPENAI_MODEL` to the name of the model loaded in your local server.
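Put together, a local-LLM configuration in your env file might look like this (the endpoint is LM Studio's default address; the model name is a placeholder — use whatever your server actually has loaded):

```bash
# OPENAI_KEY="your_openai_key"    # commented out so the local endpoint is used instead
OPENAI_ENDPOINT="http://localhost:1234/v1"    # address of your local OpenAI-compatible server
OPENAI_MODEL="llama-3.1-8b-instruct"          # placeholder; match the model loaded in your server
```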
```bash
docker compose up -d
```
5. Execute `npm run docker` in the docker service:
```bash
docker exec -it deep-research npm run docker
```
The system will then:
3. Recursively explore deeper based on findings
4. Generate a comprehensive markdown report
The final report will be saved as `report.md` or `answer.md` in your working directory, depending on which modes you selected.
### Concurrency
If you have a paid version of Firecrawl or a local version, feel free to increase the concurrency limit so it runs faster.
If you have a free version, you may sometimes run into rate limit errors; you can reduce the limit, but it will run a lot slower.
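As a sketch, dialing concurrency down on the free Firecrawl tier could look like the following — note that the variable name `CONCURRENCY_LIMIT` is an assumption here; check your version of the repo for where the limit is actually configured:

```bash
# Free Firecrawl tier: keep concurrency low to avoid rate limit (429) errors.
# The variable name below is an assumption; some versions set this in code instead.
CONCURRENCY_LIMIT=1
```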
### DeepSeek R1
Deep research performs great on R1! We use [Fireworks](http://fireworks.ai) as the main provider for the R1 model. To use R1, simply set a Fireworks API key:
```bash
FIREWORKS_KEY="api_key"
```
The system will automatically switch over to use R1 instead of `o3-mini` when the key is detected.
### Custom endpoints and models
There are two other optional env vars that let you tweak the endpoint (for other OpenAI-compatible APIs like OpenRouter or Gemini) as well as the model string:
```bash
OPENAI_ENDPOINT="custom_endpoint"
CUSTOM_MODEL="custom_model"
```
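For instance, routing requests through OpenRouter could look like this — the base URL is OpenRouter's standard OpenAI-compatible endpoint, and the model string is only an example slug; substitute whichever model you want to use:

```bash
# Example: point the custom endpoint at OpenRouter's OpenAI-compatible API.
OPENAI_ENDPOINT="https://openrouter.ai/api/v1"
CUSTOM_MODEL="deepseek/deepseek-r1"    # example model slug; pick any model OpenRouter serves
```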
## How It Works