Merge pull request #36 from vekoada/expand-alt-endpoint-instructions

Expand instructions for local LLM integration
This commit is contained in:
David Zhang
2025-02-08 17:45:41 -08:00
committed by GitHub


@@ -102,6 +102,10 @@ FIRECRAWL_KEY="your_firecrawl_key"
OPENAI_KEY="your_openai_key"
```
To use a local LLM, comment out `OPENAI_KEY` and instead uncomment `OPENAI_ENDPOINT` and `OPENAI_MODEL`:
- Set `OPENAI_ENDPOINT` to the address of your local server (e.g. "http://localhost:1234/v1").
- Set `OPENAI_MODEL` to the name of the model loaded in your local server.
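For example, a `.env` configured for a local server might look like the following (the endpoint and model name are illustrative; substitute whatever your local server reports):

```shell
FIRECRAWL_KEY="your_firecrawl_key"
# OPENAI_KEY="your_openai_key"          # commented out when using a local LLM
OPENAI_ENDPOINT="http://localhost:1234/v1"  # example local server address
OPENAI_MODEL="your_local_model_name"        # placeholder: name of the loaded model
```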
### Docker
1. Clone the repository