From 6bef270526d2616934166822ee116bad2edb8755 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E0=AE=AE=E0=AE=A9=E0=AF=8B=E0=AE=9C=E0=AF=8D=E0=AE=95?=
 =?UTF-8?q?=E0=AF=81=E0=AE=AE=E0=AE=BE=E0=AE=B0=E0=AF=8D=20=E0=AE=AA?=
 =?UTF-8?q?=E0=AE=B4=E0=AE=A9=E0=AE=BF=E0=AE=9A=E0=AF=8D=E0=AE=9A=E0=AE=BE?=
 =?UTF-8?q?=E0=AE=AE=E0=AE=BF?=
Date: Thu, 11 Jul 2024 21:26:36 +0530
Subject: [PATCH] Doc: Fix Azure Guide (#2894)

* Doc: Fix Azure Guide

* Update azureLLMs.md
---
 docs/modules/usage/llms/azureLLMs.md | 24 +++++++++++++++++++++---
 docs/modules/usage/llms/localLLMs.md |  4 ++--
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/docs/modules/usage/llms/azureLLMs.md b/docs/modules/usage/llms/azureLLMs.md
index aff734eddd..7ec4bc77f4 100644
--- a/docs/modules/usage/llms/azureLLMs.md
+++ b/docs/modules/usage/llms/azureLLMs.md
@@ -12,9 +12,27 @@ When running the OpenDevin Docker image, you'll need to set the following enviro
 LLM_BASE_URL="" # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
 LLM_API_KEY=""
 LLM_MODEL="azure/"
-LLM_API_VERSION = "" # e.g. "2024-02-15-preview"
+LLM_API_VERSION="" # e.g. "2024-02-15-preview"
 ```
+Example:
+```bash
+docker run -it \
+--pull=always \
+-e SANDBOX_USER_ID=$(id -u) \
+-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+-e LLM_BASE_URL="x.openai.azure.com" \
+-e LLM_API_VERSION="2024-02-15-preview" \
+-v $WORKSPACE_BASE:/opt/workspace_base \
+-v /var/run/docker.sock:/var/run/docker.sock \
+-p 3000:3000 \
+--add-host host.docker.internal:host-gateway \
+--name opendevin-app-$(date +%Y%m%d%H%M%S) \
+ghcr.io/opendevin/opendevin
+```
+
+You can set the LLM_MODEL and LLM_API_KEY in the OpenDevin UI itself.
+
 :::note
 You can find your ChatGPT deployment name on the deployments page in Azure. It could be the same with the chat model name (e.g. 'GPT4-1106-preview'), by default or initially set, but it doesn't have to be the same.
 Run opendevin, and when you load it in the browser, go to Settings and set model as above: "azure/<your-actual-gpt-deployment-name>". If it's not in the list, enter your own text and save it.
 :::
@@ -32,6 +50,6 @@ When running OpenDevin in Docker, set the following environment variables using
 
 ```
 LLM_EMBEDDING_MODEL="azureopenai"
-LLM_EMBEDDING_DEPLOYMENT_NAME = "" # e.g. "TextEmbedding..."
-LLM_API_VERSION = "" # e.g. "2024-02-15-preview"
+LLM_EMBEDDING_DEPLOYMENT_NAME="" # e.g. "TextEmbedding..."
+LLM_API_VERSION="" # e.g. "2024-02-15-preview"
 ```
diff --git a/docs/modules/usage/llms/localLLMs.md b/docs/modules/usage/llms/localLLMs.md
index 5325b1d244..8ad8c4821c 100644
--- a/docs/modules/usage/llms/localLLMs.md
+++ b/docs/modules/usage/llms/localLLMs.md
@@ -38,9 +38,9 @@ But when running `docker run`, you'll need to add a few more arguments:
 -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" \
 ```
 
-LLM_OLLAMA_BASE_URL is optional. If you set it, it will be used to show the available installed models in the UI. 
+LLM_OLLAMA_BASE_URL is optional. If you set it, it will be used to show the available installed models in the UI.
 
-For example:
+Example:
 
 ```bash
 # The directory you want OpenDevin to modify. MUST be an absolute path!