Update documentation with new settings page (#6716)

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
mamoodi 2025-02-19 14:30:36 -05:00 committed by GitHub
parent 61ce673400
commit e92e4a1cbc
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
14 changed files with 101 additions and 94 deletions

View File

@ -46,3 +46,11 @@ docker run -it \
-e THAT=that
...
```
### Referring to UI Elements
When referencing UI elements, wrap the element name in backticks.
Example:
1. Toggle the `Advanced` option
2. Enter your model in the `Custom Model` text box.

View File

@ -1,9 +1,6 @@
# GUI Mode
## Introduction
OpenHands provides a user-friendly Graphical User Interface (GUI) mode for interacting with the AI assistant.
This mode offers an intuitive way to set up the environment, manage settings, and communicate with the AI.
OpenHands provides a Graphical User Interface (GUI) mode for interacting with the AI assistant.
## Installation and Setup
@ -14,104 +11,95 @@ This mode offers an intuitive way to set up the environment, manage settings, an
### Initial Setup
1. Upon first launch, you'll see a settings modal.
2. Select an `LLM Provider` and `LLM Model` from the dropdown menus.
1. Upon first launch, you'll see a settings page.
2. Select an `LLM Provider` and `LLM Model` from the dropdown menus. If the required model does not exist in the list,
toggle `Advanced` options and enter it with the correct prefix in the `Custom Model` text box.
3. Enter the corresponding `API Key` for your chosen provider.
4. Click "Save" to apply the settings.
4. Click `Save Changes` to apply the settings.
### GitHub Token Setup
OpenHands automatically exports a `GITHUB_TOKEN` to the shell environment if it is available. This can happen in two ways:
- **Locally (OSS)**: The user directly inputs their GitHub token.
- **Online (SaaS)**: The token is obtained through GitHub OAuth authentication.
#### Setting Up a Local GitHub Token
1. **Generate a Personal Access Token (PAT)**:
- Go to GitHub Settings > Developer Settings > Personal Access Tokens > Tokens (classic).
- Click "Generate new token (classic)".
- **Local Installation**: The user directly inputs their GitHub token.
<details>
<summary>Setting Up a GitHub Token</summary>
1. **Generate a Personal Access Token (PAT)**:
- On GitHub, go to Settings > Developer Settings > Personal Access Tokens > Tokens (classic).
- Click `Generate new token (classic)`.
- Required scopes:
- `repo` (Full control of private repositories)
- `workflow` (Update GitHub Action workflows)
- `read:org` (Read organization data)
2. **Enter Token in OpenHands**:
- Click the Settings button (gear icon).
- Navigate to the `GitHub Settings` section.
- Paste your token in the `GitHub Token` field.
- Click `Save Changes` to apply the changes.
</details>
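If you want to confirm the token has the scopes listed above before entering it in OpenHands, one option is to query the GitHub API and inspect the `X-OAuth-Scopes` response header. A minimal sketch, assuming `curl` is available and the token is exported as `GITHUB_TOKEN` in your shell:
```bash
# Print only the response headers for the authenticated /user endpoint and
# look for the scopes granted to this token (classic PATs only).
curl -sS -D - -o /dev/null \
  -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/user | grep -i "x-oauth-scopes"
# Expected output is something like:
# x-oauth-scopes: read:org, repo, workflow
```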
2. **Enter Token in OpenHands**:
- Click the Settings button (gear icon) in the top right.
- Navigate to the "GitHub" section.
- Paste your token in the "GitHub Token" field.
- Click "Save" to apply the changes.
<details>
<summary>Organizational Token Policies</summary>
#### Organizational Token Policies
If you're working with organizational repositories, additional setup may be required:
1. **Check Organization Requirements**:
- Organization admins may enforce specific token policies.
- Some organizations require tokens to be created with SSO enabled.
- Review your organization's [token policy settings](https://docs.github.com/en/organizations/managing-programmatic-access-to-your-organization/setting-a-personal-access-token-policy-for-your-organization).
2. **Verify Organization Access**:
- Go to your token settings on GitHub.
- Look for the organization under "Organization access".
- If required, click "Enable SSO" next to your organization.
- Look for the organization under `Organization access`.
- If required, click `Enable SSO` next to your organization.
- Complete the SSO authorization process.
</details>
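To check whether a token can actually see an organization (and whether SSO authorization is still pending), you can also probe the GitHub API directly. A rough sketch, assuming `GITHUB_TOKEN` is exported and `YOUR_ORG` is a placeholder for your organization's name:
```bash
# List the organizations this token can currently access.
curl -sS -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user/orgs

# Probe an organization's repositories; a 403 here often means SSO has not yet
# been authorized for this token (YOUR_ORG is a placeholder).
curl -sS -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/orgs/YOUR_ORG/repos
```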
#### OAuth Authentication (Online Mode)
<details>
<summary>Troubleshooting</summary>
When using OpenHands in online mode, the GitHub OAuth flow:
Common issues and solutions:
1. Requests the following permissions:
- **Token Not Recognized**:
- Ensure the token is properly saved in settings.
- Check that the token hasn't expired.
- Verify the token has the required scopes.
- Try regenerating the token.
- **Organization Access Denied**:
- Check if SSO is required but not enabled.
- Verify organization membership.
- Contact organization admin if token policies are blocking access.
- **Verifying Token Works**:
- The app will show a green checkmark if the token is valid.
- Try accessing a repository to confirm permissions.
- Check the browser console for any error messages.
</details>
- **OpenHands Cloud**: The token is obtained through GitHub OAuth authentication.
<details>
<summary>OAuth Authentication</summary>
When using OpenHands Cloud, the GitHub OAuth flow requests the following permissions:
- Repository access (read/write)
- Workflow management
- Organization read access
2. Authentication steps:
- Click "Sign in with GitHub" when prompted.
To authenticate OpenHands:
- Click `Sign in with GitHub` when prompted.
- Review the requested permissions.
- Authorize OpenHands to access your GitHub account.
- If using an organization, authorize organization access if prompted.
#### Troubleshooting
Common issues and solutions:
- **Token Not Recognized**:
- Ensure the token is properly saved in settings.
- Check that the token hasn't expired.
- Verify the token has the required scopes.
- Try regenerating the token.
- **Organization Access Denied**:
- Check if SSO is required but not enabled.
- Verify organization membership.
- Contact organization admin if token policies are blocking access.
- **Verifying Token Works**:
- The app will show a green checkmark if the token is valid.
- Try accessing a repository to confirm permissions.
- Check the browser console for any error messages.
- Use the "Test Connection" button in settings if available.
</details>
### Advanced Settings
1. Toggle `Advanced Options` to access additional settings.
1. Inside the Settings page, toggle `Advanced` options to access additional settings.
2. Use the `Custom Model` text box to manually enter a model if it's not in the list.
3. Specify a `Base URL` if required by your LLM provider.
### Main Interface
The main interface consists of several key components:
- **Chat Window**: The central area where you can view the conversation history with the AI assistant.
- **Input Box**: Located at the bottom of the screen, use this to type your messages or commands to the AI.
- **Send Button**: Click this to send your message to the AI.
- **Settings Button**: A gear icon that opens the settings modal, allowing you to adjust your configuration at any time.
- **Workspace Panel**: Displays the files and folders in your workspace, allowing you to navigate and view files, or the agent's past commands or web browsing history.
### Interacting with the AI
1. Type your question, request, or task description in the input box.
1. Type your prompt in the input box.
2. Click the send button or press Enter to submit your message.
3. The AI will process your input and provide a response in the chat window.
4. You can continue the conversation by asking follow-up questions or providing additional information.

View File

@ -12,7 +12,8 @@ A system with a modern processor and a minimum of **4GB RAM** is recommended to
<details>
<summary>MacOS</summary>
### Docker Desktop
**Docker Desktop**
1. [Install Docker Desktop on Mac](https://docs.docker.com/desktop/setup/install/mac-install).
2. Open Docker Desktop, go to `Settings > Advanced` and ensure `Allow the default Docker socket to be used` is enabled.
@ -25,7 +26,7 @@ A system with a modern processor and a minimum of **4GB RAM** is recommended to
Tested with Ubuntu 22.04.
:::
### Docker Desktop
**Docker Desktop**
1. [Install Docker Desktop on Linux](https://docs.docker.com/desktop/setup/install/linux/).
@ -33,12 +34,13 @@ A system with a modern processor and a minimum of **4GB RAM** is recommended to
<details>
<summary>Windows</summary>
### WSL
**WSL**
1. [Install WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
2. Run `wsl --version` in PowerShell and confirm `Default Version: 2`.
### Docker Desktop
**Docker Desktop**
1. [Install Docker Desktop on Windows](https://docs.docker.com/desktop/setup/install/windows-install).
2. Open Docker Desktop, go to `Settings` and confirm the following:
@ -78,24 +80,22 @@ or run it on tagged issues with [a github action](https://docs.all-hands.dev/mod
## Setup
Upon launching OpenHands, you'll see a settings modal. You **must** select an `LLM Provider` and `LLM Model` and enter a corresponding `API Key`.
Upon launching OpenHands, you'll see a Settings page. You **must** select an `LLM Provider` and `LLM Model` and enter a corresponding `API Key`.
These can be changed at any time by selecting the `Settings` button (gear icon) in the UI.
If the required `LLM Model` does not exist in the list, you can toggle `Advanced Options` and manually enter it with the correct prefix
If the required model does not exist in the list, you can toggle `Advanced` options and manually enter it with the correct prefix
in the `Custom Model` text box.
The `Advanced Options` also allow you to specify a `Base URL` if required.
The `Advanced` options also allow you to specify a `Base URL` if required.
<div style={{ display: 'flex', justifyContent: 'center', gap: '20px' }}>
<img src="/img/settings-screenshot.png" alt="settings-modal" width="340" />
<img src="/img/settings-advanced.png" alt="settings-modal" width="335" />
</div>
Now you're ready to [get started with OpenHands](./getting-started).
## Versions
The command above pulls the most recent stable release of OpenHands. You have other options as well:
- For a specific release, use `docker.all-hands.dev/all-hands-ai/openhands:$VERSION`, replacing $VERSION with the version number.
- We use semver, and release major, minor, and patch tags. So `0.9` will automatically point to the latest `0.9.x` release, and `0` will point to the latest `0.x.x` release.
- For the most up-to-date development version, you can use `docker.all-hands.dev/all-hands-ai/openhands:main`. This version is unstable and is recommended for testing or development purposes only.
The [docker command above](./installation#start-the-app) pulls the most recent stable release of OpenHands. You have other options as well:
- For a specific release, replace $VERSION in `openhands:$VERSION` and `runtime:$VERSION`, with the version number.
We use SemVer so `0.9` will automatically point to the latest `0.9.x` release, and `0` will point to the latest `0.x.x` release.
- For the most up-to-date development version, replace $VERSION in `openhands:$VERSION` and `runtime:$VERSION`, with `main`.
This version is unstable and is recommended for testing or development purposes only.
You can choose the tag that best suits your needs based on stability requirements and desired features.
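For example, pinning the app image to a specific release (the version number below is only illustrative) looks like this; keep the runtime image in the docker run command on a matching tag:
```bash
# Pull a pinned release of the app image (0.25 is an example version number).
docker pull docker.all-hands.dev/all-hands-ai/openhands:0.25

# Or pull the unstable development build instead.
docker pull docker.all-hands.dev/all-hands-ai/openhands:main
```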

View File

@ -25,7 +25,7 @@ You will need your ChatGPT deployment name which can be found on the deployments
&lt;deployment-name&gt; below.
:::
1. Enable `Advanced Options`
1. Enable `Advanced` options
2. Set the following:
- `Custom Model` to azure/&lt;deployment-name&gt;
- `Base URL` to your Azure API Base URL (e.g. `https://example-endpoint.openai.azure.com`)
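To sanity-check the deployment name and Base URL outside of OpenHands, the Azure OpenAI REST endpoint can be called directly. A hedged sketch, assuming your key is in `AZURE_API_KEY` and that the host, `<deployment-name>`, and `api-version` values are replaced with the ones from your Azure resource:
```bash
# Smoke test an Azure OpenAI chat deployment (all values below are placeholders).
curl -sS "https://example-endpoint.openai.azure.com/openai/deployments/<deployment-name>/chat/completions?api-version=2024-02-15-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```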

View File

@ -10,7 +10,7 @@ OpenHands uses LiteLLM to make calls to Google's chat models. You can find their
When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
- `LLM Provider` to `Gemini`
- `LLM Model` to the model you will be using.
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. gemini/&lt;model-name&gt; like `gemini/gemini-1.5-pro`).
If the model is not in the list, toggle `Advanced` options, and enter it in `Custom Model` (e.g. gemini/&lt;model-name&gt; like `gemini/gemini-1.5-pro`).
- `API Key` to your Gemini API key
## VertexAI - Google Cloud Platform Configs
@ -27,4 +27,4 @@ VERTEXAI_LOCATION="<your-gcp-location>"
Then set the following in the OpenHands UI through the Settings:
- `LLM Provider` to `VertexAI`
- `LLM Model` to the model you will be using.
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. vertex_ai/&lt;model-name&gt;).
If the model is not in the list, toggle `Advanced` options, and enter it in `Custom Model` (e.g. vertex_ai/&lt;model-name&gt;).
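Since these are ordinary environment variables, one way to supply them is with `-e` flags on the docker run command. A fragmentary sketch (only `VERTEXAI_LOCATION` is shown; pass the other variables from the block above the same way):
```bash
# Forward the VertexAI variables into the OpenHands container, alongside the
# docker run options you already use (fragment for illustration only).
docker run -it \
  -e VERTEXAI_LOCATION="<your-gcp-location>" \
  ...
```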

View File

@ -8,7 +8,7 @@ When running OpenHands, you'll need to set the following in the OpenHands UI thr
- `LLM Provider` to `Groq`
- `LLM Model` to the model you will be using. [Visit here to see the list of
models that Groq hosts](https://console.groq.com/docs/models). If the model is not in the list, toggle
`Advanced Options`, and enter it in `Custom Model` (e.g. groq/&lt;model-name&gt; like `groq/llama3-70b-8192`).
`Advanced` options, and enter it in `Custom Model` (e.g. groq/&lt;model-name&gt; like `groq/llama3-70b-8192`).
- `API key` to your Groq API key. To find or create your Groq API Key, [see here](https://console.groq.com/keys).
@ -17,7 +17,7 @@ models that Groq hosts](https://console.groq.com/docs/models). If the model is n
The Groq endpoint for chat completion is [mostly OpenAI-compatible](https://console.groq.com/docs/openai). Therefore, you can access Groq models as you
would access any OpenAI-compatible endpoint. In the OpenHands UI through the Settings:
1. Enable `Advanced Options`
1. Enable `Advanced` options
2. Set the following:
- `Custom Model` to the prefix `openai/` + the model you will be using (e.g. `openai/llama3-70b-8192`)
- `Base URL` to `https://api.groq.com/openai/v1`
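To verify the endpoint and model name before wiring them into OpenHands, you can call the OpenAI-compatible route directly. A minimal sketch, assuming your key is exported as `GROQ_API_KEY` and that `llama3-70b-8192` (the example above) is still hosted:
```bash
# Smoke test Groq's OpenAI-compatible chat completions endpoint.
curl -sS https://api.groq.com/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -d '{"model": "llama3-70b-8192", "messages": [{"role": "user", "content": "ping"}]}'
```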

View File

@ -8,7 +8,7 @@ To use LiteLLM proxy with OpenHands, you need to:
1. Set up a LiteLLM proxy server (see [LiteLLM documentation](https://docs.litellm.ai/docs/proxy/quick_start))
2. When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
* Enable `Advanced Options`
* Enable `Advanced` options
* `Custom Model` to the prefix `litellm_proxy/` + the model you will be using (e.g. `litellm_proxy/anthropic.claude-3-5-sonnet-20241022-v2:0`)
* `Base URL` to your LiteLLM proxy URL (e.g. `https://your-litellm-proxy.com`)
* `API Key` to your LiteLLM proxy API key
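For context, a proxy like the one assumed above can be started locally with the `litellm` CLI; a minimal sketch (model name and port are illustrative, see the LiteLLM documentation linked in step 1 for a production setup):
```bash
# Install the proxy extras and start a local LiteLLM proxy on port 4000.
# The upstream provider key (e.g. OPENAI_API_KEY for gpt-4o) must be set in the
# environment for the proxy to forward requests.
pip install 'litellm[proxy]'
litellm --model gpt-4o --port 4000
```
With that running, the settings above would point at `http://localhost:4000` as the `Base URL` and `litellm_proxy/gpt-4o` as the `Custom Model`.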

View File

@ -38,7 +38,7 @@ The following can be set in the OpenHands UI through the Settings:
- `LLM Provider`
- `LLM Model`
- `API Key`
- `Base URL` (through `Advanced Settings`)
- `Base URL` (through `Advanced` settings)
There are some settings that may be necessary for some LLMs/providers that cannot be set through the UI. Instead, these
can be set through environment variables passed to the [docker run command](/modules/usage/installation#start-the-app)
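As a sketch of that pattern (the variable name below is purely hypothetical; substitute whichever environment variable your LLM or provider actually requires):
```bash
# SOME_PROVIDER_SETTING is a hypothetical placeholder used for illustration only.
docker run -it \
  -e SOME_PROVIDER_SETTING="value" \
  ...
```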

View File

@ -8,7 +8,7 @@ When running OpenHands, you'll need to set the following in the OpenHands UI thr
* `LLM Provider` to `OpenAI`
* `LLM Model` to the model you will be using.
[Visit here to see a full list of OpenAI models that LiteLLM supports.](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models)
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. openai/&lt;model-name&gt; like `openai/gpt-4o`).
If the model is not in the list, toggle `Advanced` options, and enter it in `Custom Model` (e.g. openai/&lt;model-name&gt; like `openai/gpt-4o`).
* `API Key` to your OpenAI API key. To find or create your OpenAI Project API Key, [see here](https://platform.openai.com/api-keys).
## Using OpenAI-Compatible Endpoints
@ -18,7 +18,7 @@ Just as for OpenAI Chat completions, we use LiteLLM for OpenAI-compatible endpoi
## Using an OpenAI Proxy
If you're using an OpenAI proxy, in the OpenHands UI through the Settings:
1. Enable `Advanced Options`
1. Enable `Advanced` options
2. Set the following:
- `Custom Model` to openai/&lt;model-name&gt; (e.g. `openai/gpt-4o` or openai/&lt;proxy-prefix&gt;/&lt;model-name&gt;)
- `Base URL` to the URL of your OpenAI proxy

View File

@ -8,5 +8,5 @@ When running OpenHands, you'll need to set the following in the OpenHands UI thr
* `LLM Provider` to `OpenRouter`
* `LLM Model` to the model you will be using.
[Visit here to see a full list of OpenRouter models](https://openrouter.ai/models).
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. openrouter/&lt;model-name&gt; like `openrouter/anthropic/claude-3.5-sonnet`).
If the model is not in the list, toggle `Advanced` options, and enter it in `Custom Model` (e.g. openrouter/&lt;model-name&gt; like `openrouter/anthropic/claude-3.5-sonnet`).
* `API Key` to your OpenRouter API key.

View File

@ -66,7 +66,7 @@ const sidebars: SidebarsConfig = {
},
{
type: 'doc',
label: 'Github Actions',
label: 'Github Action',
id: 'usage/how-to/github-action',
},
{

View File

@ -23,6 +23,17 @@ export default function Home(): JSX.Element {
})}
>
<HomepageHeader />
<div style={{ textAlign: 'center', padding: '2rem' }}>
<br />
<h2>Most Popular Links</h2>
<ul style={{ listStyleType: 'none'}}>
<li><a href="/modules/usage/Installation">How to Run OpenHands</a></li>
<li><a href="/modules/usage/prompting/microagents-repo">Customizing OpenHands to a repository</a></li>
<li><a href="/modules/usage/how-to/github-action">Integrating OpenHands with Github</a></li>
<li><a href="/modules/usage/llms#model-recommendations">Recommended models to use</a></li>
<li><a href="/modules/usage/runtimes#connecting-to-your-filesystem">Connecting OpenHands to your filesystem</a></li>
</ul>
</div>
</Layout>
);
}

Binary file not shown.

Binary file not shown.
