feat(resolver): implement gitlab resolver (#6458)

Signed-off-by: José Luis Di Biase <josx@interorganic.com.ar>
Co-authored-by: José Luis Di Biase <josx@interorganic.com.ar>
Co-authored-by: Oriana <oriana@camba.coop>
Co-authored-by: Charlie <charlie@camba.coop>
Co-authored-by: Juan Manuel Daza <61162223+juanmanueldaza@users.noreply.github.com>
Co-authored-by: Juan Manuel Daza <juandaza@camba.coop>
Co-authored-by: Cody Kociemba <cody@symbaventures.com>
Co-authored-by: Rohit Malhotra <rohitvinodmalhotra@gmail.com>
Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
wtiger9218 2025-02-13 18:21:23 -05:00 committed by GitHub
parent ef12bc5381
commit f5fccab1f6
29 changed files with 6688 additions and 1301 deletions

View File

@@ -231,7 +231,7 @@ jobs:
- name: Attempt to resolve issue
env:
GITHUB_TOKEN: ${{ secrets.PAT_TOKEN || github.token }}
GITHUB_USERNAME: ${{ secrets.PAT_USERNAME || 'openhands-agent' }}
GIT_USERNAME: ${{ secrets.PAT_USERNAME || 'openhands-agent' }}
LLM_MODEL: ${{ secrets.LLM_MODEL || inputs.LLM_MODEL }}
LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
LLM_BASE_URL: ${{ secrets.LLM_BASE_URL }}
@@ -267,7 +267,7 @@
if: always() # Create PR or branch even if the previous steps fail
env:
GITHUB_TOKEN: ${{ secrets.PAT_TOKEN || github.token }}
GITHUB_USERNAME: ${{ secrets.PAT_USERNAME || 'openhands-agent' }}
GIT_USERNAME: ${{ secrets.PAT_USERNAME || 'openhands-agent' }}
LLM_MODEL: ${{ secrets.LLM_MODEL || inputs.LLM_MODEL }}
LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
LLM_BASE_URL: ${{ secrets.LLM_BASE_URL }}

View File

@@ -51,6 +51,9 @@ export default defineConfig(({ mode }) => {
// rewriteWsOrigin: true,
},
},
watch: {
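// Skip dependency and VCS directories so the dev server's file watcher stays light.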
ignored: ['**/node_modules/**', '**/.git/**'],
},
},
ssr: {
noExternal: ["react-syntax-highlighter"],

View File

@@ -1,4 +1,4 @@
# OpenHands Github Issue Resolver 🙌
# OpenHands GitHub & GitLab Issue Resolver 🙌
Need help resolving a GitHub issue but don't have the time to do it yourself? Let an AI agent help you out!
@@ -74,14 +74,24 @@ If you prefer to run the resolver programmatically instead of using GitHub Actio
pip install openhands-ai
```
2. Create a GitHub access token:
- Visit [GitHub's token settings](https://github.com/settings/personal-access-tokens/new)
- Create a fine-grained token with these scopes:
- "Content"
- "Pull requests"
- "Issues"
- "Workflows"
- If you don't have push access to the target repo, you can fork it first
2. Create a GitHub or GitLab access token:
- Create a GitHub access token:
- Visit [GitHub's token settings](https://github.com/settings/personal-access-tokens/new)
- Create a fine-grained token with these scopes:
- "Content"
- "Pull requests"
- "Issues"
- "Workflows"
- If you don't have push access to the target repo, you can fork it first
- Create a GitLab access token:
- Visit [GitLab's token settings](https://gitlab.com/-/user_settings/personal_access_tokens)
- Create a personal access token with these scopes:
- 'api'
- 'read_api'
- 'read_user'
- 'read_repository'
- 'write_repository'
3. Set up environment variables:
@@ -90,7 +100,12 @@ pip install openhands-ai
# GitHub credentials
export GITHUB_TOKEN="your-github-token"
export GITHUB_USERNAME="your-github-username" # Optional, defaults to token owner
export GIT_USERNAME="your-github-username" # Optional, defaults to token owner
# GitLab credentials if you're using a GitLab repo
export GITLAB_TOKEN="your-gitlab-token"
export GIT_USERNAME="your-gitlab-username" # Optional, defaults to token owner
# LLM configuration
@@ -169,13 +184,13 @@ There are three ways you can upload:
3. `ready` - create a non-draft PR that's ready for review
```bash
python -m openhands.resolver.send_pull_request --issue-number ISSUE_NUMBER --github-username YOUR_GITHUB_USERNAME --pr-type draft
python -m openhands.resolver.send_pull_request --issue-number ISSUE_NUMBER --username YOUR_GITHUB_OR_GITLAB_USERNAME --pr-type draft
```
If you want to upload to a fork, you can do so by specifying the `fork-owner`:
```bash
python -m openhands.resolver.send_pull_request --issue-number ISSUE_NUMBER --github-username YOUR_GITHUB_USERNAME --pr-type draft --fork-owner YOUR_GITHUB_USERNAME
python -m openhands.resolver.send_pull_request --issue-number ISSUE_NUMBER --username YOUR_GITHUB_OR_GITLAB_USERNAME --pr-type draft --fork-owner YOUR_GITHUB_OR_GITLAB_USERNAME
```
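For programmatic use, the GitHub and GitLab handlers added in this change share the same constructor and `get_converted_issues` signature, so a thin wrapper can pick the backend from whichever token is configured. The sketch below is illustrative only: the import paths follow the new `interfaces` package introduced here, and `pick_issue_handler` is a hypothetical helper, not part of the resolver.
```python
import os

# Hypothetical helper; module paths assume the handlers live in the new
# openhands.resolver.interfaces package added by this change.
from openhands.resolver.interfaces.github import GithubIssueHandler
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler


def pick_issue_handler(owner: str, repo: str):
    username = os.environ.get('GIT_USERNAME')  # optional, defaults to the token owner
    if os.environ.get('GITLAB_TOKEN'):
        return GitlabIssueHandler(owner, repo, os.environ['GITLAB_TOKEN'], username)
    return GithubIssueHandler(owner, repo, os.environ['GITHUB_TOKEN'], username)


# Example: fetch issue 42 from whichever service the environment points at.
# issues = pick_issue_handler('my-org', 'my-repo').get_converted_issues([42])
```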
## Providing Custom Instructions
@@ -184,5 +199,5 @@ You can customize how the AI agent approaches issue resolution by adding a `.ope
## Troubleshooting
If you have any issues, please open an issue on this github repo, we're happy to help!
If you have any issues, please open an issue on this GitHub or GitLab repo; we're happy to help!
Alternatively, you can [email us](mailto:contact@all-hands.dev) or join the OpenHands Slack workspace (see [the README](/README.md) for an invite link).

View File

@@ -1,21 +0,0 @@
from pydantic import BaseModel
class ReviewThread(BaseModel):
comment: str
files: list[str]
class GithubIssue(BaseModel):
owner: str
repo: str
number: int
title: str
body: str
thread_comments: list[str] | None = None # Added field for issue thread comments
closing_issues: list[str] | None = None
review_comments: list[str] | None = None
review_threads: list[ReviewThread] | None = None
thread_ids: list[str] | None = None
head_branch: str | None = None
base_branch: str | None = None

View File

@@ -0,0 +1,591 @@
from typing import Any
import requests
from openhands.core.logger import openhands_logger as logger
from openhands.resolver.interfaces.issue import (
Issue,
IssueHandlerInterface,
ReviewThread,
)
from openhands.resolver.utils import extract_issue_references
class GithubIssueHandler(IssueHandlerInterface):
def __init__(self, owner: str, repo: str, token: str, username: str | None = None):
self.owner = owner
self.repo = repo
self.token = token
self.username = username
self.base_url = self.get_base_url()
self.download_url = self.get_download_url()
self.clone_url = self.get_clone_url()
self.headers = self.get_headers()
def set_owner(self, owner: str):
self.owner = owner
def get_headers(self):
return {
'Authorization': f'token {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
def get_base_url(self):
return f'https://api.github.com/repos/{self.owner}/{self.repo}'
def get_authorize_url(self):
return f'https://{self.username}:{self.token}@github.com/'
def get_branch_url(self, branch_name: str):
return self.get_base_url() + f'/branches/{branch_name}'
def get_download_url(self):
return f'{self.base_url}/issues'
def get_clone_url(self):
username_and_token = (
f'{self.username}:{self.token}'
if self.username
else f'x-auth-token:{self.token}'
)
return f'https://{username_and_token}@github.com/{self.owner}/{self.repo}.git'
def get_graphql_url(self):
return 'https://api.github.com/graphql'
def get_compare_url(self, branch_name: str):
return f'https://github.com/{self.owner}/{self.repo}/compare/{branch_name}?expand=1'
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
"""Download issues from Github.
Args:
issue_numbers: The numbers of the issues to download
comment_id: The ID of a single comment, if provided, otherwise all comments
Returns:
List of Github issues.
"""
if not issue_numbers:
raise ValueError('Unspecified issue number')
all_issues = self.download_issues()
logger.info(f'Limiting resolving to issues {issue_numbers}.')
all_issues = [
issue
for issue in all_issues
if issue['number'] in issue_numbers and 'pull_request' not in issue
]
if len(issue_numbers) == 1 and not all_issues:
raise ValueError(f'Issue {issue_numbers[0]} not found')
converted_issues = []
for issue in all_issues:
# Check for required fields (number and title)
if any([issue.get(key) is None for key in ['number', 'title']]):
logger.warning(
f'Skipping issue {issue} as it is missing number or title.'
)
continue
# Handle empty body by using empty string
if issue.get('body') is None:
issue['body'] = ''
# Get issue thread comments
thread_comments = self.get_issue_comments(
issue['number'], comment_id=comment_id
)
# Convert empty lists to None for optional fields
issue_details = Issue(
owner=self.owner,
repo=self.repo,
number=issue['number'],
title=issue['title'],
body=issue['body'],
thread_comments=thread_comments,
review_comments=None, # Initialize review comments as None for regular issues
)
converted_issues.append(issue_details)
return converted_issues
def download_issues(self) -> list[Any]:
params: dict[str, int | str] = {'state': 'open', 'per_page': 100, 'page': 1}
all_issues = []
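# Page through open issues, 100 at a time, until the API returns an empty page.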
while True:
response = requests.get(
self.download_url, headers=self.headers, params=params
)
response.raise_for_status()
issues = response.json()
if not issues:
break
if not isinstance(issues, list) or any(
[not isinstance(issue, dict) for issue in issues]
):
raise ValueError(
'Expected list of dictionaries from the GitHub API.'
)
all_issues.extend(issues)
assert isinstance(params['page'], int)
params['page'] += 1
return all_issues
def get_issue_comments(
self, issue_number: int, comment_id: int | None = None
) -> list[str] | None:
"""Download comments for a specific issue from Github."""
url = f'{self.download_url}/{issue_number}/comments'
params = {'per_page': 100, 'page': 1}
all_comments = []
while True:
response = requests.get(url, headers=self.headers, params=params)
response.raise_for_status()
comments = response.json()
if not comments:
break
if comment_id:
matching_comment = next(
(
comment['body']
for comment in comments
if comment['id'] == comment_id
),
None,
)
if matching_comment:
return [matching_comment]
else:
all_comments.extend([comment['body'] for comment in comments])
params['page'] += 1
return all_comments if all_comments else None
def branch_exists(self, branch_name: str) -> bool:
print(f'Checking if branch {branch_name} exists...')
response = requests.get(
f'{self.base_url}/branches/{branch_name}', headers=self.headers
)
exists = response.status_code == 200
print(f'Branch {branch_name} exists: {exists}')
return exists
def get_branch_name(self, base_branch_name: str):
branch_name = base_branch_name
attempt = 1
while self.branch_exists(branch_name):
attempt += 1
branch_name = f'{base_branch_name}-try{attempt}'
return branch_name
def reply_to_comment(self, pr_number: int, comment_id: str, reply: str):
# Opting for graphql as REST API doesn't allow reply to replies in comment threads
query = """
mutation($body: String!, $pullRequestReviewThreadId: ID!) {
addPullRequestReviewThreadReply(input: { body: $body, pullRequestReviewThreadId: $pullRequestReviewThreadId }) {
comment {
id
body
createdAt
}
}
}
"""
comment_reply = f'Openhands fix success summary\n\n\n{reply}'
variables = {'body': comment_reply, 'pullRequestReviewThreadId': comment_id}
url = self.get_graphql_url()
headers = {
'Authorization': f'Bearer {self.token}',
'Content-Type': 'application/json',
}
response = requests.post(
url, json={'query': query, 'variables': variables}, headers=headers
)
response.raise_for_status()
def get_pull_url(self, pr_number: int):
return f'https://github.com/{self.owner}/{self.repo}/pull/{pr_number}'
def get_default_branch_name(self) -> str:
response = requests.get(f'{self.base_url}', headers=self.headers)
response.raise_for_status()
return response.json()['default_branch']
def create_pull_request(self, data=dict) -> dict:
response = requests.post(
f'{self.base_url}/pulls', headers=self.headers, json=data
)
if response.status_code == 403:
raise RuntimeError(
'Failed to create pull request due to missing permissions. '
'Make sure that the provided token has push permissions for the repository.'
)
response.raise_for_status()
pr_data = response.json()
return pr_data
def request_reviewers(self, reviewer: str, pr_number: int):
review_data = {'reviewers': [reviewer]}
review_response = requests.post(
f'{self.base_url}/pulls/{pr_number}/requested_reviewers',
headers=self.headers,
json=review_data,
)
if review_response.status_code != 201:
print(
f'Warning: Failed to request review from {reviewer}: {review_response.text}'
)
def send_comment_msg(self, issue_number: int, msg: str):
"""Send a comment message to a GitHub issue or pull request.
Args:
issue_number: The issue or pull request number
msg: The message content to post as a comment
"""
# Post a comment on the PR
comment_url = f'{self.base_url}/issues/{issue_number}/comments'
comment_data = {'body': msg}
comment_response = requests.post(
comment_url, headers=self.headers, json=comment_data
)
if comment_response.status_code != 201:
print(
f'Failed to post comment: {comment_response.status_code} {comment_response.text}'
)
else:
print(f'Comment added to the PR: {msg}')
def get_context_from_external_issues_references(
self,
closing_issues: list[str],
closing_issue_numbers: list[int],
issue_body: str,
review_comments: list[str] | None,
review_threads: list[ReviewThread],
thread_comments: list[str] | None,
):
pass
class GithubPRHandler(GithubIssueHandler):
def __init__(self, owner: str, repo: str, token: str, username: str | None = None):
super().__init__(owner, repo, token, username)
self.download_url = (
f'https://api.github.com/repos/{self.owner}/{self.repo}/pulls'
)
def download_pr_metadata(
self, pull_number: int, comment_id: int | None = None
) -> tuple[list[str], list[int], list[str], list[ReviewThread], list[str]]:
"""Run a GraphQL query against the GitHub API for information.
Retrieves information about:
1. unresolved review comments
2. referenced issues the pull request would close
Args:
pull_number: The number of the pull request to query.
comment_id: Optional ID of a specific comment to focus on.
Returns:
A tuple of (closing issue bodies, closing issue numbers, review comment bodies,
unresolved review threads, thread IDs).
"""
# Using graphql as REST API doesn't indicate resolved status for review comments
# TODO: grabbing the first 10 issues, 100 review threads, and 100 comments; add pagination to retrieve all
query = """
query($owner: String!, $repo: String!, $pr: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $pr) {
closingIssuesReferences(first: 10) {
edges {
node {
body
number
}
}
}
url
reviews(first: 100) {
nodes {
body
state
fullDatabaseId
}
}
reviewThreads(first: 100) {
edges{
node{
id
isResolved
comments(first: 100) {
totalCount
nodes {
body
path
fullDatabaseId
}
}
}
}
}
}
}
}
"""
variables = {'owner': self.owner, 'repo': self.repo, 'pr': pull_number}
url = 'https://api.github.com/graphql'
headers = {
'Authorization': f'Bearer {self.token}',
'Content-Type': 'application/json',
}
response = requests.post(
url, json={'query': query, 'variables': variables}, headers=headers
)
response.raise_for_status()
response_json = response.json()
# Parse the response to get closing issue references and unresolved review comments
pr_data = (
response_json.get('data', {}).get('repository', {}).get('pullRequest', {})
)
# Get closing issues
closing_issues = pr_data.get('closingIssuesReferences', {}).get('edges', [])
closing_issues_bodies = [issue['node']['body'] for issue in closing_issues]
closing_issue_numbers = [
issue['node']['number'] for issue in closing_issues
] # Extract issue numbers
# Get review comments
reviews = pr_data.get('reviews', {}).get('nodes', [])
if comment_id is not None:
reviews = [
review
for review in reviews
if int(review['fullDatabaseId']) == comment_id
]
review_bodies = [review['body'] for review in reviews]
# Get unresolved review threads
review_threads = []
thread_ids = [] # Store thread IDs; agent replies to the thread
raw_review_threads = pr_data.get('reviewThreads', {}).get('edges', [])
for thread in raw_review_threads:
node = thread.get('node', {})
if not node.get(
'isResolved', True
): # Check if the review thread is unresolved
id = node.get('id')
thread_contains_comment_id = False
my_review_threads = node.get('comments', {}).get('nodes', [])
message = ''
files = []
for i, review_thread in enumerate(my_review_threads):
if (
comment_id is not None
and int(review_thread['fullDatabaseId']) == comment_id
):
thread_contains_comment_id = True
if (
i == len(my_review_threads) - 1
): # Check if it's the last comment in the thread
if len(my_review_threads) > 1:
message += '---\n' # Add "---" before the last message if there's more than one thread
message += 'latest feedback:\n' + review_thread['body'] + '\n'
else:
message += (
review_thread['body'] + '\n'
) # Add each thread in a new line
file = review_thread.get('path')
if file and file not in files:
files.append(file)
if comment_id is None or thread_contains_comment_id:
unresolved_thread = ReviewThread(comment=message, files=files)
review_threads.append(unresolved_thread)
thread_ids.append(id)
return (
closing_issues_bodies,
closing_issue_numbers,
review_bodies,
review_threads,
thread_ids,
)
# Override processing of downloaded issues
def get_pr_comments(
self, pr_number: int, comment_id: int | None = None
) -> list[str] | None:
"""Download comments for a specific pull request from Github."""
url = f'https://api.github.com/repos/{self.owner}/{self.repo}/issues/{pr_number}/comments'
headers = {
'Authorization': f'token {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
params = {'per_page': 100, 'page': 1}
all_comments = []
while True:
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
comments = response.json()
if not comments:
break
if comment_id is not None:
matching_comment = next(
(
comment['body']
for comment in comments
if comment['id'] == comment_id
),
None,
)
if matching_comment:
return [matching_comment]
else:
all_comments.extend([comment['body'] for comment in comments])
params['page'] += 1
return all_comments if all_comments else None
def get_context_from_external_issues_references(
self,
closing_issues: list[str],
closing_issue_numbers: list[int],
issue_body: str,
review_comments: list[str] | None,
review_threads: list[ReviewThread],
thread_comments: list[str] | None,
):
new_issue_references = []
if issue_body:
new_issue_references.extend(extract_issue_references(issue_body))
if review_comments:
for comment in review_comments:
new_issue_references.extend(extract_issue_references(comment))
if review_threads:
for review_thread in review_threads:
new_issue_references.extend(
extract_issue_references(review_thread.comment)
)
if thread_comments:
for thread_comment in thread_comments:
new_issue_references.extend(extract_issue_references(thread_comment))
non_duplicate_references = set(new_issue_references)
unique_issue_references = non_duplicate_references.difference(
closing_issue_numbers
)
for issue_number in unique_issue_references:
try:
url = f'https://api.github.com/repos/{self.owner}/{self.repo}/issues/{issue_number}'
headers = {
'Authorization': f'Bearer {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
response = requests.get(url, headers=headers)
response.raise_for_status()
issue_data = response.json()
issue_body = issue_data.get('body', '')
if issue_body:
closing_issues.append(issue_body)
except requests.exceptions.RequestException as e:
logger.warning(f'Failed to fetch issue {issue_number}: {str(e)}')
return closing_issues
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
if not issue_numbers:
raise ValueError('Unspecified issue numbers')
all_issues = self.download_issues()
logger.info(f'Limiting resolving to issues {issue_numbers}.')
all_issues = [issue for issue in all_issues if issue['number'] in issue_numbers]
converted_issues = []
for issue in all_issues:
# For PRs, body can be None
if any([issue.get(key) is None for key in ['number', 'title']]):
logger.warning(f'Skipping #{issue} as it is missing number or title.')
continue
# Handle None body for PRs
body = issue.get('body') if issue.get('body') is not None else ''
(
closing_issues,
closing_issues_numbers,
review_comments,
review_threads,
thread_ids,
) = self.download_pr_metadata(issue['number'], comment_id=comment_id)
head_branch = issue['head']['ref']
# Get PR thread comments
thread_comments = self.get_pr_comments(
issue['number'], comment_id=comment_id
)
closing_issues = self.get_context_from_external_issues_references(
closing_issues,
closing_issues_numbers,
body,
review_comments,
review_threads,
thread_comments,
)
issue_details = Issue(
owner=self.owner,
repo=self.repo,
number=issue['number'],
title=issue['title'],
body=body,
closing_issues=closing_issues,
review_comments=review_comments,
review_threads=review_threads,
thread_ids=thread_ids,
head_branch=head_branch,
thread_comments=thread_comments,
)
converted_issues.append(issue_details)
return converted_issues

View File

@@ -0,0 +1,577 @@
from typing import Any
from urllib.parse import quote
import requests
from openhands.core.logger import openhands_logger as logger
from openhands.resolver.interfaces.issue import (
Issue,
IssueHandlerInterface,
ReviewThread,
)
from openhands.resolver.utils import extract_issue_references
class GitlabIssueHandler(IssueHandlerInterface):
def __init__(self, owner: str, repo: str, token: str, username: str | None = None):
self.owner = owner
self.repo = repo
self.token = token
self.username = username
self.base_url = self.get_base_url()
self.download_url = self.get_download_url()
self.clone_url = self.get_clone_url()
self.headers = self.get_headers()
def set_owner(self, owner: str):
self.owner = owner
def get_headers(self):
return {
'Authorization': f'Bearer {self.token}',
'Accept': 'application/json',
}
def get_base_url(self):
return f'https://gitlab.com/api/v4/projects/{quote(f'{self.owner}/{self.repo}', safe="")}'
def get_authorize_url(self):
return f'https://{self.username}:{self.token}@gitlab.com/'
def get_branch_url(self, branch_name: str):
return self.get_base_url() + f'/repository/branches/{branch_name}'
def get_download_url(self):
return f'{self.base_url}/issues'
def get_clone_url(self):
username_and_token = (
f'{self.username}:{self.token}' if self.username else f'{self.token}'
)
return f'https://{username_and_token}@gitlab.com/{self.owner}/{self.repo}.git'
def get_graphql_url(self):
return 'https://gitlab.com/api/graphql'
def get_compare_url(self, branch_name: str):
return f'https://gitlab.com/{self.owner}/{self.repo}/-/compare/{self.get_default_branch_name()}...{branch_name}'
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
"""Download issues from Gitlab.
Args:
issue_numbers: The numbers of the issues to download
comment_id: The ID of a single comment, if provided, otherwise all comments
Returns:
List of Gitlab issues.
"""
if not issue_numbers:
raise ValueError('Unspecified issue number')
all_issues = self.download_issues()
logger.info(f'Limiting resolving to issues {issue_numbers}.')
all_issues = [
issue
for issue in all_issues
# if issue['iid'] in issue_numbers and issue['merge_requests_count'] == 0
if issue['iid'] in issue_numbers # TODO for testing
]
if len(issue_numbers) == 1 and not all_issues:
raise ValueError(f'Issue {issue_numbers[0]} not found')
converted_issues = []
for issue in all_issues:
if any([issue.get(key) is None for key in ['iid', 'title']]):
logger.warning(f'Skipping issue {issue} as it is missing iid or title.')
continue
# Handle empty body by using empty string
if issue.get('description') is None:
issue['description'] = ''
# Get issue thread comments
thread_comments = self.get_issue_comments(
issue['iid'], comment_id=comment_id
)
# Convert empty lists to None for optional fields
issue_details = Issue(
owner=self.owner,
repo=self.repo,
number=issue['iid'],
title=issue['title'],
body=issue['description'],
thread_comments=thread_comments,
review_comments=None, # Initialize review comments as None for regular issues
)
converted_issues.append(issue_details)
return converted_issues
def download_issues(self) -> list[Any]:
params: dict[str, int | str] = {
'state': 'opened',
'scope': 'all',
'per_page': 100,
'page': 1,
}
all_issues = []
while True:
response = requests.get(
self.download_url, headers=self.headers, params=params
)
response.raise_for_status()
issues = response.json()
if not issues:
break
if not isinstance(issues, list) or any(
[not isinstance(issue, dict) for issue in issues]
):
raise ValueError(
'Expected list of dictionaries from the GitLab API.'
)
all_issues.extend(issues)
assert isinstance(params['page'], int)
params['page'] += 1
return all_issues
def get_issue_comments(
self, issue_number: int, comment_id: int | None = None
) -> list[str] | None:
"""Download comments for a specific issue from Gitlab."""
url = f'{self.download_url}/{issue_number}/notes'
params = {'per_page': 100, 'page': 1}
all_comments = []
while True:
response = requests.get(url, headers=self.headers, params=params)
response.raise_for_status()
comments = response.json()
if not comments:
break
if comment_id:
matching_comment = next(
(
comment['body']
for comment in comments
if comment['id'] == comment_id
),
None,
)
if matching_comment:
return [matching_comment]
else:
all_comments.extend([comment['body'] for comment in comments])
params['page'] += 1
return all_comments if all_comments else None
def branch_exists(self, branch_name: str) -> bool:
print(f'Checking if branch {branch_name} exists...')
response = requests.get(
f'{self.base_url}/repository/branches/{branch_name}', headers=self.headers
)
exists = response.status_code == 200
print(f'Branch {branch_name} exists: {exists}')
return exists
def get_branch_name(self, base_branch_name: str):
branch_name = base_branch_name
attempt = 1
while self.branch_exists(branch_name):
attempt += 1
branch_name = f'{base_branch_name}-try{attempt}'
return branch_name
def reply_to_comment(self, pr_number: int, comment_id: str, reply: str):
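# comment_id is the GraphQL discussion node ID collected in download_pr_metadata
# (a global ID along the lines of 'gid://gitlab/Discussion/<id>'); only its trailing
# segment is the discussion ID the REST discussions endpoint accepts, hence split('/')[-1].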
response = requests.get(
f'{self.base_url}/merge_requests/{pr_number}/discussions/{comment_id.split('/')[-1]}',
headers=self.headers,
)
response.raise_for_status()
discussions = response.json()
if len(discussions.get('notes', [])) > 0:
data = {
'body': f'Openhands fix success summary\n\n\n{reply}',
'note_id': discussions.get('notes', [])[-1]['id'],
}
response = requests.post(
f'{self.base_url}/merge_requests/{pr_number}/discussions/{comment_id.split('/')[-1]}/notes',
headers=self.headers,
json=data,
)
response.raise_for_status()
def get_pull_url(self, pr_number: int):
return (
f'https://gitlab.com/{self.owner}/{self.repo}/-/merge_requests/{pr_number}'
)
def get_default_branch_name(self) -> str:
response = requests.get(f'{self.base_url}', headers=self.headers)
response.raise_for_status()
return response.json()['default_branch']
def create_pull_request(self, data=dict) -> dict:
response = requests.post(
f'{self.base_url}/merge_requests', headers=self.headers, json=data
)
if response.status_code == 403:
raise RuntimeError(
'Failed to create pull request due to missing permissions. '
'Make sure that the provided token has push permissions for the repository.'
)
response.raise_for_status()
pr_data = response.json()
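# Normalize GitLab's response to the field names the rest of the resolver expects
# from GitHub ('html_url' and 'number'), keeping downstream code provider-agnostic.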
if 'web_url' in pr_data:
pr_data['html_url'] = pr_data['web_url']
if 'iid' in pr_data:
pr_data['number'] = pr_data['iid']
return pr_data
def request_reviewers(self, reviewer: str, pr_number: int):
response = requests.get(
f'https://gitlab.com/api/v4/users?username={reviewer}',
headers=self.headers,
)
response.raise_for_status()
user_data = response.json()
if len(user_data) > 0:
review_data = {'reviewer_ids': [user_data[0]['id']]}
review_response = requests.put(
f'{self.base_url}/merge_requests/{pr_number}',
headers=self.headers,
json=review_data,
)
if review_response.status_code != 200:
print(
f'Warning: Failed to request review from {reviewer}: {review_response.text}'
)
def send_comment_msg(self, issue_number: int, msg: str):
"""Send a comment message to a GitHub issue or pull request.
Args:
issue_number: The issue or pull request number
msg: The message content to post as a comment
"""
# Post a comment on the PR
comment_url = f'{self.base_url}/issues/{issue_number}/notes'
comment_data = {'body': msg}
comment_response = requests.post(
comment_url, headers=self.headers, json=comment_data
)
if comment_response.status_code != 201:
print(
f'Failed to post comment: {comment_response.status_code} {comment_response.text}'
)
else:
print(f'Comment added to the PR: {msg}')
def get_context_from_external_issues_references(
self,
closing_issues: list[str],
closing_issue_numbers: list[int],
issue_body: str,
review_comments: list[str] | None,
review_threads: list[ReviewThread],
thread_comments: list[str] | None,
):
pass
class GitlabPRHandler(GitlabIssueHandler):
def __init__(self, owner: str, repo: str, token: str, username: str | None = None):
super().__init__(owner, repo, token, username)
self.download_url = f'{self.base_url}/merge_requests'
def download_pr_metadata(
self, pull_number: int, comment_id: int | None = None
) -> tuple[list[str], list[int], list[str] | None, list[ReviewThread], list[str]]:
"""Run a GraphQL query against the Gitlab API for information.
Retrieves information about:
1. unresolved review comments
2. referenced issues the pull request would close
Args:
pull_number: The number of the pull request to query.
comment_id: Optional ID of a specific comment to focus on.
Returns:
A tuple of (closing issue bodies, closing issue numbers, review comment bodies or None,
unresolved review threads, thread IDs).
"""
# Related issues come from the REST endpoint below; unresolved review discussions come from
# GraphQL, which exposes the resolved flag that the REST notes API lacks.
# TODO: grabbing the first 100 discussions and 100 notes per discussion; add pagination to retrieve all
response = requests.get(
f'{self.base_url}/merge_requests/{pull_number}/related_issues',
headers=self.headers,
)
response.raise_for_status()
closing_issues = response.json()
closing_issues_bodies = [issue['description'] for issue in closing_issues]
closing_issue_numbers = [
issue['iid'] for issue in closing_issues
] # Extract issue numbers
query = """
query($projectPath: ID!, $pr: String!) {
project(fullPath: $projectPath) {
mergeRequest(iid: $pr) {
webUrl
discussions(first: 100) {
edges {
node {
id
resolved
resolvable
notes(first: 100) {
nodes {
body
id
position {
filePath
}
}
}
}
}
}
}
}
}
"""
variables = {'projectPath': f'{self.owner}/{self.repo}', 'pr': f'{pull_number}'}
response = requests.post(
self.get_graphql_url(),
json={'query': query, 'variables': variables},
headers=self.headers,
)
response.raise_for_status()
response_json = response.json()
# Parse the response to get closing issue references and unresolved review comments
pr_data = (
response_json.get('data', {}).get('project', {}).get('mergeRequest', {})
)
# Get review comments
review_bodies = None
# Get unresolved review threads
review_threads = []
thread_ids = [] # Store thread IDs; agent replies to the thread
raw_review_threads = pr_data.get('discussions', {}).get('edges', [])
for thread in raw_review_threads:
node = thread.get('node', {})
if not node.get('resolved', True) and node.get(
'resolvable', True
): # Check if the review thread is unresolved
id = node.get('id')
thread_contains_comment_id = False
my_review_threads = node.get('notes', {}).get('nodes', [])
message = ''
files = []
for i, review_thread in enumerate(my_review_threads):
if (
comment_id is not None
and int(review_thread['id'].split('/')[-1]) == comment_id
):
thread_contains_comment_id = True
if (
i == len(my_review_threads) - 1
): # Check if it's the last comment in the thread
if len(my_review_threads) > 1:
message += '---\n' # Add "---" before the last message if there's more than one thread
message += 'latest feedback:\n' + review_thread['body'] + '\n'
else:
message += (
review_thread['body'] + '\n'
) # Add each thread in a new line
file = review_thread.get('position', {})
file = file.get('filePath') if file is not None else None
if file and file not in files:
files.append(file)
if comment_id is None or thread_contains_comment_id:
unresolved_thread = ReviewThread(comment=message, files=files)
review_threads.append(unresolved_thread)
thread_ids.append(id)
return (
closing_issues_bodies,
closing_issue_numbers,
review_bodies,
review_threads,
thread_ids,
)
# Override processing of downloaded issues
def get_pr_comments(
self, pr_number: int, comment_id: int | None = None
) -> list[str] | None:
"""Download comments for a specific pull request from Gitlab."""
url = f'{self.base_url}/merge_requests/{pr_number}/notes'
params = {'per_page': 100, 'page': 1}
all_comments = []
while True:
response = requests.get(url, headers=self.headers, params=params)
response.raise_for_status()
comments = response.json()
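# Keep only user-authored, resolvable notes; GitLab system notes (status/merge events) are dropped.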
comments = [
comment
for comment in comments
if comment.get('resolvable', True) and not comment.get('system', True)
]
if not comments:
break
if comment_id is not None:
matching_comment = next(
(
comment['body']
for comment in comments
if comment['id'] == comment_id
),
None,
)
if matching_comment:
return [matching_comment]
else:
all_comments.extend([comment['body'] for comment in comments])
params['page'] += 1
return all_comments if all_comments else None
def get_context_from_external_issues_references(
self,
closing_issues: list[str],
closing_issue_numbers: list[int],
issue_body: str,
review_comments: list[str] | None,
review_threads: list[ReviewThread],
thread_comments: list[str] | None,
):
new_issue_references = []
if issue_body:
new_issue_references.extend(extract_issue_references(issue_body))
if review_comments:
for comment in review_comments:
new_issue_references.extend(extract_issue_references(comment))
if review_threads:
for review_thread in review_threads:
new_issue_references.extend(
extract_issue_references(review_thread.comment)
)
if thread_comments:
for thread_comment in thread_comments:
new_issue_references.extend(extract_issue_references(thread_comment))
non_duplicate_references = set(new_issue_references)
unique_issue_references = non_duplicate_references.difference(
closing_issue_numbers
)
for issue_number in unique_issue_references:
try:
url = f'{self.base_url}/issues/{issue_number}'
response = requests.get(url, headers=self.headers)
response.raise_for_status()
issue_data = response.json()
issue_body = issue_data.get('description', '')
if issue_body:
closing_issues.append(issue_body)
except requests.exceptions.RequestException as e:
logger.warning(f'Failed to fetch issue {issue_number}: {str(e)}')
return closing_issues
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
if not issue_numbers:
raise ValueError('Unspecified issue numbers')
all_issues = self.download_issues()
logger.info(f'Limiting resolving to issues {issue_numbers}.')
all_issues = [issue for issue in all_issues if issue['iid'] in issue_numbers]
converted_issues = []
for issue in all_issues:
# For PRs, body can be None
if any([issue.get(key) is None for key in ['iid', 'title']]):
logger.warning(f'Skipping #{issue} as it is missing iid or title.')
continue
# Handle None body for PRs
body = (
issue.get('description') if issue.get('description') is not None else ''
)
(
closing_issues,
closing_issues_numbers,
review_comments,
review_threads,
thread_ids,
) = self.download_pr_metadata(issue['iid'], comment_id=comment_id)
head_branch = issue['source_branch']
# Get PR thread comments
thread_comments = self.get_pr_comments(issue['iid'], comment_id=comment_id)
closing_issues = self.get_context_from_external_issues_references(
closing_issues,
closing_issues_numbers,
body,
review_comments,
review_threads,
thread_comments,
)
issue_details = Issue(
owner=self.owner,
repo=self.repo,
number=issue['iid'],
title=issue['title'],
body=body,
closing_issues=closing_issues,
review_comments=review_comments,
review_threads=review_threads,
thread_ids=thread_ids,
head_branch=head_branch,
thread_comments=thread_comments,
)
converted_issues.append(issue_details)
return converted_issues

View File

@@ -0,0 +1,123 @@
from abc import ABC, abstractmethod
from typing import Any
from pydantic import BaseModel
class ReviewThread(BaseModel):
comment: str
files: list[str]
class Issue(BaseModel):
owner: str
repo: str
number: int
title: str
body: str
thread_comments: list[str] | None = None # Added field for issue thread comments
closing_issues: list[str] | None = None
review_comments: list[str] | None = None
review_threads: list[ReviewThread] | None = None
thread_ids: list[str] | None = None
head_branch: str | None = None
base_branch: str | None = None
class IssueHandlerInterface(ABC):
@abstractmethod
def set_owner(self, owner: str):
pass
@abstractmethod
def download_issues(self) -> list[Any]:
pass
@abstractmethod
def get_issue_comments(
self, issue_number: int, comment_id: int | None = None
) -> list[str] | None:
pass
@abstractmethod
def get_base_url(self):
pass
@abstractmethod
def get_branch_url(self, branch_name):
pass
@abstractmethod
def get_download_url(self):
pass
@abstractmethod
def get_clone_url(self):
pass
@abstractmethod
def get_pull_url(self, pr_number: int):
pass
@abstractmethod
def get_graphql_url(self):
pass
@abstractmethod
def get_headers(self):
pass
@abstractmethod
def get_compare_url(self, branch_name):
pass
@abstractmethod
def get_branch_name(self, base_branch_name: str):
pass
@abstractmethod
def get_default_branch_name(self):
pass
@abstractmethod
def branch_exists(self, branch_name: str) -> bool:
pass
@abstractmethod
def reply_to_comment(self, pr_number: int, comment_id: str, reply: str):
pass
@abstractmethod
def send_comment_msg(self, issue_number: int, msg: str):
pass
@abstractmethod
def get_authorize_url(self):
pass
@abstractmethod
def create_pull_request(self, data=dict) -> dict:
pass
@abstractmethod
def request_reviewers(self, reviewer: str, pr_number: int):
pass
@abstractmethod
def get_context_from_external_issues_references(
self,
closing_issues: list[str],
closing_issue_numbers: list[int],
issue_body: str,
review_comments: list[str] | None,
review_threads: list[ReviewThread],
thread_comments: list[str] | None,
):
pass
@abstractmethod
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
"""Download issues from Gitlab."""
pass
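# Example wiring (illustrative, not part of this module): a concrete handler such as
# GithubIssueHandler or GitlabIssueHandler implements this interface and is passed to
# the ServiceContext classes in issue_definitions.py, e.g.
#
#     handler = GitlabIssueHandler('my-group', 'my-repo', token, username)
#     issues = handler.get_converted_issues(issue_numbers=[42])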

View File

@@ -0,0 +1,400 @@
import json
import os
import re
from typing import Any, ClassVar
import jinja2
from openhands.core.config import LLMConfig
from openhands.events.event import Event
from openhands.llm.llm import LLM
from openhands.resolver.interfaces.issue import (
Issue,
IssueHandlerInterface,
ReviewThread,
)
from openhands.resolver.utils import extract_image_urls
class ServiceContext:
issue_type: ClassVar[str]
default_git_patch: ClassVar[str] = 'No changes made yet'
def __init__(self, strategy: IssueHandlerInterface, llm_config: LLMConfig | None):
self._strategy = strategy
if llm_config is not None:
self.llm = LLM(llm_config)
def set_strategy(self, strategy):
self._strategy = strategy
# Strategy context interface
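# ServiceContextPR and ServiceContextIssue wrap an IssueHandlerInterface implementation
# (GitHub or GitLab) and layer prompt building and LLM-based success checks on top of
# whichever handler they are given.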
class ServiceContextPR(ServiceContext):
issue_type: ClassVar[str] = 'pr'
def __init__(self, strategy: IssueHandlerInterface, llm_config: LLMConfig):
super().__init__(strategy, llm_config)
def get_clone_url(self):
return self._strategy.get_clone_url()
def download_issues(self) -> list[Any]:
return self._strategy.download_issues()
def guess_success(
self,
issue: Issue,
history: list[Event],
git_patch: str | None = None,
) -> tuple[bool, None | list[bool], str]:
"""Guess if the issue is fixed based on the history, issue description and git patch.
Args:
issue: The issue to check
history: The agent's history
git_patch: Optional git patch showing the changes made
"""
last_message = history[-1].message
issues_context = json.dumps(issue.closing_issues, indent=4)
success_list = []
explanation_list = []
# Handle PRs with file-specific review comments
if issue.review_threads:
for review_thread in issue.review_threads:
if issues_context and last_message:
success, explanation = self._check_review_thread(
review_thread, issues_context, last_message, git_patch
)
else:
success, explanation = False, 'Missing context or message'
success_list.append(success)
explanation_list.append(explanation)
# Handle PRs with only thread comments (no file-specific review comments)
elif issue.thread_comments:
if issue.thread_comments and issues_context and last_message:
success, explanation = self._check_thread_comments(
issue.thread_comments, issues_context, last_message, git_patch
)
else:
success, explanation = (
False,
'Missing thread comments, context or message',
)
success_list.append(success)
explanation_list.append(explanation)
elif issue.review_comments:
# Handle PRs with only review comments (no file-specific review comments or thread comments)
if issue.review_comments and issues_context and last_message:
success, explanation = self._check_review_comments(
issue.review_comments, issues_context, last_message, git_patch
)
else:
success, explanation = (
False,
'Missing review comments, context or message',
)
success_list.append(success)
explanation_list.append(explanation)
else:
# No review comments, thread comments, or file-level review comments found
return False, None, 'No feedback was found to process'
# Return overall success (all must be true) and explanations
if not success_list:
return False, None, 'No feedback was processed'
return all(success_list), success_list, json.dumps(explanation_list)
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
return self._strategy.get_converted_issues(issue_numbers, comment_id)
def get_instruction(
self,
issue: Issue,
prompt_template: str,
repo_instruction: str | None = None,
) -> tuple[str, list[str]]:
"""Generate instruction for the agent."""
template = jinja2.Template(prompt_template)
images = []
issues_str = None
if issue.closing_issues:
issues_str = json.dumps(issue.closing_issues, indent=4)
images.extend(extract_image_urls(issues_str))
# Handle PRs with review comments
review_comments_str = None
if issue.review_comments:
review_comments_str = json.dumps(issue.review_comments, indent=4)
images.extend(extract_image_urls(review_comments_str))
# Handle PRs with file-specific review comments
review_thread_str = None
review_thread_file_str = None
if issue.review_threads:
review_threads = [
review_thread.comment for review_thread in issue.review_threads
]
review_thread_files = []
for review_thread in issue.review_threads:
review_thread_files.extend(review_thread.files)
review_thread_str = json.dumps(review_threads, indent=4)
review_thread_file_str = json.dumps(review_thread_files, indent=4)
images.extend(extract_image_urls(review_thread_str))
# Format thread comments if they exist
thread_context = ''
if issue.thread_comments:
thread_context = '\n---\n'.join(issue.thread_comments)
images.extend(extract_image_urls(thread_context))
instruction = template.render(
issues=issues_str,
review_comments=review_comments_str,
review_threads=review_thread_str,
files=review_thread_file_str,
thread_context=thread_context,
repo_instruction=repo_instruction,
)
return instruction, images
def _check_feedback_with_llm(self, prompt: str) -> tuple[bool, str]:
"""Helper function to check feedback with LLM and parse response."""
response = self.llm.completion(messages=[{'role': 'user', 'content': prompt}])
answer = response.choices[0].message.content.strip()
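# The model is expected to reply in this shape (illustrative example):
#   --- success
#   true
#   --- explanation
#   The requested change was applied in the patch.
# The regex below extracts the boolean verdict and the free-form explanation.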
pattern = r'--- success\n*(true|false)\n*--- explanation*\n((?:.|\n)*)'
match = re.search(pattern, answer)
if match:
return match.group(1).lower() == 'true', match.group(2).strip()
return False, f'Failed to decode answer from LLM response: {answer}'
def _check_review_thread(
self,
review_thread: ReviewThread,
issues_context: str,
last_message: str,
git_patch: str | None = None,
) -> tuple[bool, str]:
"""Check if a review thread's feedback has been addressed."""
files_context = json.dumps(review_thread.files, indent=4)
with open(
os.path.join(
os.path.dirname(__file__),
'../prompts/guess_success/pr-feedback-check.jinja',
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issues_context,
feedback=review_thread.comment,
files_context=files_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
return self._check_feedback_with_llm(prompt)
def _check_thread_comments(
self,
thread_comments: list[str],
issues_context: str,
last_message: str,
git_patch: str | None = None,
) -> tuple[bool, str]:
"""Check if thread comments feedback has been addressed."""
thread_context = '\n---\n'.join(thread_comments)
with open(
os.path.join(
os.path.dirname(__file__),
'../prompts/guess_success/pr-thread-check.jinja',
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issues_context,
thread_context=thread_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
return self._check_feedback_with_llm(prompt)
def _check_review_comments(
self,
review_comments: list[str],
issues_context: str,
last_message: str,
git_patch: str | None = None,
) -> tuple[bool, str]:
"""Check if review comments feedback has been addressed."""
review_context = '\n---\n'.join(review_comments)
with open(
os.path.join(
os.path.dirname(__file__),
'../prompts/guess_success/pr-review-check.jinja',
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issues_context,
review_context=review_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
return self._check_feedback_with_llm(prompt)
class ServiceContextIssue(ServiceContext):
issue_type: ClassVar[str] = 'issue'
def __init__(self, strategy: IssueHandlerInterface, llm_config: LLMConfig | None):
super().__init__(strategy, llm_config)
def get_base_url(self):
return self._strategy.get_base_url()
def get_branch_url(self, branch_name):
return self._strategy.get_branch_url(branch_name)
def get_download_url(self):
return self._strategy.get_download_url()
def get_clone_url(self):
return self._strategy.get_clone_url()
def get_graphql_url(self):
return self._strategy.get_graphql_url()
def get_headers(self):
return self._strategy.get_headers()
def get_authorize_url(self):
return self._strategy.get_authorize_url()
def get_pull_url(self, pr_number: int):
return self._strategy.get_pull_url(pr_number)
def get_compare_url(self, branch_name: str):
return self._strategy.get_compare_url(branch_name)
def download_issues(self) -> list[Any]:
return self._strategy.download_issues()
def get_branch_name(
self,
base_branch_name: str,
):
return self._strategy.get_branch_name(base_branch_name)
def branch_exists(self, branch_name: str):
return self._strategy.branch_exists(branch_name)
def get_default_branch_name(self) -> str:
return self._strategy.get_default_branch_name()
def create_pull_request(self, data=dict):
return self._strategy.create_pull_request(data)
def request_reviewers(self, reviewer: str, pr_number: int):
return self._strategy.request_reviewers(reviewer, pr_number)
def reply_to_comment(self, pr_number, comment_id, reply):
return self._strategy.reply_to_comment(pr_number, comment_id, reply)
def send_comment_msg(self, issue_number: int, msg: str):
return self._strategy.send_comment_msg(issue_number, msg)
def get_issue_comments(
self, issue_number: int, comment_id: int | None = None
) -> list[str] | None:
return self._strategy.get_issue_comments(issue_number, comment_id)
def get_instruction(
self,
issue: Issue,
prompt_template: str,
repo_instruction: str | None = None,
) -> tuple[str, list[str]]:
"""Generate instruction for the agent."""
# Format thread comments if they exist
thread_context = ''
if issue.thread_comments:
thread_context = '\n\nIssue Thread Comments:\n' + '\n---\n'.join(
issue.thread_comments
)
images = []
images.extend(extract_image_urls(issue.body))
images.extend(extract_image_urls(thread_context))
template = jinja2.Template(prompt_template)
return (
template.render(
body=issue.title + '\n\n' + issue.body + thread_context,
repo_instruction=repo_instruction,
),
images,
)
def guess_success(
self, issue: Issue, history: list[Event], git_patch: str | None = None
) -> tuple[bool, None | list[bool], str]:
"""Guess if the issue is fixed based on the history and the issue description.
Args:
issue: The issue to check
history: The agent's history
git_patch: Optional git patch showing the changes made
"""
last_message = history[-1].message
# Include thread comments in the prompt if they exist
issue_context = issue.body
if issue.thread_comments:
issue_context += '\n\nIssue Thread Comments:\n' + '\n---\n'.join(
issue.thread_comments
)
with open(
os.path.join(
os.path.dirname(__file__),
'../prompts/guess_success/issue-success-check.jinja',
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issue_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
response = self.llm.completion(messages=[{'role': 'user', 'content': prompt}])
answer = response.choices[0].message.content.strip()
pattern = r'--- success\n*(true|false)\n*--- explanation*\n((?:.|\n)*)'
match = re.search(pattern, answer)
if match:
return match.group(1).lower() == 'true', None, match.group(2)
return False, None, f'Failed to decode answer from LLM response: {answer}'
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[Issue]:
return self._strategy.get_converted_issues(issue_numbers, comment_id)

View File

@@ -1,806 +0,0 @@
import json
import os
import re
from abc import ABC, abstractmethod
from typing import Any, ClassVar
import jinja2
import requests
from openhands.core.config import LLMConfig
from openhands.core.logger import openhands_logger as logger
from openhands.events.event import Event
from openhands.llm.llm import LLM
from openhands.resolver.github_issue import GithubIssue, ReviewThread
class IssueHandlerInterface(ABC):
issue_type: ClassVar[str]
llm: LLM
@abstractmethod
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[GithubIssue]:
"""Download issues from GitHub."""
pass
@abstractmethod
def get_instruction(
self,
issue: GithubIssue,
prompt_template: str,
repo_instruction: str | None = None,
) -> tuple[str, list[str]]:
"""Generate instruction and image urls for the agent."""
pass
@abstractmethod
def guess_success(
self, issue: GithubIssue, history: list[Event], git_patch: str | None = None
) -> tuple[bool, list[bool] | None, str]:
"""Guess if the issue has been resolved based on the agent's output and git patch."""
pass
class IssueHandler(IssueHandlerInterface):
issue_type: ClassVar[str] = 'issue'
default_git_patch: ClassVar[str] = 'No changes made yet'
def __init__(self, owner: str, repo: str, token: str, llm_config: LLMConfig):
self.download_url = 'https://api.github.com/repos/{}/{}/issues'
self.owner = owner
self.repo = repo
self.token = token
self.llm = LLM(llm_config)
def _download_issues_from_github(self) -> list[Any]:
url = self.download_url.format(self.owner, self.repo)
headers = {
'Authorization': f'token {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
params: dict[str, int | str] = {'state': 'open', 'per_page': 100, 'page': 1}
all_issues = []
# Get issues, page by page
while True:
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
issues = response.json()
# No more issues, break the loop
if not issues:
break
# Sanity check - the response is a list of dictionaries
if not isinstance(issues, list) or any(
[not isinstance(issue, dict) for issue in issues]
):
raise ValueError('Expected list of dictionaries from Github API.')
# Add the issues to the final list
all_issues.extend(issues)
assert isinstance(params['page'], int)
params['page'] += 1
return all_issues
def _extract_image_urls(self, issue_body: str) -> list[str]:
# Regular expression to match Markdown image syntax ![alt text](image_url)
image_pattern = r'!\[.*?\]\((https?://[^\s)]+)\)'
return re.findall(image_pattern, issue_body)
def _extract_issue_references(self, body: str) -> list[int]:
# First, remove code blocks as they may contain false positives
body = re.sub(r'```.*?```', '', body, flags=re.DOTALL)
# Remove inline code
body = re.sub(r'`[^`]*`', '', body)
# Remove URLs that contain hash symbols
body = re.sub(r'https?://[^\s)]*#\d+[^\s)]*', '', body)
# Now extract issue numbers, making sure they're not part of other text
# The pattern matches #number that:
# 1. Is at the start of text or after whitespace/punctuation
# 2. Is followed by whitespace, punctuation, or end of text
# 3. Is not part of a URL
pattern = r'(?:^|[\s\[({]|[^\w#])#(\d+)(?=[\s,.\])}]|$)'
return [int(match) for match in re.findall(pattern, body)]
def _get_issue_comments(
self, issue_number: int, comment_id: int | None = None
) -> list[str] | None:
"""Retrieve comments for a specific issue from Github.
Args:
issue_number: The ID of the issue to get comments for
comment_id: The ID of a single comment, if provided, otherwise all comments
"""
url = f'https://api.github.com/repos/{self.owner}/{self.repo}/issues/{issue_number}/comments'
headers = {
'Authorization': f'token {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
params = {'per_page': 100, 'page': 1}
all_comments = []
# Get comments, page by page
while True:
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
comments = response.json()
if not comments:
break
# If a single comment ID is provided, return only that comment
if comment_id:
matching_comment = next(
(
comment['body']
for comment in comments
if comment['id'] == comment_id
),
None,
)
if matching_comment:
return [matching_comment]
else:
# Otherwise, return all comments
all_comments.extend([comment['body'] for comment in comments])
params['page'] += 1
return all_comments if all_comments else None
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[GithubIssue]:
"""Download issues from Github.
Args:
issue_numbers: The numbers of the issues to download
comment_id: The ID of a single comment, if provided, otherwise all comments
Returns:
List of Github issues.
"""
if not issue_numbers:
raise ValueError('Unspecified issue number')
all_issues = self._download_issues_from_github()
logger.info(f'Limiting resolving to issues {issue_numbers}.')
all_issues = [
issue
for issue in all_issues
if issue['number'] in issue_numbers and 'pull_request' not in issue
]
if len(issue_numbers) == 1 and not all_issues:
raise ValueError(f'Issue {issue_numbers[0]} not found')
converted_issues = []
for issue in all_issues:
# Check for required fields (number and title)
if any([issue.get(key) is None for key in ['number', 'title']]):
logger.warning(
f'Skipping issue {issue} as it is missing number or title.'
)
continue
# Handle empty body by using empty string
if issue.get('body') is None:
issue['body'] = ''
# Get issue thread comments
thread_comments = self._get_issue_comments(
issue['number'], comment_id=comment_id
)
# Convert empty lists to None for optional fields
issue_details = GithubIssue(
owner=self.owner,
repo=self.repo,
number=issue['number'],
title=issue['title'],
body=issue['body'],
thread_comments=thread_comments,
review_comments=None, # Initialize review comments as None for regular issues
)
converted_issues.append(issue_details)
return converted_issues
def get_instruction(
self,
issue: GithubIssue,
prompt_template: str,
repo_instruction: str | None = None,
) -> tuple[str, list[str]]:
"""Generate instruction for the agent.
Args:
issue: The issue to generate instruction for
prompt_template: The prompt template to use
repo_instruction: The repository instruction if it exists
"""
# Format thread comments if they exist
thread_context = ''
if issue.thread_comments:
thread_context = '\n\nIssue Thread Comments:\n' + '\n---\n'.join(
issue.thread_comments
)
# Extract image URLs from the issue body and thread comments
images = []
images.extend(self._extract_image_urls(issue.body))
images.extend(self._extract_image_urls(thread_context))
template = jinja2.Template(prompt_template)
return (
template.render(
body=issue.title + '\n\n' + issue.body + thread_context,
repo_instruction=repo_instruction,
),
images,
)
def guess_success(
self, issue: GithubIssue, history: list[Event], git_patch: str | None = None
) -> tuple[bool, None | list[bool], str]:
"""Guess if the issue is fixed based on the history and the issue description.
Args:
issue: The issue to check
history: The agent's history
git_patch: Optional git patch showing the changes made
"""
last_message = history[-1].message
# Include thread comments in the prompt if they exist
issue_context = issue.body
if issue.thread_comments:
issue_context += '\n\nIssue Thread Comments:\n' + '\n---\n'.join(
issue.thread_comments
)
# Prepare the prompt
with open(
os.path.join(
os.path.dirname(__file__),
'prompts/guess_success/issue-success-check.jinja',
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issue_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
# Get the LLM response and check for 'success' and 'explanation' in the answer
response = self.llm.completion(messages=[{'role': 'user', 'content': prompt}])
answer = response.choices[0].message.content.strip()
pattern = r'--- success\n*(true|false)\n*--- explanation*\n((?:.|\n)*)'
match = re.search(pattern, answer)
if match:
return match.group(1).lower() == 'true', None, match.group(2)
return False, None, f'Failed to decode answer from LLM response: {answer}'
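# Hedged illustration (not part of the handler above): the success check expects the
# LLM to answer in a '--- success / --- explanation' layout. A minimal, self-contained
# sketch of that parsing step, using an invented answer string:
import re

sample_answer = (
    '--- success\n'
    'true\n'
    '--- explanation\n'
    'The missing null check was added and the reproduction script now passes.'
)
pattern = r'--- success\n*(true|false)\n*--- explanation*\n((?:.|\n)*)'
match = re.search(pattern, sample_answer)
if match:
    print(match.group(1).lower() == 'true')  # True
    print(match.group(2))  # the free-form explanation text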
class PRHandler(IssueHandler):
issue_type: ClassVar[str] = 'pr'
def __init__(self, owner: str, repo: str, token: str, llm_config: LLMConfig):
super().__init__(owner, repo, token, llm_config)
self.download_url = 'https://api.github.com/repos/{}/{}/pulls'
def __download_pr_metadata(
self, pull_number: int, comment_id: int | None = None
) -> tuple[list[str], list[int], list[str], list[ReviewThread], list[str]]:
"""Run a GraphQL query against the GitHub API for information.
Retrieves information about:
1. unresolved review comments
2. referenced issues the pull request would close
Args:
pull_number: The number of the pull request to query.
comment_id: Optional ID of a specific comment to focus on.
Returns:
A tuple of (closing issue bodies, closing issue numbers, review bodies, unresolved review threads, thread IDs).
"""
# Using graphql as REST API doesn't indicate resolved status for review comments
# TODO: grabbing the first 10 issues, 100 review threads, and 100 comments; add pagination to retrieve all
query = """
query($owner: String!, $repo: String!, $pr: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $pr) {
closingIssuesReferences(first: 10) {
edges {
node {
body
number
}
}
}
url
reviews(first: 100) {
nodes {
body
state
fullDatabaseId
}
}
reviewThreads(first: 100) {
edges{
node{
id
isResolved
comments(first: 100) {
totalCount
nodes {
body
path
fullDatabaseId
}
}
}
}
}
}
}
}
"""
variables = {'owner': self.owner, 'repo': self.repo, 'pr': pull_number}
# Run the query
url = 'https://api.github.com/graphql'
headers = {
'Authorization': f'Bearer {self.token}',
'Content-Type': 'application/json',
}
response = requests.post(
url, json={'query': query, 'variables': variables}, headers=headers
)
response.raise_for_status()
response_json = response.json()
# Parse the response to get closing issue references and unresolved review comments
pr_data = (
response_json.get('data', {}).get('repository', {}).get('pullRequest', {})
)
# Get closing issues
closing_issues = pr_data.get('closingIssuesReferences', {}).get('edges', [])
closing_issues_bodies = [issue['node']['body'] for issue in closing_issues]
closing_issue_numbers = [
issue['node']['number'] for issue in closing_issues
] # Extract issue numbers
# Get review comments
reviews = pr_data.get('reviews', {}).get('nodes', [])
if comment_id is not None:
reviews = [
review
for review in reviews
if int(review['fullDatabaseId']) == comment_id
]
review_bodies = [review['body'] for review in reviews]
# Get unresolved review threads
review_threads = []
thread_ids = [] # Store thread IDs; agent replies to the thread
raw_review_threads = pr_data.get('reviewThreads', {}).get('edges', [])
for thread in raw_review_threads:
node = thread.get('node', {})
if not node.get(
'isResolved', True
): # Check if the review thread is unresolved
id = node.get('id')
thread_contains_comment_id = False
my_review_threads = node.get('comments', {}).get('nodes', [])
message = ''
files = []
for i, review_thread in enumerate(my_review_threads):
if (
comment_id is not None
and int(review_thread['fullDatabaseId']) == comment_id
):
thread_contains_comment_id = True
if (
i == len(my_review_threads) - 1
): # Check if it's the last comment in the thread
if len(my_review_threads) > 1:
message += '---\n' # Add "---" before the last message if there's more than one comment
message += 'latest feedback:\n' + review_thread['body'] + '\n'
else:
message += (
review_thread['body'] + '\n'
) # Add each comment on a new line
# Source files on which the comments were made
file = review_thread.get('path')
if file and file not in files:
files.append(file)
# If the comment ID is not provided or the thread contains the comment ID, add the thread to the list
if comment_id is None or thread_contains_comment_id:
unresolved_thread = ReviewThread(comment=message, files=files)
review_threads.append(unresolved_thread)
thread_ids.append(id)
return (
closing_issues_bodies,
closing_issue_numbers,
review_bodies,
review_threads,
thread_ids,
)
# Override processing of downloaded issues
def _get_pr_comments(
self, pr_number: int, comment_id: int | None = None
) -> list[str] | None:
"""Download comments for a specific pull request from Github."""
url = f'https://api.github.com/repos/{self.owner}/{self.repo}/issues/{pr_number}/comments'
headers = {
'Authorization': f'token {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
params = {'per_page': 100, 'page': 1}
all_comments = []
while True:
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
comments = response.json()
if not comments:
break
if comment_id is not None:
matching_comment = next(
(
comment['body']
for comment in comments
if comment['id'] == comment_id
),
None,
)
if matching_comment:
return [matching_comment]
else:
all_comments.extend([comment['body'] for comment in comments])
params['page'] += 1
return all_comments if all_comments else None
def __get_context_from_external_issues_references(
self,
closing_issues: list[str],
closing_issue_numbers: list[int],
issue_body: str,
review_comments: list[str],
review_threads: list[ReviewThread],
thread_comments: list[str] | None,
):
new_issue_references = []
if issue_body:
new_issue_references.extend(self._extract_issue_references(issue_body))
if review_comments:
for comment in review_comments:
new_issue_references.extend(self._extract_issue_references(comment))
if review_threads:
for review_thread in review_threads:
new_issue_references.extend(
self._extract_issue_references(review_thread.comment)
)
if thread_comments:
for thread_comment in thread_comments:
new_issue_references.extend(
self._extract_issue_references(thread_comment)
)
non_duplicate_references = set(new_issue_references)
unique_issue_references = non_duplicate_references.difference(
closing_issue_numbers
)
for issue_number in unique_issue_references:
try:
url = f'https://api.github.com/repos/{self.owner}/{self.repo}/issues/{issue_number}'
headers = {
'Authorization': f'Bearer {self.token}',
'Accept': 'application/vnd.github.v3+json',
}
response = requests.get(url, headers=headers)
response.raise_for_status()
issue_data = response.json()
issue_body = issue_data.get('body', '')
if issue_body:
closing_issues.append(issue_body)
except requests.exceptions.RequestException as e:
logger.warning(f'Failed to fetch issue {issue_number}: {str(e)}')
return closing_issues
def get_converted_issues(
self, issue_numbers: list[int] | None = None, comment_id: int | None = None
) -> list[GithubIssue]:
if not issue_numbers:
raise ValueError('Unspecified issue numbers')
all_issues = self._download_issues_from_github()
logger.info(f'Limiting resolving to issues {issue_numbers}.')
all_issues = [issue for issue in all_issues if issue['number'] in issue_numbers]
converted_issues = []
for issue in all_issues:
# For PRs, body can be None
if any([issue.get(key) is None for key in ['number', 'title']]):
logger.warning(f'Skipping #{issue} as it is missing number or title.')
continue
# Handle None body for PRs
body = issue.get('body') if issue.get('body') is not None else ''
(
closing_issues,
closing_issues_numbers,
review_comments,
review_threads,
thread_ids,
) = self.__download_pr_metadata(issue['number'], comment_id=comment_id)
head_branch = issue['head']['ref']
# Get PR thread comments
thread_comments = self._get_pr_comments(
issue['number'], comment_id=comment_id
)
closing_issues = self.__get_context_from_external_issues_references(
closing_issues,
closing_issues_numbers,
body,
review_comments,
review_threads,
thread_comments,
)
issue_details = GithubIssue(
owner=self.owner,
repo=self.repo,
number=issue['number'],
title=issue['title'],
body=body,
closing_issues=closing_issues,
review_comments=review_comments,
review_threads=review_threads,
thread_ids=thread_ids,
head_branch=head_branch,
thread_comments=thread_comments,
)
converted_issues.append(issue_details)
return converted_issues
def get_instruction(
self,
issue: GithubIssue,
prompt_template: str,
repo_instruction: str | None = None,
) -> tuple[str, list[str]]:
"""Generate instruction for the agent."""
template = jinja2.Template(prompt_template)
images = []
issues_str = None
if issue.closing_issues:
issues_str = json.dumps(issue.closing_issues, indent=4)
images.extend(self._extract_image_urls(issues_str))
# Handle PRs with review comments
review_comments_str = None
if issue.review_comments:
review_comments_str = json.dumps(issue.review_comments, indent=4)
images.extend(self._extract_image_urls(review_comments_str))
# Handle PRs with file-specific review comments
review_thread_str = None
review_thread_file_str = None
if issue.review_threads:
review_threads = [
review_thread.comment for review_thread in issue.review_threads
]
review_thread_files = []
for review_thread in issue.review_threads:
review_thread_files.extend(review_thread.files)
review_thread_str = json.dumps(review_threads, indent=4)
review_thread_file_str = json.dumps(review_thread_files, indent=4)
images.extend(self._extract_image_urls(review_thread_str))
# Format thread comments if they exist
thread_context = ''
if issue.thread_comments:
thread_context = '\n---\n'.join(issue.thread_comments)
images.extend(self._extract_image_urls(thread_context))
instruction = template.render(
issues=issues_str,
review_comments=review_comments_str,
review_threads=review_thread_str,
files=review_thread_file_str,
thread_context=thread_context,
repo_instruction=repo_instruction,
)
return instruction, images
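# Hedged sketch of the rendering step above: the real templates ship under
# prompts/ in the package; the inline template and values here are invented
# purely to show which variables get injected.
import jinja2

demo_template = jinja2.Template(
    'Feedback:\n{{ review_threads }}\nFiles:\n{{ files }}\n{{ repo_instruction }}'
)
print(
    demo_template.render(
        review_threads='["please add a regression test"]',
        files='["src/app.py"]',
        repo_instruction='Run pre-commit before committing.',
    )
)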
def _check_feedback_with_llm(self, prompt: str) -> tuple[bool, str]:
"""Helper function to check feedback with LLM and parse response."""
response = self.llm.completion(messages=[{'role': 'user', 'content': prompt}])
answer = response.choices[0].message.content.strip()
pattern = r'--- success\n*(true|false)\n*--- explanation*\n((?:.|\n)*)'
match = re.search(pattern, answer)
if match:
return match.group(1).lower() == 'true', match.group(2).strip()
return False, f'Failed to decode answer from LLM response: {answer}'
def _check_review_thread(
self,
review_thread: ReviewThread,
issues_context: str,
last_message: str,
git_patch: str | None = None,
) -> tuple[bool, str]:
"""Check if a review thread's feedback has been addressed."""
files_context = json.dumps(review_thread.files, indent=4)
with open(
os.path.join(
os.path.dirname(__file__),
'prompts/guess_success/pr-feedback-check.jinja',
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issues_context,
feedback=review_thread.comment,
files_context=files_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
return self._check_feedback_with_llm(prompt)
def _check_thread_comments(
self,
thread_comments: list[str],
issues_context: str,
last_message: str,
git_patch: str | None = None,
) -> tuple[bool, str]:
"""Check if thread comments feedback has been addressed."""
thread_context = '\n---\n'.join(thread_comments)
with open(
os.path.join(
os.path.dirname(__file__), 'prompts/guess_success/pr-thread-check.jinja'
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issues_context,
thread_context=thread_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
return self._check_feedback_with_llm(prompt)
def _check_review_comments(
self,
review_comments: list[str],
issues_context: str,
last_message: str,
git_patch: str | None = None,
) -> tuple[bool, str]:
"""Check if review comments feedback has been addressed."""
review_context = '\n---\n'.join(review_comments)
with open(
os.path.join(
os.path.dirname(__file__), 'prompts/guess_success/pr-review-check.jinja'
),
'r',
) as f:
template = jinja2.Template(f.read())
prompt = template.render(
issue_context=issues_context,
review_context=review_context,
last_message=last_message,
git_patch=git_patch or self.default_git_patch,
)
return self._check_feedback_with_llm(prompt)
def guess_success(
self, issue: GithubIssue, history: list[Event], git_patch: str | None = None
) -> tuple[bool, None | list[bool], str]:
"""Guess if the issue is fixed based on the history, issue description and git patch."""
last_message = history[-1].message
issues_context = json.dumps(issue.closing_issues, indent=4)
success_list = []
explanation_list = []
# Handle PRs with file-specific review comments
if issue.review_threads:
for review_thread in issue.review_threads:
if issues_context and last_message:
success, explanation = self._check_review_thread(
review_thread, issues_context, last_message, git_patch
)
else:
success, explanation = False, 'Missing context or message'
success_list.append(success)
explanation_list.append(explanation)
# Handle PRs with only thread comments (no file-specific review comments)
elif issue.thread_comments:
if issue.thread_comments and issues_context and last_message:
success, explanation = self._check_thread_comments(
issue.thread_comments, issues_context, last_message, git_patch
)
else:
success, explanation = (
False,
'Missing thread comments, context or message',
)
success_list.append(success)
explanation_list.append(explanation)
elif issue.review_comments:
# Handle PRs with only review comments (no file-specific review comments or thread comments)
if issue.review_comments and issues_context and last_message:
success, explanation = self._check_review_comments(
issue.review_comments, issues_context, last_message, git_patch
)
else:
success, explanation = (
False,
'Missing review comments, context or message',
)
success_list.append(success)
explanation_list.append(explanation)
else:
# No review comments, thread comments, or file-level review comments found
return False, None, 'No feedback was found to process'
# Return overall success (all must be true) and explanations
if not success_list:
return False, None, 'No feedback was processed'
return all(success_list), success_list, json.dumps(explanation_list)
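# Hedged usage sketch of the two handlers defined in this file (owner, repo and
# token values are placeholders): issues and PRs are downloaded the same way,
# they just carry different context for the agent.
from openhands.core.config import LLMConfig

llm_config = LLMConfig(model='gpt-4o', api_key='sk-placeholder')
issue_handler = IssueHandler('some-owner', 'some-repo', 'ghp_placeholder', llm_config)
issues = issue_handler.get_converted_issues(issue_numbers=[42])

pr_handler = PRHandler('some-owner', 'some-repo', 'ghp_placeholder', llm_config)
prs = pr_handler.get_converted_issues(issue_numbers=[7], comment_id=None)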

View File

@ -13,12 +13,16 @@ from tqdm import tqdm
import openhands
from openhands.core.config import LLMConfig
from openhands.core.logger import openhands_logger as logger
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.resolve_issue import (
issue_handler_factory,
process_issue,
)
from openhands.resolver.resolver_output import ResolverOutput
from openhands.resolver.utils import (
Platform,
identify_token,
)
def cleanup():
@ -51,6 +55,7 @@ async def resolve_issues(
repo: str,
token: str,
username: str,
platform: Platform,
max_iterations: int,
limit_issues: int | None,
num_workers: int,
@ -62,13 +67,13 @@ async def resolve_issues(
repo_instruction: str | None,
issue_numbers: list[int] | None,
) -> None:
"""Resolve multiple github issues.
"""Resolve multiple github or gitlab issues.
Args:
owner: Github owner of the repo.
repo: Github repository to resolve issues in form of `owner/repo`.
token: Github token to access the repository.
username: Github username to access the repository.
owner: Github or Gitlab owner of the repo.
repo: Github or Gitlab repository to resolve issues in form of `owner/repo`.
token: Github or Gitlab token to access the repository.
username: Github or Gitlab username to access the repository.
max_iterations: Maximum number of iterations to run.
limit_issues: Limit the number of issues to resolve.
num_workers: Number of workers to use for parallel processing.
@ -80,10 +85,12 @@ async def resolve_issues(
repo_instruction: Repository instruction to use.
issue_numbers: List of issue numbers to resolve.
"""
issue_handler = issue_handler_factory(issue_type, owner, repo, token, llm_config)
issue_handler = issue_handler_factory(
issue_type, owner, repo, token, llm_config, platform
)
# Load dataset
issues: list[GithubIssue] = issue_handler.get_converted_issues(
issues: list[Issue] = issue_handler.get_converted_issues(
issue_numbers=issue_numbers
)
@ -107,7 +114,7 @@ async def resolve_issues(
[
'git',
'clone',
f'https://{username}:{token}@github.com/{owner}/{repo}',
issue_handler.get_clone_url(),
f'{output_dir}/repo',
]
).decode('utf-8')
@ -188,6 +195,7 @@ async def resolve_issues(
task = update_progress(
process_issue(
issue,
platform,
base_commit,
max_iterations,
llm_config,
@ -221,24 +229,26 @@ async def resolve_issues(
def main():
parser = argparse.ArgumentParser(description='Resolve multiple issues from Github.')
parser = argparse.ArgumentParser(
description='Resolve multiple issues from Github or Gitlab.'
)
parser.add_argument(
'--repo',
type=str,
required=True,
help='Github repository to resolve issues in form of `owner/repo`.',
help='Github or Gitlab repository to resolve issues in form of `owner/repo`.',
)
parser.add_argument(
'--token',
type=str,
default=None,
help='Github token to access the repository.',
help='Github or Gitlab token to access the repository.',
)
parser.add_argument(
'--username',
type=str,
default=None,
help='Github username to access the repository.',
help='Github or Gitlab username to access the repository.',
)
parser.add_argument(
'--runtime-container-image',
@ -323,15 +333,20 @@ def main():
)
owner, repo = my_args.repo.split('/')
token = my_args.token if my_args.token else os.getenv('GITHUB_TOKEN')
username = my_args.username if my_args.username else os.getenv('GITHUB_USERNAME')
token = my_args.token or os.getenv('GITHUB_TOKEN') or os.getenv('GITLAB_TOKEN')
username = my_args.username if my_args.username else os.getenv('GIT_USERNAME')
if not username:
raise ValueError('Github username is required.')
raise ValueError('Username is required.')
if not token:
raise ValueError('Github token is required.')
raise ValueError('Token is required.')
platform = identify_token(token)
if platform == Platform.INVALID:
raise ValueError('Token is invalid.')
api_key = my_args.llm_api_key or os.environ['LLM_API_KEY']
llm_config = LLMConfig(
model=my_args.llm_model or os.environ['LLM_MODEL'],
api_key=str(api_key) if api_key else None,
@ -369,6 +384,7 @@ def main():
repo=repo,
token=token,
username=username,
platform=platform,
runtime_container_image=runtime_container_image,
max_iterations=my_args.max_iterations,
limit_issues=my_args.limit_issues,

View File

@ -24,15 +24,19 @@ from openhands.events.observation import (
Observation,
)
from openhands.events.stream import EventStreamSubscriber
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.issue_definitions import (
IssueHandler,
IssueHandlerInterface,
PRHandler,
from openhands.resolver.interfaces.github import GithubIssueHandler, GithubPRHandler
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler, GitlabPRHandler
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
from openhands.resolver.resolver_output import ResolverOutput
from openhands.resolver.utils import (
Platform,
codeact_user_response,
get_unique_uid,
identify_token,
reset_logger_for_multiprocessing,
)
from openhands.runtime.base import Runtime
@ -43,6 +47,7 @@ AGENT_CLASS = 'CodeActAgent'
def initialize_runtime(
runtime: Runtime,
platform: Platform,
):
"""Initialize the runtime for the agent.
@ -61,6 +66,12 @@ def initialize_runtime(
if not isinstance(obs, CmdOutputObservation) or obs.exit_code != 0:
raise RuntimeError(f'Failed to change directory to /workspace.\n{obs}')
if platform == Platform.GITLAB and os.getenv('GITLAB_CI') == 'true':
action = CmdRunAction(command='sudo chown -R 1001:0 /workspace/*')
logger.info(action, extra={'msg_type': 'ACTION'})
obs = runtime.run_action(action)
logger.info(obs, extra={'msg_type': 'OBSERVATION'})
action = CmdRunAction(command='git config --global core.pager ""')
logger.info(action, extra={'msg_type': 'ACTION'})
obs = runtime.run_action(action)
@ -72,6 +83,7 @@ def initialize_runtime(
async def complete_runtime(
runtime: Runtime,
base_commit: str,
platform: Platform,
) -> dict[str, Any]:
"""Complete the runtime for the agent.
@ -107,7 +119,11 @@ async def complete_runtime(
if not isinstance(obs, CmdOutputObservation) or obs.exit_code != 0:
raise RuntimeError(f'Failed to set git config. Observation: {obs}')
action = CmdRunAction(command='git add -A')
if platform == Platform.GITLAB and os.getenv('GITLAB_CI') == 'true':
action = CmdRunAction(command='sudo git add -A')
else:
action = CmdRunAction(command='git add -A')
logger.info(action, extra={'msg_type': 'ACTION'})
obs = runtime.run_action(action)
logger.info(obs, extra={'msg_type': 'OBSERVATION'})
@ -143,14 +159,15 @@ async def complete_runtime(
async def process_issue(
issue: GithubIssue,
issue: Issue,
platform: Platform,
base_commit: str,
max_iterations: int,
llm_config: LLMConfig,
output_dir: str,
runtime_container_image: str | None,
prompt_template: str,
issue_handler: IssueHandlerInterface,
issue_handler: ServiceContextIssue | ServiceContextPR,
repo_instruction: str | None = None,
reset_logger: bool = False,
) -> ResolverOutput:
@ -172,6 +189,16 @@ async def process_issue(
shutil.rmtree(workspace_base)
shutil.copytree(os.path.join(output_dir, 'repo'), workspace_base)
# This code looks unnecessary because these are default values in the config class
# they're set by default if nothing else overrides them
# FIXME we should remove them here
kwargs = {}
if os.getenv('GITLAB_CI') == 'True':
kwargs['local_runtime_url'] = os.getenv('LOCAL_RUNTIME_URL', 'http://localhost')
user_id = os.getuid() if hasattr(os, 'getuid') else 1000
if user_id == 0:
kwargs['user_id'] = get_unique_uid()
config = AppConfig(
default_agent='CodeActAgent',
runtime='docker',
@ -183,6 +210,7 @@ async def process_issue(
use_host_network=False,
# large enough timeout, since some testcases take very long to run
timeout=300,
**kwargs,
),
# do not mount workspace
workspace_base=workspace_base,
@ -199,7 +227,7 @@ async def process_issue(
runtime.event_stream.subscribe(EventStreamSubscriber.MAIN, on_event, str(uuid4()))
initialize_runtime(runtime)
initialize_runtime(runtime, platform)
instruction, images_urls = issue_handler.get_instruction(
issue, prompt_template, repo_instruction
@ -222,7 +250,7 @@ async def process_issue(
last_error: str | None = error_msg
# Get git patch
return_val = await complete_runtime(runtime, base_commit)
return_val = await complete_runtime(runtime, base_commit, platform)
git_patch = return_val['git_patch']
logger.info(
f'Got git diff for instance {issue.number}:\n--------\n{git_patch}\n--------'
@ -283,12 +311,32 @@ async def process_issue(
def issue_handler_factory(
issue_type: str, owner: str, repo: str, token: str, llm_config: LLMConfig
) -> IssueHandlerInterface:
issue_type: str,
owner: str,
repo: str,
token: str,
llm_config: LLMConfig,
platform: Platform,
username: str | None = None,
) -> ServiceContextIssue | ServiceContextPR:
if issue_type == 'issue':
return IssueHandler(owner, repo, token, llm_config)
if platform == Platform.GITHUB:
return ServiceContextIssue(
GithubIssueHandler(owner, repo, token, username), llm_config
)
else: # platform == Platform.GITLAB
return ServiceContextIssue(
GitlabIssueHandler(owner, repo, token, username), llm_config
)
elif issue_type == 'pr':
return PRHandler(owner, repo, token, llm_config)
if platform == Platform.GITHUB:
return ServiceContextPR(
GithubPRHandler(owner, repo, token, username), llm_config
)
else: # platform == Platform.GITLAB
return ServiceContextPR(
GitlabPRHandler(owner, repo, token, username), llm_config
)
else:
raise ValueError(f'Invalid issue type: {issue_type}')
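# Hedged usage sketch of the new factory (all values are placeholders): the
# detected platform now selects between the GitHub and GitLab strategies.
llm_config = LLMConfig(model='gpt-4o', api_key='sk-placeholder')
platform = Platform.GITLAB  # or identify_token(token)
handler = issue_handler_factory(
    'issue', 'some-owner', 'some-repo', 'glpat-placeholder', llm_config, platform
)
issues = handler.get_converted_issues(issue_numbers=[42])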
@ -298,6 +346,7 @@ async def resolve_issue(
repo: str,
token: str,
username: str,
platform: Platform,
max_iterations: int,
output_dir: str,
llm_config: LLMConfig,
@ -309,13 +358,14 @@ async def resolve_issue(
comment_id: int | None,
reset_logger: bool = False,
) -> None:
"""Resolve a single github issue.
"""Resolve a single issue.
Args:
owner: Github owner of the repo.
repo: Github repository to resolve issues in form of `owner/repo`.
token: Github token to access the repository.
username: Github username to access the repository.
owner: owner of the repo.
repo: repository to resolve issues in form of `owner/repo`.
token: token to access the repository.
username: username to access the repository.
platform: platform of the repository.
max_iterations: Maximum number of iterations to run.
output_dir: Output directory to write the results.
llm_config: Configuration for the language model.
@ -328,10 +378,12 @@ async def resolve_issue(
reset_logger: Whether to reset the logger for multiprocessing.
"""
issue_handler = issue_handler_factory(issue_type, owner, repo, token, llm_config)
issue_handler = issue_handler_factory(
issue_type, owner, repo, token, llm_config, platform, username
)
# Load dataset
issues: list[GithubIssue] = issue_handler.get_converted_issues(
issues: list[Issue] = issue_handler.get_converted_issues(
issue_numbers=[issue_number], comment_id=comment_id
)
@ -377,7 +429,7 @@ async def resolve_issue(
[
'git',
'clone',
f'https://{username}:{token}@github.com/{owner}/{repo}',
issue_handler.get_clone_url(),
f'{output_dir}/repo',
]
).decode('utf-8')
@ -453,6 +505,7 @@ async def resolve_issue(
output = await process_issue(
issue,
platform,
base_commit,
max_iterations,
llm_config,
@ -480,24 +533,24 @@ def main():
else:
return int(value)
parser = argparse.ArgumentParser(description='Resolve a single issue from Github.')
parser = argparse.ArgumentParser(description='Resolve a single issue.')
parser.add_argument(
'--repo',
type=str,
required=True,
help='Github repository to resolve issues in form of `owner/repo`.',
help='repository to resolve issues in form of `owner/repo`.',
)
parser.add_argument(
'--token',
type=str,
default=None,
help='Github token to access the repository.',
help='token to access the repository.',
)
parser.add_argument(
'--username',
type=str,
default=None,
help='Github username to access the repository.',
help='username to access the repository.',
)
parser.add_argument(
'--runtime-container-image',
@ -581,14 +634,22 @@ def main():
f'ghcr.io/all-hands-ai/runtime:{openhands.__version__}-nikolaik'
)
owner, repo = my_args.repo.split('/')
token = my_args.token if my_args.token else os.getenv('GITHUB_TOKEN')
username = my_args.username if my_args.username else os.getenv('GITHUB_USERNAME')
parts = my_args.repo.rsplit('/', 1)
if len(parts) < 2:
raise ValueError('Invalid repo name')
owner, repo = parts
token = my_args.token or os.getenv('GITHUB_TOKEN') or os.getenv('GITLAB_TOKEN')
username = my_args.username if my_args.username else os.getenv('GIT_USERNAME')
if not username:
raise ValueError('Github username is required.')
raise ValueError('Username is required.')
if not token:
raise ValueError('Github token is required.')
raise ValueError('Token is required.')
platform = identify_token(token)
if platform == Platform.INVALID:
raise ValueError('Token is invalid.')
api_key = my_args.llm_api_key or os.environ['LLM_API_KEY']
llm_config = LLMConfig(
@ -624,6 +685,7 @@ def main():
repo=repo,
token=token,
username=username,
platform=platform,
runtime_container_image=runtime_container_image,
max_iterations=my_args.max_iterations,
output_dir=my_args.output_dir,

View File

@ -2,12 +2,12 @@ from typing import Any
from litellm import BaseModel
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.interfaces.issue import Issue
class ResolverOutput(BaseModel):
# NOTE: User-specified
issue: GithubIssue
issue: Issue
issue_type: str
instruction: str
base_commit: str

View File

@ -5,18 +5,24 @@ import shutil
import subprocess
import jinja2
import requests
from openhands.core.config import LLMConfig
from openhands.core.logger import openhands_logger as logger
from openhands.llm.llm import LLM
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.interfaces.github import GithubIssueHandler
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.interfaces.issue_definitions import ServiceContextIssue
from openhands.resolver.io_utils import (
load_all_resolver_outputs,
load_single_resolver_output,
)
from openhands.resolver.patching import apply_diff, parse_patch
from openhands.resolver.resolver_output import ResolverOutput
from openhands.resolver.utils import (
Platform,
identify_token,
)
def apply_patch(repo_dir: str, patch: str) -> None:
@ -153,7 +159,7 @@ def initialize_repo(
return dest_dir
def make_commit(repo_dir: str, issue: GithubIssue, issue_type: str) -> None:
def make_commit(repo_dir: str, issue: Issue, issue_type: str) -> None:
"""Make a commit with the changes to the repository.
Args:
@ -214,25 +220,11 @@ def make_commit(repo_dir: str, issue: GithubIssue, issue_type: str) -> None:
raise RuntimeError(f'Failed to commit changes: {result}')
def branch_exists(base_url: str, branch_name: str, headers: dict) -> bool:
"""Check if a branch exists in the GitHub repository.
Args:
base_url: The base URL of the GitHub repository API
branch_name: The name of the branch to check
headers: The HTTP headers to use for authentication
"""
print(f'Checking if branch {branch_name} exists...')
response = requests.get(f'{base_url}/branches/{branch_name}', headers=headers)
exists = response.status_code == 200
print(f'Branch {branch_name} exists: {exists}')
return exists
def send_pull_request(
github_issue: GithubIssue,
github_token: str,
github_username: str | None,
issue: Issue,
token: str,
username: str | None,
platform: Platform,
patch_dir: str,
pr_type: str,
fork_owner: str | None = None,
@ -241,53 +233,49 @@ def send_pull_request(
reviewer: str | None = None,
pr_title: str | None = None,
) -> str:
"""Send a pull request to a GitHub repository.
"""Send a pull request to a GitHub or Gitlab repository.
Args:
github_issue: The issue to send the pull request for
github_token: The GitHub token to use for authentication
github_username: The GitHub username, if provided
issue: The issue to send the pull request for
token: The GitHub or Gitlab token to use for authentication
username: The GitHub or Gitlab username, if provided
platform: The platform of the repository.
patch_dir: The directory containing the patches to apply
pr_type: The type: branch (no PR created), draft or ready (regular PR created)
fork_owner: The owner of the fork to push changes to (if different from the original repo owner)
additional_message: The additional messages to post as a comment on the PR in json list format
target_branch: The target branch to create the pull request against (defaults to repository default branch)
reviewer: The GitHub username of the reviewer to assign
reviewer: The GitHub or Gitlab username of the reviewer to assign
pr_title: Custom title for the pull request (optional)
"""
if pr_type not in ['branch', 'draft', 'ready']:
raise ValueError(f'Invalid pr_type: {pr_type}')
# Set up headers and base URL for GitHub API
headers = {
'Authorization': f'token {github_token}',
'Accept': 'application/vnd.github.v3+json',
}
base_url = f'https://api.github.com/repos/{github_issue.owner}/{github_issue.repo}'
handler = None
if platform == Platform.GITHUB:
handler = ServiceContextIssue(
GithubIssueHandler(issue.owner, issue.repo, token, username), None
)
else: # platform == Platform.GITLAB
handler = ServiceContextIssue(
GitlabIssueHandler(issue.owner, issue.repo, token, username), None
)
# Create a new branch with a unique name
base_branch_name = f'openhands-fix-issue-{github_issue.number}'
branch_name = base_branch_name
attempt = 1
# Find a unique branch name
print('Checking if branch exists...')
while branch_exists(base_url, branch_name, headers):
attempt += 1
branch_name = f'{base_branch_name}-try{attempt}'
base_branch_name = f'openhands-fix-issue-{issue.number}'
branch_name = handler.get_branch_name(
base_branch_name=base_branch_name,
)
# Get the default branch or use specified target branch
print('Getting base branch...')
if target_branch:
base_branch = target_branch
# Verify the target branch exists
response = requests.get(f'{base_url}/branches/{target_branch}', headers=headers)
if response.status_code != 200:
exists = handler.branch_exists(branch_name=target_branch)
if not exists:
raise ValueError(f'Target branch {target_branch} does not exist')
else:
response = requests.get(f'{base_url}', headers=headers)
response.raise_for_status()
base_branch = response.json()['default_branch']
base_branch = handler.get_default_branch_name()
print(f'Base branch: {base_branch}')
# Create and checkout the new branch
@ -304,16 +292,12 @@ def send_pull_request(
)
# Determine the repository to push to (original or fork)
push_owner = fork_owner if fork_owner else github_issue.owner
push_repo = github_issue.repo
push_owner = fork_owner if fork_owner else issue.owner
handler._strategy.set_owner(push_owner)
print('Pushing changes...')
username_and_token = (
f'{github_username}:{github_token}'
if github_username
else f'x-auth-token:{github_token}'
)
push_url = f'https://{username_and_token}@github.com/{push_owner}/{push_repo}.git'
push_url = handler.get_clone_url()
result = subprocess.run(
['git', '-C', patch_dir, 'push', push_url, branch_name],
capture_output=True,
@ -325,11 +309,9 @@ def send_pull_request(
# Prepare the PR data: title and body
final_pr_title = (
pr_title
if pr_title
else f'Fix issue #{github_issue.number}: {github_issue.title}'
pr_title if pr_title else f'Fix issue #{issue.number}: {issue.title}'
)
pr_body = f'This pull request fixes #{github_issue.number}.'
pr_body = f'This pull request fixes #{issue.number}.'
if additional_message:
pr_body += f'\n\n{additional_message}'
pr_body += '\n\nAutomatic fix generated by [OpenHands](https://github.com/All-Hands-AI/OpenHands/) 🙌'
@ -337,41 +319,25 @@ def send_pull_request(
# If we are not sending a PR, we can finish early and return the
# URL for the user to open a PR manually
if pr_type == 'branch':
url = f'https://github.com/{push_owner}/{github_issue.repo}/compare/{branch_name}?expand=1'
url = handler.get_compare_url(branch_name)
else:
# Prepare the PR for the GitHub API
data = {
'title': final_pr_title, # No need to escape title for GitHub API
'body': pr_body,
'head': branch_name,
'base': base_branch,
'title': final_pr_title,
('body' if platform == Platform.GITHUB else 'description'): pr_body,
('head' if platform == Platform.GITHUB else 'source_branch'): branch_name,
('base' if platform == Platform.GITHUB else 'target_branch'): base_branch,
'draft': pr_type == 'draft',
}
# Send the PR and get its URL to tell the user
response = requests.post(f'{base_url}/pulls', headers=headers, json=data)
if response.status_code == 403:
raise RuntimeError(
'Failed to create pull request due to missing permissions. '
'Make sure that the provided token has push permissions for the repository.'
)
response.raise_for_status()
pr_data = response.json()
pr_data = handler.create_pull_request(data)
url = pr_data['html_url']
print(pr_data)
# Request review if a reviewer was specified
if reviewer and pr_type != 'branch':
review_data = {'reviewers': [reviewer]}
review_response = requests.post(
f'{base_url}/pulls/{pr_data["number"]}/requested_reviewers',
headers=headers,
json=review_data,
)
if review_response.status_code != 201:
print(
f'Warning: Failed to request review from {reviewer}: {review_response.text}'
)
url = pr_data['html_url']
number = pr_data['number']
handler.request_reviewers(reviewer, number)
print(
f'{pr_type} created: {url}\n\n--- Title: {final_pr_title}\n\n--- Body:\n{pr_body}'
@ -380,74 +346,11 @@ def send_pull_request(
return url
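# Hedged illustration of the platform-dependent payload assembled above (field
# values are invented): GitHub's pulls API reads body/head/base, while GitLab's
# merge request API reads description/source_branch/target_branch.
platform = Platform.GITLAB
data = {
    'title': 'Fix issue #12: crash on empty config',
    ('body' if platform == Platform.GITHUB else 'description'): 'This pull request fixes #12.',
    ('head' if platform == Platform.GITHUB else 'source_branch'): 'openhands-fix-issue-12',
    ('base' if platform == Platform.GITHUB else 'target_branch'): 'main',
    'draft': False,
}
print(sorted(data))  # ['description', 'draft', 'source_branch', 'target_branch', 'title']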
def reply_to_comment(github_token: str, comment_id: str, reply: str):
"""Reply to a comment on a GitHub issue or pull request.
Args:
github_token: The GitHub token to use for authentication
comment_id: The ID of the comment to reply to
reply: The reply message to post
"""
# Opting for graphql as REST API doesn't allow reply to replies in comment threads
query = """
mutation($body: String!, $pullRequestReviewThreadId: ID!) {
addPullRequestReviewThreadReply(input: { body: $body, pullRequestReviewThreadId: $pullRequestReviewThreadId }) {
comment {
id
body
createdAt
}
}
}
"""
# Prepare the reply to the comment
comment_reply = f'Openhands fix success summary\n\n\n{reply}'
variables = {'body': comment_reply, 'pullRequestReviewThreadId': comment_id}
url = 'https://api.github.com/graphql'
headers = {
'Authorization': f'Bearer {github_token}',
'Content-Type': 'application/json',
}
# Send the reply to the comment
response = requests.post(
url, json={'query': query, 'variables': variables}, headers=headers
)
response.raise_for_status()
def send_comment_msg(base_url: str, issue_number: int, github_token: str, msg: str):
"""Send a comment message to a GitHub issue or pull request.
Args:
base_url: The base URL of the GitHub repository API
issue_number: The issue or pull request number
github_token: The GitHub token to use for authentication
msg: The message content to post as a comment
"""
# Set up headers for GitHub API
headers = {
'Authorization': f'token {github_token}',
'Accept': 'application/vnd.github.v3+json',
}
# Post a comment on the PR
comment_url = f'{base_url}/issues/{issue_number}/comments'
comment_data = {'body': msg}
comment_response = requests.post(comment_url, headers=headers, json=comment_data)
if comment_response.status_code != 201:
print(
f'Failed to post comment: {comment_response.status_code} {comment_response.text}'
)
else:
print(f'Comment added to the PR: {msg}')
def update_existing_pull_request(
github_issue: GithubIssue,
github_token: str,
github_username: str | None,
issue: Issue,
token: str,
username: str | None,
platform: Platform,
patch_dir: str,
llm_config: LLMConfig,
comment_message: str | None = None,
@ -456,23 +359,34 @@ def update_existing_pull_request(
"""Update an existing pull request with the new patches.
Args:
github_issue: The issue to update.
github_token: The GitHub token to use for authentication.
github_username: The GitHub username to use for authentication.
issue: The issue to update.
token: The token to use for authentication.
username: The username to use for authentication.
platform: The platform of the repository.
patch_dir: The directory containing the patches to apply.
llm_config: The LLM configuration to use for summarizing changes.
comment_message: The main message to post as a comment on the PR.
additional_message: The additional messages to post as a comment on the PR in json list format.
"""
# Set up base URL for GitHub API
base_url = f'https://api.github.com/repos/{github_issue.owner}/{github_issue.repo}'
branch_name = github_issue.head_branch
# Set up headers and base URL for GitHub or GitLab API
handler = None
if platform == Platform.GITHUB:
handler = ServiceContextIssue(
GithubIssueHandler(issue.owner, issue.repo, token, username), llm_config
)
else: # platform == Platform.GITLAB
handler = ServiceContextIssue(
GitlabIssueHandler(issue.owner, issue.repo, token, username), llm_config
)
branch_name = issue.head_branch
# Prepare the push command
push_command = (
f'git -C {patch_dir} push '
f'https://{github_username}:{github_token}@github.com/'
f'{github_issue.owner}/{github_issue.repo}.git {branch_name}'
f'{handler.get_authorize_url()}'
f'{issue.owner}/{issue.repo}.git {branch_name}'
)
# Push the changes to the existing branch
@ -481,7 +395,7 @@ def update_existing_pull_request(
print(f'Error pushing changes: {result.stderr}')
raise RuntimeError('Failed to push changes to the remote repository')
pr_url = f'https://github.com/{github_issue.owner}/{github_issue.repo}/pull/{github_issue.number}'
pr_url = handler.get_pull_url(issue.number)
print(f'Updated pull request {pr_url} with new patches.')
# Generate a summary of all comment success indicators for PR message
@ -517,18 +431,18 @@ def update_existing_pull_request(
# Post a comment on the PR
if comment_message:
send_comment_msg(base_url, github_issue.number, github_token, comment_message)
handler.send_comment_msg(issue.number, comment_message)
# Reply to each unresolved comment thread
if additional_message and github_issue.thread_ids:
if additional_message and issue.thread_ids:
try:
explanations = json.loads(additional_message)
for count, reply_comment in enumerate(explanations):
comment_id = github_issue.thread_ids[count]
reply_to_comment(github_token, comment_id, reply_comment)
comment_id = issue.thread_ids[count]
handler.reply_to_comment(issue.number, comment_id, reply_comment)
except (json.JSONDecodeError, TypeError):
msg = f'Error occurred when replying to threads; success explanations {additional_message}'
send_comment_msg(base_url, github_issue.number, github_token, msg)
handler.send_comment_msg(issue.number, msg)
return pr_url
@ -536,8 +450,9 @@ def update_existing_pull_request(
def process_single_issue(
output_dir: str,
resolver_output: ResolverOutput,
github_token: str,
github_username: str,
token: str,
username: str,
platform: Platform,
pr_type: str,
llm_config: LLMConfig,
fork_owner: str | None,
@ -577,18 +492,20 @@ def process_single_issue(
if issue_type == 'pr':
update_existing_pull_request(
github_issue=resolver_output.issue,
github_token=github_token,
github_username=github_username,
issue=resolver_output.issue,
token=token,
username=username,
platform=platform,
patch_dir=patched_repo_dir,
additional_message=resolver_output.result_explanation,
llm_config=llm_config,
)
else:
send_pull_request(
github_issue=resolver_output.issue,
github_token=github_token,
github_username=github_username,
issue=resolver_output.issue,
token=token,
username=username,
platform=platform,
patch_dir=patched_repo_dir,
pr_type=pr_type,
fork_owner=fork_owner,
@ -601,8 +518,9 @@ def process_single_issue(
def process_all_successful_issues(
output_dir: str,
github_token: str,
github_username: str,
token: str,
username: str,
platform: Platform,
pr_type: str,
llm_config: LLMConfig,
fork_owner: str | None,
@ -614,8 +532,9 @@ def process_all_successful_issues(
process_single_issue(
output_dir,
resolver_output,
github_token,
github_username,
token,
username,
platform,
pr_type,
llm_config,
fork_owner,
@ -625,18 +544,20 @@ def process_all_successful_issues(
def main():
parser = argparse.ArgumentParser(description='Send a pull request to Github.')
parser.add_argument(
'--github-token',
type=str,
default=None,
help='Github token to access the repository.',
parser = argparse.ArgumentParser(
description='Send a pull request to Github or Gitlab.'
)
parser.add_argument(
'--github-username',
'--token',
type=str,
default=None,
help='Github username to access the repository.',
help='token to access the repository.',
)
parser.add_argument(
'--username',
type=str,
default=None,
help='username to access the repository.',
)
parser.add_argument(
'--output-dir',
@ -695,7 +616,7 @@ def main():
parser.add_argument(
'--reviewer',
type=str,
help='GitHub username of the person to request review from',
help='GitHub or GitLab username of the person to request review from',
default=None,
)
parser.add_argument(
@ -706,18 +627,16 @@ def main():
)
my_args = parser.parse_args()
github_token = (
my_args.github_token if my_args.github_token else os.getenv('GITHUB_TOKEN')
)
if not github_token:
token = my_args.token or os.getenv('GITHUB_TOKEN') or os.getenv('GITLAB_TOKEN')
if not token:
raise ValueError(
'Github token is not set, set via --github-token or GITHUB_TOKEN environment variable.'
'token is not set, set via --token or GITHUB_TOKEN or GITLAB_TOKEN environment variable.'
)
github_username = (
my_args.github_username
if my_args.github_username
else os.getenv('GITHUB_USERNAME')
)
username = my_args.username if my_args.username else os.getenv('GIT_USERNAME')
platform = identify_token(token)
if platform == Platform.INVALID:
raise ValueError('Token is invalid.')
api_key = my_args.llm_api_key or os.environ['LLM_API_KEY']
llm_config = LLMConfig(
@ -730,12 +649,13 @@ def main():
raise ValueError(f'Output directory {my_args.output_dir} does not exist.')
if my_args.issue_number == 'all_successful':
if not github_username:
raise ValueError('Github username is required.')
if not username:
raise ValueError('username is required.')
process_all_successful_issues(
my_args.output_dir,
github_token,
github_username,
token,
username,
platform,
my_args.pr_type,
llm_config,
my_args.fork_owner,
@ -746,13 +666,14 @@ def main():
issue_number = int(my_args.issue_number)
output_path = os.path.join(my_args.output_dir, 'output.jsonl')
resolver_output = load_single_resolver_output(output_path, issue_number)
if not github_username:
raise ValueError('Github username is required.')
if not username:
raise ValueError('username is required.')
process_single_issue(
my_args.output_dir,
resolver_output,
github_token,
github_username,
token,
username,
platform,
my_args.pr_type,
llm_config,
my_args.fork_owner,

View File

@ -2,9 +2,12 @@ import json
import logging
import multiprocessing as mp
import os
import re
from enum import Enum
from typing import Callable
import pandas as pd
import requests
from openhands.controller.state.state import State
from openhands.core.logger import get_console_handler
@ -13,6 +16,47 @@ from openhands.events.action import Action
from openhands.events.action.message import MessageAction
class Platform(Enum):
INVALID = 0
GITHUB = 1
GITLAB = 2
def identify_token(token: str) -> Platform:
"""
Identifies whether a token belongs to GitHub or GitLab.
Parameters:
token (str): The personal access token to check.
Returns:
Platform: "GitHub" if the token is valid for GitHub,
"GitLab" if the token is valid for GitLab,
"Invalid" if the token is not recognized by either.
"""
github_url = 'https://api.github.com/user'
github_headers = {'Authorization': f'token {token}'}
try:
github_response = requests.get(github_url, headers=github_headers, timeout=5)
if github_response.status_code == 200:
return Platform.GITHUB
except requests.RequestException as e:
print(f'Error connecting to GitHub API: {e}')
gitlab_url = 'https://gitlab.com/api/v4/user'
gitlab_headers = {'Authorization': f'Bearer {token}'}
try:
gitlab_response = requests.get(gitlab_url, headers=gitlab_headers, timeout=5)
if gitlab_response.status_code == 200:
return Platform.GITLAB
except requests.RequestException as e:
print(f'Error connecting to GitLab API: {e}')
return Platform.INVALID
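# Hedged usage sketch: the resolver entry points call identify_token() on whatever
# token was supplied and stop early on Platform.INVALID (the fallback value below
# is a placeholder).
token = os.getenv('GITHUB_TOKEN') or os.getenv('GITLAB_TOKEN') or 'glpat-placeholder'
platform = identify_token(token)
if platform == Platform.INVALID:
    raise ValueError('Token is invalid.')
print(f'Detected platform: {platform.name}')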
def codeact_user_response(
state: State,
encapsulate_solution: bool = False,
@ -137,3 +181,45 @@ def reset_logger_for_multiprocessing(
logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
)
logger.addHandler(file_handler)
def extract_image_urls(issue_body: str) -> list[str]:
# Regular expression to match Markdown image syntax ![alt text](image_url)
image_pattern = r'!\[.*?\]\((https?://[^\s)]+)\)'
return re.findall(image_pattern, issue_body)
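# Illustrative check (the example body is invented): markdown image links are
# extracted so they can be passed to the agent alongside the prompt text.
example_body = 'Crash screenshot: ![trace](https://example.com/trace.png), logs in `app.log`'
print(extract_image_urls(example_body))  # ['https://example.com/trace.png']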
def extract_issue_references(body: str) -> list[int]:
# First, remove code blocks as they may contain false positives
body = re.sub(r'```.*?```', '', body, flags=re.DOTALL)
# Remove inline code
body = re.sub(r'`[^`]*`', '', body)
# Remove URLs that contain hash symbols
body = re.sub(r'https?://[^\s)]*#\d+[^\s)]*', '', body)
# Now extract issue numbers, making sure they're not part of other text
# The pattern matches #number that:
# 1. Is at the start of text or after whitespace/punctuation
# 2. Is followed by whitespace, punctuation, or end of text
# 3. Is not part of a URL
pattern = r'(?:^|[\s\[({]|[^\w#])#(\d+)(?=[\s,.\])}]|$)'
return [int(match) for match in re.findall(pattern, body)]
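# Illustrative check (the example text is invented): fenced code, inline code and
# URL fragments are stripped first, so only genuine '#<number>' references remain.
example_text = 'Fixes #123, see https://example.com/docs#456 and `#789` for details'
print(extract_issue_references(example_text))  # [123]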
def get_unique_uid(start_uid=1000):
existing_uids = set()
with open('/etc/passwd', 'r') as passwd_file:
for line in passwd_file:
parts = line.split(':')
if len(parts) > 2:
try:
existing_uids.add(int(parts[2]))
except ValueError:
continue
while start_uid in existing_uids:
start_uid += 1
return start_uid
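# Hedged sketch (POSIX only): when the resolver runs as root inside GitLab CI,
# a free UID is picked so the sandbox user does not collide with /etc/passwd.
if hasattr(os, 'getuid') and os.getuid() == 0:
    print(f'Using sandbox UID {get_unique_uid()}')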

View File

@ -4,13 +4,17 @@ from unittest.mock import MagicMock, patch
from openhands.core.config import LLMConfig
from openhands.events.action.message import MessageAction
from openhands.llm import LLM
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.issue_definitions import IssueHandler, PRHandler
from openhands.resolver.interfaces.github import GithubIssueHandler, GithubPRHandler
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
def test_guess_success_multiline_explanation():
# Mock data
issue = GithubIssue(
issue = Issue(
owner='test',
repo='test',
number=1,
@ -44,7 +48,9 @@ Automatic fix generated by OpenHands 🙌"""
# Use patch to mock the LLM completion call
with patch.object(LLM, 'completion', return_value=mock_response) as mock_completion:
# Create a handler instance
handler = IssueHandler('test', 'test', 'test', llm_config)
handler = ServiceContextIssue(
GithubIssueHandler('test', 'test', 'test'), llm_config
)
# Call guess_success
success, _, explanation = handler.guess_success(issue, history)
@ -64,10 +70,10 @@ Automatic fix generated by OpenHands 🙌"""
def test_pr_handler_guess_success_with_thread_comments():
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(GithubPRHandler('test', 'test', 'test'), llm_config)
# Create a mock issue with thread comments but no review comments
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -114,10 +120,12 @@ The changes successfully address the feedback."""
def test_pr_handler_guess_success_only_review_comments():
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with only review comments
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -165,10 +173,10 @@ The changes successfully address the review comments."""
def test_pr_handler_guess_success_no_comments():
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(GithubPRHandler('test', 'test', 'test'), llm_config)
# Create a mock issue with no comments
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,

View File

@ -1,8 +1,12 @@
from unittest.mock import MagicMock, patch
from openhands.core.config import LLMConfig
from openhands.resolver.github_issue import ReviewThread
from openhands.resolver.issue_definitions import IssueHandler, PRHandler
from openhands.resolver.interfaces.github import GithubIssueHandler, GithubPRHandler
from openhands.resolver.interfaces.issue import ReviewThread
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
def test_get_converted_issues_initializes_review_comments():
@ -27,7 +31,9 @@ def test_get_converted_issues_initializes_review_comments():
# Create an instance of IssueHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = IssueHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextIssue(
GithubIssueHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
issues = handler.get_converted_issues(issue_numbers=[1])
@ -57,7 +63,6 @@ def test_get_converted_issues_handles_empty_body():
# Mock the response for comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = []
# Set up the mock to return different responses
mock_get.side_effect = [
mock_issues_response,
@ -67,7 +72,9 @@ def test_get_converted_issues_handles_empty_body():
# Create an instance of IssueHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = IssueHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextIssue(
GithubIssueHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
issues = handler.get_converted_issues(issue_numbers=[1])
@ -148,7 +155,9 @@ def test_pr_handler_get_converted_issues_with_comments():
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(issue_numbers=[1])
@ -185,10 +194,12 @@ def test_get_issue_comments_with_specific_comment_id():
# Create an instance of IssueHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = IssueHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextIssue(
GithubIssueHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get comments with a specific comment_id
specific_comment = handler._get_issue_comments(issue_number=1, comment_id=123)
specific_comment = handler.get_issue_comments(issue_number=1, comment_id=123)
# Verify only the specific comment is returned
assert specific_comment == ['First comment']
@ -273,7 +284,9 @@ def test_pr_handler_get_converted_issues_with_specific_thread_comment():
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(
@ -376,7 +389,9 @@ def test_pr_handler_get_converted_issues_with_specific_review_thread_comment():
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(
@ -499,7 +514,9 @@ def test_pr_handler_get_converted_issues_with_specific_comment_and_issue_refs():
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(
@ -599,7 +616,9 @@ def test_pr_handler_get_converted_issues_with_duplicate_issue_refs():
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(issue_numbers=[1])

View File

@ -7,8 +7,12 @@ from litellm.exceptions import RateLimitError
from openhands.core.config import LLMConfig
from openhands.events.action.message import MessageAction
from openhands.llm.llm import LLM
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.issue_definitions import IssueHandler, PRHandler
from openhands.resolver.interfaces.github import GithubIssueHandler, GithubPRHandler
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
@pytest.fixture(autouse=True)
@ -33,7 +37,9 @@ def default_config():
def test_handle_nonexistent_issue_reference():
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock the requests.get to simulate a 404 error
mock_response = MagicMock()
@ -43,7 +49,7 @@ def test_handle_nonexistent_issue_reference():
with patch('requests.get', return_value=mock_response):
# Call the method with a non-existent issue reference
result = handler._PRHandler__get_context_from_external_issues_references(
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #999999', # Non-existent issue
@ -58,7 +64,9 @@ def test_handle_nonexistent_issue_reference():
def test_handle_rate_limit_error():
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock the requests.get to simulate a rate limit error
mock_response = MagicMock()
@ -68,7 +76,7 @@ def test_handle_rate_limit_error():
with patch('requests.get', return_value=mock_response):
# Call the method with an issue reference
result = handler._PRHandler__get_context_from_external_issues_references(
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #123',
@ -83,14 +91,16 @@ def test_handle_rate_limit_error():
def test_handle_network_error():
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock the requests.get to simulate a network error
with patch(
'requests.get', side_effect=requests.exceptions.ConnectionError('Network Error')
):
# Call the method with an issue reference
result = handler._PRHandler__get_context_from_external_issues_references(
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #123',
@ -105,7 +115,9 @@ def test_handle_network_error():
def test_successful_issue_reference():
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock a successful response
mock_response = MagicMock()
@ -114,7 +126,7 @@ def test_successful_issue_reference():
with patch('requests.get', return_value=mock_response):
# Call the method with an issue reference
result = handler._PRHandler__get_context_from_external_issues_references(
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #123',
@ -201,11 +213,13 @@ def test_guess_success_rate_limit_wait_time(mock_litellm_completion, default_con
]
llm = LLM(config=default_config)
handler = IssueHandler('test-owner', 'test-repo', 'test-token', default_config)
handler = ServiceContextIssue(
GithubIssueHandler('test-owner', 'test-repo', 'test-token'), default_config
)
handler.llm = llm
# Mock issue and history
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -241,11 +255,13 @@ def test_guess_success_exhausts_retries(mock_completion, default_config):
# Initialize LLM and handler
llm = LLM(config=default_config)
handler = PRHandler('test-owner', 'test-repo', 'test-token', default_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), default_config
)
handler.llm = llm
# Mock issue and history
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,

View File

@ -6,14 +6,18 @@ import pytest
from openhands.core.config import LLMConfig
from openhands.events.action.message import MessageAction
from openhands.llm.llm import LLM
from openhands.resolver.github_issue import GithubIssue, ReviewThread
from openhands.resolver.issue_definitions import PRHandler
from openhands.resolver.interfaces.github import GithubPRHandler
from openhands.resolver.interfaces.issue import Issue, ReviewThread
from openhands.resolver.interfaces.issue_definitions import ServiceContextPR
@pytest.fixture
def pr_handler():
llm_config = LLMConfig(model='test-model')
return PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
return handler
@pytest.fixture
@ -37,10 +41,12 @@ def test_guess_success_review_threads_litellm_call():
"""Test that the completion() call for review threads contains the expected content."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with review threads
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -142,10 +148,12 @@ def test_guess_success_thread_comments_litellm_call():
"""Test that the completion() call for thread comments contains the expected content."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with thread comments
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -215,7 +223,9 @@ def test_check_feedback_with_llm():
"""Test the _check_feedback_with_llm helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Test cases for different LLM responses
test_cases = [
@ -255,7 +265,9 @@ def test_check_review_thread_with_git_patch():
"""Test that git patch from complete_runtime is included in the prompt."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_thread = ReviewThread(
@ -312,7 +324,9 @@ def test_check_review_thread():
"""Test the _check_review_thread helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_thread = ReviewThread(
@ -367,7 +381,9 @@ def test_check_thread_comments_with_git_patch():
"""Test that git patch from complete_runtime is included in the prompt."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
thread_comments = [
@ -422,7 +438,9 @@ def test_check_thread_comments():
"""Test the _check_thread_comments helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
thread_comments = [
@ -475,7 +493,9 @@ def test_check_review_comments_with_git_patch():
"""Test that git patch from complete_runtime is included in the prompt."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_comments = [
@ -530,7 +550,9 @@ def test_check_review_comments():
"""Test the _check_review_comments helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_comments = [
@ -583,10 +605,12 @@ def test_guess_success_review_comments_litellm_call():
"""Test that the completion() call for review comments contains the expected content."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('test-owner', 'test-repo', 'test-token', llm_config)
handler = ServiceContextPR(
GithubPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with review comments
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -627,7 +651,6 @@ The changes successfully address the feedback."""
)
]
# Test the guess_success method
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, success_list, explanation = handler.guess_success(issue, history)
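
In the hunks above, the tests stop constructing PRHandler/IssueHandler directly and instead build a service context that wraps a platform-specific handler and keeps it on a _strategy attribute. The following is a minimal sketch of that pattern, assuming illustrative class bodies; the names mirror the diff, but this is not the real openhands code.

# Illustrative sketch of the wrapper pattern exercised above -- not the
# actual openhands implementation.
class PlatformPRStrategy:  # hypothetical stand-in for GithubPRHandler / GitlabPRHandler
    def __init__(self, owner, repo, token):
        self.owner, self.repo, self.token = owner, repo, token

    def get_context_from_external_issues_references(self, **kwargs):
        # The real strategies call the GitHub/GitLab APIs here; this stub
        # simply echoes the closing issues it was given.
        return kwargs.get('closing_issues', [])


class ServiceContextPRSketch:  # hypothetical stand-in for ServiceContextPR
    def __init__(self, strategy, llm_config):
        self._strategy = strategy     # platform-specific behaviour lives here
        self.llm_config = llm_config  # LLM settings stay on the context


handler = ServiceContextPRSketch(PlatformPRStrategy('owner', 'repo', 'token'), llm_config=None)
refs = handler._strategy.get_context_from_external_issues_references(closing_issues=[])

Under this split the tests reach platform behaviour through handler._strategy, while LLM-driven methods such as guess_success stay on the context.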

View File

@ -2,8 +2,9 @@ import os
import subprocess
import tempfile
from openhands.resolver.github_issue import GithubIssue
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.send_pull_request import make_commit
from openhands.resolver.utils import Platform
def test_commit_message_with_quotes():
@ -19,7 +20,7 @@ def test_commit_message_with_quotes():
subprocess.run(['git', '-C', temp_dir, 'add', 'test.txt'], check=True)
# Create a test issue with problematic title
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=123,
@ -89,7 +90,7 @@ def test_pr_title_with_quotes(monkeypatch):
monkeypatch.setattr('requests.post', mock_post)
monkeypatch.setattr('requests.get', lambda *args, **kwargs: MockGetResponse())
monkeypatch.setattr(
'openhands.resolver.send_pull_request.branch_exists',
'openhands.resolver.interfaces.github.GithubIssueHandler.branch_exists',
lambda *args, **kwargs: False,
)
@ -135,7 +136,7 @@ def test_pr_title_with_quotes(monkeypatch):
# Create a test issue with problematic title
print('Creating test issue...')
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=123,
@ -156,9 +157,10 @@ def test_pr_title_with_quotes(monkeypatch):
from openhands.resolver.send_pull_request import send_pull_request
send_pull_request(
github_issue=issue,
github_token='dummy-token',
github_username='test-user',
issue=issue,
token='dummy-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=temp_dir,
pr_type='ready',
)
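
The updated call above passes an explicit platform argument alongside the renamed issue/token/username parameters. A minimal sketch of such an enum follows, assuming simple member values; the real definition lives in openhands.resolver.utils and also provides the Platform.INVALID member used later in this diff.

from enum import Enum

class Platform(Enum):
    # Illustrative member values; the actual openhands.resolver.utils.Platform
    # may define them differently.
    INVALID = 0
    GITHUB = 1
    GITLAB = 2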

View File

@ -12,14 +12,19 @@ from openhands.events.observation import (
NullObservation,
)
from openhands.llm.llm import LLM
from openhands.resolver.github_issue import GithubIssue, ReviewThread
from openhands.resolver.issue_definitions import IssueHandler, PRHandler
from openhands.resolver.interfaces.github import GithubIssueHandler, GithubPRHandler
from openhands.resolver.interfaces.issue import Issue, ReviewThread
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
from openhands.resolver.resolve_issue import (
complete_runtime,
initialize_runtime,
process_issue,
)
from openhands.resolver.resolver_output import ResolverOutput
from openhands.resolver.utils import Platform
@pytest.fixture
@ -76,7 +81,7 @@ def test_initialize_runtime():
),
]
initialize_runtime(mock_runtime)
initialize_runtime(mock_runtime, Platform.GITHUB)
assert mock_runtime.run_action.call_count == 2
mock_runtime.run_action.assert_any_call(CmdRunAction(command='cd /workspace'))
@ -103,6 +108,7 @@ async def test_resolve_issue_no_issues_found():
repo='test-repo',
token='test-token',
username='test-user',
platform=Platform.GITHUB,
max_iterations=5,
output_dir='/tmp',
llm_config=LLMConfig(model='test', api_key='test'),
@ -122,7 +128,9 @@ async def test_resolve_issue_no_issues_found():
def test_download_issues_from_github():
llm_config = LLMConfig(model='test', api_key='test')
handler = IssueHandler('owner', 'repo', 'token', llm_config)
handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), llm_config
)
mock_issues_response = MagicMock()
mock_issues_response.json.side_effect = [
@ -154,7 +162,7 @@ def test_download_issues_from_github():
assert len(issues) == 2
assert handler.issue_type == 'issue'
assert all(isinstance(issue, GithubIssue) for issue in issues)
assert all(isinstance(issue, Issue) for issue in issues)
assert [issue.number for issue in issues] == [1, 3]
assert [issue.title for issue in issues] == ['Issue 1', 'Issue 2']
assert [issue.review_comments for issue in issues] == [None, None]
@ -164,7 +172,7 @@ def test_download_issues_from_github():
def test_download_pr_from_github():
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('owner', 'repo', 'token', llm_config)
handler = ServiceContextPR(GithubPRHandler('owner', 'repo', 'token'), llm_config)
mock_pr_response = MagicMock()
mock_pr_response.json.side_effect = [
[
@ -268,7 +276,7 @@ def test_download_pr_from_github():
assert len(issues) == 3
assert handler.issue_type == 'pr'
assert all(isinstance(issue, GithubIssue) for issue in issues)
assert all(isinstance(issue, Issue) for issue in issues)
assert [issue.number for issue in issues] == [1, 2, 3]
assert [issue.title for issue in issues] == ['PR 1', 'My PR', 'PR 3']
assert [issue.head_branch for issue in issues] == ['b1', 'b2', 'b3']
@ -307,7 +315,7 @@ async def test_complete_runtime():
create_cmd_output(exit_code=0, content='git diff content', command='git apply'),
]
result = await complete_runtime(mock_runtime, 'base_commit_hash')
result = await complete_runtime(mock_runtime, 'base_commit_hash', Platform.GITHUB)
assert result == {'git_patch': 'git diff content'}
assert mock_runtime.run_action.call_count == 5
@ -323,7 +331,7 @@ async def test_process_issue(mock_output_dir, mock_prompt_template):
handler_instance = MagicMock()
# Set up test data
issue = GithubIssue(
issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
@ -434,6 +442,7 @@ async def test_process_issue(mock_output_dir, mock_prompt_template):
# Call the function
result = await process_issue(
issue,
Platform.GITHUB,
base_commit,
max_iterations,
llm_config,
@ -470,7 +479,7 @@ async def test_process_issue(mock_output_dir, mock_prompt_template):
def test_get_instruction(mock_prompt_template, mock_followup_prompt_template):
issue = GithubIssue(
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
@ -478,7 +487,9 @@ def test_get_instruction(mock_prompt_template, mock_followup_prompt_template):
body='This is a test issue refer to image ![First Image](https://sampleimage.com/image1.png)',
)
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, images_urls = issue_handler.get_instruction(
issue, mock_prompt_template, None
)
@ -488,7 +499,7 @@ def test_get_instruction(mock_prompt_template, mock_followup_prompt_template):
assert issue_handler.issue_type == 'issue'
assert instruction == expected_instruction
issue = GithubIssue(
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
@ -506,7 +517,9 @@ def test_get_instruction(mock_prompt_template, mock_followup_prompt_template):
],
)
pr_handler = PRHandler('owner', 'repo', 'token', mock_llm_config)
pr_handler = ServiceContextPR(
GithubPRHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, images_urls = pr_handler.get_instruction(
issue, mock_followup_prompt_template, None
)
@ -518,7 +531,7 @@ def test_get_instruction(mock_prompt_template, mock_followup_prompt_template):
def test_file_instruction():
issue = GithubIssue(
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
@ -530,7 +543,9 @@ def test_file_instruction():
prompt = f.read()
# Test without thread comments
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, images_urls = issue_handler.get_instruction(issue, prompt, None)
expected_instruction = """Please fix the following issue for the repository in /workspace.
An environment has been set up for you to start working. You may assume all necessary tools are installed.
@ -550,7 +565,7 @@ When you think you have fixed the issue through code changes, please finish the
def test_file_instruction_with_repo_instruction():
issue = GithubIssue(
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
@ -568,7 +583,9 @@ def test_file_instruction_with_repo_instruction():
repo_instruction = f.read()
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, image_urls = issue_handler.get_instruction(
issue, prompt, repo_instruction
)
@ -597,7 +614,7 @@ When you think you have fixed the issue through code changes, please finish the
def test_guess_success():
mock_issue = GithubIssue(
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
@ -615,7 +632,9 @@ def test_guess_success():
)
)
]
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
@ -630,7 +649,7 @@ def test_guess_success():
def test_guess_success_with_thread_comments():
mock_issue = GithubIssue(
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
@ -653,7 +672,9 @@ def test_guess_success_with_thread_comments():
)
)
]
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
@ -669,7 +690,7 @@ def test_guess_success_with_thread_comments():
def test_instruction_with_thread_comments():
# Create an issue with thread comments
issue = GithubIssue(
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
@ -687,7 +708,9 @@ def test_instruction_with_thread_comments():
prompt = f.read()
llm_config = LLMConfig(model='test', api_key='test')
issue_handler = IssueHandler('owner', 'repo', 'token', llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), llm_config
)
instruction, images_urls = issue_handler.get_instruction(issue, prompt, None)
# Verify that thread comments are included in the instruction
@ -699,7 +722,7 @@ def test_instruction_with_thread_comments():
def test_guess_success_failure():
mock_issue = GithubIssue(
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
@ -722,7 +745,9 @@ def test_guess_success_failure():
)
)
]
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
@ -737,7 +762,7 @@ def test_guess_success_failure():
def test_guess_success_negative_case():
mock_issue = GithubIssue(
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
@ -755,7 +780,9 @@ def test_guess_success_negative_case():
)
)
]
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
@ -770,7 +797,7 @@ def test_guess_success_negative_case():
def test_guess_success_invalid_output():
mock_issue = GithubIssue(
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
@ -784,7 +811,9 @@ def test_guess_success_invalid_output():
mock_completion_response.choices = [
MagicMock(message=MagicMock(content='This is not a valid output'))
]
issue_handler = IssueHandler('owner', 'repo', 'token', mock_llm_config)
issue_handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
@ -803,7 +832,7 @@ def test_guess_success_invalid_output():
def test_download_pr_with_review_comments():
llm_config = LLMConfig(model='test', api_key='test')
handler = PRHandler('owner', 'repo', 'token', llm_config)
handler = ServiceContextPR(GithubPRHandler('owner', 'repo', 'token'), llm_config)
mock_pr_response = MagicMock()
mock_pr_response.json.side_effect = [
[
@ -854,7 +883,7 @@ def test_download_pr_with_review_comments():
assert len(issues) == 1
assert handler.issue_type == 'pr'
assert isinstance(issues[0], GithubIssue)
assert isinstance(issues[0], Issue)
assert issues[0].number == 1
assert issues[0].title == 'PR 1'
assert issues[0].head_branch == 'b1'
@ -870,7 +899,9 @@ def test_download_pr_with_review_comments():
def test_download_issue_with_specific_comment():
llm_config = LLMConfig(model='test', api_key='test')
handler = IssueHandler('owner', 'repo', 'token', llm_config)
handler = ServiceContextIssue(
GithubIssueHandler('owner', 'repo', 'token'), llm_config
)
# Define the specific comment_id to filter
specific_comment_id = 101

View File

@ -5,8 +5,9 @@ from unittest.mock import MagicMock, call, patch
import pytest
from openhands.core.config import LLMConfig
from openhands.resolver.github_issue import ReviewThread
from openhands.resolver.resolver_output import GithubIssue, ResolverOutput
from openhands.resolver.interfaces.github import GithubIssueHandler
from openhands.resolver.interfaces.issue import ReviewThread
from openhands.resolver.resolver_output import Issue, ResolverOutput
from openhands.resolver.send_pull_request import (
apply_patch,
initialize_repo,
@ -14,10 +15,10 @@ from openhands.resolver.send_pull_request import (
make_commit,
process_all_successful_issues,
process_single_issue,
reply_to_comment,
send_pull_request,
update_existing_pull_request,
)
from openhands.resolver.utils import Platform
@pytest.fixture
@ -36,8 +37,8 @@ def mock_output_dir():
@pytest.fixture
def mock_github_issue():
return GithubIssue(
def mock_issue():
return Issue(
number=42,
title='Test Issue',
owner='test-owner',
@ -241,7 +242,7 @@ def test_initialize_repo(mock_output_dir):
assert f.read() == 'hello world'
@patch('openhands.resolver.send_pull_request.reply_to_comment')
@patch('openhands.resolver.interfaces.github.GithubIssueHandler.reply_to_comment')
@patch('requests.post')
@patch('subprocess.run')
@patch('openhands.resolver.send_pull_request.LLM')
@ -252,7 +253,7 @@ def test_update_existing_pull_request(
mock_reply_to_comment,
):
# Arrange: Set up test data
github_issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -261,8 +262,8 @@ def test_update_existing_pull_request(
thread_ids=['comment1', 'comment2'],
head_branch='test-branch',
)
github_token = 'test-token'
github_username = 'test-user'
token = 'test-token'
username = 'test-user'
patch_dir = '/path/to/patch'
additional_message = '["Fixed bug in function A", "Updated documentation for B"]'
@ -285,9 +286,10 @@ def test_update_existing_pull_request(
# Act: Call the function without comment_message to test auto-generation
result = update_existing_pull_request(
github_issue,
github_token,
github_username,
issue,
token,
username,
Platform.GITHUB,
patch_dir,
llm_config,
comment_message=None,
@ -297,20 +299,20 @@ def test_update_existing_pull_request(
# Assert: Check if the git push command was executed
push_command = (
f'git -C {patch_dir} push '
f'https://{github_username}:{github_token}@github.com/'
f'{github_issue.owner}/{github_issue.repo}.git {github_issue.head_branch}'
f'https://{username}:{token}@github.com/'
f'{issue.owner}/{issue.repo}.git {issue.head_branch}'
)
mock_subprocess_run.assert_called_once_with(
push_command, shell=True, capture_output=True, text=True
)
# Assert: Check if the auto-generated comment was posted to the PR
comment_url = f'https://api.github.com/repos/{github_issue.owner}/{github_issue.repo}/issues/{github_issue.number}/comments'
comment_url = f'https://api.github.com/repos/{issue.owner}/{issue.repo}/issues/{issue.number}/comments'
expected_comment = 'This is an issue resolution.'
mock_requests_post.assert_called_once_with(
comment_url,
headers={
'Authorization': f'token {github_token}',
'Authorization': f'token {token}',
'Accept': 'application/vnd.github.v3+json',
},
json={'body': expected_comment},
@ -319,15 +321,14 @@ def test_update_existing_pull_request(
# Assert: Check if the reply_to_comment function was called for each thread ID
mock_reply_to_comment.assert_has_calls(
[
call(github_token, 'comment1', 'Fixed bug in function A'),
call(github_token, 'comment2', 'Updated documentation for B'),
call(issue.number, 'comment1', 'Fixed bug in function A'),
call(issue.number, 'comment2', 'Updated documentation for B'),
]
)
# Assert: Check the returned PR URL
assert (
result
== f'https://github.com/{github_issue.owner}/{github_issue.repo}/pull/{github_issue.number}'
result == f'https://github.com/{issue.owner}/{issue.repo}/pull/{issue.number}'
)
@ -351,7 +352,8 @@ def test_send_pull_request(
mock_get,
mock_post,
mock_run,
mock_github_issue,
mock_issue,
mock_llm_config,
mock_output_dir,
pr_type,
target_branch,
@ -383,9 +385,10 @@ def test_send_pull_request(
# Call the function
result = send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type=pr_type,
target_branch=target_branch,
@ -441,7 +444,7 @@ def test_send_pull_request(
@patch('requests.post')
@patch('requests.get')
def test_send_pull_request_with_reviewer(
mock_get, mock_post, mock_run, mock_github_issue, mock_output_dir
mock_get, mock_post, mock_run, mock_issue, mock_output_dir, mock_llm_config
):
repo_path = os.path.join(mock_output_dir, 'repo')
reviewer = 'test-reviewer'
@ -472,9 +475,10 @@ def test_send_pull_request_with_reviewer(
# Call the function with reviewer
result = send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='ready',
reviewer=reviewer,
@ -504,7 +508,7 @@ def test_send_pull_request_with_reviewer(
@patch('requests.post')
@patch('requests.get')
def test_send_pull_request_target_branch_with_fork(
mock_get, mock_post, mock_run, mock_github_issue, mock_output_dir
mock_get, mock_post, mock_run, mock_issue, mock_output_dir
):
"""Test that target_branch works correctly when using a fork."""
repo_path = os.path.join(mock_output_dir, 'repo')
@ -528,10 +532,11 @@ def test_send_pull_request_target_branch_with_fork(
]
# Call the function with fork_owner and target_branch
result = send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
send_pull_request(
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='ready',
fork_owner=fork_owner,
@ -540,27 +545,34 @@ def test_send_pull_request_target_branch_with_fork(
# Assert API calls
assert mock_get.call_count == 2
# Verify target branch was checked in original repo, not fork
target_branch_check = mock_get.call_args_list[1]
assert target_branch_check[0][0] == f'https://api.github.com/repos/test-owner/test-repo/branches/{target_branch}'
assert (
target_branch_check[0][0]
== f'https://api.github.com/repos/test-owner/test-repo/branches/{target_branch}'
)
# Check PR creation
mock_post.assert_called_once()
post_data = mock_post.call_args[1]['json']
assert post_data['base'] == target_branch # PR should target the specified branch
assert post_data['head'] == 'openhands-fix-issue-42' # Branch name should be standard
assert (
post_data['head'] == 'openhands-fix-issue-42'
) # Branch name should be standard
# Check that push was to fork
push_call = mock_run.call_args_list[1]
assert f'https://test-user:test-token@github.com/{fork_owner}/test-repo.git' in str(push_call)
assert f'https://test-user:test-token@github.com/{fork_owner}/test-repo.git' in str(
push_call
)
@patch('subprocess.run')
@patch('requests.post')
@patch('requests.get')
def test_send_pull_request_target_branch_with_additional_message(
mock_get, mock_post, mock_run, mock_github_issue, mock_output_dir
mock_get, mock_post, mock_run, mock_issue, mock_output_dir
):
"""Test that target_branch works correctly with additional PR message."""
repo_path = os.path.join(mock_output_dir, 'repo')
@ -584,10 +596,11 @@ def test_send_pull_request_target_branch_with_additional_message(
]
# Call the function with target_branch and additional_message
result = send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
send_pull_request(
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='ready',
target_branch=target_branch,
@ -607,7 +620,7 @@ def test_send_pull_request_target_branch_with_additional_message(
@patch('requests.get')
def test_send_pull_request_invalid_target_branch(
mock_get, mock_github_issue, mock_output_dir
mock_get, mock_issue, mock_output_dir, mock_llm_config
):
"""Test that an error is raised when specifying a non-existent target branch"""
repo_path = os.path.join(mock_output_dir, 'repo')
@ -623,9 +636,10 @@ def test_send_pull_request_invalid_target_branch(
ValueError, match='Target branch nonexistent-branch does not exist'
):
send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='ready',
target_branch='nonexistent-branch',
@ -639,7 +653,7 @@ def test_send_pull_request_invalid_target_branch(
@patch('requests.post')
@patch('requests.get')
def test_send_pull_request_git_push_failure(
mock_get, mock_post, mock_run, mock_github_issue, mock_output_dir
mock_get, mock_post, mock_run, mock_issue, mock_output_dir, mock_llm_config
):
repo_path = os.path.join(mock_output_dir, 'repo')
@ -657,9 +671,10 @@ def test_send_pull_request_git_push_failure(
RuntimeError, match='Failed to push changes to the remote repository'
):
send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='ready',
)
@ -697,7 +712,7 @@ def test_send_pull_request_git_push_failure(
@patch('requests.post')
@patch('requests.get')
def test_send_pull_request_permission_error(
mock_get, mock_post, mock_run, mock_github_issue, mock_output_dir
mock_get, mock_post, mock_run, mock_issue, mock_output_dir, mock_llm_config
):
repo_path = os.path.join(mock_output_dir, 'repo')
@ -716,9 +731,10 @@ def test_send_pull_request_permission_error(
RuntimeError, match='Failed to create pull request due to missing permissions.'
):
send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='ready',
)
@ -729,12 +745,17 @@ def test_send_pull_request_permission_error(
@patch('requests.post')
def test_reply_to_comment(mock_post):
def test_reply_to_comment(mock_post, mock_issue):
# Arrange: set up the test data
github_token = 'test_token'
token = 'test_token'
comment_id = 'test_comment_id'
reply = 'This is a test reply.'
# Create an instance of GithubIssueHandler
handler = GithubIssueHandler(
owner='test-owner', repo='test-repo', token=token, username='test-user'
)
# Mock the response from the GraphQL API
mock_response = MagicMock()
mock_response.status_code = 200
@ -753,7 +774,7 @@ def test_reply_to_comment(mock_post):
mock_post.return_value = mock_response
# Act: call the function
reply_to_comment(github_token, comment_id, reply)
handler.reply_to_comment(mock_issue.number, comment_id, reply)
# Assert: check that the POST request was made with the correct parameters
query = """
@ -778,7 +799,7 @@ def test_reply_to_comment(mock_post):
'https://api.github.com/graphql',
json={'query': query, 'variables': expected_variables},
headers={
'Authorization': f'Bearer {github_token}',
'Authorization': f'Bearer {token}',
'Content-Type': 'application/json',
},
)
@ -800,12 +821,12 @@ def test_process_single_pr_update(
mock_llm_config,
):
# Initialize test data
github_token = 'test_token'
github_username = 'test_user'
token = 'test_token'
username = 'test_user'
pr_type = 'draft'
resolver_output = ResolverOutput(
issue=GithubIssue(
issue=Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -838,8 +859,9 @@ def test_process_single_pr_update(
process_single_issue(
mock_output_dir,
resolver_output,
github_token,
github_username,
token,
username,
Platform.GITHUB,
pr_type,
mock_llm_config,
None,
@ -855,9 +877,10 @@ def test_process_single_pr_update(
f'{mock_output_dir}/patches/pr_1', resolver_output.issue, 'pr'
)
mock_update_existing_pull_request.assert_called_once_with(
github_issue=resolver_output.issue,
github_token=github_token,
github_username=github_username,
issue=resolver_output.issue,
token=token,
username=username,
platform=Platform.GITHUB,
patch_dir=f'{mock_output_dir}/patches/pr_1',
additional_message='[Test success 1]',
llm_config=mock_llm_config,
@ -877,12 +900,13 @@ def test_process_single_issue(
mock_llm_config,
):
# Initialize test data
github_token = 'test_token'
github_username = 'test_user'
token = 'test_token'
username = 'test_user'
pr_type = 'draft'
platform = Platform.GITHUB
resolver_output = ResolverOutput(
issue=GithubIssue(
issue=Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -911,8 +935,9 @@ def test_process_single_issue(
process_single_issue(
mock_output_dir,
resolver_output,
github_token,
github_username,
token,
username,
platform,
pr_type,
mock_llm_config,
None,
@ -929,9 +954,10 @@ def test_process_single_issue(
f'{mock_output_dir}/patches/issue_1', resolver_output.issue, 'issue'
)
mock_send_pull_request.assert_called_once_with(
github_issue=resolver_output.issue,
github_token=github_token,
github_username=github_username,
issue=resolver_output.issue,
token=token,
username=username,
platform=platform,
patch_dir=f'{mock_output_dir}/patches/issue_1',
pr_type=pr_type,
fork_owner=None,
@ -955,12 +981,12 @@ def test_process_single_issue_unsuccessful(
mock_llm_config,
):
# Initialize test data
github_token = 'test_token'
github_username = 'test_user'
token = 'test_token'
username = 'test_user'
pr_type = 'draft'
resolver_output = ResolverOutput(
issue=GithubIssue(
issue=Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -983,8 +1009,9 @@ def test_process_single_issue_unsuccessful(
process_single_issue(
mock_output_dir,
resolver_output,
github_token,
github_username,
token,
username,
Platform.GITHUB,
pr_type,
mock_llm_config,
None,
@ -1006,7 +1033,7 @@ def test_process_all_successful_issues(
):
# Create ResolverOutput objects with properly initialized GithubIssue instances
resolver_output_1 = ResolverOutput(
issue=GithubIssue(
issue=Issue(
owner='test-owner',
repo='test-repo',
number=1,
@ -1026,7 +1053,7 @@ def test_process_all_successful_issues(
)
resolver_output_2 = ResolverOutput(
issue=GithubIssue(
issue=Issue(
owner='test-owner',
repo='test-repo',
number=2,
@ -1046,7 +1073,7 @@ def test_process_all_successful_issues(
)
resolver_output_3 = ResolverOutput(
issue=GithubIssue(
issue=Issue(
owner='test-owner',
repo='test-repo',
number=3,
@ -1074,8 +1101,9 @@ def test_process_all_successful_issues(
# Call the function
process_all_successful_issues(
'output_dir',
'github_token',
'github_username',
'token',
'username',
Platform.GITHUB,
'draft',
mock_llm_config, # llm_config
None, # fork_owner
@ -1090,8 +1118,9 @@ def test_process_all_successful_issues(
call(
'output_dir',
resolver_output_1,
'github_token',
'github_username',
'token',
'username',
Platform.GITHUB,
'draft',
mock_llm_config,
None,
@ -1101,8 +1130,9 @@ def test_process_all_successful_issues(
call(
'output_dir',
resolver_output_3,
'github_token',
'github_username',
'token',
'username',
Platform.GITHUB,
'draft',
mock_llm_config,
None,
@ -1118,7 +1148,7 @@ def test_process_all_successful_issues(
@patch('requests.get')
@patch('subprocess.run')
def test_send_pull_request_branch_naming(
mock_run, mock_get, mock_github_issue, mock_output_dir
mock_run, mock_get, mock_issue, mock_output_dir, mock_llm_config
):
repo_path = os.path.join(mock_output_dir, 'repo')
@ -1138,9 +1168,10 @@ def test_send_pull_request_branch_naming(
# Call the function
result = send_pull_request(
github_issue=mock_github_issue,
github_token='test-token',
github_username='test-user',
issue=mock_issue,
token='test-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=repo_path,
pr_type='branch',
)
@ -1181,11 +1212,13 @@ def test_send_pull_request_branch_naming(
@patch('openhands.resolver.send_pull_request.process_all_successful_issues')
@patch('openhands.resolver.send_pull_request.process_single_issue')
@patch('openhands.resolver.send_pull_request.load_single_resolver_output')
@patch('openhands.resolver.send_pull_request.identify_token')
@patch('os.path.exists')
@patch('os.getenv')
def test_main(
mock_getenv,
mock_path_exists,
mock_identify_token,
mock_load_single_resolver_output,
mock_process_single_issue,
mock_process_all_successful_issues,
@ -1195,8 +1228,8 @@ def test_main(
# Setup mock parser
mock_args = MagicMock()
mock_args.github_token = None
mock_args.github_username = 'mock_username'
mock_args.token = None
mock_args.username = 'mock_username'
mock_args.output_dir = '/mock/output'
mock_args.pr_type = 'draft'
mock_args.issue_number = '42'
@ -1222,9 +1255,13 @@ def test_main(
mock_resolver_output = MagicMock()
mock_load_single_resolver_output.return_value = mock_resolver_output
mock_identify_token.return_value = Platform.GITHUB
# Run main function
main()
mock_identify_token.assert_called_with('mock_token')
llm_config = LLMConfig(
model=mock_args.llm_model,
base_url=mock_args.llm_base_url,
@ -1237,6 +1274,7 @@ def test_main(
mock_resolver_output,
'mock_token',
'mock_username',
Platform.GITHUB,
'draft',
llm_config,
None,
@ -1259,6 +1297,7 @@ def test_main(
'/mock/output',
'mock_token',
'mock_username',
Platform.GITHUB,
'draft',
llm_config,
None,
@ -1269,12 +1308,17 @@ def test_main(
with pytest.raises(ValueError):
main()
# Test for invalid token
mock_identify_token.return_value = Platform.INVALID
with pytest.raises(ValueError, match='Token is invalid.'):
main()
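
The hunks above pin down the expected dispatch: main() classifies the token first and refuses to proceed when it cannot be matched to a platform. A minimal sketch of that control flow, assuming a classify callable in place of the real identify_token:

from enum import Enum

class Platform(Enum):  # stub mirroring the enum used by the resolver
    INVALID = 0
    GITHUB = 1
    GITLAB = 2

def resolve_platform(token, classify):
    platform = classify(token)  # e.g. identify_token(token) in the resolver
    if platform is Platform.INVALID:
        raise ValueError('Token is invalid.')
    return platform

# A GitHub-classified token proceeds; an unrecognised one raises, as asserted above.
assert resolve_platform('mock_token', lambda t: Platform.GITHUB) is Platform.GITHUB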
@patch('subprocess.run')
def test_make_commit_escapes_issue_title(mock_subprocess_run):
# Setup
repo_dir = '/path/to/repo'
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=42,
@ -1314,7 +1358,7 @@ def test_make_commit_escapes_issue_title(mock_subprocess_run):
def test_make_commit_no_changes(mock_subprocess_run):
# Setup
repo_dir = '/path/to/repo'
issue = GithubIssue(
issue = Issue(
owner='test-owner',
repo='test-repo',
number=42,

View File

@ -0,0 +1,202 @@
import json
from unittest.mock import MagicMock, patch
from openhands.core.config import LLMConfig
from openhands.events.action.message import MessageAction
from openhands.llm import LLM
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler, GitlabPRHandler
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
def test_guess_success_multiline_explanation():
# Mock data
issue = Issue(
owner='test',
repo='test',
number=1,
title='Test Issue',
body='Test body',
thread_comments=None,
review_comments=None,
)
history = [MessageAction(content='Test message')]
llm_config = LLMConfig(model='test', api_key='test')
# Create a mock response with multi-line explanation
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The PR successfully addressed the issue by:
- Fixed bug A
- Added test B
- Updated documentation C
Automatic fix generated by OpenHands 🙌"""
)
)
]
# Use patch to mock the LLM completion call
with patch.object(LLM, 'completion', return_value=mock_response) as mock_completion:
# Create a handler instance
handler = ServiceContextIssue(
GitlabIssueHandler('test', 'test', 'test'), llm_config
)
# Call guess_success
success, _, explanation = handler.guess_success(issue, history)
# Verify the results
assert success is True
assert 'The PR successfully addressed the issue by:' in explanation
assert 'Fixed bug A' in explanation
assert 'Added test B' in explanation
assert 'Updated documentation C' in explanation
assert 'Automatic fix generated by OpenHands' in explanation
# Verify that LLM completion was called exactly once
mock_completion.assert_called_once()
def test_pr_handler_guess_success_with_thread_comments():
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(GitlabPRHandler('test', 'test', 'test'), llm_config)
# Create a mock issue with thread comments but no review comments
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test PR',
body='Test Body',
thread_comments=['First comment', 'Second comment'],
closing_issues=['Issue description'],
review_comments=None,
thread_ids=None,
head_branch='test-branch',
)
# Create mock history
history = [MessageAction(content='Fixed the issue by implementing X and Y')]
# Create mock LLM config
llm_config = LLMConfig(model='test-model', api_key='test-key')
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The changes successfully address the feedback."""
)
)
]
# Test the guess_success method
with patch.object(LLM, 'completion', return_value=mock_response):
success, success_list, explanation = handler.guess_success(issue, history)
# Verify the results
assert success is True
assert success_list == [True]
assert 'successfully address' in explanation
assert len(json.loads(explanation)) == 1
def test_pr_handler_guess_success_only_review_comments():
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with only review comments
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test PR',
body='Test Body',
thread_comments=None,
closing_issues=['Issue description'],
review_comments=['Please fix the formatting', 'Add more tests'],
thread_ids=None,
head_branch='test-branch',
)
# Create mock history
history = [MessageAction(content='Fixed the formatting and added more tests')]
# Create mock LLM config
llm_config = LLMConfig(model='test-model', api_key='test-key')
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The changes successfully address the review comments."""
)
)
]
# Test the guess_success method
with patch.object(LLM, 'completion', return_value=mock_response):
success, success_list, explanation = handler.guess_success(issue, history)
# Verify the results
assert success is True
assert success_list == [True]
assert (
'["The changes successfully address the review comments."]' in explanation
)
def test_pr_handler_guess_success_no_comments():
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(GitlabPRHandler('test', 'test', 'test'), llm_config)
# Create a mock issue with no comments
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test PR',
body='Test Body',
thread_comments=None,
closing_issues=['Issue description'],
review_comments=None,
thread_ids=None,
head_branch='test-branch',
)
# Create mock history
history = [MessageAction(content='Fixed the issue')]
# Create mock LLM config
llm_config = LLMConfig(model='test-model', api_key='test-key')
# Test that it returns appropriate message when no comments are present
success, success_list, explanation = handler.guess_success(issue, history)
assert success is False
assert success_list is None
assert explanation == 'No feedback was found to process'
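
The mocked completions in this file all answer in the same two-part format, '--- success' followed by '--- explanation'. A small sketch of parsing that format, written here only to make the mocked payloads easier to read; the real parsing lives inside the handlers and may differ in detail:

def parse_guess_success(content):
    # Split the '--- success / --- explanation' answer the mocks above emit.
    success_part, explanation_part = content.split('--- explanation', 1)
    success = 'true' in success_part.split('--- success', 1)[1].lower()
    return success, explanation_part.strip()

ok, why = parse_guess_success('--- success\ntrue\n--- explanation\nFixed bug A')
assert ok is True and why == 'Fixed bug A'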

View File

@ -0,0 +1,683 @@
from unittest.mock import MagicMock, patch
from openhands.core.config import LLMConfig
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler, GitlabPRHandler
from openhands.resolver.interfaces.issue import ReviewThread
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
def test_get_converted_issues_initializes_review_comments():
# Mock the necessary dependencies
with patch('requests.get') as mock_get:
# Mock the response for issues
mock_issues_response = MagicMock()
mock_issues_response.json.return_value = [
{'iid': 1, 'title': 'Test Issue', 'description': 'Test Body'}
]
# Mock the response for comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = []
# Set up the mock to return different responses for different calls
# First call is for issues, second call is for comments
mock_get.side_effect = [
mock_issues_response,
mock_comments_response,
mock_comments_response,
] # Need two comment responses because we make two API calls
# Create an instance of IssueHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextIssue(
GitlabIssueHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
issues = handler.get_converted_issues(issue_numbers=[1])
# Verify that we got exactly one issue
assert len(issues) == 1
# Verify that review_comments is initialized as None
assert issues[0].review_comments is None
# Verify other fields are set correctly
assert issues[0].number == 1
assert issues[0].title == 'Test Issue'
assert issues[0].body == 'Test Body'
assert issues[0].owner == 'test-owner'
assert issues[0].repo == 'test-repo'
def test_get_converted_issues_handles_empty_body():
# Mock the necessary dependencies
with patch('requests.get') as mock_get:
# Mock the response for issues
mock_issues_response = MagicMock()
mock_issues_response.json.return_value = [
{'iid': 1, 'title': 'Test Issue', 'description': None}
]
# Mock the response for comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = []
# Set up the mock to return different responses
mock_get.side_effect = [
mock_issues_response,
mock_comments_response,
mock_comments_response,
]
# Create an instance of IssueHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextIssue(
GitlabIssueHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
issues = handler.get_converted_issues(issue_numbers=[1])
# Verify that we got exactly one issue
assert len(issues) == 1
# Verify that body is empty string when None
assert issues[0].body == ''
# Verify other fields are set correctly
assert issues[0].number == 1
assert issues[0].title == 'Test Issue'
assert issues[0].owner == 'test-owner'
assert issues[0].repo == 'test-repo'
# Verify that review_comments is initialized as None
assert issues[0].review_comments is None
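
The two tests above encode the field mapping the GitLab handler has to perform: GitLab's REST payloads carry iid and description where the generic Issue model uses number and body, and a missing description becomes an empty string. A hedged sketch of that mapping, drawn from the mocked payloads rather than from the real GitlabIssueHandler:

def to_issue_fields(raw, owner, repo):
    # Map a GitLab REST issue payload onto the generic Issue field names.
    return {
        'owner': owner,
        'repo': repo,
        'number': raw['iid'],                  # GitLab iid -> Issue.number
        'title': raw['title'],
        'body': raw.get('description') or '',  # None description -> ''
    }

assert to_issue_fields({'iid': 1, 'title': 'Test Issue', 'description': None}, 'test-owner', 'test-repo')['body'] == ''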
def test_pr_handler_get_converted_issues_with_comments():
# Mock the necessary dependencies
with patch('requests.get') as mock_get:
# Mock the response for PRs
mock_prs_response = MagicMock()
mock_prs_response.json.return_value = [
{
'iid': 1,
'title': 'Test PR',
'description': 'Test Body fixes #1',
'source_branch': 'test-branch',
}
]
# Mock the response for PR comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{'body': 'First comment', 'resolvable': True, 'system': False},
{'body': 'Second comment', 'resolvable': True, 'system': False},
]
# Mock the response for PR metadata (GraphQL)
mock_graphql_response = MagicMock()
mock_graphql_response.json.return_value = {
'data': {
'project': {
'mergeRequest': {
'discussions': {'edges': []},
}
}
}
}
# Set up the mock to return different responses
# We need to return empty responses for subsequent pages
mock_empty_response = MagicMock()
mock_empty_response.json.return_value = []
# Mock the response for fetching the external issue referenced in PR body
mock_external_issue_response = MagicMock()
mock_external_issue_response.json.return_value = {
'description': 'This is additional context from an externally referenced issue.'
}
mock_get.side_effect = [
mock_prs_response, # First call for PRs
mock_empty_response, # Second call for PRs (empty page)
mock_empty_response, # Third call for related issues
mock_comments_response, # Fourth call for PR comments
mock_empty_response, # Fifth call for PR comments (empty page)
mock_external_issue_response, # Mock response for the external issue reference #1
]
# Mock the post request for GraphQL
with patch('requests.post') as mock_post:
mock_post.return_value = mock_graphql_response
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(issue_numbers=[1])
# Verify that we got exactly one PR
assert len(prs) == 1
# Verify that thread_comments are set correctly
assert prs[0].thread_comments == ['First comment', 'Second comment']
# Verify other fields are set correctly
assert prs[0].number == 1
assert prs[0].title == 'Test PR'
assert prs[0].body == 'Test Body fixes #1'
assert prs[0].owner == 'test-owner'
assert prs[0].repo == 'test-repo'
assert prs[0].head_branch == 'test-branch'
assert prs[0].closing_issues == [
'This is additional context from an externally referenced issue.'
]
def test_get_issue_comments_with_specific_comment_id():
# Mock the necessary dependencies
with patch('requests.get') as mock_get:
# Mock the response for comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{'id': 123, 'body': 'First comment', 'resolvable': True, 'system': False},
{'id': 456, 'body': 'Second comment', 'resolvable': True, 'system': False},
]
mock_get.return_value = mock_comments_response
# Create an instance of IssueHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextIssue(
GitlabIssueHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get comments with a specific comment_id
specific_comment = handler.get_issue_comments(issue_number=1, comment_id=123)
# Verify only the specific comment is returned
assert specific_comment == ['First comment']
def test_pr_handler_get_converted_issues_with_specific_thread_comment():
# Define the specific comment_id to filter
specific_comment_id = 123
# Mock GraphQL response for review threads
with patch('requests.get') as mock_get:
# Mock the response for PRs
mock_prs_response = MagicMock()
mock_prs_response.json.return_value = [
{
'iid': 1,
'title': 'Test PR',
'description': 'Test Body',
'source_branch': 'test-branch',
}
]
# Mock the response for PR comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{'body': 'First comment', 'id': 123, 'resolvable': True, 'system': False},
{'body': 'Second comment', 'id': 124, 'resolvable': True, 'system': False},
]
# Mock the response for PR metadata (GraphQL)
mock_graphql_response = MagicMock()
mock_graphql_response.json.return_value = {
'data': {
'project': {
'mergeRequest': {
'discussions': {
'edges': [
{
'node': {
'id': 'review-thread-1',
'resolved': False,
'resolvable': True,
'notes': {
'nodes': [
{
'id': 'GID/121',
'body': 'Specific review comment',
'position': {
'filePath': 'file1.txt',
},
},
{
'id': 'GID/456',
'body': 'Another review comment',
'position': {
'filePath': 'file2.txt',
},
},
]
},
}
}
]
},
}
}
}
}
# Set up the mock to return different responses
# We need to return empty responses for subsequent pages
mock_empty_response = MagicMock()
mock_empty_response.json.return_value = []
mock_get.side_effect = [
mock_prs_response, # First call for PRs
mock_empty_response, # Second call for PRs (empty page)
mock_empty_response, # Third call for related issues
mock_comments_response, # Fourth call for PR comments
mock_empty_response, # Fifth call for PR comments (empty page)
]
# Mock the post request for GraphQL
with patch('requests.post') as mock_post:
mock_post.return_value = mock_graphql_response
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(
issue_numbers=[1], comment_id=specific_comment_id
)
# Verify that we got exactly one PR
assert len(prs) == 1
# Verify that thread_comments are set correctly
assert prs[0].thread_comments == ['First comment']
assert prs[0].review_comments is None
assert prs[0].review_threads == []
# Verify other fields are set correctly
assert prs[0].number == 1
assert prs[0].title == 'Test PR'
assert prs[0].body == 'Test Body'
assert prs[0].owner == 'test-owner'
assert prs[0].repo == 'test-repo'
assert prs[0].head_branch == 'test-branch'
def test_pr_handler_get_converted_issues_with_specific_review_thread_comment():
# Define the specific comment_id to filter
specific_comment_id = 123
# Mock GraphQL response for review threads
with patch('requests.get') as mock_get:
# Mock the response for PRs
mock_prs_response = MagicMock()
mock_prs_response.json.return_value = [
{
'iid': 1,
'title': 'Test PR',
'description': 'Test Body',
'source_branch': 'test-branch',
}
]
# Mock the response for PR comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{
'description': 'First comment',
'id': 120,
'resolvable': True,
'system': False,
},
{
'description': 'Second comment',
'id': 124,
'resolvable': True,
'system': False,
},
]
# Mock the response for PR metadata (GraphQL)
mock_graphql_response = MagicMock()
mock_graphql_response.json.return_value = {
'data': {
'project': {
'mergeRequest': {
'discussions': {
'edges': [
{
'node': {
'id': 'review-thread-1',
'resolved': False,
'resolvable': True,
'notes': {
'nodes': [
{
'id': f'GID/{specific_comment_id}',
'body': 'Specific review comment',
'position': {
'filePath': 'file1.txt',
},
},
{
'id': 'GID/456',
'body': 'Another review comment',
'position': {
'filePath': 'file1.txt',
},
},
]
},
}
}
]
},
}
}
}
}
# Set up the mock to return different responses
# We need to return empty responses for subsequent pages
mock_empty_response = MagicMock()
mock_empty_response.json.return_value = []
mock_get.side_effect = [
mock_prs_response, # First call for PRs
mock_empty_response, # Second call for PRs (empty page)
mock_empty_response, # Third call for related issues
mock_comments_response, # Fourth call for PR comments
mock_empty_response, # Fifth call for PR comments (empty page)
]
# Mock the post request for GraphQL
with patch('requests.post') as mock_post:
mock_post.return_value = mock_graphql_response
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(
issue_numbers=[1], comment_id=specific_comment_id
)
# Verify that we got exactly one PR
assert len(prs) == 1
# Verify that thread_comments are set correctly
assert prs[0].thread_comments is None
assert prs[0].review_comments is None
assert len(prs[0].review_threads) == 1
assert isinstance(prs[0].review_threads[0], ReviewThread)
assert (
prs[0].review_threads[0].comment
== 'Specific review comment\n---\nlatest feedback:\nAnother review comment\n'
)
assert prs[0].review_threads[0].files == ['file1.txt']
# Verify other fields are set correctly
assert prs[0].number == 1
assert prs[0].title == 'Test PR'
assert prs[0].body == 'Test Body'
assert prs[0].owner == 'test-owner'
assert prs[0].repo == 'test-repo'
assert prs[0].head_branch == 'test-branch'
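
The GraphQL fixture above mirrors GitLab's discussions shape: data -> project -> mergeRequest -> discussions -> edges, where each node carries resolvable/resolved flags and notes whose entries hold a body and a position.filePath. A hedged sketch of walking that payload to collect unresolved review threads, written from the mocked structure rather than from the real GitlabPRHandler:

def collect_review_threads(graphql_json):
    # Gather (note bodies, touched files) for each unresolved, resolvable thread.
    threads = []
    edges = graphql_json['data']['project']['mergeRequest']['discussions']['edges']
    for edge in edges:
        node = edge['node']
        if node.get('resolvable') and not node.get('resolved'):
            notes = node['notes']['nodes']
            bodies = [n['body'] for n in notes]
            files = sorted({n['position']['filePath'] for n in notes})
            threads.append((bodies, files))
    return threads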
def test_pr_handler_get_converted_issues_with_specific_comment_and_issue_refs():
# Define the specific comment_id to filter
specific_comment_id = 123
# Mock GraphQL response for review threads
with patch('requests.get') as mock_get:
# Mock the response for PRs
mock_prs_response = MagicMock()
mock_prs_response.json.return_value = [
{
'iid': 1,
'title': 'Test PR fixes #3',
'description': 'Test Body',
'source_branch': 'test-branch',
}
]
# Mock the response for PR comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{
'description': 'First comment',
'id': 120,
'resolvable': True,
'system': False,
},
{
'description': 'Second comment',
'id': 124,
'resolvable': True,
'system': False,
},
]
# Mock the response for PR metadata (GraphQL)
mock_graphql_response = MagicMock()
mock_graphql_response.json.return_value = {
'data': {
'project': {
'mergeRequest': {
'discussions': {
'edges': [
{
'node': {
'id': 'review-thread-1',
'resolved': False,
'resolvable': True,
'notes': {
'nodes': [
{
'id': f'GID/{specific_comment_id}',
'body': 'Specific review comment that references #6',
'position': {
'filePath': 'file1.txt',
},
},
{
'id': 'GID/456',
'body': 'Another review comment referencing #7',
'position': {
'filePath': 'file2.txt',
},
},
]
},
}
}
]
},
}
}
}
}
# Set up the mock to return different responses
# We need to return empty responses for subsequent pages
mock_empty_response = MagicMock()
mock_empty_response.json.return_value = []
# Mock the response for fetching the external issue referenced in PR body
mock_external_issue_response_in_body = MagicMock()
mock_external_issue_response_in_body.json.return_value = {
'description': 'External context #1.'
}
# Mock the response for fetching the external issue referenced in review thread
mock_external_issue_response_review_thread = MagicMock()
mock_external_issue_response_review_thread.json.return_value = {
'description': 'External context #2.'
}
mock_get.side_effect = [
mock_prs_response, # First call for PRs
mock_empty_response, # Second call for PRs (empty page)
mock_empty_response, # Third call for related issues
mock_comments_response, # Fourth call for PR comments
mock_empty_response, # Fifth call for PR comments (empty page)
mock_external_issue_response_in_body,
mock_external_issue_response_review_thread,
]
# Mock the post request for GraphQL
with patch('requests.post') as mock_post:
mock_post.return_value = mock_graphql_response
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(
issue_numbers=[1], comment_id=specific_comment_id
)
# Verify that we got exactly one PR
assert len(prs) == 1
# Verify that thread_comments are set correctly
assert prs[0].thread_comments is None
assert prs[0].review_comments is None
assert len(prs[0].review_threads) == 1
assert isinstance(prs[0].review_threads[0], ReviewThread)
assert (
prs[0].review_threads[0].comment
== 'Specific review comment that references #6\n---\nlatest feedback:\nAnother review comment referencing #7\n'
)
assert prs[0].closing_issues == [
'External context #1.',
'External context #2.',
] # Only references from the specific comment and the PR body are included
# Verify other fields are set correctly
assert prs[0].number == 1
assert prs[0].title == 'Test PR fixes #3'
assert prs[0].body == 'Test Body'
assert prs[0].owner == 'test-owner'
assert prs[0].repo == 'test-repo'
assert prs[0].head_branch == 'test-branch'
def test_pr_handler_get_converted_issues_with_duplicate_issue_refs():
# Mock the necessary dependencies
with patch('requests.get') as mock_get:
# Mock the response for PRs
mock_prs_response = MagicMock()
mock_prs_response.json.return_value = [
{
'iid': 1,
'title': 'Test PR',
'description': 'Test Body fixes #1',
'source_branch': 'test-branch',
}
]
# Mock the response for PR comments
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{
'body': 'First comment addressing #1',
'resolvable': True,
'system': False,
},
{
'body': 'Second comment addressing #2',
'resolvable': True,
'system': False,
},
]
# Mock the response for PR metadata (GraphQL)
mock_graphql_response = MagicMock()
mock_graphql_response.json.return_value = {
'data': {
'project': {
'mergeRequest': {
'discussions': {'edges': []},
}
}
}
}
# Set up the mock to return different responses
# We need to return empty responses for subsequent pages
mock_empty_response = MagicMock()
mock_empty_response.json.return_value = []
# Mock the response for fetching the external issue referenced in PR body
mock_external_issue_response_in_body = MagicMock()
mock_external_issue_response_in_body.json.return_value = {
'description': 'External context #1.'
}
# Mock the response for fetching the external issue referenced in review thread
mock_external_issue_response_in_comment = MagicMock()
mock_external_issue_response_in_comment.json.return_value = {
'description': 'External context #2.'
}
mock_get.side_effect = [
mock_prs_response, # First call for PRs
mock_empty_response, # Second call for PRs (empty page)
mock_empty_response, # Third call for related issues
mock_comments_response, # Fourth call for PR comments
mock_empty_response, # Fifth call for PR comments (empty page)
mock_external_issue_response_in_body, # Mock response for the external issue reference #1
mock_external_issue_response_in_comment,
]
# Mock the post request for GraphQL
with patch('requests.post') as mock_post:
mock_post.return_value = mock_graphql_response
# Create an instance of PRHandler
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Get converted issues
prs = handler.get_converted_issues(issue_numbers=[1])
# Verify that we got exactly one PR
assert len(prs) == 1
# Verify that thread_comments are set correctly
assert prs[0].thread_comments == [
'First comment addressing #1',
'Second comment addressing #2',
]
# Verify other fields are set correctly
assert prs[0].number == 1
assert prs[0].title == 'Test PR'
assert prs[0].body == 'Test Body fixes #1'
assert prs[0].owner == 'test-owner'
assert prs[0].repo == 'test-repo'
assert prs[0].head_branch == 'test-branch'
assert prs[0].closing_issues == [
'External context #1.',
'External context #2.',
]

View File

@ -0,0 +1,283 @@
from unittest.mock import MagicMock, patch
import pytest
import requests
from litellm.exceptions import RateLimitError
from openhands.core.config import LLMConfig
from openhands.events.action.message import MessageAction
from openhands.llm.llm import LLM
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler, GitlabPRHandler
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
@pytest.fixture(autouse=True)
def mock_logger(monkeypatch):
# suppress logging of completion data to file
mock_logger = MagicMock()
monkeypatch.setattr('openhands.llm.debug_mixin.llm_prompt_logger', mock_logger)
monkeypatch.setattr('openhands.llm.debug_mixin.llm_response_logger', mock_logger)
return mock_logger
@pytest.fixture
def default_config():
return LLMConfig(
model='gpt-4o',
api_key='test_key',
num_retries=2,
retry_min_wait=1,
retry_max_wait=2,
)
def test_handle_nonexistent_issue_reference():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock the requests.get to simulate a 404 error
mock_response = MagicMock()
mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError(
'404 Client Error: Not Found'
)
with patch('requests.get', return_value=mock_response):
# Call the method with a non-existent issue reference
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #999999', # Non-existent issue
review_comments=[],
review_threads=[],
thread_comments=None,
)
# The method should return an empty list since the referenced issue couldn't be fetched
assert result == []
def test_handle_rate_limit_error():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock the requests.get to simulate a rate limit error
mock_response = MagicMock()
mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError(
'403 Client Error: Rate Limit Exceeded'
)
with patch('requests.get', return_value=mock_response):
# Call the method with an issue reference
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #123',
review_comments=[],
review_threads=[],
thread_comments=None,
)
# The method should return an empty list since the request was rate limited
assert result == []
def test_handle_network_error():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock the requests.get to simulate a network error
with patch(
'requests.get', side_effect=requests.exceptions.ConnectionError('Network Error')
):
# Call the method with an issue reference
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #123',
review_comments=[],
review_threads=[],
thread_comments=None,
)
# The method should return an empty list since the network request failed
assert result == []
def test_successful_issue_reference():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Mock a successful response
mock_response = MagicMock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {
'description': 'This is the referenced issue body'
}
with patch('requests.get', return_value=mock_response):
# Call the method with an issue reference
result = handler._strategy.get_context_from_external_issues_references(
closing_issues=[],
closing_issue_numbers=[],
issue_body='This references #123',
review_comments=[],
review_threads=[],
thread_comments=None,
)
# The method should return a list with the referenced issue body
assert result == ['This is the referenced issue body']
class MockLLMResponse:
"""Mock LLM Response class to mimic the actual LLM response structure."""
class Choice:
class Message:
def __init__(self, content):
self.content = content
def __init__(self, content):
self.message = self.Message(content)
def __init__(self, content):
self.choices = [self.Choice(content)]
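# DotDict (below) lets the mocked litellm response be accessed with attribute syntax, e.g. response.choices[0].message.content.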
class DotDict(dict):
"""
A dictionary that supports dot notation access.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for key, value in self.items():
if isinstance(value, dict):
self[key] = DotDict(value)
elif isinstance(value, list):
self[key] = [
DotDict(item) if isinstance(item, dict) else item for item in value
]
def __getattr__(self, key):
if key in self:
return self[key]
else:
raise AttributeError(
f"'{self.__class__.__name__}' object has no attribute '{key}'"
)
def __setattr__(self, key, value):
self[key] = value
def __delattr__(self, key):
if key in self:
del self[key]
else:
raise AttributeError(
f"'{self.__class__.__name__}' object has no attribute '{key}'"
)
@patch('openhands.llm.llm.litellm_completion')
def test_guess_success_rate_limit_wait_time(mock_litellm_completion, default_config):
"""Test that the retry mechanism in guess_success respects wait time between retries."""
with patch('time.sleep') as mock_sleep:
# Simulate a rate limit error followed by a successful response
mock_litellm_completion.side_effect = [
RateLimitError(
'Rate limit exceeded', llm_provider='test_provider', model='test_model'
),
DotDict(
{
'choices': [
{
'message': {
'content': '--- success\ntrue\n--- explanation\nRetry successful'
}
}
]
}
),
]
llm = LLM(config=default_config)
handler = ServiceContextIssue(
GitlabIssueHandler('test-owner', 'test-repo', 'test-token'), default_config
)
handler.llm = llm
# Mock issue and history
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test Issue',
body='This is a test issue.',
thread_comments=['Please improve error handling'],
)
history = [MessageAction(content='Fixed error handling.')]
# Call guess_success
success, _, explanation = handler.guess_success(issue, history)
# Assertions
assert success is True
assert explanation == 'Retry successful'
assert mock_litellm_completion.call_count == 2 # Two attempts made
mock_sleep.assert_called_once() # Sleep called once between retries
# Validate wait time
wait_time = mock_sleep.call_args[0][0]
assert (
default_config.retry_min_wait <= wait_time <= default_config.retry_max_wait
), f'Expected wait time between {default_config.retry_min_wait} and {default_config.retry_max_wait} seconds, but got {wait_time}'
@patch('openhands.llm.llm.litellm_completion')
def test_guess_success_exhausts_retries(mock_completion, default_config):
"""Test the retry mechanism in guess_success exhausts retries and raises an error."""
# Simulate persistent rate limit errors by always raising RateLimitError
mock_completion.side_effect = RateLimitError(
'Rate limit exceeded', llm_provider='test_provider', model='test_model'
)
# Initialize LLM and handler
llm = LLM(config=default_config)
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), default_config
)
handler.llm = llm
# Mock issue and history
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test Issue',
body='This is a test issue.',
thread_comments=['Please improve error handling'],
)
history = [MessageAction(content='Fixed error handling.')]
# Call guess_success and expect it to raise an error after retries
with pytest.raises(RateLimitError):
handler.guess_success(issue, history)
# Assertions
assert (
mock_completion.call_count == default_config.num_retries
) # Total attempts equal num_retries

View File

@ -0,0 +1,672 @@
import json
from unittest.mock import MagicMock, patch
import pytest
from openhands.core.config import LLMConfig
from openhands.events.action.message import MessageAction
from openhands.llm.llm import LLM
from openhands.resolver.interfaces.gitlab import GitlabPRHandler
from openhands.resolver.interfaces.issue import Issue, ReviewThread
from openhands.resolver.interfaces.issue_definitions import ServiceContextPR
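# Shared fixture: a ServiceContextPR backed by a GitlabPRHandler and a stub LLM config.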
@pytest.fixture
def pr_handler():
llm_config = LLMConfig(model='test-model')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
return handler
@pytest.fixture
def mock_llm_success_response():
return MagicMock(
choices=[
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The changes look good"""
)
)
]
)
def test_guess_success_review_threads_litellm_call():
"""Test that the completion() call for review threads contains the expected content."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with review threads
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test PR',
body='Test Body',
thread_comments=None,
closing_issues=['Issue 1 description', 'Issue 2 description'],
review_comments=None,
review_threads=[
ReviewThread(
comment='Please fix the formatting\n---\nlatest feedback:\nAdd docstrings',
files=['/src/file1.py', '/src/file2.py'],
),
ReviewThread(
comment='Add more tests\n---\nlatest feedback:\nAdd test cases',
files=['/tests/test_file.py'],
),
],
thread_ids=['1', '2'],
head_branch='test-branch',
)
# Create mock history with a detailed response
history = [
MessageAction(
content="""I have made the following changes:
1. Fixed formatting in file1.py and file2.py
2. Added docstrings to all functions
3. Added test cases in test_file.py"""
)
]
# Create mock LLM config
llm_config = LLMConfig(model='test-model', api_key='test-key')
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The changes successfully address the feedback."""
)
)
]
# Test the guess_success method
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, success_list, explanation = handler.guess_success(issue, history)
# Verify the completion() calls
assert mock_completion.call_count == 2 # One call per review thread
# Check first call
first_call = mock_completion.call_args_list[0]
first_prompt = first_call[1]['messages'][0]['content']
assert (
'Issue descriptions:\n'
+ json.dumps(['Issue 1 description', 'Issue 2 description'], indent=4)
in first_prompt
)
assert (
'Feedback:\nPlease fix the formatting\n---\nlatest feedback:\nAdd docstrings'
in first_prompt
)
assert (
'Files locations:\n'
+ json.dumps(['/src/file1.py', '/src/file2.py'], indent=4)
in first_prompt
)
assert 'Last message from AI agent:\n' + history[0].content in first_prompt
# Check second call
second_call = mock_completion.call_args_list[1]
second_prompt = second_call[1]['messages'][0]['content']
assert (
'Issue descriptions:\n'
+ json.dumps(['Issue 1 description', 'Issue 2 description'], indent=4)
in second_prompt
)
assert (
'Feedback:\nAdd more tests\n---\nlatest feedback:\nAdd test cases'
in second_prompt
)
assert (
'Files locations:\n' + json.dumps(['/tests/test_file.py'], indent=4)
in second_prompt
)
assert 'Last message from AI agent:\n' + history[0].content in second_prompt
assert len(json.loads(explanation)) == 2
def test_guess_success_thread_comments_litellm_call():
"""Test that the completion() call for thread comments contains the expected content."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with thread comments
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test PR',
body='Test Body',
thread_comments=[
'Please improve error handling',
'Add input validation',
'latest feedback:\nHandle edge cases',
],
closing_issues=['Issue 1 description', 'Issue 2 description'],
review_comments=None,
thread_ids=None,
head_branch='test-branch',
)
# Create mock history with a detailed response
history = [
MessageAction(
content="""I have made the following changes:
1. Added try/catch blocks for error handling
2. Added input validation checks
3. Added handling for edge cases"""
)
]
# Create mock LLM config
llm_config = LLMConfig(model='test-model', api_key='test-key')
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The changes successfully address the feedback."""
)
)
]
# Test the guess_success method
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, success_list, explanation = handler.guess_success(issue, history)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert (
'Issue descriptions:\n'
+ json.dumps(['Issue 1 description', 'Issue 2 description'], indent=4)
in prompt
)
assert 'PR Thread Comments:\n' + '\n---\n'.join(issue.thread_comments) in prompt
assert 'Last message from AI agent:\n' + history[0].content in prompt
assert len(json.loads(explanation)) == 1
def test_check_feedback_with_llm():
"""Test the _check_feedback_with_llm helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Test cases for different LLM responses
test_cases = [
{
'response': '--- success\ntrue\n--- explanation\nChanges look good',
'expected': (True, 'Changes look good'),
},
{
'response': '--- success\nfalse\n--- explanation\nNot all issues fixed',
'expected': (False, 'Not all issues fixed'),
},
{
'response': 'Invalid response format',
'expected': (
False,
'Failed to decode answer from LLM response: Invalid response format',
),
},
{
'response': '--- success\ntrue\n--- explanation\nMultiline\nexplanation\nhere',
'expected': (True, 'Multiline\nexplanation\nhere'),
},
]
for case in test_cases:
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [MagicMock(message=MagicMock(content=case['response']))]
# Test the function
with patch.object(LLM, 'completion', return_value=mock_response):
success, explanation = handler._check_feedback_with_llm('test prompt')
assert (success, explanation) == case['expected']
def test_check_review_thread_with_git_patch():
"""Test that git patch from complete_runtime is included in the prompt."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_thread = ReviewThread(
comment='Please fix the formatting\n---\nlatest feedback:\nAdd docstrings',
files=['/src/file1.py', '/src/file2.py'],
)
issues_context = json.dumps(
['Issue 1 description', 'Issue 2 description'], indent=4
)
last_message = 'I have fixed the formatting and added docstrings'
git_patch = 'diff --git a/src/file1.py b/src/file1.py\n+"""Added docstring."""\n'
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
Changes look good"""
)
)
]
# Test the function
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, explanation = handler._check_review_thread(
review_thread, issues_context, last_message, git_patch
)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert 'Issue descriptions:\n' + issues_context in prompt
assert 'Feedback:\n' + review_thread.comment in prompt
assert (
'Files locations:\n' + json.dumps(review_thread.files, indent=4) in prompt
)
assert 'Last message from AI agent:\n' + last_message in prompt
assert 'Changes made (git patch):\n' + git_patch in prompt
# Check result
assert success is True
assert explanation == 'Changes look good'
def test_check_review_thread():
"""Test the _check_review_thread helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_thread = ReviewThread(
comment='Please fix the formatting\n---\nlatest feedback:\nAdd docstrings',
files=['/src/file1.py', '/src/file2.py'],
)
issues_context = json.dumps(
['Issue 1 description', 'Issue 2 description'], indent=4
)
last_message = 'I have fixed the formatting and added docstrings'
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
Changes look good"""
)
)
]
# Test the function
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, explanation = handler._check_review_thread(
review_thread, issues_context, last_message
)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert 'Issue descriptions:\n' + issues_context in prompt
assert 'Feedback:\n' + review_thread.comment in prompt
assert (
'Files locations:\n' + json.dumps(review_thread.files, indent=4) in prompt
)
assert 'Last message from AI agent:\n' + last_message in prompt
# Check result
assert success is True
assert explanation == 'Changes look good'
def test_check_thread_comments_with_git_patch():
"""Test that git patch from complete_runtime is included in the prompt."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
thread_comments = [
'Please improve error handling',
'Add input validation',
'latest feedback:\nHandle edge cases',
]
issues_context = json.dumps(
['Issue 1 description', 'Issue 2 description'], indent=4
)
last_message = 'I have added error handling and input validation'
git_patch = 'diff --git a/src/file1.py b/src/file1.py\n+try:\n+ validate_input()\n+except ValueError:\n+ handle_error()\n'
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
Changes look good"""
)
)
]
# Test the function
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, explanation = handler._check_thread_comments(
thread_comments, issues_context, last_message, git_patch
)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert 'Issue descriptions:\n' + issues_context in prompt
assert 'PR Thread Comments:\n' + '\n---\n'.join(thread_comments) in prompt
assert 'Last message from AI agent:\n' + last_message in prompt
assert 'Changes made (git patch):\n' + git_patch in prompt
# Check result
assert success is True
assert explanation == 'Changes look good'
def test_check_thread_comments():
"""Test the _check_thread_comments helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
thread_comments = [
'Please improve error handling',
'Add input validation',
'latest feedback:\nHandle edge cases',
]
issues_context = json.dumps(
['Issue 1 description', 'Issue 2 description'], indent=4
)
last_message = 'I have added error handling and input validation'
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
Changes look good"""
)
)
]
# Test the function
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, explanation = handler._check_thread_comments(
thread_comments, issues_context, last_message
)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert 'Issue descriptions:\n' + issues_context in prompt
assert 'PR Thread Comments:\n' + '\n---\n'.join(thread_comments) in prompt
assert 'Last message from AI agent:\n' + last_message in prompt
# Check result
assert success is True
assert explanation == 'Changes look good'
def test_check_review_comments_with_git_patch():
"""Test that git patch from complete_runtime is included in the prompt."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_comments = [
'Please fix the code style',
'Add more test cases',
'latest feedback:\nImprove documentation',
]
issues_context = json.dumps(
['Issue 1 description', 'Issue 2 description'], indent=4
)
last_message = 'I have fixed the code style and added tests'
git_patch = 'diff --git a/src/file1.py b/src/file1.py\n+"""This module does X."""\n+def func():\n+ """Do Y."""\n'
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
Changes look good"""
)
)
]
# Test the function
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, explanation = handler._check_review_comments(
review_comments, issues_context, last_message, git_patch
)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert 'Issue descriptions:\n' + issues_context in prompt
assert 'PR Review Comments:\n' + '\n---\n'.join(review_comments) in prompt
assert 'Last message from AI agent:\n' + last_message in prompt
assert 'Changes made (git patch):\n' + git_patch in prompt
# Check result
assert success is True
assert explanation == 'Changes look good'
def test_check_review_comments():
"""Test the _check_review_comments helper function."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create test data
review_comments = [
'Please improve code readability',
'Add comments to complex functions',
'Follow PEP 8 style guide',
]
issues_context = json.dumps(
['Issue 1 description', 'Issue 2 description'], indent=4
)
last_message = 'I have improved code readability and added comments'
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
Changes look good"""
)
)
]
# Test the function
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, explanation = handler._check_review_comments(
review_comments, issues_context, last_message
)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert 'Issue descriptions:\n' + issues_context in prompt
assert 'PR Review Comments:\n' + '\n---\n'.join(review_comments) in prompt
assert 'Last message from AI agent:\n' + last_message in prompt
# Check result
assert success is True
assert explanation == 'Changes look good'
def test_guess_success_review_comments_litellm_call():
"""Test that the completion() call for review comments contains the expected content."""
# Create a PR handler instance
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(
GitlabPRHandler('test-owner', 'test-repo', 'test-token'), llm_config
)
# Create a mock issue with review comments
issue = Issue(
owner='test-owner',
repo='test-repo',
number=1,
title='Test PR',
body='Test Body',
thread_comments=None,
closing_issues=['Issue 1 description', 'Issue 2 description'],
review_comments=[
'Please improve code readability',
'Add comments to complex functions',
'Follow PEP 8 style guide',
],
thread_ids=None,
head_branch='test-branch',
)
# Create mock history with a detailed response
history = [
MessageAction(
content="""I have made the following changes:
1. Improved code readability by breaking down complex functions
2. Added detailed comments to all complex functions
3. Fixed code style to follow PEP 8"""
)
]
# Mock the LLM response
mock_response = MagicMock()
mock_response.choices = [
MagicMock(
message=MagicMock(
content="""--- success
true
--- explanation
The changes successfully address the feedback."""
)
)
]
with patch.object(LLM, 'completion') as mock_completion:
mock_completion.return_value = mock_response
success, success_list, explanation = handler.guess_success(issue, history)
# Verify the completion() call
mock_completion.assert_called_once()
call_args = mock_completion.call_args
prompt = call_args[1]['messages'][0]['content']
# Check prompt content
assert (
'Issue descriptions:\n'
+ json.dumps(['Issue 1 description', 'Issue 2 description'], indent=4)
in prompt
)
assert 'PR Review Comments:\n' + '\n---\n'.join(issue.review_comments) in prompt
assert 'Last message from AI agent:\n' + history[0].content in prompt
assert len(json.loads(explanation)) == 1

View File

@ -0,0 +1,167 @@
import os
import subprocess
import tempfile
from openhands.core.logger import openhands_logger as logger
from openhands.resolver.interfaces.issue import Issue
from openhands.resolver.send_pull_request import make_commit
from openhands.resolver.utils import Platform
def test_commit_message_with_quotes():
# Create a temporary directory and initialize git repo
with tempfile.TemporaryDirectory() as temp_dir:
subprocess.run(['git', 'init', temp_dir], check=True)
# Create a test file and add it to git
test_file = os.path.join(temp_dir, 'test.txt')
with open(test_file, 'w') as f:
f.write('test content')
subprocess.run(['git', '-C', temp_dir, 'add', 'test.txt'], check=True)
# Create a test issue with problematic title
issue = Issue(
owner='test-owner',
repo='test-repo',
number=123,
title="Issue with 'quotes' and \"double quotes\" and <class 'ValueError'>",
body='Test body',
labels=[],
assignees=[],
state='open',
created_at='2024-01-01T00:00:00Z',
updated_at='2024-01-01T00:00:00Z',
closed_at=None,
head_branch=None,
thread_ids=None,
)
# Make the commit
make_commit(temp_dir, issue, 'issue')
# Get the commit message
result = subprocess.run(
['git', '-C', temp_dir, 'log', '-1', '--pretty=%B'],
capture_output=True,
text=True,
check=True,
)
commit_msg = result.stdout.strip()
# The commit message should contain the quotes without excessive escaping
expected = "Fix issue #123: Issue with 'quotes' and \"double quotes\" and <class 'ValueError'>"
assert commit_msg == expected, f'Expected: {expected}\nGot: {commit_msg}'
def test_pr_title_with_quotes(monkeypatch):
# Mock requests.post to avoid actual API calls
class MockResponse:
def __init__(self, status_code=201):
self.status_code = status_code
self.text = ''
def json(self):
return {'html_url': 'https://github.com/test/test/pull/1'}
def raise_for_status(self):
pass
def mock_post(*args, **kwargs):
# Verify that the PR title is not over-escaped
data = kwargs.get('json', {})
title = data.get('title', '')
expected = "Fix issue #123: Issue with 'quotes' and \"double quotes\" and <class 'ValueError'>"
assert (
title == expected
), f'PR title was incorrectly escaped.\nExpected: {expected}\nGot: {title}'
return MockResponse()
class MockGetResponse:
def __init__(self, status_code=200):
self.status_code = status_code
self.text = ''
def json(self):
return {'default_branch': 'main'}
def raise_for_status(self):
pass
monkeypatch.setattr('requests.post', mock_post)
monkeypatch.setattr('requests.get', lambda *args, **kwargs: MockGetResponse())
monkeypatch.setattr(
'openhands.resolver.interfaces.github.GithubIssueHandler.branch_exists',
lambda *args, **kwargs: False,
)
# Mock subprocess.run to avoid actual git commands
original_run = subprocess.run
def mock_run(*args, **kwargs):
logger.info(f"Running command: {args[0] if args else kwargs.get('args', [])}")
if isinstance(args[0], list) and args[0][0] == 'git':
if 'push' in args[0]:
return subprocess.CompletedProcess(
args[0], returncode=0, stdout='', stderr=''
)
return original_run(*args, **kwargs)
return original_run(*args, **kwargs)
monkeypatch.setattr('subprocess.run', mock_run)
# Create a temporary directory and initialize git repo
with tempfile.TemporaryDirectory() as temp_dir:
logger.info('Initializing git repo...')
subprocess.run(['git', 'init', temp_dir], check=True)
# Configure the git user identity so the commit below succeeds
subprocess.run(
['git', '-C', temp_dir, 'config', 'user.name', 'Test User'], check=True
)
subprocess.run(
['git', '-C', temp_dir, 'config', 'user.email', 'test@example.com'],
check=True,
)
# Create a test file and add it to git
test_file = os.path.join(temp_dir, 'test.txt')
with open(test_file, 'w') as f:
f.write('test content')
logger.info('Adding and committing test file...')
subprocess.run(['git', '-C', temp_dir, 'add', 'test.txt'], check=True)
subprocess.run(
['git', '-C', temp_dir, 'commit', '-m', 'Initial commit'], check=True
)
# Create a test issue with problematic title
logger.info('Creating test issue...')
issue = Issue(
owner='test-owner',
repo='test-repo',
number=123,
title="Issue with 'quotes' and \"double quotes\" and <class 'ValueError'>",
body='Test body',
labels=[],
assignees=[],
state='open',
created_at='2024-01-01T00:00:00Z',
updated_at='2024-01-01T00:00:00Z',
closed_at=None,
head_branch=None,
thread_ids=None,
)
# Try to send a PR - this will fail if the title is incorrectly escaped
logger.info('Sending PR...')
from openhands.resolver.send_pull_request import send_pull_request
send_pull_request(
issue=issue,
token='dummy-token',
username='test-user',
platform=Platform.GITHUB,
patch_dir=temp_dir,
pr_type='ready',
)

View File

@ -0,0 +1,923 @@
import os
import tempfile
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from openhands.core.config import LLMConfig
from openhands.events.action import CmdRunAction
from openhands.events.observation import (
CmdOutputMetadata,
CmdOutputObservation,
NullObservation,
)
from openhands.llm.llm import LLM
from openhands.resolver.interfaces.gitlab import GitlabIssueHandler, GitlabPRHandler
from openhands.resolver.interfaces.issue import Issue, ReviewThread
from openhands.resolver.interfaces.issue_definitions import (
ServiceContextIssue,
ServiceContextPR,
)
from openhands.resolver.resolve_issue import (
complete_runtime,
initialize_runtime,
process_issue,
)
from openhands.resolver.resolver_output import ResolverOutput
from openhands.resolver.utils import Platform
@pytest.fixture
def mock_output_dir():
with tempfile.TemporaryDirectory() as temp_dir:
repo_path = os.path.join(temp_dir, 'repo')
# Initialize a git repo in "repo" and add a commit with "README.md"
os.makedirs(repo_path)
os.system(f'git init {repo_path}')
readme_path = os.path.join(repo_path, 'README.md')
with open(readme_path, 'w') as f:
f.write('hello world')
os.system(f'git -C {repo_path} add README.md')
os.system(f"git -C {repo_path} commit -m 'Initial commit'")
yield temp_dir
@pytest.fixture
def mock_subprocess():
with patch('subprocess.check_output') as mock_check_output:
yield mock_check_output
@pytest.fixture
def mock_os():
with patch('os.system') as mock_system, patch('os.path.join') as mock_join:
yield mock_system, mock_join
@pytest.fixture
def mock_prompt_template():
return 'Issue: {{ body }}\n\nPlease fix this issue.'
@pytest.fixture
def mock_followup_prompt_template():
return 'Issue context: {{ issues }}\n\nReview comments: {{ review_comments }}\n\nReview threads: {{ review_threads }}\n\nFiles: {{ files }}\n\nThread comments: {{ thread_context }}\n\nPlease fix this issue.'
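# Helper to build a CmdOutputObservation with the given exit code, content, and command.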
def create_cmd_output(exit_code: int, content: str, command: str):
return CmdOutputObservation(
content=content,
command=command,
metadata=CmdOutputMetadata(exit_code=exit_code),
)
def test_initialize_runtime():
mock_runtime = MagicMock()
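# When running inside GitLab CI, initialize_runtime also chowns /workspace, so one extra command is expected.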
if os.getenv('GITLAB_CI') == 'true':
mock_runtime.run_action.side_effect = [
create_cmd_output(exit_code=0, content='', command='cd /workspace'),
create_cmd_output(
exit_code=0, content='', command='sudo chown -R 1001:0 /workspace/*'
),
create_cmd_output(
exit_code=0, content='', command='git config --global core.pager ""'
),
]
else:
mock_runtime.run_action.side_effect = [
create_cmd_output(exit_code=0, content='', command='cd /workspace'),
create_cmd_output(
exit_code=0, content='', command='git config --global core.pager ""'
),
]
initialize_runtime(mock_runtime, Platform.GITLAB)
if os.getenv('GITLAB_CI') == 'true':
assert mock_runtime.run_action.call_count == 3
else:
assert mock_runtime.run_action.call_count == 2
mock_runtime.run_action.assert_any_call(CmdRunAction(command='cd /workspace'))
if os.getenv('GITLAB_CI') == 'true':
mock_runtime.run_action.assert_any_call(
CmdRunAction(command='sudo chown -R 1001:0 /workspace/*')
)
mock_runtime.run_action.assert_any_call(
CmdRunAction(command='git config --global core.pager ""')
)
@pytest.mark.asyncio
async def test_resolve_issue_no_issues_found():
from openhands.resolver.resolve_issue import resolve_issue
# Mock dependencies
mock_handler = MagicMock()
mock_handler.get_converted_issues.return_value = [] # Return empty list
with patch(
'openhands.resolver.resolve_issue.issue_handler_factory',
return_value=mock_handler,
):
with pytest.raises(ValueError) as exc_info:
await resolve_issue(
owner='test-owner',
repo='test-repo',
token='test-token',
username='test-user',
platform=Platform.GITLAB,
max_iterations=5,
output_dir='/tmp',
llm_config=LLMConfig(model='test', api_key='test'),
runtime_container_image='test-image',
prompt_template='test-template',
issue_type='pr',
repo_instruction=None,
issue_number=5432,
comment_id=None,
)
assert 'No issues found for issue number 5432' in str(exc_info.value)
assert 'test-owner/test-repo' in str(exc_info.value)
assert 'exists in the repository' in str(exc_info.value)
assert 'correct permissions' in str(exc_info.value)
def test_download_issues_from_gitlab():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), llm_config
)
mock_issues_response = MagicMock()
mock_issues_response.json.side_effect = [
[
{'iid': 1, 'title': 'Issue 1', 'description': 'This is an issue'},
{
'iid': 2,
'title': 'PR 1',
'description': 'This is a pull request',
'pull_request': {},
},
{'iid': 3, 'title': 'Issue 2', 'description': 'This is another issue'},
],
None,
]
mock_issues_response.raise_for_status = MagicMock()
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = []
mock_comments_response.raise_for_status = MagicMock()
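# Route mocked GET requests: the notes endpoint returns the (empty) comments, everything else returns the issues list.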
def get_mock_response(url, *args, **kwargs):
if '/notes' in url:
return mock_comments_response
return mock_issues_response
with patch('requests.get', side_effect=get_mock_response):
issues = handler.get_converted_issues(issue_numbers=[1, 3])
assert len(issues) == 2
assert handler.issue_type == 'issue'
assert all(isinstance(issue, Issue) for issue in issues)
assert [issue.number for issue in issues] == [1, 3]
assert [issue.title for issue in issues] == ['Issue 1', 'Issue 2']
assert [issue.review_comments for issue in issues] == [None, None]
assert [issue.closing_issues for issue in issues] == [None, None]
assert [issue.thread_ids for issue in issues] == [None, None]
def test_download_pr_from_gitlab():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextPR(GitlabPRHandler('owner', 'repo', 'token'), llm_config)
mock_pr_response = MagicMock()
mock_pr_response.json.side_effect = [
[
{
'iid': 1,
'title': 'PR 1',
'description': 'This is a pull request',
'source_branch': 'b1',
},
{
'iid': 2,
'title': 'My PR',
'description': 'This is another pull request',
'source_branch': 'b2',
},
{
'iid': 3,
'title': 'PR 3',
'description': 'Final PR',
'source_branch': 'b3',
},
],
None,
]
mock_pr_response.raise_for_status = MagicMock()
# Mock for related issues response
mock_related_issues_response = MagicMock()
mock_related_issues_response.json.return_value = [
{'description': 'Issue 1 body', 'iid': 1},
{'description': 'Issue 2 body', 'iid': 2},
]
mock_related_issues_response.raise_for_status = MagicMock()
# Mock for PR comments response
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [] # No PR comments
mock_comments_response.raise_for_status = MagicMock()
# Mock for GraphQL request (for download_pr_metadata)
mock_graphql_response = MagicMock()
mock_graphql_response.json.side_effect = lambda: {
'data': {
'project': {
'mergeRequest': {
'discussions': {
'edges': [
{
'node': {
'id': '1',
'resolved': False,
'resolvable': True,
'notes': {
'nodes': [
{
'body': 'Unresolved comment 1',
'position': {
'filePath': '/frontend/header.tsx',
},
},
{
'body': 'Follow up thread',
},
]
},
}
},
{
'node': {
'id': '2',
'resolved': True,
'resolvable': True,
'notes': {
'nodes': [
{
'body': 'Resolved comment 1',
'position': {
'filePath': '/some/file.py',
},
},
]
},
}
},
{
'node': {
'id': '3',
'resolved': False,
'resolvable': True,
'notes': {
'nodes': [
{
'body': 'Unresolved comment 3',
'position': {
'filePath': '/another/file.py',
},
},
]
},
}
},
]
},
}
}
}
}
mock_graphql_response.raise_for_status = MagicMock()
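# Route mocked GET requests: notes -> PR comments, related_issues -> linked issues, anything else -> the merge request list.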
def get_mock_response(url, *args, **kwargs):
if '/notes' in url:
return mock_comments_response
if '/related_issues' in url:
return mock_related_issues_response
return mock_pr_response
with patch('requests.get', side_effect=get_mock_response):
with patch('requests.post', return_value=mock_graphql_response):
issues = handler.get_converted_issues(issue_numbers=[1, 2, 3])
assert len(issues) == 3
assert handler.issue_type == 'pr'
assert all(isinstance(issue, Issue) for issue in issues)
assert [issue.number for issue in issues] == [1, 2, 3]
assert [issue.title for issue in issues] == ['PR 1', 'My PR', 'PR 3']
assert [issue.head_branch for issue in issues] == ['b1', 'b2', 'b3']
assert len(issues[0].review_threads) == 2 # Only unresolved threads
assert (
issues[0].review_threads[0].comment
== 'Unresolved comment 1\n---\nlatest feedback:\nFollow up thread\n'
)
assert issues[0].review_threads[0].files == ['/frontend/header.tsx']
assert (
issues[0].review_threads[1].comment
== 'latest feedback:\nUnresolved comment 3\n'
)
assert issues[0].review_threads[1].files == ['/another/file.py']
assert issues[0].closing_issues == ['Issue 1 body', 'Issue 2 body']
assert issues[0].thread_ids == ['1', '3']
@pytest.mark.asyncio
async def test_complete_runtime():
mock_runtime = MagicMock()
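# The mocked outputs mirror the command sequence complete_runtime issues; the final output is returned as the git patch.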
mock_runtime.run_action.side_effect = [
create_cmd_output(exit_code=0, content='', command='cd /workspace'),
create_cmd_output(
exit_code=0, content='', command='git config --global core.pager ""'
),
create_cmd_output(
exit_code=0,
content='',
command='git config --global --add safe.directory /workspace',
),
create_cmd_output(
exit_code=0, content='', command='git diff base_commit_hash fix'
),
create_cmd_output(exit_code=0, content='git diff content', command='git apply'),
]
result = await complete_runtime(mock_runtime, 'base_commit_hash', Platform.GITLAB)
assert result == {'git_patch': 'git diff content'}
assert mock_runtime.run_action.call_count == 5
@pytest.mark.asyncio
async def test_process_issue(mock_output_dir, mock_prompt_template):
# Mock dependencies
mock_create_runtime = MagicMock()
mock_initialize_runtime = AsyncMock()
mock_run_controller = AsyncMock()
mock_complete_runtime = AsyncMock()
handler_instance = MagicMock()
# Set up test data
issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
title='Test Issue',
body='This is a test issue',
)
base_commit = 'abcdef1234567890'
repo_instruction = 'Resolve this repo'
max_iterations = 5
llm_config = LLMConfig(model='test_model', api_key='test_api_key')
runtime_container_image = 'test_image:latest'
# Test cases for different scenarios
test_cases = [
{
'name': 'successful_run',
'run_controller_return': MagicMock(
history=[NullObservation(content='')],
metrics=MagicMock(
get=MagicMock(return_value={'test_result': 'passed'})
),
last_error=None,
),
'run_controller_raises': None,
'expected_success': True,
'expected_error': None,
'expected_explanation': 'Issue resolved successfully',
},
{
'name': 'value_error',
'run_controller_return': None,
'run_controller_raises': ValueError('Test value error'),
'expected_success': False,
'expected_error': 'Agent failed to run or crashed',
'expected_explanation': 'Agent failed to run',
},
{
'name': 'runtime_error',
'run_controller_return': None,
'run_controller_raises': RuntimeError('Test runtime error'),
'expected_success': False,
'expected_error': 'Agent failed to run or crashed',
'expected_explanation': 'Agent failed to run',
},
{
'name': 'json_decode_error',
'run_controller_return': MagicMock(
history=[NullObservation(content='')],
metrics=MagicMock(
get=MagicMock(return_value={'test_result': 'passed'})
),
last_error=None,
),
'run_controller_raises': None,
'expected_success': True,
'expected_error': None,
'expected_explanation': 'Non-JSON explanation',
'is_pr': True,
'comment_success': [
True,
False,
], # To trigger the PR success logging code path
},
]
for test_case in test_cases:
# Reset mocks
mock_create_runtime.reset_mock()
mock_initialize_runtime.reset_mock()
mock_run_controller.reset_mock()
mock_complete_runtime.reset_mock()
handler_instance.reset_mock()
# Mock return values
mock_create_runtime.return_value = MagicMock(connect=AsyncMock())
if test_case['run_controller_raises']:
mock_run_controller.side_effect = test_case['run_controller_raises']
else:
mock_run_controller.return_value = test_case['run_controller_return']
mock_run_controller.side_effect = None
mock_complete_runtime.return_value = {'git_patch': 'test patch'}
handler_instance.guess_success.return_value = (
test_case['expected_success'],
test_case.get('comment_success', None),
test_case['expected_explanation'],
)
handler_instance.get_instruction.return_value = ('Test instruction', [])
handler_instance.issue_type = 'pr' if test_case.get('is_pr', False) else 'issue'
with (
patch(
'openhands.resolver.resolve_issue.create_runtime', mock_create_runtime
),
patch(
'openhands.resolver.resolve_issue.initialize_runtime',
mock_initialize_runtime,
),
patch(
'openhands.resolver.resolve_issue.run_controller', mock_run_controller
),
patch(
'openhands.resolver.resolve_issue.complete_runtime',
mock_complete_runtime,
),
patch('openhands.resolver.resolve_issue.logger'),
):
# Call the function
result = await process_issue(
issue,
Platform.GITLAB,
base_commit,
max_iterations,
llm_config,
mock_output_dir,
runtime_container_image,
mock_prompt_template,
handler_instance,
repo_instruction,
reset_logger=False,
)
# Assert the result
expected_issue_type = 'pr' if test_case.get('is_pr', False) else 'issue'
assert handler_instance.issue_type == expected_issue_type
assert isinstance(result, ResolverOutput)
assert result.issue == issue
assert result.base_commit == base_commit
assert result.git_patch == 'test patch'
assert result.success == test_case['expected_success']
assert result.result_explanation == test_case['expected_explanation']
assert result.error == test_case['expected_error']
# Assert that the mocked functions were called
mock_create_runtime.assert_called_once()
mock_initialize_runtime.assert_called_once()
mock_run_controller.assert_called_once()
mock_complete_runtime.assert_called_once()
# Assert that guess_success was called only for successful runs
if test_case['expected_success']:
handler_instance.guess_success.assert_called_once()
else:
handler_instance.guess_success.assert_not_called()
def test_get_instruction(mock_prompt_template, mock_followup_prompt_template):
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
title='Test Issue',
body='This is a test issue refer to image ![First Image](https://sampleimage.com/image1.png)',
)
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, images_urls = issue_handler.get_instruction(
issue, mock_prompt_template, None
)
expected_instruction = 'Issue: Test Issue\n\nThis is a test issue refer to image ![First Image](https://sampleimage.com/image1.png)\n\nPlease fix this issue.'
assert images_urls == ['https://sampleimage.com/image1.png']
assert issue_handler.issue_type == 'issue'
assert instruction == expected_instruction
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
title='Test Issue',
body='This is a test issue',
closing_issues=['Issue 1 fix the type'],
review_threads=[
ReviewThread(
comment="There is still a typo 'pthon' instead of 'python'", files=[]
)
],
thread_comments=[
"I've left review comments, please address them",
'This is a valid concern.',
],
)
pr_handler = ServiceContextPR(
GitlabPRHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, images_urls = pr_handler.get_instruction(
issue, mock_followup_prompt_template, None
)
expected_instruction = "Issue context: [\n \"Issue 1 fix the type\"\n]\n\nReview comments: None\n\nReview threads: [\n \"There is still a typo 'pthon' instead of 'python'\"\n]\n\nFiles: []\n\nThread comments: I've left review comments, please address them\n---\nThis is a valid concern.\n\nPlease fix this issue."
assert images_urls == []
assert pr_handler.issue_type == 'pr'
assert instruction == expected_instruction
def test_file_instruction():
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
title='Test Issue',
body='This is a test issue ![image](https://sampleimage.com/sample.png)',
)
# load prompt from openhands/resolver/prompts/resolve/basic.jinja
with open('openhands/resolver/prompts/resolve/basic.jinja', 'r') as f:
prompt = f.read()
# Test without thread comments
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, images_urls = issue_handler.get_instruction(issue, prompt, None)
expected_instruction = """Please fix the following issue for the repository in /workspace.
An environment has been set up for you to start working. You may assume all necessary tools are installed.
# Problem Statement
Test Issue
This is a test issue ![image](https://sampleimage.com/sample.png)
IMPORTANT: You should ONLY interact with the environment provided to you AND NEVER ASK FOR HUMAN HELP.
You SHOULD INCLUDE PROPER INDENTATION in your edit commands.
When you think you have fixed the issue through code changes, please finish the interaction."""
assert instruction == expected_instruction
assert images_urls == ['https://sampleimage.com/sample.png']
def test_file_instruction_with_repo_instruction():
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
title='Test Issue',
body='This is a test issue',
)
# load prompt from openhands/resolver/prompts/resolve/basic.jinja
with open('openhands/resolver/prompts/resolve/basic.jinja', 'r') as f:
prompt = f.read()
# load repo instruction from openhands/resolver/prompts/repo_instructions/all-hands-ai___openhands-resolver.txt
with open(
'openhands/resolver/prompts/repo_instructions/all-hands-ai___openhands-resolver.txt',
'r',
) as f:
repo_instruction = f.read()
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
instruction, image_urls = issue_handler.get_instruction(
issue, prompt, repo_instruction
)
expected_instruction = """Please fix the following issue for the repository in /workspace.
An environment has been set up for you to start working. You may assume all necessary tools are installed.
# Problem Statement
Test Issue
This is a test issue
IMPORTANT: You should ONLY interact with the environment provided to you AND NEVER ASK FOR HUMAN HELP.
You SHOULD INCLUDE PROPER INDENTATION in your edit commands.
Some basic information about this repository:
This is a Python repo for openhands-resolver, a library that attempts to resolve github issues with the AI agent OpenHands.
- Setup: `poetry install --with test --with dev`
- Testing: `poetry run pytest tests/test_*.py`
When you think you have fixed the issue through code changes, please finish the interaction."""
assert instruction == expected_instruction
assert issue_handler.issue_type == 'issue'
assert image_urls == []
def test_guess_success():
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
title='Test Issue',
body='This is a test issue',
)
mock_history = [create_cmd_output(exit_code=0, content='', command='cd /workspace')]
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
mock_completion_response = MagicMock()
mock_completion_response.choices = [
MagicMock(
message=MagicMock(
content='--- success\ntrue\n--- explanation\nIssue resolved successfully'
)
)
]
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
):
success, comment_success, explanation = issue_handler.guess_success(
mock_issue, mock_history
)
assert issue_handler.issue_type == 'issue'
assert comment_success is None
assert success
assert explanation == 'Issue resolved successfully'
def test_guess_success_with_thread_comments():
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
title='Test Issue',
body='This is a test issue',
thread_comments=[
'First comment',
'Second comment',
'latest feedback:\nPlease add tests',
],
)
mock_history = [MagicMock(message='I have added tests for this case')]
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
mock_completion_response = MagicMock()
mock_completion_response.choices = [
MagicMock(
message=MagicMock(
content='--- success\ntrue\n--- explanation\nTests have been added to verify thread comments handling'
)
)
]
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
):
success, comment_success, explanation = issue_handler.guess_success(
mock_issue, mock_history
)
assert issue_handler.issue_type == 'issue'
assert comment_success is None
assert success
assert 'Tests have been added' in explanation
def test_instruction_with_thread_comments():
# Create an issue with thread comments
issue = Issue(
owner='test_owner',
repo='test_repo',
number=123,
title='Test Issue',
body='This is a test issue',
thread_comments=[
'First comment',
'Second comment',
'latest feedback:\nPlease add tests',
],
)
# Load the basic prompt template
with open('openhands/resolver/prompts/resolve/basic.jinja', 'r') as f:
prompt = f.read()
llm_config = LLMConfig(model='test', api_key='test')
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), llm_config
)
instruction, images_urls = issue_handler.get_instruction(issue, prompt, None)
# Verify that thread comments are included in the instruction
assert 'First comment' in instruction
assert 'Second comment' in instruction
assert 'Please add tests' in instruction
assert 'Issue Thread Comments:' in instruction
assert images_urls == []
def test_guess_success_failure():
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
title='Test Issue',
body='This is a test issue',
thread_comments=[
'First comment',
'Second comment',
'latest feedback:\nPlease add tests',
],
)
mock_history = [MagicMock(message='I have added tests for this case')]
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
mock_completion_response = MagicMock()
mock_completion_response.choices = [
MagicMock(
message=MagicMock(
content='--- success\ntrue\n--- explanation\nTests have been added to verify thread comments handling'
)
)
]
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
):
success, comment_success, explanation = issue_handler.guess_success(
mock_issue, mock_history
)
assert issue_handler.issue_type == 'issue'
assert comment_success is None
assert success
assert 'Tests have been added' in explanation
def test_guess_success_negative_case():
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
title='Test Issue',
body='This is a test issue',
)
mock_history = [create_cmd_output(exit_code=0, content='', command='cd /workspace')]
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
mock_completion_response = MagicMock()
mock_completion_response.choices = [
MagicMock(
message=MagicMock(
content='--- success\nfalse\n--- explanation\nIssue not resolved'
)
)
]
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
):
success, comment_success, explanation = issue_handler.guess_success(
mock_issue, mock_history
)
assert issue_handler.issue_type == 'issue'
assert comment_success is None
assert not success
assert explanation == 'Issue not resolved'
def test_guess_success_invalid_output():
mock_issue = Issue(
owner='test_owner',
repo='test_repo',
number=1,
title='Test Issue',
body='This is a test issue',
)
mock_history = [create_cmd_output(exit_code=0, content='', command='cd /workspace')]
mock_llm_config = LLMConfig(model='test_model', api_key='test_api_key')
mock_completion_response = MagicMock()
mock_completion_response.choices = [
MagicMock(message=MagicMock(content='This is not a valid output'))
]
issue_handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), mock_llm_config
)
with patch.object(
LLM, 'completion', MagicMock(return_value=mock_completion_response)
):
success, comment_success, explanation = issue_handler.guess_success(
mock_issue, mock_history
)
assert issue_handler.issue_type == 'issue'
assert comment_success is None
assert not success
assert (
explanation
== 'Failed to decode answer from LLM response: This is not a valid output'
)
def test_download_issue_with_specific_comment():
llm_config = LLMConfig(model='test', api_key='test')
handler = ServiceContextIssue(
GitlabIssueHandler('owner', 'repo', 'token'), llm_config
)
# Define the specific comment_id to filter
specific_comment_id = 101
# Mock issue and comment responses
mock_issue_response = MagicMock()
mock_issue_response.json.side_effect = [
[
{'iid': 1, 'title': 'Issue 1', 'description': 'This is an issue'},
],
None,
]
mock_issue_response.raise_for_status = MagicMock()
mock_comments_response = MagicMock()
mock_comments_response.json.return_value = [
{
'id': specific_comment_id,
'body': 'Specific comment body',
},
{
'id': 102,
'body': 'Another comment body',
},
]
mock_comments_response.raise_for_status = MagicMock()
def get_mock_response(url, *args, **kwargs):
if '/notes' in url:
return mock_comments_response
return mock_issue_response
with patch('requests.get', side_effect=get_mock_response):
issues = handler.get_converted_issues(
issue_numbers=[1], comment_id=specific_comment_id
)
assert len(issues) == 1
assert issues[0].number == 1
assert issues[0].title == 'Issue 1'
assert issues[0].thread_comments == ['Specific comment body']
if __name__ == '__main__':
pytest.main()
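
Note: the `guess_success` tests above all assume the LLM reply is split into `--- success` and `--- explanation` sections. As a hedged illustration only (not the actual OpenHands implementation, whose parsing lives inside the issue handler), a parser consistent with those asserts might look like this; `parse_guess_success_response` is a hypothetical name:

```python
import re


def parse_guess_success_response(response_text: str) -> tuple[bool, str]:
    """Sketch of a parser for the '--- success / --- explanation' reply format."""
    match = re.search(
        r'--- success\s*\n(true|false)\s*\n--- explanation\s*\n(.*)',
        response_text,
        re.DOTALL | re.IGNORECASE,
    )
    if match is None:
        # Mirrors the failure message asserted in test_guess_success_invalid_output.
        return False, f'Failed to decode answer from LLM response: {response_text}'
    success = match.group(1).lower() == 'true'
    explanation = match.group(2).strip()
    return success, explanation
```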

File diff suppressed because it is too large

View File

@ -1,19 +1,15 @@
from openhands.core.config.llm_config import LLMConfig
from openhands.resolver.issue_definitions import IssueHandler
from openhands.resolver.utils import extract_issue_references
def test_extract_issue_references():
llm_config = LLMConfig(model='test', api_key='test')
handler = IssueHandler('test-owner', 'test-repo', 'test-token', llm_config)
# Test basic issue reference
assert handler._extract_issue_references('Fixes #123') == [123]
assert extract_issue_references('Fixes #123') == [123]
# Test multiple issue references
assert handler._extract_issue_references('Fixes #123, #456') == [123, 456]
assert extract_issue_references('Fixes #123, #456') == [123, 456]
# Test issue references in code blocks should be ignored
assert handler._extract_issue_references("""
assert extract_issue_references("""
Here's a code block:
```python
# This is a comment with #123
@ -24,21 +20,37 @@ def test_extract_issue_references():
""") == [789]
# Test issue references in inline code should be ignored
assert handler._extract_issue_references(
assert extract_issue_references(
'This `#123` should be ignored but #456 should be extracted'
) == [456]
assert extract_issue_references(
'This `#123` should be ignored but #456 should be extracted'
) == [456]
# Test issue references in URLs should be ignored
assert handler._extract_issue_references(
assert extract_issue_references(
'Check http://example.com/#123 but #456 should be extracted'
) == [456]
assert extract_issue_references(
'Check http://example.com/#123 but #456 should be extracted'
) == [456]
# Test issue references in markdown links should be extracted
assert handler._extract_issue_references(
'[Link to #123](http://example.com) and #456'
) == [123, 456]
assert extract_issue_references('[Link to #123](http://example.com) and #456') == [
123,
456,
]
assert extract_issue_references('[Link to #123](http://example.com) and #456') == [
123,
456,
]
# Test issue references with text around them
assert handler._extract_issue_references(
'Issue #123 is fixed and #456 is pending'
) == [123, 456]
assert extract_issue_references('Issue #123 is fixed and #456 is pending') == [
123,
456,
]
assert extract_issue_references('Issue #123 is fixed and #456 is pending') == [
123,
456,
]
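
For context, the diff above replaces the handler method `_extract_issue_references` with a module-level `extract_issue_references` imported from `openhands.resolver.utils`. A minimal sketch consistent with the asserts in this test (the real utility may differ) would strip fenced code, inline code, and URLs before collecting `#N` references:

```python
import re


def extract_issue_references(body: str) -> list[int]:
    """Sketch only: return issue numbers referenced as #N in plain text."""
    # Drop fenced code blocks, inline code, and URLs so '#123' inside them
    # is not treated as an issue reference.
    body = re.sub(r'`{3}.*?`{3}', '', body, flags=re.DOTALL)
    body = re.sub(r'`[^`]*`', '', body)
    body = re.sub(r'https?://\S+', '', body)
    return [int(number) for number in re.findall(r'#(\d+)', body)]
```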