Evaluation
This folder contains code and resources to run experiments and evaluations.
Logistics
To better organize the evaluation folder, we should follow the rules below:
- Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing/evaluation/analysis scripts.
- Raw data and experimental records should not be stored within this repo.
- Model outputs should be stored at this huggingface space for visualization.
- Important data files of manageable size and analysis scripts (e.g., jupyter notebooks) can be directly uploaded to this repo.
Supported Benchmarks
- SWE-Bench: evaluation/swe_bench
- HumanEvalFix: evaluation/humanevalfix
Result Visualization
Check this huggingface space for visualization of existing experimental results.
Upload your results
You can fork our huggingface evaluation outputs repo and submit your evaluation results as a PR to our hosted huggingface repo, following the guide here.
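If you prefer to script the upload instead of using the web UI, the snippet below is a minimal sketch using the huggingface_hub client's upload_folder call with create_pr=True. The folder path, repo_id, and repo_type are placeholders, not the actual values; the linked guide remains the authoritative reference.

```python
# Minimal sketch: open a PR on the hosted huggingface repo with local results.
# Assumes you have already authenticated via `huggingface-cli login`.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="outputs/my_run",       # hypothetical local folder with your results
    repo_id="<org>/<evaluation-outputs-repo>",  # placeholder: the hosted outputs repo
    repo_type="space",                  # assumption: outputs are hosted in a Space
    create_pr=True,                     # open a pull request rather than pushing directly
    commit_message="Add evaluation results for my_run",
)
```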