# Evaluation
This folder contains code and resources to run experiments and evaluations.
## Logistics
To keep the evaluation folder organized, please follow the rules below:
- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all the preprocessing/evaluation/analysis scripts (see the layout sketch after this list).
- Raw data and experimental records should not be stored within this repo.
- Model outputs should be stored in this Hugging Face space for visualization.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
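For illustration, a layout following these rules might look like the sketch below. Only `evaluation/swe_bench` (and `TUTORIAL.md` / `run_infer.py`, which are mentioned in this repo) are taken from the source; the other names are hypothetical examples, not prescribed ones.

```
evaluation/
├── TUTORIAL.md            # guide for adding a new evaluation benchmark
├── swe_bench/             # SWE-Bench: preprocessing / evaluation / analysis scripts
│   └── run_infer.py
└── your_benchmark/        # hypothetical: one subfolder per new benchmark
    ├── run_infer.py
    └── analysis.ipynb     # small analysis notebooks may live in the repo
```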
## Supported Benchmarks
- SWE-Bench: `evaluation/swe_bench`
## Result Visualization
Check this Hugging Face space for visualization of existing experimental results.
## Upload your results
You can create your own fork of our Hugging Face evaluation outputs repo and submit your evaluation results as a PR to our hosted Hugging Face repo, following the guide here.
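As a rough sketch of what a programmatic submission could look like, the snippet below uses the `huggingface_hub` client to open a pull request against an evaluation-outputs repo. The repo id, local folder paths, repo type, and the choice of `upload_folder` with `create_pr=True` are assumptions for illustration; the linked guide describes the actual submission process.

```python
# Hypothetical sketch: open a PR with evaluation outputs via huggingface_hub.
# The repo_id and paths are placeholders -- follow the linked guide for the
# actual repository name and expected folder structure.
from huggingface_hub import HfApi

api = HfApi()  # uses the cached token from `huggingface-cli login` or HF_TOKEN

api.upload_folder(
    folder_path="evaluation/outputs/my_run",    # local folder with your results
    path_in_repo="outputs/my_run",              # assumed destination path in the repo
    repo_id="<org>/<evaluation-outputs-repo>",  # placeholder: the hosted outputs repo
    repo_type="dataset",                        # assumption: outputs are hosted as a dataset
    create_pr=True,                             # submit the upload as a pull request
    commit_message="Add evaluation results for my_run",
)
```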