
# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To keep the evaluation folder organized, please follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all of the preprocessing, evaluation, and analysis scripts for that benchmark (see the layout sketch below).
- Raw data and experimental records should not be stored in this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
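
As an illustration, a benchmark subfolder might be laid out as follows. This is a hypothetical sketch; the actual script names vary from benchmark to benchmark:

```
evaluation/
└── swe_bench/
    ├── README.md       # benchmark-specific instructions
    ├── run_infer.sh    # entry point for running inference
    ├── run_infer.py    # inference/evaluation logic
    └── analysis.ipynb  # analysis notebook (kept in-repo if small)
```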

## Supported Benchmarks

- SWE-bench (`evaluation/swe_bench`)
- HumanEvalFix (`evaluation/humanevalfix`)

## Result Visualization

Check out this Hugging Face space for a visualization of existing experimental results.

## Upload your results

You can create your own fork of our Hugging Face evaluation outputs and submit your evaluation results to our hosted Hugging Face repo via a pull request, following the guide here.
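
If you prefer to script the submission, a minimal sketch using `huggingface_hub` is shown below. The repo id and folder path are hypothetical placeholders; use the values given in the guide:

```python
# Minimal sketch: submit local evaluation results as a PR against a
# Hugging Face dataset repo via huggingface_hub.
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login`
api.upload_folder(
    folder_path="./my_eval_outputs",        # hypothetical local results folder
    repo_id="your-org/evaluation-outputs",  # hypothetical hosted repo id
    repo_type="dataset",
    create_pr=True,  # open a pull request instead of pushing to main
    commit_message="Add evaluation results",
)
```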