
WebArena Evaluation with OpenDevin Browsing Agents

This folder contains the evaluation harness for the WebArena benchmark, powered by BrowserGym, which makes it easy to evaluate how well a browsing-capable agent performs on realistic web browsing tasks.

Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

Setup WebArena Environment

WebArena requires you to set up websites with pre-populated content that are accessible via URL from the machine running the OpenDevin agents. Follow this document to set up your own WebArena environment, either on local servers or on AWS EC2 instances. Take note of the base URL ($WEBARENA_BASE_URL) of the machine on which the environment is installed.

Test if your environment works

Open the WebArena website URLs above in a browser and check that they load correctly. If you cannot access a website, make sure your firewall allows public access to the aforementioned ports on your server; if you are using an AWS machine, check its network security policy. Follow the WebArena environment setup guide carefully, and make sure the URL fields are populated with the correct base URL of your server.
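The reachability check above can also be scripted. The sketch below is an assumption-laden example, not part of the harness: the port list reflects commonly used WebArena site ports and may differ from your deployment, so substitute the ports your environment actually serves.

```python
import os
import urllib.request
import urllib.error


def check_sites(base_url, ports, timeout=5):
    """Return {url: bool} indicating whether each WebArena site responds."""
    results = {}
    for port in ports:
        url = f"{base_url}:{port}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[url] = resp.status < 500
        except (urllib.error.URLError, OSError, ValueError):
            results[url] = False
    return results


if __name__ == "__main__":
    base = os.environ.get("WEBARENA_BASE_URL", "http://localhost")
    # Assumed ports: shopping, shopping-admin, gitlab, reddit, map.
    # Adjust to match your own WebArena setup.
    ports = [7770, 7780, 8023, 9999, 3000]
    for url, ok in check_sites(base, ports).items():
        print(f"{url}: {'OK' if ok else 'UNREACHABLE'}")
```

If any site reports UNREACHABLE, revisit the firewall and security-group settings before starting a run.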

Run Evaluation

export WEBARENA_BASE_URL=<YOUR_SERVER_URL_HERE>
export OPENAI_API_KEY="yourkey" # this key is required for some WebArena validators that utilize LLMs
bash evaluation/webarena/scripts/run_infer.sh

Results will be in evaluation/evaluation_outputs/outputs/webarena/

To calculate the success rate, run:

poetry run python evaluation/webarena/get_success_rate.py evaluation/evaluation_outputs/outputs/webarena/SOME_AGENT/EXP_NAME/output.jsonl
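For reference, the success-rate computation amounts to averaging per-task results over the JSONL output. The sketch below is a minimal stand-in for get_success_rate.py, not its actual implementation; the "test_result" field name is an assumption, so check the schema that run_infer.sh actually emits.

```python
import json


def success_rate(path):
    """Average a per-task score over a JSONL output file (schema assumed)."""
    total = 0
    passed = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            total += 1
            # Assumed field: a truthy value marks the task as solved.
            if record.get("test_result"):
                passed += 1
    return passed / total if total else 0.0
```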

Submit your evaluation results

You can create your own fork of our Hugging Face evaluation outputs repository and submit a PR with your evaluation results, following the guide here.

BrowsingAgent V1.0 result

Tested on BrowsingAgent V1.0

WebArena, 812 tasks (high cost; single run, since the tasks are fixed), max 15 steps

  • GPT-4o: 0.1478
  • GPT-3.5: 0.0517