DiscoveryBench Evaluation Utils

  • eval_w_subhypo_gen.py: Implements the DiscoveryBench logic for evaluating agent-generated hypotheses (see the sketch after this list).
  • lm_utils.py: Provides language-model utility functions used by the evaluation scripts.
  • openai_helpers.py: Provides helper functions for calling the OpenAI API.
  • openai_semantic_gen_prompts.py: Contains prompts used for semantic generation.
  • response_parser.py: Parses hypotheses out of the agent's responses.
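
At a high level, the modules fit together as a parse-then-score pipeline: the agent's free-form answer is parsed into a hypothesis, which is then compared against the gold hypothesis. The sketch below is illustrative only; the function names `extract_hypothesis` and `score_hypothesis` are hypothetical stand-ins and do not reflect the actual interfaces of response_parser.py or eval_w_subhypo_gen.py.

```python
import re


def extract_hypothesis(agent_response: str) -> str:
    """Pull the final hypothesis out of an agent's free-form answer.

    Stand-in for the parsing role of response_parser.py (assumed format).
    """
    match = re.search(r"Hypothesis:\s*(.+)", agent_response, re.DOTALL)
    return match.group(1).strip() if match else agent_response.strip()


def score_hypothesis(predicted: str, gold: str) -> float:
    """Placeholder for the hypothesis comparison in eval_w_subhypo_gen.py.

    The real evaluation decomposes hypotheses into sub-hypotheses and uses an
    LLM judge; here only the interface is illustrated with an exact-match check.
    """
    return 1.0 if predicted.strip().lower() == gold.strip().lower() else 0.0


if __name__ == "__main__":
    response = "After analyzing the data... Hypothesis: Income rises with education level."
    pred = extract_hypothesis(response)
    print(pred, score_hypothesis(pred, "Income rises with education level."))
```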