---
language:
  - en
license: apache-2.0
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
pretty_name: MEnvBench
tags:
  - code
  - software-engineering
  - benchmark
  - environment-construction
  - multi-language
---

# MEnvBench: Multi-Language Environment Construction Benchmark

## πŸ“‹ Dataset Description

MEnvBench is a comprehensive benchmark for evaluating multi-language environment building and test execution capabilities, comprising 1,000 task instances (10 languages Γ— 20 repositories Γ— 5 instances) selected from 200 high-quality open-source repositories.

### Key Features

- 🌐 **Multi-Language Coverage**: 10 programming languages (Python, Java, TypeScript, JavaScript, Rust, Go, C++, Ruby, PHP, C)
- 🎯 **High Quality**: Multi-stage filtering pipeline from 8,000 candidate repositories (>1,000 stars, >200 forks/issues/PRs)
- πŸ“Š **Diverse Domains**: Strategic sampling across AI, System, Web, and other software ecosystems
- πŸ—οΈ **Difficulty Levels**: Five project scale bands, from <10MB to >500MB
- βœ… **Verified Quality**: Closed issues with test patches and LLM-based quality assessment

### Dataset Statistics

| Metric | Value |
|---|---|
| Total Instances | 1,000 |
| Languages | 10 |
| Repositories | 200 |
| Instances per Language | 100 (20 repos Γ— 5 instances) |
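The headline counts follow directly from the sampling design; a quick arithmetic check, using only the figures from the table above:

```python
# Sampling design: 10 languages, 20 repositories per language,
# 5 instances per repository (figures from the statistics table).
languages = 10
repos_per_language = 20
instances_per_repo = 5

instances_per_language = repos_per_language * instances_per_repo  # 100
total_instances = languages * instances_per_language              # 1,000
total_repositories = languages * repos_per_language               # 200
```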

## πŸ“Š Dataset Structure

Each instance in MEnvBench contains the following fields:

| Field | Type | Description |
|---|---|---|
| `repo` | str | The full GitHub repository name (e.g., "home-assistant/core"). |
| `pull_number` | int | The pull request number associated with the fix (e.g., 807). |
| `instance_id` | str | A unique identifier for the task instance (e.g., "home-assistant__core-807"). |
| `issue_numbers` | list | A list of linked issue numbers (e.g., [103876]). |
| `base_commit` | str | The commit SHA of the repository prior to the fix. |
| `version` | str | The version of the dataset (e.g., "0.10"). |
| `patch` | str | The ground-truth patch (git diff) that resolves the issue. |
| `test_patch` | str | The test patch (git diff) containing new tests to reproduce the issue. |
| `problem_statement` | str | The natural-language description of the issue. |
| `hints_text` | str | Hints extracted from the issue discussion to aid resolution. |
| `all_hints_text` | str | Comprehensive context including all comments and code-review details. |
| `commit_urls` | list | A list of URLs pointing to the relevant commits. |
| `created_at` | str | The creation timestamp (e.g., "2015-12-27T19:33:55Z"). |
| `language` | str | The programming language of the repository (e.g., "Python"). |
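Judging from the example values, `instance_id` encodes both `repo` and `pull_number`. A minimal sketch of splitting it back apart; `parse_instance_id` is a hypothetical helper, and the `<owner>__<name>-<number>` layout is inferred from the examples above rather than an official MEnvBench API:

```python
def parse_instance_id(instance_id: str) -> tuple[str, int]:
    """Recover (repo, pull_number) from an id like 'home-assistant__core-807'.

    Assumes the '<owner>__<name>-<pull_number>' layout inferred from the
    example values in the field table above.
    """
    # The pull number follows the last hyphen; owner names may contain hyphens.
    repo_part, _, pull = instance_id.rpartition("-")
    owner, _, name = repo_part.partition("__")
    return f"{owner}/{name}", int(pull)
```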

## πŸš€ Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("ernie-research/MEnvBench")

# Access a single instance
instance = dataset['test'][0]
print(f"Repository: {instance['repo']}")
print(f"Language: {instance['language']}")
print(f"Problem: {instance['problem_statement'][:200]}...")
```
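For per-language evaluation runs it is convenient to bucket loaded instances by their `language` field. A minimal sketch over plain dicts; only the field names come from the schema above, and the record values below are illustrative:

```python
from collections import defaultdict

def group_by_language(records):
    """Map language -> list of instance_ids, e.g. for per-language eval runs."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["language"]].append(rec["instance_id"])
    return dict(groups)

# Illustrative records; only the field names follow the MEnvBench schema.
sample = [
    {"instance_id": "home-assistant__core-807", "language": "Python"},
    {"instance_id": "owner__repo-1", "language": "Rust"},  # hypothetical id
]
```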

### Example Instance

```json
{
    "repo": "home-assistant/core",
    "pull_number": 807,
    "instance_id": "home-assistant__core-807",
    "issue_numbers": [103876],
    "base_commit": "87c88078c87257cde4786997fedb865be6813545",
    "version": "0.10",
    "language": "Python",
    "problem_statement": "Scene configuration issue...",
    "patch": "diff --git a/homeassistant/components/scene.py...",
    "test_patch": "diff --git a/tests/components/test_scene.py...",
    ...
}
```
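Before running an instance, it is worth checking that the fields an evaluation harness depends on are actually present. A small sketch against the schema above; treating this particular subset as "required" is an assumption for illustration, not an official MEnvBench contract:

```python
# Fields taken from the schema table; which ones count as required is an
# assumption for illustration, not an official MEnvBench contract.
REQUIRED_FIELDS = {
    "repo", "pull_number", "instance_id", "base_commit",
    "patch", "test_patch", "problem_statement", "language",
}

def missing_fields(instance: dict) -> list[str]:
    """Return the sorted names of required fields absent from an instance."""
    return sorted(REQUIRED_FIELDS - instance.keys())
```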

## πŸ“– Citation

If MEnvBench helps your research, please cite:

```bibtex
@misc{guo2026menvagent,
      title={MEnvAgent: Scalable Polyglot Environment Construction for Verifiable Software Engineering},
      author={Chuanzhe Guo and Jingjing Wu and Sijun He and Yang Chen and Zhaoqi Kuang and Shilong Fan and Bingjin Chen and Siqi Bao and Jing Liu and Hua Wu and Qingfu Zhu and Wanxiang Che and Haifeng Wang},
      year={2026},
      url={https://arxiv.org/abs/2601.22859},
}
```

## πŸ“§ Contact

For questions or issues, please open a discussion on the dataset's Hugging Face page.

πŸ™ Acknowledgements

We thank the open-source community and all repository maintainers whose high-quality projects made this benchmark possible.