TestEvo-Bench / teb-generation
A benchmark for test-case generation in evolving Java codebases. Each row captures a single (project, revision-pair) where new tests were authored to exercise newly-added or changed production code, and surfaces both the test code and the surrounding production context needed to predict it.
Links
- Project website: https://www.testevo-bench.com/
- Code repository: https://anonymous.4open.science/r/testevo-bench-1150
- Sibling dataset: TestEvo-Bench/teb-update, for test-update (modifying existing tests instead of generating new ones)
Dataset summary
teb-generation is a benchmark for generating new test methods that exercise newly-introduced or changed production code. Every example was mined from a real commit (rev1 → rev2) on a public Java project, and the new test is one that depends on the production-code change to behave correctly: it cannot be written or satisfied by reading only the pre-change codebase.
Each row corresponds to a single (project, rev1 → rev2) pair. Inside the row, test_changes is a list of one or more individual test methods authored in that commit, each paired with its focal production method, the production-side before/after, candidate dependency methods, and the dependencies that changed in the same commit.
| Statistic | Value |
|---|---|
| Tasks (rows) | 746 |
| Test methods | 1,961 |
| Source projects | 84 distinct Java repositories on GitHub |
| Java versions | 8, 11, 17, 21 |
| Build system | Maven |
| Total size | ~8 MB |
A task is one (project, rev1 → rev2) pair, the unit a model is asked to operate on. A test method is one entry inside that task's test_changes list, a single @Test method that was added or modified at rev2. One task may contain several related test methods written together; benchmarks can be reported at either granularity.
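The two reporting granularities can be computed directly from the rows. A minimal sketch, using a tiny synthetic sample that mimics the schema (on the real dataset, `rows` would be the loaded `ds` object):

```python
# Sketch: counting benchmark size at the two reporting granularities.
# `rows` is synthetic illustration data shaped like the schema below.
rows = [
    {"task_id": "projA__aaa_bbb__FooTest_1",
     "test_changes": [{"change_id": "c1"}, {"change_id": "c2"}]},
    {"task_id": "projB__ccc_ddd__BarTest_2",
     "test_changes": [{"change_id": "c3"}]},
]

num_tasks = len(rows)                                         # task granularity
num_test_methods = sum(len(r["test_changes"]) for r in rows)  # test-method granularity
print(num_tasks, num_test_methods)  # -> 2 3
```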
Schema
Each row has the following top-level fields:
| Field | Type | Description |
|---|---|---|
| task_id | string | Stable identifier: `<project>__<rev1_short>_<rev2_short>__<test_class>_<hash>` |
| project_name | string | e.g. `Javen205_IJPay-v2.8.4` |
| git_clone_url | string | Upstream GitHub clone URL |
| rev1 | string | Git SHA before the change (full 40-char) |
| rev2 | string | Git SHA after the change |
| rev1_date | string | ISO 8601 (e.g. `2019-12-08T06:45:32Z`) |
| rev2_date | string | ISO 8601 |
| git_diff_url | string | GitHub compare URL between rev1 and rev2 |
| test_file | string | Path of the test file within the repo |
| test_changes | list<struct> | One element per individual test method that was added/changed; see below |
| version | string | Upstream tag or branch (e.g. `v2.8.4`, `master`) |
| java_version | int | Java major version (8, 11, 17, or 21) |
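Given the documented `task_id` layout, the components can be recovered by splitting on the double-underscore separators (project names may themselves contain single underscores). A minimal sketch, using the example row below; the helper name is ours, not part of the dataset:

```python
# Sketch: splitting a task_id into its documented components.
# Assumes the <project>__<rev1_short>_<rev2_short>__<test_class>_<hash>
# layout from the schema table; "__" is treated as the field separator.
def parse_task_id(task_id: str) -> dict:
    project, revs, tail = task_id.split("__")
    rev1_short, rev2_short = revs.split("_")
    test_class, short_hash = tail.rsplit("_", 1)
    return {
        "project": project,
        "rev1_short": rev1_short,
        "rev2_short": rev2_short,
        "test_class": test_class,
        "hash": short_hash,
    }

parts = parse_task_id("Javen205_IJPay-v2.8.4__ceafd24_996582e__WxPayKitTest_8649b344")
print(parts["project"], parts["rev1_short"], parts["test_class"])
```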
Each test_changes[] element contains:
| Field | Type | Description |
|---|---|---|
| change_id | string | Unique id within the dataset |
| test_sign | string | JVM-style test method signature |
| class, method, module | string | Test class, method name, Maven module |
| junit_selector | string | JUnit selector usable by `mvn test -Dtest=…` |
| old_test_code | string \| null | The test method's source at rev1 (null for newly-added tests) |
| new_test_code | string | The test method's source at rev2 |
| old_test_lnum, new_test_lnum | list<string> | Line ranges of the test method at each rev |
| old_test_file_path | list<string> | Where the test file lived at rev1 (may be empty for new files) |
| old_production_code, new_production_code | string | Source of the focal production method at each rev |
| focal_file_path | string | Path of the focal production file |
| focal_method_sign | string | JVM-style signature of the focal production method |
| focal_all_deps_scored | list<struct> | All candidate dependency methods, each with a relevance score (nc, lcs_u, ed, class, tfidf signal sub-scores combined into score) |
| deps_changes | list<struct> | Production-code dependencies that changed between rev1 and rev2 |
Example row
```json
{
  "task_id": "Javen205_IJPay-v2.8.4__ceafd24_996582e__WxPayKitTest_8649b344",
  "project_name": "Javen205_IJPay-v2.8.4",
  "git_clone_url": "https://github.com/Javen205/IJPay.git",
  "rev1": "ceafd245bef465d703c7530111626de905893657",
  "rev2": "996582e5028204b7ba2938ccaf009a3715ed204b",
  "rev1_date": "2019-12-08T06:45:32Z",
  "rev2_date": "2019-12-08T07:49:34Z",
  "git_diff_url": "https://github.com/Javen205/IJPay/compare/ceafd24…...996582e…",
  "test_file": "IJPay-WxPay/src/test/java/com/ijpay/wxpay/WxPayKitTest.java",
  "test_changes": [ /* 1 or more entries; see Schema */ ],
  "version": "v2.8.4",
  "java_version": 11
}
```
How to load
```python
from datasets import load_dataset

ds = load_dataset("TestEvo-Bench/teb-generation", split="train")
print(len(ds), ds.column_names)

# Iterate over individual test changes (the unit of evaluation)
for row in ds:
    for tc in row["test_changes"]:
        focal = tc["focal_method_sign"]
        new_test = tc["new_test_code"]
        ...
```
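For running a generated or reference test, the module and junit_selector fields can be combined into a Maven command line. A minimal sketch with a synthetic entry; `-pl` and `-Dtest` are standard Maven/Surefire flags, but the exact invocation a given project needs may differ:

```python
# Sketch: turning a test_changes entry into a Maven command line.
# The entry below is synthetic; real selectors come from junit_selector.
tc = {"module": "IJPay-WxPay",
      "junit_selector": "com.ijpay.wxpay.WxPayKitTest#testMethod"}

cmd = f"mvn -pl {tc['module']} test -Dtest={tc['junit_selector']}"
print(cmd)
```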
Source projects + attribution
The code snippets in old_test_code, new_test_code, old_production_code, and new_production_code are derived from public Java repositories on GitHub. Each row's git_clone_url and git_diff_url link back to the upstream project; the upstream repository's own license governs reuse of those code snippets. The dataset structure and metadata are released under CC-BY-4.0; please cite the project (see below) when using this benchmark.
Intended uses
Evaluating LLMs and program-synthesis systems on the task of generating a new test method given (a) the production-code change, (b) the test file's prior context, and (c) candidate focal-method dependencies. The included focal_all_deps_scored signals support both retrieval-augmented and end-to-end setups.
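One way to use these fields together is to assemble a generation prompt from the production-side change and the scored dependencies. A minimal sketch, with a synthetic entry and an illustrative template that is not the benchmark's official protocol:

```python
# Sketch: assembling a generation prompt from one test change.
# `tc` is synthetic data shaped like the test_changes schema.
tc = {
    "focal_method_sign": "com.example.Calc#add(II)I",
    "old_production_code": "int add(int a, int b) { return a + b; }",
    "new_production_code": "int add(int a, int b) { return Math.addExact(a, b); }",
    "focal_all_deps_scored": [{"sign": "com.example.Calc#check(I)V", "score": 0.8}],
}

# Rank candidate dependencies by score, highest first
deps = "\n".join(d["sign"] for d in sorted(
    tc["focal_all_deps_scored"], key=lambda d: d["score"], reverse=True))

prompt = (
    f"Focal method: {tc['focal_method_sign']}\n"
    f"Before:\n{tc['old_production_code']}\n"
    f"After:\n{tc['new_production_code']}\n"
    f"Candidate dependencies:\n{deps}\n"
    "Write a new JUnit test method that exercises the change."
)
print(prompt.splitlines()[0])
```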
License
- Dataset structure and metadata: CC-BY-4.0
- Embedded code snippets: governed by each upstream repository's own license; see git_clone_url per row.