TestEvo-Bench
A Live Benchmark for Test Generation & Test Update
Evaluating how AI agents understand and adapt tests to real-world software evolution.
TestEvo-Bench is a live benchmark for evaluating AI software engineering agents on realistic software test evolution tasks mined from open-source repositories.
Unlike traditional benchmarks that evaluate tests in isolation from production changes, TestEvo-Bench models the real-world co-evolution of production code and test suites.
The benchmark contains two complementary tracks:
- Test Generation – generate new tests for newly introduced behavior
- Test Update – repair or adapt outdated tests after code changes
Each task is execution-grounded with runnable environments and evaluated using metrics such as pass rate, coverage, and mutation score.
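Pass rate and mutation score have standard definitions (fraction of produced tests that pass, and fraction of injected mutants the tests detect). A minimal sketch of how such metrics are computed from per-task counts; the `TaskResult` fields are illustrative, not the benchmark's actual harness:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Outcome of running one benchmark task (illustrative fields)."""
    tests_passed: int    # generated/updated tests that pass on the post-change code
    tests_total: int     # all tests the agent produced
    mutants_killed: int  # injected faults detected by the test suite
    mutants_total: int   # all injected faults

def pass_rate(r: TaskResult) -> float:
    """Fraction of produced tests that pass."""
    return r.tests_passed / r.tests_total if r.tests_total else 0.0

def mutation_score(r: TaskResult) -> float:
    """Fraction of injected mutants the tests detect (kill)."""
    return r.mutants_killed / r.mutants_total if r.mutants_total else 0.0

result = TaskResult(tests_passed=9, tests_total=10, mutants_killed=6, mutants_total=8)
print(pass_rate(result))       # 0.9
print(mutation_score(result))  # 0.75
```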
Datasets
Test Generation – https://huggingface.co/datasets/TestEvo-Bench/teb-generation
Test Update – https://huggingface.co/datasets/TestEvo-Bench/teb-update
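Each dataset row is expected to describe one mined evolution task. A hypothetical record shape and a track filter, for orientation only; the field names here are assumptions, so check the dataset cards for the real schema:

```python
from dataclasses import dataclass

@dataclass
class EvoTask:
    """Hypothetical shape of one TestEvo-Bench task record (not the real schema)."""
    repo: str         # source repository the task was mined from
    track: str        # "generation" or "update"
    pre_commit: str   # commit before the production change
    post_commit: str  # commit after the production change

def by_track(tasks: list[EvoTask], track: str) -> list[EvoTask]:
    """Select the tasks belonging to one benchmark track."""
    return [t for t in tasks if t.track == track]

tasks = [
    EvoTask("org/libA", "generation", "a1b2", "c3d4"),
    EvoTask("org/libB", "update", "e5f6", "a7b8"),
]
print([t.repo for t in by_track(tasks, "update")])  # ['org/libB']
```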
Links
Website – https://www.testevo-bench.com/
🤗 Hugging Face Space – https://huggingface.co/spaces/TestEvo-Bench/
Code – https://anonymous.4open.science/r/testevo-bench-1150/README.md
Real-world • Execution-grounded • Live software evolution benchmark