RAT-bench: RunAnyThing Benchmark
Dataset Summary
RAT-bench is a curated collection of GitHub repositories across multiple programming languages (Python, Java, Rust, Node.js), each containing Dockerfiles and unit tests. This dataset is designed for evaluating code analysis, Docker containerization, and automated testing capabilities.
The dataset is organized by programming language and repository size, providing 16 different configurations for flexible use in various research and development scenarios.
Dataset Configurations
The dataset contains 16 configurations organized by language and size:
| Configuration | Language | Size Category | Count | Size Range | Avg Size |
|---|---|---|---|---|---|
| python_small | Python | Small | 258 | < 500 KB | 185 KB |
| python_medium | Python | Medium | 190 | 500 KB - 5 MB | 2 MB |
| python_large | Python | Large | 52 | > 5 MB | 25 MB |
| python_all | Python | All | 500 | All sizes | - |
| java_small | Java | Small | 242 | < 500 KB | 159 KB |
| java_medium | Java | Medium | 178 | 500 KB - 5 MB | 2 MB |
| java_large | Java | Large | 80 | > 5 MB | 103 MB |
| java_all | Java | All | 500 | All sizes | - |
| rust_small | Rust | Small | 257 | < 500 KB | 199 KB |
| rust_medium | Rust | Medium | 199 | 500 KB - 5 MB | 2 MB |
| rust_large | Rust | Large | 44 | > 5 MB | 17 MB |
| rust_all | Rust | All | 500 | All sizes | - |
| nodejs_small | Node.js | Small | 393 | < 500 KB | 112 KB |
| nodejs_medium | Node.js | Medium | 86 | 500 KB - 5 MB | 2 MB |
| nodejs_large | Node.js | Large | 21 | > 5 MB | 31 MB |
| nodejs_all | Node.js | All | 500 | All sizes | - |
Usage
Load a specific configuration
```python
from datasets import load_dataset

# Load Python small repositories (< 500 KB)
dataset = load_dataset("gemelom/RAT-bench", "python_small")

# Load Java medium repositories (500 KB - 5 MB)
dataset = load_dataset("gemelom/RAT-bench", "java_medium")

# Load all Rust repositories
dataset = load_dataset("gemelom/RAT-bench", "rust_all")
```
Load multiple configurations
```python
from datasets import load_dataset, concatenate_datasets

# Load all small repositories across all languages
small_repos = concatenate_datasets([
    load_dataset("gemelom/RAT-bench", f"{lang}_small")["train"]
    for lang in ["python", "java", "rust", "nodejs"]
])

print(f"Total small repositories: {len(small_repos)}")
```
Iterate through repositories
```python
import json

from datasets import load_dataset

dataset = load_dataset("gemelom/RAT-bench", "python_all")

for repo in dataset["train"]:
    print(f"Repository: {repo['full_name']}")
    print(f"Stars: {repo['stargazers_count']}")
    print(f"Size: {repo['total_bytes']} bytes")

    # Parse language breakdown from JSON string
    lang_breakdown = json.loads(repo["language_breakdown_json"])
    print(f"Languages: {lang_breakdown}")
    print("---")
```
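Since each row carries a `clone_url` and `default_branch`, a common follow-up is checking out the repository itself. The sketch below builds (but does not run) a shallow `git clone` command from a row; the row values shown here are illustrative placeholders, not real dataset entries, and the `workdir/` target path is an assumption.

```python
# Construct a shallow git-clone command for a dataset row.
# The row below is a hand-made example shaped like a RAT-bench entry.
row = {
    "full_name": "octocat/example",
    "clone_url": "https://github.com/octocat/example.git",
    "default_branch": "main",
}

cmd = [
    "git", "clone",
    "--depth", "1",                       # shallow clone: latest commit only
    "--branch", row["default_branch"],    # check out the recorded default branch
    row["clone_url"],
    f"workdir/{row['full_name']}",        # local target directory (assumed layout)
]

# To actually clone, run e.g.: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```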
Data Fields
Each repository entry contains the following fields:
- `full_name` (string): Repository full name in the format "owner/repo"
- `clone_url` (string): Git clone URL (HTTPS)
- `default_branch` (string): Default branch name (e.g., "main", "master")
- `stargazers_count` (int): Number of GitHub stars
- `language` (string): Primary programming language
- `description` (string, nullable): Repository description
- `html_url` (string, nullable): GitHub repository URL
- `forks_count` (int): Number of forks
- `topics` (list[string]): Repository topics/tags
- `total_bytes` (int): Total repository size in bytes (all files)
- `code_bytes` (int): Code-only size in bytes (excluding assets)
- `language_breakdown_json` (string): JSON string of language distribution percentages
  - Example: `{"Python": 95.5, "JavaScript": 3.2}`
  - Parse with `json.loads(row["language_breakdown_json"])`
- `size` (string): Size category ("small", "medium", or "large")
- `id` (string): Unique identifier (format: language-size-index)
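Because `language_breakdown_json` is stored as a JSON string rather than a nested feature, it needs one decoding step before use. A minimal sketch, using a hand-made row with illustrative percentages (not real dataset values):

```python
import json

# A sample record shaped like a RAT-bench row (illustrative values)
row = {
    "full_name": "octocat/example",
    "language_breakdown_json": '{"Python": 95.5, "JavaScript": 3.2, "Shell": 1.3}',
    "size": "small",
}

# Decode the JSON string into a dict of language -> percentage
breakdown = json.loads(row["language_breakdown_json"])

# The dominant language is the key with the largest percentage
primary = max(breakdown, key=breakdown.get)
print(primary)  # Python
```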
Dataset Statistics
Overall Statistics
- Total repositories: 2,000
- Languages: 4 (Python, Java, Rust, Node.js)
- Configurations: 16 (4 languages × 4 size categories)
Size Categories
- Small: < 500 KB code size
- Medium: 500 KB - 5 MB code size
- Large: > 5 MB code size
- All: All repositories (small + medium + large)
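The bucket thresholds above can be expressed as a small helper over `code_bytes`. How the dataset handles the exact 500 KB and 5 MB boundaries is not documented, so the boundary handling below (medium is inclusive of both endpoints) is an assumption:

```python
def size_category(code_bytes: int) -> str:
    """Map a code size in bytes to the dataset's small/medium/large buckets.

    Boundary handling at exactly 500 KB and 5 MB is an assumption.
    """
    KB = 1024
    MB = 1024 * KB
    if code_bytes < 500 * KB:
        return "small"
    if code_bytes <= 5 * MB:
        return "medium"
    return "large"
```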
Data Collection
The repositories were collected using the GitHub Search API with the following criteria:
- Language-specific search: Repositories must be primarily written in Python, Java, Rust, or JavaScript/TypeScript
- Dockerfile requirement: Must contain a Dockerfile for containerization
- Test requirement: Must contain unit tests (for Python: pytest)
- Build tool requirement:
- Java: Maven (pom.xml) or Gradle (build.gradle)
- Node.js: package.json
- Rust: Cargo.toml
- Activity: Repositories updated within the last year
- Popularity: Varying star counts (10-50+ stars depending on size category)
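The criteria above map naturally onto GitHub's search-qualifier syntax. The sketch below builds such a query string; the qualifier names (`language:`, `stars:`, `pushed:`) are standard GitHub search syntax, but the exact query used during collection is an assumption:

```python
from datetime import date, timedelta


def build_search_query(language: str, min_stars: int = 10) -> str:
    """Assemble a GitHub Search API query string matching the stated criteria."""
    # "Updated within the last year" -> pushed within the past 365 days
    cutoff = date.today() - timedelta(days=365)
    return " ".join([
        f"language:{language}",
        f"stars:>={min_stars}",
        f"pushed:>{cutoff.isoformat()}",
    ])


print(build_search_query("python"))
```

Dockerfile, test, and build-tool requirements cannot be expressed as search qualifiers; they would need a per-repository file check after the search.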
Collection Strategy
To ensure balanced size distribution, we used a stratified sampling approach:
- 40% small repositories (< 1MB on GitHub)
- 35% medium repositories (1-5MB on GitHub)
- 25% large repositories (> 5MB on GitHub)
Repositories are sorted by code size (ascending) to prioritize smaller, more manageable projects.
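The 40/35/25 split can be turned into per-bucket sample targets. A minimal sketch; the rounding rule (drift absorbed by the small bucket) is an assumption, not the documented collection procedure:

```python
def stratified_targets(total: int) -> dict[str, int]:
    """Per-bucket sample counts for the 40/35/25 split described above."""
    targets = {
        "small": round(total * 0.40),
        "medium": round(total * 0.35),
        "large": round(total * 0.25),
    }
    # Absorb any rounding drift into the small bucket so counts sum to total
    targets["small"] += total - sum(targets.values())
    return targets


print(stratified_targets(500))  # {'small': 200, 'medium': 175, 'large': 125}
```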
Use Cases
This dataset is suitable for:
- Docker Build Automation: Training/evaluating models to generate or fix Dockerfiles
- Test Execution: Analyzing test suites and test execution patterns
- Code Analysis: Multi-language code understanding and analysis
- Repository Characterization: Understanding repository structure and composition
- CI/CD Research: Studying containerization and testing practices
- Benchmark Evaluation: Evaluating code agents, LLMs, and automated tools
Limitations
- Only includes repositories with Dockerfiles and tests (selection bias)
- Repository metadata is a snapshot from the collection date
- Some repositories may have been updated or deleted since collection
- Size measurements are approximate (based on GitHub's language statistics API)
Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{
}
```
License
This dataset is released under the Apache 2.0 License. Note that individual repositories may have their own licenses; please check each repository's license before use.
Acknowledgments
Data collected from GitHub using the GitHub API. Repository metadata is provided by GitHub, Inc.