---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
- docker
- testing
- repositories
- github
- code-generation
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: java_all
path: data/java_all-*
- split: java_eval
path: data/java_eval-*
- split: java_large
path: data/java_large-*
- split: java_medium
path: data/java_medium-*
- split: java_small
path: data/java_small-*
- split: nodejs_all
path: data/nodejs_all-*
- split: nodejs_eval
path: data/nodejs_eval-*
- split: nodejs_large
path: data/nodejs_large-*
- split: nodejs_medium
path: data/nodejs_medium-*
- split: nodejs_small
path: data/nodejs_small-*
- split: python_all
path: data/python_all-*
- split: python_eval
path: data/python_eval-*
- split: python_large
path: data/python_large-*
- split: python_medium
path: data/python_medium-*
- split: python_small
path: data/python_small-*
- split: rust_all
path: data/rust_all-*
- split: rust_eval
path: data/rust_eval-*
- split: rust_large
path: data/rust_large-*
- split: rust_medium
path: data/rust_medium-*
- split: rust_small
path: data/rust_small-*
- split: python_l1
path: data/python_l1-*
- split: python_l2
path: data/python_l2-*
- split: python_l3
path: data/python_l3-*
- split: python_claude
path: data/python_claude-*
- split: python_claude_continue
path: data/python_claude_continue-*
dataset_info:
features:
- name: full_name
dtype: string
- name: clone_url
dtype: string
- name: default_branch
dtype: string
- name: stargazers_count
dtype: int64
- name: language
dtype: string
- name: description
dtype: string
- name: html_url
dtype: string
- name: forks_count
dtype: int64
- name: topics
list: string
- name: total_bytes
dtype: int64
- name: code_bytes
dtype: int64
- name: size
dtype: string
- name: id
dtype: string
- name: language_breakdown_json
dtype: string
splits:
- name: java_all
num_bytes: 203687
num_examples: 500
- name: java_eval
num_bytes: 60990
num_examples: 150
- name: java_large
num_bytes: 38573
num_examples: 80
- name: java_medium
num_bytes: 69081
num_examples: 178
- name: java_small
num_bytes: 97012
num_examples: 242
- name: nodejs_all
num_bytes: 197947
num_examples: 500
- name: nodejs_eval
num_bytes: 60204
num_examples: 150
- name: nodejs_large
num_bytes: 9239
num_examples: 21
- name: nodejs_medium
num_bytes: 37669
num_examples: 86
- name: nodejs_small
num_bytes: 151999
num_examples: 393
- name: python_all
num_bytes: 214928
num_examples: 500
- name: python_eval
num_bytes: 62216
num_examples: 150
- name: python_large
num_bytes: 29049
num_examples: 52
- name: python_medium
num_bytes: 81759
num_examples: 190
- name: python_small
num_bytes: 105139
num_examples: 258
- name: rust_all
num_bytes: 177941
num_examples: 500
- name: rust_eval
num_bytes: 54279
num_examples: 150
- name: rust_large
num_bytes: 19649
num_examples: 44
- name: rust_medium
num_bytes: 73401
num_examples: 199
- name: rust_small
num_bytes: 85927
num_examples: 257
- name: python_l1
num_bytes: 22090
num_examples: 50
- name: python_l2
num_bytes: 19338
num_examples: 50
- name: python_l3
num_bytes: 19584
num_examples: 50
- name: python_claude
num_bytes: 12251
num_examples: 30
- name: python_claude_continue
num_bytes: 3998
num_examples: 9
download_size: 1202140
dataset_size: 1907950
---
# RAT-bench: RunAnyThing Benchmark
## Dataset Summary
RAT-bench is a curated collection of GitHub repositories across multiple programming languages (Python, Java, Rust, Node.js),
each containing Dockerfiles and unit tests. This dataset is designed for evaluating code analysis, Docker containerization,
and automated testing capabilities.
The dataset is organized by programming language and repository size into 25 splits,
providing flexible entry points for research and evaluation scenarios.
## Dataset Splits
The dataset ships a single default configuration with 25 splits, organized by language and size category:
| Configuration | Language | Size Category | Count | Size Range | Avg Size |
|--------------|----------|---------------|-------|------------|----------|
| `python_small` | Python | Small | 258 | < 500 KB | 185 KB |
| `python_medium` | Python | Medium | 190 | 500 KB - 5 MB | 2 MB |
| `python_large` | Python | Large | 52 | > 5 MB | 25 MB |
| `python_all` | Python | All | 500 | All sizes | - |
| `java_small` | Java | Small | 242 | < 500 KB | 159 KB |
| `java_medium` | Java | Medium | 178 | 500 KB - 5 MB | 2 MB |
| `java_large` | Java | Large | 80 | > 5 MB | 103 MB |
| `java_all` | Java | All | 500 | All sizes | - |
| `rust_small` | Rust | Small | 257 | < 500 KB | 199 KB |
| `rust_medium` | Rust | Medium | 199 | 500 KB - 5 MB | 2 MB |
| `rust_large` | Rust | Large | 44 | > 5 MB | 17 MB |
| `rust_all` | Rust | All | 500 | All sizes | - |
| `nodejs_small` | Nodejs | Small | 393 | < 500 KB | 112 KB |
| `nodejs_medium` | Nodejs | Medium | 86 | 500 KB - 5 MB | 2 MB |
| `nodejs_large` | Nodejs | Large | 21 | > 5 MB | 31 MB |
| `nodejs_all` | Nodejs | All | 500 | All sizes | - |
| `python_eval` | Python | Eval | 150 | - | - |
| `java_eval` | Java | Eval | 150 | - | - |
| `rust_eval` | Rust | Eval | 150 | - | - |
| `nodejs_eval` | Nodejs | Eval | 150 | - | - |

Additional Python-only splits are also provided: `python_l1`, `python_l2`, and `python_l3` (50 repositories each), `python_claude` (30), and `python_claude_continue` (9).
## Usage
### Load a specific split
```python
from datasets import load_dataset

# Load Python small repositories (< 500 KB)
dataset = load_dataset("gemelom/RAT-bench", split="python_small")

# Load Java medium repositories (500 KB - 5 MB)
dataset = load_dataset("gemelom/RAT-bench", split="java_medium")

# Load all Rust repositories
dataset = load_dataset("gemelom/RAT-bench", split="rust_all")
```
### Load multiple splits
```python
from datasets import load_dataset, concatenate_datasets

# Load all small repositories across all languages
small_repos = concatenate_datasets([
    load_dataset("gemelom/RAT-bench", split=f"{lang}_small")
    for lang in ["python", "java", "rust", "nodejs"]
])
print(f"Total small repositories: {len(small_repos)}")
```
### Iterate through repositories
```python
import json

from datasets import load_dataset

dataset = load_dataset("gemelom/RAT-bench", split="python_all")
for repo in dataset:
    print(f"Repository: {repo['full_name']}")
    print(f"Stars: {repo['stargazers_count']}")
    print(f"Size: {repo['total_bytes']} bytes")
    # Parse the language breakdown from its JSON string
    lang_breakdown = json.loads(repo['language_breakdown_json'])
    print(f"Languages: {lang_breakdown}")
    print("---")
```
## Data Fields
Each repository entry contains the following fields:
- `full_name` (string): Repository full name in format "owner/repo"
- `clone_url` (string): Git clone URL (HTTPS)
- `default_branch` (string): Default branch name (e.g., "main", "master")
- `stargazers_count` (int): Number of GitHub stars
- `language` (string): Primary programming language
- `description` (string, nullable): Repository description
- `html_url` (string, nullable): GitHub repository URL
- `forks_count` (int): Number of forks
- `topics` (list[string]): Repository topics/tags
- `total_bytes` (int): Total repository size in bytes (all files)
- `code_bytes` (int): Code-only size in bytes (excluding assets)
- `language_breakdown_json` (string): JSON string of language distribution percentages
- Example: `{"Python": 95.5, "JavaScript": 3.2}`
- Parse with `json.loads(row["language_breakdown_json"])`
- `size` (string): Size category ("small", "medium", or "large")
- `id` (string): Unique identifier (format: language-size-index)
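As a quick illustration, `language_breakdown_json` decodes with the standard `json` module. The row below is a hypothetical example matching the schema, not an actual dataset entry:

```python
import json

# Hypothetical row mirroring the schema above (values are illustrative)
row = {
    "full_name": "octocat/hello-world",
    "size": "small",
    "language_breakdown_json": '{"Python": 95.5, "JavaScript": 3.2, "Shell": 1.3}',
}

breakdown = json.loads(row["language_breakdown_json"])
# The primary language is the one with the largest percentage share
primary = max(breakdown, key=breakdown.get)
print(f"{primary}: {breakdown[primary]}%")  # → Python: 95.5%
```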
## Dataset Statistics
### Overall Statistics
- **Total repositories**: 2,000
- **Languages**: 4 (Python, Java, Rust, Node.js)
- **Splits**: 25 (4 languages × 5 splits each, plus 5 additional Python subsets)
### Size Categories
- **Small**: < 500 KB code size
- **Medium**: 500 KB - 5 MB code size
- **Large**: > 5 MB code size
- **All**: All repositories (small + medium + large)
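In code, the bucketing rule above amounts to a simple threshold function. This is a sketch; the exact boundary handling used during collection is an assumption:

```python
def size_category(code_bytes: int) -> str:
    """Map a code size in bytes to the dataset's size bucket (sketch)."""
    KB, MB = 1024, 1024 ** 2
    if code_bytes < 500 * KB:
        return "small"
    if code_bytes <= 5 * MB:   # upper boundary treatment is an assumption
        return "medium"
    return "large"

print(size_category(100 * 1024))      # → small
print(size_category(2 * 1024 ** 2))   # → medium
print(size_category(50 * 1024 ** 2))  # → large
```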
## Data Collection
The repositories were collected using the GitHub Search API with the following criteria:
1. **Language-specific search**: Repositories must be primarily written in Python, Java, Rust, or JavaScript/TypeScript
2. **Dockerfile requirement**: Must contain a Dockerfile for containerization
3. **Test requirement**: Must contain unit tests (e.g., pytest for Python)
4. **Build tool requirement**:
- Java: Maven (pom.xml) or Gradle (build.gradle)
- Node.js: package.json
- Rust: Cargo.toml
5. **Activity**: Repositories updated within the last year
6. **Popularity**: Varying star counts (10-50+ stars depending on size category)
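Most of these criteria can be expressed as GitHub search qualifiers; the Dockerfile and test checks require per-repository inspection. The sketch below is illustrative, and the exact filters used for collection are assumptions:

```python
from datetime import date, timedelta

def build_search_query(language: str, min_stars: int = 10) -> str:
    """Build a GitHub Search API qualifier string reflecting the
    criteria above (illustrative; the dataset's exact filters and
    star thresholds are assumptions)."""
    one_year_ago = date.today() - timedelta(days=365)
    return (
        f"language:{language} "
        f"stars:>={min_stars} "
        f"pushed:>={one_year_ago.isoformat()}"
    )

print(build_search_query("python"))
```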
### Collection Strategy
To ensure balanced size distribution, we used a stratified sampling approach:
- 40% small repositories (< 1MB on GitHub)
- 35% medium repositories (1-5MB on GitHub)
- 25% large repositories (> 5MB on GitHub)
Repositories are sorted by code size (ascending) to prioritize smaller, more manageable projects.
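For a 500-repository language sample, the target quotas work out as sketched below (this is the arithmetic of the 40/35/25 strategy, not the collection script itself; actual per-split counts differ because repositories were later re-bucketed by code size):

```python
def stratum_quotas(total: int) -> dict[str, int]:
    """Split a sample size into 40/35/25 small/medium/large target quotas."""
    quotas = {
        "small": round(total * 0.40),
        "medium": round(total * 0.35),
        "large": round(total * 0.25),
    }
    # Assign any rounding remainder to the small bucket
    quotas["small"] += total - sum(quotas.values())
    return quotas

print(stratum_quotas(500))  # → {'small': 200, 'medium': 175, 'large': 125}
```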
## Use Cases
This dataset is suitable for:
1. **Docker Build Automation**: Training/evaluating models to generate or fix Dockerfiles
2. **Test Execution**: Analyzing test suites and test execution patterns
3. **Code Analysis**: Multi-language code understanding and analysis
4. **Repository Characterization**: Understanding repository structure and composition
5. **CI/CD Research**: Studying containerization and testing practices
6. **Benchmark Evaluation**: Evaluating code agents, LLMs, and automated tools
## Limitations
- Only includes repositories with Dockerfiles and tests (selection bias)
- Repository metadata is a snapshot from the collection date
- Some repositories may have been updated or deleted since collection
- Size measurements are approximate (based on GitHub's language statistics API)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{
}
```
## License
This dataset is released under the Apache 2.0 License. Note that individual repositories
may have their own licenses; please check each repository's license before use.
## Acknowledgments
Data collected from GitHub using the GitHub API. Repository metadata is provided by GitHub, Inc.