
RAT-bench: RunAnyThing Benchmark

Dataset Summary

RAT-bench is a curated collection of GitHub repositories across multiple programming languages (Python, Java, Rust, Node.js), each containing Dockerfiles and unit tests. This dataset is designed for evaluating code analysis, Docker containerization, and automated testing capabilities.

The dataset is organized by programming language and repository size, providing 16 different configurations for flexible use in various research and development scenarios.

Dataset Configurations

The dataset contains 16 configurations organized by language and size:

| Configuration | Language | Size Category | Count | Size Range | Avg Size |
|---------------|----------|---------------|-------|---------------|----------|
| python_small | Python | Small | 258 | < 500 KB | 185 KB |
| python_medium | Python | Medium | 190 | 500 KB - 5 MB | 2 MB |
| python_large | Python | Large | 52 | > 5 MB | 25 MB |
| python_all | Python | All | 500 | All sizes | - |
| java_small | Java | Small | 242 | < 500 KB | 159 KB |
| java_medium | Java | Medium | 178 | 500 KB - 5 MB | 2 MB |
| java_large | Java | Large | 80 | > 5 MB | 103 MB |
| java_all | Java | All | 500 | All sizes | - |
| rust_small | Rust | Small | 257 | < 500 KB | 199 KB |
| rust_medium | Rust | Medium | 199 | 500 KB - 5 MB | 2 MB |
| rust_large | Rust | Large | 44 | > 5 MB | 17 MB |
| rust_all | Rust | All | 500 | All sizes | - |
| nodejs_small | Node.js | Small | 393 | < 500 KB | 112 KB |
| nodejs_medium | Node.js | Medium | 86 | 500 KB - 5 MB | 2 MB |
| nodejs_large | Node.js | Large | 21 | > 5 MB | 31 MB |
| nodejs_all | Node.js | All | 500 | All sizes | - |

Usage

Load a specific configuration

```python
from datasets import load_dataset

# Load Python small repositories (< 500 KB)
dataset = load_dataset("gemelom/RAT-bench", "python_small")

# Load Java medium repositories (500 KB - 5 MB)
dataset = load_dataset("gemelom/RAT-bench", "java_medium")

# Load all Rust repositories
dataset = load_dataset("gemelom/RAT-bench", "rust_all")
```

Load multiple configurations

```python
from datasets import load_dataset, concatenate_datasets

# Load all small repositories across all languages
small_repos = concatenate_datasets([
    load_dataset("gemelom/RAT-bench", f"{lang}_small")["train"]
    for lang in ["python", "java", "rust", "nodejs"]
])

print(f"Total small repositories: {len(small_repos)}")
```

Iterate through repositories

```python
import json

from datasets import load_dataset

dataset = load_dataset("gemelom/RAT-bench", "python_all")

for repo in dataset["train"]:
    print(f"Repository: {repo['full_name']}")
    print(f"Stars: {repo['stargazers_count']}")
    print(f"Size: {repo['total_bytes']} bytes")

    # Parse the language breakdown from its JSON string
    lang_breakdown = json.loads(repo['language_breakdown_json'])
    print(f"Languages: {lang_breakdown}")
    print("---")
```

Data Fields

Each repository entry contains the following fields:

  • full_name (string): Repository full name in format "owner/repo"
  • clone_url (string): Git clone URL (HTTPS)
  • default_branch (string): Default branch name (e.g., "main", "master")
  • stargazers_count (int): Number of GitHub stars
  • language (string): Primary programming language
  • description (string, nullable): Repository description
  • html_url (string, nullable): GitHub repository URL
  • forks_count (int): Number of forks
  • topics (list[string]): Repository topics/tags
  • total_bytes (int): Total repository size in bytes (all files)
  • code_bytes (int): Code-only size in bytes (excluding assets)
  • language_breakdown_json (string): JSON string of language distribution percentages
    • Example: {"Python": 95.5, "JavaScript": 3.2}
    • Parse with json.loads(row["language_breakdown_json"])
  • size (string): Size category ("small", "medium", or "large")
  • id (string): Unique identifier (format: language-size-index)
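The `id` field's `language-size-index` format can be unpacked with a small helper. This is an illustrative sketch based on the format described above, not part of the dataset's API:

```python
def parse_repo_id(repo_id: str) -> dict:
    """Split an id like 'python-small-42' into its parts.

    Assumes the language-size-index format documented above,
    with no hyphens inside the language or size tokens.
    """
    language, size, index = repo_id.split("-")
    return {"language": language, "size": size, "index": int(index)}
```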

Dataset Statistics

Overall Statistics

  • Total repositories: 2,000
  • Languages: 4 (Python, Java, Rust, Node.js)
  • Configurations: 16 (4 languages × 4 size categories)
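Since configuration names follow a `{language}_{size}` scheme, all 16 can be enumerated programmatically (a convenience sketch, not an official API):

```python
LANGUAGES = ["python", "java", "rust", "nodejs"]
SIZES = ["small", "medium", "large", "all"]

# Cartesian product of 4 languages x 4 size categories = 16 configurations
CONFIGS = [f"{lang}_{size}" for lang in LANGUAGES for size in SIZES]
```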

Size Categories

  • Small: < 500 KB code size
  • Medium: 500 KB - 5 MB code size
  • Large: > 5 MB code size
  • All: All repositories (small + medium + large)
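Applied to the `code_bytes` field, the thresholds above can be expressed as follows. This is a sketch; the handling of values exactly at the 500 KB and 5 MB boundaries is an assumption:

```python
KB = 1024
MB = 1024 * KB

def size_category(code_bytes: int) -> str:
    """Map a code size in bytes to this dataset's size buckets."""
    if code_bytes < 500 * KB:
        return "small"
    if code_bytes <= 5 * MB:  # boundary treatment assumed, not documented
        return "medium"
    return "large"
```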

Data Collection

The repositories were collected using the GitHub Search API with the following criteria:

  1. Language-specific search: Repositories must be primarily written in Python, Java, Rust, or JavaScript/TypeScript
  2. Dockerfile requirement: Must contain a Dockerfile for containerization
  3. Test requirement: Must contain unit tests (e.g., pytest for Python)
  4. Build tool requirement:
    • Java: Maven (pom.xml) or Gradle (build.gradle)
    • Node.js: package.json
    • Rust: Cargo.toml
  5. Activity: Repositories updated within the last year
  6. Popularity: Varying star counts (10-50+ stars depending on size category)
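The criteria above map roughly onto standard GitHub search qualifiers (`language:`, `stars:`, `pushed:`). A hypothetical query builder, for illustration only (the actual collection script and exact values are not published in this card):

```python
def build_search_query(language: str, min_stars: int, pushed_after: str) -> str:
    """Compose a GitHub repository search query from criteria like the above.

    The date and star threshold passed in are illustrative placeholders,
    not the values used during collection.
    """
    return f"language:{language} stars:>={min_stars} pushed:>={pushed_after}"

query = build_search_query("python", 10, "2024-01-01")
```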

Collection Strategy

To ensure balanced size distribution, we used a stratified sampling approach:

  • 40% small repositories (< 1MB on GitHub)
  • 35% medium repositories (1-5MB on GitHub)
  • 25% large repositories (> 5MB on GitHub)

Repositories are sorted by code size (ascending) to prioritize smaller, more manageable projects.
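The stratified split described above can be sketched as follows. This is a minimal illustration over synthetic candidate lists; the actual sampling code is not included in this card:

```python
import random

def stratified_sample(repos_by_size, total, fractions):
    """Draw a fixed fraction of the sample from each size stratum.

    repos_by_size: dict mapping 'small'/'medium'/'large' to candidate lists
    fractions:     dict of target fractions, e.g. {'small': 0.40, ...}
    """
    sample = []
    for size, frac in fractions.items():
        k = round(total * frac)
        sample.extend(random.sample(repos_by_size[size], k))
    # Sort by code size ascending, as described above
    return sorted(sample, key=lambda repo: repo["code_bytes"])
```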

Use Cases

This dataset is suitable for:

  1. Docker Build Automation: Training/evaluating models to generate or fix Dockerfiles
  2. Test Execution: Analyzing test suites and test execution patterns
  3. Code Analysis: Multi-language code understanding and analysis
  4. Repository Characterization: Understanding repository structure and composition
  5. CI/CD Research: Studying containerization and testing practices
  6. Benchmark Evaluation: Evaluating code agents, LLMs, and automated tools

Limitations

  • Only includes repositories with Dockerfiles and tests (selection bias)
  • Repository metadata is a snapshot from the collection date
  • Some repositories may have been updated or deleted since collection
  • Size measurements are approximate (based on GitHub's language statistics API)

Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{
}
```

License

This dataset is released under the Apache 2.0 License. Note that individual repositories may have their own licenses; please check each repository's license before use.

Acknowledgments

Data collected from GitHub using the GitHub API. Repository metadata is provided by GitHub, Inc.
