---
license: mit
task_categories:
  - text-generation
language:
  - en
  - code
tags:
  - devops
  - docker
  - ci-cd
  - github-actions
  - build-systems
  - configuration
size_categories:
  - 10K<n<100K
---

# Build/CI Configuration Corpus

A curated dataset of build, CI/CD, and project configuration files from top GitHub repositories.

Repositories are sourced from ronantakizawa/github-top-projects, which tracks GitHub's top repositories from 2013–2025.

## Use Cases

- Fine-tuning LLMs for DevOps/infrastructure code generation
- Training code completion models for configuration files
- Benchmarking LLM performance on build/CI tasks


## Schema

| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Full file content |
| `file_path` | string | Path within the repository |
| `file_name` | string | Filename only |
| `category` | string | High-level category (e.g., `dockerfile`, `github_actions`) |
| `config_type` | string | Specific config type (e.g., `docker-compose`, `tsconfig`) |
| `repo_name` | string | Repository (`owner/name`) |
| `repo_stars` | int64 | Star count |
| `repo_language` | string | Primary language of the repository |
| `license` | string | SPDX license identifier |
| `quality_score` | float32 | Quality score (0.0–1.0); see below |
| `is_generated` | bool | Whether the file appears auto-generated (lower signal for training) |

## Quality Filtering

The dataset undergoes three quality filtering stages:

1. **Minimum size**: Files with fewer than 5 lines or 50 characters are removed (trivial configs such as two-line `.nvmrc` files add no training signal).

2. **Near-deduplication**: MinHash LSH (128 permutations, Jaccard threshold 0.85) removes near-duplicate files. Within each duplicate cluster, the version from the highest-starred repository is kept. This eliminates hundreds of copies of common starter templates (e.g., a default `tsconfig.json` or boilerplate `Dockerfile`).

3. **Makefile scoping**: Makefiles are restricted to the repository root and one directory deep, preventing large C/C++ repositories from flooding the dataset with subdirectory Makefiles.
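The exact deduplication code is not published in this card, but the idea behind step 2 can be sketched with a stdlib-only toy: character shingles, a 128-permutation MinHash signature, and pairwise signature comparison (a real pipeline would use a library such as `datasketch` with LSH banding instead of comparing every pair).

```python
import hashlib
from itertools import combinations

NUM_PERM = 128
THRESHOLD = 0.85

def shingles(text, k=5):
    """Character k-grams used as the shingle set for a config file."""
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash(shingle_set):
    """Toy 128-permutation MinHash signature via seeded hashing."""
    sig = []
    for seed in range(NUM_PERM):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8,
                                salt=seed.to_bytes(8, "big")).digest(), "big")
            for s in shingle_set))
    return sig

def est_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM

def dedup(files):
    """files: dicts with 'content' and 'repo_stars'.
    Of each near-duplicate pair, keep the higher-starred copy."""
    sigs = [minhash(shingles(f["content"])) for f in files]
    dropped = set()
    for i, j in combinations(range(len(files)), 2):
        if i in dropped or j in dropped:
            continue
        if est_jaccard(sigs[i], sigs[j]) >= THRESHOLD:
            # Drop the copy from the lower-starred repository.
            dropped.add(i if files[i]["repo_stars"] < files[j]["repo_stars"] else j)
    return [f for k, f in enumerate(files) if k not in dropped]
```

With threshold 0.85, verbatim starter-template copies collapse to a single entry while genuinely different configs of the same type survive.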

## Quality Score

Each file receives a quality score (0.0–1.0) based on four equally-weighted factors:

- **Comment density (0–0.25)**: Files with comments/annotations teach intent, not just syntax
- **Content length (0–0.25)**: Longer files are more substantive (log-scaled, capped at 500 lines)
- **Repository quality (0–0.25)**: Higher-starred repos signal better engineering practices (log-scaled)
- **Non-trivial ratio (0–0.25)**: Ratio of meaningful lines vs blank/bracket-only lines
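A rough re-implementation of the four factors might look like the sketch below. The comment markers, saturation points, and the fixed stand-in for the repository-quality term are illustrative assumptions, not the dataset's actual formula.

```python
import math

def quality_score(content: str) -> float:
    """Toy version of the four equally weighted 0-0.25 factors.
    The exact curves used to build the dataset are assumptions here."""
    lines = content.splitlines()
    n = max(len(lines), 1)

    # Comment density: share of lines starting with a common comment marker,
    # saturating once a quarter of the lines are comments.
    comments = sum(1 for l in lines if l.lstrip().startswith(("#", "//", ";")))
    comment_score = 0.25 * min(comments / n * 4, 1.0)

    # Content length: log-scaled, capped at 500 lines.
    length_score = 0.25 * min(math.log1p(n) / math.log1p(500), 1.0)

    # Repository quality would use repo_stars (log-scaled);
    # fixed at a 1,000-star repo here purely for illustration.
    repo_score = 0.25 * min(math.log1p(1000) / math.log1p(100_000), 1.0)

    # Non-trivial ratio: lines that are more than blanks or lone brackets.
    nontrivial = sum(1 for l in lines if l.strip() not in ("", "{", "}", "[", "]"))
    nontrivial_score = 0.25 * (nontrivial / n)

    return comment_score + length_score + repo_score + nontrivial_score
```

Under this scheme a well-commented, substantial config scores noticeably higher than a one-line stub, which matches the intent of the factors above.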

Use quality_score to filter for higher-quality examples during training:

```python
high_quality = ds["train"].filter(lambda x: x["quality_score"] >= 0.5)
```

## Splits

- **train** (90%): For training
- **test** (10%): For evaluation

Splits are deterministic by repository (all files from a repo go to the same split).
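One common way to get a repository-level deterministic split is to hash the repository name and threshold on the hash; this is a plausible sketch, not necessarily the scheme the dataset used.

```python
import hashlib

def split_for_repo(repo_name: str, test_fraction: float = 0.10) -> str:
    """Assign every file of a repo to the same split by hashing repo_name.
    Deterministic: the same name always maps to the same split."""
    h = int.from_bytes(hashlib.sha256(repo_name.encode()).digest()[:8], "big")
    return "test" if (h % 10_000) / 10_000 < test_fraction else "train"
```

Because the assignment depends only on `repo_name`, no repository's files can leak across the train/test boundary.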

## Usage

```python
from datasets import load_dataset

ds = load_dataset("ronantakizawa/codeconfig")

# Filter by category
dockerfiles = ds["train"].filter(lambda x: x["category"] == "dockerfile")
github_actions = ds["train"].filter(lambda x: x["category"] == "github_actions")

# Filter by specific config type
tsconfigs = ds["train"].filter(lambda x: x["config_type"] == "tsconfig")
```