---
license: mit
task_categories:
  - text-generation
  - text-classification
language:
  - en
tags:
  - code
  - git
  - commits
  - software-engineering
  - concern-separation
size_categories:
  - 1K<n<10K
---

# Untangling Multi-Concern Commits with Small Language Models

This dataset contains commit data for training and evaluating models on software engineering tasks, with a focus on identifying and separating the individual concerns in multi-concern (tangled) commits.

## Dataset Description

This dataset is structured in two layers: Atomic Commits and Tangled Commits.

### 1. Atomic Commits (original)

- File: `data/sampled_ccs_dataset.csv`
- Records: 350 individual atomic commits with single concerns
- Source: Sampled from the CCS dataset (2,000 commits)
- Description: Base dataset containing individual single-concern commits
- Features:
  - `annotated_type`: The type of concern/change in the commit
  - `masked_commit_message`: Commit message with sensitive information masked
  - `git_diff`: The actual code changes in diff format
  - `sha`: Git commit SHA hash

### 2. Tangled Commits

Multi-concern commits generated artificially by combining atomic commits, split into training and test sets.

#### 2.1. Training Set (train)

- File: `data/tangled_ccs_dataset_train.csv`
- Records: 1,400 multi-concern commits
- Description: Training dataset for model development
- Features:
  - `commit_message`: Combined commit messages of all concerns
  - `diff`: JSON string containing an array of diffs, one per concern
  - `concern_count`: Number of individual concerns combined (1-5)
  - `shas`: JSON string containing an array of the original commit SHAs
  - `types`: JSON string containing an array of concern types
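Since `diff`, `shas`, and `types` are stored as JSON strings, each row must be decoded before use. A minimal sketch of parsing one row following the documented schema (the row values below are illustrative, not taken from the real files):

```python
import json

# Illustrative row shaped like the tangled-commit schema (values are made up).
row = {
    "commit_message": "feat: add retry logic\n\nfix: handle empty response",
    "diff": json.dumps([
        "diff --git a/client.py b/client.py\n+def retry(): ...",
        "diff --git a/parse.py b/parse.py\n+if not body: return None",
    ]),
    "concern_count": "2",
    "shas": json.dumps(["a1b2c3d", "e4f5a6b"]),
    "types": json.dumps(["feat", "fix"]),
}

# The JSON-string columns decode into parallel arrays, one entry per concern.
diffs = json.loads(row["diff"])
shas = json.loads(row["shas"])
types = json.loads(row["types"])
assert len(diffs) == len(shas) == len(types) == int(row["concern_count"])
```

In practice, each row loaded from the CSV (e.g. via `csv.DictReader` or pandas) is decoded the same way.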

#### 2.2. Test Set (test)

- File: `data/tangled_ccs_dataset_test.csv`
- Records: 350 multi-concern commits
- Description: Test dataset for evaluation, generated separately from the training data
- Features: Same as the training set

## Dataset Statistics

### Dataset Hierarchy

Atomic commits:

- 350 single-concern commits (sampled from the CCS dataset)

Tangled commits (artificially generated from atomic commits):

- Training set: 1,400 multi-concern commits
- Test set: 350 multi-concern commits
- Total: 1,750 multi-concern commits

### Concern Type Distribution

The dataset includes 7 conventional commit types:

- `feat`: New features
- `fix`: Bug fixes
- `refactor`: Code restructuring
- `test`: Test modifications
- `docs`: Documentation updates
- `build`: Build system changes
- `ci`: CI/CD configuration changes

### Generation Parameters

- Atomic commits: 350 commits sampled from the CCS dataset (2,000 commits)
- Tangled commits: 1,750 multi-concern commits generated by combining atomic commits
- Concern count range: 1-5 concerns per tangled commit
- Token limit: 12,288 tokens per diff (for GPT-4 context-window compatibility)
- Train/test split: 80/20 ratio (1,400 train / 350 test)

## Use Cases

1. Commit Message Generation: Generate appropriate commit messages for code changes
2. Concern Classification: Classify the type of concern addressed in a commit
3. Commit Decomposition: Break down multi-concern commits into their individual concerns
4. Code Change Analysis: Understand the relationship between code changes and their descriptions

## Data Collection and Processing

The dataset was created through a multi-stage pipeline:

### Stage 1: Atomic Commit Sampling

1. Source: Started from the CCS dataset (2,000 commits)
2. Normalization: Standardized commit type labels to lowercase
3. Token Filtering: Removed commits whose diffs exceed 12,288 tokens (GPT-4 context limit)
4. Sampling: Selected 350 commits across the 7 conventional commit types
5. Output: 350 atomic commits in `sampled_ccs_dataset.csv`
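The steps above can be sketched as follows. This is a hypothetical reconstruction, not the actual sampling script: the per-type quota of 50 is inferred from 350 commits across 7 types, and the whitespace token count stands in for whatever tokenizer the real pipeline uses.

```python
import random

TOKEN_LIMIT = 12_288
TYPES = {"feat", "fix", "refactor", "test", "docs", "build", "ci"}
PER_TYPE = 50  # assumed: 350 commits / 7 types

def approx_tokens(text):
    # Stand-in tokenizer; the real pipeline would count model tokens.
    return len(text.split())

def sample_atomic(rows, seed=0):
    """Normalize labels, drop over-limit diffs, then sample evenly per type."""
    rng = random.Random(seed)
    by_type = {t: [] for t in TYPES}
    for row in rows:
        label = row["annotated_type"].strip().lower()  # normalization step
        if label in TYPES and approx_tokens(row["git_diff"]) <= TOKEN_LIMIT:
            by_type[label].append(row)
    sampled = []
    for bucket in by_type.values():
        sampled.extend(rng.sample(bucket, min(PER_TYPE, len(bucket))))
    return sampled
```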

### Stage 2: Tangled Commit Generation

1. Train/Test Split: Split the 350 atomic commits by type (80/20 ratio) before tangling
2. Random Combination: Randomly selected and combined 1-5 atomic commits per example
3. Token Enforcement: Rejected combinations whose merged diff exceeds 12,288 tokens
4. Duplicate Prevention: Ensured unique SHA combinations using frozenset tracking
5. Output:
   - 1,400 training examples in `tangled_ccs_dataset_train.csv`
   - 350 test examples in `tangled_ccs_dataset_test.csv`
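A minimal sketch of the combination loop, assuming the atomic-commit column names documented above; the frozenset of SHAs is what guarantees each combination is emitted at most once. This is a hypothetical reconstruction of the generation script, with a whitespace token count standing in for the real tokenizer.

```python
import random

TOKEN_LIMIT = 12_288

def generate_tangled(atomic, n_examples, seed=0):
    """Combine 1-5 atomic commits per example, rejecting oversized
    merged diffs and duplicate SHA combinations."""
    rng = random.Random(seed)
    seen = set()   # frozensets of SHAs already emitted
    tangled = []
    while len(tangled) < n_examples:
        k = rng.randint(1, min(5, len(atomic)))
        picks = rng.sample(atomic, k)
        key = frozenset(c["sha"] for c in picks)
        if key in seen:
            continue  # duplicate prevention
        if sum(len(c["git_diff"].split()) for c in picks) > TOKEN_LIMIT:
            continue  # token enforcement (stand-in token count)
        seen.add(key)
        tangled.append({
            "commit_message": "\n\n".join(c["masked_commit_message"] for c in picks),
            "diff": [c["git_diff"] for c in picks],
            "concern_count": len(picks),
            "shas": [c["sha"] for c in picks],
            "types": [c["annotated_type"] for c in picks],
        })
    return tangled
```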

## Data Quality Measures

- All commit messages have sensitive information masked
- Diffs are validated against the token limit to ensure model compatibility
- The train/test split is performed before tangling, so no atomic commit leaks between sets
- Balanced representation across all concern types and counts
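The no-leakage claim can be checked directly from the released files: flatten each split's `shas` column and verify the SHA sets are disjoint. A small sketch assuming the documented column names:

```python
import json

def splits_are_disjoint(train_rows, test_rows):
    """Return True if no atomic-commit SHA appears in both splits."""
    train_shas = {s for row in train_rows for s in json.loads(row["shas"])}
    test_shas = {s for row in test_rows for s in json.loads(row["shas"])}
    return train_shas.isdisjoint(test_shas)
```

Load each CSV (e.g. with `csv.DictReader`) and pass the rows of each split to this check.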

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{tangled_commits_dataset,
  title={Detecting Semantic Concerns in Tangled Code Changes Using Small Language Models},
  author={Beromsu Koh},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/Berom0227/Detecting-Semantic-Concerns-in-Tangled-Code-Changes-Using-SLMs},
  note={Dataset includes 350 atomic commits and 1,750 artificially tangled multi-concern commits (1,400 train / 350 test)}
}
```

## Scripts

- `sample_atomic_commites.py`: Samples atomic (single-concern) commits from the CCS dataset
  - Implements the sampling pipeline with filtering and label normalization
  - Filters out diffs above the 12,288-token limit to ensure model compatibility
  - Samples 350 commits across the 7 conventional commit types
  - Produces `sampled_ccs_dataset.csv`
- `generate_tangled_commites.py`: Generates artificial multi-concern commits by combining atomic commits
  - Creates the train/test split (80/20 ratio) at the atomic commit level
  - Randomly combines 1-5 atomic commits into each tangled commit
  - Ensures no duplicate SHA combinations
  - Enforces the token limit on combined diffs
  - Produces 1,400 train and 350 test examples