---
language:
  - en
  - code
license: mit
size_categories:
  - 100K<n<1M
task_categories:
  - text-generation
pretty_name: Yuuki Code Dataset
tags:
  - code
  - multilingual
  - programming
  - training-data
  - open-source
---

# Yuuki Dataset



**Multilingual Code Training Dataset**

321K code samples. 18 programming languages. Curated from GitHub + HuggingFace.
High-quality training data for code generation models. Zero cloud budget.









**Curated for quality**

- 321,000 code samples total
- 18 programming languages
- Sourced from GitHub repositories
- Filtered and deduplicated
- Train/validation/test splits
- Ready for model training
- MIT licensed and attribution-complete

**Production-ready**

- Structured JSONL format
- Comprehensive metadata fields
- Language detection included
- Source repository tracking
- File path preservation

Built for the Yuuki project.




## Dataset Description


Yuuki Dataset is a carefully curated collection of high-quality source code from open-source repositories and public datasets. Created specifically for training the Yuuki language models, this dataset represents real-world code across 18 programming languages, with emphasis on Python, C/C++, JavaScript, and other widely-used languages.

The dataset was assembled with zero cloud budget using streaming collection from HuggingFace Datasets and targeted cloning from popular GitHub repositories. Each sample includes the source code, detected programming language, origin source, and (where applicable) repository URL and file path.

The dataset was built with rigorous deduplication, quality filtering, and language balancing to ensure diverse, high-quality training data. All code comes from open-source repositories under licenses such as MIT, Apache-2.0, BSD, and GPL, with full attribution metadata preserved (see Individual Code Licenses below for compatibility notes).


### Dataset Summary

- **Total Samples:** 321,000
- **Languages:** 18 (Python, C, C++, JavaScript, Java, Go, Rust, and more)
- **Sources:** GitHub repositories + HuggingFace Datasets
- **License:** MIT
- **Format:** JSONL (JSON Lines)
- **Splits:** Train (~257k) / Validation (~32.1k) / Test (~32.1k)
- **Use Case:** Training code generation and completion models



## Supported Tasks


### Code Generation

Generate complete functions, classes, or modules from natural language descriptions or partial code contexts. The dataset's diverse language coverage and real-world code patterns make it ideal for training models to produce syntactically correct and idiomatic code.


### Code Completion

Autocomplete code as developers type. Train models to predict the next tokens, lines, or blocks based on surrounding context. Includes common patterns, API usage, and language-specific idioms.

### Program Synthesis

Learn to translate specifications, comments, or natural language into executable code. The dataset's wide range of programming paradigms (imperative, functional, object-oriented) supports robust synthesis capabilities.


### Code Translation

Cross-language translation tasks. With 18 languages represented, models can learn to convert code from one language to another while preserving functionality and idioms.




## Dataset Structure


### Data Instances

Each instance in the dataset is a JSON object with the following structure:

```json
{
  "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
  "language": "python",
  "source": "github",
  "repo": "https://github.com/pytorch/pytorch",
  "path": "examples/recursion/fibonacci.py"
}
```
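
Because the dataset uses JSON Lines, every record can be parsed independently, one per line. A minimal sketch of reading a split, assuming a hypothetical local `train.jsonl` export:

```python
import json

# Each line of a JSONL file is one self-contained JSON record.
# "train.jsonl" is a hypothetical local file name for illustration.
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        sample = json.loads(line)
        print(sample["language"], sample.get("repo", ""), sample["path"])
```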

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `code` | string | The source code content (1 to ~25.8M characters) |
| `language` | string | Detected programming language (18 values) |
| `source` | string | Origin source: `github`, `hf:dataset-name`, etc. (4 unique sources) |
| `repo` | string | Repository URL if sourced from GitHub (78 unique repos) |
| `path` | string | Original file path within the repository (0–268 characters) |

### Data Splits

| Split | Samples | Percentage | Size (approx.) |
|-------|---------|------------|----------------|
| Train | ~257,000 | 80% | ~15 GB |
| Validation | ~32,100 | 10% | ~2 GB |
| Test | ~32,100 | 10% | ~2 GB |
| **Total** | 321,000 | 100% | ~19 GB |

Splits are randomized and stratified to maintain a consistent language distribution across the train/validation/test sets.
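
A minimal sketch of what such a stratified split looks like (illustrative only, not the exact pipeline): shuffle within each language bucket, then slice 80/10/10.

```python
import random
from collections import defaultdict

def stratified_split(samples, seed=42):
    """Split samples 80/10/10 while preserving per-language proportions."""
    by_lang = defaultdict(list)
    for s in samples:
        by_lang[s["language"]].append(s)

    train, val, test = [], [], []
    rng = random.Random(seed)
    for lang_samples in by_lang.values():
        rng.shuffle(lang_samples)
        n = len(lang_samples)
        a, b = int(n * 0.8), int(n * 0.9)
        train += lang_samples[:a]
        val += lang_samples[a:b]
        test += lang_samples[b:]
    return train, val, test
```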




## Languages

The dataset covers 18 programming languages with varying representation:

| Language | Category | Primary Use Cases |
|----------|----------|-------------------|
| C | Systems | Operating systems, embedded systems, performance-critical code |
| C++ | Systems | Game engines, high-performance computing, systems software |
| Python | General-purpose | Data science, web development, automation, AI/ML |
| JavaScript | Web | Frontend development, Node.js backends, full-stack applications |
| TypeScript | Web | Type-safe JavaScript for large-scale applications |
| Java | Enterprise | Android development, enterprise backends, distributed systems |
| Go | Cloud-native | Microservices, cloud infrastructure, concurrent systems |
| Rust | Systems | Memory-safe systems programming, WebAssembly, tooling |
| PHP | Web | WordPress, Laravel, server-side web development |
| Ruby | Web | Rails applications, scripting, web backends |
| Swift | Mobile | iOS, macOS, watchOS, tvOS application development |
| Kotlin | Mobile | Android development, server-side applications |
| HTML | Markup | Web page structure and content |
| CSS | Styling | Web page styling and layout |
| SQL | Database | Database queries, schema definitions, data manipulation |
| Shell | Scripting | Bash, Zsh, shell automation scripts |
| JSON | Data | Configuration files, API responses, data interchange |
| YAML | Configuration | Config files, CI/CD pipelines, infrastructure as code |


## Dataset Creation


### Curation Rationale

This dataset was created to train the Yuuki code generation models on resource-constrained hardware (specifically, a Snapdragon 685 smartphone) with zero cloud budget. The curation process prioritized:

1. **Quality over quantity:** Aggressive filtering for syntax correctness, readability, and real-world patterns
2. **Language diversity:** Balanced representation across major programming languages
3. **License compliance:** Open-source licenses only, with full attribution
4. **Deduplication:** MinHash LSH for near-duplicate detection (80% similarity threshold)
5. **Reproducibility:** All sources documented with repository URLs and file paths

### Source Data

#### HuggingFace Datasets

- `bigcode/the-stack-dedup`: deduplicated subset of The Stack
- `bigcode/starcoderdata`: StarCoder training corpus
- `code_search_net`: CodeSearchNet dataset (all languages)
- `codeparrot/github-code`: GitHub code samples
- Additional curated code datasets

#### GitHub Repositories

78 popular open-source repositories were cloned and filtered:

- **Python:** Django, Flask, NumPy, Pandas, PyTorch, TensorFlow, scikit-learn
- **JavaScript/TypeScript:** React, Vue, Angular, Next.js, Node.js, Express
- **Systems:** Linux kernel, PostgreSQL, Redis, Nginx, curl, Git
- **Languages:** Rust, Go, Kotlin, Swift language implementations
- **Frameworks:** Spring Boot, Laravel, Rails, and more

The full repository list is available in the dataset metadata.


### Data Collection

1. **Streaming collection** from HuggingFace Datasets (target: ~10 GB)
2. **GitHub cloning** with shallow clones (`depth=1`) for efficiency
3. **File extraction**, filtering by extension (`.py`, `.js`, `.c`, `.cpp`, etc.)
4. **Language detection** based on file extension and content analysis
5. **Quality filtering**, removing minified, generated, and binary files
6. **Deduplication** using SHA-256 exact matching + MinHash LSH at an 80% similarity threshold (see the sketch after this list)
7. **Balancing** to prevent language dominance (no single language >20%)
8. **Splitting** into 80/10/10 train/validation/test sets
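
The near-duplicate step can be sketched with the `datasketch` library; the tokenization scheme and keys below are illustrative assumptions, not the exact pipeline code:

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(code: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from the sample's token set."""
    m = MinHash(num_perm=num_perm)
    # Shingling on whitespace tokens is a simplification; real pipelines
    # often hash character or token n-grams instead.
    for token in set(code.split()):
        m.update(token.encode("utf-8"))
    return m

# threshold=0.8 matches the 80% similarity threshold described above.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []

for i, sample in enumerate(samples):  # `samples`: iterable of dicts with a "code" field
    sig = minhash_of(sample["code"])
    if lsh.query(sig):  # near-duplicate of an already-kept sample
        continue
    lsh.insert(f"sample-{i}", sig)
    kept.append(sample)
```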

### Preprocessing

- **Normalization:** Line endings converted to `\n`, trailing whitespace removed
- **Validation:** Length checks (50–50,000 characters) and line-length heuristics (combined in the sketch below)
- **Exclusion:** Binary files, minified code, generated files (e.g., `_pb2.py`, `.min.js`)
- **Pattern filtering:** Removed `node_modules`, `vendor`, `__pycache__`, and build artifacts
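
Taken together, these rules amount to a filter like the following; the exact path patterns and the 1,000-character line heuristic are assumptions for illustration:

```python
import re

EXCLUDED_PATHS = re.compile(r"(node_modules|vendor|__pycache__|/build/)")
GENERATED_SUFFIXES = ("_pb2.py", ".min.js")

def preprocess(path: str, code: str) -> str | None:
    """Return normalized code, or None if the file should be dropped."""
    if EXCLUDED_PATHS.search(path) or path.endswith(GENERATED_SUFFIXES):
        return None
    # Normalization: unify line endings, strip trailing whitespace.
    code = code.replace("\r\n", "\n").replace("\r", "\n")
    code = "\n".join(line.rstrip() for line in code.split("\n"))
    # Validation: length bounds from the list above.
    if not (50 <= len(code) <= 50_000):
        return None
    # Line-length heuristic: very long single lines suggest minified output.
    if any(len(line) > 1000 for line in code.split("\n")):
        return None
    return code
```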



## Considerations for Using the Data


### Social Impact

This dataset democratizes access to high-quality code training data for researchers and developers who lack expensive compute resources or proprietary datasets, enabling competitive code models to be trained on consumer hardware.


### Discussion of Biases

#### Language Bias

Popular languages (Python, JavaScript, C/C++) are overrepresented, while niche or domain-specific languages (Fortran, COBOL, R) are underrepresented.


#### Domain Bias

Web development and data science code is overrepresented compared to embedded systems, scientific computing, or enterprise applications.

#### Cultural Bias

Variable names, comments, and documentation are English-centric, and code from Western/US developers may dominate due to GitHub's demographics.


#### Recency Bias

Modern coding patterns are favored; legacy code, deprecated APIs, and historical programming styles are underrepresented.


### Other Known Limitations

- **Snapshot in time:** The dataset reflects code patterns from early 2026
- **Quality variance:** Some low-quality or educational code may remain despite filtering
- **License diversity:** Mix of licenses (MIT, Apache, GPL, BSD); users must verify compatibility for commercial use
- **Incomplete attribution:** Some samples from aggregated datasets may lack complete provenance



## Usage


### Load with HuggingFace Datasets

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("OpceanAI/Yuuki-dataset")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Iterate through samples
for sample in train_data:
    code = sample["code"]
    language = sample["language"]
    print(f"Language: {language}")
    print(f"Code: {code[:100]}...")  # First 100 chars
```

### Load with Pandas

```python
import pandas as pd

# Load a specific split (Parquet file, so use read_parquet rather than
# read_json; requires pyarrow and huggingface_hub for the hf:// scheme)
df = pd.read_parquet("hf://datasets/OpceanAI/Yuuki-dataset/train-00000-of-00001.parquet")

# Filter by language
python_code = df[df["language"] == "python"]

# Group by source
by_source = df.groupby("source").size()
print(by_source)
```

### Filter by Language

```python
from datasets import load_dataset

dataset = load_dataset("OpceanAI/Yuuki-dataset", split="train")

# Get all Python samples
python_samples = dataset.filter(lambda x: x["language"] == "python")

# Get all JavaScript/TypeScript samples
js_samples = dataset.filter(lambda x: x["language"] in ["javascript", "typescript"])
```
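
### Stream the Dataset

Since the full dataset is roughly 19 GB, streaming mode can be useful on constrained hardware; a small sketch:

```python
from datasets import load_dataset

# streaming=True avoids downloading the full ~19 GB up front.
stream = load_dataset("OpceanAI/Yuuki-dataset", split="train", streaming=True)

# Inspect the first few samples without materializing the dataset.
for sample in stream.take(5):
    print(sample["language"], len(sample["code"]))
```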



## Citation


If you use this dataset in your research or projects, please cite:

```bibtex
@misc{yuuki-dataset-2026,
  author = {agua_omg},
  title = {Yuuki Code Dataset: Multilingual Code Training Data},
  year = {2026},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/OpceanAI/Yuuki-dataset}},
  doi = {10.57967/hf/7809}
}
```



## Related Projects

| Project | Description |
|---------|-------------|
| Yuuki Models | Code generation models trained on this dataset |
| Yuuki API | Inference API for Yuuki models |
| Yuuki Chat | Web chat interface for Yuuki models |
| yuy CLI | Command-line tool for running Yuuki models |
| yuy-chat | Terminal UI chat interface |
| Yuuki Web | Official landing page |







## License


MIT License

Copyright (c) 2026 Yuuki Project

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

### Individual Code Licenses

While the dataset itself is released under the MIT License, individual code samples within the dataset retain their original licenses. When using code from this dataset:

- Verify the license of the source repository before commercial use
- Respect original attributions and copyright notices
- Common licenses in the dataset: MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause, GPL-2.0, GPL-3.0, LGPL

See the `repo` and `source` fields in each sample for license information about the original source.




## Acknowledgments


This dataset builds upon the incredible work of:

- **BigCode:** The Stack and StarCoder datasets
- **GitHub:** open-source repository hosting
- **HuggingFace:** dataset hosting and infrastructure
- All open-source contributors whose code is included in this dataset

Special thanks to the maintainers of the 78 repositories included in this collection.




*Curated with patience, a phone, and zero budget.*

**Yuuki Project**