---
language:
- zh
license: apache-2.0
task_categories:
- text-generation
tags:
- code
- multilingual
- legesher
- tiny-aya-expedition
- language-decoded
- native-code
pretty_name: Language Decoded — Community Code
size_categories:
- 1K<n<10K
configs:
- config_name: zh
  data_files:
  - split: train
    path: data/zh/train-*.parquet
  - split: validation
    path: data/zh/validation-*.parquet
dataset_info:
- config_name: zh
  features:
  - name: filename
    dtype: string
  - name: content
    dtype: string
  - name: extension
    dtype: string
  - name: source
    dtype: string
  - name: license
    dtype: string
  - name: quality_tier
    dtype: string
  - name: sha256
    dtype: string
  - name: byte_size
    dtype: int64
  - name: total_lines
    dtype: int64
  - name: cjk_ratio
    dtype: float64
  - name: has_cjk
    dtype: bool
  splits:
  - name: train
    num_bytes: 23921213
    num_examples: 3137
  - name: validation
    num_bytes: 2506431
    num_examples: 349
  download_size: 10076444
  dataset_size: 26427644
---
# Language Decoded — Community Code
Natively authored multilingual code for the Language Decoded project (part of Cohere's Tiny Aya Expedition). This dataset contains code written by developers in non-English programming languages and code with significant CJK content — not mechanically transpiled from English.
This data serves as a component of Condition 3 ("Mixed Native Sources") and Condition 4 ("Strictly Native Code") in the Language Decoded experiment, which tests whether native-language code improves multilingual reasoning beyond keyword swapping alone.
## Available Configs

| Config | Language | Files | Description |
|---|---|---|---|
| `zh` | Chinese | 3,486 | Natively Chinese-authored code from 5 sources |
## Schema

| Column | Type | Description |
|---|---|---|
| `filename` | string | Unique file identifier |
| `content` | string | Full file content |
| `extension` | string | File extension (e.g., `.py`, `.java`, `.wy`, `.qi`) |
| `source` | string | Origin dataset or project |
| `license` | string | SPDX license identifier or `UNKNOWN` |
| `quality_tier` | string | Quality tier: A (highest), B, C, D |
| `sha256` | string | SHA-256 hash of file content, used for deduplication |
| `byte_size` | int64 | File size in bytes |
| `total_lines` | int64 | Number of lines in the file |
| `cjk_ratio` | float64 | Ratio of CJK characters to total non-whitespace characters |
| `has_cjk` | bool | Whether the file contains any CJK characters |
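The `cjk_ratio` and `has_cjk` columns can be recomputed from `content`. A minimal sketch, assuming the CJK check covers the main Unified Ideograph and CJK punctuation blocks (the exact Unicode ranges used to build the dataset are an assumption here):

```python
# Hedged sketch: one plausible derivation of `cjk_ratio` and `has_cjk`.
# The Unicode ranges below are assumptions, not the dataset's documented rule.
CJK_RANGES = [
    (0x4E00, 0x9FFF),   # CJK Unified Ideographs
    (0x3400, 0x4DBF),   # CJK Extension A
    (0x3000, 0x303F),   # CJK Symbols and Punctuation
    (0xFF00, 0xFFEF),   # Halfwidth and Fullwidth Forms
]

def is_cjk(ch: str) -> bool:
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in CJK_RANGES)

def cjk_stats(content: str) -> tuple[float, bool]:
    # Ratio of CJK characters to all non-whitespace characters.
    chars = [c for c in content if not c.isspace()]
    if not chars:
        return 0.0, False
    cjk = sum(1 for c in chars if is_cjk(c))
    return cjk / len(chars), cjk > 0
```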
## Chinese (zh) Source Breakdown

| Source | Files | Extensions | Description |
|---|---|---|---|
| `thestack` | 1,948 | `.py`, `.js`, `.java`, … | Code from The Stack with CJK in comments, strings, or identifiers |
| `program_in_chinese` | 703 | `.java`, `.js`, `.ts`, … | Program in Chinese — code with Chinese identifiers |
| `qi` | 239 | `.qi` | Qi — Chinese-syntax programming language |
| `mulan` | 166 | `.ul` | Mulan — Chinese programming language |
| `wenyan` | 81 | `.wy` | Wenyan — Classical Chinese programming language (20K+ GitHub stars) |
## Quality Tier Distribution
| Tier | Count | Description |
|---|---|---|
| A | 778 | High quality, rich CJK |
| B | 1,158 | Good quality |
| C | 789 | Moderate quality |
| D | 412 | Lower quality, sparse CJK |
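As noted under Limitations, tiers are assigned heuristically from CJK content ratio, file size, and structural indicators. The actual rule is not published; purely as an illustration of the approach, with invented thresholds:

```python
def assign_tier(cjk_ratio: float, byte_size: int) -> str:
    """Illustrative sketch only: thresholds are invented, and the real
    heuristic also uses structural indicators not modeled here."""
    if cjk_ratio >= 0.15 and byte_size >= 1024:
        return "A"  # high quality, rich CJK
    if cjk_ratio >= 0.05:
        return "B"
    if cjk_ratio > 0.0:
        return "C"
    return "D"      # lower quality, sparse CJK
```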
## File Type Distribution

| Extension | Count | Extension | Count |
|---|---|---|---|
| `.py` | 2,003 | `.ul` | 166 |
| `.java` | 288 | `.wy` | 81 |
| `.qi` | 239 | `.ts` | 59 |
| `.js` | 205 | `.c` | 36 |
| Others | 59 | | |
## Usage

```python
from datasets import load_dataset

# Load Chinese native code
ds = load_dataset("legesher/language-decoded-community", "zh")
train = ds["train"]        # 3,137 files
val = ds["validation"]     # 349 files

# Filter by source
wenyan = train.filter(lambda x: x["source"] == "wenyan")

# Filter by quality tier
high_quality = train.filter(lambda x: x["quality_tier"] in ("A", "B"))
```
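Rows can also be checked against the `sha256` column. A small sketch, assuming the hash was taken over the UTF-8 bytes of `content` (plausible but not documented; the example row is hypothetical):

```python
import hashlib

def verify_row(row: dict) -> bool:
    # Assumption: sha256 was computed over the UTF-8 bytes of the file content.
    digest = hashlib.sha256(row["content"].encode("utf-8")).hexdigest()
    return digest == row["sha256"]

# Hypothetical row for illustration
content = "print('你好')\n"
row = {"content": content,
       "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest()}
assert verify_row(row)
```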
## Relationship to Other Datasets

- `legesher/language-decoded-data`: The main experiment dataset, with transpiled code (conditions 1–2), blended datasets (condition 3), and strictly native code (condition 4). Conditions 3 and 4 use native code from this repo.
- This repo stores the raw native code with full metadata; the blended and native training datasets live in `legesher/language-decoded-data`.
## Limitations
- Chinese only: Currently limited to Chinese-language code. Native code for Spanish and Urdu is not yet available.
- License uncertainty: Some files (particularly from `thestack`) have `UNKNOWN` licenses. These were included because they appeared in The Stack's permissive-license subset, but individual file licenses could not always be verified.
- Quality variation: Quality tiers are assigned heuristically based on CJK content ratio, file size, and structural indicators. Tier D files may contain minimal native-language content.
- Non-Python files included: Unlike the transpiled datasets (conditions 1–2), this dataset includes code in multiple programming languages (Python, Java, JavaScript, Wenyan, Qi, Mulan, etc.), reflecting the reality of native-language programming ecosystems.
- CJK-heavy bias: Files were selected partly based on CJK character presence, which may over-represent code with Chinese comments/strings rather than code with Chinese-language syntax.
## Citation

```bibtex
@misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
  author={Madison Edgar and Saad Ahmed Bazaz and Tom Sherborne and Rashik Shahjahan and Khojasteh Mirza and Sarah Jawaid and Rafay Mustafa and Sohaib Ahmed Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-community}
}
```
## License
Apache 2.0