Datasets:
docs: add citation, limitations section, update condition references
#5
by madiedgar - opened
README.md CHANGED
@@ -1,108 +1,108 @@
---
language:
- zh
license: apache-2.0
task_categories:
- text-generation
tags:
- code
- multilingual
- legesher
- tiny-aya-expedition
- language-decoded
- native-code
pretty_name: Language Decoded — Community Code
size_categories:
- 1K<n<10K
configs:
- config_name: zh
  data_files:
  - split: train
    path: data/zh/train-*.parquet
  - split: validation
    path: data/zh/validation-*.parquet
dataset_info:
- config_name: zh
  features:
  - name: filename
    dtype: string
  - name: content
    dtype: string
  - name: extension
    dtype: string
  - name: source
    dtype: string
  - name: license
    dtype: string
  - name: quality_tier
    dtype: string
  - name: sha256
    dtype: string
  - name: byte_size
    dtype: int64
  - name: total_lines
    dtype: int64
  - name: cjk_ratio
    dtype: float64
  - name: has_cjk
    dtype: bool
  splits:
  - name: train
    num_bytes: 23921213
    num_examples: 3137
  - name: validation
    num_bytes: 2506431
    num_examples: 349
  download_size: 10076444
  dataset_size: 26427644
---

# Language Decoded — Community Code

Natively authored multilingual code for the **Language Decoded** project (part of [Cohere's Tiny Aya Expedition](https://aya.for.ai)). This dataset contains code written by developers in non-English programming languages and code with significant CJK content — **not** mechanically transpiled from English.

This data serves as a component of **Condition 3** ("Mixed Native Sources") and **Condition 4** ("Strictly Native Code") in the Language Decoded experiment, which tests whether native-language code improves multilingual reasoning beyond keyword swapping alone.

## Available Configs

| Config | Language | Files | Description                                   |
| ------ | -------- | ----- | --------------------------------------------- |
| `zh`   | Chinese  | 3,486 | Natively Chinese-authored code from 5 sources |
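
The `zh` config resolves its splits through the `data_files` globs declared in the YAML front matter. A minimal sketch of that routing, using only the standard library (the shard filenames below are hypothetical examples, not actual filenames from the repo):

```python
from fnmatch import fnmatch

# Split patterns copied from the `data_files` globs in the front matter.
PATTERNS = {
    "train": "data/zh/train-*.parquet",
    "validation": "data/zh/validation-*.parquet",
}

def split_for(path):
    """Return the split a shard path belongs to, or None if no pattern matches."""
    for split, pattern in PATTERNS.items():
        if fnmatch(path, pattern):
            return split
    return None

# Hypothetical shard names, for illustration only
split_for("data/zh/train-00000-of-00003.parquet")       # "train"
split_for("data/zh/validation-00000-of-00001.parquet")  # "validation"
```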

## Schema

| Column         | Type   | Description                                           |
| -------------- | ------ | ----------------------------------------------------- |
| `filename`     | string | Unique file identifier                                |
| `content`      | string | Full file content                                     |
| `extension`    | string | File extension (e.g., `.py`, `.java`, `.wy`, `.qi`)   |
| `source`       | string | Origin dataset or project                             |
| `license`      | string | SPDX license identifier or `UNKNOWN`                  |
| `quality_tier` | string | Quality tier: A (highest), B, C, D                    |
| `sha256`       | string | SHA-256 hash of file content for deduplication        |
| `byte_size`    | int64  | File size in bytes                                    |
| `total_lines`  | int64  | Number of lines in the file                           |
| `cjk_ratio`    | float  | Ratio of CJK characters to total non-whitespace chars |
| `has_cjk`      | bool   | Whether the file contains any CJK characters          |
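
The derived columns (`sha256`, `byte_size`, `total_lines`, `cjk_ratio`, `has_cjk`) can be recomputed from `content` alone. A minimal sketch; note the Unicode ranges treated as CJK here (Unified Ideographs plus Extension A) are an assumption, not necessarily the exact ranges used to build the dataset:

```python
import hashlib

def is_cjk(ch: str) -> bool:
    # Assumed approximation: CJK Unified Ideographs (U+4E00-U+9FFF)
    # plus Extension A (U+3400-U+4DBF).
    cp = ord(ch)
    return 0x4E00 <= cp <= 0x9FFF or 0x3400 <= cp <= 0x4DBF

def derive_metadata(content: str) -> dict:
    """Recompute the derived schema columns from a file's content."""
    data = content.encode("utf-8")
    non_ws = [c for c in content if not c.isspace()]
    cjk_count = sum(1 for c in non_ws if is_cjk(c))
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "byte_size": len(data),
        "total_lines": len(content.splitlines()),
        "cjk_ratio": cjk_count / len(non_ws) if non_ws else 0.0,
        "has_cjk": cjk_count > 0,
    }

meta = derive_metadata("# 计算总和\ndef total(xs):\n    return sum(xs)\n")
```

Because `sha256` is computed over the raw content bytes, two rows with the same hash are exact duplicates, which is how the column supports deduplication.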

## Chinese (`zh`) Source Breakdown

| Source               | Files | Extensions         | Description                                                                                                  |
| -------------------- | ----- | ------------------ | ------------------------------------------------------------------------------------------------------------ |
| `thestack`           | 1,948 | .py, .js, .java, … | Code from The Stack with CJK in comments, strings, identifiers                                               |
| `program_in_chinese` | 703   | .java, .js, .ts, … | [Program in Chinese](https://github.com/program-in-chinese) — code with Chinese identifiers                  |
| `qi`                 | 239   | .qi                | [Qi](https://github.com/nicevoice/qi) — Chinese-syntax programming language                                  |
| `mulan`              | 166   | .ul                | [Mulan](https://github.com/MulanRevive/mulan-rework) — Chinese programming language                          |
| `wenyan`             | 81    | .wy                | [Wenyan](https://github.com/wenyan-lang/wenyan) — Classical Chinese programming language (20K+ GitHub stars) |

### Quality Tier Distribution

| Tier | Count | Description               |
| ---- | ----- | ------------------------- |
| A    | 778   | High quality, rich CJK    |
| B    | 1,158 | Good quality              |
| C    | 789   | Moderate quality          |
| D    | 412   | Lower quality, sparse CJK |
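
Tiers are ordinal, so downstream filtering typically keeps A/B only, as the `train.filter(lambda x: x["quality_tier"] in ("A", "B"))` call in the usage section does on a `datasets.Dataset`. A plain-Python sketch of the same selection over hypothetical toy rows (the real rows come from the `zh` config):

```python
# Hypothetical rows standing in for dataset records.
rows = [
    {"filename": "a.py", "quality_tier": "A", "cjk_ratio": 0.31},
    {"filename": "b.wy", "quality_tier": "D", "cjk_ratio": 0.02},
    {"filename": "c.java", "quality_tier": "B", "cjk_ratio": 0.18},
]

def keep_tiers(records, tiers=("A", "B")):
    """Keep only records whose quality tier is in `tiers`."""
    return [r for r in records if r["quality_tier"] in tiers]

high_quality = keep_tiers(rows)  # drops the tier-D record
```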

### File Type Distribution

@@ -133,8 +133,28 @@ high_quality = train.filter(lambda x: x["quality_tier"] in ("A", "B"))

## Relationship to Other Datasets

- **[legesher/language-decoded-data](https://huggingface.co/datasets/legesher/language-decoded-data)**: The main experiment dataset with transpiled code (conditions 1–2), blended datasets (condition 3), and strictly native code (condition 4). Conditions 3 and 4 use native code from this repo.
- This repo stores the **raw native code** with full metadata. The blended and native training datasets live in `language-decoded-data`.

## Limitations

- **Chinese only**: Currently limited to Chinese-language code. Native code for Spanish and Urdu is not yet available.
- **License uncertainty**: Some files (particularly from `thestack`) have `UNKNOWN` licenses. These were included because they appeared in The Stack's permissive-license subset, but individual file licenses could not always be verified.
- **Quality variation**: Quality tiers are assigned heuristically based on CJK content ratio, file size, and structural indicators. Tier D files may contain minimal native-language content.
- **Non-Python files included**: Unlike the transpiled datasets (conditions 1–2), this dataset includes code in multiple programming languages (Python, Java, JavaScript, Wenyan, Qi, Mulan, etc.), reflecting the reality of native-language programming ecosystems.
- **CJK-heavy bias**: Files were selected partly based on CJK character presence, which may over-represent code with Chinese comments/strings rather than code with Chinese-language syntax.

## Citation

```bibtex
@misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
  author={Madison Edgar and Saad Ahmed Bazaz and Tom Sherborne and Rashik Shahjahan and Khojasteh Mirza and Sarah Jawaid and Rafay Mustafa and Sohaib Ahmed Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-community}
}
```

## License