---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
- ur
- am
- zh
tags:
- code
- multilingual
- legesher
- transpilation
- tiny-aya-expedition
- language-decoded
pretty_name: Language Decoded Data
size_categories:
- 10K<n<100K
---
# Language Decoded | Multilingual Code Dataset

Multilingual Python code datasets for the Language Decoded project (part of Cohere's Tiny Aya Expedition), investigating whether code's reasoning benefit for language models is language-dependent or structure-dependent.
## Research Question

Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?

Prior work (Aryabumi et al., 2024) showed that English code improves English reasoning by 8.2%, but never tested non-English code. This dataset enables that experiment.
## Dataset Structure

This repo contains multiple experimental conditions as subdirectories:
| Subdirectory | Condition | Description |
|---|---|---|
| `source-python/` | Source | Filtered Python files from The Stack (shared base) |
| `baseline/` | Condition 1 | No code augmentation (control) |
| `english-code/` | Condition 2 | Original English-keyword Python code |
| `multilingual-code-ur/` | Condition 3a | Python transpiled to Urdu keywords via Legesher |
| `multilingual-code-am/` | Condition 3b | Python transpiled to Amharic keywords via Legesher |
| `multilingual-code-zh/` | Condition 3c | Python transpiled to Chinese keywords via Legesher |
| `multilingual-text/` | Condition 4 | Non-code multilingual text (control) |
## Usage

```python
from datasets import load_dataset

# Load a specific condition
ds = load_dataset("Legesher/language-decoded-data", data_dir="multilingual-code-ur")
```
## Transpilation

Code translation is performed using Legesher, which translates Python reserved words (keywords, builtins, exceptions) into target languages while preserving code structure and semantics.
Example (English → Chinese):

```python
# English
for item in range(10):
    if item > 5:
        print(item)
```

```
# Chinese / 中文 (via Legesher)
循环 元素 在 范围(10):
    如果 元素 > 5:
        打印(元素)
```
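The effect of this keyword-level translation can be approximated with Python's standard `tokenize` module: rewrite `NAME` tokens found in a translation table and leave every other token (operators, literals, identifiers) untouched. This is a simplified sketch, not Legesher's actual implementation, and the `EN_TO_ZH` table below covers only the tokens in the example above:

```python
import io
import tokenize

# Illustrative English -> Chinese keyword table (not Legesher's full dictionary)
EN_TO_ZH = {
    "for": "循环", "in": "在", "if": "如果",
    "range": "范围", "print": "打印",
}

def transpile(source: str, table: dict) -> str:
    """Replace NAME tokens listed in `table`, preserving all other tokens."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string in table:
            tok = tok._replace(string=table[tok.string])
        tokens.append(tok)
    return tokenize.untokenize(tokens)
```

Because only token strings change, user-defined identifiers, literals, and program semantics are untouched; round-tripping through the inverse table recovers runnable Python.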
## Source Data
- Base: The Stack — permissively licensed Python subset
- Filtering: Quality-filtered to 50K-100K files
- Transpilation tool: Legesher v0.6.0+
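The exact filtering criteria are not spelled out here; a typical Stack-style heuristic pass looks like the sketch below. The thresholds are illustrative assumptions, not the ones used to build this dataset:

```python
def passes_quality_filter(source: str,
                          max_avg_line_len: int = 100,
                          max_line_len: int = 1000,
                          min_alnum_frac: float = 0.25) -> bool:
    """Heuristic file-level quality filter in the style of The Stack's
    preprocessing. All thresholds are hypothetical defaults."""
    lines = source.splitlines()
    if not lines:
        return False
    # Reject files dominated by very long (likely generated/minified) lines
    if sum(len(line) for line in lines) / len(lines) > max_avg_line_len:
        return False
    if max(len(line) for line in lines) > max_line_len:
        return False
    # Reject files that are mostly non-alphanumeric (e.g. encoded blobs)
    alnum = sum(ch.isalnum() for ch in source)
    return alnum / max(len(source), 1) >= min_alnum_frac
```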
## Evaluation Benchmarks
Models fine-tuned on these conditions are evaluated on:
- XNLI — Cross-lingual natural language inference (15 languages)
- XStoryCloze — Story completion (11 languages)
- TyDi QA — Question answering (11 languages)
- MMLU — Multilingual knowledge
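These benchmarks are commonly evaluated zero-shot by scoring each candidate answer with the model's log-likelihood and taking the argmax. A minimal, model-agnostic sketch, where the `score` callable stands in for a real LM log-likelihood (the function name and signature are illustrative, not from this project's code):

```python
from typing import Callable, Sequence

def pick_answer(context: str,
                candidates: Sequence[str],
                score: Callable[[str, str], float]) -> int:
    """Return the index of the highest-scoring candidate.

    `score(context, candidate)` stands in for a language model's
    log-likelihood of the candidate continuation given the context.
    """
    scores = [score(context, c) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)
```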
## Related Resources

- Models: `Legesher/language-decoded-lora` — LoRA adapters trained on these conditions
- Community code: `Legesher/language-decoded-community` — human-written native-language code
- Experiments: `Legesher/language-decoded-experiments` — training logs and eval results
- Paper: Coming soon
## Citation

```bibtex
@misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
  author={Madison Edgar and Saad Bazaz and Rafay Mustafa and Sarah Jawaid and Rashik Shahjahan and Khojasteh Mirza and Sohaib Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/Legesher/language-decoded-data}
}
```
## License

Apache 2.0