
Language Decoded — Community Code

A dataset of Python code written by native speakers using Legesher's translated keywords in their own languages. Unlike the transpiled datasets in language-decoded-data, this code is human-authored: written naturally by people thinking and coding in their native language.

Purpose

This dataset serves two goals:

  1. Training signal: Human-written native code may carry different structural patterns than mechanically transpiled code
  2. Validation: Confirms that Legesher's translations produce natural, usable keywords that native speakers actually want to use

Structure

Organized by language, with exercise tiers:

language-decoded-community/
├── zh/          (Chinese)
│   ├── tier-1/  (basic exercises)
│   └── tier-2/  (intermediate exercises)
├── am/          (Amharic)
│   ├── tier-1/
│   └── tier-2/
├── ur/          (Urdu)
│   ├── tier-1/
│   └── tier-2/
└── ...
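The layout above can be inspected programmatically. A minimal sketch: the `list_repo_files` call from `huggingface_hub` is left commented out so the example runs offline, and the sample paths below are hypothetical, mirroring the tree shown above.

```python
from collections import defaultdict

# from huggingface_hub import HfApi
# paths = HfApi().list_repo_files(
#     "Legesher/language-decoded-community", repo_type="dataset"
# )

# Hypothetical file listing, following the language/tier/filename layout
paths = [
    "zh/tier-1/hello.py",
    "zh/tier-2/classes.py",
    "am/tier-1/loops.py",
    "ur/tier-1/hello.py",
]

def group_by_language(paths):
    """Group submission paths into {language: {tier: [filenames]}}."""
    tree = defaultdict(lambda: defaultdict(list))
    for path in paths:
        parts = path.split("/")
        if len(parts) == 3:  # language/tier/filename
            lang, tier, name = parts
            tree[lang][tier].append(name)
    return tree

tree = group_by_language(paths)
for lang in sorted(tree):
    for tier in sorted(tree[lang]):
        print(f"{lang}/{tier}: {len(tree[lang][tier])} file(s)")
```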

How to Contribute

We welcome code submissions from native speakers of any language supported by Legesher.

Submission Requirements

  1. Valid Python — your code must parse without syntax errors
  2. Uses Legesher keywords — write using your language's translated keywords (see supported languages)
  3. Include a metadata header at the top of each file:
# language: zh
# exercise: tier-1
# description: A brief description of what this code does
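The metadata header can be checked mechanically before submitting. A minimal sketch, assuming the header is exactly the three comment lines shown above; the field names come from the example, while the regex and function name are our own. (Checking that the code itself parses would additionally require Legesher's tooling, since translated keywords are not standard Python.)

```python
import re

# Required header fields, in the order shown in the example above
HEADER_PATTERN = re.compile(
    r"^# language: (?P<language>\S+)\n"
    r"# exercise: (?P<exercise>tier-\d+)\n"
    r"# description: (?P<description>.+)"
)

def parse_header(source):
    """Return the metadata dict if the file starts with a valid header, else None."""
    match = HEADER_PATTERN.match(source)
    return match.groupdict() if match else None

sample = (
    "# language: zh\n"
    "# exercise: tier-1\n"
    "# description: prints a greeting\n"
    "print('hello')\n"
)
print(parse_header(sample))
# A missing or malformed header yields None
print(parse_header("print('hi')"))
```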

How to Submit

Option A: HuggingFace web UI

  1. Navigate to the appropriate language folder
  2. Click "Add file" → "Upload files"
  3. Upload your .py file(s)
  4. Add a commit message describing your submission

Option B: Using the CLI

pip install huggingface_hub
huggingface-cli login
huggingface-cli upload Legesher/language-decoded-community ./my_code.py zh/tier-1/my_code.py --repo-type dataset
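The same upload can be scripted from Python with `huggingface_hub`'s `HfApi.upload_file`. A minimal sketch: the destination-path helper is our own convention mirroring the CLI call above, and the actual upload (which requires a login token) is left commented out so the example runs offline.

```python
# from huggingface_hub import HfApi  # upload call left commented; requires a token

def path_in_repo(language, tier, filename):
    """Build the destination path used in the CLI example, e.g. zh/tier-1/my_code.py."""
    return f"{language}/{tier}/{filename}"

dest = path_in_repo("zh", "tier-1", "my_code.py")
print(dest)  # zh/tier-1/my_code.py

# HfApi().upload_file(
#     path_or_fileobj="./my_code.py",
#     path_in_repo=dest,
#     repo_id="Legesher/language-decoded-community",
#     repo_type="dataset",
#     commit_message="Add zh tier-1 submission",
# )
```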

Exercise Tiers

Tier     Difficulty     Examples
Tier 1   Basic          Hello world, loops, conditionals, simple functions
Tier 2   Intermediate   Data structures, file I/O, classes, error handling

Usage

from datasets import load_dataset

# Load all submissions
ds = load_dataset("Legesher/language-decoded-community")

# Load a specific language
ds = load_dataset("Legesher/language-decoded-community", data_dir="zh")

License

Apache 2.0 — all contributions are released under this license.