---
language:
- en
- vi
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- image-to-image
pretty_name: NomGenie - Font Diffusion for Sino-Nom Language
dataset_info:
  features:
  - name: character
    dtype: string
  - name: style
    dtype: string
  - name: font
    dtype: string
  - name: content_image
    dtype: image
  - name: target_image
    dtype: image
  - name: content_hash
    dtype: string
  - name: target_hash
    dtype: string
  splits:
  - name: train_original
    num_bytes: 583130879
    num_examples: 41245
  - name: train
    num_bytes: 21425838
    num_examples: 1732
  - name: val
    num_bytes: 1090108
    num_examples: 86
  - name: handwritten_original
    num_bytes: 512564010
    num_examples: 40327
  download_size: 19389645399
  dataset_size: 1118210835
configs:
- config_name: default
  data_files:
  - split: train_original
    path: data/train_original-*
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: handwritten_original
    path: data/handwritten_original-*
tags:
- font
- diffusion
- deep-learning
- computer-vision
---
# NomGenie: Font Diffusion for Sino-Nom Language
NomGenie is a specialized image-to-image dataset designed for font generation and style transfer within the Sino-Nom (Hán-Nôm) script system. This dataset facilitates the training of deep learning models—particularly Diffusion Models and GANs—to preserve the historical and structural integrity of Vietnamese Nom characters while applying diverse typographic styles.
## Dataset Description
The dataset consists of paired images: a content image (representing the skeletal or standard structure of a character) and a target image (representing the character rendered in a specific artistic or historical font style).
### Key Features
- `character`: The specific Sino-Nom character represented.
- `style` / `font`: Metadata identifying the aesthetic transformation applied.
- `content_image`: The source glyph used as the structural reference.
- `target_image`: The ground-truth stylized glyph for model supervision.
- Hashing: `content_hash` and `target_hash` are provided to ensure data integrity and assist in deduplication.
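The hash fields make deduplication straightforward before training. A minimal sketch, assuming each example is a dict carrying the `content_hash` and `target_hash` strings; the records below are illustrative placeholders, not rows from the dataset:

```python
# Drop examples whose (content_hash, target_hash) pair was already seen.
def deduplicate(examples):
    seen = set()
    unique = []
    for ex in examples:
        key = (ex["content_hash"], ex["target_hash"])
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique

# Dummy records standing in for dataset rows (which also carry the images).
examples = [
    {"character": "喃", "content_hash": "a1", "target_hash": "b1"},
    {"character": "喃", "content_hash": "a1", "target_hash": "b1"},  # duplicate
    {"character": "字", "content_hash": "a2", "target_hash": "b2"},
]
print(len(deduplicate(examples)))  # 2
```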
## Dataset Structure
### Data Splits
The dataset is organized into four splits to support various training stages (example counts and sizes follow the dataset metadata above):

| Split | Examples | Size | Description |
|---|---|---|---|
| `train_original` | 41,245 | 583.13 MB | The full original training set. |
| `train` | 1,732 | 21.43 MB | A curated subset optimized for standard training. |
| `val` | 86 | 1.09 MB | Validation set for hyperparameter tuning and evaluation. |
| `handwritten_original` | 40,327 | 512.56 MB | Handwritten samples in their original form. |
## Quick Start
To use this dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("path/to/NomGenie")

# Access a training sample
sample = dataset['train'][0]

# In a notebook, display() renders the PIL images inline
display(sample['content_image'])
display(sample['target_image'])
```
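Because each row carries its `style` and `font` labels, per-style subsets are easy to build, for example to condition a diffusion model on a single font at a time. A stdlib-only sketch over dummy rows; the field names follow the schema above, while the label values are purely illustrative:

```python
from collections import defaultdict

# Group rows by their font label. Rows here are dummy stand-ins for
# dataset examples, which would also carry the image fields.
def group_by_font(rows):
    groups = defaultdict(list)
    for row in rows:
        groups[row["font"]].append(row)
    return dict(groups)

rows = [
    {"character": "喃", "style": "calligraphic", "font": "FontA"},
    {"character": "字", "style": "calligraphic", "font": "FontA"},
    {"character": "喃", "style": "handwritten", "font": "FontB"},
]
groups = group_by_font(rows)
print({font: len(g) for font, g in groups.items()})
# {'FontA': 2, 'FontB': 1}
```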
## Technical Details
- Task Category: image-to-image
- Languages: Vietnamese (vi), English (en)
- License: Apache 2.0
- Primary Use Case: Generative AI for cultural heritage preservation and digital typography.