---
language:
  - en
  - vi
license: apache-2.0
size_categories:
  - 10K<n<100K
task_categories:
  - image-to-image
pretty_name: NomGenie - Font Diffusion for Sino-Nom Language
dataset_info:
  features:
    - name: character
      dtype: string
    - name: style
      dtype: string
    - name: font
      dtype: string
    - name: content_image
      dtype: image
    - name: target_image
      dtype: image
    - name: content_hash
      dtype: string
    - name: target_hash
      dtype: string
  splits:
    - name: train
      num_bytes: 86791289
      num_examples: 5676
    - name: val
      num_bytes: 4978138
      num_examples: 342
    - name: train_original
      num_bytes: 133370914
      num_examples: 8805
  download_size: 813661882
  dataset_size: 225140341
configs:
  - config_name: default
    data_files:
      - split: train_original
        path: data/train_original-*
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
tags:
  - font
  - diffusion
  - deep-learning
  - computer-vision
---

# NomGenie: Font Diffusion for Sino-Nom Language

NomGenie is a specialized image-to-image dataset designed for font generation and style transfer within the Sino-Nom (Hán-Nôm) script system. It supports training deep learning models (particularly diffusion models and GANs) that preserve the historical and structural integrity of Vietnamese Nom characters while applying diverse typographic styles.

## Dataset Description

The dataset consists of paired images: a content image (representing the skeletal or standard structure of a character) and a target image (representing the character rendered in a specific artistic or historical font style).

### Key Features

- `character`: The specific Sino-Nom character represented.
- `style` / `font`: Metadata identifying the typographic transformation applied.
- `content_image`: The source glyph used as the structural reference.
- `target_image`: The ground-truth stylized glyph used for model supervision.
- `content_hash` / `target_hash`: Hashes provided to ensure data integrity and assist in deduplication.
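Because the hashing scheme itself is not documented here, the safest way to use these fields is as opaque identity keys. A minimal deduplication sketch (the hash values below are hypothetical stand-ins for real records):

```python
def deduplicate_by_hash(examples, hash_key="target_hash"):
    """Keep only the first example seen for each hash value."""
    seen = set()
    keep = []
    for ex in examples:
        h = ex[hash_key]
        if h not in seen:
            seen.add(h)
            keep.append(ex)
    return keep

# Toy records standing in for dataset rows (hypothetical hash values)
rows = [
    {"character": "字", "target_hash": "a1"},
    {"character": "字", "target_hash": "a1"},  # duplicate target image
    {"character": "喃", "target_hash": "b2"},
]
unique = deduplicate_by_hash(rows)
# only the two distinct target hashes remain
```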

## Dataset Structure

### Data Splits

The dataset is organized into three distinct splits to support various training stages:

| Split | Examples | Size | Description |
| --- | --- | --- | --- |
| `train_original` | 8,805 | 133.37 MB | The full original training set. |
| `train` | 5,676 | 86.79 MB | A curated subset optimized for standard training. |
| `val` | 342 | 4.98 MB | Validation set for hyperparameter tuning and evaluation. |
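The megabyte figures above follow directly from the per-split byte counts in the metadata block, using decimal megabytes:

```python
# Byte counts taken from the dataset_info splits in the card metadata
SPLIT_BYTES = {
    "train_original": 133_370_914,
    "train": 86_791_289,
    "val": 4_978_138,
}

def to_mb(num_bytes: int) -> float:
    """Convert a byte count to decimal megabytes, rounded to two places."""
    return round(num_bytes / 1_000_000, 2)

sizes_mb = {name: to_mb(n) for name, n in SPLIT_BYTES.items()}
# e.g. {'train_original': 133.37, 'train': 86.79, 'val': 4.98}
```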

## Quick Start

To use this dataset with the Hugging Face datasets library:

```python
from datasets import load_dataset

# Load the dataset (replace with the actual repository path)
dataset = load_dataset("path/to/NomGenie")

# Access a training sample
sample = dataset["train"][0]

# In a notebook, the decoded PIL images can be displayed directly
display(sample["content_image"])
display(sample["target_image"])
```
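For training, each content/target pair is typically converted to aligned arrays of the same shape and pixel range. A minimal sketch using NumPy and Pillow (assuming the `image` features decode to PIL images; the 128-pixel size and grayscale conversion are illustrative choices, not dataset requirements):

```python
import numpy as np
from PIL import Image

def pair_to_arrays(content_img, target_img, size=(128, 128)):
    """Resize a content/target glyph pair and scale pixels to [0, 1]."""
    def prep(img):
        img = img.convert("L").resize(size)  # grayscale glyphs
        return np.asarray(img, dtype=np.float32) / 255.0
    return prep(content_img), prep(target_img)

# Synthetic stand-ins for one dataset pair
content = Image.new("L", (64, 64), color=255)  # all-white image
target = Image.new("L", (64, 64), color=0)     # all-black image
x, y = pair_to_arrays(content, target)
# x and y now share the shape (128, 128), ready to batch
```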

## Technical Details

- Task Category: image-to-image
- Languages: Vietnamese (vi), English (en)
- License: Apache 2.0
- Primary Use Case: Generative AI for cultural heritage preservation and digital typography.