Update README.md

README.md CHANGED

@@ -36,4 +36,68 @@ configs:
      path: data/train-*
    - split: val
      path: data/val-*
license: apache-2.0
task_categories:
- image-to-image
language:
- en
- vi
tags:
- font
- diffusion
- deep-learning
- computer-vision
pretty_name: NomGenie - Font Diffusion for Sino-Nom Language
size_categories:
- 10K<n<100K
---

# NomGenie: Font Diffusion for Sino-Nom Language

**NomGenie** is a specialized image-to-image dataset for font generation and style transfer within the **Sino-Nom (Hán-Nôm)** script system. It is designed for training deep learning models, particularly diffusion models and GANs, to preserve the historical and structural integrity of Vietnamese Nom characters while applying diverse typographic styles.

## Dataset Description

The dataset consists of paired images: a **content image** (representing the skeletal or standard structure of a character) and a **target image** (representing the character rendered in a specific artistic or historical font style).

### Key Features

* **character**: The specific Sino-Nom character represented.
* **style/font**: Metadata identifying the aesthetic transformation applied.
* **content_image**: The source glyph used as the structural reference.
* **target_image**: The ground truth stylized glyph for model supervision.
* **Hashing**: `content_hash` and `target_hash` are provided to ensure data integrity and assist in deduplication (see the sketch below).
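
For illustration, here is a minimal sketch of using the hash fields to deduplicate a split. The column names `content_hash` and `target_hash` follow the feature list above, and `path/to/NomGenie` is the same placeholder path used in the Quick Start section:

```python
from datasets import load_dataset

# Placeholder path; replace with the actual repository id.
train = load_dataset("path/to/NomGenie", split="train")

seen = set()

def first_occurrence(example):
    """Keep only the first row for each (content_hash, target_hash) pair."""
    key = (example["content_hash"], example["target_hash"])
    if key in seen:
        return False
    seen.add(key)
    return True

# filter() runs in a single process by default; the shared `seen` set
# would not be safe with num_proc > 1.
deduplicated = train.filter(first_occurrence)
print(f"{len(train)} -> {len(deduplicated)} examples after deduplication")
```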

## Dataset Structure

### Data Splits

The dataset is organized into three distinct splits to support various training stages:

| Split | Examples | Size | Description |
| :--- | :--- | :--- | :--- |
| **train_original** | 8,235 | 124.79 MB | The full original training set. |
| **train** | 5,172 | 79.72 MB | A curated subset optimized for standard training. |
| **val** | 318 | 4.48 MB | Validation set for hyperparameter tuning and evaluation. |
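
Each split can also be loaded by name. A short sketch, again using the placeholder repository path and the split names from the table above:

```python
from datasets import load_dataset

# "path/to/NomGenie" is a placeholder; substitute the actual repository id.
train_original = load_dataset("path/to/NomGenie", split="train_original")
train = load_dataset("path/to/NomGenie", split="train")
val = load_dataset("path/to/NomGenie", split="val")

# Per the table above: 8235, 5172, and 318 examples respectively.
print(len(train_original), len(train), len(val))
```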

## Quick Start

To use this dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset ("path/to/NomGenie" is a placeholder for the repository id)
dataset = load_dataset("path/to/NomGenie")

# Access a training sample
sample = dataset['train'][0]

# display() is available in Jupyter/IPython notebooks;
# in a plain script, use sample['content_image'].show() instead
display(sample['content_image'])
display(sample['target_image'])
```
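
As a sketch of how the content/target pairs could feed an image-to-image model, the snippet below batches them into paired tensors with PyTorch and `torchvision`. The framework choice, 128x128 resolution, grayscale conversion, and normalization are assumptions of this example, not properties of the dataset:

```python
import torch
from datasets import load_dataset
from torchvision import transforms

# Placeholder path and arbitrary preprocessing choices (128x128, grayscale, [-1, 1] range).
dataset = load_dataset("path/to/NomGenie", split="train")

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])

def collate(batch):
    """Stack content/target glyph pairs into two tensors of shape (B, 1, 128, 128)."""
    content = torch.stack([to_tensor(ex["content_image"]) for ex in batch])
    target = torch.stack([to_tensor(ex["target_image"]) for ex in batch])
    return content, target

loader = torch.utils.data.DataLoader(dataset, batch_size=16, collate_fn=collate)
content, target = next(iter(loader))
print(content.shape, target.shape)
```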

## Technical Details

- Task Category: image-to-image
- Languages: Vietnamese (vi), English (en)
- License: Apache 2.0
- Primary Use Case: Generative AI for cultural heritage preservation and digital typography.