---
dataset_info:
  features:
  - name: image_id
    dtype: int64
  - name: image
    dtype: image
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: objects
    sequence:
    - name: id
      dtype: int64
    - name: area
      dtype: int64
    - name: bbox
      sequence: float32
      length: 4
    - name: category
      dtype:
        class_label:
          names:
            '0': Textline
            '1': Heading
            '2': Picture
            '3': Caption
            '4': Columns
  - name: ground_truth
    struct:
    - name: gt_parse
      struct:
      - name: headline
        sequence: string
      - name: textline
        sequence: string
  splits:
  - name: train
    num_bytes: 84308039804.908
    num_examples: 58738
  download_size: 93323036554
  dataset_size: 84308039804.908
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- image-classification
- object-detection
- visual-question-answering
language:
- dv
tags:
- dhivehi
- thaana
- ocr
- vqa
- bbox
- textline
pretty_name: dv_page_annotation
size_categories:
- 10K<n<100K
---
# 📦 Dhivehi Synthetic Document Layout + Textline Dataset
This dataset contains **synthetically generated** image-document pairs with detailed layout annotations and ground-truth Dhivehi text extractions.
It’s designed for document layout analysis, visual document understanding, OCR fine-tuning, and related tasks specifically for Dhivehi script.
***Note: images in this version are compressed.***

***Raw version repository: [alakxender/od-syn-page-annotations](https://huggingface.co/datasets/alakxender/od-syn-page-annotations)***
## 📋 Dataset Summary
- **Total Examples**: ~58,738
- **Image Content**: Synthetic Dhivehi documents generated to simulate real-world layouts, including headlines, textlines, pictures, and captions.
- **Annotations**:
- Bounding boxes (`bbox`)
- Object areas (`area`)
- Object categories (`category`)
- Ground-truth parsed text, split into:
- `headline` (major headings)
- `textline` (paragraph or text body lines)
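For orientation, a single record has roughly the following shape (all values below are illustrative, not taken from the dataset; the real `image` field is a PIL image, shown here as a placeholder string):

```python
# Illustrative shape of one record. `objects` is a dict of parallel lists,
# one entry per annotated region; `bbox` is [x, y, width, height].
sample = {
    "image_id": 0,
    "image": "<PIL.Image.Image>",  # placeholder for the actual image
    "width": 1240,
    "height": 1754,
    "objects": {
        "id": [0, 1],
        "area": [24000, 1044000],
        "bbox": [[40.0, 32.0, 400.0, 60.0], [40.0, 120.0, 1160.0, 900.0]],
        "category": [1, 0],  # 1 = Heading, 0 = Textline
    },
    "ground_truth": {
        "gt_parse": {
            "headline": ["..."],
            "textline": ["...", "..."],
        }
    },
}

print(list(sample["objects"].keys()))
```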
## ⚠️ Important Note
This dataset is **synthetic** β€” no real-world documents or personal data were used. It was generated programmatically to train and evaluate models under controlled conditions, without legal or ethical concerns tied to real-world data.
## 🏷️ Categories
| Label ID | Label Name |
|----------|-------------|
| 0 | Textline |
| 1 | Heading |
| 2 | Picture |
| 3 | Caption |
| 4 | Columns |
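If you are wiring this into a detection model config, the table above translates directly into the usual `id2label`/`label2id` dictionaries (the label names here are copied from the table; the variable names are just a convention, not part of the dataset):

```python
# Label mapping matching the category table above.
CATEGORIES = ["Textline", "Heading", "Picture", "Caption", "Columns"]

id2label = {i: name for i, name in enumerate(CATEGORIES)}
label2id = {name: i for i, name in id2label.items()}

print(id2label[4])          # Columns
print(label2id["Picture"])  # 2
```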
## 📝 Features

| Field | Type |
|------------------------|------------------------------------------------------------------|
| `image_id` | int64 |
| `image` | image |
| `width` | int64 |
| `height` | int64 |
| `objects` | sequence of: `id` (int64), `area` (int64), `bbox` ([x, y, width, height], float32), `category` (class label 0–4) |
| `ground_truth.gt_parse` | struct of: `headline` (list of strings), `textline` (list of strings) |
## 📊 Split

| Split | # Examples | Size |
|--------|------------|--------------------------|
| Train | 58,738 | ~84.31 GB (uncompressed) |
## 📦 Download
- **Download size**: ~93.32 GB
- **Uncompressed dataset size**: ~84.31 GB
## 🔧 Example Use (with 🤗 Datasets)

```python
from datasets import load_dataset

dataset = load_dataset("alakxender/od-syn-page-annotations")

# `load_dataset` without `split=` returns a DatasetDict, so the feature
# schema lives on the individual split:
categories = dataset["train"].features["objects"].feature["category"].names
id2label = {i: name for i, name in enumerate(categories)}
print(id2label)

sample = dataset["train"][0]
print("Image ID:", sample["image_id"])
print("Image size:", sample["width"], "x", sample["height"])
print("First object category:", sample["objects"]["category"][0])
print("First headline:", sample["ground_truth"]["gt_parse"]["headline"][0])
```
## 📊 Visualize

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from datasets import load_dataset


def get_color(idx):
    palette = [
        "red", "green", "blue", "orange", "purple",
        "cyan", "magenta", "yellow", "lime", "pink",
    ]
    return palette[idx % len(palette)]


def draw_bboxes(sample, id2label, save_path=None):
    """
    Draw bounding boxes and labels on a single dataset sample.

    Args:
        sample: A dataset example (dict) with 'image' and 'objects'.
        id2label: Mapping from category ID to label name.
        save_path: If provided, saves the image to this path.

    Returns:
        PIL Image with drawn bounding boxes.
    """
    image = sample["image"]
    annotations = sample["objects"]

    image = Image.fromarray(np.array(image))
    draw = ImageDraw.Draw(image)

    try:
        font = ImageFont.truetype("arial.ttf", 14)
    except OSError:
        font = ImageFont.load_default()

    for category, box in zip(annotations["category"], annotations["bbox"]):
        x, y, w, h = box
        color = get_color(category)
        draw.rectangle((x, y, x + w, y + h), outline=color, width=2)

        # Draw a filled label box above-left of the bbox, then the label text.
        label = id2label[category]
        bbox = font.getbbox(label)
        text_width = bbox[2] - bbox[0]
        text_height = bbox[3] - bbox[1]
        draw.rectangle([x, y, x + text_width + 4, y + text_height + 2], fill=color)
        draw.text((x + 2, y + 1), label, fill="black", font=font)

    if save_path:
        image.save(save_path)
        print(f"Saved image to {save_path}")
    else:
        image.show()

    return image


# Load one sample
dataset = load_dataset("alakxender/od-syn-page-annotations", split="train[:1]")

# Get category mapping
categories = dataset.features["objects"].feature["category"].names
id2label = {i: name for i, name in enumerate(categories)}

# Draw bounding boxes on the first sample
draw_bboxes(
    sample=dataset[0],
    id2label=id2label,
    save_path="sample_0.png",
)
```
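Note that `bbox` is stored in COCO-style `[x, y, width, height]` format, while many detection pipelines expect corner-format `[x_min, y_min, x_max, y_max]` boxes. A tiny conversion helper (`xywh_to_xyxy` is an illustrative name, not something provided by the dataset):

```python
def xywh_to_xyxy(bbox):
    """Convert a COCO-style [x, y, width, height] box, as stored in the
    `bbox` field, to [x_min, y_min, x_max, y_max] corner format."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

print(xywh_to_xyxy([40.0, 32.0, 400.0, 60.0]))  # [40.0, 32.0, 440.0, 92.0]
```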