|
|
--- |
|
|
dataset_name: "Forbin Dataset" |
|
|
tags: |
|
|
- humanities |
|
|
- digital-humanities |
|
|
- archives |
|
|
- historical-documents |
|
|
- text-detection |
|
|
- polygon-annotation |
|
|
- verso-recto-photographs
|
|
license: cc-by-nc-4.0 |
|
|
task_categories: |
|
|
- object-detection |
|
|
- feature-extraction |
|
|
- image-classification |
|
|
pretty_name: "Forbin Dataset: A collection of historical photographs with archival metadata" |
|
|
--- |
|
|
|
|
|
# Forbin Dataset: *A collection of historical photographs with archival metadata* |
|
|
|
|
|
This repository hosts the *Forbin Dataset*, a large-scale collection of historical photographs taken or collected by **Victor Forbin (1868–1947)**. |
|
|
|
|
|
This Hugging Face dataset version provides:
|
|
- COCO-style annotations (segmentation polygons) |
|
|
- Archival metadata (Box ID, description, notes, dates when available) |
|
|
- A lightweight **explorer interface** (HTML/JS) to preview images and annotations: [https://mchelali.github.io/forbin_dataset/](https://mchelali.github.io/forbin_dataset/) |
|
|
|
|
|
## 📜 Dataset Description |
|
|
|
|
|
The Forbin Dataset contains digitized historical photographs from the personal archives of Victor Forbin, a French explorer, photographer, and writer. |
|
|
Images are accompanied by rich metadata and manually extracted segmentation polygons suitable for: |
|
|
|
|
|
- Computer Vision |
|
|
- Document Analysis |
|
|
- Cultural Heritage Studies |
|
|
- Machine Learning Research |
|
|
|
|
|
The sample included here is intended for **illustration and early experimentation only**. |
|
|
The upcoming full release will contain tens of thousands of images with complete metadata and annotations. |
|
|
|
|
|
## 🛠️ Data Access and Usage Instructions |
|
|
|
|
|
Given the size of the image archives, the dataset must be loaded in a two-step process: **Local Download** followed by **Indexing**. |
|
|
|
|
|
### 1\. Downloading the Raw Data Files (Images and Annotations) ⬇️ |
|
|
|
|
|
The dataset is distributed as WebDataset archives (`.tar`) and separate JSON annotation files. **You must download these files locally before starting the training process.** |
|
|
|
|
|
| File | Content | Note | |
|
|
| :--- | :--- | :--- | |
|
|
| **`forbin_all.json`** | All image IDs and metadata, plus annotations for the images that have them. | Used for full dataset indexing. |
|
|
| **`forbin_annotated.json`** | Only images that have associated annotations (simplified index). | Useful for training on annotation tasks. | |
|
|
| **`data/*.tar`** | WebDataset archives containing all raw images. | **Large files.** | |
|
|
|
|
|
#### **Mode A: Via the Hugging Face Command Line Interface (CLI)** |
|
|
|
|
|
This is the fastest method for users familiar with the terminal. |
|
|
|
|
|
```bash |
|
|
# Requires installation: pip install huggingface_hub |
|
|
hf download mchelali/forbin_dataset --repo-type dataset --local-dir ./forbin_data_local |
|
|
``` |
|
|
|
|
|
#### **Mode B: Via Python (Recommended for Resumable Downloads)** |
|
|
|
|
|
This method uses the official Python API, which automatically resumes the download if it is interrupted.
|
|
|
|
|
```python |
|
|
from huggingface_hub import snapshot_download |
|
|
|
|
|
snapshot_download( |
|
|
repo_id="mchelali/forbin_dataset", |
|
|
repo_type="dataset", |
|
|
local_dir="./forbin_data_local" # Your chosen destination folder |
|
|
) |
|
|
``` |
|
|
|
|
|
#### **Web Download Interface (for Humanities and Social Sciences Researchers):**
|
|
|
|
|
For users less familiar with the command line, we provide a dedicated web interface to download the individual `.tar` archives one by one: |
|
|
|
|
|
➡️ **Web Download Interface:** [https://mchelali.github.io/forbin_dataset/download.html](https://mchelali.github.io/forbin_dataset/download.html)
|
|
|
|
|
----- |
|
|
|
|
|
### 2\. Indexing and Annotation Usage 📚 |
|
|
|
|
|
Once the `*.json` and `*.tar` files are downloaded locally, you can build your own data loading pipeline. |
|
|
|
|
|
**Annotation Format:** |
|
|
|
|
|
All annotations (including textual metadata, bounding boxes, and segmentation polygons) are provided in the standard **COCO (Common Objects in Context) format**. This ensures compatibility with existing computer vision tools and libraries like PyTorch, TensorFlow, and `pycocotools`. |
|
|
|
|
|
The JSON file acts as your **manifest** (index table): it links each image ID (`image_id`) to the image's location within the `.tar` archives, given by the `file_names` field in the `images` section.
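For reference, a manifest entry might look like the sketch below. The values are purely illustrative; apart from `image_id`, `file_names`, and the `images`/`annotations` sections described above, the fields are standard COCO assumptions and may differ in the real files:

```json
{
  "images": [
    {"id": 1, "file_names": "images/0001.jpg"}
  ],
  "annotations": [
    {
      "id": 10,
      "image_id": 1,
      "category_id": 1,
      "bbox": [12, 34, 200, 150],
      "segmentation": [[12, 34, 212, 34, 212, 184, 12, 184]]
    }
  ]
}
```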
|
|
|
|
|
**To use the dataset:** |
|
|
|
|
|
1. Load the JSON file (`forbin_all.json` or `forbin_annotated.json`) into your program. |
|
|
2. Use the Python `tarfile` (or `webdataset`) library to open the corresponding `.tar` archive and load the image bytes based on the path provided in the `file_names` field. |
|
|
3. Apply the COCO annotations (found in the `annotations` section of the JSON) to the loaded image. |
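The three steps above can be sketched end-to-end as follows. This is a self-contained toy example: the manifest and the archive are synthesized on the fly in a temporary directory, the archive name and image bytes are placeholders, and only the `images`, `annotations`, `image_id`, and `file_names` names come from the description above.

```python
import io
import json
import os
import tarfile
import tempfile

tmp = tempfile.mkdtemp()

# --- Stand-in for a data/*.tar archive, holding one fake "image" ---
tar_path = os.path.join(tmp, "shard-000.tar")  # placeholder name
with tarfile.open(tar_path, "w") as tar:
    payload = b"\x89PNG fake bytes"            # placeholder image bytes
    info = tarfile.TarInfo(name="images/0001.png")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# --- Stand-in for forbin_all.json / forbin_annotated.json ---
manifest = {
    "images": [{"id": 1, "file_names": "images/0001.png"}],
    "annotations": [{"image_id": 1, "segmentation": [[0, 0, 10, 0, 10, 10]]}],
}
json_path = os.path.join(tmp, "forbin_all.json")
with open(json_path, "w") as f:
    json.dump(manifest, f)

# Step 1: load the JSON manifest
with open(json_path) as f:
    index = json.load(f)

# Step 2: resolve the image path via `file_names` and read the bytes
# from the corresponding .tar archive
entry = index["images"][0]
with tarfile.open(tar_path) as tar:
    image_bytes = tar.extractfile(entry["file_names"]).read()

# Step 3: pair the image with its COCO annotations via `image_id`
anns = [a for a in index["annotations"] if a["image_id"] == entry["id"]]
print(len(image_bytes), len(anns))
```

For the real dataset, you would point the paths at `./forbin_data_local` and iterate over all entries; the `webdataset` library can replace the manual `tarfile` handling for streaming training pipelines.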
|
|
|
|
|
## 🔖 License |
|
|
|
|
|
This sample dataset is released under the following license: |
|
|
|
|
|
**Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** |
|
|
➡️ https://creativecommons.org/licenses/by-nc/4.0/ |
|
|
|
|
|
This means: |
|
|
- ✔ You must provide attribution |
|
|
- ✔ You may share and adapt the material |
|
|
- ❌ You may **not** use it for commercial purposes |
|
|
|
|
|
|
|
|
## 📚 Citation |
|
|
|
|
|
If you use this dataset or the sample in academic work, please cite the forthcoming data paper: |
|
|
|
|
|
```
[Under review]
Chelali M., Gosselet S. K., Cloppet F., Kurtz C., Bloch I. and Foliard D.,
The Forbin Dataset: A collection of historical photographs with archival metadata, 2025.
```
|
|
## 🤝 Acknowledgment of Authors |
|
|
|
|
|
This dataset originates from the personal archives of **Victor Forbin**, digitized and curated by the *High Vision Project – Archives & Vision Initiative*. |
|
|
All annotation and data processing work was performed by the project contributors. |
|
|
|
|
|
This work is supported by the French National Research Agency under the **ANR-24-CE38-4079** project.
|
|
|