Improve dataset card: Add metadata, paper, code, and description for MS-Celeb-1M-v1c
#2
by nielsr (HF Staff), opened
README.md CHANGED
````diff
@@ -20,4 +20,31 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+license: mit
+task_categories:
+- face-recognition
+- image-classification
 ---
+
+# MS-Celeb-1M-v1c Dataset
+
+This repository hosts the **MS-Celeb-1M-v1c** dataset, a cleaned version of the MS-Celeb-1M dataset designed for face recognition tasks. It is integrated into and used by the [DORAEMON: A Unified Library for Visual Object Modeling and Representation Learning at Scale](https://huggingface.co/papers/2511.04394) framework.
+
+The dataset comprises over 70,000 unique identities and approximately 3.6 million images. It has been validated on the Labeled Faces in the Wild (LFW) benchmark, supporting its quality and relevance for training robust face recognition models, and it offers a scalable foundation for rapid experimentation in visual recognition and representation learning.
+
+**Paper:** [DORAEMON: A Unified Library for Visual Object Modeling and Representation Learning at Scale](https://huggingface.co/papers/2511.04394)
+**Code:** [https://github.com/wuji3/DORAEMON](https://github.com/wuji3/DORAEMON)
+
+## Citation
+
+If you find this dataset or the DORAEMON project useful for your research or development, please cite the following paper:
+
+```bibtex
+@article{du2025visual,
+  title={DORAEMON: A Unified Library for Visual Object Modeling and Representation Learning at Scale},
+  author={Ke Du and Yimin Peng and Chao Gao and Fan Zhou and Siqiao Xue},
+  year={2025},
+  journal={arXiv preprint arXiv:2511.04394},
+  url={https://arxiv.org/abs/2511.04394},
+}
+```
````
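The LFW validation mentioned in the card follows the standard face-verification protocol: embed both images of each pair, score the pair by cosine similarity, and threshold the score. A minimal NumPy sketch of that scoring step is below; the embeddings, threshold, and cluster geometry are synthetic placeholders for illustration, not values taken from the dataset or from DORAEMON:

```python
import numpy as np

def verification_accuracy(emb_a, emb_b, same_labels, threshold):
    """LFW-style verification: a pair is predicted 'same identity' when
    the cosine similarity of its two embeddings exceeds the threshold."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = np.sum(a * b, axis=1)          # cosine similarity per pair
    preds = sims > threshold
    return float(np.mean(preds == same_labels))

# Synthetic demo with well-separated identity clusters.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(10, 128))
pos = anchor + 0.01 * rng.normal(size=(10, 128))  # same-identity pairs
neg = rng.normal(size=(10, 128))                  # different-identity pairs
emb_a = np.vstack([anchor, anchor])
emb_b = np.vstack([pos, neg])
labels = np.array([True] * 10 + [False] * 10)

# With clusters this well separated, every pair is classified correctly.
acc = verification_accuracy(emb_a, emb_b, labels, threshold=0.5)
```

In the real LFW protocol the threshold is chosen by cross-validation over the 10 predefined folds rather than fixed in advance.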