---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype: int64
  - name: class_name
    dtype: string
  - name: file_name
    dtype: string
  splits:
  - name: train
    num_bytes: 39977866160.901
    num_examples: 3671021
  download_size: 32804935831
  dataset_size: 39977866160.901
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- face-recognition
- image-classification
---

# MS-Celeb-1M-v1c Dataset

This repository hosts the **MS-Celeb-1M-v1c** dataset, a cleaned version of MS-Celeb-1M curated for face recognition tasks. It is used as a training dataset in the [DORAEMON: A Unified Library for Visual Object Modeling and Representation Learning at Scale](https://huggingface.co/papers/2511.04394) framework.

The dataset comprises over 70,000 unique identities and approximately 3.7 million images (3,671,021 training examples). Its quality has been validated on the Labeled Faces in the Wild (LFW) benchmark, supporting its relevance for training robust face recognition models. The dataset offers a scalable foundation for rapid experimentation in visual recognition and representation learning.

- **Paper:** [DORAEMON: A Unified Library for Visual Object Modeling and Representation Learning at Scale](https://huggingface.co/papers/2511.04394)
- **Code:** [https://github.com/wuji3/DORAEMON](https://github.com/wuji3/DORAEMON)
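
## Usage

The metadata above lists four features per example (`image`, `label`, `class_name`, `file_name`) and a single `train` split. Below is a minimal sketch of reading a few examples with the 🤗 `datasets` library; the repository ID is a placeholder, since this card does not state it, and streaming is used so the full ~33 GB download is not required just to inspect the data.

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual path of this dataset repo.
REPO_ID = "<namespace>/MS-Celeb-1M-v1c"

# Streaming iterates over the Parquet shards without downloading them all first.
ds = load_dataset(REPO_ID, split="train", streaming=True)

for example in ds:
    img = example["image"]        # decoded PIL image
    label = example["label"]      # int64 identity index
    name = example["class_name"]  # identity name as a string
    path = example["file_name"]   # original file name
    print(path, label, name, img.size)
    break
```

Dropping `streaming=True` gives a fully materialized, randomly accessible dataset, which is preferable for multi-epoch training once local disk space allows.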

## Citation

If you find this dataset or the DORAEMON project useful for your research or development, please cite the following paper:

```bibtex
@misc{du2025visual,
      title={DORAEMON: A Unified Library for Visual Object Modeling and Representation Learning at Scale},
      author={Ke Du and Yimin Peng and Chao Gao and Fan Zhou and Siqiao Xue},
      year={2025},
      eprint={2511.04394},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2511.04394},
}
```