---
language:
  - en
license: cc-by-4.0
size_categories:
  - 10M<n<100M
task_categories:
  - feature-extraction
tags:
  - pretraining
  - text
  - multimodal
  - embeddings
  - vision-language
dataset_info:
  features:
    - name: text
      dtype: string
    - name: images
      sequence: 'null'
  splits:
    - name: train
      num_bytes: 116521171921
      num_examples: 20582647
  download_size: 71520997481
  dataset_size: 116521171921
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# DCLM used in MoCa Pre-training

🏠 Homepage | 💻 Code | 🤖 MoCa-Qwen25VL-7B | 🤖 MoCa-Qwen25VL-3B | 📚 Datasets | 📄 Paper

## Introduction

This is a text pre-training dataset used in the modality-aware continual pre-training of MoCa models. It is adapted from DCLM and randomly downsampled to ~20B tokens.

The dataset consists of text-only examples: `text` is a string containing the document text, while `images` is intentionally left empty, since no images are available for this split.
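As a rough illustration of the schema described above, the sketch below builds one record in plain Python and checks it against the card's features (`text`: string, `images`: empty sequence). The sample text and the `matches_schema` helper are invented for illustration; they are not part of the dataset or its tooling.

```python
# Minimal sketch of one record in this dataset's schema.
# Per the card: `text` is a string; `images` is intentionally empty.
# The sample text below is invented for illustration.
record = {"text": "An example DCLM document.", "images": []}

def matches_schema(r):
    """Return True if a record has a string `text` and an empty `images` sequence."""
    return (
        isinstance(r.get("text"), str)
        and isinstance(r.get("images"), list)
        and len(r["images"]) == 0
    )

print(matches_schema(record))  # True
```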

## Sample Usage

This dataset is used for pre-training MoCa models, which are multimodal embedding models. To use the pre-trained MoCa models for inference (e.g., embedding text and images), refer to the associated code repository.

An example script for inference is provided:

```bash
python demo.py
```

This script allows you to embed your own text and images.

## Citation

### MoCa

```bibtex
@article{chen2025moca,
  title={MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional Multimodal Embeddings},
  author={Chen, Haonan and Liu, Hong and Luo, Yuping and Wang, Liang and Yang, Nan and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2506.23115},
  year={2025}
}
```

### DCLM

```bibtex
@misc{li2024datacomplm,
      title={DataComp-LM: In search of the next generation of training sets for language models},
      author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
      year={2024},
      eprint={2406.11794},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```