Improve dataset card: Add task category, refine tags, and include sample usage #2
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,13 +1,17 @@
 ---
----
-license: cc-by-4.0
 language:
 - en
+license: cc-by-4.0
 size_categories:
 - 10M<n<100M
+task_categories:
+- feature-extraction
 tags:
 - pretraining
 - text
+- multimodal
+- embeddings
+- vision-language
 dataset_info:
   features:
   - name: text
@@ -26,6 +30,7 @@ configs:
 - split: train
   path: data/train-*
 ---
+
 # DCLM used in MoCa Pre-training
 
 [🏠 Homepage](https://haon-chen.github.io/MoCa/) | [💻 Code](https://github.com/haon-chen/MoCa) | [🤗 MoCa-Qwen25VL-7B](https://huggingface.co/moca-embed/MoCa-Qwen25VL-7B) | [🤗 MoCa-Qwen25VL-3B](https://huggingface.co/moca-embed/MoCa-Qwen25VL-3B) | [📚 Datasets](https://huggingface.co/moca-embed/datasets) | [📄 Paper](https://arxiv.org/abs/2506.23115)
@@ -36,7 +41,17 @@ This is a text pre-training dataset used in the modality-aware continual pre-tra
 
 The dataset consists of text examples. `text` is a string containing text while `images` are left blank intentionally since there is no image available.
 
+## Sample Usage
+
+This dataset is used for pre-training MoCa models, which are multimodal embedding models. To use the pre-trained MoCa models for inference (e.g., embedding text and images), you can refer to the associated [code repository](https://github.com/haon-chen/MoCa).
 
+An example script for inference is provided:
+
+```bash
+python demo.py
+```
+
+This script allows you to embed your own text and images.
 
 ## Citation
 MoCa
@@ -60,5 +75,4 @@ DCLM
       eprint={2406.11794},
       archivePrefix={arXiv},
       primaryClass={cs.LG}
-```
-
+```
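As context for the card's description (a `text` string field with `images` intentionally left empty), here is a minimal, hypothetical sketch of what a record with that layout looks like; the values and helper function are invented for illustration and are not drawn from the actual dataset:

```python
# Hypothetical record matching the schema described in the card:
# `text` carries a pre-training document, `images` is intentionally
# empty because this DCLM subset is text-only.
record = {
    "text": "An example pre-training document.",
    "images": [],
}

def is_text_only(example: dict) -> bool:
    """Return True when a record carries no images (the DCLM case)."""
    return not example.get("images")

print(is_text_only(record))  # prints True
```

Keeping an empty `images` field (rather than dropping it) lets the same loading code handle both this text-only corpus and the multimodal portions of the MoCa pre-training mix.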