nielsr (HF Staff) committed
Commit 5914def · verified · 1 Parent(s): 6efda93

Enhance dataset card: Update task category and add sample usage


This pull request improves the dataset card for the mmBERT training data by:
- Updating the `task_categories` metadata from `fill-mask` to `feature-extraction`. This better reflects the primary utility of models trained on this dataset for downstream tasks such as classification and retrieval, as highlighted in the paper abstract and the associated GitHub repository.
- Adding a "Sample Usage" section with practical Python snippets taken directly from the GitHub README. The section shows how to install the required packages and use the mmBERT models for generating multilingual embeddings, masked language modeling, and multilingual retrieval.

Files changed (1)
  1. README.md +95 -3
README.md CHANGED
@@ -1,7 +1,7 @@
---
license: mit
task_categories:
- - fill-mask
+ - feature-extraction
tags:
- pretraining
- encoder
@@ -19,6 +19,98 @@ tags:

This dataset is part of the complete, pre-shuffled training data used to train the [mmBERT encoder models](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4). Unlike the individual phase datasets, this version is ready for immediate use, but **the mixture cannot be modified easily**. The data is provided in **decompressed MDS format**, ready for use with [Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).

+ ## Sample Usage
+ 
+ The mmBERT models are available on the Hugging Face Hub and can be loaded with the `transformers` library. Here are quick examples for feature extraction (getting embeddings), masked language modeling, and multilingual retrieval.
+ 
+ First, install the necessary packages:
+ ```bash
+ pip install "torch>=1.9.0"
+ pip install "transformers>=4.48.0"
+ ```
+ 
+ ### Get Multilingual Embeddings (Feature Extraction)
+ 
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+ 
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
+ model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")
+ 
+ # Example: get multilingual embeddings by mean-pooling the last hidden state
+ inputs = tokenizer("Hello world! 你好世界! Bonjour le monde!", return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+ embeddings = outputs.last_hidden_state.mean(dim=1)
+ 
+ print(f"Embeddings shape: {embeddings.shape}")
+ ```
+ 
+ ### Multilingual Masked Language Modeling
+ 
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+ import torch
+ 
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
+ model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/mmbert-base")
+ 
+ # Example: multilingual masked language modeling
+ text = "The capital of [MASK] is Paris."
+ inputs = tokenizer(text, return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+ 
+ # Get the top-5 predictions for the [MASK] token
+ mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
+ predictions = outputs.logits[mask_indices]
+ top_tokens = torch.topk(predictions, 5, dim=-1)
+ predicted_words = [tokenizer.decode(token) for token in top_tokens.indices[0]]
+ print(f"Predictions for [MASK]: {predicted_words}")
+ ```
+ 
+ ### Multilingual Retrieval
+ 
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+ import numpy as np
+ 
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
+ model = AutoModel.from_pretrained("jhu-clsp/mmbert-base")
+ 
+ def get_embeddings(texts):
+     inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
+     with torch.no_grad():
+         outputs = model(**inputs)
+     # Mean pooling over the token dimension
+     embeddings = outputs.last_hidden_state.mean(dim=1)
+     return embeddings.numpy()
+ 
+ # Multilingual document collection
+ documents = [
+     "Artificial intelligence is transforming healthcare.",
+     "L'intelligence artificielle transforme les soins de santé.",
+     "人工智能正在改变医疗保健。",
+     "Climate change requires immediate action.",
+     "El cambio climático requiere acción inmediata."
+ ]
+ 
+ query = "AI in medicine"
+ 
+ # Embed the documents and the query
+ doc_embeddings = get_embeddings(documents)
+ query_embedding = get_embeddings([query])
+ 
+ # Rank documents by dot-product similarity with the query
+ similarities = np.dot(doc_embeddings, query_embedding.T).flatten()
+ ranked_docs = np.argsort(similarities)[::-1]
+ 
+ print("Most similar documents:")
+ for i, doc_idx in enumerate(ranked_docs[:3]):
+     print(f"{i+1}. {documents[doc_idx]} (score: {similarities[doc_idx]:.3f})")
+ ```
+ 
## Licensing & Attribution

This dataset aggregates multiple open-source datasets under permissive licenses. See individual source datasets for specific attribution requirements.
@@ -35,12 +127,12 @@ This dataset aggregates multiple open-source datasets under permissive licenses.

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
- title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
+ title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
year={2025},
eprint={2509.06888},
archivePrefix={arXiv},
primaryClass={cs.CL},
- url={https://arxiv.org/abs/2509.06888},
+ url={https://arxiv.org/abs/2509.06888},
}
```
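
The card above states that the data ships in **decompressed MDS format** for use with Composer and the ModernBERT training repository. As a quick sanity check outside that pipeline, the shards can also be read directly with MosaicML's `streaming` library (`pip install mosaicml-streaming`). The following is a minimal sketch, not part of the commit: the local path is hypothetical, and the field names depend on how the shards were written, so the sample's keys are printed rather than assumed.

```python
# Minimal sketch (assumed workflow, not from the dataset card):
# inspect a locally downloaded copy of the decompressed MDS shards.
from streaming import StreamingDataset  # pip install mosaicml-streaming

# Hypothetical local path to the downloaded shards
dataset = StreamingDataset(local="./mmbert-pretrain-data", shuffle=False)

print(f"Number of samples: {len(dataset)}")

# Field names depend on how the shards were written; inspect rather than assume.
sample = dataset[0]
print(sample.keys())
```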
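
One design note on the retrieval snippet added in the diff: it ranks documents by a raw dot product, so embedding magnitude influences the scores. A common variant (an editorial suggestion, not from the README) is to L2-normalize the embeddings first, turning the dot product into cosine similarity:

```python
import numpy as np

def cosine_rank(doc_embeddings: np.ndarray, query_embedding: np.ndarray):
    """Rank documents by cosine similarity; a drop-in alternative to the
    dot-product ranking in the retrieval snippet above."""
    # L2-normalize rows so dot products become cosine similarities
    docs = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    query = query_embedding / np.linalg.norm(query_embedding, axis=1, keepdims=True)
    similarities = (docs @ query.T).flatten()
    return np.argsort(similarities)[::-1], similarities
```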