---
library_name: transformers
license: apache-2.0
pipeline_tag: feature-extraction
---

# Overview

This repository contains an encoder model, part of the research presented in the paper *Should We Still Pretrain Encoders with Masked Language Modeling?* (Gisserot-Boukhlef et al.).

* **Paper:** [Should We Still Pretrain Encoders with Masked Language Modeling?](https://huggingface.co/papers/2507.00994)
* **Blog post:** [Link](https://huggingface.co/blog/Nicolas-BZRD/encoders-should-not-be-only-pre-trained-with-mlm)
* **Project page:** [https://hf.co/MLMvsCLM](https://hf.co/MLMvsCLM)
* **Code:** [https://github.com/Nicolas-BZRD/EuroBERT](https://github.com/Nicolas-BZRD/EuroBERT)

## Model Naming

Model identifiers follow a consistent format that encodes key training details (a small parsing sketch is shown after the list):

* **Single-stage models**:
  `[model size]-[objective]-[number of steps]`.
  Example: `610m-clm-42k` denotes a 610M-parameter model trained with CLM for 42,000 steps.
* **Two-stage models**:
  `[model size]-[objective #1]-[steps #1]-[objective #2]-[total steps]`.
  Example: `610m-clm-10k-mlm40-42k` indicates a 610M model trained first with CLM for 10k steps, then continued with MLM (40% masking ratio) for 32k more steps, totaling 42k steps.
* **Continued pretraining from decayed checkpoints**:
  These use the `dec` prefix on the first training stage.
  Example: `610m-clm-dec42k-mlm40-64k` refers to a 610M model pretrained with CLM for 42k steps (decayed checkpoint), then further trained with MLM (40% masking) for 22k additional steps, totaling 64k.
* **Intermediate checkpoints**:
  To refer to a specific training step before the final checkpoint, append the step number at the end.
  Example: `610m-mlm40-42k-1000` corresponds to step 1,000 during the MLM training phase of a 610M model trained for 42k steps.
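
As a reading aid, the sketch below shows one way to split such an identifier into its components with a regular expression. The `parse_model_id` helper is purely illustrative and not part of the released code; it only mirrors the four patterns listed above.

```python
import re

# Illustrative only: split a model identifier such as "610m-clm-dec42k-mlm40-64k"
# into its naming components (size, objectives, masking ratio, steps, checkpoint).
_ID_PATTERN = re.compile(
    r"(?P<size>\d+m)"                                            # model size, e.g. 610m
    r"-(?P<obj1>clm|mlm)(?P<ratio1>\d+)?"                        # first objective (+ mask ratio)
    r"-(?P<dec1>dec)?(?P<steps1>\d+k)"                           # first-stage steps, optional dec prefix
    r"(?:-(?P<obj2>clm|mlm)(?P<ratio2>\d+)?-(?P<steps2>\d+k))?"  # optional second stage
    r"(?:-(?P<ckpt>\d+))?"                                       # optional intermediate checkpoint step
)

def parse_model_id(model_id: str) -> dict:
    match = _ID_PATTERN.fullmatch(model_id)
    if match is None:
        raise ValueError(f"Unrecognized model identifier: {model_id}")
    return {k: v for k, v in match.groupdict().items() if v is not None}

print(parse_model_id("610m-clm-42k"))
# {'size': '610m', 'obj1': 'clm', 'steps1': '42k'}
print(parse_model_id("610m-clm-dec42k-mlm40-64k"))
# {'size': '610m', 'obj1': 'clm', 'dec1': 'dec', 'steps1': '42k', 'obj2': 'mlm', 'ratio2': '40', 'steps2': '64k'}
print(parse_model_id("610m-mlm40-42k-1000"))
# {'size': '610m', 'obj1': 'mlm', 'ratio1': '40', 'steps1': '42k', 'ckpt': '1000'}
```
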
## Usage

You can use this model for feature extraction with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Replace with the ID of this repository (or of another checkpoint from the
# MLMvsCLM collection, following the naming convention above).
model_name = "<YOUR_MODEL_ID_HERE>"

# Load the tokenizer and model; trust_remote_code is needed for custom architectures
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

text = "This is an example sentence to extract features from."

inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The last hidden state contains the token embeddings (features)
last_hidden_state = outputs.last_hidden_state
print(f"Shape of last hidden state: {last_hidden_state.shape}")

# For sentence-level embeddings, common approaches include:
# 1. Averaging the token embeddings (excluding special tokens)
# 2. Using the embedding of the [CLS] token (if applicable for the model's architecture)
# Example: mean pooling (simple average over non-padding tokens)
attention_mask = inputs["attention_mask"]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
mean_pooled_embedding = sum_embeddings / sum_mask
print(f"Shape of mean pooled embedding: {mean_pooled_embedding.shape}")
```
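
Instead of mean pooling, a single-vector representation can also be taken from the first position, provided the checkpoint's tokenizer prepends a [CLS]- or BOS-style token. Whether such a token exists depends on the specific architecture, so treat the following as an assumption to verify rather than a recommendation:

```python
# Assumes the tokenizer prepends a [CLS]/BOS-style token; verify with
# tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) before relying on it.
first_token_embedding = last_hidden_state[:, 0]
print(f"Shape of first-token embedding: {first_token_embedding.shape}")
```
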
## Citation

If you found this model useful, please consider citing our paper:

```bibtex
@misc{gisserotboukhlef2025pretrainencodersmaskedlanguage,
      title={Should We Still Pretrain Encoders with Masked Language Modeling?},
      author={Hippolyte Gisserot-Boukhlef and Nicolas Boizard and Manuel Faysse and Duarte M. Alves and Emmanuel Malherbe and André F. T. Martins and Céline Hudelot and Pierre Colombo},
      year={2025},
      eprint={2507.00994},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.00994},
}
```