---
license: apache-2.0
language:
- en
- ru
- zh
- de
- ja
- es
- fr
- it
- pt
- pl
- nl
- id
- tr
- cs
- vi
- sv
- fa
- ar
- el
- da
- hu
---
# FineWeb-HQ-Classifiers
This repository contains the model weights of the trained deep learning classifiers used to identify structured and knowledge-rich samples for the [FineWeb-HQ](https://huggingface.co/datasets/epfml/FineWeb-HQ) and [FineWeb2-HQ](https://huggingface.co/datasets/epfml/FineWeb2-HQ) datasets.
FineWeb-HQ and FineWeb2-HQ are high-quality, model-filtered, multilingual pretraining datasets derived as subsets of [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2). They were created by selecting the top 10% of FineWeb and FineWeb2 documents, as ranked by the classifiers, which score documents using [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-base) embeddings.
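For illustration, selecting a fixed top fraction of a corpus reduces to a quantile cutoff on the classifier scores. This is a toy sketch with random scores, not the actual filtering pipeline:

```python
# Toy sketch: keep the top 10% of documents by classifier score.
# The scores here are random placeholders, one per hypothetical document.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(10_000)  # stand-in for per-document classifier scores

threshold = np.quantile(scores, 0.9)  # score cutoff for the top 10%
selected = scores >= threshold

print(f"threshold={threshold:.3f}, kept={int(selected.sum())} of {scores.size}")
```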
For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).
## Quickstart
The classifiers use a simple feed-forward architecture that takes mean-pooled XLM-RoBERTa embeddings as input and outputs a score logit.
```python
import torch
from transformers import AutoModel, AutoTokenizer
import huggingface_hub


class BinaryClassifier(torch.nn.Module):
    def __init__(self, embedding_dim=768, hidden_dim=256):
        super().__init__()
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(embedding_dim, hidden_dim),
            torch.nn.ReLU(),
            torch.nn.Dropout(0.2),
            torch.nn.Linear(hidden_dim, 1),
        )

    def forward(self, X):
        return self.classifier(X)

    def to_pt(self, file_name):
        torch.save(self.state_dict(), file_name)

    @staticmethod
    def from_pt(file_name, embedding_dim=768, hidden_dim=256):
        state_dict = torch.load(
            file_name,
            weights_only=True,
            map_location=torch.device("cpu"),
        )
        classifier = BinaryClassifier(
            embedding_dim=embedding_dim,
            hidden_dim=hidden_dim,
        )
        classifier.load_state_dict(state_dict)
        classifier.eval()
        return classifier


if __name__ == "__main__":
    embedding_model_name = "FacebookAI/xlm-roberta-base"
    tokenizer = AutoTokenizer.from_pretrained(embedding_model_name)
    embedding_model = AutoModel.from_pretrained(
        embedding_model_name,
        dtype=torch.bfloat16,
    )

    classifiers_dir = huggingface_hub.snapshot_download("epfml/FineWeb-HQ-Classifiers")
    classifier_model_en = BinaryClassifier.from_pt(f"{classifiers_dir}/eng_Latn.pt")

    @torch.no_grad()
    def score_sample(text, classifier_model):
        # Mean-pool the token embeddings, then apply the classifier head.
        inputs = tokenizer([text], return_tensors="pt", truncation=True)
        embeddings = embedding_model(**inputs).last_hidden_state.float().mean(1)
        return torch.sigmoid(classifier_model(embeddings)).item()

    text = "Question: How is bipolar disorder different from unipolar depression or 'regular' depression?\nAnswer: Both bipolar disorder and major depression are typically associated with depressive episodes. So both illnesses are accompanied by depressions. The difference is that in bipolar disorder people also have periods of elevation -- or severe irritability. We call these manic or hypomanic episodes."
    score = score_sample(text, classifier_model_en)
    print(f"{score:0.4f}")  # 0.9338

    text = "Custom Wedding Gifts\nPersonalized photo frames, albums & keepsakes. Heirloom quality!\nCustom Engraved Journals\nHandmade in Florence Italy. Dozens of sizes and paper styles!"
    score = score_sample(text, classifier_model_en)
    print(f"{score:0.4f}")  # 0.0001
```
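The quickstart above mean-pools with a plain `.mean(1)`, which is fine for a single unpadded input. When scoring padded batches, a masked mean that ignores padding tokens is the usual variant; a self-contained sketch (not part of the released code):

```python
# Masked mean pooling: average only over real tokens, skipping padding.
import torch


def masked_mean_pool(hidden, mask):
    # hidden: (batch, seq, dim); mask: (batch, seq) with 1 for real tokens, 0 for padding
    mask = mask.unsqueeze(-1).to(hidden.dtype)
    return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)


h = torch.tensor([[[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]])  # last position is padding
m = torch.tensor([[1, 1, 0]])
print(masked_mean_pool(h, m))  # tensor([[2., 3.]])
```

With `AutoTokenizer`, the mask is already available as `inputs["attention_mask"]`, so the pooled embeddings become `masked_mean_pool(outputs.last_hidden_state.float(), inputs["attention_mask"])`.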
For a more efficient implementation, see our [GitHub repository](https://github.com/epfml/fineweb2-hq).
## Citation information
```bibtex
@article{messmer2025multilingdatacomp,
  title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
  author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
  journal={arXiv preprint arXiv:2502.10361},
  year={2025},
  url={https://arxiv.org/abs/2502.10361},
}
```