  - split: train
    path: data/train-*
---

# FineWeb2-Ro-BERT
**FineWeb2-Ro-BERT** is a large-scale pretraining dataset in the Romanian language. The data is derived from [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) and annotated using a BERT-based model for signals such as `educational quality` and `topic`. More details can be found [here](https://arxiv.org/abs/2511.01090).
## Key Features
* **Massive Scale**: Contains approximately **54.1M** rows (documents or sequences), providing comprehensive linguistic coverage for training robust Romanian embeddings and encoders. At this scale, streaming is often more practical than a full download; see the sketch below.
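
Given the size of the corpus, you may prefer to stream the data rather than download every shard up front. A minimal sketch using the `datasets` streaming mode (the basic, non-streaming loading pattern is shown under Usage below):

```python
from datasets import load_dataset

# Stream the train split instead of downloading all ~54.1M rows up front.
dataset = load_dataset("OpenLLM-Ro/fineweb2-ro-bert", split="train", streaming=True)

# Peek at the first record to see the available columns.
print(next(iter(dataset)))
```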
## Usage
You can load this dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("OpenLLM-Ro/fineweb2-ro-bert", split="train")
```
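
Because the documents carry model-derived annotations, the corpus can be filtered before pretraining. The sketch below is a hedged example: the column names `educational_quality` and `text` are assumptions rather than the confirmed schema, so check the dataset viewer for the actual annotation fields first.

```python
from datasets import load_dataset

# Stream to avoid a full download while experimenting.
dataset = load_dataset("OpenLLM-Ro/fineweb2-ro-bert", split="train", streaming=True)

# Keep only documents whose educational-quality annotation clears a threshold.
# NOTE: "educational_quality" is an assumed column name; adjust it to match
# the real schema shown in the dataset viewer.
high_quality = dataset.filter(lambda row: row["educational_quality"] >= 3)

# Preview the first few retained documents ("text" is the usual FineWeb2
# content column, also an assumption here).
for row in high_quality.take(5):
    print(row["text"][:200])
```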