- split: train
  path: data/train-*
---

# FineWeb2-Ro-LLM

**FineWeb2-Ro-LLM** is a high-quality pretraining dataset for the Romanian language. The data was filtered from [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) using LLMs.

More details are available in the [paper](https://arxiv.org/abs/2511.01090).

## Key Features

* **High Quality**: The dataset was filtered using [Gemma3 12B](https://huggingface.co/google/gemma-3-12b-it).
* **Large Scale**: Contains approximately **1.06M** documents (rows).
* **Rich Metadata**: Includes detailed metadata such as quality scores (`int_score`), topics, subtopics, and the reasoning/explanations behind the assigned quality scores.

## Usage

You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("OpenLLM-Ro/fineweb2-ro-llm", split="train")