---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: timestamp[ns, tz=UTC]
  - name: dump
    dtype: string
  - name: file_path
    dtype: string
  - name: language_score
    dtype: float64
  - name: minhash_cluster_size
    dtype: int64
  - name: top_langs
    dtype: string
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: topic_class_1
    dtype: string
  - name: topic_prob_1
    dtype: float64
  - name: topic_class_2
    dtype: string
  - name: topic_prob_2
    dtype: float64
  - name: topic_class_3
    dtype: string
  - name: topic_prob_3
    dtype: float64
  - name: format_class_1
    dtype: string
  - name: format_prob_1
    dtype: float64
  - name: format_class_2
    dtype: string
  - name: format_prob_2
    dtype: float64
  - name: format_class_3
    dtype: string
  - name: format_prob_3
    dtype: float64
  - name: age_group_class_1
    dtype: string
  - name: age_group_prob_1
    dtype: float64
  - name: age_group_class_2
    dtype: string
  - name: age_group_prob_2
    dtype: float64
  - name: age_group_class_3
    dtype: string
  - name: age_group_prob_3
    dtype: float64
  splits:
  - name: train
    num_bytes: 225675034950
    num_examples: 54128784
  download_size: 131804901421
  dataset_size: 225675034950
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# FineWeb2-Ro-BERT

**FineWeb2-Ro-BERT** is a large-scale Romanian-language pretraining dataset. The data is derived from [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) and annotated with a BERT-based classifier for signals such as `educational quality` and `topic`. More details can be found [here](https://arxiv.org/abs/2511.01090).

## Key Features

* **Massive Scale**: contains roughly **54.1M** documents (54,128,784 rows, ~226 GB), providing broad linguistic coverage for training robust Romanian embeddings and encoders.

## Usage

You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("OpenLLM-Ro/fineweb2-ro-bert", split="train")
```
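The per-document annotations in the schema above (quality bins, topic labels and their probabilities) can drive downstream filtering. A minimal sketch over hypothetical in-memory rows — the column names come from the schema, but the row values and threshold choices are illustrative assumptions, not from the source:

```python
# Hypothetical rows carrying a few of the annotation columns from the schema.
rows = [
    {"text": "doc a", "int_score": 2, "topic_class_1": "History", "topic_prob_1": 0.61},
    {"text": "doc b", "int_score": 4, "topic_class_1": "Science", "topic_prob_1": 0.88},
    {"text": "doc c", "int_score": 5, "topic_class_1": "Science", "topic_prob_1": 0.93},
]

# Keep documents in a high quality bin whose top topic label is confident.
# The thresholds (>= 4, >= 0.8) are illustrative, not prescribed by the dataset.
keep = [r for r in rows if r["int_score"] >= 4 and r["topic_prob_1"] >= 0.8]
print([r["text"] for r in keep])  # ['doc b', 'doc c']
```

With the real dataset, the same predicate can be passed to `dataset.filter(...)` from the `datasets` library.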