---
dataset_info:
features:
- name: review
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 91153465
num_examples: 159443
- name: validation
num_bytes: 11526130
num_examples: 19933
- name: test
num_bytes: 11522522
num_examples: 19928
download_size: 75005133
dataset_size: 114202117
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
task_categories:
- text-classification
language:
- fra
---
# Allocine_clean
The [allocine](https://huggingface.co/datasets/tblard/allocine) dataset contains leaks and duplicated data:
- Leakage between train split and test split: 23
- Leakage between validation split and test split: 15
- Duplicated lines in the train split: 534
- Duplicated lines in the validation split: 52
- Duplicated lines in the test split: 72
In total, this means that about 0.6% of the test data is biased.
This dataset is therefore a cleaned version of allocine, i.e. with the leaks and duplicated data removed.
The resulting dataset is likely still imperfect, with annotation problems that would require further proofreading and correction.
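
The exact cleaning procedure is not published here; the snippet below is only a minimal sketch of how a comparable result could be obtained with exact string matching on the `review` text. Only the original dataset id `tblard/allocine` comes from this card, everything else is an assumption.

```python
from datasets import load_dataset

# Load the original (uncleaned) allocine dataset.
raw = load_dataset("tblard/allocine")

def drop_duplicates(split):
    """Keep only the first occurrence of each review within a split."""
    seen, keep = set(), []
    for i, review in enumerate(split["review"]):
        if review not in seen:
            seen.add(review)
            keep.append(i)
    return split.select(keep)

train = drop_duplicates(raw["train"])
validation = drop_duplicates(raw["validation"])
test = drop_duplicates(raw["test"])

# Remove train/validation rows whose review also appears in the test split,
# so the evaluation data no longer leaks into training.
test_reviews = set(test["review"])
train = train.filter(lambda x: x["review"] not in test_reviews)
validation = validation.filter(lambda x: x["review"] not in test_reviews)
```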
```
DatasetDict({
train: Dataset({
features: ['review', 'label'],
num_rows: 159443 #160000 before
})
validation: Dataset({
features: ['review', 'label'],
num_rows: 19933 #20000 before
})
test: Dataset({
features: ['review', 'label'],
num_rows: 19928 #20000 before
})
})
```
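
The cleaned splits can be loaded like any other Hub dataset. The repo id below is a placeholder; substitute the actual path of this dataset card.

```python
from datasets import load_dataset

# Placeholder repo id; replace with the actual Hub path of this dataset.
dataset = load_dataset("your-username/allocine_clean")

print(dataset)              # DatasetDict with train/validation/test splits
print(dataset["train"][0])  # e.g. {'review': '...', 'label': 1}
```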