---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: label
    dtype: float64
  - name: dataset_name
    dtype: string
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 232895922
    num_examples: 949728
  - name: validation
    num_bytes: 17255970
    num_examples: 69711
  - name: test
    num_bytes: 96102951
    num_examples: 425205
  download_size: 123150665
  dataset_size: 346254843
---
Original dataset: https://huggingface.co/datasets/glue

This dataset is adapted from https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_512_GLUE.
Every GLUE split besides the ax split is included in this dataset.
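The splits can be loaded with the `datasets` library. A minimal sketch; the repo ID below is a placeholder, so substitute this dataset's actual Hub path:

```python
# Minimal loading sketch; "user/bert-base-cased-512-glue" is a placeholder
# repo ID, not necessarily this dataset's actual Hub path.
from datasets import load_dataset

ds = load_dataset("user/bert-base-cased-512-glue")

print(ds)                    # DatasetDict with train/validation/test splits
print(ds["train"].features)  # label, dataset_name, input_ids, token_type_ids, attention_mask
print(ds["train"][0]["dataset_name"])  # which GLUE task the example came from
```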
In the original dataset, examples longer than 512 tokens under the BERT cased tokenizer (bert-base-cased) were removed. Any sequences that still exceed 512 tokens are truncated to 512 tokens.
| Original labels and dataset categories are retained. |
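For reference, a sketch of the 512-token handling described above, using the `transformers` tokenizer. This is illustrative only, not the exact preprocessing used to build the dataset, and the `"sentence"` column name is hypothetical (GLUE tasks use different text columns):

```python
# Tokenize with bert-base-cased and cap everything at 512 tokens.
# "sentence" is a hypothetical column name; adjust per GLUE task.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def encode(example):
    return tokenizer(
        example["sentence"],
        truncation=True,   # sequences over max_length are cut to 512 tokens
        max_length=512,
    )

# encoded = raw_dataset.map(encode)  # yields input_ids, token_type_ids, attention_mask
```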