---
dataset_info:
  features:
  - name: article
    dtype: string
  - name: abstract
    dtype: string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 9692577180
    num_examples: 203037
  - name: validation
    num_bytes: 296631062
    num_examples: 6436
  - name: test
    num_bytes: 297275288
    num_examples: 6440
  download_size: 4391426343
  dataset_size: 10286483530
---
# Dataset Card for "arxiv_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
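As a quick sanity check on the split metadata above, the per-example footprint works out to roughly 45–47 KiB across all three splits (raw article and abstract text plus the three pre-tokenized sequences). The sketch below uses only the numbers from the YAML header; it does not download or open the dataset itself.

```python
# Back-of-envelope size check from the dataset_info header above:
# num_bytes / num_examples per split. Purely illustrative.
splits = {
    "train": (9_692_577_180, 203_037),
    "validation": (296_631_062, 6_436),
    "test": (297_275_288, 6_440),
}

# Average on-disk size per example, in KiB.
avg_kib = {
    name: num_bytes / num_examples / 1024
    for name, (num_bytes, num_examples) in splits.items()
}

for name, kib in avg_kib.items():
    print(f"{name}: ~{kib:.1f} KiB per example")
```

The three split sizes also sum exactly to the reported `dataset_size` of 10286483530 bytes, so the header is internally consistent.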