---
dataset_info:
  features:
  - name: bert_token
    sequence: int64
  - name: gpt2_token
    sequence: int64
  splits:
  - name: train
    num_bytes: 173553456.7202345
    num_examples: 551455
  - name: test
    num_bytes: 261864.0
    num_examples: 1000
  download_size: 42652803
  dataset_size: 173815320.7202345
---
# Dataset Card for "amazon_tokenized"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
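
The metadata above describes two parallel integer-sequence features per example, `bert_token` and `gpt2_token`, with `train` and `test` splits. Below is a minimal sketch of loading and inspecting the dataset with the `datasets` library; the repository id is a placeholder (the actual Hub namespace is not given here), and the assumption that the features hold BERT and GPT-2 tokenizer ids is inferred from the feature names only.

```python
# Minimal loading sketch. "amazon_tokenized" is a placeholder repo id;
# replace it with the full Hub path, e.g. "<user>/amazon_tokenized".
from datasets import load_dataset

dataset = load_dataset("amazon_tokenized")

# Each example holds two int64 sequences: `bert_token` and `gpt2_token`
# (presumably ids from the BERT and GPT-2 tokenizers, per the names).
example = dataset["train"][0]
print(len(example["bert_token"]), len(example["gpt2_token"]))
print(dataset)  # shows the train (551455 rows) and test (1000 rows) splits
```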