---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 30969232
    num_examples: 1000
  download_size: 17993854
  dataset_size: 30969232
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- text-generation
language:
- en
size_categories:
- n<1K
source_datasets: HuggingFaceFW/fineweb
---
# fineweb sample: 1k docs 'long'
A sample of 1k docs from `HuggingFaceFW/fineweb` for use in some experiments. Each document contains:

- more than 6144 GPT-4 tiktoken tokens
- fewer than 8192 GPT-4 tiktoken tokens
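A length filter like the one above can be sketched as follows. This is a hypothetical reconstruction, not the actual script used to build the dataset: `count_tokens` is a stand-in for a GPT-4 tokenizer (a real run would use `len(tiktoken.encoding_for_model("gpt-4").encode(text))`), approximated here with whitespace splitting so the sketch stays self-contained.

```python
# Hypothetical sketch of the token-length filter described above.
MIN_TOKENS = 6144
MAX_TOKENS = 8192

def count_tokens(text: str) -> int:
    # Placeholder tokenizer; a real run would use tiktoken's
    # GPT-4 encoding instead of whitespace splitting.
    return len(text.split())

def keep(doc: dict) -> bool:
    # Keep only docs strictly between the two token bounds,
    # matching the "more than" / "fewer than" criteria above.
    n = count_tokens(doc["text"])
    return MIN_TOKENS < n < MAX_TOKENS
```

Applied over the source dataset, a predicate like `keep` would be passed to something like `datasets.Dataset.filter` before sampling 1k documents.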