---
task_categories:
- text-generation
language:
- en
- fr
- zh
---

# Gutenberg
Our version of the [project gutenberg corpus](https://huggingface.co/datasets/manu/project_gutenberg), as used to pretrain Apertus (v1 was used before 9T tokens, v2 between 9T and 12T).
More details about data provenance, preparation, and statistics can be found in our [tech report](https://github.com/swiss-ai/apertus-tech-report).
Sampling, filtering, and data-preparation scripts can be found in [our dedicated GitHub repository](https://github.com/swiss-ai/pretrain-data/tree/main/pipelines/gutemberg).
Feel free to [reach out](mailto:sven.najem-meyer@epfl.ch) with any questions or suggestions!