---
dataset_info:
  features:
  - name: gutenberg_id
    dtype: int64
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: tokenized_length
    dtype: int64
  - name: metadata
    struct:
    - name: authors
      sequence: string
    - name: bookshelves
      sequence: string
    - name: encoding
      dtype: string
    - name: languages
      sequence: string
    - name: subjects
      sequence: string
    - name: summaries
      sequence: 'null'
    - name: url
      dtype: string
  splits:
  - name: train
    num_bytes: 480844883
    num_examples: 1084
  download_size: 295576071
  dataset_size: 480844883
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
language:
- it
tags:
- project_gutenberg
- text
pretty_name: ITAGutenberg
---
# ITAGutenberg
A collection of all Italian-language books available on Project Gutenberg, intended for pretraining Large Language Models. We collected the plaintext version of each book and lightly processed it to strip the licensing boilerplate that Project Gutenberg usually embeds in the original text.
## Quickstart
Simply download the dataset as you would any other Hugging Face dataset:

```python
from datasets import load_dataset

dataset = load_dataset("tommasobonomo/ITAGutenberg")
```
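Once loaded, each row is a plain dict following the schema described in the Data Schema section. As a small sketch, a predicate like the one below could be passed to `dataset.filter(...)` to keep only monolingual Italian books; the helper name and the sample row are illustrative, not part of the dataset:

```python
def is_monolingual_italian(example: dict) -> bool:
    """True if the book's metadata lists Italian as its only language.

    With the `datasets` library, you would apply this as
    `dataset.filter(is_monolingual_italian)`.
    """
    return example["metadata"]["languages"] == ["it"]


# Hypothetical row with the same shape as a dataset example:
row = {
    "gutenberg_id": 1012,
    "title": "La Divina Commedia",
    "text": "...",
    "tokenized_length": 0,
    "metadata": {"authors": ["Dante Alighieri"], "languages": ["it"]},
}
print(is_monolingual_italian(row))  # -> True
```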
## Data Schema
The dataset is organized with the following schema:
- `gutenberg_id` (int): Project Gutenberg key identifying the book.
- `title` (str): Title of the book.
- `text` (str): Full text of the book from Project Gutenberg.
- `tokenized_length` (int): Length of the book in tokens, computed with the `cl100k_base` encoding from `openai/tiktoken`.
- `metadata` (dict): Additional contextual information about the book, including:
  - `authors` (list[str]): Name(s) of the book's author(s).
  - `bookshelves` (list[str]): Bookshelves that include this book, as per Project Gutenberg.
  - `encoding` (str): Encoding of the original book source.
  - `languages` (list[str]): Languages that appear in the book.
  - `subjects` (list[str]): Subjects assigned to the book by Project Gutenberg.
  - `summaries` (list[str]): Summaries collected from Project Gutenberg for the book.
  - `url` (str): URL from which the text of the book was downloaded.