---
language:
- fra
---
Description
Dataframe containing 2075 French books in txt format (the ~2600 French books available on Project Gutenberg, from which all books by authors present in the french_books_summuries dataset have been removed to avoid any leakage).
More precisely:
- the `texte` column contains the texts
- the `titre` column contains the book title
- the `auteur` column contains the author's name and dates of birth and death (useful if you want to filter the texts to keep only those from a given century onward)
- the `nb_mots` column contains an estimate of the number of words per text (a simple `.split(" ")`; again, useful if you want to filter)
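As a minimal sketch of how these columns can be used for filtering, here is a pandas example. The rows below are invented placeholders mirroring the schema (`texte`, `titre`, `auteur`, `nb_mots`), not real entries from the dataset, and the 50,000-word threshold is an arbitrary choice:

```python
import pandas as pd

# Hypothetical rows mirroring the dataset's schema; not real entries
df = pd.DataFrame({
    "texte": ["Texte du livre...", "Autre texte..."],
    "titre": ["Livre A", "Livre B"],
    "auteur": ["Dupont, Jean (1802-1885)", "Martin, Claire (1920-2001)"],
    "nb_mots": [120_000, 4_500],
})

# Recompute the word-count estimate the same way the card describes:
# a simple split on spaces
df["nb_mots_check"] = df["texte"].str.split(" ").str.len()

# Keep only texts long enough for the intended use, e.g. at least 50,000 words
long_texts = df[df["nb_mots"] >= 50_000]
print(len(long_texts))
```

The same pattern applies to the `auteur` column, e.g. with `df["auteur"].str.extract(...)` on the birth year to keep a given period.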
Assuming that a French word averages 1.3 to 1.5 tokens, we have:
- Estimated number of texts with at least 8K tokens: between 1998 and 2014
- Estimated number of texts with at least 16K tokens: between 1865 and 1891
- Estimated number of texts with at least 32K tokens: between 1661 and 1717
- Estimated number of texts with at least 64K tokens: between 1288 and 1408
- Estimated number of texts with at least 128K tokens: between 563 and 721
- Estimated number of texts with at least 256K tokens: between 82 and 147
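The ranges above can be reproduced from `nb_mots` by applying both ends of the tokens-per-word assumption. A sketch with invented word counts (the real numbers come from the full column); the 80,000-token threshold is illustrative:

```python
# Invented word counts standing in for the dataset's nb_mots column
nb_mots = [120_000, 4_500, 60_000]

threshold = 80_000  # target context length in tokens, e.g. ~80K

# At 1.3 tokens/word a text yields fewer tokens, so fewer texts qualify
# (lower bound of the range); at 1.5 tokens/word, more qualify (upper bound).
low = sum(1 for w in nb_mots if w * 1.3 >= threshold)
high = sum(1 for w in nb_mots if w * 1.5 >= threshold)
print(low, high)
```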