---
dataset_info:
  features:
    - name: metadata
      struct:
        - name: file_name
          dtype: string
        - name: title
          dtype: string
        - name: author
          dtype: string
        - name: language
          dtype: string
    - name: chapter_title
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 32084130.1799591
      num_examples: 1858
    - name: validation
      num_bytes: 1692273.8200408998
      num_examples: 98
  download_size: 20269105
  dataset_size: 33776404
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

# Gutenberg Chapters Dataset

This dataset contains chapters from French books in the Project Gutenberg collection. Each entry represents a single chapter from a book. All books in this dataset were written or edited by Alexandre Dumas.

## Dataset Structure

Each entry in the dataset contains:

- `metadata`: information about the source book, including:
  - `file_name`: original file name
  - `title`: book title
  - `author`: book author
  - `language`: language of the book
- `chapter_title`: the title of the chapter (e.g., "CHAPITRE I" or Roman numerals)
- `text`: the full text content of the chapter
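Concretely, a row matching this schema looks like the sketch below. The values are invented for illustration, not taken from the dataset; the validation helper is likewise just an example, not part of the dataset tooling.

```python
# A hypothetical entry shaped like the declared schema (all values invented).
entry = {
    "metadata": {
        "file_name": "example.txt",
        "title": "Un roman",
        "author": "Alexandre Dumas",
        "language": "fr",
    },
    "chapter_title": "CHAPITRE I",
    "text": "Le premier chapitre commence ici...",
}

def is_valid_entry(e: dict) -> bool:
    """Check that an entry has exactly the fields declared in the schema."""
    meta_keys = {"file_name", "title", "author", "language"}
    return (
        set(e) == {"metadata", "chapter_title", "text"}
        and set(e["metadata"]) == meta_keys
        and all(isinstance(v, str) for v in e["metadata"].values())
        and isinstance(e["chapter_title"], str)
        and isinstance(e["text"], str)
    )
```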

## Usage

You can load this dataset using the Hugging Face datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("1ou2/fr_dumas_chapters")

# Access the first example
example = dataset['train'][0]
print(f"Chapter: {example['chapter_title']}")
print(f"Book: {example['metadata']['title']} by {example['metadata']['author']}")
print(f"Text preview: {example['text'][:200]}...")
```
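Because each row is a single chapter, per-book statistics take one pass over a split. The helper below is a sketch that works on any iterable of rows with this schema; the sample rows are invented, and with the real dataset you would pass `dataset['train']` instead.

```python
from collections import Counter

def chapters_per_book(rows):
    """Count how many chapter rows each book title contributes."""
    return Counter(row["metadata"]["title"] for row in rows)

# Invented sample rows shaped like dataset entries (not real data).
rows = [
    {"metadata": {"title": "Livre A"}, "chapter_title": "CHAPITRE I", "text": "..."},
    {"metadata": {"title": "Livre A"}, "chapter_title": "CHAPITRE II", "text": "..."},
    {"metadata": {"title": "Livre B"}, "chapter_title": "I", "text": "..."},
]
counts = chapters_per_book(rows)
```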

## Dataset Creation

This dataset was created by:

  1. Collecting text files from Project Gutenberg
  2. Preprocessing to remove Project Gutenberg headers and footers and to fix formatting issues (`--` converted to —, underscores removed, carriage returns normalized)
  3. Identifying chapter boundaries using pattern matching
  4. Extracting metadata from the original files
  5. Saving each chapter as a separate entry in JSONL format
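Steps 2 and 3 above can be sketched as follows. The cleanup rules and the chapter-heading regular expression are illustrative guesses at the kind of pattern matching described, not the actual build script.

```python
import re

def clean_text(raw: str) -> str:
    """Apply the formatting fixes described in step 2 (a sketch)."""
    text = raw.replace("\r\n", "\n")      # normalize carriage returns
    text = text.replace("--", "\u2014")   # -- becomes an em dash
    text = text.replace("_", "")          # drop underscore italic markers
    return text

def split_chapters(text: str):
    """Split on headings like 'CHAPITRE I' (step 3); pattern is illustrative."""
    pattern = re.compile(r"^(CHAPITRE\s+[IVXLCDM]+\.?)\s*$", re.MULTILINE)
    parts = pattern.split(text)
    # parts = [preamble, title1, body1, title2, body2, ...]
    chapters = []
    for i in range(1, len(parts) - 1, 2):
        chapters.append({"chapter_title": parts[i].strip(),
                         "text": parts[i + 1].strip()})
    return chapters

sample = "CHAPITRE I\nUn texte -- avec _italique_.\r\nCHAPITRE II\nLa suite."
chapters = split_chapters(clean_text(sample))
```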

## License

This dataset contains works from Project Gutenberg. Project Gutenberg books are free and in the public domain in the United States. Please check the copyright laws in your country before using this dataset.