---
dataset_info:
  features:
  - name: etextno
    dtype: int64
  - name: book_title
    dtype: string
  - name: author
    dtype: string
  - name: issued
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: train
    num_bytes: 21144011332
    num_examples: 58653
  download_size: 12884319326
  dataset_size: 21144011332
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Gutenberg-BookCorpus-Cleaned-Data-English
This dataset has been cleaned and preprocessed with the **Gutenberg_English_Preprocessor** class (given below), starting from the reference Kaggle dataset [75,000+ Gutenberg Books and Metadata 2025](https://www.kaggle.com/datasets/lokeshparab/gutenberg-books-and-metadata-2025). It is restricted to English-language content whose rights status is "Public domain in the USA", so you are free to use it anywhere.
The reference Gutenberg metadata is also available and can be downloaded with the following CLI commands:
```bash
pip install kaggle
kaggle kernels output lokeshparab/gutenberg-metadata-downloader -p /path/to/dest
```
## About Project Gutenberg
Project Gutenberg is a digital library that hosts over 75,000 free eBooks. Users can choose among EPUB, Kindle, and plain text formats, download them, or read them online. The library primarily focuses on older literary works whose U.S. copyright has expired. Thousands of volunteers have digitized and meticulously proofread the eBooks for readers to enjoy.
## Dataset Details
| Column         | Description                                                         |
|----------------|---------------------------------------------------------------------|
| **etextno**    | Unique identifier for each book.                                    |
| **book_title** | Title of the book.                                                  |
| **author**     | Author(s) of the book.                                              |
| **issued**     | Date when the book was published or added to the collection.       |
| **context**    | Cleaned and preprocessed plain-text content in UTF-8 encoding.     |
## Gutenberg English Preprocessor (Methodology)
The **Gutenberg English Preprocessor** is designed to clean and preprocess text data from Project Gutenberg files by removing unwanted patterns such as special markers, Gutenberg-specific sentences, and decorative text blocks.
* **Notebook Reference** :- [Click here](https://www.kaggle.com/code/lokeshparab/demo-of-using-gutenberg-for-english)
* **Features** :-
- **Removes Blocks Enclosed in `=` Symbols** — Eliminates text sections framed by lines of `=` symbols, often found in decorative headers or footers.
- **Removes Gutenberg-Specific Sentences** — Filters out sentences containing the term "Gutenberg" in any case (uppercase, lowercase, or mixed).
- **Removes Small Print Notices** — Identifies and removes text segments marked as "Small Print" content.
- **Trims Text Between Project Gutenberg Start/End Markers** — Extracts content enclosed between `*** START OF THE PROJECT GUTENBERG...` and `*** END OF THE PROJECT GUTENBERG...`.
- **Removes Inline and Block Patterns Marked with `*`, `**`, `***`, etc.** — Effectively cleans unwanted text patterns that are enclosed in stars.
* **Class implementation** :-
```python
import re


class Gutenberg_English_Preprocessor:
    """
    A text preprocessor designed to clean Project Gutenberg text data.

    This class removes unwanted patterns like:
    - Blocks enclosed in '=' lines
    - Lines containing "Gutenberg" (case insensitive)
    - "Small Print" sections from Project Gutenberg files
    - Blocks enclosed in '*' patterns
    """

    def __init__(self, text: str):
        """
        Initializes the Gutenberg_English_Preprocessor with the provided text.

        Args:
            text (str): The text content to be processed.
        """
        self.text = text

    def remove_equal_sign_blocks(self):
        """
        Removes blocks of text enclosed by lines containing only '=' symbols.

        Example:
            ========================
            This content will be removed.
            ========================
        """
        equal_block_pattern = r'^\s*=+\s*\n(?:.*?\n)*?\s*=+\s*$'
        self.text = re.sub(equal_block_pattern, '', self.text, flags=re.MULTILINE)
        self.text = self.text.strip()

    def remove_gutenberg_sentences(self):
        """
        Removes lines that contain the word "Gutenberg" in any case format.

        Example:
            "This is a Project Gutenberg text." → Removed
            "Random sentence without Gutenberg." → Removed
            "This is a normal sentence." → Retained
        """
        gutenberg_pattern = r'^[^\n]*\bgutenberg\b[^\n]*\n?'
        self.text = re.sub(gutenberg_pattern, '', self.text,
                           flags=re.IGNORECASE | re.MULTILINE)
        self.text = self.text.strip()

    def remove_small_print(self):
        """
        Removes Project Gutenberg's "Small Print" sections.
        These sections often contain legal disclaimers and metadata.
        """
        pattern1 = r'\*\*\*START\*\*THE SMALL PRINT.*?\*END\*THE SMALL PRINT!'
        pattern2 = r'\*\*\*START\*\*THE SMALL PRINT.*?\*END THE SMALL PRINT'
        self.text = re.sub(pattern1, '', self.text, flags=re.DOTALL)
        self.text = re.sub(pattern2, '', self.text, flags=re.DOTALL)
        self.text = self.text.strip()

    def start_end(self):
        """
        Trims the text to retain only the content between:
        - "*** START OF THE PROJECT GUTENBERG..."
        - "*** END OF THE PROJECT GUTENBERG..."

        Ensures non-essential content outside these markers is excluded.
        """
        str_str = "*** START OF THE PROJECT GUTENBERG"
        end_str = "*** END OF THE PROJECT GUTENBERG"
        start_idx = self.text.find(str_str)
        end_idx = self.text.find(end_str)
        if start_idx != -1 and end_idx != -1:
            self.text = self.text[start_idx:end_idx]

    def remove_patterns(self):
        """
        Removes patterns enclosed by '*' characters, such as:
        - Inline patterns like "* text *", "** text **", etc.
        - Standalone patterns and multi-line blocks enclosed in '*'
        """
        star_pattern = r'^\s*\*{1,4}.*?\*{1,4}\s*$'
        self.text = re.sub(star_pattern, '', self.text,
                           flags=re.MULTILINE | re.DOTALL)
        self.text = self.text.strip()

    def preprocess(self):
        """
        Executes the full text preprocessing pipeline by calling all individual
        cleaning functions in the desired sequence.

        Returns:
            str: The cleaned and processed text content.
        """
        self.start_end()
        self.remove_small_print()
        self.remove_patterns()
        self.remove_equal_sign_blocks()
        self.remove_gutenberg_sentences()
        return self.text
```
* **Execution Steps**
```python
preprocessor = Gutenberg_English_Preprocessor(text="...raw Project Gutenberg book text...")
clean_text = preprocessor.preprocess()
```
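To make the pipeline's effect concrete, here is a small self-contained sketch that applies the same marker-trimming, star-pattern, and Gutenberg-line regexes used by the class to a toy input string (invented for illustration, not taken from the dataset):

```python
import re

raw = (
    "Front matter that should be dropped.\n"
    "*** START OF THE PROJECT GUTENBERG EBOOK EXAMPLE ***\n"
    "This line mentions Gutenberg and will be removed.\n"
    "A normal sentence that survives cleaning.\n"
    "*** END OF THE PROJECT GUTENBERG EBOOK EXAMPLE ***\n"
    "License boilerplate after the end marker.\n"
)

# 1. Trim to the content between the START/END markers
start = raw.find("*** START OF THE PROJECT GUTENBERG")
end = raw.find("*** END OF THE PROJECT GUTENBERG")
text = raw[start:end]

# 2. Drop star-delimited lines (this removes the START marker line itself)
text = re.sub(r'^\s*\*{1,4}.*?\*{1,4}\s*$', '', text,
              flags=re.MULTILINE | re.DOTALL)

# 3. Drop any remaining line containing "Gutenberg" (case-insensitive)
text = re.sub(r'^[^\n]*\bgutenberg\b[^\n]*\n?', '', text,
              flags=re.IGNORECASE | re.MULTILINE)

clean = text.strip()
print(clean)  # → A normal sentence that survives cleaning.
```

Everything before the START marker, the marker lines themselves, and every line mentioning "Gutenberg" are stripped, leaving only the book body.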
## Usage
This dataset can be effectively applied to various Natural Language Processing (NLP) and Machine Learning (ML) tasks, such as:
* **Creating Embeddings**: Extract meaningful vector representations for search engines and recommendation systems.
* **Training Transformers**: Utilize the dataset to train transformer models like BERT, GPT, etc., for improved language understanding and generation.
* **Language Model Fine-tuning**: Fine-tune LLMs (Large Language Models) to enhance performance in specific domains or tasks.
* **Text Analysis and Classification**: Conduct topic modeling, sentiment analysis, or language detection.
* **Information Retrieval**: Develop powerful search systems by indexing the dataset with metadata attributes.
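As a sketch of the metadata-indexing idea in the last bullet (the records below are hypothetical stand-ins for rows with the dataset's schema, not actual entries), a minimal inverted index over `book_title` could look like this:

```python
from collections import defaultdict

# Hypothetical records mimicking the dataset schema (etextno, book_title, author)
records = [
    {"etextno": 1, "book_title": "A Tale of Two Cities", "author": "Dickens"},
    {"etextno": 2, "book_title": "A Christmas Carol", "author": "Dickens"},
    {"etextno": 3, "book_title": "Moby Dick", "author": "Melville"},
]

# Build an inverted index: lowercase title token -> set of etextno ids
index = defaultdict(set)
for rec in records:
    for token in rec["book_title"].lower().split():
        index[token].add(rec["etextno"])

def search(query: str) -> set:
    """Return ids of books whose titles contain every query token."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for tok in tokens[1:]:
        results &= index.get(tok, set())
    return results

print(search("a carol"))  # → {2}
```

The same pattern scales to the full dataset by indexing `author` and `issued` alongside the title, or by swapping the token sets for vector embeddings.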