---

# Gutenberg Chapters Dataset

This dataset contains chapters from French books in the Project Gutenberg collection. Each entry in the dataset represents a single chapter from a book.

All books in this dataset were written or edited by Alexandre Dumas.

## Dataset Structure

You can load this dataset using the Hugging Face datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("1ou2/fr_dumas_chapters")

# Access the first example
example = dataset['train'][0]
print(f"Text preview: {example['text'][:200]}...")
```

This dataset was created by:

1. Collecting text files from Project Gutenberg
2. Preprocessing to remove Project Gutenberg headers and footers, and fixing formatting issues (`--` converted to `—`, `_` markers removed, carriage returns normalized)
3. Identifying chapter boundaries using pattern matching
4. Extracting metadata from the original files
5. Saving each chapter as a separate entry in JSONL format
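The preprocessing, chapter-splitting, and JSONL steps above can be sketched roughly as follows. This is an illustrative sketch only, not the actual pipeline code: the chapter-heading regex, the `chapter_index`/`text` field names, and the exact cleanup rules are assumptions.

```python
import json
import re

def clean_text(raw: str) -> str:
    """Formatting fixes from step 2 (illustrative)."""
    text = raw.replace("\r\n", "\n")      # normalize carriage returns
    text = text.replace("--", "\u2014")   # -- converted to an em dash
    text = text.replace("_", "")          # drop Gutenberg _italics_ markers
    return text

# Hypothetical heading pattern for French texts (step 3), e.g. "CHAPITRE XII".
CHAPTER_RE = re.compile(r"^CHAPITRE\s+[IVXLC]+", re.MULTILINE)

def split_chapters(text: str) -> list[str]:
    """Split cleaned text at chapter headings (illustrative)."""
    starts = [m.start() for m in CHAPTER_RE.finditer(text)]
    return [text[a:b].strip() for a, b in zip(starts, starts[1:] + [len(text)])]

def to_jsonl(chapters: list[str], path: str) -> None:
    """Save each chapter as a separate JSONL entry (step 5)."""
    with open(path, "w", encoding="utf-8") as f:
        for i, chapter in enumerate(chapters):
            record = {"chapter_index": i, "text": chapter}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

The real pipeline additionally extracts book-level metadata from the Gutenberg files (step 4) before writing each record.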