Nelathan committed · verified
Commit ca54b42 · 1 Parent(s): c1368df

Update README.md

Files changed (1): README.md (+72 −20)

README.md CHANGED
@@ -1,25 +1,77 @@
 ---
 dataset_info:
   features:
-  - name: link
-    dtype: string
-  - name: title
-    dtype: string
-  - name: author
-    dtype: string
-  - name: text
-    dtype: string
-  - name: language
-    dtype: string
   splits:
-  - name: train
-    num_bytes: 712126185
-    num_examples: 1219
-  download_size: 426251469
-  dataset_size: 712126185
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
 ---
 ---
 dataset_info:
   features:
+  - name: link
+    dtype: string
+  - name: title
+    dtype: string
+  - name: author
+    dtype: string
+  - name: text
+    dtype: string
+  - name: language
+    dtype: string
   splits:
+  - name: train
+    num_bytes: 712126185
+    num_examples: 1219
 ---

# Standard Ebooks Text Dataset

This dataset contains the full text of public domain books sourced from [Standard Ebooks](https://standardebooks.org). It is intended for use in Natural Language Processing tasks, particularly Large Language Model pretraining, fine-tuning, and research.

Standard Ebooks provides high-quality, carefully formatted, and proofread editions of classic literature, making this a valuable collection of clean text data.

## Dataset Structure

The dataset consists of a single split: `train`.

Each entry (book) in the dataset has the following features:

- `link` (string): The URL of the book's source repository in the Standard Ebooks GitHub organization.
- `title` (string): The title of the book.
- `author` (string): The author(s) of the book.
- `text` (string): The full extracted text of the book, formatted in Markdown.
- `language` (string): The language code of the book (e.g., `en-GB`).

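As a sketch, a single record can be represented as a plain dictionary with this schema. The field values below are illustrative only (not actual dataset contents), and the validation helper is a hypothetical convenience, not part of the builder:

```python
# Illustrative record following the dataset schema; values are made up
# for demonstration and are not taken from the dataset.
record = {
    "link": "https://github.com/standardebooks/jane-austen_pride-and-prejudice",
    "title": "Pride and Prejudice",
    "author": "Jane Austen",
    "text": "## Chapter 1\n\nIt is a truth universally acknowledged...",
    "language": "en-GB",
}

EXPECTED_FIELDS = {"link", "title", "author", "text", "language"}

def is_valid_record(rec: dict) -> bool:
    """Check that a record has exactly the expected fields, all strings."""
    return set(rec) == EXPECTED_FIELDS and all(
        isinstance(v, str) for v in rec.values()
    )

print(is_valid_record(record))  # True
```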
## Data Source

The source material for this dataset comes directly from the public domain book repositories maintained by the [Standard Ebooks GitHub organization](https://github.com/standardebooks).

The dataset was built using a custom Python script available on GitHub: [https://github.com/Nelathan/standardebooks-dataset-builder](https://github.com/Nelathan/standardebooks-dataset-builder) (replace with your actual repo link once published).

## Content Filtering and Formatting

The dataset builder script extracts the content files (`.xhtml`) listed in the `content.opf` file's spine. It performs the following filtering and formatting steps:

- **Exclusion by Filename:** Files commonly associated with metadata, licensing, or structural elements outside the main narrative are excluded based on keywords in their filenames (e.g., imprint, colophon, uncopyright, dedication, acknowledgments, foreword, preface, epigraph, afterword, appendix, glossary, index, bibliography, toc, cover, license).
- **Copyright Page Exclusion:** Files containing common copyright notices are excluded.
- **Inclusion of Structural Elements:** Files representing parts, volumes, or books (e.g., `part-1.xhtml`, `book-i.xhtml`) are **included**, as they provide valuable structural context.
- **Markdown Conversion:** The XHTML content is converted to Markdown for a consistent text representation. Basic HTML tags for paragraphs, headings, lists, emphasis, etc. are preserved in the Markdown output.
- **XML Prolog Stripping:** The `<?xml ...?>` prolog is removed.
- **HTML Body Extraction:** Only the content within the `<body>` tag is processed, to avoid including metadata from the `<head>`.

The goal is to capture the primary narrative and structural content of each book while excluding boilerplate and external reference material.

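The exclusion and extraction rules above can be sketched roughly as follows. This is a minimal illustration, not the actual builder code: the keyword list is abridged, and the helper names and regexes are assumptions:

```python
import re

# Abridged keyword list from the filename-exclusion rule above.
EXCLUDE_KEYWORDS = (
    "imprint", "colophon", "uncopyright", "dedication", "foreword",
    "preface", "epigraph", "afterword", "appendix", "glossary",
    "bibliography", "toc", "cover", "license",
)

# Structural files such as part-1.xhtml or book-i.xhtml are kept.
STRUCTURAL = re.compile(r"^(part|book|volume)\b", re.IGNORECASE)

def keep_spine_file(filename: str) -> bool:
    """Decide whether a spine entry belongs in the dataset."""
    stem = filename.rsplit("/", 1)[-1].removesuffix(".xhtml")
    if STRUCTURAL.match(stem):
        return True
    return not any(kw in stem for kw in EXCLUDE_KEYWORDS)

def extract_body(xhtml: str) -> str:
    """Strip the <?xml ...?> prolog and keep only the <body> content."""
    xhtml = re.sub(r"<\?xml[^?]*\?>", "", xhtml)
    match = re.search(r"<body[^>]*>(.*)</body>", xhtml, flags=re.DOTALL)
    return match.group(1).strip() if match else ""
```

Checking structural names before the keyword filter matters: `book-i.xhtml` would otherwise risk being caught by a broader substring match.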
## Dataset Curators

This dataset was created by [Daniel Otto](https://github.com/Nelathan) (replace with your GitHub profile link).

## Licensing

The code used to generate this dataset is licensed under the MIT License, available in the [GitHub repository](https://github.com/Nelathan/ebooks-to-dataset/blob/main/LICENSE).

The content of the dataset itself is composed of public domain works from [Standard Ebooks](https://github.com/standardebooks). Standard Ebooks dedicates its contributions to the worldwide public domain under the terms of the [CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/). Users of this dataset should be aware that the included content is distributed under these public domain terms.

## Citation

Please acknowledge Standard Ebooks as the source of these high-quality public domain texts. Refer to the Standard Ebooks website and each book's metadata for the specific source and licensing information for that work.

## Potential Uses

- Training and fine-tuning large language models (LLMs)
- Text generation and analysis
- Natural Language Understanding research
- Building literary analysis tools
- Creating specialized datasets based on classic literature
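For pretraining-style uses, the Markdown `text` field of each book typically needs to be split into fixed-size samples first. A minimal, stdlib-only sketch; the chunk size and overlap are arbitrary example values, not anything prescribed by the dataset:

```python
def chunk_text(text: str, max_chars: int = 2048, overlap: int = 128) -> list[str]:
    """Split a book's text into overlapping character windows."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    step = max_chars - overlap  # advance by the non-overlapping portion
    for start in range(0, len(text), step):
        chunk = text[start:start + max_chars]
        if chunk:
            chunks.append(chunk)
    return chunks

# Example: split a 5000-character string into windows of at most 512 chars.
samples = chunk_text("word " * 1000, max_chars=512, overlap=64)
```

A token-based splitter would be more faithful to actual LLM context windows, but character windows keep the sketch dependency-free.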