- **Homepage:** https://github.com/roemmele/AbLit
- **Repository:** https://github.com/roemmele/AbLit
- **Paper:** https://arxiv.org/pdf/2302.06579.pdf
- **Point of Contact:** Melissa Roemmele (melissa@roemmele.io)
### Dataset Summary

The AbLit dataset contains **ab**ridged versions of 10 classic English **lit**erature books.
The abridgements were written and made publicly available by Emma Laybourn [here](http://www.englishliteratureebooks.com/classicnovelsabridged.html).
This is the first known dataset for NLP research that focuses on the abridgement task. See the paper for details.

### Languages

English

## Dataset Structure

Each passage in the original version of a book chapter is aligned with its corresponding passage in the abridged version. These aligned pairs are available for various passage sizes: sentences, paragraphs, and multi-paragraph "chunks". The passage size is specified when loading the dataset. There are train/dev/test splits for items of each size.

| Passage Size | Description | # Train | # Dev | # Test |
| ------------ | ----------- | ------- | ----- | ------ |
| Chapters | Each passage is a single chapter | 808 | 10 | 50 |
| Sentences | Each passage is a sentence delimited by the NLTK sentence tokenizer | 122,219 | 1,143 | 10,431 |
| Paragraphs | Each passage is a paragraph delimited by a line break | 37,227 | 313 | 3,125 |
| Chunks | Each passage consists of up to X sentences, which may span more than one paragraph. X=10 is provided here; to derive chunks with other lengths X, see the GitHub repo | 14,857 | 141 | 1,264 |
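As an illustration of how chunks relate to sentences and paragraphs, the sketch below groups pre-tokenized sentences into chunks of up to X sentences that may cross paragraph boundaries. This is an invented reimplementation (`make_chunks` is a made-up name); see the GitHub repo for the actual derivation code.

```python
def make_chunks(paragraphs, max_sents=10):
    # Flatten the paragraphs (lists of sentences) into one sentence stream,
    # then slice it into chunks of at most max_sents sentences; a chunk may
    # therefore span a paragraph boundary.
    sentences = [sent for para in paragraphs for sent in para]
    return [sentences[i:i + max_sents]
            for i in range(0, len(sentences), max_sents)]

# Two short paragraphs, five sentences in total
paragraphs = [["S1.", "S2.", "S3."], ["S4.", "S5."]]
print(make_chunks(paragraphs, max_sents=2))
# [['S1.', 'S2.'], ['S3.', 'S4.'], ['S5.']]
```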

#### Example Usage

To load aligned sentences:

```python
from datasets import load_dataset

data = load_dataset("ablit", "sentences")
```
### Data Fields

Original: passage text in the original version
Abridged: passage text in the abridged version
Book: title of book containing passage
Chapter: title of chapter containing passage

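For illustration, an individual example can be pictured as a mapping over these fields. The record below is invented, and the lowercase field names are an assumption, not necessarily the dataset's exact keys:

```python
# Hypothetical record; the passage text and titles are invented for illustration
example = {
    "original": "He had been waiting for a very long time, and he was weary.",
    "abridged": "He had waited a long time and was weary.",
    "book": "Example Novel",
    "chapter": "Chapter I",
}

# Abridgement shortens the passage; a word-count ratio makes that concrete
ratio = len(example["abridged"].split()) / len(example["original"].split())
print(f"compression ratio: {ratio:.2f}")  # 9 words / 13 words ≈ 0.69
```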
## Dataset Creation
### Curation Rationale

The author Emma Laybourn wrote abridged versions of classic English literature books.

#### Initial Data Collection and Normalization

We obtained the original and abridged versions of the books from the respective websites.

#### Who are the source language producers?

Emma Laybourn

#### Annotation process

We designed a procedure for automatically aligning passages between the original and abridged versions of each chapter, and conducted a human evaluation to verify that these alignments are highly accurate. The training split of the dataset has ~99% alignment accuracy; the dev and test splits were fully human-validated to ensure 100% accuracy. See the paper for further explanation.

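The paper documents the actual alignment procedure. As a rough, simplified illustration only (not the authors' method), a similarity-based aligner could greedily pair each abridged sentence with its most similar original sentence:

```python
from difflib import SequenceMatcher

def greedy_align(original_sents, abridged_sents, threshold=0.5):
    # Toy stand-in for the paper's alignment procedure: pair each abridged
    # sentence with the most lexically similar original sentence, keeping
    # only pairs above a similarity threshold.
    pairs = []
    for abridged in abridged_sents:
        best = max(original_sents,
                   key=lambda orig: SequenceMatcher(None, orig, abridged).ratio())
        if SequenceMatcher(None, best, abridged).ratio() >= threshold:
            pairs.append((best, abridged))
    return pairs

original = ["It was a dark and stormy night.",
            "The rain fell in torrents upon the roof."]
abridged = ["It was a stormy night."]
print(greedy_align(original, abridged))
```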
#### Who are the annotators?