---
license: other
license_name: research-only
license_link: LICENSE
language:
- en
---
# Dataset Card: Processed 5 Scientific Books Sentence Pairs

## Dataset Description

- **Language:** English (scientific/technical domain)
- **License:** Research use only (derived from copyrighted books; the full texts are not redistributed)

**Dataset Summary:** This dataset contains sentence pairs extracted from 5 scientific books. Each book was processed with GROBID to obtain structured text from the PDFs. Sentences were then segmented and paired for training sentence-transformer models on semantic similarity tasks. To keep the books balanced, at most 6,000 sentences per book were included.
## Dataset Structure

### Data Instances

Each entry is a pair of sentences:

```json
{
  "sentence_0": "Some of the generated samples that had been achieved with this architecture already in 2014 can be seen in Figure 3.14.",
  "sentence_1": "Conditioning on Text: So far, only image generation has been covered, completely ignoring textual input."
}
```
### Data Splits

- All data is in the `train` split.
- Total size: ~66,424 sentence pairs.
- Balanced across the 5 books (at most 6,000 sentences per book).
## Dataset Creation

### Curation Rationale

The dataset was created to provide high-quality sentence pairs for training and evaluating sentence-transformer models in the scientific domain. Limiting each book to 6,000 sentences ensures balanced representation and reduces copyright risk.

### Source Data

- **Books:** 5 scientific/technical books (copyrighted, not redistributed).
- **Extraction:** PDFs processed with GROBID → structured text → sentence segmentation (NLTK).
- **Pairs:** Constructed from consecutive sentences and curated positive/negative examples.
### Data Extraction Logic

1. Raw PDFs are processed with GROBID.
2. Sentences are segmented with NLTK.
3. At most 6,000 sentences per book are kept.
4. Sentence pairs are generated for semantic similarity training.
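The capping and consecutive-sentence pairing steps above can be sketched as follows. This is an illustrative sketch, not the dataset's actual build script: `make_pairs` and `MAX_SENTENCES_PER_BOOK` are hypothetical names, the input is assumed to be already segmented (the real pipeline runs NLTK's `sent_tokenize` over GROBID output), and the curated positive/negative examples mentioned under Source Data are not covered here.

```python
# Illustrative sketch of the per-book cap and consecutive-sentence
# pairing described above. Sentences are assumed pre-segmented; in the
# real pipeline they come from nltk.sent_tokenize over GROBID output.
MAX_SENTENCES_PER_BOOK = 6000  # per-book cap from this card

def make_pairs(sentences, cap=MAX_SENTENCES_PER_BOOK):
    """Cap the sentence list, then pair each sentence with its successor."""
    kept = sentences[:cap]
    return [
        {"sentence_0": first, "sentence_1": second}
        for first, second in zip(kept, kept[1:])
    ]

book = [
    "Some of the generated samples can be seen in Figure 3.14.",
    "Conditioning on Text: So far, only image generation has been covered.",
    "Textual input is addressed in the next section.",
]
pairs = make_pairs(book)
print(len(pairs))  # 2 pairs from 3 consecutive sentences
```

Each emitted pair uses the same `sentence_0`/`sentence_1` schema shown under Data Instances.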
## Additional Information

### Citation

If you use this dataset, please cite:

```bibtex
@misc{aghakhani2025synergsticrag,
  author       = {Danial Aghakhani Zadeh},
  title        = {Processed 5 Scientific Books Sentence Pairs},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/DigitalAsocial/ds-tb-5-g}}
}
```
### Personal and Sensitive Information

The dataset consists of scientific/technical text. No personal or sensitive information is included.
### Bias, Risks, and Limitations

- The texts reflect the style and biases of their original authors.
- The dataset is domain-specific (scientific books) and may not generalize to everyday language.
- The full copyrighted texts are not included; only derived sentence pairs are shared.
### Notice and Takedown Policy

If you believe this dataset contains material that infringes copyright, please contact us with:

- your contact information,
- a reference to the original work, and
- identification of the material claimed to be infringing.

We will comply with legitimate requests by removing the affected sources from future releases.
### Dataset Curators

Created by Danial Aghakhani Zadeh for research on sentence-transformers and semantic similarity.

### License

- Derived from copyrighted books.
- Shared under a Research Use Only license.
- Full texts are not redistributed.