This view is limited to 50 files because it contains too many changes. See the raw diff here.
Files changed (50)
  1. .gitattributes +0 -0
  2. README.md +8 -212
  3. data/stackexchange/1-1/0_2289.jsonl +0 -3
  4. data/stackexchange/1-1/1000_2289.jsonl +0 -3
  5. data/stackexchange/1-1/1001_2289.jsonl +0 -3
  6. data/stackexchange/1-1/1002_2289.jsonl +0 -3
  7. data/stackexchange/1-1/1003_2289.jsonl +0 -3
  8. data/stackexchange/1-1/1004_2289.jsonl +0 -3
  9. data/stackexchange/1-1/1005_2289.jsonl +0 -3
  10. data/stackexchange/1-1/1006_2289.jsonl +0 -3
  11. data/stackexchange/1-1/1007_2289.jsonl +0 -3
  12. data/stackexchange/1-1/1008_2289.jsonl +0 -3
  13. data/stackexchange/1-1/1009_2289.jsonl +0 -3
  14. data/stackexchange/1-1/100_2289.jsonl +0 -3
  15. data/stackexchange/1-1/1010_2289.jsonl +0 -3
  16. data/stackexchange/1-1/1011_2289.jsonl +0 -3
  17. data/stackexchange/1-1/1012_2289.jsonl +0 -3
  18. data/stackexchange/1-1/1013_2289.jsonl +0 -3
  19. data/stackexchange/1-1/1014_2289.jsonl +0 -3
  20. data/stackexchange/1-1/1015_2289.jsonl +0 -3
  21. data/stackexchange/1-1/1016_2289.jsonl +0 -3
  22. data/stackexchange/1-1/1017_2289.jsonl +0 -3
  23. data/stackexchange/1-1/1018_2289.jsonl +0 -3
  24. data/stackexchange/1-1/1019_2289.jsonl +0 -3
  25. data/stackexchange/1-1/101_2289.jsonl +0 -3
  26. data/stackexchange/1-1/1020_2289.jsonl +0 -3
  27. data/stackexchange/1-1/1021_2289.jsonl +0 -3
  28. data/stackexchange/1-1/1022_2289.jsonl +0 -3
  29. data/stackexchange/1-1/1023_2289.jsonl +0 -3
  30. data/stackexchange/1-1/1024_2289.jsonl +0 -3
  31. data/stackexchange/1-1/1025_2289.jsonl +0 -3
  32. data/stackexchange/1-1/1026_2289.jsonl +0 -3
  33. data/stackexchange/1-1/1027_2289.jsonl +0 -3
  34. data/stackexchange/1-1/1028_2289.jsonl +0 -3
  35. data/stackexchange/1-1/1029_2289.jsonl +0 -3
  36. data/stackexchange/1-1/102_2289.jsonl +0 -3
  37. data/stackexchange/1-1/1030_2289.jsonl +0 -3
  38. data/stackexchange/1-1/1031_2289.jsonl +0 -3
  39. data/stackexchange/1-1/1032_2289.jsonl +0 -3
  40. data/stackexchange/1-1/1033_2289.jsonl +0 -3
  41. data/stackexchange/1-1/1034_2289.jsonl +0 -3
  42. data/stackexchange/1-1/1035_2289.jsonl +0 -3
  43. data/stackexchange/1-1/1036_2289.jsonl +0 -3
  44. data/stackexchange/1-1/1037_2289.jsonl +0 -3
  45. data/stackexchange/1-1/1038_2289.jsonl +0 -3
  46. data/stackexchange/1-1/1039_2289.jsonl +0 -3
  47. data/stackexchange/1-1/103_2289.jsonl +0 -3
  48. data/stackexchange/1-1/1040_2289.jsonl +0 -3
  49. data/stackexchange/1-1/1041_2289.jsonl +0 -3
  50. data/stackexchange/1-1/1042_2289.jsonl +0 -3
.gitattributes CHANGED
The diff for this file is too large to render. See raw diff
 
README.md CHANGED
@@ -1,66 +1,16 @@
  ---
  license: odc-by
- task_categories:
- - text-generation
- language:
- - en
- size_categories:
- - n>1T
  ---
  # TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend
  <center><img src="llm360_logo(1).png" alt="k2 eval table" /></center>

- ## Changelog
-
- | Version | Details |
- |---------|---------|
- | v1.1 | Added new data sources: TxT360_BestOfWeb, TxT360_QA, europarl-aligned, and wikipedia_extended. |
-
- ## Details of v1.1 Additions
-
- - **TxT360_BestOfWeb**: This is a filtered version of the TxT360 dataset, created using the [ProX document filtering model](https://huggingface.co/gair-prox/web-doc-refining-lm). The model is similar to the [FineWeb-Edu classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier), but also assigns an additional format score that considers how a document is formatted.
-
- - **TxT360_QA**: Synthetic QA pairs generated for each document using Mistral-7B-Instruct-v0.3. QA pairs are appended to the end of every document in the format:
-
-   ```json
-   {
-     "text": "{ORIGINAL_DOCUMENT_TEXT}\n\nQ: {QUESTION_1}\nA: {ANSWER_1}\n\nQ: {QUESTION_2}\nA: {ANSWER_2}......{ANSWER_N}\n",
-     "meta": {original TxT360 meta}
-   }
-   ```
-   The number of QA pairs may differ for each document, providing diverse question-answering supervision.
-
- - **europarl-aligned**: Europarl v7 data processed to align English source text with parallel corpora in multiple languages. Each sample concatenates the same content in different languages, in no fixed language order. Steps include reading the English source text, matching it with the parallel corpus data, and concatenating the multilingual content for robust cross-lingual training, e.g.:
-
-   ```json
-   {
-     "text": "# English\n\n[English content]\n\n# French\n\n[French content]\n\n# Italian\n\n[Italian content]\n...",
-     "meta": {
-       "language": "fi-de-nl-el-it-fr-en-pt-sv-da-es",
-       "src_file": "ep-00-01-17"
-     }
-   }
-   ```
-
- - **wikipedia_extended**: An enhanced version of the Wikipedia data that:
-   - **Appends abstracts** of the articles linked from the source article's abstract to each Wikipedia document.
-   - **Creates a contextually dense document** with interconnected information from related articles.
-   - **Enables long-context training** by allowing models to process extended sequences of linked content.
-   - **Enhances model ability** to understand topic relationships, maintain coherence in long contexts, and generate accurate responses across topics.
-
-   ```json
-   {
-     "text": "{ORIGINAL ARTICLE}\n\n{First Link Title}\n\"{First Link Abstract}\"\n\n{Second Link Title}\n\"{Second Link Abstract}\"..."
-   }
-   ```
-
-
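The v1.1 record formats above all use the TxT360 `{"text": ..., "meta": ...}` JSONL envelope. As a minimal sketch of how a TxT360_QA-style line could be assembled (the `build_qa_record` helper and its inputs are illustrative, not part of the released tooling):

```python
import json

def build_qa_record(doc_text, qa_pairs, meta):
    """Append synthetic Q/A pairs to a document, TxT360_QA-style.

    Hypothetical helper: the dataset ships records already in this
    shape; this only illustrates the format described above.
    """
    qa_block = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return {"text": f"{doc_text}\n\n{qa_block}\n", "meta": meta}

record = build_qa_record(
    "Gradient descent minimizes a loss iteratively.",
    [("What does gradient descent minimize?", "A loss function.")],
    {"lang": "en"},
)
line = json.dumps(record)  # one line of a *.jsonl shard
```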
  ## We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g., FreeLaw, PG-19), providing pretraining teams with a recipe to easily adjust data weighting, obtain the largest high-quality open-source dataset, and train the most performant models.

  # TxT360 Compared to Common Pretraining Datasets
  | Data Source | TxT360 | FineWeb | RefinedWeb | RedPajamaV2 | C4 | Dolma | RedPajamaV1 | The Pile |
  |---------------------------|--------|---------|------------|-------------|----|-------|-------------|--------------------|
  | CommonCrawl Snapshots | 99 | 96 | 90 | 84 | 1 | 24 | 5 | 0.6% of 74 |
- | Papers | 5 Sources | - | - | - | - | 1 Source | 1 Source | 4 Sources |
  | Wikipedia | 310+ Languages | - | - | - | - | Included | Included | English Only |
  | FreeLaw | Included | - | - | - | - | - | - | Included |
  | DM Math | Included | - | - | - | - | - | - | Included |
@@ -69,11 +19,12 @@ size_categories:
  | HackerNews | Included | - | - | - | - | - | - | Included |
  | Ubuntu IRC | Included | - | - | - | - | - | - | Included |
  | EuroParl | Included | - | - | - | - | - | - | Included |
- | StackExchange | Included | - | - | - | - | - | - | Included |
  | Code | * | - | - | - | - | Included | Included | Included |

  * TxT360 does not include code. This decision was made due to the perceived low duplication of code with other sources.

  Complete details on the dataset can be found in our blog post [here](https://huggingface.co/spaces/LLM360/TxT360).

@@ -100,10 +51,10 @@ We further highlight the importance of mixing the datasets together with the rig
  | DM Math | 22 GB | 5.23B | - |
  | USPTO | 45 GB | 4.95B | Q3 2024 |
  | PG-19 | 11 GB | 2.63B | - |
- | HackerNews | 4.2 GB | 1.05B | Q4 2023 |
- | Ubuntu IRC | 6 GB | 1.89B | Q3 2024 |
  | Europarl | 6.1 GB | 1.96B | - |
- | StackExchange | 81 GB | 27.76B | Q4 2023 |

  The [TxT360](https://huggingface.co/spaces/LLM360/TxT360) blog post provides all the details behind how we approached and implemented the following features:

@@ -116,161 +67,6 @@ Each data source was filtered individually with respect to the underlying data.
  ## Global Deduplication
  After the web and curated sources were filtered, all sources were globally deduplicated to create TxT360. The tips and tricks behind the deduplication process are included.

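The paragraph above compresses a lot of machinery. As a toy illustration of the near-duplicate detection that underlies global deduplication (a stdlib-only MinHash sketch; this is not the actual TxT360 pipeline, whose details are in the blog post):

```python
import hashlib

def shingles(text, n=3):
    """Character n-grams; production pipelines usually shingle on words."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(text, num_perm=64):
    """One min-hash per simulated permutation, via salted 64-bit hashes."""
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(2, "big") + b"\x00" * 14  # blake2b salt is 16 bytes
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "big",
            )
            for s in shingles(text)
        ))
    return sig

def estimated_jaccard(a, b):
    """Fraction of matching signature slots approximates Jaccard similarity."""
    sa, sb = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)
```

Documents whose estimated similarity clears a threshold would land in the same duplicate bucket; pipelines at this scale typically add LSH banding on top so signatures never need all-pairs comparison.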
- ## Dataset Structure
- The dataset is organized under the ```data``` directory, with each subdirectory representing a data subset.
- Below is an overview of the structure and organization of these subsets:
- ```
- ├── data
- │   ├── common-crawl                # data subset
- │   │   ├── CC-MAIN-2013-20         # common-crawl dumps
- │   │   │   ├── 1-1                 # number of duplicates
- │   │   │   │   ├── chunk_000_0000.jsonl.gz
- │   │   │   │   ├── ...
- │   │   │   ├── 2-5
- │   │   │   │   ├── chunk_000_0000.jsonl.gz
- │   │   │   │   ├── ...
- │   │   │   ├── ...
- │   │   ├── CC-MAIN-2013-48
- │   │   │   ├── 1-1
- │   │   │   │   ├── chunk_000_0000.jsonl.gz
- │   │   │   │   ├── ...
- │   │   │   ├── ...
- │   │   ├── ...
- │   ├── dm_math
- │   │   ├── full_data_1
- │   │   │   ├── 0_11255.jsonl
- │   │   │   ├── ...
- │   │   ├── full_data_2
- │   │   │   ├── 10000_11255.jsonl
- │   │   │   ├── ...
- │   ├── arxiv
- │   │   ├── 1-1                     # number of duplicates
- │   │   │   ├── 0_171.jsonl
- │   │   │   ├── ...
- │   │   ├── 2-5
- │   │   │   ├── 0_2.jsonl
- │   │   │   ├── ...
- │   │   ├── ...
- │   ├── europarl
- │   │   ├── 1-1                     # number of duplicates
- │   │   │   ├── 0_6.jsonl
- │   │   │   ├── ...
- │   │   ├── 2-5
- │   │   │   ├── 0_0.jsonl
- │   │   │   ├── ...
- │   │   ├── ...
- │   ├── ...
- ```
-
- ### Common Crawl (common-crawl)
- Each subdirectory under ```common-crawl``` corresponds to a specific dump of the dataset.
- Inside each dump folder, the data is further segmented into buckets based on the number of duplicates identified during deduplication:
-
- - ```1-1```: Contains documents with no duplicates across the dataset.
- - ```2-5```, ```6-10```, ```11-100```, ```101-1000```, ```1001-30000000```: Each contains documents that fall within the respective range of duplicates.
-
- Example path: ```data/common-crawl/CC-MAIN-2013-20/1-1/chunk_000_0000.jsonl.gz```
-
- ### DM Math (dm_math)
- The ```dm_math``` subset is divided into two subfolders to comply with the limit of 10,000 files per folder in a HuggingFace repository:
-
- Example path: ```data/dm_math/full_data_1/0_11255.jsonl```
-
- ### Others
- Similar to common-crawl, other curated data subsets, such as arxiv, europarl, etc., are organized by the number of duplicates:
- - ```1-1```, ```2-5```, ```6-10```, ```11-100```, ```101-1000```, ```1001-inf```
-
- Kindly note that some data subsets might not include the folder ```1001-inf``` (```1001-30000000``` in ```common-crawl```) or might contain only a few documents in such a folder due to the rarity of documents duplicated more than 1000 times.
-
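Given this layout, a subset can be scanned bucket by bucket. A small sketch, assuming a local checkout of the repository (`iter_docs` is an illustrative helper, not part of the dataset tooling):

```python
import gzip
import json
from pathlib import Path

BUCKETS = ["1-1", "2-5", "6-10", "11-100", "101-1000", "1001-inf"]

def iter_docs(subset_dir):
    """Yield (bucket, doc) for every document in one data subset.

    Handles both plain .jsonl (curated sources) and .jsonl.gz
    (common-crawl); buckets missing on disk, such as a rare
    1001-inf folder, are simply skipped.
    """
    root = Path(subset_dir)
    for bucket in BUCKETS:
        bucket_dir = root / bucket
        if not bucket_dir.is_dir():
            continue
        for path in sorted(bucket_dir.iterdir()):
            opener = gzip.open if path.suffix == ".gz" else open
            with opener(path, "rt", encoding="utf-8") as f:
                for line in f:
                    yield bucket, json.loads(line)

# e.g.: for bucket, doc in iter_docs("data/arxiv"): ...
```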
- ## Data Schema
-
- ### Common Crawl (common-crawl)
- The documents in common-crawl follow the schema:
- ```python
- {'text': '...',                       # text of the document
-  'meta':
-   {
-    'lang': 'en',                      # top-1 language detected by the fastText model
-    'lang_score': 0.912118136882782,   # language score for the detected language
-    'url': 'http://www.shopgirljen.com/2017/10/lg-celebrates-5-years-of-lg-oled-tv.html', # the url the raw webpage was scraped from
-    'timestamp': '2024-07-24T00:56:12Z', # timestamp from the Common Crawl raw data
-    'cc-path': 'crawl-data/CC-MAIN-2024-30/segments/1720763518130.6/warc/CC-MAIN-20240723224601-20240724014601-00300.warc.gz', # the path of the document in the raw Common Crawl
-    'quality_signals':
-     {
-      'url_score': 0.0,
-      'fraction_of_duplicate_lines': 0.0,
-      'fraction_of_characters_in_duplicate_lines': 0.0,
-      'fraction_of_duplicate_paragraphs': 0.0,
-      'fraction_of_characters_in_duplicate_paragraphs': 0.0,
-      'fraction_of_characters_in_most_common_ngram': [[2, 0.03626373626373627],
-                                                      [3, 0.03296703296703297],
-                                                      [4, 0.01868131868131868]],
-      'fraction_of_characters_in_duplicate_ngrams': [[5, 0.01868131868131868],
-                                                     [6, 0.01868131868131868],
-                                                     [7, 0.01868131868131868],
-                                                     [8, 0.0],
-                                                     [9, 0.0],
-                                                     [10, 0.0]],
-      'fraction_of_words_corrected_in_lines': 0.0,
-      'fraction_of_lines_ending_with_ellipsis': 0.0,
-      'fraction_of_lines_starting_with_bullet_point': 0.0,
-      'fraction_of_lines_with_toxic_words': 0.0,
-      'num_of_lines_with_toxic_words': 0,
-      'num_of_toxic_words': 0,
-      'word_count': 358,
-      'mean_word_length': 5.083798882681564,
-      'num_of_sentences': 19,
-      'symbol_to_word_ratio': 0.0,
-      'fraction_of_words_with_alpha_character': 1.0,
-      'num_of_stop_words': 82,
-      'num_of_paragraphs': 0,
-      'has_curly_bracket': False,
-      'has_lorem_ipsum': False,
-      'orig_text_has_dup_lines': False
-     },
-    'dup_signals':
-     {
-      'dup_doc_count': 166,            # the number of duplicated documents
-      'dup_dump_count': 57,            # the number of dumps that the duplicated documents are from
-      'dup_details':                   # the dump distribution of the duplicated documents
-       {
-        '2024-30': 2,
-        '2024-26': 1,
-        '2024-22': 1,
-        ...
-       }
-     }
-   },
-  'subset': 'commoncrawl'}
- ```
-
- Please note that documents without duplicates, located in folders `*/1-1/`, have an empty `dup_signals` field.
- Additionally, some documents with duplicates might include an `unknown` entry within `dup_details`.
- One example could be:
- ```python
- {'text': '...',                      # text of the document
-  'meta':
-   {
-    ...
-    'dup_signals':
-     {
-      'dup_doc_count': 7,
-      'dup_dump_count': 3,
-      'dup_details':
-       {
-        'unknown': 4,
-        '2024-30': 1,
-        '2024-26': 1,
-        '2024-22': 1
-       }
-     }
-   },
-  'subset': 'commoncrawl'}
- ```
- This occurs because the distribution of duplicates across dumps was not recorded in the early stages of our deduplication process; only the total count of duplicate documents (`dup_doc_count`) was maintained.
- Due to the high cost of rerunning the deduplication, we have opted to label these distributions as `unknown` when integrating them with other documents for which duplicate-distribution data is available.
- In these cases, the `dup_dump_count` is calculated excluding the `unknown` entry.
-
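When aggregating duplicate statistics, the `unknown` entry needs the special handling described above. A hedged sketch of one way to do it (the field names follow the schema shown; `dup_summary` itself is not part of the dataset):

```python
def dup_summary(meta):
    """Summarize dup_signals, excluding 'unknown' from the dump count."""
    sig = meta.get("dup_signals") or {}    # empty for */1-1/ documents
    details = sig.get("dup_details") or {}
    known = {dump: n for dump, n in details.items() if dump != "unknown"}
    return {
        "dup_doc_count": sig.get("dup_doc_count", 0),
        "dup_dump_count": len(known),      # matches the convention above
        "unknown_dups": details.get("unknown", 0),
    }

meta = {"dup_signals": {"dup_doc_count": 7, "dup_dump_count": 3,
                        "dup_details": {"unknown": 4, "2024-30": 1,
                                        "2024-26": 1, "2024-22": 1}}}
# dup_summary(meta) -> {'dup_doc_count': 7, 'dup_dump_count': 3, 'unknown_dups': 4}
```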
  # Citation

  **BibTeX:**
@@ -278,7 +74,7 @@ In these cases, the `dup_dump_count` is calculated excluding the `unknown`.
  ```bibtex
  @misc{txt360data2024,
      title={TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend},
-     author={Liping Tang, Nikhil Ranjan, Omkar Pangarkar, Xuezhi Liang, Zhen Wang, Li An, Bhaskar Rao, Linghao Jin, Huijuan Wang, Zhoujun Cheng, Suqi Sun, Cun Mu, Victor Miller, Xuezhe Ma, Yue Peng, Zhengzhong Liu, Eric P. Xing},
      year={2024}
  }
- ```
 
  ---
  license: odc-by
  ---
  # TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend
  <center><img src="llm360_logo(1).png" alt="k2 eval table" /></center>

  ## We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g., FreeLaw, PG-19), providing pretraining teams with a recipe to easily adjust data weighting, obtain the largest high-quality open-source dataset, and train the most performant models.

  # TxT360 Compared to Common Pretraining Datasets
  | Data Source | TxT360 | FineWeb | RefinedWeb | RedPajamaV2 | C4 | Dolma | RedPajamaV1 | The Pile |
  |---------------------------|--------|---------|------------|-------------|----|-------|-------------|--------------------|
  | CommonCrawl Snapshots | 99 | 96 | 90 | 84 | 1 | 24 | 5 | 0.6% of 74 |
+ | Papers** | 5 Sources | - | - | - | - | 1 Source | 1 Source | 4 Sources |
  | Wikipedia | 310+ Languages | - | - | - | - | Included | Included | English Only |
  | FreeLaw | Included | - | - | - | - | - | - | Included |
  | DM Math | Included | - | - | - | - | - | - | Included |

  | HackerNews | Included | - | - | - | - | - | - | Included |
  | Ubuntu IRC | Included | - | - | - | - | - | - | Included |
  | EuroParl | Included | - | - | - | - | - | - | Included |
+ | StackExchange** | Included | - | - | - | - | - | - | Included |
  | Code | * | - | - | - | - | Included | Included | Included |

  * TxT360 does not include code. This decision was made due to the perceived low duplication of code with other sources.

+ ** StackExchange and PubMed Central datasets will be uploaded shortly. All other datasets are present and complete.

  Complete details on the dataset can be found in our blog post [here](https://huggingface.co/spaces/LLM360/TxT360).

  | DM Math | 22 GB | 5.23B | - |
  | USPTO | 45 GB | 4.95B | Q3 2024 |
  | PG-19 | 11 GB | 2.63B | - |
+ | HackerNews | 4.1 GB | 1.08B | Q4 2023 |
+ | Ubuntu IRC | 4.7 GB | 1.54B | Q3 2024 |
  | Europarl | 6.1 GB | 1.96B | - |
+ | StackExchange | 79 GB | 27.0B | Q4 2023 |

  The [TxT360](https://huggingface.co/spaces/LLM360/TxT360) blog post provides all the details behind how we approached and implemented the following features:

  ## Global Deduplication
  After the web and curated sources were filtered, all sources were globally deduplicated to create TxT360. The tips and tricks behind the deduplication process are included.

  # Citation

  **BibTeX:**

  ```bibtex
  @misc{txt360data2024,
      title={TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend},
+     author={Liping Tang, Nikhil Ranjan, Omkar Pangarkar, Xuezhi Liang, Zhen Wang, Li An, Bhaskar Rao, Zhoujun Cheng, Suqi Sun, Cun Mu, Victor Miller, Yue Peng, Eric P. Xing, Zhengzhong Liu},
      year={2024}
  }
+ ```
data/stackexchange/1-1/0_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5ee6642fa41a2581efa4b38670a3e8e21779c848a5eb38f310c63d9e928f0df2
- size 35607549

data/stackexchange/1-1/1000_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c8518100fab3c28c922faa695bb771c44b4ffe5c04127c3466ccb0a16b486cdd
- size 35130869

data/stackexchange/1-1/1001_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:28235bf636fc4278ba0635f2b16706e7ef453502bfe41f567660122e5112627c
- size 35147569

data/stackexchange/1-1/1002_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a6fc485eb06ae5ea7b1a9a87d20da503ef96b846cdc21f8eae5d7431f3720de9
- size 35055186

data/stackexchange/1-1/1003_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:857d0af9d20ad9aae001715abf725e252233e38cd09ad95d1a48e41652c6c528
- size 35619521

data/stackexchange/1-1/1004_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c799029ad46fb4fb0d27cbff53b0e1f18662081d122a954e6268add9f7d0c746
- size 35376973

data/stackexchange/1-1/1005_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5bfd34f292846c6bfc37187fd6689ecf436688211b38f06faede1ba52c71a4c5
- size 35152769

data/stackexchange/1-1/1006_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a3cdcda8fa07b8db83e87721c117a460c11ed3990c39aabfe6a6474b3bed9133
- size 35512543

data/stackexchange/1-1/1007_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:fda3a1ecbd362c7b6a4bee29d7cc7946240b2b6826116beeda1e549fe557d3fa
- size 34262610

data/stackexchange/1-1/1008_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1bc498eb5959ed140a4efb335edb01c395eca346734c93f6faa90ed430311103
- size 35210300

data/stackexchange/1-1/1009_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d899daee513727fc48021a25ba77354b25651d8a78b2307b59d70fc1074b60aa
- size 34943572

data/stackexchange/1-1/100_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6a2de1d5701c61745422fe34a40326b72eca3c6cb3816a3bcaae5bc7bb708a89
- size 34282494

data/stackexchange/1-1/1010_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:137b9fcfc59dfa98fb8bf1cf17b361fe00706b11fc5e1863edf80b63221c3862
- size 34770499

data/stackexchange/1-1/1011_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6eb2c0a47ed839dcd7a9be3fa927d587afe8e8ccf987e77491f5dd406999d920
- size 35431846

data/stackexchange/1-1/1012_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:80c72ecae4e6e00c523c88f8ef4b19ab280c71f3fca80df38759c847d6adf04a
- size 34761965

data/stackexchange/1-1/1013_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:087f85503153f0c2b450ad947cf5520109bfefcdbb4bde83f57341ad1b5b4d59
- size 35275615

data/stackexchange/1-1/1014_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:15c1d49dd26a054348ed025d3c720150add3df8f680a8fa8adbce32dd290093e
- size 35277658

data/stackexchange/1-1/1015_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1c187720018b173ace4adc6204e9c50da2ebe12f0608411feb34cbcd60c23d45
- size 35239387

data/stackexchange/1-1/1016_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8abf76257bed54018c1ccaf56a68e101f88e348ecfb197ac0120fc6a5d56660f
- size 35309781

data/stackexchange/1-1/1017_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:095f6540341fc452d94d493e6bf7ab32d1302755fb522f3aff767e82de2f4a18
- size 35436783

data/stackexchange/1-1/1018_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b7e47e63995a8fee1546dc3d1d883a3cfb2cbbf6fae6367605386c00d980c489
- size 35524178

data/stackexchange/1-1/1019_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f6e3abd9c6dc136180f43915656a5845e0270ef66c432aabc47ae2cec746f8eb
- size 35212522

data/stackexchange/1-1/101_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c12302eaf4e8730ef41b63761f14893a4161c106cd38ae91b505132487e6a92b
- size 33958940

data/stackexchange/1-1/1020_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:12a96356b349d898ef06c3634e46af50f1be443e890506de75ff3149b9674594
- size 35779716

data/stackexchange/1-1/1021_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:600f7b6f4dee44945c25ad3d1c0348205baae2b514e1c5b1c16b2be2dfe69dd5
- size 34850713

data/stackexchange/1-1/1022_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4043f12bfae7d8e8da0da73947204abd996c70b1d18a01f598d2209009c713c4
- size 34749622

data/stackexchange/1-1/1023_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6f0e732e5d67e1a6df0043f8a9e7515d41fbc417c78f1dfab63aff76707290d6
- size 35268842

data/stackexchange/1-1/1024_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6f531f5e911eb693e6e43c5f03ea24f090c1ab1eb6fd601d1ae18b0455e74d0e
- size 35354026

data/stackexchange/1-1/1025_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:980300660a9c8d589514d2ac8ced2df016f1a07cfaab80b3ea91d67bd9b70601
- size 34868215

data/stackexchange/1-1/1026_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:dc0870a0f70c13efb0ee4bed2d6fe9e130fbf83aa820745d80e6e0d0bed29ee4
- size 35007474

data/stackexchange/1-1/1027_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c697733e8fb5558eb2d783440be50a77b310c73aee0dbdd6ae0456cbdee27bf8
- size 34915186

data/stackexchange/1-1/1028_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5c6444c6ea5d05988f54a9f1660e51998c31115d0a744759338a91507fabd3cd
- size 35208519

data/stackexchange/1-1/1029_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:19a1a3dcf36b77d3c211f47aee91b9275676caa40c71df8f1a9c2997028a969a
- size 34952584

data/stackexchange/1-1/102_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:df9fe22cb420a3269d423f7f427cbca10c61c8c6846cd81cfb451fcc884a1268
- size 33657694

data/stackexchange/1-1/1030_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:fb273495992300ddc6e64be95b3b9dedef6676521c3f70256c931b4066142aea
- size 35532031

data/stackexchange/1-1/1031_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5b6f76d69f90532411ad3f111ddf7e2b3345fd0d7facee6ba3379ed60ec88d73
- size 35150643

data/stackexchange/1-1/1032_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:269e1aee0142a563182969724be2b9aa21fba0dc3672eedddc4593b666f8d46c
- size 34934648

data/stackexchange/1-1/1033_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:62d6ebaeb97c6499ff3cb5d44b3d987edffa5ceead290283265cda08b3d1c508
- size 35138645

data/stackexchange/1-1/1034_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2b23b39bb0ee58324b10aa32a3b019b184e786214d89734c532d347c8053290e
- size 35099056

data/stackexchange/1-1/1035_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a00ef5b709b3ff851c4fd6168d28de0090924adb7d19dc84c9b317f96499583d
- size 34954239

data/stackexchange/1-1/1036_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e619a8bbaf2a9b17e282ee4abcea40b5895af05e7f0b2043d931ebb180e857ad
- size 35654447

data/stackexchange/1-1/1037_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d6c331e6b51d255b725f198ef196e306471a6c22306319b616945d5c9564bbe9
- size 34813728

data/stackexchange/1-1/1038_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6c39d2012da6b40c659b913a723b38494759c3638a94949a0747d599be345f67
- size 35200091

data/stackexchange/1-1/1039_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:574791d6fbc036025196b55b2b9cc691961144bb9b25e751788ba7ce667302e7
- size 35773132

data/stackexchange/1-1/103_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:397b2c44ece5d0af68a59402f03517d483ce9cb224a8407ac229c29ceb3c53e1
- size 34079466

data/stackexchange/1-1/1040_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f6c4d8eb8301c460964e706c951cfd8f84f9473c55b401a096069ffb6fbf4ba6
- size 34853128

data/stackexchange/1-1/1041_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:19f61478c288c1324ba1b1275d451750d34cbbf02e49b52593a913cb0dbe8bcf
- size 35053678

data/stackexchange/1-1/1042_2289.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d2eb3ef5b359c64c144782141cd6fedd4f82f2c8a074a0a26ea823742e0daf5e
- size 35661742
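Each deleted shard above is stored as a Git LFS pointer (the three `version`/`oid`/`size` lines) rather than the JSONL payload itself. A small parser for that key-value format; the sample values are copied from the first block above:

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs pointer file ('key value' per line) into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:5ee6642fa41a2581efa4b38670a3e8e21779c848a5eb38f310c63d9e928f0df2\n"
    "size 35607549\n"
)
info = parse_lfs_pointer(pointer)
# info["size"] -> 35607549, i.e. each shard here is roughly 35 MB of JSONL
```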