KousikDeuli and tomaarsen (HF Staff) committed 2fbe84b (verified, 0 parents)

Duplicate from sentence-transformers/eli5

Co-authored-by: Tom Aarsen <tomaarsen@users.noreply.huggingface.co>

Files changed (3):
  1. .gitattributes +55 -0
  2. README.md +53 -0
  3. pair/train-00000-of-00001.parquet +3 -0
.gitattributes ADDED
@@ -0,0 +1,55 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ language:
+ - en
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ task_categories:
+ - feature-extraction
+ - sentence-similarity
+ pretty_name: ELI5
+ tags:
+ - sentence-transformers
+ dataset_info:
+   config_name: pair
+   features:
+   - name: question
+     dtype: string
+   - name: answer
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 173287519
+     num_examples: 325475
+   download_size: 112893890
+   dataset_size: 173287519
+ configs:
+ - config_name: pair
+   data_files:
+   - split: train
+     path: pair/train-*
+ ---
+
+ # Dataset Card for ELI5
+
+ This dataset is a collection of question-answer pairs, collected from the Explain Like I'm 5 subreddit. See [ELI5](https://huggingface.co/datasets/eli5) for additional information.
+ This dataset can be used directly with Sentence Transformers to train embedding models.
+
+ ## Dataset Subsets
+
+ ### `pair` subset
+
+ * Columns: "question", "answer"
+ * Column types: `str`, `str`
+ * Examples:
+   ```python
+   {
+     'question': 'Why chemical weapons considered more indiscriminate than conventional weapons?',
+     'answer': "Well, any large-scale ordinance is indiscriminate. The problem particularly with Chemical weapons is that, especially with those that are gas based, is that the actual range of the weapon is much larger than the blast radius. The Chemical residue can remain in the area for a long period of time, it can taint and damage water and food supplies, it can be carried on clothing, such that you could drop it on a house full of terrorists, but if there is an orphanage upwind, they are going to get some of it as well. With conventional ordinance, you can target and turn that building full of terrorists into rubble, and thanks to years of testing, we know more or less a good idea of the collateral damage. With a chemical warhead, its not one building - its anything and everything in that area. There is nothing pinpoint about a chemical weapon system - it can and will spread beyond the impact zone That is what makes it (imho) more 'indescriminate' than conventional weaponry.",
+   }
+   ```
+ * Collection strategy: Reading the ELI5 dataset from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
+ * Deduplified: No
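The card above states that the `pair` subset has exactly two string columns, `question` and `answer`, and can be used with Sentence Transformers. A minimal sketch of checking a record against that schema; `validate_pair` is a hypothetical helper introduced here for illustration, and the commented `load_dataset` call assumes network access to the Hub and the standard `datasets` API:

```python
from typing import Dict


def validate_pair(record: Dict[str, str]) -> bool:
    """True iff a record matches the `pair` schema described in the card:
    exactly the keys 'question' and 'answer', both with string values."""
    return (
        set(record) == {"question", "answer"}
        and all(isinstance(v, str) for v in record.values())
    )


# With network access to the Hub, the subset can be loaded and checked
# (assumption: standard `datasets` API, repo id as shown in this commit):
#   from datasets import load_dataset
#   train = load_dataset("sentence-transformers/eli5", "pair", split="train")
#   assert validate_pair(train[0])
```

Each record is a plain `(question, answer)` pair, so it plugs into pair-based Sentence Transformers losses without any column mapping.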
pair/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ce47a5e6c200590f1513e50fe53f000900b500adf0be112577fa5dfac4f9ab9
+ size 112893890