id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
Suganyak/train | 2023-10-11T05:17:38.000Z | [
"region:us"
] | Suganyak | null | null | 0 | 8 | 2023-10-11T05:17:37 | ---
dataset_info:
features:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 179741
num_examples: 1000
download_size: 79137
dataset_size: 179741
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 462 | [
[
-0.0430908203125,
0.002796173095703125,
0.01151275634765625,
0.0209197998046875,
-0.006549835205078125,
-0.0038814544677734375,
0.01387786865234375,
-0.01085662841796875,
0.05218505859375,
0.0222320556640625,
-0.06390380859375,
-0.034088134765625,
-0.04193115234... |
renumics/spotlight-b-mc2-sql-create-context-enrichment | 2023-10-13T09:03:38.000Z | [
"region:us"
] | renumics | null | null | 0 | 8 | 2023-10-11T08:29:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: answer.embedding
sequence: float32
length: 2
- name: question.embedding
sequence: float32
length: 2
- name: context.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 1885848
num_examples: 78577
download_size: 2616932
dataset_size: 1885848
---
# Dataset Card for "spotlight-b-mc2-sql-create-context-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 633 | [
[
-0.037139892578125,
-0.03240966796875,
0.0229034423828125,
0.033294677734375,
-0.017059326171875,
0.0007681846618652344,
0.00965118408203125,
-0.00861358642578125,
0.058197021484375,
0.03875732421875,
-0.066650390625,
-0.039764404296875,
-0.0252532958984375,
... |
MananSantoki/M.K.G-Baapu | 2023-10-12T05:58:35.000Z | [
"region:us"
] | MananSantoki | null | null | 0 | 8 | 2023-10-12T05:32:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
buelfhood/S_Exp | 2023-10-12T11:10:58.000Z | [
"region:us"
] | buelfhood | null | null | 0 | 8 | 2023-10-12T11:08:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
renumics/spotlight-zishuod-pokemon-icons-enrichment | 2023-10-13T10:43:39.000Z | [
"region:us"
] | renumics | null | null | 0 | 8 | 2023-10-12T14:15:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 3416
num_examples: 427
- name: test
num_bytes: 1320
num_examples: 165
download_size: 8424
dataset_size: 4736
---
# Dataset Card for "spotlight-zishuod-pokemon-icons-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 584 | [
[
-0.039581298828125,
-0.00368499755859375,
0.017364501953125,
0.0220794677734375,
-0.0241546630859375,
0.0087432861328125,
0.002796173095703125,
-0.0216064453125,
0.0823974609375,
0.02496337890625,
-0.07830810546875,
-0.0460205078125,
-0.0259246826171875,
-0.... |
Brian039/ADL_HW1 | 2023-10-12T23:56:16.000Z | [
"region:us"
] | Brian039 | null | null | 0 | 8 | 2023-10-12T23:55:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
theblackcat102/gpt-4v-eval-samples | 2023-11-02T12:49:25.000Z | [
"region:us"
] | theblackcat102 | null | null | 1 | 8 | 2023-10-13T00:51:36 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: conversations
dtype: string
splits:
- name: test
num_bytes: 300443694.647
num_examples: 1339
download_size: 275794412
dataset_size: 300443694.647
---
# GPT-4V Eval samples
This is a hand-curated set of images from the web, paired with questions I asked GPT-4V to probe its abilities and limits.
I mainly focus on the localization, OCR, and understanding abilities of the GPT-4V vision module. The language part is skipped, as we have already seen it in GPT-4: as long as GPT-4V can extract the required information as text, the rest of the LLM shouldn't have any issue answering the remaining questions.
The number of examples is still pretty tiny and will continue to grow until I am satisfied with the size, so please check back from time to time.
Note: the dataset viewer had a bug that caused the displayed images to differ from the actual dataset (due to frequent updates). Please load the dataset and save it to a local path for best accuracy.
## How to use
```python
import json
from datasets import load_dataset

# Load the test split of the evaluation samples
dataset = load_dataset('theblackcat102/gpt-4v-eval-samples')['test']

# Each example holds an image plus a JSON-encoded conversation string
print(dataset[0]['image'])
print(json.loads(dataset[0]['conversations']))
```
## Contributions
Please check out my GitHub repo for more details: [theblackcat102/gpt-4v-samples](https://github.com/theblackcat102/gpt-4v-samples)
## Citation
```
@article{yang2023dawn,
title={The Dawn of LMMs: Preliminary Explorations with GPT-4V (ision)},
author={Yang, Zhengyuan and Li, Linjie and Lin, Kevin and Wang, Jianfeng and Lin, Chung-Ching and Liu, Zicheng and Wang, Lijuan},
journal={arXiv preprint arXiv:2309.17421},
year={2023}
}
```
| 1,821 | [
[
-0.0208282470703125,
-0.051025390625,
0.06060791015625,
-0.005321502685546875,
-0.016571044921875,
0.0011377334594726562,
-0.008758544921875,
-0.03228759765625,
-0.000919342041015625,
0.033172607421875,
-0.0323486328125,
-0.041351318359375,
-0.021728515625,
... |
stevenhsu123/chinese_exam_train_data | 2023-10-14T02:50:25.000Z | [
"region:us"
] | stevenhsu123 | null | null | 0 | 8 | 2023-10-13T02:19:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fahrialfiansyah/openstax-with-instruction | 2023-10-13T08:45:59.000Z | [
"region:us"
] | fahrialfiansyah | null | null | 0 | 8 | 2023-10-13T08:21:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hippocrates/PubmedSumm_test | 2023-10-17T19:54:03.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 8 | 2023-10-13T10:51:14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2293491121
num_examples: 119924
- name: valid
num_bytes: 129680450
num_examples: 6633
- name: test
num_bytes: 129463253
num_examples: 6658
download_size: 1172343963
dataset_size: 2552634824
---
# Dataset Card for "PubmedSumm_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 562 | [
[
-0.0299530029296875,
-0.0174560546875,
0.02117919921875,
0.01175689697265625,
-0.0183258056640625,
-0.00881195068359375,
0.01332855224609375,
-0.000942230224609375,
0.05963134765625,
0.039154052734375,
-0.04888916015625,
-0.0543212890625,
-0.043670654296875,
... |
erbacher/nq_open-halM | 2023-10-13T13:15:36.000Z | [
"region:us"
] | erbacher | null | null | 0 | 8 | 2023-10-13T13:14:48 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
sequence: string
- name: target
dtype: string
- name: text
dtype: string
- name: results
dtype: string
- name: em
dtype: float64
- name: hal_m
dtype: string
splits:
- name: train1
num_bytes: 20868789.5
num_examples: 39584
- name: train2
num_bytes: 20868789.5
num_examples: 39584
- name: dev
num_bytes: 4612579
num_examples: 8757
- name: test
num_bytes: 1950822
num_examples: 3610
download_size: 13134688
dataset_size: 48300980.0
---
# Dataset Card for "nq_open-halM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 769 | [
[
-0.04022216796875,
-0.0205230712890625,
0.0037479400634765625,
-0.0006198883056640625,
-0.0135498046875,
-0.005588531494140625,
0.0100555419921875,
0.00537109375,
0.04473876953125,
0.04681396484375,
-0.050628662109375,
-0.07073974609375,
-0.0234527587890625,
... |
yusuf802/new-image-dataset | 2023-10-14T09:09:59.000Z | [
"region:us"
] | yusuf802 | null | null | 0 | 8 | 2023-10-14T05:22:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Apple_Black_rot
'1': Apple_Cedar_apple_rust
'2': Apple_Powdery_mildew
'3': Apple_healthy
'4': Apple_scab
'5': Cherry_(including_sour)_Powdery_mildew
'6': Cherry_(including_sour)_healthy
'7': Corn_(maize)_Cercospora_leaf_spot Gray_leaf_spot
'8': Corn_(maize)_Common_rust
'9': Corn_(maize)_Northern_Leaf_Blight
'10': Corn_(maize)_healthy
'11': Cotton_leaf_diseased
'12': Cotton_leaf_fresh
'13': Grape_Black_rot
'14': Grape___Esca_(Black_Measles)
'15': Grape___Leaf_blight_(Isariopsis_Leaf_Spot)
'16': Grape___healthy
'17': Orange_Haunglongbing_(Citrus_greening)
'18': Orange__Black_Rot
'19': Orange__Canker
'20': Orange__Healthy
'21': Peach_Bacterial_spot
'22': Peach_healthy
'23': Pepper,_bell_Bacterial_spot
'24': Pepper,_bell_healthy
'25': Potato_Early_blight
'26': Potato_Late_blight
'27': Potato_healthy
'28': Squash_Powdery_mildew
'29': Strawberry_Leaf_scorch
'30': Strawberry_healthy
'31': Tomato_Bacterial_spot
'32': Tomato_Early_blight
'33': Tomato_Late_blight
'34': Tomato_Leaf_Mold
'35': Tomato_Septoria_leaf_spot
'36': Tomato_Spider_mites_Two_spotted_spider_mite
'37': Tomato_Target_Spot
'38': Tomato_Tomato_Yellow_Leaf_Curl_Virus
'39': Tomato_Tomato_mosaic_virus
'40': Tomato_healthy
'41': Wheat_healthy
'42': Wheat_leaf_rust
'43': Wheat_nitrogen_deficiency
splits:
- name: train
num_bytes: 5580252809.260068
num_examples: 56842
- name: test
num_bytes: 960697024.6779323
num_examples: 10032
download_size: 6476692260
dataset_size: 6540949833.938
---
# Dataset Card for "new-image-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,351 | [
[
-0.05499267578125,
-0.0179290771484375,
0.0106658935546875,
0.01158905029296875,
-0.0290374755859375,
0.0022296905517578125,
0.027679443359375,
-0.022857666015625,
0.06976318359375,
0.036956787109375,
-0.0484619140625,
-0.05853271484375,
-0.0562744140625,
-0... |
anujpaudel/linge-ping-1 | 2023-10-14T12:08:15.000Z | [
"region:us"
] | anujpaudel | null | null | 0 | 8 | 2023-10-14T12:00:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6525269.0
num_examples: 159
download_size: 6003377
dataset_size: 6525269.0
---
# Dataset Card for "linge-ping-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 476 | [
[
-0.051513671875,
-0.0189208984375,
-0.002498626708984375,
0.03662109375,
-0.0179443359375,
-0.028045654296875,
0.0281982421875,
-0.0215606689453125,
0.08013916015625,
0.0321044921875,
-0.06610107421875,
-0.05975341796875,
-0.037567138671875,
-0.0063400268554... |
daishen/CALM-Data | 2023-10-15T02:07:29.000Z | [
"region:us"
] | daishen | null | null | 0 | 8 | 2023-10-15T02:06:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dbadal123/text2SQLChanged | 2023-10-15T15:02:35.000Z | [
"region:us"
] | dbadal123 | null | null | 1 | 8 | 2023-10-15T14:48:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
HemanthKumarK/SKINgpt | 2023-10-16T04:49:50.000Z | [
"region:us"
] | HemanthKumarK | null | null | 0 | 8 | 2023-10-16T04:49:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
thangvip/data-kalapa-medical-chunked | 2023-10-16T15:02:44.000Z | [
"region:us"
] | thangvip | null | null | 0 | 8 | 2023-10-16T15:02:39 | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9804125
num_examples: 4399
download_size: 4338224
dataset_size: 9804125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-kalapa-medical-chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 500 | [
[
-0.01654052734375,
-0.035491943359375,
0.0262451171875,
0.0242462158203125,
-0.04107666015625,
0.010406494140625,
0.0264892578125,
-0.02655029296875,
0.0933837890625,
0.042205810546875,
-0.04400634765625,
-0.038116455078125,
-0.06640625,
-0.02142333984375,
... |
shossain/merged-pad-16384 | 2023-10-16T19:15:38.000Z | [
"region:us"
] | shossain | null | null | 0 | 8 | 2023-10-16T19:14:35 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2084670148
num_examples: 9787
download_size: 484608278
dataset_size: 2084670148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "merged-pad-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.061004638671875,
-0.005077362060546875,
0.01247406005859375,
0.0251617431640625,
-0.0297393798828125,
0.01222991943359375,
0.022003173828125,
-0.013427734375,
0.072021484375,
0.042572021484375,
-0.048828125,
-0.0330810546875,
-0.042388916015625,
-0.016876... |
tyzhu/squad_title_v4_train_30_eval_10_permute3 | 2023-10-17T09:06:13.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 8 | 2023-10-17T07:28:30 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 493463.5595794392
num_examples: 319
- name: validation
num_bytes: 50807
num_examples: 50
download_size: 100594
dataset_size: 544270.5595794392
---
# Dataset Card for "squad_title_v4_train_30_eval_10_permute3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 791 | [
[
-0.0211029052734375,
0.0007863044738769531,
0.0172882080078125,
0.04534912109375,
-0.002475738525390625,
0.0292816162109375,
0.02691650390625,
0.01214599609375,
0.0382080078125,
0.0292816162109375,
-0.07958984375,
-0.05126953125,
-0.0300140380859375,
0.00901... |
annahonghong/hello | 2023-10-18T02:25:56.000Z | [
"region:us"
] | annahonghong | null | null | 0 | 8 | 2023-10-17T08:03:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/rbrt_test_val_lrg3 | 2023-10-17T08:52:20.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 8 | 2023-10-17T08:52:08 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 148079605
num_examples: 104550
download_size: 32715970
dataset_size: 148079605
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_test_val_lrg3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 537 | [
[
-0.04583740234375,
-0.039031982421875,
0.004360198974609375,
0.01177215576171875,
-0.01210784912109375,
0.00710296630859375,
0.02642822265625,
-0.01360321044921875,
0.0322265625,
0.033416748046875,
-0.043243408203125,
-0.03759765625,
-0.0328369140625,
-0.008... |
ThangDinh/biomedqa_train | 2023-10-17T14:32:28.000Z | [
"region:us"
] | ThangDinh | null | null | 0 | 8 | 2023-10-17T13:47:08 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 61729446
num_examples: 6000
download_size: 0
dataset_size: 61729446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biomedqa_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 608 | [
[
-0.034332275390625,
0.006824493408203125,
0.0193328857421875,
0.0016889572143554688,
-0.009979248046875,
0.0082855224609375,
0.033599853515625,
-0.00672149658203125,
0.053924560546875,
0.020843505859375,
-0.06005859375,
-0.04248046875,
-0.041259765625,
-0.01... |
chirunder/MixSnips_for_DecoderOnly_90-10_split-HALF | 2023-10-18T06:10:33.000Z | [
"region:us"
] | chirunder | null | null | 0 | 8 | 2023-10-17T15:51:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17739996.800127994
num_examples: 22500
- name: test
num_bytes: 1971899.199872005
num_examples: 2501
download_size: 7061034
dataset_size: 19711896.0
---
# Dataset Card for "MixSnips_for_DecoderOnly_90-10_split-HALF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 667 | [
[
-0.043426513671875,
-0.0164794921875,
-0.005146026611328125,
0.044189453125,
-0.02862548828125,
0.00685882568359375,
0.0149078369140625,
-0.0081024169921875,
0.088134765625,
0.030975341796875,
-0.06182861328125,
-0.036376953125,
-0.05078125,
0.00182819366455... |
MattBastar/medicine | 2023-10-17T23:13:05.000Z | [
"region:us"
] | MattBastar | null | null | 0 | 8 | 2023-10-17T23:10:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yardeny/processed_gpt2_context_len_64 | 2023-10-18T15:09:50.000Z | [
"region:us"
] | yardeny | null | null | 0 | 8 | 2023-10-18T14:54:49 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 7604038432.0
num_examples: 23183044
download_size: 3576830919
dataset_size: 7604038432.0
---
# Dataset Card for "processed_gpt2_context_len_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 439 | [
[
-0.035736083984375,
-0.0299224853515625,
0.037689208984375,
0.018798828125,
-0.033935546875,
-0.0237274169921875,
-0.003467559814453125,
-0.0210113525390625,
0.0271148681640625,
0.0309600830078125,
-0.051239013671875,
-0.03955078125,
-0.052276611328125,
-0.0... |
SWLLMS/sum_dataset_TK0 | 2023-10-19T05:18:22.000Z | [
"region:us"
] | SWLLMS | null | null | 0 | 8 | 2023-10-19T05:18:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 99342526.22106361
num_examples: 767
- name: test
num_bytes: 24868011.778936394
num_examples: 192
download_size: 25499841
dataset_size: 124210538.0
---
# Dataset Card for "sum_dataset_TK0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 723 | [
[
-0.0290679931640625,
0.00576019287109375,
0.00701904296875,
0.0219573974609375,
-0.02288818359375,
0.01546478271484375,
0.0232696533203125,
0.007785797119140625,
0.0750732421875,
0.033721923828125,
-0.057403564453125,
-0.058074951171875,
-0.047088623046875,
... |
bh8648/split_dataset_1 | 2023-10-19T08:38:20.000Z | [
"region:us"
] | bh8648 | null | null | 0 | 8 | 2023-10-19T08:38:17 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 659763
num_examples: 212
download_size: 336962
dataset_size: 659763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "split_dataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.050933837890625,
-0.033050537109375,
0.002246856689453125,
0.022918701171875,
-0.032684326171875,
0.008209228515625,
0.0253143310546875,
-0.0012731552124023438,
0.070556640625,
0.038604736328125,
-0.06982421875,
-0.041717529296875,
-0.04718017578125,
-0.0... |
berkouille/ultrachat_golf | 2023-10-19T12:51:32.000Z | [
"region:us"
] | berkouille | null | null | 0 | 8 | 2023-10-19T12:51:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/find_first_sent_train_100_eval_10 | 2023-10-31T14:48:31.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 8 | 2023-10-19T15:56:50 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 267331
num_examples: 210
- name: validation
num_bytes: 10399
num_examples: 10
download_size: 135617
dataset_size: 277730
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_first_sent_train_100_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 678 | [
[
-0.0433349609375,
-0.0223236083984375,
0.019744873046875,
0.034454345703125,
-0.0047454833984375,
-0.00821685791015625,
0.0173492431640625,
0.0212860107421875,
0.054473876953125,
0.022979736328125,
-0.06951904296875,
-0.050689697265625,
-0.043731689453125,
-... |
vladisha3000/test | 2023-10-20T19:25:21.000Z | [
"region:us"
] | vladisha3000 | null | null | 0 | 8 | 2023-10-20T19:12:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 254706.0
num_examples: 100
download_size: 257963
dataset_size: 254706.0
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.04620361328125,
-0.028656005859375,
0.00555419921875,
0.0131072998046875,
-0.009124755859375,
0.00058746337890625,
0.0164794921875,
-0.00917816162109375,
0.050537109375,
0.0228424072265625,
-0.056121826171875,
-0.04486083984375,
-0.03240966796875,
-0.0128... |
Omickeyee/Marathi_LLM_3k | 2023-10-21T03:26:02.000Z | [
"region:us"
] | Omickeyee | null | null | 0 | 8 | 2023-10-21T03:24:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Maxtra/zenko.ai | 2023-10-22T06:33:45.000Z | [
"region:us"
] | Maxtra | null | null | 0 | 8 | 2023-10-22T06:22:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
thdangtr/xsum_10_percents | 2023-10-22T15:07:07.000Z | [
"region:us"
] | thdangtr | null | null | 0 | 8 | 2023-10-22T15:06:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 47919462.033629835
num_examples: 20404
- name: validation
num_bytes: 2628823.6534592304
num_examples: 1133
- name: test
num_bytes: 2674669.821157579
num_examples: 1133
download_size: 33669166
dataset_size: 53222955.508246645
---
# Dataset Card for "xsum_10_percents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 776 | [
[
-0.03656005859375,
0.000560760498046875,
0.01418304443359375,
0.0168914794921875,
-0.0038814544677734375,
0.00859832763671875,
0.01125335693359375,
0.005596160888671875,
0.07855224609375,
0.031494140625,
-0.047088623046875,
-0.054595947265625,
-0.047027587890625... |
ericyu/CLCD_Cropped_256 | 2023-10-22T16:21:21.000Z | [
"region:us"
] | ericyu | null | null | 0 | 8 | 2023-10-22T16:21:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: imageA
dtype: image
- name: imageB
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 29228609.52
num_examples: 1440
- name: test
num_bytes: 9716986.0
num_examples: 480
- name: val
num_bytes: 9686310.0
num_examples: 480
download_size: 48264072
dataset_size: 48631905.519999996
---
# Dataset Card for "CLCD_Cropped_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 725 | [
[
-0.064697265625,
-0.0159149169921875,
0.026123046875,
0.0158843994140625,
-0.019256591796875,
-0.0062713623046875,
0.00711822509765625,
-0.00957489013671875,
0.048980712890625,
0.04119873046875,
-0.07281494140625,
-0.05828857421875,
-0.04180908203125,
-0.017... |
AdapterOcean/physics_dataset_standardized_cluster_1_alpaca | 2023-10-23T01:52:01.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 8 | 2023-10-22T18:30:46 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13048987
num_examples: 4356
download_size: 0
dataset_size: 13048987
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 505 | [
[
-0.044281005859375,
-0.0254669189453125,
0.026763916015625,
0.0312347412109375,
-0.03399658203125,
-0.0102081298828125,
0.037567138671875,
-0.00872802734375,
0.07867431640625,
0.01366424560546875,
-0.0526123046875,
-0.05755615234375,
-0.0413818359375,
-0.023... |
skvarre/movie_posters-100k | 2023-10-22T23:25:56.000Z | [
"region:us"
] | skvarre | null | null | 1 | 8 | 2023-10-22T22:50:21 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: title
dtype: string
- name: genres
list:
- name: id
dtype: int64
- name: name
dtype: string
- name: overview
dtype: string
- name: popularity
dtype: float64
- name: release_date
dtype: string
- name: budget
dtype: int64
- name: revenue
dtype: int64
- name: tagline
dtype: string
- name: original_language
dtype: string
- name: runtime
dtype: int64
splits:
- name: train
num_bytes: 43543732674.2
num_examples: 95300
download_size: 43339016957
dataset_size: 43543732674.2
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "movie_posters-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 929 | [
[
-0.041473388671875,
-0.00789642333984375,
0.01251220703125,
0.00894927978515625,
-0.0217132568359375,
0.0008058547973632812,
0.0236358642578125,
0.005504608154296875,
0.060882568359375,
0.04345703125,
-0.058197021484375,
-0.046234130859375,
-0.055511474609375,
... |
Hessa/tqa_all_topics | 2023-10-23T05:28:57.000Z | [
"region:us"
] | Hessa | null | null | 0 | 8 | 2023-10-23T05:27:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aminlouhichi/test | 2023-10-23T09:36:27.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 8 | 2023-10-23T09:36:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 4913077.0
num_examples: 14
- name: validation
num_bytes: 2034037.0
num_examples: 8
- name: test
num_bytes: 3511069.0
num_examples: 7
download_size: 9722282
dataset_size: 10458183.0
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 690 | [
[
-0.04620361328125,
-0.028656005859375,
0.00555419921875,
0.0131072998046875,
-0.009124755859375,
0.00058746337890625,
0.0164794921875,
-0.00917816162109375,
0.050537109375,
0.0228424072265625,
-0.056121826171875,
-0.04486083984375,
-0.03240966796875,
-0.0128... |
swiftmind/sn_wiki_meds_terms_llama2 | 2023-10-23T11:29:30.000Z | [
"region:us"
] | swiftmind | null | null | 0 | 8 | 2023-10-23T11:26:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anjakuzev/test300 | 2023-10-23T16:22:10.000Z | [
"region:us"
] | anjakuzev | null | null | 0 | 8 | 2023-10-23T16:17:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anjakuzev/test_gal | 2023-10-23T18:31:33.000Z | [
"region:us"
] | anjakuzev | null | null | 0 | 8 | 2023-10-23T18:30:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hmao/cvecpe_multiapis_nlq_function_pairs | 2023-10-23T19:31:29.000Z | [
"region:us"
] | hmao | null | null | 0 | 8 | 2023-10-23T19:12:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Input
dtype: string
- name: Output
dtype: string
splits:
- name: train
num_bytes: 19666
num_examples: 56
download_size: 11947
dataset_size: 19666
---
# Dataset Card for "cvecpe_multiapis_nlq_function_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 491 | [
[
-0.040435791015625,
-0.007293701171875,
0.00643157958984375,
0.0302581787109375,
-0.01282501220703125,
0.00870513916015625,
0.005817413330078125,
-0.01297760009765625,
0.056427001953125,
0.041229248046875,
-0.049163818359375,
-0.0482177734375,
-0.026443481445312... |
Intuit-GenSRF/combined_toxicity_profanity_v2_train_eval | 2023-10-23T22:42:33.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 8 | 2023-10-23T22:41:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: encoded_labels
sequence: int64
splits:
- name: train
num_bytes: 2803997548
num_examples: 6344950
- name: validation
num_bytes: 313551093
num_examples: 710497
download_size: 1607228317
dataset_size: 3117548641
---
# Dataset Card for "combined_toxicity_profanity_v2_train_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 685 | [
[
-0.00965118408203125,
-0.01468658447265625,
0.01312255859375,
0.0285797119140625,
-0.018890380859375,
0.00504302978515625,
0.0035991668701171875,
-0.0080413818359375,
0.0252838134765625,
0.033905029296875,
-0.040496826171875,
-0.061004638671875,
-0.0487060546875... |
intone/horror_stories_reddit | 2023-10-24T16:16:22.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | intone | null | null | 1 | 8 | 2023-10-24T12:00:59 | ---
task_categories:
- text-generation
- translation
language:
- en
size_categories:
- 1K<n<10K
---
# HSR
HSR is a compilation of 5605 Reddit posts scraped from the following subreddits:
- r/ScaryStories
- r/LetsNotMeet
- r/TwoSentenceHorror
- r/freehorrorstories
- r/TrueScaryStories
- r/NoSleep
- r/Ruleshorror
# HSR Credits
If you are using HSR, you must cite us in your project. This dataset can be used for translation, generative, or conversational models.
Here are a few ideas for what you can use HSR for (a minimal loading sketch follows the list):
- Title-to-story
- Text Generation
- Spooky chats
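This card does not document the column names or splits, so the sketch below only loads the dataset and inspects its schema; the `train` split access is an assumption.

```python
from datasets import load_dataset

# Load the full HSR dataset from the Hugging Face Hub
dataset = load_dataset("intone/horror_stories_reddit")

# The card does not document columns or splits, so inspect them first
print(dataset)  # lists the available splits

# Assumption: a "train" split exists (adjust to the splits printed above)
print(dataset["train"].features)
print(dataset["train"][0])
```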
| 577 | [
[
-0.0243988037109375,
-0.06689453125,
0.021392822265625,
0.011505126953125,
-0.03839111328125,
0.00092315673828125,
0.0260162353515625,
-0.02197265625,
0.041961669921875,
0.049163818359375,
-0.040985107421875,
-0.03155517578125,
-0.0295867919921875,
0.0254669... |
krasaee/nietzsche | 2023-10-26T07:47:27.000Z | [
"region:us"
] | krasaee | null | null | 0 | 8 | 2023-10-24T16:54:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9929433
num_examples: 60480
download_size: 6288420
dataset_size: 9929433
---
# Dataset Card for "nietzsche"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 438 | [
[
-0.05450439453125,
-0.00959014892578125,
0.0335693359375,
0.0114288330078125,
-0.012847900390625,
-0.00968170166015625,
0.0145111083984375,
-0.0136871337890625,
0.06640625,
0.024017333984375,
-0.06646728515625,
-0.051971435546875,
-0.05120849609375,
-0.02615... |
BLACKBUN/imaginary_patient_cases | 2023-10-25T02:10:36.000Z | [
"region:us"
] | BLACKBUN | null | null | 0 | 8 | 2023-10-25T02:10:32 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: train
num_bytes: 2673420
num_examples: 4970
download_size: 680987
dataset_size: 2673420
---
# Dataset Card for "imaginary_patient_cases"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 443 | [
[
-0.034576416015625,
-0.02734375,
0.047027587890625,
0.0089263916015625,
-0.0095672607421875,
0.0012149810791015625,
0.02777099609375,
-0.021148681640625,
0.07037353515625,
0.0303802490234375,
-0.062103271484375,
-0.05072021484375,
-0.0391845703125,
-0.016372... |
phanvancongthanh/bindingdb | 2023-10-25T12:48:32.000Z | [
"region:us"
] | phanvancongthanh | null | null | 0 | 8 | 2023-10-25T12:48:22 | ---
dataset_info:
features:
- name: smiles
dtype: string
splits:
- name: train
num_bytes: 161671433
num_examples: 2498120
download_size: 28050616
dataset_size: 161671433
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bindingdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 | [
[
-0.05609130859375,
-0.01348114013671875,
0.0074462890625,
0.00015747547149658203,
-0.012481689453125,
-0.005702972412109375,
0.01470184326171875,
-0.01300048828125,
0.06512451171875,
0.05059814453125,
-0.05792236328125,
-0.061553955078125,
-0.038330078125,
-... |
Naveengo/codeparrot_10000_rows | 2023-10-25T15:29:00.000Z | [
"region:us"
] | Naveengo | null | null | 0 | 8 | 2023-10-25T15:28:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: copies
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
splits:
- name: train
num_bytes: 130556998.1704905
num_examples: 10000
- name: valid
num_bytes: 6658657.886815172
num_examples: 500
download_size: 52539728
dataset_size: 137215656.05730566
---
# Dataset Card for "codeparrot_10000_rows"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 761 | [
[
-0.0428466796875,
-0.01372528076171875,
0.0014142990112304688,
0.02886962890625,
-0.00800323486328125,
0.003879547119140625,
0.0177154541015625,
0.00988006591796875,
0.0704345703125,
0.0278778076171875,
-0.03436279296875,
-0.0413818359375,
-0.0241241455078125,
... |
imdatta0/mmlu_sample | 2023-10-26T05:47:36.000Z | [
"region:us"
] | imdatta0 | null | null | 0 | 8 | 2023-10-26T05:32:56 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train_1pc
num_bytes: 76328814
num_examples: 56886
- name: train_5pc
num_bytes: 585203496
num_examples: 284544
download_size: 201927295
dataset_size: 661532310
configs:
- config_name: default
data_files:
- split: train_1pc
path: data/train_1pc-*
- split: train_5pc
path: data/train_5pc-*
---
# Dataset Card for "mmlu_1pc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 572 | [
[
-0.044769287109375,
-0.021759033203125,
0.003814697265625,
0.022705078125,
-0.0216827392578125,
-0.0128173828125,
0.032073974609375,
0.0034999847412109375,
0.06793212890625,
0.0240631103515625,
-0.07391357421875,
-0.04315185546875,
-0.036590576171875,
-0.009... |
anlp/anno1_w_elimination | 2023-10-27T05:53:10.000Z | [
"region:us"
] | anlp | null | null | 0 | 8 | 2023-10-27T05:53:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: sentences
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 1239484
num_examples: 917
download_size: 249472
dataset_size: 1239484
---
# Dataset Card for "anno1_w_elimination"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.0572509765625,
-0.03656005859375,
0.0131683349609375,
0.002315521240234375,
-0.02337646484375,
-0.01497650146484375,
0.0111083984375,
-0.020050048828125,
0.064208984375,
0.037994384765625,
-0.07635498046875,
-0.055511474609375,
-0.045440673828125,
0.00416... |
Kabatubare/autotrain-data-1w6s-u4vt-i7yo | 2023-10-27T10:42:10.000Z | [
"region:us"
] | Kabatubare | null | null | 0 | 8 | 2023-10-27T10:42:08 | ---
dataset_info:
features:
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 19109937
num_examples: 23437
- name: validation
num_bytes: 19109937
num_examples: 23437
download_size: 20605004
dataset_size: 38219874
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-1w6s-u4vt-i7yo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 590 | [
[
-0.03564453125,
0.01593017578125,
0.004486083984375,
0.019805908203125,
-0.0211334228515625,
0.006092071533203125,
0.036407470703125,
-0.0037746429443359375,
0.04876708984375,
0.013824462890625,
-0.067138671875,
-0.02783203125,
-0.0318603515625,
-0.007736206... |
aino813/yuho-risk-202303 | 2023-10-28T08:13:00.000Z | [
"region:us"
] | aino813 | null | null | 0 | 8 | 2023-10-28T07:24:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aditijha/instruct_v3_5k | 2023-10-29T14:55:50.000Z | [
"region:us"
] | aditijha | null | null | 0 | 8 | 2023-10-29T14:55:48 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 19654811.27708441
num_examples: 5000
download_size: 11429021
dataset_size: 19654811.27708441
---
# Dataset Card for "instruct_v3_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
[
-0.038116455078125,
0.0027370452880859375,
0.017578125,
0.0207061767578125,
-0.01483917236328125,
-0.01262664794921875,
0.040802001953125,
-0.0241241455078125,
0.0396728515625,
0.046051025390625,
-0.054718017578125,
-0.0555419921875,
-0.034393310546875,
-0.0... |
aditijha/instruct_v3_10k | 2023-10-29T14:56:08.000Z | [
"region:us"
] | aditijha | null | null | 0 | 8 | 2023-10-29T14:56:05 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 39309622.55416882
num_examples: 10000
download_size: 23617961
dataset_size: 39309622.55416882
---
# Dataset Card for "instruct_v3_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 453 | [
[
-0.0372314453125,
-0.00312042236328125,
0.0217742919921875,
0.02972412109375,
-0.011138916015625,
-0.01055145263671875,
0.032379150390625,
-0.01934814453125,
0.04888916015625,
0.047607421875,
-0.051300048828125,
-0.0482177734375,
-0.03924560546875,
-0.008247... |
MylesChew/JAX_FACADE_240 | 2023-10-30T20:29:55.000Z | [
"region:us"
] | MylesChew | null | null | 0 | 8 | 2023-10-30T14:31:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 3848813.0
num_examples: 214
- name: validation
num_bytes: 371632.0
num_examples: 24
download_size: 3438896
dataset_size: 4220445.0
---
# Dataset Card for "JAX_FACADE_240"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 458 | [
[
-0.053009033203125,
-0.01406097412109375,
0.0289306640625,
0.035675048828125,
0.00795745849609375,
0.006984710693359375,
0.0278472900390625,
-0.005340576171875,
0.03076171875,
0.04296875,
-0.05828857421875,
-0.0648193359375,
-0.0179290771484375,
-0.017288208... |
AriaK99/CalChat | 2023-10-30T21:55:16.000Z | [
"region:us"
] | AriaK99 | null | null | 0 | 8 | 2023-10-30T20:21:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
toilaluan/tuned_prompt_ig_db_v1 | 2023-10-31T07:13:06.000Z | [
"region:us"
] | toilaluan | null | null | 0 | 8 | 2023-10-31T07:12:15 | ---
dataset_info:
features:
- name: image
dtype: image
- name: topic
dtype: string
- name: prompt
dtype: string
- name: request_id
dtype: int64
- name: model_type
dtype: string
splits:
- name: train
num_bytes: 852360042.0
num_examples: 18000
download_size: 1308058237
dataset_size: 852360042.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tuned_prompt_ig_db_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.051483154296875,
-0.033843994140625,
0.01104736328125,
0.033966064453125,
-0.022979736328125,
-0.00809478759765625,
0.013336181640625,
0.004680633544921875,
0.05902099609375,
0.033111572265625,
-0.09613037109375,
-0.054229736328125,
-0.022979736328125,
-0... |
Mudditha/Medibot_C | 2023-10-31T08:56:38.000Z | [
"region:us"
] | Mudditha | null | null | 0 | 8 | 2023-10-31T08:52:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
legacy107/newsqa-retrieved-ce | 2023-11-02T06:25:40.000Z | [
"region:us"
] | legacy107 | null | null | 0 | 8 | 2023-10-31T11:47:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: key
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
- name: document_id
dtype: int64
- name: retrieved_context
dtype: string
splits:
- name: train
num_bytes: 603680325
num_examples: 69960
- name: validation
num_bytes: 37107681
num_examples: 4200
- name: test
num_bytes: 36152371
num_examples: 4212
download_size: 92986601
dataset_size: 676940377
---
# Dataset Card for "newsqa-retrieved-ce"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 972 | [
[
-0.042266845703125,
-0.0170745849609375,
0.020263671875,
0.0005693435668945312,
-0.016998291015625,
0.00360107421875,
0.017852783203125,
-0.003368377685546875,
0.06488037109375,
0.0345458984375,
-0.05889892578125,
-0.0684814453125,
-0.040618896484375,
-0.016... |
kenil-samyak/invoices-donut-data-v1 | 2023-10-31T12:32:22.000Z | [
"region:us"
] | kenil-samyak | null | null | 0 | 8 | 2023-10-31T12:30:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 13690093.0
num_examples: 18
- name: test
num_bytes: 1552115.0
num_examples: 2
- name: validation
num_bytes: 1546321.0
num_examples: 2
download_size: 8398831
dataset_size: 16788529.0
---
# Dataset Card for "invoices-donut-data-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 535 | [
[
-0.00910186767578125,
-0.005870819091796875,
0.01236724853515625,
0.00434112548828125,
-0.0130462646484375,
-0.00087738037109375,
0.03375244140625,
-0.005054473876953125,
0.05780029296875,
0.05712890625,
-0.054107666015625,
-0.0462646484375,
-0.038360595703125,
... |
danielz01/neon-trees | 2023-11-01T18:24:41.000Z | [
"region:us"
] | danielz01 | null | null | 0 | 8 | 2023-11-01T06:14:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: path
dtype: string
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: categories
sequence: string
- name: count
dtype: int64
- name: height
dtype: int64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 2836777144.782
num_examples: 2309
download_size: 1943975342
dataset_size: 2836777144.782
---
# Dataset Card for "neon-trees"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 723 | [
[
-0.0306396484375,
-0.0212554931640625,
0.0140380859375,
0.011322021484375,
-0.0125274658203125,
0.019775390625,
0.02349853515625,
-0.0255279541015625,
0.055389404296875,
0.0180816650390625,
-0.057769775390625,
-0.04730224609375,
-0.02032470703125,
-0.0085754... |
marziye-A/dataset-farma-test2 | 2023-11-01T08:05:24.000Z | [
"region:us"
] | marziye-A | null | null | 0 | 8 | 2023-11-01T07:47:57 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: name
dtype: string
splits:
- name: train
num_bytes: 74308914.36
num_examples: 2005
download_size: 72537312
dataset_size: 74308914.36
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset-farma-test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.0309600830078125,
-0.0252838134765625,
0.0014486312866210938,
0.01383209228515625,
-0.004985809326171875,
-0.00547027587890625,
0.0307464599609375,
-0.0212249755859375,
0.04815673828125,
0.018768310546875,
-0.05364990234375,
-0.033599853515625,
-0.04144287109... |
cynefin/llama-2-7b-chat-aave | 2023-11-01T16:51:04.000Z | [
"region:us"
] | cynefin | null | null | 0 | 8 | 2023-11-01T11:02:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
shaaz10/querygst | 2023-11-01T14:26:45.000Z | [
"region:us"
] | shaaz10 | null | null | 0 | 8 | 2023-11-01T13:52:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aminlouhichi/donut5Fournissuer | 2023-11-01T20:04:45.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 8 | 2023-11-01T20:04:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 22887975.0
num_examples: 106
- name: validation
num_bytes: 22887975.0
num_examples: 106
- name: test
num_bytes: 35690926.0
num_examples: 106
download_size: 69740850
dataset_size: 81466876.0
---
# Dataset Card for "donut5Fournissuer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 712 | [
[
-0.0260467529296875,
-0.00266265869140625,
0.016998291015625,
0.0145416259765625,
-0.0034809112548828125,
0.0094146728515625,
0.0182952880859375,
-0.009857177734375,
0.04974365234375,
0.04083251953125,
-0.0594482421875,
-0.04852294921875,
-0.04541015625,
-0.... |
tyzhu/find_first_sent_train_100_eval_10_dec | 2023-11-02T13:53:36.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 8 | 2023-11-02T12:50:53 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: text
dtype: string
splits:
- name: validation
num_bytes: 11337
num_examples: 10
- name: train
num_bytes: 379104
num_examples: 210
download_size: 197674
dataset_size: 390441
---
# Dataset Card for "find_first_sent_train_100_eval_10_dec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 715 | [
[
-0.04144287109375,
-0.0214385986328125,
0.0172271728515625,
0.035980224609375,
-0.01398468017578125,
-0.0093536376953125,
0.0184783935546875,
0.0157012939453125,
0.058837890625,
0.0218353271484375,
-0.07257080078125,
-0.054168701171875,
-0.037353515625,
-0.0... |
midas/duc2001 | 2022-01-23T06:13:06.000Z | [
"region:us"
] | midas | \ | @inproceedings{10.5555/1620163.1620205,
author = {Wan, Xiaojun and Xiao, Jianguo},
title = {Single Document Keyphrase Extraction Using Neighborhood Knowledge},
year = {2008},
isbn = {9781577353683},
publisher = {AAAI Press},
booktitle = {Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2},
pages = {855–860},
numpages = {6},
location = {Chicago, Illinois},
series = {AAAI'08}
} | 1 | 7 | 2022-03-02T23:29:22 | ## Dataset Summary
A dataset for benchmarking keyphrase extraction and generation techniques on English news articles. For more details about the dataset, please refer to the original paper: [https://dl.acm.org/doi/10.5555/1620163.1620205](https://dl.acm.org/doi/10.5555/1620163.1620205)
Original source of the data - []()
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
### Data Splits
|Split| #datapoints |
|--|--|
| Test | 308 |
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/duc2001", "raw")
# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
Sample from test dataset split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Here', ',', 'at', 'a', 'glance', ',', 'are', 'developments', 'today', 'involving', 'the', 'crash', 'of', 'Pan', 'American', 'World', 'Airways', 'Flight', '103', 'Wednesday', 'night', 'in', 'Lockerbie', ',', 'Scotland', ',', 'that', 'killed', 'all', '259', 'people', 'aboard', 'and', 'more', 'than', '20', 'people', 'on', 'the', 'ground', ':']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'I', 'I', 'I', 'I', 'I', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['pan american world airways flight 103', 'crash', 'lockerbie']
Abstractive/absent Keyphrases: ['terrorist threats', 'widespread wreckage', 'radical palestinian faction', 'terrorist bombing', 'bomb threat', 'sabotage']
-----------
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/duc2001", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
# get the dataset only for keyphrase generation
dataset = load_dataset("midas/duc2001", "generation")
print("Samples for Keyphrase Generation")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
```
@inproceedings{10.5555/1620163.1620205,
author = {Wan, Xiaojun and Xiao, Jianguo},
title = {Single Document Keyphrase Extraction Using Neighborhood Knowledge},
year = {2008},
isbn = {9781577353683},
publisher = {AAAI Press},
booktitle = {Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2},
pages = {855–860},
numpages = {6},
location = {Chicago, Illinois},
series = {AAAI'08}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.
| 4,321 | [
[
-0.01220703125,
-0.041595458984375,
0.0292816162109375,
0.0025157928466796875,
-0.0162200927734375,
0.01380157470703125,
-0.0082244873046875,
-0.007534027099609375,
0.00936126708984375,
0.010498046875,
-0.043609619140625,
-0.06451416015625,
-0.034912109375,
... |
midas/krapivin | 2022-01-10T06:52:51.000Z | [
"region:us"
] | midas | \ | @inproceedings{Krapivin2009LargeDF,
title={Large Dataset for Keyphrases Extraction},
author={Mikalai Krapivin and Aliaksandr Autaeu and Maurizio Marchese},
year={2009}
} | 0 | 7 | 2022-03-02T23:29:22 | ## Dataset Summary
A dataset for benchmarking keyphrase extraction and generation techniques from long-document English scientific papers. For more details about the dataset please refer to the original paper - [https://www.semanticscholar.org/paper/Large-Dataset-for-Keyphrases-Extraction-Krapivin-Autaeu/2c56421ff3c2a69894d28b09a656b7157df8eb83](https://www.semanticscholar.org/paper/Large-Dataset-for-Keyphrases-Extraction-Krapivin-Autaeu/2c56421ff3c2a69894d28b09a656b7157df8eb83)
Original source of the data - []()
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: List of all the present keyphrases, i.e. keyphrases that appear verbatim in the document.
- **abstractive_keyphrases**: List of all the absent keyphrases, i.e. keyphrases that do not appear verbatim in the document.
### Data Splits
|Split| #datapoints |
|--|--|
| Test | 2305 |
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/krapivin", "raw")
# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/krapivin", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
# get the dataset only for keyphrase generation
dataset = load_dataset("midas/krapivin", "generation")
print("Samples for Keyphrase Generation")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
```
@inproceedings{Krapivin2009LargeDF,
title={Large Dataset for Keyphrases Extraction},
author={Mikalai Krapivin and Aliaksandr Autaeu and Maurizio Marchese},
year={2009}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.
| 3,274 | [
[
-0.0022125244140625,
-0.03741455078125,
0.0269012451171875,
0.006683349609375,
-0.0211029052734375,
0.0115509033203125,
-0.01427459716796875,
-0.01052093505859375,
0.004581451416015625,
0.01212310791015625,
-0.035736083984375,
-0.054840087890625,
-0.041900634765... |
mrojas/disease | 2021-06-07T18:57:42.000Z | [
"region:us"
] | mrojas | \ | \ | 0 | 7 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
mrojas/family | 2021-06-07T20:59:36.000Z | [
"region:us"
] | mrojas | \ | \ | 0 | 7 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nickmuchi/trade-the-event-finance | 2022-02-04T06:05:02.000Z | [
"region:us"
] | nickmuchi | null | null | 6 | 7 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
superb/superb-data | 2021-07-21T16:04:51.000Z | [
"region:us"
] | superb | null | null | 4 | 7 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wmt/europarl | 2022-12-06T06:53:35.000Z | [
"region:us"
] | wmt | null | null | 1 | 7 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Biomedical-TeMU/CodiEsp_corpus | 2022-03-11T02:24:53.000Z | [
"license:cc-by-4.0",
"region:us"
] | Biomedical-TeMU | null | null | 0 | 7 | 2022-03-11T02:19:32 | ---
license: cc-by-4.0
---
## Introduction
These are the train, development, test and background sets of the CodiEsp corpus. The train and development sets have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF eHealth 2020 (http://temu.bsc.es/codiesp/).
The CodiEsp corpus contains manually coded clinical cases. All documents are in Spanish, and CIE10 (the Spanish version of ICD10-CM and ICD10-PCS) is the coding terminology. The corpus has been randomly sampled into three subsets: the train set contains 500 clinical cases, and the development and test sets contain 250 clinical cases each. The test set is released together with the background set (2751 clinical cases). CodiEsp participants must submit predictions for the test and background sets, but they will only be evaluated on the test set.
## Structure
Three folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.
+ train and dev folders have:
+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.
+ A subfolder named text_files with the plain text files of the clinical cases.
 + A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-split.
+ The test folder has only text_files and text_files_en subfolders with the plain text files.
## Corpus format description
The CodiEsp corpus is distributed in plain text in UTF-8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it.
For the sub-tracks CodiEsp-D and CodiEsp-P, each line of the file has the following tab-separated fields:
`articleID    ICD10-code`
Tab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text reference and its position:
`articleID    label    ICD10-code    text-reference    reference-position`
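For illustration, these annotation files can be read with Python's standard `csv` module. The snippet below is a minimal sketch for a CodiEsp-X file; the file path is an assumption and should be replaced with the actual file shipped in your copy of the corpus.
```python
import csv

# Minimal sketch: read a CodiEsp-X annotation file into dictionaries.
# "train/trainX.tsv" is an assumed path, not a guaranteed file name.
with open("train/trainX.tsv", encoding="utf-8") as f:
    annotations = [
        {
            "articleID": article_id,
            "label": label,
            "ICD10-code": code,
            "text-reference": reference,
            "reference-position": position,
        }
        for article_id, label, code, reference, position in csv.reader(f, delimiter="\t")
    ]
```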
## Corpus summary statistics
The final collection of 1000 clinical cases that make up the corpus contains a total of 16,504 sentences (an average of 16.5 sentences per clinical case) and a total of 396,988 words (an average of 396.2 words per clinical case).
For more information, visit the track webpage: http://temu.bsc.es/codiesp/ | 2,665 | [
[
-0.049163818359375,
-0.02435302734375,
0.0433349609375,
0.0299835205078125,
-0.03173828125,
-0.008544921875,
-0.024658203125,
-0.05377197265625,
0.039794921875,
0.03131103515625,
-0.034027099609375,
-0.064208984375,
-0.05609130859375,
0.0253753662109375,
... |
huggan/few-shot-anime-face | 2022-04-12T14:08:09.000Z | [
"arxiv:2101.04775",
"region:us"
] | huggan | null | null | 0 | 7 | 2022-04-01T11:42:03 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 676 | [
[
-0.038330078125,
-0.055694580078125,
0.0012407302856445312,
0.0233306884765625,
-0.006511688232421875,
-0.01244354248046875,
-0.0055999755859375,
-0.0200042724609375,
0.0054931640625,
-0.0029010772705078125,
-0.024627685546875,
-0.023834228515625,
-0.02734375,
... |
huggan/few-shot-pokemon | 2022-04-12T14:06:36.000Z | [
"arxiv:2101.04775",
"region:us"
] | huggan | null | null | 3 | 7 | 2022-04-01T11:56:00 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 676 | [
[
-0.038360595703125,
-0.055694580078125,
0.0012750625610351562,
0.0233154296875,
-0.0065155029296875,
-0.012420654296875,
-0.00557708740234375,
-0.0200042724609375,
0.00550079345703125,
-0.0029239654541015625,
-0.024627685546875,
-0.0238189697265625,
-0.02734375,... |
iluvvatar/RuREBus | 2023-03-30T13:37:32.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | iluvvatar | null | null | 1 | 7 | 2022-04-10T09:52:30 | ---
language:
- ru
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: RuREBus
---
# RuREBus dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
RuREBus dataset (https://github.com/dialogue-evaluation/RuREBus) is
a Russian dataset for named entity recognition and relation extraction.
## Dataset Structure
There are two subsets of the dataset.
Using
`load_dataset('MalakhovIlya/RuREBus')`
you can download the annotated data (a DatasetDict) for the named entity recognition and
relation extraction tasks.
This subset consists of two splits: "train" and "test".
Using
`load_dataset('MalakhovIlya/RuREBus', 'raw_txt')['raw_txt']`
you can download a large corpus (~3 GB) of raw texts (a Dataset) from the same subject
area, but without any annotations.
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is a position of the first symbol of entity in text,
`<stop>` is the last symbol position in text +1.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
## Citation Information
```
@inproceedings{rurebus,
  Address = {Moscow, Russia},
  Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},
  Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},
  Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},
  Year = {2020}
}
```
| 2,269 | [
[
-0.0196685791015625,
-0.051605224609375,
0.0223846435546875,
0.0300750732421875,
-0.031951904296875,
-0.008697509765625,
-0.0157318115234375,
-0.043792724609375,
0.0164337158203125,
0.02740478515625,
-0.0482177734375,
-0.03973388671875,
-0.022979736328125,
0... |
student/CIFAR-10 | 2022-04-16T03:50:36.000Z | [
"region:us"
] | student | null | null | 0 | 7 | 2022-04-16T03:39:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pietrolesci/glue_diagnostics | 2022-04-21T16:51:56.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 7 | 2022-04-21T16:46:38 | ## Overview
Original dataset available [here](https://gluebenchmark.com/diagnostics).
## Dataset curation
Empty values in the columns "lexical semantics", "predicate-argument structure",
"logic", and "knowledge" are filled with the empty string `""`.
Labels are encoded as follows
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
## Code to create dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset
df = pd.read_csv("<path to file>/diagnostic-full.tsv", sep="\t")
# column names to lower
df.columns = df.columns.str.lower()
# fill na
assert df["label"].isna().sum() == 0
df = df.fillna("")
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"lexical semantics": Value(dtype="string", id=None),
"predicate-argument structure": Value(dtype="string", id=None),
"logic": Value(dtype="string", id=None),
"knowledge": Value(dtype="string", id=None),
"domain": Value(dtype="string", id=None),
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
dataset = Dataset.from_pandas(df, features=features)
dataset.push_to_hub("glue_diagnostics", token="<token>", split="test")
```
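Once pushed, the dataset can be loaded back from the Hub. A minimal usage sketch, assuming the repository name matches the `push_to_hub` call above:
```python
from datasets import load_dataset

# Load the published dataset back from the Hub (repo name is an assumption).
ds = load_dataset("pietrolesci/glue_diagnostics", split="test")
print(ds.features["label"].names)  # ['entailment', 'neutral', 'contradiction']
```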
| 1,373 | [
[
-0.0301361083984375,
-0.0538330078125,
0.0185546875,
0.01299285888671875,
-0.016998291015625,
-0.005645751953125,
-0.0131072998046875,
0.004268646240234375,
0.0333251953125,
0.02679443359375,
-0.03704833984375,
-0.0653076171875,
-0.041839599609375,
0.0099716... |
h4iku/coconut_java2006_preprocessed | 2022-04-21T20:04:55.000Z | [
"region:us"
] | h4iku | null | null | 0 | 7 | 2022-04-21T19:16:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pietrolesci/joci | 2022-04-25T13:33:08.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 7 | 2022-04-25T13:32:52 | ## Overview
Original dataset available [here](https://github.com/sheng-z/JOCI/tree/master/data).
This dataset is the "full" JOCI dataset, which is the file named `joci.csv.zip`.
# Dataset curation
The following processing is applied,
- `label` column renamed to `original_label`
- creation of the `label` column using the following mapping, in line with common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/joci.py#L22-L27), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_joci.py#L7-L12))
```
{
0: "contradiction",
1: "contradiction",
2: "neutral",
3: "neutral",
4: "neutral",
5: "entailment",
}
```
- finally, converting this to the usual NLI classes, that is `{"entailment": 0, "neutral": 1, "contradiction": 2}`
## Code to create dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset
# read data
df = pd.read_csv("<path to folder>/joci.csv")
# column name to lower
df.columns = df.columns.str.lower()
# rename label column
df = df.rename(columns={"label": "original_label"})
# encode labels
df["label"] = df["original_label"].map({
0: "contradiction",
1: "contradiction",
2: "neutral",
3: "neutral",
4: "neutral",
5: "entailment",
})
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"context": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"original_label": Value(dtype="int32"),
"context_from": Value(dtype="string"),
"hypothesis_from": Value(dtype="string"),
"subset": Value(dtype="string"),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("joci", token="<token>")
```
| 1,945 | [
[
-0.0240631103515625,
-0.044830322265625,
0.01459503173828125,
0.015411376953125,
-0.012481689453125,
-0.0209808349609375,
-0.0270843505859375,
-0.016815185546875,
0.03271484375,
0.04815673828125,
-0.037933349609375,
-0.0611572265625,
-0.045196533203125,
0.02... |
NLPC-UOM/Writing-style-classification | 2022-10-25T10:12:46.000Z | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:si",
"license:mit",
"region:us"
] | NLPC-UOM | null | null | 0 | 7 | 2022-04-27T18:08:07 | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- si
license:
- mit
multilinguality:
- monolingual
pretty_name: sinhala-writing-style-classification
size_categories: []
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
This file contains news texts (sentences) belonging to different writing styles. The original dataset, created by {*Upeksha, D., Wijayarathna, C., Siriwardena, M.,
Lasandun, L., Wimalasuriya, C., de Silva, N., and Dias, G. (2015). Implementing a corpus for Sinhala language. 01*}, has been processed and cleaned.
If you use this dataset, please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} and the above-mentioned paper.
[
0.00698089599609375,
-0.054443359375,
0.007305145263671875,
0.0341796875,
-0.041290283203125,
0.0004191398620605469,
-0.040252685546875,
-0.0187530517578125,
0.036773681640625,
0.065185546875,
-0.04412841796875,
-0.0298614501953125,
-0.02740478515625,
0.0311... |
anuragshas/ur_opus100_processed_cv9 | 2022-05-10T16:34:49.000Z | [
"region:us"
] | anuragshas | null | null | 0 | 7 | 2022-05-10T16:34:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
EMBO/sd-nlp-non-tokenized | 2023-01-19T10:12:45.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categorie... | EMBO | This dataset is based on the SourceData database and is intented to facilitate training of NLP tasks in the cell and molecualr biology domain. | @Unpublished{
huggingface: dataset,
title = {SourceData NLP},
authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO},
year={2021}
} | 0 | 7 | 2022-05-17T12:34:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- token-classification
- text-classification
task_ids:
- multi-class-classification
- named-entity-recognition
- parsing
---
# Dataset Card for sd-nlp-non-tokenized
## Table of Contents
- [Dataset Card for [EMBO/sd-nlp-non-tokenized]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org
### Dataset Summary
This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
Unlike the dataset [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp), which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not pre-tokenized but only split into words. Users can therefore use it to fine-tune other models.
Additional details at https://github.com/source-data/soda-roberta
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (B-PANEL_START) of these segments and allows training for the recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL`: cell types and cell lines.
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `DISEASE`: diseases (see limitations)
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the measured variables and that are the object of the measurements.
`BORING`: entities are marked with the tag `BORING` when they are of more descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' geneproducts, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc...).
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
```json
{
"words": [
".", "Figure", "6", "(", "A", ")", "Cisplatin", "dose", "response", "curves", "of", "(", "i", ")", "MB002", ",", "(", "ii", ")", "Daoy", ",", "and", "(", "iii", ")", "MIC", "in", "the", "absence", "(", "EV", ")", "or", "presence", "of", "SOX9", "by", "Alamar", "blue", ".", "Cells", "were", "pre", "-", "conditioned", "with", "doxycycline", "to", "induce", "expression", "of", "SOX9", "(", "or", "EV", ")", "prior", "to", "treatment", "with", "increasing", "concentrations", "of", "cisplatin", ".", "The", "IC50", "were", "calculated", "following", "5", "(", "MB002", "and", "MIC", ")", "or", "3", "days", "(", "Daoy", ")", "of", "treatment", ".", "Data", "are", "mean", "+", "standard", "deviation", "from", "3", "independent", "repeats", ",", "each", "containing", "5", "technical", "replicates", ".", "(", "B", ")", "Cisplatin", "dose", "response", "curves", "of", "SOX9", "-", "expressing", "(", "i", ")", "Daoy", "and", "(", "ii", ")", "MIC", "in", "the", "absence", "or", "presence", "of", "FBW7\u03b1", ".", "Experiments", "and", "data", "analysis", "were", "performed", "as", "described", "in", "(", "A", ")", "(", "C", ")", "Overall", "survival", "analysis", "of", "mice", "bearing", "Daoy", "or", "Daoy", "-", "expressing", "dox", "-", "inducible", "SOX9", "treated", "with", "cisplatin", ".", "The", "dox", "-", "preconditioned", "cells", "(", "105", "cells", ")", "were", "orthotopically", "xenografted", "to", "Nude", "-", "Foxn1nu", "mice", "and", "left", "for", "1", "week", "to", "prior", "to", "being", "treated", "with", "vehicle", "control", "or", "cisplatin", "(", "2mg", "/", "kg", ")", "intraperitoneally", "for", "every", "other", "day", "for", "a", "total", "of", "6", "doses", ".", "(", "D", ")", "Heat", "map", "of", "the", "row", "-", "wise", "z", "-", "scores", "of", "11", "genes", "associated", "with", "cisplatin", "resistance", "in", "MB002", "expressing", "Sox9", "-", "WT", "or", "Sox9", "-", "T236", "/", "T240A", ".", "Heat", "map", "was", "generated", "using", "the", "GenePattern", "software", ".", "(", "E", ")", "Quantitative", "analysis", "of", "ATP7A", ",", "DUSP2", ",", "and", "TTK", "mRNAs", "in", "MB002", "following", "expression", "of", "SOX9", "-", "WT", "or", "SOX9", "-", "T236", "/", "240A", ".", "Total", "RNA", "were", "collected", "24", "hours", "following", "doxycycline", "treatment", ",", "from", "which", "cDNA", "were", "generated", "for", "qPCR", ".", "Data", "are", "mean", "mRNA", "level", "(", "normalized", "to", "B2M", "transcript", ")", "+", "standard", "deviation", "from", "3", "independent", "experiments", "with", "statistical", "significance", "were", "determined", "by", "Multiple", "comparisons", "2", "-", "way", "ANOVA", "with", "Bonferroni", "'", "s", "post", "-", "test", ".", "(", "F", ")", "Time", "course", "western", "blotting", "of", "HA", "-", "SOX9", ",", "ATP7A", ",", "DUSP2", ",", "ERK1", "/", "2", "pThr202", "/", "Tyr204", "and", "total", "ERK1", "/", "2", "in", "MB002", "cells", "following", "doxycycline", "induction", "of", "either", "EV", ",", "SOX9", "-", "WT", "or", "SOX9", "-", "T236", "/", "240A", ".", "GAPDH", "was", "used", "as", "a", "loading", "control", "."
],
"panel_id": "12345",
"label_ids": {
"entity_types": [
"O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "B-CELL", "O", "B-CELL", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "O", "O", "B-ORGANISM", "O", "B-CELL", "O", "B-CELL", "O", "O", "B-SMALL_MOLECULE", "O", "O", "B-GENEPROD", "O", "O", "B-SMALL_MOLECULE", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "B-GENEPROD", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "B-CELL", "O", "B-GENEPROD", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "O", "B-GENEPROD", "O", "O", "B-CELL", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "B-CELL", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O"
],
"geneprod_roles": [
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "O", "O", "B-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
],
"boring": [
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O"
],
"panel_start": [
"O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
],
"small_mol_roles": ["O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]
}
}
```
### Data Fields
- `words`: `list` of `strings` text tokenized into words.
- `panel_id`: ID of the panel to which the example belongs to in the SourceData database.
- `label_ids`:
- `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible value in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `geneprod_roles`: `list` of `strings` for the IOB2 tags for experimental roles; values in `["O", "I-CONTROLLED_VAR", "B-CONTROLLED_VAR", "I-MEASURED_VAR", "B-MEASURED_VAR"]`
- `boring`: `list` of `strings` for IOB2 tags for entities unrelated to causal design; values in `["O", "I-BORING", "B-BORING"]`
- `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`
- `small_mol_roles`: `list` of `strings` for IOB2 tags showing whether the entity is the variable being measured or the control variable `["O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "B-MEASURED_VAR", "I-MEASURED_VAR",]`
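As an example of how these fields fit together, the `panel_start` tags can be used to split a tokenized legend back into panel legends. The function below is a minimal sketch that assumes the instance layout shown above (`words` aligned with `label_ids["panel_start"]`):
```python
def split_panels(example):
    """Split a tokenized figure legend into panel legends using B-PANEL_START tags."""
    panels, current = [], []
    for word, tag in zip(example["words"], example["label_ids"]["panel_start"]):
        if tag == "B-PANEL_START" and current:  # a new panel starts: close the previous one
            panels.append(" ".join(current))
            current = []
        current.append(word)
    if current:  # flush the final panel
        panels.append(" ".join(current))
    return panels
```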
### Data Splits
- train:
- features: ['words', 'labels', 'tag_mask', 'panel_id'],
- num_rows: 50_198
- validation:
- features: ['words', 'labels', 'tag_mask', 'panel_id'],
- num_rows: 5_946
- test:
- features: ['words', 'labels', 'tag_mask', 'panel_id'],
- num_rows: 6_222
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train models for text segmentation, named entity recognition and semantic role labeling.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from figure legends of scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually with expert curators from the SourceData project (https://sourcedata.embo.org)
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org)
Disease annotations were added to the dataset only recently. Although they appear, their number is very low and they are not consistently tagged throughout the entire dataset.
We recommend using the disease annotations only after filtering for the examples that contain them.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
Jorge Abreu Vicente, EMBO
### Licensing Information
CC BY 4.0
### Citation Information
We are currently working on a paper to present the dataset. It is expected to be ready by spring 2023. In the meantime, the following paper should be cited.
```latex
@article {Liechti2017,
author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas},
title = {SourceData - a semantic platform for curating and searching figures},
year = {2017},
volume = {14},
number = {11},
doi = {10.1038/nmeth.4471},
URL = {https://doi.org/10.1038/nmeth.4471},
eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf},
journal = {Nature Methods}
}
```
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset.
| 23,330 | [
[
-0.041473388671875,
-0.03753662109375,
0.0211944580078125,
0.026336669921875,
-0.0152130126953125,
0.0166778564453125,
-0.006504058837890625,
-0.0014162063598632812,
0.049285888671875,
0.0338134765625,
-0.0499267578125,
-0.065673828125,
-0.0439453125,
0.0419... |
bigscience-data/roots_id_indonesian_news_articles_2017 | 2022-12-12T11:05:35.000Z | [
"language:id",
"license:cc0-1.0",
"region:us"
] | bigscience-data | null | null | 2 | 7 | 2022-05-18T09:14:12 | ---
language: id
license: cc0-1.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indonesian_news_articles_2017
# Indonesian News Articles 2017
- Dataset uid: `indonesian_news_articles_2017`
### Description
Indonesian news articles published in 2017. Each record contains the published date, content, title, and source.
### Homepage
kaggle.com/aashari/indonesian-news-articles-published-at-2017
### Licensing
- public domain
- cc0-1.0: Creative Commons Zero v1.0 Universal
CC0: Public Domain
### Speaker Locations
- Asia
- Indonesia
### Sizes
- 0.0688 % of total
- 26.1751 % of id
### BigScience processing steps
#### Filters applied to: id
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
| 1,001 | [
[
-0.0158843994140625,
-0.048095703125,
0.0272064208984375,
0.019378662109375,
-0.0452880859375,
-0.008453369140625,
-0.005985260009765625,
0.004425048828125,
0.042938232421875,
0.0299530029296875,
-0.053009033203125,
-0.06414794921875,
-0.049346923828125,
0.0... |
bigscience-data/roots_pt_wikipedia | 2022-12-12T11:15:43.000Z | [
"language:pt",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | 0 | 7 | 2022-05-18T09:19:00 | ---
language: pt
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_pt_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
| 3,635 | [
[
-0.044891357421875,
-0.03857421875,
0.0227813720703125,
0.012359619140625,
-0.01434326171875,
-0.005931854248046875,
-0.01453399658203125,
-0.00994110107421875,
0.045074462890625,
0.021484375,
-0.052520751953125,
-0.059234619140625,
-0.046295166015625,
0.032... |
scoup123/tr_movie_reviews_training | 2022-05-21T18:03:05.000Z | [
"license:other",
"region:us"
] | scoup123 | null | null | 0 | 7 | 2022-05-20T17:34:16 | ---
license: other
annotations_creators:
- found
language_creators:
- found
languages:
- tr
licenses:
- unknown
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: turkish_movie_reviews
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
--- | 359 | [
[
-0.03533935546875,
-0.0247802734375,
0.0178070068359375,
0.045074462890625,
-0.05059814453125,
0.005397796630859375,
-0.00881195068359375,
-0.02825927734375,
0.038238525390625,
0.054779052734375,
-0.043212890625,
-0.05255126953125,
-0.06365966796875,
0.03295... |
taesiri/GamePhysics_Grand_Theft_Auto_V | 2022-05-26T06:00:19.000Z | [
"region:us"
] | taesiri | A test dataset for GamePhysics | @article{taesiri2022clip,
title={CLIP meets GamePhysics: Towards bug identification in gameplay videos using zero-shot transfer learning},
author={Taesiri, Mohammad Reza and Macklon, Finlay and Bezemer, Cor-Paul},
journal={arXiv preprint arXiv:2203.11096},
year={2022}
} | 3 | 7 | 2022-05-26T05:43:59 | ---
annotations_creators:
- no-annotation
languages:
- en
---
# Dataset Card for GamePhysics_Grand_Theft_Auto_V
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://asgaardlab.github.io/CLIPxGamePhysics/
- **Repository:** https://github.com/asgaardlab/CLIPxGamePhysics
- **Paper:** CLIP meets GamePhysics
- **Leaderboard:** [N/A]
- **Point of Contact:** [Mohammad Reza Taesiri](mailto:mtaesiri@gmail.com)
### Dataset Summary
The GamePhysics Grand Theft Auto V dataset is a small dataset of buggy gameplay videos from the Grand Theft Auto V game, collected from the [GamePhysics](https://www.reddit.com/r/GamePhysics/) subreddit.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 2,668 | [
[
-0.041900634765625,
-0.04571533203125,
0.0269775390625,
0.02301025390625,
-0.0189208984375,
0.0196533203125,
-0.021392822265625,
-0.0321044921875,
0.03839111328125,
0.040771484375,
-0.07427978515625,
-0.07269287109375,
-0.044525146484375,
-0.0020408630371093... |
Adapting/empathetic_dialogues_v2 | 2022-06-21T17:56:26.000Z | [
"license:afl-3.0",
"region:us"
] | Adapting | null | null | 5 | 7 | 2022-06-06T08:22:16 | ---
license: afl-3.0
---
A refined version of the empathetic dialogues dataset from https://huggingface.co/datasets/empathetic_dialogues,
with labeled chat history, system responses, a question-or-not flag, and behavior labels.
| 199 | [
[
-0.028717041015625,
-0.07550048828125,
0.01953125,
0.0267181396484375,
0.00876617431640625,
0.0031566619873046875,
-0.0004968643188476562,
-0.007541656494140625,
0.07037353515625,
0.053436279296875,
-0.08935546875,
-0.026885986328125,
-0.0207672119140625,
0.... |
rungalileo/mit_movies_fixed_connll_format | 2022-10-25T18:39:27.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | rungalileo | null | null | 0 | 7 | 2022-06-07T19:04:54 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MIT_movies_fixed
---
# Dataset Card for MIT_movies_fixed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Galileo Homepage:** [Galileo ML Data Intelligence Platform](https://www.rungalileo.io)
- **Repository:** [Needs More Information]
- **Dataset Blog:** [Improving Your ML Datasets With Galileo, Part 2](https://www.rungalileo.io/blog/improving-your-ml-datasets-part-2-ner)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
- **MIT movies Homepage:** [newsgroups homepage](https://groups.csail.mit.edu/sls/downloads/)
### Dataset Summary
This dataset is a corrected version of the [**MIT movies**](https://groups.csail.mit.edu/sls/downloads/) dataset, with roughly 4% of the annotations fixed as described below.
### Curation Rationale
This dataset was created to showcase the power of Galileo as a Data Intelligence Platform. Through Galileo, we identify critical error patterns within the original MIT movies dataset: annotation errors, ill-formed samples, etc. Moreover, we observe that these errors permeate the test dataset. As a result of our analysis, we fix 4% of the dataset by re-annotating the samples, and provide the dataset for NER research. To learn more about the process of fixing this dataset, please refer to our [**Blog**](https://www.rungalileo.io/blog/improving-your-ml-datasets-part-2-ner).
## Dataset Structure
### Data Instances
Samples are separated by blank lines; within each sample, every row is tab-separated and contains a word and its corresponding NER tag. This dataset uses the BIOES tagging scheme.
An example from the dataset looks as follows:
```
show O
me O
a O
movie O
about O
cars B-PLOT
that I-PLOT
talk E-PLOT
```
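For convenience, here is a minimal parsing sketch (not part of the original card; the file path is a placeholder) that reads this blank-line-separated, tab-separated format into `(words, tags)` samples:
```python
# Minimal sketch: parse blank-line-separated, tab-separated BIOES data.
# "train.txt" is a placeholder path, not a file name from this card.
def read_bioes(path):
    samples, words, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:  # a blank line ends the current sample
                if words:
                    samples.append((words, tags))
                    words, tags = [], []
            else:
                word, tag = line.split("\t")
                words.append(word)
                tags.append(tag)
    if words:  # flush a final sample if the file lacks a trailing blank line
        samples.append((words, tags))
    return samples

samples = read_bioes("train.txt")
```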
### Data Splits
The data is split into a training and test split. The training data has ~9700 samples and the test data has ~2700 samples.
### Data Classes
The dataset contains the following 12 classes: ACTOR, YEAR, TITLE, GENRE, DIRECTOR, SONG, PLOT, REVIEW, CHARACTER, RATING, RATINGS_AVERAGE, and TRAILER. Some classes have high semantic overlap (e.g., RATING/RATINGS_AVERAGE and ACTOR/DIRECTOR). | 3,304 | [embeddings omitted] |
gcaillaut/frwiki_el | 2022-09-28T08:52:12.000Z | [
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:fr",
"license:wtfpl",
"region:us"
] | gcaillaut | French Wikipedia dataset for Entity Linking | null | 1 | 7 | 2022-06-15T09:37:40 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- fr
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: French Wikipedia dataset for Entity Linking
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
---
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [frwiki_el](https://github.com/GaaH/frwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto:g.caillaut@brgm.fr)
### Dataset Summary
This dataset contains articles from the French Wikipedia.
It is intended to be used to train Entity Linking (EL) systems; links in articles are used to detect named entities.
The `frwiki` dataset contains the sentences of each Wikipedia page.
The `entities` dataset contains a description of each Wikipedia page.
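A minimal loading sketch, assuming the two parts are exposed as configurations named `frwiki` and `entities` (names taken from the description above):
```python
from datasets import load_dataset

# Config names follow the description above; adjust if the repository differs.
frwiki = load_dataset("gcaillaut/frwiki_el", "frwiki")
entities = load_dataset("gcaillaut/frwiki_el", "entities")
```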
### Languages
- French
## Dataset Structure
### frwiki
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"sentences" : [
{
"text": "text of the current sentence",
"ner": ["list", "of", "ner", "labels"],
"mention_mappings": [
        [start_of_first_mention, end_of_first_mention],
        [start_of_second_mention, end_of_second_mention]
],
"el_wikidata_id": ["wikidata id of first mention", "wikidata id of second mention"],
"el_wikipedia_id": [wikipedia id of first mention, wikipedia id of second mention],
"el_wikipedia_title": ["wikipedia title of first mention", "wikipedia title of second mention"]
}
  ],
"words": ["words", "in", "the", "sentence"],
"ner": ["ner", "labels", "of", "each", "words"],
"el": ["el", "labels", "of", "each", "words"]
}
```
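For illustration, a hedged sketch of consuming the entity-linking fields above. It assumes `mention_mappings` holds character offsets into `text`; verify this against the actual data before relying on it:
```python
# Hypothetical traversal of one record; field names follow the schema above,
# and character-offset semantics for mention_mappings are an assumption.
def iter_mentions(record):
    for sentence in record["sentences"]:
        spans = sentence["mention_mappings"]
        titles = sentence["el_wikipedia_title"]
        for (start, end), title in zip(spans, titles):
            yield sentence["text"][start:end], title
```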
### entities
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"description": "Description of the entity"
}
```
| 2,325 | [embeddings omitted] |
nateraw/airbnb-stock-price | 2022-06-16T21:10:27.000Z | [
"region:us"
] | nateraw | null | null | 0 | 7 | 2022-06-16T21:10:24 | Entry not found | 15 | [embeddings omitted] |
bengaliAI/CommonVoiceBangla | 2022-07-01T00:46:28.000Z | [
"license:cc0-1.0",
"region:us"
] | bengaliAI | null | null | 4 | 7 | 2022-06-17T12:07:13 | ---
license: cc0-1.0
---
To load the Common Voice Bangla dataset directly with the `datasets` library, run:
1. `from datasets import load_dataset`
2. `dataset = load_dataset("bengaliAI/CommonVoiceBangla", "bn", delimiter='\t')`
| 234 | [embeddings omitted] |
BeIR/nq-generated-queries | 2022-10-23T06:15:15.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | 0 | 7 | 2022-06-17T13:20:26 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
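With the official `beir` package, files laid out as above can be loaded roughly as follows (a sketch based on the BEIR repository's quickstart; the local `datasets` folder is a placeholder):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one BEIR dataset, then load corpus.jsonl,
# queries.jsonl and the test-split qrels from the unpacked folder.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```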
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | 13,988 | [embeddings omitted] |
FacePerceiver/laion-face | 2022-11-18T04:04:56.000Z | [
"region:us"
] | FacePerceiver | null | null | 15 | 7 | 2022-06-21T13:28:35 | # Laion-Face
[LAION-Face](https://github.com/FacePerceiver/LAION-Face) is the human-face subset of [LAION-400M](https://laion.ai/laion-400-open-dataset/); it consists of 50 million image-text pairs. Face detection was used to find images containing faces. Apart from the 50-million full set (LAION-Face 50M), there is a 20-million subset (LAION-Face 20M) for fast evaluation.
LAION-Face was first used as the training set of [FaRL](https://github.com/FacePerceiver/FaRL), which provides powerful pre-trained transformer backbones for face analysis tasks.
For more details, please check the official repo at https://github.com/FacePerceiver/LAION-Face .
## Download and convert metadata
```bash
wget -l1 -r --no-parent https://the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-meta/
mv the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-meta/ .
wget https://huggingface.co/datasets/FacePerceiver/laion-face/resolve/main/laion_face_ids.pth
wget https://raw.githubusercontent.com/FacePerceiver/LAION-Face/master/convert_parquet.py
python convert_parquet.py ./laion_face_ids.pth ./laion400m-meta ./laion_face_meta
```
## Download the images with img2dataset
Once the metadata is ready, you can start downloading the images.
```bash
wget https://raw.githubusercontent.com/FacePerceiver/LAION-Face/master/download.sh
bash download.sh ./laion_face_meta ./laion_face_data
```
Please be patient: this command may run for days and consume about 2 TB of disk space while it downloads 50 million image-text pairs as 32 parts.
- To use the **LAION-Face 50M**, you should use all the 32 parts.
- To use the **LAION-Face 20M**, you should use these parts.
```
0,2,5,8,13,15,17,18,21,22,24,25,28
```
Check out `download.sh` and [img2dataset](https://github.com/rom1504/img2dataset) for more details and parameter settings.
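If you prefer calling img2dataset from Python rather than through `download.sh`, a sketch could look like the following; the URL/caption column names follow the LAION-400M parquet convention and are assumptions here, as are the paths:
```python
from img2dataset import download

# Hedged sketch: paths are placeholders, and the URL/TEXT column names
# may differ for the laion_face_meta parquet files.
download(
    url_list="./laion_face_meta",
    input_format="parquet",
    url_col="URL",
    caption_col="TEXT",
    output_folder="./laion_face_data",
    output_format="webdataset",
    image_size=256,
    processes_count=16,
    thread_count=32,
)
```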
| 1,831 | [embeddings omitted] |
nateraw/lung-cancer | 2022-10-25T10:32:46.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | nateraw | null | null | 1 | 7 | 2022-06-21T23:57:00 | ---
kaggle_id: nancyalaswad90/lung-cancer
license:
- cc-by-nc-sa-4.0
---
# Dataset Card for Lung Cancer
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/nancyalaswad90/lung-cancer
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
An effective cancer-prediction system helps people learn their cancer risk at low cost and supports appropriate decisions based on that risk. The data were collected from an online lung cancer prediction system website.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@nancyalaswad90](https://kaggle.com/nancyalaswad90)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | 2,884 | [embeddings omitted] |
mustapha/QuranExe | 2022-07-20T15:33:24.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:sentence-similarity",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n... | mustapha | null | null | 5 | 7 | 2022-06-25T07:07:28 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- ar
license:
- mit
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: QuranExe
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- sentence-similarity
task_ids:
- language-modeling
- masked-language-modeling
---
## Dataset Description
- **Size of downloaded dataset files:** 126 MB
This dataset contains the exegeses/tafsirs (تفسير القرآن) of the Holy Quran in Arabic by 8 exegetes.
This is a non-official dataset; it has been scraped from the Quran.com API.
The dataset contains `49888` records with over 14 million words (`8` records per Quranic verse).
Usage example:
```python
from datasets import load_dataset
tafsirs = load_dataset("mustapha/QuranExe")
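# Hypothetical next step (the "train" split name is an assumption, not from the card):
print(tafsirs["train"][0])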
```
| 858 | [embeddings omitted] |
jvanz/portuguese_wikipedia_sentences | 2022-06-27T16:36:05.000Z | [
"region:us"
] | jvanz | null | null | 1 | 7 | 2022-06-27T03:13:49 | Entry not found | 15 | [embeddings omitted] |
arize-ai/xtreme_en | 2022-07-01T17:23:29.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|xtreme",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
fully composed of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | 0 | 7 | 2022-06-30T19:48:47 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: named-entity-recognition-en-no-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|xtreme
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
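A minimal loading sketch for this repository (no split names are assumed, since the card does not document them):
```python
from datasets import load_dataset

# Prints the available splits and features for inspection.
dataset = load_dataset("arize-ai/xtreme_en")
print(dataset)
```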
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | 3,341 | [embeddings omitted] |
arize-ai/xtreme_en_token_drift | 2022-07-01T17:25:34.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|xtreme",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
fully composed of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | 1 | 7 | 2022-06-30T21:08:01 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: named-entity-recognition-en-no-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|xtreme
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | 3,341 | [embeddings omitted] |