id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mlabonne/Evol-Instruct-Python-1k | mlabonne | 2023-08-25T16:31:50Z | 60 | 1 | null | [
"region:us"
] | 2023-08-25T16:31:50Z | 2023-08-25T16:28:23.000Z | 2023-08-25T16:28:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 5465833
num_examples: 1000
download_size: 2322359
dataset_size: 5465833
---
# Evol-Instruct-Python-1k
Subset of the [`mlabonne/Evol-Instruct-Python-26k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-26k) dataset with only 1000 samples.
It was built by filtering out the few rows (instruction + output) with more than 2,048 tokens, then keeping the 1,000 longest remaining samples.
Here is the distribution of the number of tokens in each row using Llama's tokenizer:
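The two-step subsetting described above can be sketched as follows; `count_tokens` is a hypothetical stand-in for Llama's tokenizer, which the real pipeline used:

```python
# Sketch of the subsetting procedure: drop over-long rows, keep the longest.
# count_tokens is a hypothetical stand-in for Llama's tokenizer.

def count_tokens(row):
    # Whitespace tokens over instruction + output (stand-in for real BPE counts).
    return len((row["instruction"] + " " + row["output"]).split())

def make_subset(rows, max_tokens=2048, keep=1000):
    # Filter out rows whose instruction + output exceed the token budget...
    kept = [r for r in rows if count_tokens(r) <= max_tokens]
    # ...then keep the `keep` longest remaining samples.
    kept.sort(key=count_tokens, reverse=True)
    return kept[:keep]

rows = [{"instruction": "a " * n, "output": "b"} for n in (10, 3000, 50)]
subset = make_subset(rows, max_tokens=2048, keep=2)  # the 3000-token row is dropped
```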
 | [
-0.6010487675666809,
-0.5371569991111755,
0.19644279778003693,
0.3548283874988556,
-0.4323570728302002,
0.0020183834712952375,
0.1865091621875763,
-0.23114506900310516,
0.6354524493217468,
0.4530154764652252,
-0.7059121131896973,
-0.5998737812042236,
-0.37115129828453064,
0.398549586534500... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rizerphe/glaive-function-calling-v2-llama | rizerphe | 2023-09-05T12:51:42Z | 60 | 11 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-09-05T12:51:42Z | 2023-09-04T09:32:26.000Z | 2023-09-04T09:32:26 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 228119663
num_examples: 103091
download_size: 95393598
dataset_size: 228119663
pretty_name: a
---
# Glaive's Function Calling V2 for Llama2
[Glaive's Function Calling V2 dataset](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), formatted according to the Llama2 chat schema, with all the data that I wasn't able to automatically convert removed manually.
Adds a special `<function>` token. Here's an example prompt:
```
<s>[INST] <<SYS>>
<function>Available functions:
<function>{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}
<</SYS>>
I need a new password. Can you generate one for me? [/INST] Of course! How long would you like your password to be? And do you want it to include numbers and special characters?</s><s>[INST] I want it to be 12 characters long and yes, it should include both numbers and special characters. [/INST]<function>generate_password
{
"length": 12,
"include_numbers": true,
"include_special_characters": true
}</s><s>[INST] <function>{"password": "4#7gB6&9L1!0"} [/INST] Here is your new password: 4#7gB6&9L1!0. Please make sure to save it in a secure place.</s>
``` | [
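For illustration, a prompt in this format might be assembled along the following lines; `build_prompt` is a hypothetical helper, not part of the dataset's tooling:

```python
import json

def build_prompt(functions, user_message):
    # Prefix each function schema with the dataset's special <function> token,
    # then wrap everything in the Llama2 [INST]/<<SYS>> chat template.
    schemas = "\n".join("<function>" + json.dumps(f, indent=4) for f in functions)
    system = "<function>Available functions:\n" + schemas
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

fn = {
    "name": "generate_password",
    "description": "Generate a random password with specified criteria",
    "parameters": {
        "type": "object",
        "properties": {"length": {"type": "integer"}},
        "required": ["length"],
    },
}
prompt = build_prompt([fn], "I need a new password. Can you generate one for me?")
```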
0.14905096590518951,
-0.6516465544700623,
0.2945863902568817,
0.3223308324813843,
-0.47791653871536255,
0.13326750695705414,
0.25652432441711426,
-0.2875681519508362,
0.41865190863609314,
0.7226588726043701,
-0.7360987663269043,
-0.6606388092041016,
-0.5329772233963013,
0.1868782788515091,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
math-eval/TAL-SCQ5K | math-eval | 2023-09-15T06:37:10Z | 60 | 20 | null | [
"license:mit",
"region:us"
] | 2023-09-15T06:37:10Z | 2023-09-13T08:58:01.000Z | 2023-09-13T08:58:01 | ---
license: mit
---
<h1 align="center">TAL-SCQ5K</h1>
## Dataset Description
### Dataset Summary
TAL-SCQ5K-EN and TAL-SCQ5K-CN are high-quality mathematical competition datasets in English and Chinese created by TAL Education Group, each consisting of 5K questions (3K training and 2K testing). The questions are multiple-choice and cover mathematical topics at the primary, junior high, and high school levels. In addition, detailed solution steps are provided to facilitate CoT training, and all mathematical expressions in the questions are presented as standard text-mode LaTeX.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in TAL-SCQ5K-EN is in English and TAL-SCQ5K-CN is in Chinese.
## Dataset Structure
### Data Instances
```
{
"dataset_name": "prime_math_competition_en_single_choice_8K_dev",
"dataset_version": "2023-07-07",
"qid": "244",
"queId": "8afc802a8c304199b1040f11ffa2e92a",
"competition_source_list": [],
"difficulty": "2",
"qtype": "single_choice",
"problem": "A $14$-digit. number $666666 XY 444444$ is a multiple of $26$. If $X$ and $Y$ are both positive, what is the smallest vaue of $X+ Y$? ",
"answer_option_list": [
[{
"aoVal": "A",
"content": "$$3$$ "
}],
[{
"aoVal": "B",
"content": "$$4$$ "
}],
[{
"aoVal": "C",
"content": "$$9$$ "
}],
[{
"aoVal": "D",
"content": "$$14$$ "
}],
[{
"aoVal": "E",
"content": "None of the above "
}]
],
"knowledge_point_routes": ["Overseas Competition->Knowledge Point->Number Theory Modules->Division without Remainders->Divisibility Rules"],
"answer_analysis": ["Since $1001$ is a multiple of $13$, $111111 = 111 \\times 1001$ is also a multiple of $13$. It follows that both $666666$ and $444444$ are both multiples of $26$. $666666XY 444444 = 66666600000000 + XY 000000 + 444444$ $\\Rightarrow XY$ must be divisible by $13$. Smallest $X+Y=1+3=4$. "],
"answer_value": "B"
}
```
### Data Fields
* "dataset_name": identification of the source dataset name from which TAL-SCQ5K-EN/TAL-SCQ5K-CN has been created, use only for inner of TAL education group, please ignore.
* "dataset_version": identification of the source dataset version from which TAL-SCQ5K-EN/TAL-SCQ5K-CN has been created, use only for inner of TAL education group, please ignore.
* "qid": identification of local id of the question in the source dataset from which TAL-SCQ5K-EN/TAL-SCQ5K-CN has been created, use only for inner of TAL education group, please ignore.
* "queId": identification of global id of the question, use only for inner of TAL education group, please ignore.
* "competition_source_list": identification of math competitions in which the questions appeared, if have been logged.
* "difficulty": difficulty level of the questions, value ranged from 0 to 4
* "qtype": question type, valued as "single_choice" for all the questions in this dataset indicates that all the questions are multiple-choice questions with unique ground-truth answer.
* "problem": the question string to a math competition question.
* "answer_option_list": answer choices to be selected
* "knowledge_point_routes": knowledge point route from coarse-grained to fine-grained.
* "answer_analysis": step-by-step answer analysis of the questions, which helps CoT training
* "answer_value": value of the ground-truth answer choice
### Data Splits
<style>
table th:first-of-type {
width: 40%;
}
table th:nth-of-type(2) {
width: 30%;
}
table th:nth-of-type(3) {
width: 30%;
}
</style>
| name|train|test |
|:---:|:----:|:----:|
|TAL-SCQ5K-EN|3K |2K |
|TAL-SCQ5K-CN|3K |2K |
## Usage
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the `data_dir` argument of the `load_dataset()` function as follows:
```python
from datasets import load_dataset
# Load all subsets (share the same schema)
dataset = load_dataset("math-eval/TAL-SCQ5K")
# Load TAL-SCQ5K-EN
dataset = load_dataset("math-eval/TAL-SCQ5K", data_dir="TAL-SCQ5K-EN")
# Load TAL-SCQ5K-CN
dataset = load_dataset("math-eval/TAL-SCQ5K", data_dir="TAL-SCQ5K-CN")
```
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The TAL-SCQ5K dataset is licensed under the [MIT License](https://opensource.org/license/mit/).
### Citation Information
[More Information Needed]
### Contact
The original authors host this dataset on GitHub: https://github.com/math-eval/TAL-SCQ5K. You can submit inquiries to matheval.ai@gmail.com | [
-0.4446580111980438,
-0.5484453439712524,
0.23276206851005554,
0.14685948193073273,
-0.16886669397354126,
0.13287343084812164,
-0.09787449240684509,
-0.07421508431434631,
0.04191096872091293,
0.30559787154197693,
-0.6707477569580078,
-0.6320598721504211,
-0.5687740445137024,
0.314637064933... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vision-paper/DC_upper_segmented_mask | vision-paper | 2023-09-22T19:12:51Z | 60 | 0 | null | [
"region:us"
] | 2023-09-22T19:12:51Z | 2023-09-22T17:47:58.000Z | 2023-09-22T17:47:58 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/sentiment_nathasa_review | SEACrowd | 2023-09-26T12:35:04Z | 60 | 1 | null | [
"language:ind",
"license:unknown",
"sentiment-analysis",
"region:us"
] | 2023-09-26T12:35:04Z | 2023-09-26T11:42:52.000Z | 2023-09-26T11:42:52 | ---
license: unknown
tags:
- sentiment-analysis
language:
- ind
---
# sentiment_nathasa_review
Customer Review (Natasha Skincare) is a customer-emotion dataset of 19,253 samples, divided across classes as 804 joy, 43 surprise, 154 anger, 61 fear, 287 sad, 167 disgust, and 17,736 no-emotions.
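Given how heavily the no-emotions class dominates, inverse-frequency class weights may be useful when training on this data; a minimal sketch using the counts quoted above:

```python
# Per-class sample counts as quoted above.
counts = {"joy": 804, "surprise": 43, "anger": 154, "fear": 61,
          "sad": 287, "disgust": 167, "no-emotions": 17736}

# Inverse-frequency weights, normalised so the majority class gets weight 1.0;
# rarer classes receive proportionally larger weights.
majority = max(counts.values())
weights = {label: majority / n for label, n in counts.items()}
```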
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{nurlaila2018classification,
  title={CLASSIFICATION OF CUSTOMERS EMOTION USING NA{\"I}VE BAYES CLASSIFIER (Case Study: Natasha Skin Care)},
author={Nurlaila, Afifah and Wiranto, Wiranto and Saptono, Ristu},
journal={ITSMART: Jurnal Teknologi dan Informasi},
volume={6},
number={2},
pages={92--97},
year={2018}
}
```
## License
Unknown
## Homepage
[https://jurnal.uns.ac.id/itsmart/article/viewFile/17328/15082](https://jurnal.uns.ac.id/itsmart/article/viewFile/17328/15082)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.6095760464668274,
-0.2896018326282501,
0.0029334113933146,
0.5673404335975647,
-0.37416955828666687,
0.008137495256960392,
0.13424281775951385,
-0.3107666075229645,
0.5202796459197998,
0.4016697108745575,
-0.502685010433197,
-0.9306879043579102,
-0.3418513536453247,
0.42309436202049255,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chloecchng/biomedical_cpgQA | chloecchng | 2023-10-24T17:37:28Z | 60 | 2 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"biology",
"medical",
"region:us"
] | 2023-10-24T17:37:28Z | 2023-10-09T09:58:21.000Z | 2023-10-09T09:58:21 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- biology
- medical
size_categories:
- 1K<n<10K
---
# Dataset Card for the Biomedical Domain
### Dataset Summary
This dataset was obtained from GitHub (https://github.com/mmahbub/cpgQA/blob/main/dataset/cpgQA-v1.0.csv?plain=1) and uploaded to Hugging Face for easier access during fine-tuning.
### Languages
English (en)
## Dataset Structure
The dataset is in CSV format, with each row representing a single question-answer pair. The following columns are included:
* **Title:** Categorises the QA pair.
* **Context:** The passage providing context for the QA pair.
* **Question:** The question asked.
* **Answer:** The expected answer to the question asked. | [
-0.2887774705886841,
-0.6590145230293274,
0.19809220731258392,
-0.008581743575632572,
-0.3397635519504547,
0.17760366201400757,
0.33653610944747925,
-0.25042805075645447,
0.6490251421928406,
0.5564349889755249,
-0.6647143959999084,
-0.8861307501792908,
-0.3316376507282257,
0.12490830570459... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Spiral-AI/cc100_debug | Spiral-AI | 2023-10-17T04:27:52Z | 60 | 0 | null | [
"region:us"
] | 2023-10-17T04:27:52Z | 2023-10-17T04:27:47.000Z | 2023-10-17T04:27:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12282688
num_examples: 129838
download_size: 6976030
dataset_size: 12282688
---
# Dataset Card for "cc100_debug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6391081213951111,
-0.6018805503845215,
0.23777878284454346,
0.32825157046318054,
-0.20668792724609375,
0.10081232339143753,
0.16455470025539398,
-0.00003464259862084873,
0.7414073348045349,
0.4868651032447815,
-0.885818600654602,
-0.8881734609603882,
-0.5660263299942017,
-0.296861946582... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhen-dong-nexusflow/cvecpe_nested_multiapis_nlq_function_pairs | zhen-dong-nexusflow | 2023-10-27T23:35:09Z | 60 | 0 | null | [
"region:us"
] | 2023-10-27T23:35:09Z | 2023-10-18T22:04:37.000Z | 2023-10-18T22:04:37 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gdurkin/flood_dataset | gdurkin | 2023-11-01T13:43:12Z | 60 | 0 | null | [
"region:us"
] | 2023-11-01T13:43:12Z | 2023-10-26T20:07:33.000Z | 2023-10-26T20:07:33 | ---
dataset_info:
features:
- name: pixel_values
dtype:
array3_d:
shape:
- 512
- 512
- 3
dtype: uint8
- name: label
dtype: image
splits:
- name: train
num_bytes: 464353592.0
num_examples: 252
download_size: 198779583
dataset_size: 464353592.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "flood_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8748784065246582,
-0.35517019033432007,
-0.0005878229858353734,
0.6978302597999573,
-0.17091849446296692,
0.03853427618741989,
0.4546336829662323,
-0.1263941377401352,
0.5922544002532959,
0.6956827044487,
-0.6041621565818787,
-0.5853695273399353,
-0.7156754732131958,
-0.2799017727375030... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kpriyanshu256/semeval-task-8-b | kpriyanshu256 | 2023-10-31T03:41:15Z | 60 | 0 | null | [
"region:us"
] | 2023-10-31T03:41:15Z | 2023-10-31T03:41:08.000Z | 2023-10-31T03:41:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: text
dtype: string
- name: model
dtype: string
- name: source
dtype: string
- name: label
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 151567991
num_examples: 71027
- name: dev
num_bytes: 4814312
num_examples: 3000
download_size: 84851066
dataset_size: 156382303
---
# Dataset Card for "semeval-task-8-b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4364106357097626,
-0.2971959114074707,
0.3013934791088104,
0.39201620221138,
-0.2469874918460846,
-0.2202560305595398,
0.3380228579044342,
-0.07543313503265381,
0.8852531909942627,
0.698545515537262,
-0.883033037185669,
-0.595878005027771,
-0.763924777507782,
-0.17660075426101685,
-0.... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Denm/lch_codebase | Denm | 2023-11-18T18:35:39Z | 60 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-18T18:35:39Z | 2023-11-01T17:30:35.000Z | 2023-11-01T17:30:35 | ---
license: apache-2.0
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1563382173
num_examples: 72255
download_size: 445895201
dataset_size: 1563382173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
magnifi/hl-codellama-chat-response | magnifi | 2023-11-02T16:45:00Z | 60 | 0 | null | [
"region:us"
] | 2023-11-02T16:45:00Z | 2023-11-02T13:36:09.000Z | 2023-11-02T13:36:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Query
dtype: string
- name: Result
dtype: string
- name: chat_response
dtype: string
splits:
- name: train
num_bytes: 1321860.461185117
num_examples: 1523
- name: test
num_bytes: 567627.5388148829
num_examples: 654
download_size: 0
dataset_size: 1889488.0
---
# Dataset Card for "hl-codellama-chat-response"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5805966854095459,
-0.5819656252861023,
-0.11809248477220535,
0.4092307984828949,
0.004181090742349625,
0.31385841965675354,
-0.03001931495964527,
-0.1747892051935196,
1.0241477489471436,
0.488258421421051,
-0.80029296875,
-0.7154430747032166,
-0.4331747889518738,
-0.4074622690677643,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rewcifer/trainset1_2000_cutoff_llama | Rewcifer | 2023-11-03T02:25:53Z | 60 | 0 | null | [
"region:us"
] | 2023-11-03T02:25:53Z | 2023-11-03T02:25:48.000Z | 2023-11-03T02:25:48 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 249703784.98341143
num_examples: 50000
download_size: 45211692
dataset_size: 249703784.98341143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "trainset1_2000_cutoff_llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5342342257499695,
0.016921937465667725,
0.27977752685546875,
0.4675995409488678,
-0.4693983197212219,
-0.08755224198102951,
0.4993838667869568,
-0.07892843335866928,
0.9616556763648987,
0.5265178084373474,
-1.1050910949707031,
-0.6302812695503235,
-0.6540914177894592,
-0.129369169473648... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ademax/dataset_line_connect | ademax | 2023-11-06T01:47:01Z | 60 | 0 | null | [
"region:us"
] | 2023-11-06T01:47:01Z | 2023-11-06T01:46:18.000Z | 2023-11-06T01:46:18 | ---
dataset_info:
features:
- name: lineA
dtype: string
- name: lineB
dtype: string
- name: is_join
dtype: int64
splits:
- name: train
num_bytes: 1143553535
num_examples: 10001530
download_size: 412153174
dataset_size: 1143553535
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_line_connect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8243535161018372,
-0.2573605477809906,
0.07833047211170197,
0.1575021743774414,
-0.32090675830841064,
-0.04525772109627724,
0.496975839138031,
-0.3101156949996948,
0.8038766980171204,
0.6079230904579163,
-0.9466176629066467,
-0.6495099663734436,
-0.302776038646698,
-0.2536560893058777,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/amazon_review_2018_1107 | multi-train | 2023-11-10T18:36:36Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:36:36Z | 2023-11-10T18:36:24.000Z | 2023-11-10T18:36:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 146232172
num_examples: 200000
download_size: 81634497
dataset_size: 146232172
---
# Dataset Card for "amazon_review_2018_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5778794884681702,
-0.13490533828735352,
0.17466610670089722,
0.4711952209472656,
-0.3397979140281677,
-0.06296328455209732,
0.5377197265625,
-0.29328304529190063,
0.766943097114563,
0.7821738123893738,
-0.8460134863853455,
-0.6495672464370728,
-0.35503146052360535,
0.05608440190553665,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/agnews_1107 | multi-train | 2023-11-10T18:36:46Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:36:46Z | 2023-11-10T18:36:37.000Z | 2023-11-10T18:36:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 98773974
num_examples: 200000
download_size: 50174968
dataset_size: 98773974
---
# Dataset Card for "agnews_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46649518609046936,
-0.26846715807914734,
0.36018189787864685,
0.2839205265045166,
-0.2589015066623688,
-0.08281827718019485,
0.3109527826309204,
-0.1714855283498764,
0.9379346966743469,
0.38640373945236206,
-0.6832451224327087,
-0.6467448472976685,
-0.6662506461143494,
-0.13898016512393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/altlex_1107 | multi-train | 2023-11-10T18:36:55Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:36:55Z | 2023-11-10T18:36:48.000Z | 2023-11-10T18:36:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 59606453
num_examples: 112696
download_size: 30565780
dataset_size: 59606453
---
# Dataset Card for "altlex_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4977808892726898,
-0.11452613770961761,
0.3260418176651001,
0.08636779338121414,
-0.13017334043979645,
-0.08174259960651398,
0.3283114433288574,
-0.17506609857082367,
0.7913554310798645,
0.586352527141571,
-0.6820976138114929,
-0.7775470614433289,
-0.4561724066734314,
0.0096173062920570... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/cnn_dailymail_1107 | multi-train | 2023-11-10T18:39:45Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:39:45Z | 2023-11-10T18:37:56.000Z | 2023-11-10T18:37:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 1710027721
num_examples: 200000
download_size: 1026018118
dataset_size: 1710027721
---
# Dataset Card for "cnn_dailymail_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.41403985023498535,
-0.267403781414032,
0.09083497524261475,
0.38752081990242004,
-0.4365726113319397,
-0.019495077431201935,
0.1341012716293335,
-0.031229292973876,
0.7310764789581299,
0.4806489944458008,
-0.8010236024856567,
-0.9007552862167358,
-0.7087618112564087,
-0.1016645580530166... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/coco_captions_1107 | multi-train | 2023-11-10T18:39:54Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:39:54Z | 2023-11-10T18:39:48.000Z | 2023-11-10T18:39:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 27977412
num_examples: 82783
download_size: 8138135
dataset_size: 27977412
---
# Dataset Card for "coco_captions_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.531970739364624,
-0.10980206727981567,
0.11748852580785751,
0.612311601638794,
-0.38970398902893066,
0.30559515953063965,
0.10809053480625153,
-0.1393631100654602,
0.846330463886261,
0.6849298477172852,
-0.7117663621902466,
-0.7301495671272278,
-0.5969340205192566,
0.07315047085285187,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/eli5_question_answer_1107 | multi-train | 2023-11-10T18:40:10Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:40:10Z | 2023-11-10T18:39:56.000Z | 2023-11-10T18:39:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 187610302
num_examples: 200000
download_size: 106360840
dataset_size: 187610302
---
# Dataset Card for "eli5_question_answer_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7465716004371643,
-0.49180522561073303,
0.26877689361572266,
0.002724412828683853,
-0.17291522026062012,
-0.277549147605896,
0.35931098461151123,
-0.16522979736328125,
0.7606682181358337,
0.4954899847507477,
-0.7898517847061157,
-0.5046069025993347,
-0.44765761494636536,
-0.032725777477... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/triviaqa-train-multikilt_1107 | multi-train | 2023-11-10T18:52:08Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:52:08Z | 2023-11-10T18:51:59.000Z | 2023-11-10T18:51:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 67898183
num_examples: 52886
download_size: 39123463
dataset_size: 67898183
---
# Dataset Card for "triviaqa-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6372554898262024,
0.024960773065686226,
0.22228913009166718,
0.2829509675502777,
-0.21274125576019287,
0.2749953269958496,
0.19863329827785492,
0.13066592812538147,
0.7711856961250305,
0.4333176612854004,
-0.7702699303627014,
-0.5971009731292725,
-0.4789556860923767,
-0.1046143993735313... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/wow-train-multikilt_1107 | multi-train | 2023-11-10T18:52:21Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:52:21Z | 2023-11-10T18:52:09.000Z | 2023-11-10T18:52:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 124530766
num_examples: 80035
download_size: 65428253
dataset_size: 124530766
---
# Dataset Card for "wow-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7602085471153259,
-0.10610200464725494,
0.12012267857789993,
0.308718740940094,
-0.17048722505569458,
-0.1239570751786232,
0.24293552339076996,
0.03935970366001129,
0.618336021900177,
0.45651668310165405,
-0.9834462404251099,
-0.44023212790489197,
-0.5749585628509521,
-0.211931332945823... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/multi_lexsum_1107 | multi-train | 2023-11-10T18:54:52Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:54:52Z | 2023-11-10T18:52:34.000Z | 2023-11-10T18:52:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 2800042167
num_examples: 3177
download_size: 1258860340
dataset_size: 2800042167
---
# Dataset Card for "multi_lexsum_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5237568616867065,
-0.026912564411759377,
0.31181687116622925,
0.3182752728462219,
-0.2429608702659607,
0.03356310725212097,
0.16658329963684082,
0.015342188067734241,
0.9375860691070557,
0.43517005443573,
-0.6601560115814209,
-0.8967764377593994,
-0.5774123668670654,
-0.1044081151485443... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/medmcqa_1107 | multi-train | 2023-11-10T18:58:42Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:58:42Z | 2023-11-10T18:58:26.000Z | 2023-11-10T18:58:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 194641944
num_examples: 160869
download_size: 102313307
dataset_size: 194641944
---
# Dataset Card for "medmcqa_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4835893213748932,
0.014727339148521423,
0.4854133725166321,
0.02462175115942955,
-0.27531537413597107,
0.0739528015255928,
0.5322077870368958,
0.10031891614198685,
0.8405230641365051,
0.6201721429824829,
-0.813513994216919,
-0.767703652381897,
-0.5454774498939514,
-0.11638093739748001,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/SimpleWiki_1107 | multi-train | 2023-11-10T18:58:49Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:58:49Z | 2023-11-10T18:58:43.000Z | 2023-11-10T18:58:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 57699115
num_examples: 102225
download_size: 29311247
dataset_size: 57699115
---
# Dataset Card for "SimpleWiki_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6296225786209106,
-0.11092172563076019,
0.24134056270122528,
0.23977714776992798,
-0.2207110971212387,
-0.30745255947113037,
0.04427841678261757,
-0.05829235538840294,
0.8307951092720032,
0.4518871605396271,
-0.9026169776916504,
-0.574505627155304,
-0.43219998478889465,
0.26850906014442... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/squad_pairs_1107 | multi-train | 2023-11-10T18:59:09Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:59:09Z | 2023-11-10T18:59:03.000Z | 2023-11-10T18:59:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 131284545
num_examples: 87599
download_size: 27083693
dataset_size: 131284545
---
# Dataset Card for "squad_pairs_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4735133945941925,
-0.06418172270059586,
0.12336380779743195,
0.49038878083229065,
-0.20896300673484802,
0.2548309564590454,
0.3381463885307312,
-0.06888896971940994,
0.8587680459022522,
0.29926058650016785,
-1.0638118982315063,
-0.6358752250671387,
-0.47512441873550415,
0.05894811451435... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/searchQA_top5_snippets_1107 | multi-train | 2023-11-10T18:59:21Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:59:21Z | 2023-11-10T18:59:11.000Z | 2023-11-10T18:59:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 102625564
num_examples: 117220
download_size: 62573380
dataset_size: 102625564
---
# Dataset Card for "searchQA_top5_snippets_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4365701675415039,
-0.0264138150960207,
0.3574102222919464,
0.021304382011294365,
-0.21213193237781525,
0.06006648764014244,
0.2884760797023773,
0.38490813970565796,
0.885326623916626,
0.4932725131511688,
-0.7852805256843567,
-0.7749814987182617,
-0.6557291150093079,
-0.03631586208939552... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/pubmedqa_1107 | multi-train | 2023-11-10T18:59:57Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T18:59:57Z | 2023-11-10T18:59:23.000Z | 2023-11-10T18:59:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 553042848
num_examples: 200000
download_size: 277163918
dataset_size: 553042848
---
# Dataset Card for "pubmedqa_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33278602361679077,
0.05416101962327957,
0.5336374044418335,
0.15390224754810333,
-0.3747607171535492,
-0.08033434301614761,
0.43144935369491577,
0.08668146282434464,
0.814662754535675,
0.6072100400924683,
-0.5564326643943787,
-0.7589558362960815,
-0.5895138382911682,
0.09897865355014801... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/npr_1107 | multi-train | 2023-11-10T19:00:27Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:00:27Z | 2023-11-10T18:59:59.000Z | 2023-11-10T18:59:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 474583082
num_examples: 200000
download_size: 259084905
dataset_size: 474583082
---
# Dataset Card for "npr_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6299998164176941,
-0.19468791782855988,
0.42169636487960815,
0.29931747913360596,
-0.3076586425304413,
-0.058043673634529114,
0.19003280997276306,
-0.07653015851974487,
0.885711669921875,
0.5332710146903992,
-0.7434237003326416,
-0.7023307085037231,
-0.6218118071556091,
0.01284866221249... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/wikihow_1107 | multi-train | 2023-11-10T19:00:54Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:00:54Z | 2023-11-10T19:00:47.000Z | 2023-11-10T19:00:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 52538597
num_examples: 128542
download_size: 19871957
dataset_size: 52538597
---
# Dataset Card for "wikihow_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6168068647384644,
-0.15707363188266754,
0.12571267783641815,
0.13488775491714478,
-0.4200133681297302,
-0.13975143432617188,
0.12199472635984421,
0.015337790362536907,
0.9508422017097473,
0.3394138813018799,
-0.7797764539718628,
-0.6343926787376404,
-0.5673120617866516,
-0.0547939948737... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/trex-train-multikilt_1107 | multi-train | 2023-11-10T19:01:10Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:01:10Z | 2023-11-10T19:00:56.000Z | 2023-11-10T19:00:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 228887845
num_examples: 200000
download_size: 116247120
dataset_size: 228887845
---
# Dataset Card for "trex-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6401129961013794,
0.00015725108096376061,
0.2694404721260071,
0.33894750475883484,
-0.3215525150299072,
0.2656843960285187,
0.16483965516090393,
0.15966816246509552,
0.6658514738082886,
0.44465726613998413,
-0.8263417482376099,
-0.6374571919441223,
-0.6396751403808594,
-0.10721193999052... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/nq-train-multikilt_1107 | multi-train | 2023-11-10T19:01:21Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:01:21Z | 2023-11-10T19:01:12.000Z | 2023-11-10T19:01:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 94822797
num_examples: 76945
download_size: 53958820
dataset_size: 94822797
---
# Dataset Card for "nq-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5995315909385681,
0.12377786636352539,
0.09403938055038452,
0.3067147433757782,
-0.24293124675750732,
0.09207595139741898,
0.32003262639045715,
0.19201292097568512,
0.7425810694694519,
0.442798376083374,
-0.8377463817596436,
-0.42024797201156616,
-0.5597158074378967,
-0.0659384578466415... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/gigaword_1107 | multi-train | 2023-11-10T19:01:31Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:01:31Z | 2023-11-10T19:01:23.000Z | 2023-11-10T19:01:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 87945355
num_examples: 200000
download_size: 41386512
dataset_size: 87945355
---
# Dataset Card for "gigaword_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6263056993484497,
-0.09218981862068176,
0.28643229603767395,
0.13446015119552612,
-0.3249216675758362,
-0.12355322390794754,
0.31547439098358154,
-0.21148526668548584,
1.0707429647445679,
0.6262527704238892,
-0.8060028553009033,
-0.5428837537765503,
-0.32760125398635864,
-0.218586698174... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/yahoo_answers_title_answer_1107 | multi-train | 2023-11-10T19:01:43Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:01:43Z | 2023-11-10T19:01:32.000Z | 2023-11-10T19:01:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 141237635
num_examples: 200000
download_size: 78339836
dataset_size: 141237635
---
# Dataset Card for "yahoo_answers_title_answer_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5382375717163086,
-0.46590912342071533,
0.20010770857334137,
0.12493876367807388,
-0.22144363820552826,
-0.014709044247865677,
0.38021746277809143,
0.08921582251787186,
0.7901862859725952,
0.4773540794849396,
-0.795602023601532,
-0.5661738514900208,
-0.4120594561100006,
-0.0137220481410... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/fever-train-multikilt_1107 | multi-train | 2023-11-10T19:01:53Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:01:53Z | 2023-11-10T19:01:45.000Z | 2023-11-10T19:01:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 87617512
num_examples: 71257
download_size: 46276668
dataset_size: 87617512
---
# Dataset Card for "fever-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5321190357208252,
0.10348320007324219,
0.0014180887956172228,
0.4306359589099884,
-0.274733304977417,
-0.09667935222387314,
0.16586557030677795,
0.03412632271647453,
0.8882151246070862,
0.31256696581840515,
-0.6640385985374451,
-0.5816444754600525,
-0.7023705244064331,
-0.07613815367221... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/flickr30k_captions_1107 | multi-train | 2023-11-10T19:01:59Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:01:59Z | 2023-11-10T19:01:55.000Z | 2023-11-10T19:01:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 12081988
num_examples: 31783
download_size: 4010622
dataset_size: 12081988
---
# Dataset Card for "flickr30k_captions_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7018823027610779,
0.19928325712680817,
0.2363795042037964,
0.428874135017395,
-0.35015425086021423,
0.11106659471988678,
0.38891470432281494,
0.1138187125325203,
0.5119509696960449,
0.6131980419158936,
-0.8994880318641663,
-0.5226900577545166,
-0.44412410259246826,
0.021828539669513702,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/sentence-compression_1107 | multi-train | 2023-11-10T19:02:09Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:02:09Z | 2023-11-10T19:02:00.000Z | 2023-11-10T19:02:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 71393703
num_examples: 180000
download_size: 36617830
dataset_size: 71393703
---
# Dataset Card for "sentence-compression_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.398103266954422,
-0.37328749895095825,
0.37579116225242615,
0.3414977788925171,
-0.13931037485599518,
-0.2547528147697449,
-0.15836617350578308,
0.022190287709236145,
0.8816609382629395,
0.4832426905632019,
-0.7455801367759705,
-0.63576740026474,
-0.6562433838844299,
0.15158137679100037... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/zeroshot-train-multikilt_1107 | multi-train | 2023-11-10T19:02:21Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:02:21Z | 2023-11-10T19:02:10.000Z | 2023-11-10T19:02:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 156575454
num_examples: 130514
download_size: 74461241
dataset_size: 156575454
---
# Dataset Card for "zeroshot-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5071298480033875,
0.068544402718544,
0.33044323325157166,
0.26143670082092285,
-0.2854275405406952,
0.002414046786725521,
0.24838052690029144,
0.12856921553611755,
0.8088578581809998,
0.2985139787197113,
-0.8941643834114075,
-0.536382257938385,
-0.632247269153595,
-0.28714725375175476,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/hotpotqa-train-multikilt_1107 | multi-train | 2023-11-10T19:02:52Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:02:52Z | 2023-11-10T19:02:43.000Z | 2023-11-10T19:02:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 88502871
num_examples: 68659
download_size: 50639711
dataset_size: 88502871
---
# Dataset Card for "hotpotqa-train-multikilt_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6911565065383911,
-0.0704859048128128,
0.0664585530757904,
0.5675113797187805,
-0.29961833357810974,
-0.13800767064094543,
0.05765201896429062,
0.3739187717437744,
0.6573705077171326,
0.5465130805969238,
-0.6098540425300598,
-0.520955502986908,
-0.6976633071899414,
-0.1633363664150238,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/scitldr_1107 | multi-train | 2023-11-10T19:03:01Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:03:01Z | 2023-11-10T19:02:54.000Z | 2023-11-10T19:02:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 59829071
num_examples: 1992
download_size: 29628456
dataset_size: 59829071
---
# Dataset Card for "scitldr_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.428070068359375,
0.011456786654889584,
0.26000532507896423,
0.2824285924434662,
-0.25777459144592285,
0.0523369275033474,
0.3444038927555084,
-0.027682775631546974,
0.8846637606620789,
0.31885024905204773,
-0.7040692567825317,
-0.6492854356765747,
-0.5606093406677246,
0.0308449491858482... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/xsum_1107 | multi-train | 2023-11-10T19:04:02Z | 60 | 0 | null | [
"region:us"
] | 2023-11-10T19:04:02Z | 2023-11-10T19:03:03.000Z | 2023-11-10T19:03:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 848430524
num_examples: 200000
download_size: 523334138
dataset_size: 848430524
---
# Dataset Card for "xsum_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4583677351474762,
0.18822184205055237,
0.2531813085079193,
0.09623496979475021,
-0.22179317474365234,
-0.02659461461007595,
0.35442137718200684,
-0.08814839273691177,
1.110185146331787,
0.6221945881843567,
-0.7064804434776306,
-0.6994386315345764,
-0.6341826915740967,
-0.217595532536506... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jmelsbach/easy-german-definitions | jmelsbach | 2023-11-15T14:01:10Z | 60 | 0 | null | [
"region:us"
] | 2023-11-15T14:01:10Z | 2023-11-15T13:56:15.000Z | 2023-11-15T13:56:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: explanation
dtype: string
- name: detailed_explanation
dtype: string
splits:
- name: train
num_bytes: 2153588.053902302
num_examples: 2849
- name: test
num_bytes: 538963.946097698
num_examples: 713
download_size: 0
dataset_size: 2692552.0
---
# Dataset Card for "easy-german-definitions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8051302433013916,
-0.43644264340400696,
0.3472807705402374,
0.28538089990615845,
-0.1588907241821289,
-0.3078976571559906,
-0.14078113436698914,
-0.19357770681381226,
0.5679495334625244,
0.13258923590183258,
-0.8013994693756104,
-0.9283403158187866,
-0.7318576574325562,
-0.0362447910010... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aops02/MetaMath-Vi | aops02 | 2023-11-27T19:22:02Z | 60 | 2 | null | [
"region:us"
] | 2023-11-27T19:22:02Z | 2023-11-23T14:37:49.000Z | 2023-11-23T14:37:49 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 72172855
num_examples: 32972
download_size: 15771804
dataset_size: 72172855
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "MetaMath-Vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6503434777259827,
-0.24544109404087067,
0.2753262221813202,
0.05389608070254326,
-0.2431098222732544,
-0.03424415364861488,
0.31167149543762207,
0.023915065452456474,
0.9722927808761597,
0.4731324315071106,
-1.039003849029541,
-0.7961228489875793,
-0.47894859313964844,
-0.25088331103324... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
botdevringring/fr_health_sent_3label | botdevringring | 2023-11-27T11:05:06Z | 60 | 0 | null | [
"region:us"
] | 2023-11-27T11:05:06Z | 2023-11-25T16:39:15.000Z | 2023-11-25T16:39:15 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/few-shot-aurora | huggan | 2022-04-15T02:42:46Z | 59 | 1 | null | [
"region:us"
] | 2022-04-15T02:42:46Z | 2022-04-13T00:26:04.000Z | 2022-04-13T00:26:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wikimedia/wit_base | wikimedia | 2022-11-04T15:09:33Z | 59 | 14 | wit | [
"task_categories:image-to-text",
"task_categories:text-retrieval",
"task_ids:image-captioning",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|wikipedia",
"langua... | 2022-11-04T15:09:33Z | 2022-05-02T16:08:58.000Z | 2022-05-02T16:08:58 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- af
- an
- ar
- arz
- ast
- az
- azb
- ba
- bar
- be
- bg
- bn
- br
- bs
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gl
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- io
- is
- it
- iw
- ja
- jv
- ka
- kk
- kn
- ko
- la
- lah
- lb
- lmo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- nan
- nds
- ne
- nl
- nn
- 'no'
- nv
- oc
- pa
- pl
- pt
- qu
- ro
- ru
- sco
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tr
- tt
- uk
- ur
- uz
- vec
- vi
- vo
- war
- xmf
- yue
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
- extended|wikipedia
task_categories:
- image-to-text
- text-retrieval
task_ids:
- image-captioning
paperswithcode_id: wit
pretty_name: Wikipedia-based Image Text
language_bcp47:
- af
- an
- ar
- arz
- ast
- az
- azb
- ba
- bar
- be
- be-tarask
- bg
- bn
- br
- bs
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gl
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- io
- is
- it
- iw
- ja
- jv
- ka
- kk
- kn
- ko
- la
- lah
- lb
- lmo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- nan
- nds
- ne
- nl
- nn
- 'no'
- nv
- oc
- pa
- pl
- pt
- qu
- ro
- ru
- sco
- si
- sk
- sl
- sq
- sr
- sr-Latn
- sv
- sw
- ta
- te
- tg
- th
- tr
- tt
- uk
- ur
- uz
- vec
- vi
- vo
- war
- xmf
- yue
- zh
- zh-TW
tags:
- text-image-retrieval
---
# Dataset Card for WIT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)
- **Leaderboard:** [WIT leaderboard](https://paperswithcode.com/sota/text-image-retrieval-on-wit) and [WIT Kaggle competition](https://www.kaggle.com/competitions/wikipedia-image-caption/leaderboard)
- **Point of Contact:** [Miriam Redi](mailto:miriam@wikimedia.org)
### Dataset Summary
Wikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.
>
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.
>
> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.
> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.
**Note**: Compared to [Google's version](https://huggingface.co/datasets/google/wit), which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- `text-retrieval`: The goal in this task is to build a model that retrieves the text (`caption_title_and_reference_description`) closest to an image. The leaderboard for this task can be found [here](https://paperswithcode.com/sota/text-image-retrieval-on-wit). This task also has a competition on [Kaggle](https://www.kaggle.com/c/wikipedia-image-caption).
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description`, and `caption_alt_text_description` fields can be used as the input text/caption.
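One simple way to combine these fields is to fall back across them in order of preference. The sketch below is illustrative and not part of the official card: the flat `sample` dict is a simplification, since in this dataset `caption_attribution_description` is a top-level field while `caption_reference_description` and `caption_alt_text_description` actually live per-language inside `wit_features` (see the instance under "Data Instances").

```python
def pick_caption(example):
    """Return the first non-empty caption field, in order of preference.

    `example` is assumed to be a flat dict with the three caption fields;
    adapting this to the nested `wit_features` layout is left to the caller.
    """
    for field in (
        "caption_reference_description",
        "caption_attribution_description",
        "caption_alt_text_description",
    ):
        value = example.get(field)
        if value:
            return value
    return None


# Illustrative sample mirroring the instance format shown below.
sample = {
    "caption_reference_description": None,
    "caption_attribution_description": "English: Puerto Rican Giant Centipede, Scolopendra gigantea",
    "caption_alt_text_description": "Scolopendra gigantea",
}

print(pick_caption(sample))  # falls back to the attribution description
```

A fallback like this keeps training pairs even when the preferred `caption_reference_description` is missing, which the example instance shows is common across languages.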
### Languages
The dataset contains examples from all Wikipedia languages.
## Dataset Structure
### Data Instances
Each instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x225 at 0x7F88F3876358>,
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/8/8b/Scolopendra_gigantea.jpg',
'embedding': [1.4784087, 2.8710432, 0.0, 0.51603067, ..., 10.266883, 0.51142216, 0.0, 2.3464653],
'metadata_url': 'http://commons.wikimedia.org/wiki/File:Scolopendra_gigantea.jpg',
'original_height': 3000,
'original_width': 4000,
'mime_type': 'image/jpeg',
'caption_attribution_description': 'English: Puerto Rican Giant Centipede, Scolopendra gigantea; Vieques, Puerto Rico Slovenčina: Stonožka obrovská, Scolopendra gigantea; Vieques, Portoriko',
'wit_features': {
'language': ['ro', 'vi', 'sk', ..., 'nl', 'th', 'lv'],
'page_url': ['https://ro.wikipedia.org/wiki/Scolopendra_gigantea', 'https://vi.wikipedia.org/wiki/Scolopendra_gigantea', 'https://sk.wikipedia.org/wiki/Scolopendra_gigantea', ..., 'https://nl.wikipedia.org/wiki/Scolopendra_gigantea', 'https://th.wikipedia.org/wiki/%E0%B8%95%E0%B8%B0%E0%B8%82%E0%B8%B2%E0%B8%9A%E0%B8%A2%E0%B8%B1%E0%B8%81%E0%B8%A9%E0%B9%8C%E0%B8%82%E0%B8%B2%E0%B9%80%E0%B8%AB%E0%B8%A5%E0%B8%B7%E0%B8%AD%E0%B8%87%E0%B9%80%E0%B8%9B%E0%B8%A3%E0%B8%B9', 'https://lv.wikipedia.org/wiki/Skolopendru_dzimta'],
'attribution_passes_lang_id': [True, True, True, ..., True, True, True],
'caption_alt_text_description': [None, None, None, ..., 'Scolopendra gigantea', None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_reference_description': [None, None, None, ..., None, None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_title_and_reference_description': [None, 'Scolopendra gigantea [SEP] ', None, ..., 'Scolopendra gigantea [SEP] ', None, 'Skolopendru dzimta [SEP] Milzu skolopendra (Scolopendra gigantea)'],
'context_page_description': ['Scolopendra gigantea este un miriapod din clasa Chilopoda, fiind cel mai mare reprezentant al genului Scolopendra. Adultul poate atinge o lungime de 26 cm, uneori depășind 30 cm. Această specie habitează în regiunile de nord și de vest a Americii de Sud, pe insulele Trinidad, insulele Virgine, Jamaica Hispaniola ș.a. Localnicii denumesc scolopendra chilopodul gigant galben și chilopodul gigant amazonian.', 'Scolopendra gigantea là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26 cm và có thể vượt quá 30 cm. Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', 'Scolopendra gigantea, starší slovenský nazov: štípavica veľká, je živočích z rodu Scolopendra, s veľkosťou do 30 cm.', ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', 'ตะขาบยักษ์ขาเหลืองเปรู หรือ ตะขาบยักษ์อเมซอน เป็นตะขาบชนิดที่มีขนาดใหญ่ที่สุดในสกุล Scolopendra โดยปกติเมื่อโตเต็มที่จะยาว 26 เซนติเมตร แต่บางครั้งก็สามารถโตได้ถึง 30 เซนติเมตร ตะขาบชนิดนี้อาศัยอยู่ทางแถบเหนือและตะวันตกของทวีปอเมริกาใต้ และตามเกาะแก่งของประเทศตรินิแดดและจาไมกา เป็นสัตว์กินเนื้อ โดยกินจิ้งจก, กบ, นก, หนู และแม้แต่ค้างคาวเป็นอาหาร และขึ้นชื่อในเรื่องความดุร้าย', 'Skolpendru dzimta pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'context_section_description': [None, 'Scolopendra gigantea (còn được gọi là Rết chân vàng khổng lồ Peru và Rết khổng lồ Amazon) là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26\xa0cm (10\xa0in) và có thể vượt quá 30\xa0cm (12\xa0in). Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', None, ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', None, 'Skolpendru dzimta (Scolopendridae) pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'hierarchical_section_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'is_main_image': [True, True, True, ..., True, True, True],
'page_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'section_title': [None, None, None, ..., None, None, None]
}
}
```
**Note**: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using [this script](wit_base/blob/main/scripts/wit.py). Additionally, in 120 examples from the original files, one or more of the following fields were incorrectly formatted: `original_height`, `original_width`, `mime_type` and `caption_attribution_description`. The fixed versions of these examples, which were used in the generation script, can be found [here](wit_base/blob/main/scripts/corrected_examples.py).
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image, resized to a width of 300 pixels while preserving its aspect ratio. Note that when accessing the image column (*i.e.* `dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_url`: URL to the Wikipedia image
- `embedding`: Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a [ResNet-50](https://arxiv.org/abs/1512.03385) neural network trained on [ImageNet](https://www.image-net.org/) data. These embeddings contain rich information about the image content and layout, in a compact form.
- `metadata_url`: URL to the Wikimedia page containing the image and its metadata
- `original_height`: Original image height before resizing
- `original_width`: Original image width before resizing
- `mime_type`: Mime type associated to the image
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.
- `wit_features`: Sequence of captions for the image with language, page URL, information about the page, caption text, etc.
  - `language`: Language code indicating the Wikipedia language of the page
  - `page_url`: URL to the Wikipedia page
  - `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description).
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `caption_reference_description`: This is the caption that is visible on the wikipedia page directly below the image.
- `caption_title_and_reference_description`: Concatenation of `page_title` and `caption_reference_description`.
- `context_page_description`: Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section
- `hierarchical_section_title`: Hierarchical section's title
  - `is_main_image`: Flag indicating whether the image is the first image of the page. It is usually displayed in the top-right part of the page in web browsers.
- `page_changed_recently`: [More Information Needed]
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
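As an illustration of how the precomputed `embedding` field might be used for retrieval, here is a minimal NumPy sketch (toy 4-dimensional vectors stand in for the 2048-dimensional signatures; the function name is hypothetical and not part of the dataset tooling):

```python
import numpy as np

def nearest_images(query_emb, embeddings, top_k=3):
    """Return indices of the top_k most similar images by cosine similarity.

    `embeddings` is an (n, d) array of precomputed image signatures;
    `query_emb` is a single (d,) query vector.
    """
    emb = np.asarray(embeddings, dtype=np.float32)
    q = np.asarray(query_emb, dtype=np.float32)
    # Normalize rows so the dot product equals cosine similarity.
    emb_norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    q_norm = q / np.linalg.norm(q)
    scores = emb_norm @ q_norm
    return np.argsort(-scores)[:top_k]

# Toy 4-dimensional stand-ins for the 2048-d embeddings.
bank = [[1.0, 0.0, 0.0, 0.0],
        [0.9, 0.1, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0]]
print(nearest_images([1.0, 0.0, 0.0, 0.0], bank, top_k=2))  # indices 0 and 1
```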
<p align='center'>
<img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt="Half Dome" /> </br>
<b>Figure: WIT annotation example. </b>
</p>
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
### Data Splits
All data is held in the `train` split, with a total of 6,477,255 examples.
## Dataset Creation
### Curation Rationale
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.
> Getting easy access to the image files is crucial for participants to successfully develop competitive models.
> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other
pages that have discussions, comments and such). These number about ~124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However, it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a
study using (crowd-sourced) human annotators. As seen in Fig. 3,
we asked raters to answer 3 questions. Given an image and the page
title, raters first evaluate the quality of the attribution description
and reference description in the first two questions (order randomized). The third question understands the contextual quality of these
text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes
the image, "Maybe" if it is sufficiently explanatory and "No" if it is
irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/#FN1):
> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the [RetinaFace](https://arxiv.org/abs/1905.00641) detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are [candidate for deletion](https://commons.wikimedia.org/wiki/Commons:Deletion_requests) on Commons from the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Miriam Redi, Fabian Kaelin and Tiziano Piccardi.
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw), [yjernite](https://github.com/yjernite) and [mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.611397385597229,
-0.37041109800338745,
0.2797066867351532,
0.27433329820632935,
-0.45462799072265625,
-0.1723342388868332,
-0.29774001240730286,
-0.6394824981689453,
0.737773060798645,
0.5429137349128723,
-0.5667992234230042,
-0.5460675358772278,
-0.5040158033370972,
0.48644310235977173... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeIR/bioasq-generated-queries | BeIR | 2022-10-23T06:16:16Z | 59 | 1 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:16:16Z | 2022-06-17T14:01:55.000Z | 2022-06-17T14:01:55 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
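As a sketch (toy content; file names follow the convention above), the three files could be produced like this:

```python
import csv
import json

corpus = [
    {"_id": "doc1", "title": "Albert Einstein",
     "text": "Albert Einstein was a German-born theoretical physicist."},
]
queries = [
    {"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"},
]
qrels = [("q1", "doc1", 1)]

# corpus.jsonl / queries.jsonl: one JSON object per line.
with open("corpus.jsonl", "w", encoding="utf-8") as f:
    for doc in corpus:
        f.write(json.dumps(doc) + "\n")
with open("queries.jsonl", "w", encoding="utf-8") as f:
    for query in queries:
        f.write(json.dumps(query) + "\n")

# qrels.tsv: tab-separated with a header row, as required above.
with open("qrels.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["query-id", "corpus-id", "score"])
    writer.writerows(qrels)
```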
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
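Given dictionaries in this shape, a simple retrieval metric such as recall@k can be computed directly (a minimal sketch; the function name is an assumption, and the ranked lists would normally come from a retrieval model):

```python
def recall_at_k(qrels, rankings, k=1):
    """Fraction of queries whose top-k ranked documents include a relevant one.

    `qrels` maps query id -> {doc id: relevance score}, as in the example above;
    `rankings` maps query id -> list of doc ids ordered by model score.
    """
    hits = 0
    for qid, rel_docs in qrels.items():
        top_k = rankings.get(qid, [])[:k]
        if any(doc in rel_docs for doc in top_k):
            hits += 1
    return hits / len(qrels)

# Hypothetical model output for the two example queries.
rankings = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}
qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
print(recall_at_k(qrels, rankings, k=1))  # → 0.5
print(recall_at_k(qrels, rankings, k=2))  # → 1.0
```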
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
-0.5227212905883789,
-0.5249219536781311,
0.14435674250125885,
0.04820423573255539,
0.055916160345077515,
0.0011022627586498857,
-0.1081070527434349,
-0.24874727427959442,
0.28598034381866455,
0.07840226590633392,
-0.45233607292175293,
-0.7186435461044312,
-0.347678542137146,
0.20300328731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nid989/EssayFroum-Dataset | nid989 | 2022-09-02T04:45:37Z | 59 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2022-09-02T04:45:37Z | 2022-09-02T04:09:43.000Z | 2022-09-02T04:09:43 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CShorten/CDC-COVID-FAQ | CShorten | 2022-09-11T15:42:46Z | 59 | 1 | null | [
"license:afl-3.0",
"region:us"
] | 2022-09-11T15:42:46Z | 2022-09-11T15:42:18.000Z | 2022-09-11T15:42:18 | ---
license: afl-3.0
---
Dataset extracted from https://www.cdc.gov/coronavirus/2019-ncov/hcp/faq.html#Treatment-and-Management.
| [
-0.10552928596735,
-0.7102739214897156,
0.35714778304100037,
0.0675211176276207,
-0.2938596308231354,
0.015502152033150196,
0.16792546212673187,
-0.37254324555397034,
0.22944645583629608,
1.1697108745574951,
-0.6299296021461487,
-0.7123748660087585,
-0.3925105035305023,
-0.2286376953125,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pubmed_abstract_5 | ywchoi | 2022-09-13T01:07:12Z | 59 | 0 | null | [
"region:us"
] | 2022-09-13T01:07:12Z | 2022-09-13T01:05:10.000Z | 2022-09-13T01:05:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
beki/privy | beki | 2023-04-25T21:45:06Z | 59 | 11 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:100K<n<200K",
"size_categories:300K<n<400K",
"language:en",
"license:mit",
"pii-detection",
"region:us"
] | 2023-04-25T21:45:06Z | 2022-09-16T04:41:28.000Z | 2022-09-16T04:41:28 | ---
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<200K
- 300K<n<400K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
tags:
- pii-detection
train-eval-index:
- config: privy-small
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
metrics:
- type: seqeval
name: seqeval
pretty_name: Privy English
---
# Dataset Card for "privy-english"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy)
### Dataset Summary
A synthetic PII dataset generated using [Privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy), a tool which parses OpenAPI specifications and generates synthetic request payloads, searching for keywords in API schema definitions to select appropriate data providers. Generated API payloads are converted to various protocol trace formats like JSON and SQL to approximate the data developers might encounter while debugging applications.
This labelled PII dataset consists of protocol traces (JSON, SQL (PostgreSQL, MySQL), HTML, and XML) generated from OpenAPI specifications and includes 60+ PII types.
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) and PII classification.
### Label Scheme
<details>
<summary>View label scheme (26 labels for 60 PII data providers)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `PERSON`, `LOCATION`, `NRP`, `DATE_TIME`, `CREDIT_CARD`, `URL`, `IBAN_CODE`, `US_BANK_NUMBER`, `PHONE_NUMBER`, `US_SSN`, `US_PASSPORT`, `US_DRIVER_LICENSE`, `IP_ADDRESS`, `US_ITIN`, `EMAIL_ADDRESS`, `ORGANIZATION`, `TITLE`, `COORDINATE`, `IMEI`, `PASSWORD`, `LICENSE_PLATE`, `CURRENCY`, `ROUTING_NUMBER`, `SWIFT_CODE`, `MAC_ADDRESS`, `AGE` |
</details>
### Languages
English
## Dataset Structure
### Data Instances
A sample:
```
{
"full_text": "{\"full_name_female\": \"Bethany Williams\", \"NewServerCertificateName\": \"\", \"NewPath\": \"\", \"ServerCertificateName\": \"dCwMNqR\", \"Action\": \"\", \"Version\": \"u zNS zNS\"}",
"masked": "{\"full_name_female\": \"{{name_female}}\", \"NewServerCertificateName\": \"{{string}}\", \"NewPath\": \"{{string}}\", \"ServerCertificateName\": \"{{string}}\", \"Action\": \"{{string}}\", \"Version\": \"{{string}}\"}",
"spans": [
{
"entity_type": "PERSON",
"entity_value": "Bethany Williams",
"start_position": 22,
"end_position": 38
}
],
"template_id": 51889,
"metadata": null
}
```
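A minimal sketch (assuming `start_position`/`end_position` are character offsets into `full_text`, as in the sample above; the helper name is hypothetical) of turning the `spans` annotation into per-character labels for NER training:

```python
def spans_to_char_labels(full_text, spans, outside="O"):
    """Label each character of `full_text` with its entity type (or `outside`)."""
    labels = [outside] * len(full_text)
    for span in spans:
        for i in range(span["start_position"], span["end_position"]):
            labels[i] = span["entity_type"]
    return labels

text = '{"full_name_female": "Bethany Williams"}'
spans = [{"entity_type": "PERSON", "entity_value": "Bethany Williams",
          "start_position": 22, "end_position": 38}]
labels = spans_to_char_labels(text, spans)
assert text[22:38] == "Bethany Williams"
assert labels[22] == "PERSON" and labels[21] == "O"
```

Token-level BIO tags for a spaCy or transformers NER pipeline can then be derived by aligning these character labels with a tokenizer's offsets.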
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@online{WinNT,
author = {Benjamin Kilimnik},
title = {{Privy} Synthetic PII Protocol Trace Dataset},
year = 2022,
url = {https://huggingface.co/datasets/beki/privy},
}
```
### Contributions
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5104015469551086,
-0.5865562558174133,
0.18801189959049225,
0.2749228775501251,
-0.185699462890625,
-0.003796334145590663,
-0.2984100580215454,
-0.32986003160476685,
0.6452711820602417,
0.4348604083061218,
-0.6625880599021912,
-1.0401395559310913,
-0.4566280245780945,
0.0785312503576278... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
copenlu/spiced | copenlu | 2022-10-24T12:31:04Z | 59 | 2 | null | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|s2orc... | 2022-10-24T12:31:04Z | 2022-10-20T15:18:50.000Z | 2022-10-20T15:18:50 | ---
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SPICED
size_categories:
- 1K<n<10K
source_datasets:
- extended|s2orc
tags:
- scientific text
- scholarly text
- semantic text similarity
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---
# Dataset Card for SPICED
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- **Repository:** https://github.com/copenlu/scientific-information-change
- **Paper:**
### Dataset Summary
The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings from scientific papers, news media, and Twitter. The types of pairs are between <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity in the _findings_ described by each sentence, on a scale from 1-5. This is called the _Information Matching Score (IMS)_. The data was curated from S2ORC and matched news articles and Tweets using Altmetric. Instances are annotated by experts using the Prolific platform and Potato. Please use the following citation when using this dataset:
```
@inproceedings{modeling-information-change,
    title={{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
    author={Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
    booktitle = {Proceedings of EMNLP},
    publisher = {Association for Computational Linguistics},
    year = {2022}
}
```
### Supported Tasks and Leaderboards
The task is to predict the IMS between two scientific sentences, which is a scalar between 1 and 5. Preferred metrics are mean-squared error and Pearson correlation.
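A minimal sketch of scoring predicted IMS values with these two metrics, using only NumPy — the `evaluate_ims` helper is our own illustration, not an official evaluation script:

```python
import numpy as np

def evaluate_ims(predictions, gold):
    """Score predicted Information Matching Scores (scalars in [1, 5])
    against gold labels with mean-squared error and Pearson correlation."""
    predictions = np.asarray(predictions, dtype=float)
    gold = np.asarray(gold, dtype=float)
    mse = float(np.mean((predictions - gold) ** 2))
    pearson = float(np.corrcoef(predictions, gold)[0, 1])
    return {"mse": mse, "pearson": pearson}

# Toy example with five instances:
metrics = evaluate_ims([1.0, 2.5, 3.0, 4.5, 5.0], [1.0, 2.0, 3.5, 4.0, 5.0])
print(metrics)  # mse == 0.15, pearson close to 1
```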
### Languages
English
## Dataset Structure
### Data Fields
- DOI: The DOI of the original scientific article
- instance\_id: Unique instance ID for the sample. The ID contains the field, whether or not it is a tweet, and whether or not the sample was manually labeled or automatically using SBERT (marked as "easy")
- News Finding: Text of the news or tweet finding
- Paper Finding: Text of the paper finding
- News Context: For news instances, the surrounding two sentences for the news finding. For tweets, a copy of the tweet
- Paper Context: The surrounding two sentences for the paper finding
- scores: Annotator scores after removing low competence annotators
- field: The academic field of the paper ('Computer\_Science', 'Medicine', 'Biology', or 'Psychology')
- split: The dataset split ('train', 'val', or 'test')
- final\_score: The IMS of the instance
- source: Either "news" or "tweet"
- News Url: A URL to the source article if a news instance or the tweet ID of a tweet
### Data Splits
- train: 4721 instances
- validation: 664 instances
- test: 640 instances
## Dataset Creation
For the full details of how the dataset was created, please refer to our [EMNLP 2022 paper]().
### Curation Rationale
Science communication is a complex process of translation from highly technical scientific language to common language that lay people can understand. At the same time, the general public relies on good science communication in order to inform critical decisions about their health and behavior. SPICED was curated in order to provide a training dataset and benchmark for machine learning models to measure changes in scientific information at different stages of the science communication pipeline.
### Source Data
#### Initial Data Collection and Normalization
Scientific text: S2ORC
News articles and Tweets are collected through Altmetric.
#### Who are the source language producers?
Scientists, journalists, and Twitter users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Models trained on SPICED can be used to perform large scale analyses of science communication. They can be used to match the same finding discussed in different media, and reveal trends in differences in reporting at different stages of the science communication pipeline. It is hoped that this can help to build tools which will improve science communication.
### Discussion of Biases
The dataset is restricted to computer science, medicine, biology, and psychology, which may introduce some bias in the topics which models will perform well on.
### Other Known Limitations
While some context is available, we do not release the full text of news articles and scientific papers, which may contain further context to help with learning the task. We do however provide the paper DOIs and links to the original news articles in case full text is desired.
## Additional Information
### Dataset Curators
Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein
### Licensing Information
MIT
### Contributions
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. | [
-0.138886496424675,
-0.5862298011779785,
0.32654646039009094,
0.5130570530891418,
-0.3059394359588623,
-0.036233268678188324,
-0.21203668415546417,
-0.1470424383878708,
0.569057285785675,
0.37592825293540955,
-0.6124423742294312,
-0.8374130725860596,
-0.6279064416885376,
0.2416464388370514... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CarperAI/pile-v2-small-filtered | CarperAI | 2022-12-06T14:16:11Z | 59 | 8 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:en",
"language:code",
"region:us"
] | 2022-12-06T14:16:11Z | 2022-12-06T06:08:44.000Z | 2022-12-06T06:08:44 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["en","code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [pile-v2]() dataset: each subset contains ~1,000 random samples from the original dataset. In total, the dataset holds 255MB of text (code and English).
## Languages
The dataset contains technical text on programming languages and natural language with the following subsets,
- Bible
- TED2020
- PileOfLaw
- StackExchange
- GithubIssues
- Opensubtitles
- USPTO
- S2ORC
- DevDocs
- CodePileReddit2022
- USENET
- GNOME
- ASFPublicMail
- PileV2Reddit2020
- CodePilePosts
- Discourse
- Tanzil
- arXiv
- UbuntuIRC
- PubMed
- CodePileReddit2020
- CodePileReddit2021
- GlobalVoices
- FreeLaw_Options
- PileV2Posts
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("CarperAI/pile-v2-small")
```
### How to use it
You can either load the whole dataset like above, or load a specific subset such as arxiv by specifying the folder directory:
```python
load_dataset("CarperAI/pile-v2-small", data_dir="data/arxiv")
```
| [
-0.5054317712783813,
-0.4381188452243805,
-0.0735994353890419,
0.23853525519371033,
-0.41380494832992554,
-0.09775874763727188,
0.050180740654468536,
-0.26188376545906067,
0.19313646852970123,
0.9798389673233032,
-0.29725611209869385,
-0.4429510235786438,
-0.4389587640762329,
0.13530106842... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingFaceH4/helpful-instructions | HuggingFaceH4 | 2023-02-20T08:58:24Z | 59 | 5 | null | [
"license:apache-2.0",
"human-feedback",
"region:us"
] | 2023-02-20T08:58:24Z | 2023-02-16T09:12:16.000Z | 2023-02-16T09:12:16 | ---
license: apache-2.0
tags:
- human-feedback
pretty_name: Helpful Instructions
---
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: Lewis Tunstall**
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, demonstration)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", data_dir="data/helpful-anthropic-raw")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
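A minimal sketch of shaping a pair into a supervised fine-tuning prompt. Both the prompt template and the field names `instruction`/`demonstration` are assumptions for illustration (the card does not specify the exact schema):

```python
# Sketch: formatting an (instruction, demonstration) pair for supervised
# fine-tuning. The template below is our own choice, not one prescribed
# by the dataset.
PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{demonstration}"

def format_example(example: dict) -> str:
    """Render one dataset row as a single training string."""
    return PROMPT_TEMPLATE.format(
        instruction=example["instruction"].strip(),
        demonstration=example["demonstration"].strip(),
    )

pair = {"instruction": "Name three primary colors.",
        "demonstration": "Red, yellow, and blue."}
print(format_example(pair))
```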
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.2892112731933594,
-0.6660897135734558,
0.20125210285186768,
0.25894051790237427,
-0.17845861613750458,
-0.2715380787849426,
-0.3147009611129761,
-0.04050496220588684,
0.2988830804824829,
0.5592653751373291,
-0.8983093500137329,
-0.9272134900093079,
-0.5772737264633179,
0.173323526978492... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
breadlicker45/youtube-comments-180k | breadlicker45 | 2023-02-24T15:15:32Z | 59 | 1 | null | [
"region:us"
] | 2023-02-24T15:15:32Z | 2023-02-24T15:14:47.000Z | 2023-02-24T15:14:47 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
grosenthal/latin_english_translation | grosenthal | 2023-07-17T21:59:06Z | 59 | 4 | null | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:la",
"language:en",
"license:mit",
"doi:10.57967/hf/0903",
"region:us"
] | 2023-07-17T21:59:06Z | 2023-02-28T00:10:51.000Z | 2023-02-28T00:10:51 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: la
dtype: string
- name: en
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 39252644
num_examples: 99343
- name: test
num_bytes: 405056
num_examples: 1014
- name: valid
num_bytes: 392886
num_examples: 1014
download_size: 25567350
dataset_size: 40050586
license: mit
task_categories:
- translation
language:
- la
- en
pretty_name: Latin to English Translation Pairs
size_categories:
- 10K<n<100K
---
# Dataset Card for "latin_english_parallel"
101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation.
For those that were gathered from the Loeb Classical Library, alignment was performed manually between source and target sequences.
Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them.
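A minimal sketch of turning one row into a translation training example, using the `id`/`la`/`en`/`file` fields from the `dataset_info` above. The sample row and the prompt wording are our own illustration (the file path is hypothetical):

```python
# Sketch: shaping a dataset row into a (source, target) translation example.
def to_translation_prompt(row: dict) -> dict:
    """Build a prompt-style source/target pair, keeping provenance."""
    return {
        "source": f"Translate from Latin to English: {row['la']}",
        "target": row["en"],
        "provenance": row["file"],  # author/work the sample came from
    }

row = {"id": 0,
       "la": "Gallia est omnis divisa in partes tres.",
       "en": "Gaul as a whole is divided into three parts.",
       "file": "caesar/gallic_war.txt"}  # hypothetical file name
example = to_translation_prompt(row)
print(example["source"])
```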
 | [
-0.2594360113143921,
-0.3780158460140228,
0.24450913071632385,
0.3584078252315521,
-0.44017449021339417,
0.020667683333158493,
-0.20133548974990845,
-0.39024415612220764,
0.588620662689209,
0.4766141176223755,
-0.5120354294776917,
-0.7666114568710327,
-0.45888641476631165,
0.43776750564575... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-somos-nlp-2023/informes_discriminacion_gitana | hackathon-somos-nlp-2023 | 2023-04-11T09:29:14Z | 59 | 7 | null | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:es",
"license:apache-2.0",
"hate",
"region:us"
] | 2023-04-11T09:29:14Z | 2023-04-04T14:19:40.000Z | 2023-04-04T14:19:40 | ---
dataset_info:
features:
- name: sintetico
dtype: string
- name: text
dtype: string
- name: intervencion
dtype: string
- name: tipo_discriminacion
dtype: string
- name: resultado
dtype: string
splits:
- name: train
num_bytes: 1569183.3
num_examples: 1791
- name: test
num_bytes: 87614.92462311558
num_examples: 100
- name: valid
num_bytes: 86738.77537688443
num_examples: 99
download_size: 936705
dataset_size: 1743537.0000000002
task_categories:
- text-classification
- text2text-generation
language:
- es
tags:
- hate
size_categories:
- n<1K
license: apache-2.0
---
### Dataset summary
This is a Spanish-language dataset, extracted from the documentation centre of the Fundación Secretariado Gitano, describing different discriminatory situations experienced by the Roma people. Since the goal of the model is to build a system that generates interventions to minimise the impact of a discriminatory situation, the site was scraped and all PDFs containing discrimination cases in the format (FACTS, INTERVENTION, RESULT) were extracted. After scraping the page, the data was cleaned and unified with a preprocessing script so that the whole dataset shares the same format.
### Supported tasks and leaderboards
- `task-generation`: given the facts, generate the intervention and the result label, providing methods to make the intervention effective. ([PAG-BERT](https://huggingface.co/hackathon-somos-nlp-2023/PAG-BERT))
- `task-classification`: a classification model can be trained; we leave it to users to predict the type of discrimination from the facts.
### Language
The dataset uses the Spanish (Spain) variant; the style is formal and objective, limited to describing the facts reported by the people affected.
## Dataset structure
### Data instances
An example instance from the dataset is shown below:
```
{
'sintetico': '0',
'text': 'Una joven gitana comenzó a trabajar en una tienda de ropa, hace dos años, con contrato indefinido. Al mes de comenzar a trabajar, una compañera le preguntó, en presencia de su encargada, si era gitana, ella respondió que sí; desde entonces el trato de la encargada hacia la joven cambió, comenzó a tirar al suelo perchas, tierra, para luego acusarla de que no limpiaba el suelo, además de hacer continuamente comentarios generalizados refiriéndose a las mujeres gitanas, del tipo “¿Pero te dejan trabajar?” “¿Y estudiar?”, “tú tienes que saber cómo trabajar en la tienda porque como aprendéis en los mercadillos...” La víctima comentó que desde que la encargada se enteró de que era gitana le hizo la vida imposible, se sintió muy humillada. No aguantó más y presentó la baja voluntaria, aun siendo consciente de que perdía su derecho a la prestación por desempleo.',
'intervencion': 'Se entrevistó a la joven. Se comprobó a través del testimonio de la víctima que desde que su encargada se enteró de que es mujer gitana, al mes de comenzar a trabajar aproximadamente, comenzó a sufrir discriminación. Se informó a la víctima del Servicio, del trabajo que realizamos y de sus derechos.\xa0',
'tipo_discriminacion': 'Discriminación directa',
'resultado': 'Negativo.'
}
```
### Data fields
- `sintetico`: indicates whether the intervention and result data are original, i.e. sourced from the "Fundación Secretariado Gitano" (value 0), or generated synthetically by us (value 1).
- `text`: states the facts as described by the affected person.
- `intervencion`: describes the measures taken by the Foundation to prevent the facts described in `text` from recurring.
- `tipo_discriminacion`: label identifying the type of discrimination. It can take the values **Acoso discriminatorio**, **Discriminación directa**, **Discriminación indirecta**, **Discriminación interseccional**, **Discurso de odio**, **Orden de discriminar**, **Sin especificar**.
- `resultado`: records the impact the intervention had. Its possible values are **Positivo**, **Negativo**, and **Neutro**.
### Data splits
The dataset has a total of 1990 instances, distributed as follows:
| | train | validation | test |
|-------------------------|----------:|-------------:|----------:|
| Input Sentences | 90% | 5% | 5% |
| Average Sentence Length | 94.71 | 90.94 | 98.07 |
Note that, with respect to the result of the interventions (positive, negative, or neutral), the dataset is imbalanced. Specifically, there are 280 positive samples, 939 negative, and 771 neutral. In future updates of the dataset we will work on growing it in a balanced way.
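One common way to compensate for this imbalance during training is inverse-frequency class weighting. A sketch using the counts above — the `total / (n_classes * count)` heuristic mirrors scikit-learn's "balanced" mode and is our suggestion, not something prescribed by the dataset:

```python
# Sketch: inverse-frequency class weights for the imbalanced "resultado"
# labels (counts taken from the dataset card).
counts = {"Positivo": 280, "Negativo": 939, "Neutro": 771}
total = sum(counts.values())   # 1990 instances
n_classes = len(counts)

# Rarer classes get weights > 1, frequent classes < 1.
weights = {label: total / (n_classes * n) for label, n in counts.items()}
for label, w in sorted(weights.items()):
    print(f"{label}: {w:.3f}")
```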
## Dataset creation
### Curation rationale
This dataset was created to assess objectively whether the measures currently adopted by the Foundation have had an effect (positive), have had no effect (negative), or whether the proposed measures did not prompt the user to take any action (neutral).
This dataset was chosen because of the volume of data it contains across different scenarios, and because every case shares the format: FACTS, INTERVENTION, RESULT.
### Data source
The data used to build the model was extracted from the website of the Fundación Secretariado Gitano (<a href="https://informesdiscriminacion.gitanos.org">FSG</a>). The FSG maintains a database of discrimination cases that have been reported to the organisation. These cases were selected to train and evaluate the model.
#### Initial data collection and normalisation
The data was extracted from the <a href = "https://informesdiscriminacion.gitanos.org/buscar-casos" >case search</a> section, which keeps a record of all discrimination cases.
The fields the website provides for this type of report are:
* `Hecho`: refers to the act of discrimination.
* `Intervención`: the measures the FSG took to resolve the problem.
* `Resultado`: description of the outcome.
* Year in which the case occurred.
* Year of the report.
* Scope: when the discrimination was committed by a public body, the fundamental right under which it was reported.
* Province: the place where the act occurred.
* Type of discrimination.
During the extraction we only considered the fields **facts**, **intervention**, **results**, and **type of discrimination**. The language used in the reports is formal.
Originally, a large number of facts had no intervention or result (those fields were empty).
#### Data cleaning
On the website, the result field contains a brief explanation of the effects obtained after carrying out the intervention. Using the <a href="https://github.com/pysentimiento/pysentimiento">pysentimiento</a> library, each result was classified as negative, neutral, or positive.
The labels were then reviewed and adjusted according to what was considered neutral, negative, or positive.
17% of the discrimination cases in the dataset had neither an intervention nor a result. To complete these fields, few-shot learning was applied with the BLOOM model: given some examples of **facts**, **intervention**, and **result**, we were able to generate **interventions** and **results** automatically. BLOOM's output was reviewed manually to correct errors.
41% of the texts in the **facts** field were too long to be used with BLOOM in a few-shot setting. To solve this problem, they were summarised: the `segmenter.split_single` function from the <a href="https://github.com/fnl/segtok" >segtok</a> library was used to split the text into sentences separated by newline characters.
Two pre-trained models were used to summarise each sub-text: first <a href="https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization">mrm8488/bert2bert_shared-spanish-finetuned-summarization</a> and then <a href="https://huggingface.co/Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization">Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization</a>.
The original preprocessing scripts can be found in the repository https://github.com/Frorozcoloa/somos_nlp_hackaton; a copy is also available in this repository.
### Annotations
The annotations performed were verifications of the synthetic data generated with few-shot learning (interventions and results):
* Null values were filled in.
* Some texts (facts) were summarised using pre-trained models.
* The result text was replaced with POS, NEU, NEG labels.
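The relabelling step above (sentiment classifier output → `resultado` labels) can be sketched as follows. The argmax rule and the probability dictionary shape are our assumptions for illustration, not the exact pipeline used:

```python
# Sketch: mapping POS/NEU/NEG sentiment probabilities (as produced by a
# classifier such as pysentimiento) to the dataset's "resultado" labels.
LABEL_MAP = {"POS": "Positivo", "NEU": "Neutro", "NEG": "Negativo"}

def resultado_label(probas: dict) -> str:
    """Pick the most probable sentiment class and translate it to a label."""
    sentiment = max(probas, key=probas.get)
    return LABEL_MAP[sentiment]

print(resultado_label({"POS": 0.1, "NEU": 0.2, "NEG": 0.7}))  # Negativo
```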
#### Annotation process
Argilla was used to label the "Resultado" category with the labels "Positivo", "Negativo", and "Neutro". The aim of the labelling was to annotate the outcome of the interventions so that the model could learn to generate text responding to the situation described by the user, and, with the labelled data, predict whether the impact of the measure proposed by the model would be "positive" (it would take effect), "negative" (it would have no effect), or "neutral" (the user might not take any action).
Specifically, after downloading all the data available on the website, we preprocessed it and merged it into a single dataset that was uploaded to Argilla. There, we validated each instance as follows:
* If the intervention and/or result are empty, they are annotated as such.
* The positive, negative, or neutral result is checked for correctness. Most inconsistencies arise between the positive/neutral and negative/neutral pairs.
Once the dataset was validated with Argilla, we selected the samples annotated as "empty" in order to complete them, applying few-shot learning with the [BLOOM](https://huggingface.co/bigscience/bloom) model.
Note that some facts in the dataset were too long to be processed by BLOOM (it raised an error indicating the maximum number of tokens had been exceeded). To solve this, we used the models <a href="https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization">mrm8488/bert2bert_shared-spanish-finetuned-summarization</a> and <a href="https://huggingface.co/Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization">Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization</a> to summarise those facts and reduce their length.
### Personal and sensitive information
No anonymisation process was required in this case, since the data from this source does not contain any information that infringes the rights of the people affected.
## Considerations for using the data
### Social impact of the dataset
The social impact of this dataset lies in serving as a tool for implementing actions that help combat racism towards the Roma population. The dataset could also be used to evaluate the impact of the measures adopted over a period of time, and to investigate and improve those measures whose impact was "negative" or "neutral" through a more conscientious treatment of the Roma population.
### Discussion of biases
An exploratory analysis of the data was carried out; to that end, word clouds were produced to analyse the synthetic and non-synthetic data.
#### Datos no sintéticos
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Hechos_normales.png">
Aquí podemos ver que muchos de los hechos se generaron en noticias, en mujeres, temas de vivienda, con la policia y la familia.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Intervenci%C3%B3n_normal.png">
Las intervenciones hablan de derechos, de cartas, de igualdad, asesorar a la persona y de presentar quejas.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/etiqueta_normal.png">
Muchos de los resultados de las intervenciones fueron negativos o neutrales (Posiblemente sin respuesta) o de que no se logró lo propuesto (Negativo). Se puede observar el desbalance en los datos.
Por medio de la librería *pysentimiento* y usando el modelo `pysentimiento/pt_hate_speech`, se realizó una métrica para medir el discurso de odio en el `Hecho`.
Para eso análizaremos hateful, targeted y aggressive. La métrica va de 0 a 1, para cada una. Siendo la probabilidad de que esa caracteristica esté en el texto.
Se encotró lo siguiente
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_normal.png">
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_2_normal.png">
La distribución de los valores de hateful, targeted y aggressive presentan una cola alargada hacia la derecha, lo que indica que hay pocos casos en los que se detecta un mensaje de odio en los hechos.
Para el caso, donde no se generó la intervección y resultado se presenta un crecimiento en el tercer cuartil, esto quiere decir que hay mensajes que muestra un discurso de odio. Por ejemplo el hateful es de 0.4, targeted de 0.02 y aggresive de 0.03. En conclusión, como está escrito el hecho y como fue entrenado el modelo de *pysentimiento*, en general los hechos no tienen un mensaje de odio.
#### Datos sintéticos.
Se realizó el mismo análisis para los datos sintéticos
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Hechos_sinteticos.png"/>
Cabe resltar que el hecho no fue generado.
Es claro que el dataset está más sesgado a contener las palabras gitano, gitana, comunidad gitana, etnia gitana, familia, discriminación.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Intervenci%C3%B3n_sintetica.png"/>
Esta parte fue generada por el modelo *Bloom*. Puede comprobarse que con *few-shot* se logra captar más que todo la palabra `derecho`.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Etiquetas%20sinteticas.png">
Tambien hay un desbalance en las etiquetas generadas.
Por medio de la librería *pysentimiento* y usando el modelo `pysentimiento/pt_hate_speech` ,se realizó una métrica para medir el discurso de odio en el `Hecho`
Para eso análizaremos hateful, targeted y aggressive. La métrica va de 0 a 1, para cada una. Siendo la probabilidad de que esa caracteristica esté en el texto.
Se encotró lo siguiente
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_sintetico.png">
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_2_sintetico.png">
The distribution of the hateful, targeted, and aggressive values shows a long right tail, which indicates that there are few cases in which a hate message is detected in the facts.
Both the median and the mean of the hateful, targeted, and aggressive values are very close to zero, indicating that most facts do not contain hate messages. Moreover, at the third quartile (75% of the data) hateful is 0.3, targeted is 0.0089, and aggressive is 0.06, which reinforces the conclusion that most facts do not contain a hate message in their description.
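These summary statistics can be reproduced with a short sketch using Python's standard library; the score values below are made-up placeholders rather than the real dataset scores:

```python
import statistics

# Hypothetical hateful scores for a handful of facts (placeholder values)
hateful = [0.01, 0.02, 0.02, 0.03, 0.05, 0.08, 0.30, 0.65]

mean = statistics.mean(hateful)
median = statistics.median(hateful)
q1, q2, q3 = statistics.quantiles(hateful, n=4)  # the three quartile cut points

# A mean well above the median is one simple signal of a right-skewed
# (long right tail) distribution, as observed for these metrics.
print(f"mean={mean:.3f} median={median:.3f} Q3={q3:.3f}")
```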
## Additional information
### Dataset curators
* <a href="https://www.linkedin.com/in/frorozcol/">Fredy Orozco</a>
* <a href="https://www.linkedin.com/in/mariajesusgs">María Jesús García</a>
* <a href="https://www.linkedin.com/in/ramonruedadelgado/">Ramón Rueda</a> | [
-0.46861445903778076,
-0.647391676902771,
0.15961264073848724,
0.3549286425113678,
-0.40363261103630066,
-0.13507215678691864,
-0.15462864935398102,
-0.4774248003959656,
0.41836047172546387,
0.24163737893104553,
-0.5096086263656616,
-0.8479011654853821,
-0.62229323387146,
0.421300768852233... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/voxpopuli | distil-whisper | 2023-09-25T10:30:13Z | 59 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-09-25T10:30:13Z | 2023-04-07T17:10:56.000Z | 2023-04-07T17:10:56 | ---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: VoxPopuli
---
# Distil Whisper: VoxPopuli
This is a variant of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/facebook/voxpopuli).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
| [
-0.15034285187721252,
-0.7668005228042603,
0.137380912899971,
0.4081325829029083,
-0.1388116180896759,
0.051264796406030655,
-0.1943289190530777,
-0.13166329264640808,
0.4409128427505493,
0.37996166944503784,
-0.8221492171287537,
-0.505931556224823,
-0.5634603500366211,
0.03640911355614662... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/the_pile_openwebtext2 | vietgpt | 2023-07-15T09:20:18Z | 59 | 1 | null | [
"language:en",
"region:us"
] | 2023-07-15T09:20:18Z | 2023-04-11T19:24:36.000Z | 2023-04-11T19:24:36 | ---
language: en
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: reddit_scores
sequence: int32
splits:
- name: train
num_bytes: 68786199155
num_examples: 17103059
download_size: 42444568964
dataset_size: 68786199155
---
# Dataset Card for "the_pile_openwebtext2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6477506160736084,
-0.20104403793811798,
-0.05990539491176605,
0.17286372184753418,
-0.4325448274612427,
-0.09399135410785675,
0.34568077325820923,
-0.1732153296470642,
0.6717790961265564,
0.4294852018356323,
-0.5037835240364075,
-0.5587186813354492,
-0.6035104393959045,
-0.4501978754997... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
slvnwhrl/blurbs-clustering-p2p | slvnwhrl | 2023-04-24T11:42:06Z | 59 | 0 | null | [
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-nc-4.0",
"embeddings",
"clustering",
"benchmark",
"region:us"
] | 2023-04-24T11:42:06Z | 2023-04-21T14:17:32.000Z | 2023-04-21T14:17:32 | ---
license: cc-by-nc-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
The dataset contains book titles and is based on the dataset from the [GermEval 2019 Shared Task on Hierarchical Classification of Blurbs](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html). It contains 18'084 unique samples, 28 splits with 177 to 16'425 samples and 4 to 93 unique classes. Splits are built similarly to [MTEB](https://github.com/embeddings-benchmark/mteb)'s [ArxivClusteringP2P](https://huggingface.co/datasets/mteb/arxiv-clustering-p2p).
Have a look at the [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more information, datasets, and evaluation results.
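Such splits are typically scored in the MTEB fashion: k-means with k equal to the number of gold classes, evaluated with V-measure. A minimal sketch with synthetic stand-in embeddings (the blob data below is illustrative, not the actual benchmark embeddings):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(0)

# Stand-in for sentence embeddings of blurbs: two well-separated blobs.
# In practice these would come from a German sentence-embedding model.
labels_true = np.array([0] * 50 + [1] * 50)
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(50, 8)),
    rng.normal(loc=5.0, scale=0.1, size=(50, 8)),
])

# k-means with k = number of gold classes, scored against the gold labels
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
score = v_measure_score(labels_true, pred)
print(f"V-measure: {score:.3f}")
```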
-0.36985525488853455,
-0.6850837469100952,
0.3440563976764679,
0.29514163732528687,
-0.513349175453186,
0.016135383397340775,
-0.14674881100654602,
-0.26755329966545105,
0.13917119801044464,
0.23082047700881958,
-0.22190719842910767,
-1.0893950462341309,
-0.7497704029083252,
0.013750746846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
doushabao4766/weibo_ner_knowledge_V3_wc | doushabao4766 | 2023-05-20T02:21:51Z | 59 | 1 | null | [
"region:us"
] | 2023-05-20T02:21:51Z | 2023-05-20T02:21:48.000Z | 2023-05-20T02:21:48 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-GPE.NAM
'1': B-GPE.NOM
'2': B-LOC.NAM
'3': B-LOC.NOM
'4': B-ORG.NAM
'5': B-ORG.NOM
'6': B-PER.NAM
'7': B-PER.NOM
'8': I-GPE.NAM
'9': I-GPE.NOM
'10': I-LOC.NAM
'11': I-LOC.NOM
'12': I-ORG.NAM
'13': I-ORG.NOM
'14': I-PER.NAM
'15': I-PER.NOM
'16': O
- name: knowledge
dtype: string
- name: token_words
sequence:
sequence: string
- name: knowledge_words
sequence:
sequence: string
splits:
- name: train
num_bytes: 7027512
num_examples: 1350
- name: validation
num_bytes: 1116528
num_examples: 270
- name: test
num_bytes: 1107689
num_examples: 270
download_size: 2405285
dataset_size: 9251729
---
# Dataset Card for "weibo_ner_knowledge_V3_wc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3938560485839844,
-0.08260628581047058,
0.21571815013885498,
0.4199269115924835,
-0.10426066815853119,
-0.19325433671474457,
0.46178242564201355,
-0.26801586151123047,
0.5970896482467651,
0.516441822052002,
-0.5955990552902222,
-0.8473407626152039,
-0.5617578625679016,
-0.38197234272956... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ejschwartz/oo-method-test-split | ejschwartz | 2023-09-11T19:21:06Z | 59 | 0 | null | [
"task_categories:text-classification",
"region:us"
] | 2023-09-11T19:21:06Z | 2023-06-20T18:50:45.000Z | 2023-06-20T18:50:45 | ---
task_categories:
- text-classification
train-eval-index:
- config: bylibrary
task: text-classification
task_id: binary_classification
splits:
eval_split: test
col_mapping:
Disassembly: text
Type: target
---
TODO: Add datacard | [
-0.8879550695419312,
0.15493148565292358,
0.3644258975982666,
0.4791889190673828,
-0.5132691264152527,
0.16183073818683624,
0.5309286117553711,
-0.04604329168796539,
0.8361197710037231,
0.924931526184082,
-0.5331580638885498,
-0.7911657094955444,
-0.1576492190361023,
-0.3203865587711334,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DynamicSuperb/NoiseSNRLevelPrediction_VCTK_MUSAN-Gaussian | DynamicSuperb | 2023-11-24T09:54:21Z | 59 | 0 | null | [
"region:us"
] | 2023-11-24T09:54:21Z | 2023-08-11T09:13:37.000Z | 2023-08-11T09:13:37 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 3458150877.875
num_examples: 26865
download_size: 3434724026
dataset_size: 3458150877.875
---
# Dataset Card for "NoiseSNRLevelPredictiongaussian_VCTKMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4300971031188965,
-0.256049782037735,
0.08270318806171417,
0.5216569304466248,
-0.2827795445919037,
-0.09025660902261734,
0.1791808307170868,
-0.14336681365966797,
0.6178210377693176,
0.35689565539360046,
-1.0336592197418213,
-0.9209120273590088,
-0.6776683330535889,
-0.4086796939373016... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
larryvrh/ShareGPT-Zh_Only | larryvrh | 2023-08-22T08:25:50Z | 59 | 6 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | 2023-08-22T08:25:50Z | 2023-08-21T09:57:50.000Z | 2023-08-21T09:57:50 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 69835231
num_examples: 8631
download_size: 32862465
dataset_size: 69835231
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- 1K<n<10K
---
# Dataset Card for "sharegpt"
Combined and filtered from [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) and [zetavg/ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed). | [
-0.6489244699478149,
-0.3391622006893158,
0.3925166726112366,
0.4470568597316742,
-0.48967790603637695,
0.10886511951684952,
0.22832509875297546,
-0.3818342387676239,
0.687946617603302,
0.6817283034324646,
-1.1072444915771484,
-0.6375783681869507,
-0.8206462264060974,
-0.12696382403373718,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
notrichardren/HaluEval | notrichardren | 2023-09-11T21:09:44Z | 59 | 0 | null | [
"region:us"
] | 2023-09-11T21:09:44Z | 2023-09-11T21:09:34.000Z | 2023-09-11T21:09:34 | ---
dataset_info:
- config_name: dialogue
features:
- name: knowledge
dtype: string
- name: dialogue_history
dtype: string
- name: right_response
dtype: string
- name: hallucinated_response
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 6332598
num_examples: 10000
download_size: 3451421
dataset_size: 6332598
- config_name: general
features:
- name: user_query
dtype: string
- name: chatgpt_response
dtype: string
- name: hallucination_label
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 3010941
num_examples: 5000
download_size: 1849332
dataset_size: 3010941
- config_name: qa
features:
- name: knowledge
dtype: string
- name: question
dtype: string
- name: right_answer
dtype: string
- name: hallucinated_answer
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 5546422
num_examples: 10000
download_size: 3753464
dataset_size: 5546422
- config_name: summarization
features:
- name: document
dtype: string
- name: right_summary
dtype: string
- name: hallucinated_summary
dtype: string
- name: task_type
dtype: string
splits:
- name: train
num_bytes: 46578787
num_examples: 10000
download_size: 27986765
dataset_size: 46578787
configs:
- config_name: dialogue
data_files:
- split: train
path: dialogue/train-*
- config_name: general
data_files:
- split: train
path: general/train-*
- config_name: qa
data_files:
- split: train
path: qa/train-*
- config_name: summarization
data_files:
- split: train
path: summarization/train-*
---
# Dataset Card for "HaluEval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6240628361701965,
-0.219760000705719,
0.12995846569538116,
0.11946724355220795,
-0.2131556123495102,
0.0688980296254158,
0.29161930084228516,
-0.20891475677490234,
0.7995426058769226,
0.4559326171875,
-0.6947270035743713,
-0.8520516753196716,
-0.5840199589729309,
-0.52220618724823,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jjonhwa/SECOND_KQ_V2 | jjonhwa | 2023-09-13T07:04:47Z | 59 | 0 | null | [
"region:us"
] | 2023-09-13T07:04:47Z | 2023-09-13T01:44:49.000Z | 2023-09-13T01:44:49 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: score
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 686780736
num_examples: 86975
download_size: 276955064
dataset_size: 686780736
---
# Dataset Card for "SECOND_KQ_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.28313779830932617,
-0.040561795234680176,
0.27448680996894836,
0.1278473287820816,
-0.41667449474334717,
0.12216156721115112,
0.6244043707847595,
-0.23704354465007782,
0.6429892778396606,
0.602932870388031,
-0.776605486869812,
-0.6415378451347351,
-0.6215881109237671,
-0.524972200393676... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adityarra07/czech_test | adityarra07 | 2023-10-04T18:09:08Z | 59 | 0 | null | [
"region:us"
] | 2023-10-04T18:09:08Z | 2023-10-04T18:09:04.000Z | 2023-10-04T18:09:04 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 53042654.644653864
num_examples: 1000
download_size: 52259185
dataset_size: 53042654.644653864
---
# Dataset Card for "czech_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6022446155548096,
-0.4622125029563904,
0.22799746692180634,
0.3067328631877899,
-0.3895472586154938,
-0.00010471741552464664,
-0.09014029800891876,
-0.2299228012561798,
0.6475375294685364,
0.5229538083076477,
-1.0113428831100464,
-1.049472451210022,
-0.42114147543907166,
-0.034015297889... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sayakpaul/drawbench | sayakpaul | 2023-10-21T05:25:29Z | 59 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-21T05:25:29Z | 2023-10-21T05:24:45.000Z | 2023-10-21T05:24:45 | ---
license: apache-2.0
---
DrawBench dataset from [Imagen](https://imagen.research.google/). | [
-0.4636109471321106,
-0.4595412611961365,
0.21393035352230072,
0.2906893491744995,
-0.2289542853832245,
-0.2737414240837097,
0.289381206035614,
-0.4062679708003998,
0.7545084357261658,
1.0534642934799194,
-0.7877190113067627,
-0.6163312196731567,
-0.3460128903388977,
-0.5242343544960022,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Evening2k/gpi | Evening2k | 2023-10-22T17:55:44Z | 59 | 0 | null | [
"region:us"
] | 2023-10-22T17:55:44Z | 2023-10-22T17:54:53.000Z | 2023-10-22T17:54:53 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liezeleinstein/rwkvtest2instja5 | liezeleinstein | 2023-10-24T13:11:19Z | 59 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-24T13:11:19Z | 2023-10-24T13:10:51.000Z | 2023-10-24T13:10:51 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Leul78/fina | Leul78 | 2023-11-03T11:28:30Z | 59 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T11:28:30Z | 2023-11-03T11:27:58.000Z | 2023-11-03T11:27:58 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Phando/vsr | Phando | 2023-11-06T09:01:54Z | 59 | 0 | null | [
"region:us"
] | 2023-11-06T09:01:54Z | 2023-11-06T09:00:47.000Z | 2023-11-06T09:00:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_link
dtype: string
- name: caption
dtype: string
- name: label
dtype: int64
- name: relation
dtype: string
- name: subj
dtype: string
- name: obj
dtype: string
- name: annotator_id
dtype: int64
- name: vote_true_validator_id
dtype: string
- name: vote_false_validator_id
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 176262215.386
num_examples: 3489
- name: validation
num_bytes: 17990271.0
num_examples: 340
- name: test
num_bytes: 54289880.918
num_examples: 1222
download_size: 239471235
dataset_size: 248542367.30400002
---
# Dataset Card for "vsr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6949394345283508,
-0.1876426488161087,
0.23771794140338898,
0.014882495626807213,
-0.21487021446228027,
0.14312632381916046,
0.29443588852882385,
-0.18436770141124725,
0.8125481605529785,
0.5607078671455383,
-0.8133576512336731,
-0.546524167060852,
-0.5937156677246094,
-0.22411653399467... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxm/nq_corpus_dpr | jxm | 2023-11-07T22:15:32Z | 59 | 0 | null | [
"region:us"
] | 2023-11-07T22:15:32Z | 2023-11-07T22:13:52.000Z | 2023-11-07T22:13:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3284289693
num_examples: 5332023
- name: dev
num_bytes: 520583613
num_examples: 849508
download_size: 2568992962
dataset_size: 3804873306
---
# Dataset Card for "nq_corpus_dpr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5815801620483398,
-0.05416104197502136,
0.08214902877807617,
0.30425286293029785,
-0.16146022081375122,
0.35346901416778564,
0.19752392172813416,
0.013005310669541359,
0.8353875279426575,
0.4761368930339813,
-0.5109419822692871,
-0.9076217412948608,
-0.581018328666687,
-0.04423723369836... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shunk031/PubLayNet | shunk031 | 2023-11-09T13:09:05Z | 59 | 0 | null | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"task_categories:image-to-text",
"task_categories:question-answering",
"task_categories:other",
"task_categories:multiple-choice",
"task_categories:token-classification",
"task_categories:tabular-to-text",
"task_categories... | 2023-11-09T13:09:05Z | 2023-11-09T13:02:05.000Z | 2023-11-09T13:02:05 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cdla-permissive-1.0
multilinguality:
- monolingual
pretty_name: PubLayNet
size_categories: []
source_datasets:
- original
tags:
- graphic design
- layout-generation
task_categories:
- image-classification
- image-segmentation
- image-to-text
- question-answering
- other
- multiple-choice
- token-classification
- tabular-to-text
- object-detection
- table-question-answering
- text-classification
- table-to-text
task_ids:
- multi-label-image-classification
- multi-class-image-classification
- semantic-segmentation
- image-captioning
- extractive-qa
- closed-domain-qa
- multiple-choice-qa
- named-entity-recognition
---
# Dataset Card for PubLayNet
[](https://github.com/shunk031/huggingface-datasets_PubLayNet/actions/workflows/ci.yaml)
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/publaynet/
- **Repository:** https://github.com/shunk031/huggingface-datasets_PubLayNet
- **Paper (Preprint):** https://arxiv.org/abs/1908.07836
- **Paper (ICDAR2019):** https://ieeexplore.ieee.org/document/8977963
### Dataset Summary
PubLayNet is a dataset for document layout analysis. It contains images of research papers and articles and annotations for various elements in a page such as "text", "list", "figure" etc in these research paper images. The dataset was obtained by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```python
import datasets as ds
dataset = ds.load_dataset(
path="shunk031/PubLayNet",
decode_rle=True, # True if Run-length Encoding (RLE) is to be decoded and converted to binary mask.
)
```
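For context on the `decode_rle` flag: COCO-style uncompressed RLE stores alternating run lengths of 0s and 1s in column-major order. A toy decoder might look like this (illustrative only, not the loader's actual implementation):

```python
import numpy as np

def decode_uncompressed_rle(counts, height, width):
    """Decode COCO-style uncompressed RLE (column-major runs of 0s, then 1s)."""
    flat = np.zeros(height * width, dtype=np.uint8)
    pos, value = 0, 0
    for run in counts:
        flat[pos:pos + run] = value
        pos += run
        value = 1 - value  # runs alternate between 0s and 1s
    return flat.reshape((width, height)).T  # column-major -> (H, W)

# 2x3 mask from runs [2, 3, 1]: flat column-major bits are 0,0,1,1,1,0
mask = decode_uncompressed_rle([2, 3, 1], height=2, width=3)
```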
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
- [CDLA-Permissive](https://cdla.io/permissive-1-0/)
### Citation Information
```bibtex
@inproceedings{zhong2019publaynet,
title={Publaynet: largest dataset ever for document layout analysis},
author={Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno},
booktitle={2019 International Conference on Document Analysis and Recognition (ICDAR)},
pages={1015--1022},
year={2019},
organization={IEEE}
}
```
### Contributions
Thanks to [ibm-aur-nlp/PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) for creating this dataset.
| [
-0.40505942702293396,
-0.4141570031642914,
0.1389823853969574,
0.3609328269958496,
-0.19821199774742126,
-0.07809282094240189,
-0.17887042462825775,
-0.2768246829509735,
0.531981885433197,
0.6907927393913269,
-0.6004325151443481,
-0.7616609334945679,
-0.4853772521018982,
0.0861412286758422... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
multi-train/codesearchnet_1107 | multi-train | 2023-11-10T21:11:59Z | 59 | 0 | null | [
"region:us"
] | 2023-11-10T21:11:59Z | 2023-11-10T21:10:52.000Z | 2023-11-10T21:10:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 2207111297
num_examples: 1000000
download_size: 552466752
dataset_size: 2207111297
---
# Dataset Card for "codesearchnet_1107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44477900862693787,
0.1765303909778595,
0.14093017578125,
0.16273897886276245,
-0.11935389786958694,
0.033884331583976746,
0.28366973996162415,
0.04514116048812866,
0.9791173338890076,
0.5166811943054199,
-0.641822099685669,
-0.8079129457473755,
-0.4708254337310791,
-0.05079292878508568,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qbo-odp/MNBVC-core | qbo-odp | 2023-11-17T13:37:56Z | 59 | 0 | null | [
"region:us"
] | 2023-11-17T13:37:56Z | 2023-11-17T07:51:53.000Z | 2023-11-17T07:51:53 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/know-saraswati-alpaca-cot-by-knowrohit | AlignmentLab-AI | 2023-11-17T16:24:46Z | 59 | 0 | null | [
"region:us"
] | 2023-11-17T16:24:46Z | 2023-11-17T15:44:03.000Z | 2023-11-17T15:44:03 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/amazon-tlm | yxchar | 2021-11-04T22:22:29Z | 58 | 0 | null | [
"region:us"
] | 2021-11-04T22:22:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/reddit-topics | jamescalam | 2022-04-28T18:14:19Z | 58 | 2 | null | [
"region:us"
] | 2022-04-28T18:14:19Z | 2022-04-28T18:13:13.000Z | 2022-04-28T18:13:13 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codeparrot/codeparrot-valid-v2-near-dedup | codeparrot | 2022-06-16T18:25:43Z | 58 | 0 | null | [
"region:us"
] | 2022-06-16T18:25:43Z | 2022-06-16T18:25:35.000Z | 2022-06-16T18:25:35 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeIR/fiqa-generated-queries | BeIR | 2022-10-23T06:13:18Z | 58 | 2 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:13:18Z | 2022-06-17T12:56:09.000Z | 2022-06-17T12:56:09 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
For example, a dataset can be downloaded and loaded with the `beir` Python package (a sketch following the BEIR repository's quickstart; dataset names follow the `BEIR-Name` column in the table below):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip a preprocessed dataset, e.g. SciFact.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# Load the corpus, queries and relevance judgements for the test split.
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models with standard IR metrics such as nDCG@10, MAP and Recall@k.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns in this order: `query-id`, `corpus-id` and `score`. The first row is a header. For example: `q1 doc1 1`
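Under these conventions, all three files can be loaded into plain dictionaries with the standard library alone. The sketch below is illustrative: the `load_beir_folder` helper and the `qrels/test.tsv` path are assumptions for this example, not part of BEIR itself.

```python
import csv
import json

def load_beir_folder(path):
    # corpus.jsonl / queries.jsonl: one JSON dictionary per line (jsonlines).
    corpus, queries = {}, {}
    with open(f"{path}/corpus.jsonl", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    with open(f"{path}/queries.jsonl", encoding="utf-8") as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]
    # qrels/*.tsv: tab-separated query-id, corpus-id, score with a header row.
    qrels = {}
    with open(f"{path}/qrels/test.tsv", encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return corpus, queries, qrels
```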
### Data Instances
A high-level example from any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
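As a minimal, self-contained sketch of how these three structures fit together, a toy word-overlap retriever can be checked against the qrels. The data below mirrors the example above; the retriever itself is purely illustrative and not part of BEIR.

```python
# Toy data following the corpus/queries/qrels layout above.
corpus = {
    "doc1": {"title": "Albert Einstein", "text": "developed the mass energy equivalence formula"},
    "doc2": {"title": "", "text": "wheat beer is brewed with a large proportion of wheat"},
}
queries = {"q1": "who developed the mass energy equivalence formula"}
qrels = {"q1": {"doc1": 1}}

def rank(query, corpus):
    # Score each document by word overlap with the query, best first.
    terms = set(query.lower().split())
    scores = {
        doc_id: len(terms & set((doc["title"] + " " + doc["text"]).lower().split()))
        for doc_id, doc in corpus.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

top = rank(queries["q1"], corpus)[0]
print(top, top in qrels["q1"])  # doc1 True
```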
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
-0.5227212905883789,
-0.5249219536781311,
0.14435674250125885,
0.04820423573255539,
0.055916160345077515,
0.0011022627586498857,
-0.1081070527434349,
-0.24874727427959442,
0.28598034381866455,
0.07840226590633392,
-0.45233607292175293,
-0.7186435461044312,
-0.347678542137146,
0.20300328731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
knkarthick/samsum | knkarthick | 2022-10-21T03:03:27Z | 58 | 3 | samsum-corpus | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-nd-4.0",
"conversations-summarization",
"arxiv:1911.12237",
"r... | 2022-10-21T03:03:27Z | 2022-06-29T08:24:34.000Z | 2022-06-29T08:24:34 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
tags:
- conversations-summarization
---
# Dataset Card for SAMSum Corpus
## Dataset Description
### Links
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** https://arxiv.org/abs/1911.12237v2
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Languages
English
## Dataset Structure
### Data Instances
The SAMSum dataset is made of 16,369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
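A sketch of how such an instance's dialogue can be split into utterances and speakers: each utterance sits on its own line, prefixed by the speaker's name and a colon (the parsing helper here is illustrative, not part of the dataset).

```python
instance = {
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\n"
                "Amanda: I'll bring you tomorrow :-)",
}

# Each line holds one utterance; the speaker's name precedes the first colon.
utterances = instance["dialogue"].splitlines()
speakers = {u.split(":", 1)[0] for u in utterances}

print(len(utterances))   # 3
print(sorted(speakers))  # ['Amanda', 'Jerry']
```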
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 14732
- val: 818
- test: 819
## Dataset Creation
### Curation Rationale
In paper:
In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
### Annotation process
In paper:
Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
## Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
## Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
## Contributions | [
-0.348955363035202,
-0.7967272996902466,
0.19213993847370148,
0.14605115354061127,
-0.27109137177467346,
0.0754840150475502,
-0.30115869641304016,
-0.4882080852985382,
0.6768436431884766,
0.5494160652160645,
-0.5561162829399109,
-0.6235945820808411,
-0.3545190989971161,
0.29382258653640747... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BirdL/DALL-E-Cats | BirdL | 2022-09-28T21:07:37Z | 58 | 0 | null | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-09-28T21:07:37Z | 2022-08-01T20:37:15.000Z | 2022-08-01T20:37:15 | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Cats Dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
- unconditional-image-generation
task_ids: []
---
DALL-E-Cats is a dataset meant to produce a synthetic animal dataset. This is a successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/) | [
-0.5938958525657654,
-0.6161585450172424,
0.04950995370745659,
0.30039146542549133,
-0.17872191965579987,
0.4625435173511505,
0.3942636251449585,
-0.5702176094055176,
0.4111902117729187,
0.6491820812225342,
-0.6739726662635803,
-0.27903246879577637,
0.05875644087791443,
0.4954630136489868,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
USC-MOLA-Lab/MFRC | USC-MOLA-Lab | 2022-08-26T00:36:03Z | 58 | 5 | null | [
"arxiv:2208.05545",
"region:us"
] | 2022-08-26T00:36:03Z | 2022-08-10T15:11:55.000Z | 2022-08-10T15:11:55 | # Dataset Card for MFRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Reddit posts annotated for moral foundations
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- text
- subreddit
- bucket
- annotator
- annotation
- confidence
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
cc-by-4.0
### Citation Information
```bibtex
@misc{trager2022moral,
title={The Moral Foundations Reddit Corpus},
author={Jackson Trager and Alireza S. Ziabari and Aida Mostafazadeh Davani and Preni Golazazian and Farzan Karimi-Malekabadi and Ali Omrani and Zhihe Li and Brendan Kennedy and Nils Karl Reimer and Melissa Reyes and Kelsey Cheng and Mellow Wei and Christina Merrifield and Arta Khosravi and Evans Alvarez and Morteza Dehghani},
year={2022},
eprint={2208.05545},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| [
-0.5953580141067505,
-0.4138275384902954,
0.11607244610786438,
0.18754126131534576,
-0.3849513530731201,
0.06634992361068726,
-0.12406843155622482,
-0.2741219401359558,
0.36246034502983093,
0.4147392809391022,
-0.9271416664123535,
-0.8572237491607666,
-0.7516272664070129,
0.376853525638580... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/osiris | bigbio | 2022-12-22T15:46:10Z | 58 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-12-22T15:46:10Z | 2022-11-13T22:11:10.000Z | 2022-11-13T22:11:10 |
---
language:
- en
bigbio_language:
- English
license: cc-by-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_3p0
pretty_name: OSIRIS
homepage: https://sites.google.com/site/laurafurlongweb/databases-and-tools/corpora/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for OSIRIS
## Dataset Description
- **Homepage:** https://sites.google.com/site/laurafurlongweb/databases-and-tools/corpora/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The OSIRIS corpus is a set of MEDLINE abstracts manually annotated
with human variation mentions. The corpus is distributed under the terms
of the Creative Commons Attribution License
Creative Commons Attribution 3.0 Unported License,
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited (Furlong et al, BMC Bioinformatics 2008, 9:84).
## Citation Information
```
@ARTICLE{Furlong2008,
author = {Laura I Furlong and Holger Dach and Martin Hofmann-Apitius and Ferran Sanz},
title = {OSIRISv1.2: a named entity recognition system for sequence variants
of genes in biomedical literature.},
journal = {BMC Bioinformatics},
year = {2008},
volume = {9},
pages = {84},
doi = {10.1186/1471-2105-9-84},
pii = {1471-2105-9-84},
pmid = {18251998},
timestamp = {2013.01.15},
url = {http://dx.doi.org/10.1186/1471-2105-9-84}
}
```
| [
-0.5111790895462036,
-0.19827868044376373,
0.2597680389881134,
0.005502836313098669,
-0.22484083473682404,
-0.1997968852519989,
-0.15762561559677124,
-0.5425147414207458,
0.6527869701385498,
0.6605000495910645,
-0.6078504920005798,
-0.8019595146179199,
-0.7792196869850159,
0.79186993837356... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tarteel-ai/quran-tafsir | tarteel-ai | 2023-02-11T23:18:23Z | 58 | 4 | null | [
"region:us"
] | 2023-02-11T23:18:23Z | 2023-02-11T23:18:20.000Z | 2023-02-11T23:18:20 | ---
dataset_info:
features:
- name: en-ahmedali
dtype: string
- name: en-ahmedraza
dtype: string
- name: en-arberry
dtype: string
- name: en-asad
dtype: string
- name: en-daryabadi
dtype: string
- name: en-hilali
dtype: string
- name: en-itani
dtype: string
- name: en-maududi
dtype: string
- name: en-mubarakpuri
dtype: string
- name: en-pickthall
dtype: string
- name: en-qarai
dtype: string
- name: en-qaribullah
dtype: string
- name: en-sahih
dtype: string
- name: en-sarwar
dtype: string
- name: en-shakir
dtype: string
- name: en-transliterati
dtype: string
- name: en-wahiduddi
dtype: string
- name: en-yusufali
dtype: string
- name: surah
dtype: int64
- name: ayah
dtype: int64
splits:
- name: train
num_bytes: 16266291
num_examples: 6236
download_size: 9038013
dataset_size: 16266291
---
# Dataset Card for "quran-tafsir"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5451316833496094,
-0.27834734320640564,
-0.06181654334068298,
0.20243756473064423,
-0.29712826013565063,
0.09262420237064362,
0.19832226634025574,
-0.14744754135608673,
0.6824036836624146,
0.513515830039978,
-0.6217271685600281,
-0.8795517683029175,
-0.7393322587013245,
-0.1207696273922... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/MultiScene | jonathan-roberts1 | 2023-04-03T16:15:59Z | 58 | 0 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:mit",
"region:us"
] | 2023-04-03T16:15:59Z | 2023-02-28T16:13:48.000Z | 2023-02-28T16:13:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
sequence:
class_label:
names:
'0': apron
'1': baseball field
'2': basketball field
'3': beach
'4': bridge
'5': cemetery
'6': commercial
'7': farmland
'8': woodland
'9': golf course
'10': greenhouse
'11': helipad
'12': lake or pond
'13': oil field
'14': orchard
'15': parking lot
'16': park
'17': pier
'18': port
'19': quarry
'20': railway
'21': residential
'22': river
'23': roundabout
'24': runway
'25': soccer
'26': solar panel
'27': sparse shrub
'28': stadium
'29': storage tank
'30': tennis court
'31': train station
'32': wastewater plant
'33': wind turbine
'34': works
'35': sea
splits:
- name: train
num_bytes: 867506522
num_examples: 14000
download_size: 867005851
dataset_size: 867506522
license: mit
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "MultiScene"
## Dataset Description
- **Paper** [MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf)
- **Split** Clean
### Split Information
This HuggingFace dataset repository contains just the 'Clean' split.
### Licensing Information
MIT.
## Citation Information
[MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf)
```
@article{hua2021multiscene,
title = {MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images},
author = {Hua, Y. and Mou, L. and Jin, P. and Zhu, X. X.},
year = {in press},
journal = {IEEE Transactions on Geoscience and Remote Sensing}
}
``` | [
-0.7411736249923706,
-0.05192083120346069,
0.077554851770401,
0.15549615025520325,
-0.20589974522590637,
0.04826116934418678,
-0.137734055519104,
-0.3762117624282837,
0.23754997551441193,
0.5199432969093323,
-0.6877481341362,
-0.614818811416626,
-0.35336220264434814,
-0.022532867267727852,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
siddharthtumre/Revised-JNLPBA | siddharthtumre | 2023-04-12T12:43:52Z | 58 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"region:us"
] | 2023-04-12T12:43:52Z | 2023-04-12T12:29:56.000Z | 2023-04-12T12:29:56 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: IASL-BNER Revised JNLPBA
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-DNA
'2': I-DNA
'3': B-RNA
'4': I-RNA
'5': B-cell_line
'6': I-cell_line
'7': B-cell_type
'8': I-cell_type
'9': B-protein
'10': I-protein
config_name: revised-jnlpba
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.5456172227859497,
-0.42588189244270325,
-0.051285386085510254,
0.3873917758464813,
-0.4620095491409302,
0.05422838777303696,
-0.24659359455108643,
-0.2884668707847595,
0.6999505162239075,
0.5781948566436768,
-0.9070087671279907,
-1.1513407230377197,
-0.756676435470581,
0.029052251949906... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bbz662bbz/databricks-dolly-15k-ja-gozaru | bbz662bbz | 2023-05-29T12:58:37Z | 58 | 1 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | 2023-05-29T12:58:37Z | 2023-05-28T00:51:18.000Z | 2023-05-28T00:51:18 | ---
license: cc-by-sa-3.0
---
This dataset was created from "kunishou/databricks-dolly-15k-ja".
This dataset is licensed under CC BY SA 3.0
Last Update : 2023-05-28
databricks-dolly-15k-ja-gozaru
kunishou/databricks-dolly-15k-ja
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
| [
-0.12805062532424927,
-0.26643067598342896,
0.1786426603794098,
0.8457103967666626,
-0.48013362288475037,
-0.21841225028038025,
0.31894224882125854,
-0.14519278705120087,
0.5423585772514343,
0.8291373252868652,
-1.0741316080093384,
-0.3626120090484619,
-0.41345423460006714,
0.1988215595483... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Joemgu/sumstew | Joemgu | 2023-06-21T13:07:18Z | 58 | 6 | null | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"language:de",
"language:fr",
"language:it",
"language:es",
"license:apache-2.0",
"chemistry",
"biology",
"region:us"
] | 2023-06-21T13:07:18Z | 2023-05-30T20:36:23.000Z | 2023-05-30T20:36:23 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: target
dtype: string
- name: input_tokens
dtype: int64
- name: target_tokens
dtype: int64
- name: subset
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 3338029493
num_examples: 187221
- name: validation
num_bytes: 218403099
num_examples: 14542
- name: test
num_bytes: 201638368
num_examples: 12467
download_size: 1982559322
dataset_size: 3758070960
task_categories:
- summarization
language:
- en
- de
- fr
- it
- es
size_categories:
- 100K<n<1M
license: apache-2.0
tags:
- chemistry
- biology
---
# Dataset Card for "sumstew"
## TL;DR:
Sumstew is an abstractive, multilingual dataset with a balanced number of samples drawn from a diverse set of summarization datasets. Input sizes range up to 16384 tokens.
Samples are filtered using a diverse set of heuristics to encourage high coverage, accuracy, and factual consistency. Code to reproduce the dataset is available at *TODO*
## Dataset Description
- **Dataset Identifier**: sumstew
- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization. It incorporates diverse data sources such as cnn_dailymail, samsum, mlsum (de, fr, es, it), klexikon, xlsum (fr, en, es), govreport, sciqa, piqa, pubmed_qa, multinews, laysum, booksum, dialogsum, fanpage (it), ilpost (it). This data has been curated by filtering based on n-gram overlap between the source and target documents and normalized to prevent undue bias. Every instance in this dataset is prefixed by an instruction (title, summary, or qa).
## Task Information
- **Task Categories**: The tasks covered by this dataset are primarily summarization tasks.
- **Languages**: This dataset supports multiple languages including English (en), German (de), French (fr), Italian (it), and Spanish (es).
## Dataset Structure
- **Data Instances**: Each data instance in the dataset comprises the fields 'prompt', 'target', 'subset', and 'language', along with the token-count columns 'input_tokens' and 'target_tokens'.
- 'prompt': The input text for the task. (dtype: string)
- 'target': The expected output for the task. (dtype: string)
- 'subset': The subset of the dataset the instance belongs to. (dtype: string)
- 'language': The language of the instance. (dtype: string)
- **Data Splits**: The dataset is split into three subsets:
- 'train' set: 187221 examples
- 'validation' set: 14542 examples
- 'test' set: 12467 examples
## Dataset Statistics
- **Max Document Length**: The maximum document length is 16384 mlong-t5 tokens.
- **Max Output Length**: The maximum output length is 1024 mlong-t5 tokens.
## Additional Information
- **Data Collection**: The data has been collected from a variety of sources spanning different languages and domains, ensuring a diverse and comprehensive dataset.
- **Data Cleaning**: The dataset has been filtered by checking the ngram overlap between the source and target document and dropping samples which have too much or too little overlap, and also through normalization.
- **Known Limitations**: As the dataset is generated from diverse sources, the inherent biases or limitations of those sources may persist in this dataset as well.
- **Usage Scenarios**: This dataset can be used for training and evaluating models on tasks like summarization and question-answering, in a multilingual context.
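The n-gram overlap filter described under Data Cleaning above can be sketched as follows (an illustrative reimplementation; the n-gram size and thresholds here are placeholders, not the values actually used):

```python
def ngrams(text: str, n: int = 3) -> set:
    """Set of word n-grams in a lowercased text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_ratio(source: str, target: str, n: int = 3) -> float:
    """Fraction of target n-grams that also occur in the source."""
    tgt = ngrams(target, n)
    if not tgt:
        return 0.0
    return len(tgt & ngrams(source, n)) / len(tgt)

def keep_sample(source: str, target: str, lo: float = 0.05, hi: float = 0.8) -> bool:
    """Drop samples whose summaries overlap too little with the source
    (likely unfaithful) or too much (likely verbatim copying)."""
    return lo <= overlap_ratio(source, target) <= hi

print(keep_sample("the quick brown fox jumps over the lazy dog",
                  "the quick brown fox sleeps all day long"))  # True
print(keep_sample("a b c d e", "a b c d e"))  # False: pure copy
```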
## Credits
At this point I want to thank every creator of the underlying datasets (there are too many for me to count). If there are any issues concerning licensing or you want your data removed from the dataset, feel free to DM over Twitter (link in profile).
Special thanks to @pszemraj [https://huggingface.co/pszemraj] for the inspiration.
If interested in collaboration or consulting for your project, feel free to DM https://twitter.com/StutterBuddy | [
-0.270089715719223,
-0.4911327064037323,
0.11028729379177094,
0.38688522577285767,
-0.27958306670188904,
-0.05636768415570259,
-0.38216134905815125,
-0.3349200487136841,
0.4637514650821686,
0.4449634850025177,
-0.6793559789657593,
-0.7831690311431885,
-0.7051559090614319,
0.406336843967437... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mshenoda/spam-messages | mshenoda | 2023-06-08T01:29:46Z | 58 | 0 | null | [
"license:mit",
"region:us"
] | 2023-06-08T01:29:46Z | 2023-06-04T02:36:32.000Z | 2023-06-04T02:36:32 | ---
license: mit
---
## Dataset
The dataset is composed of messages labeled as ham or spam, merged from three data sources:
- SMS Spam Collection https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset
- Telegram Spam Ham https://huggingface.co/datasets/thehamkercat/telegram-spam-ham/tree/main
- Enron Spam: https://huggingface.co/datasets/SetFit/enron_spam/tree/main (only used message column and labels)
The preparation script for Enron is available at https://github.com/mshenoda/roberta-spam/tree/main/data/enron.
The data is split into 80% train, 10% validation, and 10% test sets; the scripts used to split and merge the three data sources are available at: https://github.com/mshenoda/roberta-spam/tree/main/data/utils.
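The 80/10/10 split described above can be sketched roughly like this (a minimal illustration using only the standard library; the actual split scripts are in the linked repository):

```python
import random

def split_80_10_10(rows, seed=42):
    """Shuffle deterministically, then slice into 80/10/10
    train/validation/test subsets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

# Synthetic stand-in for the merged ham/spam messages:
messages = [{"text": f"msg {i}", "label": "spam" if i % 4 == 0 else "ham"}
            for i in range(1000)]
train, val, test = split_80_10_10(messages)
print(len(train), len(val), len(test))  # 800 100 100
```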
### Dataset Class Distribution
Training 80% | Validation 10% | Testing 10%
:-------------------------:|:-------------------------:|:-------------------------:
 Class Distribution |  Class Distribution |  Class Distribution | [
-0.49386924505233765,
-0.649951159954071,
-0.15202271938323975,
0.20015451312065125,
-0.28021132946014404,
0.003614675486460328,
-0.1660815328359604,
-0.3216502070426941,
0.2977293133735657,
0.6876609921455383,
-0.6617334485054016,
-0.8114099502563477,
-0.6758919358253479,
0.28136172890663... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/hotpotqa-pl | clarin-knext | 2023-06-07T08:13:33Z | 58 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:13:33Z | 2023-06-06T22:21:34.000Z | 2023-06-06T22:21:34 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920734167099,
-0.9029767513275146,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.49629199504852295,
-0.0189602542668581,
0.41122621297836304,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175425052643,
-0.048304721713066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eddsterxyz/Raiders-Of-The-Lost-Kek | eddsterxyz | 2023-06-25T19:36:37Z | 58 | 0 | null | [
"arxiv:2001.07487",
"region:us"
] | 2023-06-25T19:36:37Z | 2023-06-25T18:06:23.000Z | 2023-06-25T18:06:23 | # Raiders Of The Lost Kek
The largest 4chan /pol/ dataset.
I extracted the post content and removed HTML artifacts and 4chan-specific noise,
such as post-number replies embedded in the text.
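The cleaning described above might look roughly like this (a sketch, not the script actually used; the exact patterns removed are an assumption):

```python
import html
import re

def clean_post(raw: str) -> str:
    """Strip HTML entities/tags and 4chan-style post-number replies
    (e.g. '>>123456789') from a post body."""
    text = html.unescape(raw)                             # &gt; -> >, etc.
    text = re.sub(r"<br\s*/?>", "\n", text, flags=re.I)   # keep line breaks
    text = re.sub(r"<[^>]+>", "", text)                   # drop other tags
    text = re.sub(r">>\d+\s*", "", text)                  # drop reply refs
    return text.strip()

raw = '<a href="#p12345">&gt;&gt;12345</a><br>this is the actual reply'
print(clean_post(raw))  # this is the actual reply
```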
## There are a few sizes of datasets available
- 100kLines - first 100,000 lines of text from the dataset
- 300kLines - first 300,000 lines of text from the dataset
- 500kLines - first 500,000 lines of text from the dataset
Maybe at some point, once I have the compute, I'll upload the whole thing.
Link: https://arxiv.org/abs/2001.07487
-0.6903060674667358,
-0.3443634808063507,
0.6656823754310608,
0.2343713492155075,
-0.3976344168186188,
0.19487923383712769,
0.3983422815799713,
0.005137340631335974,
0.524075448513031,
0.7478552460670471,
-0.9744910597801208,
-0.18865086138248444,
-0.48557886481285095,
0.5697621703147888,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sharmaarushi17/HPCPerfOpt-Open-ended | sharmaarushi17 | 2023-11-07T06:25:52Z | 58 | 0 | null | [
"task_categories:question-answering",
"size_categories:n<1K",
"license:openrail",
"code",
"region:us"
] | 2023-11-07T06:25:52Z | 2023-07-14T02:01:48.000Z | 2023-07-14T02:01:48 | ---
license: openrail
pretty_name: HPCPerfOpt (HPC Performance Optimization Benchmark)
configs:
- config_name: text
data_files:
- split: test
path: "text.csv"
- config_name: code
data_files:
- split: test
path: "code.csv"
task_categories:
- question-answering
tags:
- code
size_categories:
- n<1K
---
# Dataset Card for HPCPerfOpt (HPC Performance Optimization Dataset)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a question answering dataset for OpenMP performance optimization questions. It contains open-ended questions of two types:
1. What is the performance issue in the given code snippet? - Text answers
2. Please generate the optimized version of the given OpenMP code for better performance. - Code answers
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.6447977423667908,
-0.47556209564208984,
-0.03465237841010094,
0.23401600122451782,
-0.3443717360496521,
-0.3560236394405365,
-0.3632550537586212,
-0.23785917460918427,
-0.2315646857023239,
0.48329585790634155,
-0.7107687592506409,
-0.27838441729545593,
-0.30632075667381287,
-0.030886525... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Icannos/lichess_games | Icannos | 2023-07-16T14:58:24Z | 58 | 0 | null | [
"task_categories:text-generation",
"size_categories:100B<n<1T",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-07-16T14:58:24Z | 2023-07-15T22:08:43.000Z | 2023-07-15T22:08:43 | ---
license: cc0-1.0
task_categories:
- text-generation
language:
- en
pretty_name: Lichess Games
size_categories:
- 100B<n<1T
viewer: false
---
# Dataset Card for Lichess Games
## Dataset Description
- **Homepage:** https://database.lichess.org/
- **Point of Contact:** maxime.darrin@outlook.com
### Dataset Summary
This is an easy-to-use Hugging Face dataset for accessing the [lichess game database](https://database.lichess.org/). For now it supports only standard games,
but other variants will be added shortly.
Requirements:
```
chess
zstandard
```
### Supported Tasks and Leaderboards
It is intended for pretraining text generation models for chess games (in a PGN format).
## Dataset Structure
### Data Instances
Available configs consist on the year and month of the file as described here: https://database.lichess.org/.
For example, to get a small sample, one can download the dataset for June 2013 (~40 MB).
```python
from datasets import load_dataset
dataset = load_dataset("Icannos/lichess_games", "2013-06", streaming=True)
```
Examples (3 rows from june 2013):
<details>
```
{'text': '[Event "Rated Bullet game"]\n'
'[Site "https://lichess.org/in28emmw"]\n'
'[Date "????.??.??"]\n'
'[Round "?"]\n'
'[White "Kazuma"]\n'
'[Black "kikeillana"]\n'
'[Result "1-0"]\n'
'[BlackElo "1684"]\n'
'[BlackRatingDiff "-9"]\n'
'[ECO "A07"]\n'
'[Opening "King\'s Indian Attack: Keres Variation #2"]\n'
'[Termination "Normal"]\n'
'[TimeControl "60+0"]\n'
'[UTCDate "2013.05.31"]\n'
'[UTCTime "22:00:22"]\n'
'[WhiteElo "1756"]\n'
'[WhiteRatingDiff "+11"]\n'
'\n'
'1. Nf3 d5 2. g3 Bg4 3. Bg2 Bxf3 4. Bxf3 e6 5. O-O Bb4 6. d4 Nd7 7. '
'c3 Ba5 8. Bf4 Bb6 9. b4 a6 10. a4 c6 11. Nd2 Ngf6 12. e4 dxe4 13. '
'Nxe4 Nxe4 14. Bxe4 f6 15. c4 h6 16. c5 Bc7 17. Qb3 Bxf4 18. Qxe6+ '
'Qe7 19. Bg6+ Kd8 20. Qxe7+ Kxe7 21. gxf4 Rhe8 22. Bxe8 Rxe8 23. '
'Rfe1+ Kf7 24. Rxe8 Kxe8 25. Re1+ Kf7 26. Re4 g6 27. Kg2 f5 28. Re3 '
'h5 29. Kf3 Kg7 30. Re7+ Kf6 31. Rxd7 g5 32. Rxb7 1-0'}
{'text': '[Event "Rated Bullet game"]\n'
'[Site "https://lichess.org/e174t8h7"]\n'
'[Date "????.??.??"]\n'
'[Round "?"]\n'
'[White "Aceves"]\n'
'[Black "calculus"]\n'
'[Result "0-1"]\n'
'[BlackElo "1568"]\n'
'[BlackRatingDiff "+9"]\n'
'[ECO "D00"]\n'
'[Opening "Queen\'s Pawn Game #3"]\n'
'[Termination "Time forfeit"]\n'
'[TimeControl "60+1"]\n'
'[UTCDate "2013.05.31"]\n'
'[UTCTime "22:02:13"]\n'
'[WhiteElo "1487"]\n'
'[WhiteRatingDiff "-9"]\n'
'\n'
'1. d4 d5 2. e3 Nf6 3. c3 Bg4 4. Qc2 e6 5. Bd3 Bd6 6. Nd2 c6 7. e4 '
'dxe4 8. Nxe4 Nxe4 9. Bxe4 Bc7 10. Bxh7 g6 11. h3 Bf5 12. Qe2 Rxh7 '
'13. Be3 Qd6 14. Nf3 Nd7 15. Ng5 Rh8 16. g3 f6 17. Bf4 e5 18. dxe5 '
'fxe5 19. Bxe5 Qxe5 20. Qe3 Qxe3+ 21. fxe3 Bxg3+ 22. Ke2 Bh4 23. Nf3 '
'Be4 24. Rad1 O-O-O 25. Rhf1 Rhf8 26. Nd4 Rxf1 27. Rxf1 Ne5 28. Ne6 '
'Re8 29. Ng7 Re7 30. Rf4 Bd3+ 31. Kd2 Rxg7 32. Rxh4 Nf3+ 33. Kd1 Nxh4 '
'34. Kd2 Bf5 0-1'}
{'text': '[Event "Rated Blitz game"]\n'
'[Site "https://lichess.org/d4ui60z6"]\n'
'[Date "????.??.??"]\n'
'[Round "?"]\n'
'[White "melro"]\n'
'[Black "patrimpas"]\n'
'[Result "0-1"]\n'
'[BlackElo "1912"]\n'
'[BlackRatingDiff "+0"]\n'
'[ECO "B20"]\n'
'[Opening "Sicilian Defense: Staunton-Cochrane Variation"]\n'
'[Termination "Normal"]\n'
'[TimeControl "240+0"]\n'
'[UTCDate "2013.05.31"]\n'
'[UTCTime "22:02:15"]\n'
'[WhiteElo "1144"]\n'
'[WhiteRatingDiff "-1"]\n'
'\n'
'1. e4 c5 2. c4 Nc6 3. d3 g6 4. Bd2 Bg7 5. Bc3 Nf6 6. Nd2 d6 7. Rb1 '
'O-O 8. Bxf6 Bxf6 9. b3 Qa5 10. a4 Bc3 11. f3 e6 12. Ne2 Bg7 13. g4 '
'd5 14. h3 Nd4 15. Nxd4 cxd4 16. Be2 dxe4 17. fxe4 Bh6 18. Rb2 e5 19. '
'O-O Be3+ 20. Kh1 Qd8 21. Nf3 Bf4 22. Rf2 h5 23. Rg2 hxg4 24. hxg4 '
'Kg7 25. Kg1 Rh8 26. Kf2 Qf6 27. Qc2 Rh3 28. Qd1 Be3+ 29. Ke1 Rh1+ '
'30. Rg1 0-1'}
```
</details>
### Data Fields
Only a single column "text". Each row contains a single game in PGN format.
### How to use with python-chess
```python
from datasets import load_dataset
import chess.pgn
import io
dataset = load_dataset("Icannos/lichess_games", "2013-06", streaming=True)
for d in dataset['train']:
pgn = io.StringIO(d['text'])
game = chess.pgn.read_game(pgn)
print(game.headers['White'], game.headers['Black'])
print(game.headers['Result'])
print(game.mainline_moves())
break
```
### Data Splits
No predefined splits; there is one configuration per monthly file.
### Source Data
The underlying data are provided and maintained by the Lichess team under a CC0 license (https://database.lichess.org/); only the Hugging Face interface is provided here.
The loading script downloads the zstd files, reads from them on the fly without decompressing the whole file, and parses the games using python-chess.
#### Initial Data Collection and Normalization
The data comes from all the standard rated games played on lichess.org. Every rated game played on lichess and its metadata are recorded and stored by lichess.
Lichess.org provides a free-to-use, libre, and open-source platform to play chess online.
### Annotations
Some of the games (~6% according to lichess: https://database.lichess.org/) come annotated (directly in the PGN format) with computer analysis of the moves:
```
About 6% of the games include Stockfish analysis evaluations: [%eval 2.35] (235 centipawn advantage), [%eval #-4] (getting mated in 4), always from White's point of view.
The WhiteElo and BlackElo tags contain Glicko2 ratings.
Games contain clock information as PGN %clk comments since April 2017.
Variant games have a Variant tag, e.g., [Variant "Antichess"].
```
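Those `[%eval …]` comments can be pulled out with a regular expression, for example (a sketch assuming the comment format shown above):

```python
import re

EVAL_RE = re.compile(r"\[%eval (#?-?\d+(?:\.\d+)?)\]")

def extract_evals(pgn: str):
    """Return eval annotations: floats for pawn-unit scores (e.g. 2.35,
    a 235 centipawn advantage) and strings like '#-4' for forced mates,
    always from White's point of view."""
    evals = []
    for match in EVAL_RE.finditer(pgn):
        value = match.group(1)
        evals.append(value if value.startswith("#") else float(value))
    return evals

pgn = "1. e4 { [%eval 0.24] } 1... c5 { [%eval #-4] }"
print(extract_evals(pgn))  # [0.24, '#-4']
```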
### Personal and Sensitive Information
The metadata of each PGN contains information about the players (their usernames on lichess), the date and time when the game was played, the strength of the players
(in terms of Elo rating), and a link to the game on the platform.
An example of metadata from one of the games:
```
[Event "Rated Bullet tournament https://lichess.org/tournament/yc1WW2Ox"]
[Site "https://lichess.org/PpwPOZMq"]
[Date "2017.04.01"]
[Round "-"]
[White "Abbot"]
[Black "Costello"]
[Result "0-1"]
[UTCDate "2017.04.01"]
[UTCTime "11:32:01"]
[WhiteElo "2100"]
[BlackElo "2000"]
[WhiteRatingDiff "-4"]
[BlackRatingDiff "+1"]
[WhiteTitle "FM"]
[ECO "B30"]
[Opening "Sicilian Defense: Old Sicilian"]
[TimeControl "300+0"]
[Termination "Time forfeit"]
```
## Additional Information
### Licensing Information
Lichess provides all the data under CC0.
### Citation Information
TO COME.
| [
-0.5396248698234558,
-0.18517526984214783,
0.17451027035713196,
0.343433141708374,
-0.3392447531223297,
0.0281672365963459,
-0.0558418370783329,
-0.4318789839744568,
0.8434594869613647,
0.39513126015663147,
-0.9063585996627808,
-0.9045686721801758,
-0.3685651123523712,
0.01546545047312975,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HydraLM/biology_dataset_alpaca | HydraLM | 2023-07-27T18:43:14Z | 58 | 0 | null | [
"region:us"
] | 2023-07-27T18:43:14Z | 2023-07-27T18:43:04.000Z | 2023-07-27T18:43:04 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 59941674
num_examples: 19999
download_size: 28644935
dataset_size: 59941674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology_dataset_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6336127519607544,
-0.41945284605026245,
0.2724416255950928,
0.24413977563381195,
-0.4047197103500366,
-0.09605198353528976,
0.533585786819458,
-0.3433087468147278,
1.2637110948562622,
0.3692367374897003,
-0.8003025054931641,
-0.7907758355140686,
-0.7918845415115356,
-0.1184200718998909,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateshmbhat/isha-qa-text | nateshmbhat | 2023-07-31T08:18:40Z | 58 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-07-31T08:18:40Z | 2023-07-31T08:18:15.000Z | 2023-07-31T08:18:15 | ---
task_categories:
- text-generation
language:
- en
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jason-lee08/TinyStoriesWithExclamationsSmall | jason-lee08 | 2023-08-20T03:52:44Z | 58 | 0 | null | [
"region:us"
] | 2023-08-20T03:52:44Z | 2023-08-03T01:20:07.000Z | 2023-08-03T01:20:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 23826331
num_examples: 21197
- name: validation
num_bytes: 236180
num_examples: 220
download_size: 8127925
dataset_size: 24062511
---
# Dataset Card for "TinyStoriesWithExclamationsSmall"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5553200840950012,
-0.1484646201133728,
0.3596901595592499,
0.3140837252140045,
-0.21448278427124023,
-0.04245536029338837,
0.13718707859516144,
-0.056115493178367615,
0.7939596772193909,
0.3065355718135834,
-0.8813250064849854,
-0.6939929723739624,
-0.5735636353492737,
-0.12043198198080... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DataProvenanceInitiative/Commercially-Verified-Licenses | DataProvenanceInitiative | 2023-11-03T19:23:40Z | 58 | 0 | null | [
"arxiv:2310.16787",
"region:us"
] | 2023-11-03T19:23:40Z | 2023-09-18T04:31:20.000Z | 2023-09-18T04:31:20 |
# Dataset Card for **Data Provenance Initiative - Commercial-Licenses**
## Dataset Description
- **Homepage:** https://github.com/Data-Provenance-Initiative/Data-Provenance-Collection
- **Repository:** https://github.com/Data-Provenance-Initiative/Data-Provenance-Collection
- **Paper:** https://arxiv.org/abs/2310.16787
- **Point of Contact:** data.provenance.init@gmail.com
- **NOTE:** Licenses for these datasets are "self-reported" and collected by best-effort volunteers on a per dataset basis. Please find more details in the paper linked above.
### Legal Disclaimer / Notice
Collected License Information is **NOT** Legal Advice.
It is important to note we collect self-reported licenses, from the papers and repositories that released these datasets, and categorize them according to our best efforts, as a volunteer research and transparency initiative.
The information provided by any of our works and any outputs of the Data Provenance Initiative do not, and are not intended to, constitute legal advice; instead, all information, content, and materials are for general informational purposes only.
Readers and users should seek their own legal advice from counsel in their relevant jurisdiction.
### Dataset Summary
A wave of recent language models have been powered by large collections of natural language datasets. The sudden race to train models on these disparate collections of incorrectly, ambiguously, or under-documented datasets has left practitioners unsure of the legal and qualitative characteristics of the models they train. To remedy this crisis in data transparency and understanding, in a joint effort between experts in machine learning and the law, we’ve compiled the most detailed and reliable metadata available for data licenses, sources, and provenance, as well as fine-grained characteristics like language, text domains, topics, usage, collection time, and task compositions. Beginning with nearly 40 popular instruction (or “alignment”) tuning collections, we release a suite of open source tools for downloading, filtering, and examining this training data. Our analysis sheds light on the fractured state of data transparency, particularly with data licensing, and we hope our tools will empower more informed and responsible data-centric development of future language models.
### What does **Commercial** mean here?
- `Commercial` includes datasets that are compatible with commercial usage, meaning commercial usage of this dataset is permitted as per its license.
### Constituent Data Collections
- The following table shows each constituent data collection in this dataset, along with the original source from which it is derived.
| # | Collection Name | Description | Source |
| --------------- | --------------- | --------------- | --------------- |
| 1 | Anthropic HH-RLHF | Human preference data about helpfulness and harmlessness & Human-generated and annotated red teaming dialogues. | https://huggingface.co/datasets/Anthropic/hh-rlhf |
| 2 | CommitPackFT | CommitPackFT is a 2GB filtered version of CommitPack to contain only high-quality commit messages that resemble natural language instructions. | https://huggingface.co/datasets/bigcode/commitpackft |
| 3 | Dolly 15k | Databricks Dolly 15k is a dataset containing 15,000 high-quality human-generated prompt / response pairs specifically designed for instruction tuning large language models. | https://huggingface.co/datasets/databricks/databricks-dolly-15k |
| 4 | Flan Collection (Chain-of-Thought) | Chain-of-Thought sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/cot_submix_original |
| 5 | Flan Collection (Dialog) | Dialog sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/dialog_submix_original |
| 6 | Flan Collection (Flan 2021) | Flan 2021 sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/flan2021_submix_original |
| 7 | Flan Collection (P3) | P3 sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/t0_submix_original |
| 8 | Flan Collection (Super-NaturalInstructions) | Super-Natural Instructions in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/niv2_submix_original |
| 9 | Joke Explanation | Corpus for testing whether your LLM can explain the joke well. | https://huggingface.co/datasets/theblackcat102/joke_explaination |
| 10 | OIG | Open Instruction Generalist is a large instruction dataset of medium quality along with a smaller high quality instruction dataset (OIG-small-chip2). | https://huggingface.co/datasets/laion/OIG |
| 11 | Open Assistant | OpenAssistant Conversations (OASST1) is a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. | https://huggingface.co/datasets/OpenAssistant/oasst1 |
| 12 | Open Assistant OctoPack | Filtered version of OpenAssistant Conversations (OASST1) to focus only on high-quality conversation trees as used in OctoPack paper. | https://huggingface.co/datasets/bigcode/oasst-octopack |
| 13 | Tasksource Symbol-Tuning | Tasksource datasets converted for symbol-tuning. | https://github.com/sileod/tasksource |
| 14 | Tasksource Instruct | Tasksource datasets as instructions for instruction-tuning. | https://github.com/sileod/tasksource |
| 15 | xp3x | xP3x is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more. | https://huggingface.co/datasets/Muennighoff/xP3x |
| 16 | StarCoder Self-Instruct | Dataset generated by prompting starcoder to generate new instructions based on some human-written seed instructions. | https://huggingface.co/datasets/codeparrot/self-instruct-starcoder |
### Data Instances
[More Information Needed]
### Data Fields
The following snippet shows the fields in a row in each data collection in this dataset:
```
[
{"from": "user", "text": input_text.strip(), "parent": dset},
{"from": "assistant", "text": target_text.strip(), "parent": 0},
...
]
```
with fields:
- from: indicates the originator of the text in this conversation. This can be either "user" or "assistant", where "assistant" indicates the model, whose text is its response to the user's text.
- text: the text that the originator wants to communicate to the receiver.
- parent: field indicating the parent for tracing the conversation hierarchy.
Each row contains one or more JSON objects representing a user-assistant dialogue with the text messages exchanged between them. You can leverage the `parent` field in each JSON object to follow the tree structure of the interactions.
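As an illustration (not the project's own tooling), the `parent` indices can be used to rebuild a row's conversation tree:

```python
def build_children(messages):
    """Map each message index to the indices of its replies. The root
    message's 'parent' is the source dataset name (a string); every
    other message's 'parent' is the index of the message it answers."""
    children = {}
    for i, msg in enumerate(messages):
        parent = msg["parent"]
        if isinstance(parent, int):
            children.setdefault(parent, []).append(i)
    return children

# Hypothetical row in the shape described above:
row = [
    {"from": "user", "text": "What is 2+2?", "parent": "dolly_15k"},
    {"from": "assistant", "text": "4.", "parent": 0},
    {"from": "user", "text": "And 3+3?", "parent": 1},
    {"from": "assistant", "text": "6.", "parent": 2},
]
print(build_children(row))  # {0: [1], 1: [2], 2: [3]}
```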
### Downloading Dataset
You can load the entire dataset by using the following code:
```python
import os
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("DataProvenanceInitiative/Commercially-Verified-Licenses")
```
You can load a specific dataset subset such as Dolly 15k using the following code:
```python
import os
from datasets import load_dataset
subset = load_dataset(
"DataProvenanceInitiative/Commercially-Verified-Licenses",
split="train",
num_proc = os.cpu_count(),
revision="main",
data_files="data/dolly_15k/*.jsonl"
)
```
### Data Splits
[More Information Needed]
[TODO: Add each dataset and add # of samples in train/dev]
## Dataset Creation
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{longpre2023data,
title={The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing \& Attribution in AI},
author={Longpre, Shayne and Mahari, Robert and Chen, Anthony and Obeng-Marnu, Naana and Sileo, Damien and Brannon, William and Muennighoff, Niklas and Khazam, Nathan and Kabbara, Jad and Perisetla, Kartik and others},
journal={arXiv preprint arXiv:2310.16787},
year={2023}
}
```
### Contributions
Thanks to [data.provenance.init@gmail.com](mailto:data.provenance.init@gmail.com) for adding this dataset. | [
-0.36989256739616394,
-0.6985824704170227,
0.199478879570961,
0.13076648116111755,
-0.04793982580304146,
-0.0021075918339192867,
-0.17958183586597443,
-0.5521989464759827,
0.10297532379627228,
0.7085972428321838,
-0.6469448208808899,
-0.6945362687110901,
-0.40861189365386963,
0.06648860871... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ISCA-IUB/AntisemitismOnTwitter | ISCA-IUB | 2023-09-22T08:39:09Z | 58 | 1 | null | [
"language:en",
"arxiv:2304.14599",
"region:us"
] | 2023-09-22T08:39:09Z | 2023-09-22T08:18:44.000Z | 2023-09-22T08:18:44 | ---
language:
- en
---
# Dataset Card for Dataset on Antisemitism on Twitter/X
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The ISCA project has compiled this dataset using an annotation portal, which was used to label tweets as either antisemitic or non-antisemitic, among other labels. Please note that the annotation was done with live data, including images and the context, such as threads. The original data was sourced from annotationportal.com.
### Languages
English
## Dataset Structure
‘TweetID’: Represents the tweet ID.
‘Username’: Represents the username of the account that published the tweet.
‘Text’: Represents the full text of the tweet (not pre-processed).
‘CreateDate’: Represents the date the tweet was created.
‘Biased’: Represents the label assigned by our annotators, indicating whether the tweet is antisemitic or non-antisemitic.
‘Keyword’: Represents the keyword that was used in the query. The keyword can be in the text, including mentioned names, or the username.
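For example, the class balance (18% antisemitic, per the Dataset Creation section) could be recomputed from the `Biased` column along these lines (a sketch; the exact label values in the export are an assumption):

```python
import csv
import io
from collections import Counter

def label_share(csv_text: str, label_col: str = "Biased"):
    """Share of each label value in a CSV export of the dataset."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    counts = Counter(r[label_col] for r in rows)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Tiny synthetic example in the same shape as the fields above:
sample = ("TweetID,Biased\n"
          "1,antisemitic\n"
          "2,non-antisemitic\n"
          "3,non-antisemitic\n"
          "4,non-antisemitic\n")
print(label_share(sample))  # {'antisemitic': 0.25, 'non-antisemitic': 0.75}
```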
## Dataset Creation
This dataset contains 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021. The dataset is drawn from representative samples during this period with relevant keywords. 1,250 tweets (18%) meet the IHRA definition of antisemitic messages.
The dataset has been compiled within the ISCA project using an annotation portal to label tweets as either antisemitic or non-antisemitic. The original data was sourced from annotationportal.com.
### Annotations
#### Annotation process
We annotated the tweets, considering the text, images, videos, and links, in their “natural” context, including threads. We used a detailed annotation guideline, based on the IHRA Definition, which has been endorsed and recommended by more than 30 governments and international organizations and is frequently used to monitor and record antisemitic incidents. We divided the definition into 12 paragraphs. Each of the paragraphs addresses different forms and tropes of antisemitism. We created an online annotation tool (https://annotationportal.com) to make labeling easier, more consistent, and less prone to errors, including in the process of recording the annotations. The portal displays the tweet and a clickable annotation form, see Figure 1. It automatically saves each annotation, including the time spent labeling each tweet.
The Annotation Portal retrieves live tweets by referencing their ID number. Our annotators first look at the tweet, and if they are unsure of the meaning, they are prompted to look at the entire thread, replies, likes, links, and comments. A click on the visualized tweet opens a new tab in the browser, displaying the message on the Twitter page in its “natural” environment.
The portal is designed to help annotators consistently label messages as antisemitic or not according to the IHRA definition. After verifying that the message is still live and in English, they select from a drop-down menu where they classify the message as "confident antisemitic," "probably antisemitic," "probably not antisemitic," "confident not antisemitic," or "don’t know." The annotation guideline, including the definition, is linked in a PDF document.
#### Who are the annotators?
All annotators are familiar with the definition and have been trained on test samples. They have also taken at least one academic course on antisemitism or have done research on antisemitism. We consider them to be expert annotators. Eight such expert annotators of different religions and genders labeled the 18 samples, two for each sample in alternating configurations.
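With two annotators labeling each sample, inter-annotator agreement can be summarized with Cohen's kappa, which corrects raw agreement for chance. The sketch below uses toy labels (1 = antisemitic, 0 = not), not the project's actual annotations:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' labels over the same items."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items both annotators labeled the same.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each annotator's label marginals.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example only — illustrative labels, not data from this dataset.
ann1 = [1, 0, 0, 1, 0, 0, 1, 0]
ann2 = [1, 0, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(ann1, ann2), 2))  # → 0.71
```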
## Considerations for Using the Data
### Social Impact of Dataset
One of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets.
We focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords.
Our annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.
## Additional Information
### Dataset Curators
Gunther Jikeli, Sameer Karali, Daniel Miehling, and Katharina Soemer
### Citation Information
Jikeli, Gunther, Sameer Karali, Daniel Miehling, and Katharina Soemer (2023): Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets. https://arxiv.org/abs/2304.14599
| [
-0.5746423006057739,
-0.8753219842910767,
-0.013971633277833462,
0.028465399518609047,
-0.7076320648193359,
0.3589010238647461,
-0.16133621335029602,
-0.6740016937255859,
0.820165753364563,
0.2593262791633606,
-0.3428822457790375,
-0.721634566783905,
-0.9670082926750183,
-0.087508633732795... | null | null | null | null | null | null | null | null | null | null | null | null | null |