text-classification bool 2 classes | text stringlengths 0 664k |
|---|---|
true | # PARARULE-Plus
This branch includes the PARARULE-Plus datasets at Depth=2, Depth=3, Depth=4 and Depth=5. PARARULE-Plus is a deep multi-step reasoning dataset over natural language, built as an improvement on the PARARULE dataset (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples: we add more training samples for cases where the depth is greater than or equal to two to explore whether Transformers have reasoning ability. PARARULE-Plus combines two types of entities, animals and people, with corresponding relationships and attributes. From depth 2 to depth 5, there are around 100,000 samples at each depth, and nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to make it easier to train models.
## How to load the dataset?
```python
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus")
```
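The `1`/`0` label encoding described above can be reproduced with a small helper (a sketch; it assumes the original labels are the strings `true`/`false` in any casing):

```python
# Sketch of the label pre-processing described above: `true` -> 1, `false` -> 0
def encode_label(answer: str) -> int:
    mapping = {"true": 1, "false": 0}
    return mapping[answer.strip().lower()]

print(encode_label("true"))   # 1
print(encode_label("False"))  # 0
```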
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert): you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` |
true | # Dataset Card for "natural-instruction-195"
## Dataset Description
NaturalInstruction task 195.
In this task, you are given text from tweets. Your task is to classify the given tweet text into two categories, 1) positive and 2) negative, based on its content.
## Data Fields
- `text`: Tweet text.
- `label`: Sentiment of the text, either "negative" (0) or "positive" (1).
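A small lookup maps the integer labels back to the sentiment strings (a sketch based on the field description above):

```python
# Label convention from the Data Fields section: 0 = negative, 1 = positive
id2label = {0: "negative", 1: "positive"}
label2id = {name: idx for idx, name in id2label.items()}

print(id2label[1])           # positive
print(label2id["negative"])  # 0
```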
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true | ```bib
@misc{liu-etal-2023-afraid,
title = "We're Afraid Language Models Aren't Modeling Ambiguity",
author = "Alisa Liu and Zhaofeng Wu and Julian Michael and Alane Suhr and Peter West and Alexander Koller and Swabha Swayamdipta and Noah A. Smith and Yejin Choi",
month = apr,
year = "2023",
url = "https://arxiv.org/abs/2304.14399",
}
``` |
true | Project: https://i2d2.allen.ai/
Paper: https://arxiv.org/abs/2212.09246
```
@inproceedings{Bhagavatula2022GenGen,
title={Generating Generics: Knowledge Induction with NeuroLogic and Self-Imitation},
author={Chandra Bhagavatula and Jena D. Hwang and Doug Downey and Ronan Le Bras and Ximing Lu and Lianhui Qin and Keisuke Sakaguchi and Swabha Swayamdipta and Peter West and Yejin Choi},
booktitle={arXiv},
year={2022}
}
``` |
true | https://github.com/dwslab/StArCon
```
@inproceedings{kobbe-etal-2020-unsupervised,
title = "Unsupervised stance detection for arguments from consequences",
author = "Kobbe, Jonathan and
Hulpu{\textcommabelow{s}}, Ioana and
Stuckenschmidt, Heiner",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.4",
doi = "10.18653/v1/2020.emnlp-main.4",
pages = "50--60",
abstract = "Social media platforms have become an essential venue for online deliberation where users discuss arguments, debate, and form opinions. In this paper, we propose an unsupervised method to detect the stance of argumentative claims with respect to a topic. Most related work focuses on topic-specific supervised models that need to be trained for every emergent debate topic. To address this limitation, we propose a topic independent approach that focuses on a frequently encountered class of arguments, specifically, on arguments from consequences. We do this by extracting the effects that claims refer to, and proposing a means for inferring if the effect is a good or bad consequence. Our experiments provide promising results that are comparable to, and in particular regards even outperform BERT. Furthermore, we publish a novel dataset of arguments relating to consequences, annotated with Amazon Mechanical Turk.",
}
``` |
true | https://github.com/csitfun/ConTRoL-dataset
```
@article{Liu_Cui_Liu_Zhang_2021,
title={Natural Language Inference in Context - Investigating Contextual Reasoning over Long Texts},
volume={35},
url={https://ojs.aaai.org/index.php/AAAI/article/view/17580},
DOI={10.1609/aaai.v35i15.17580},
number={15},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
author={Liu, Hanmeng and Cui, Leyang and Liu, Jian and Zhang, Yue},
year={2021},
month={May},
pages={13388-13396}
}
``` |
true | https://github.com/allenai/aristo-leaderboard/tree/master/tracie/data
```
@inproceedings{ZRNKSR21,
author = {Ben Zhou and Kyle Richardson and Qiang Ning and Tushar Khot and Ashish Sabharwal and Dan Roth},
title = {Temporal Reasoning on Implicit Events from Distant Supervision},
booktitle = {NAACL},
year = {2021},
}
``` |
false | # Dataset Card for "avatar-the-last-airbender-tagged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # A collection of 12 million French-only instructions deduplicated from various sources
Sources:
- clips/mqa-fr-faq
- multilingual-wikihow-qa-16k
- MBZUAI/Bactrian-X
- argilla/databricks-dolly-15k-curated-multilingual
- innermost47/alpaca-fr
- etalab-ia/piaf |
false |
# Dataset Card for rottentoCorpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, and they may contain slang words, emoticons and typos. The conversations were then annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation, written in the third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 14732
- val: 818
- test: 819
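As a consistency check, the split sizes above sum to the 16369 conversations reported in the Data Instances section:

```python
# Split sizes from the Data Splits section
splits = {"train": 14732, "val": 818, "test": 819}
total = sum(splits.values())
print(total)  # 16369
```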
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset. |
true |
[tasksource](https://github.com/sileod/tasksource) classification tasks recasted as natural language inference.
This dataset is intended to improve label understanding in [zero-shot classification HF pipelines](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.ZeroShotClassificationPipeline).
Inputs that are text pairs are separated by a newline (\n).
```python
from transformers import pipeline
classifier = pipeline(model="sileod/deberta-v3-base-tasksource-nli")
classifier(
"I have a problem with my iphone that needs to be resolved asap!!",
candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
```
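For text-pair inputs, the two texts are joined with a newline before being passed to the pipeline, as noted above (the premise/hypothesis sentences here are made-up examples):

```python
# Build a text-pair input by joining the two texts with a newline
premise = "A man is playing a guitar."
hypothesis = "Someone is making music."
pair_input = f"{premise}\n{hypothesis}"
# classifier(pair_input, candidate_labels=[...]) would then score the pair
print(repr(pair_input))
```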
[deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) now includes `label-nli` in its training mix (a relatively small portion, to keep the model general). Note that NLI models work for label-like zero-shot classification even without specific supervision ([Yin et al., 2019](https://aclanthology.org/D19-1404.pdf)).
```
@article{sileo2023tasksource,
title={tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation},
author={Sileo, Damien},
year={2023}
}
``` |
false |
Retrieving the 50th example from the train set:
```
> print(dataset['train']['sentence1'][50])
Muž hrá na gitare.
> print(dataset['train']['sentence2'][50])
Chlapec hrá na gitare.
> print(dataset['train']['similarity_score'][50])
3.200000047683716
```
For score explanation see [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt).
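Assuming the same 0-5 score convention as stsb_multi_mt, a common pre-processing step is to normalize the similarity score to the [0, 1] range before training a regression head:

```python
# Normalize a 0-5 STS similarity score to [0, 1]
# (assumption: the stsb_multi_mt scoring convention applies here)
def normalize_score(score: float) -> float:
    return score / 5.0

print(normalize_score(3.200000047683716))  # ≈ 0.64
```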
|
false |
# Mario Maker 2 levels
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 levels dataset consists of 26.6 million levels from Nintendo's online service totaling around 100GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 levels dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
Level data is a binary blob describing the actual level and is equivalent to the level format Nintendo uses in-game. It is gzip-compressed and must be decompressed before it can be read. To parse it into an object, use the provided `level.ksy` Kaitai Struct file and install the Kaitai Struct runtime:
```python
from datasets import load_dataset
from kaitaistruct import KaitaiStream
from io import BytesIO
from level import Level
import zlib
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
level_data = next(iter(ds))["level_data"]
level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data))))
# NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct)
# must iterate by object_count or null objects will be included
for i in range(level.overworld.object_count):
obj = level.overworld.objects[i]
print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id))
#OUTPUT:
X: 1200 Y: 400 ID: ObjId.block
X: 1360 Y: 400 ID: ObjId.block
X: 1360 Y: 240 ID: ObjId.block
X: 1520 Y: 240 ID: ObjId.block
X: 1680 Y: 240 ID: ObjId.block
X: 1680 Y: 400 ID: ObjId.block
X: 1840 Y: 400 ID: ObjId.block
X: 2000 Y: 400 ID: ObjId.block
X: 2160 Y: 400 ID: ObjId.block
X: 2320 Y: 400 ID: ObjId.block
X: 2480 Y: 560 ID: ObjId.block
X: 2480 Y: 720 ID: ObjId.block
X: 2480 Y: 880 ID: ObjId.block
X: 2160 Y: 880 ID: ObjId.block
```
Rendering the level data into an image can be done using [Toost](https://github.com/TheGreatRambler/toost) if desired.
You can also download the full dataset. Note that this will download ~100GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|Data IDs are unique identifiers, gaps in the table are due to levels deleted by users or Nintendo|
|name|string|Course name|
|description|string|Course description|
|uploaded|int|UTC timestamp for when the level was uploaded|
|created|int|Local timestamp for when the level was created|
|gamestyle|int|Gamestyle, enum below|
|theme|int|Theme, enum below|
|difficulty|int|Difficulty, enum below|
|tag1|int|The first tag, if it exists, enum below|
|tag2|int|The second tag, if it exists, enum below|
|game_version|int|The version of the game this level was made on|
|world_record|int|The world record in milliseconds|
|upload_time|int|The upload time in milliseconds|
|upload_attempts|int|The number of attempts it took the uploader to upload|
|num_comments|int|Number of comments, may not reflect the archived comments if there were more than 1000 comments|
|clear_condition|int|Clear condition, enum below|
|clear_condition_magnitude|int|If applicable, the magnitude of the clear condition|
|timer|int|The timer of the level|
|autoscroll_speed|int|A unit of how fast the configured autoscroll speed is for the level|
|clears|int|Course clears|
|attempts|int|Course attempts|
|clear_rate|float|Course clear rate as a percentage between 0 and 100|
|plays|int|Course plays, or "footprints"|
|versus_matches|int|Course versus matches|
|coop_matches|int|Course coop matches|
|likes|int|Course likes|
|boos|int|Course boos|
|unique_players_and_versus|int|All unique players that have ever played this level, including the number of versus matches|
|weekly_likes|int|The weekly likes on this course|
|weekly_plays|int|The weekly plays on this course|
|uploader_pid|string|The player ID of the uploader|
|first_completer_pid|string|The player ID of the user who first cleared this course|
|record_holder_pid|string|The player ID of the user who held the world record at time of archival |
|level_data|bytes|The GZIP compressed decrypted level data, kaitai struct file is provided for reading|
|unk2|int|Unknown|
|unk3|bytes|Unknown|
|unk9|int|Unknown|
|unk10|int|Unknown|
|unk11|int|Unknown|
|unk12|int|Unknown|
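The `clear_rate` field is consistent with `clears / attempts` expressed as a percentage; using the example record above:

```python
# Recompute clear_rate from the example record's clears and attempts
clears, attempts = 1646, 3168
clear_rate = clears / attempts * 100
print(clear_rate)  # ≈ 51.957
```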
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. This can be used to convert back to their string equivalents:
```python
GameStyles = {
0: "SMB1",
1: "SMB3",
2: "SMW",
3: "NSMBU",
4: "SM3DW"
}
Difficulties = {
0: "Easy",
1: "Normal",
2: "Expert",
3: "Super expert"
}
CourseThemes = {
0: "Overworld",
1: "Underground",
2: "Castle",
3: "Airship",
4: "Underwater",
5: "Ghost house",
6: "Snow",
7: "Desert",
8: "Sky",
9: "Forest"
}
TagNames = {
0: "None",
1: "Standard",
2: "Puzzle solving",
3: "Speedrun",
4: "Autoscroll",
5: "Auto mario",
6: "Short and sweet",
7: "Multiplayer versus",
8: "Themed",
9: "Music",
10: "Art",
11: "Technical",
12: "Shooter",
13: "Boss battle",
14: "Single player",
15: "Link"
}
ClearConditions = {
137525990: "Reach the goal without landing after leaving the ground.",
199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).",
272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).",
375673178: "Reach the goal without taking damage.",
426197923: "Reach the goal as Boomerang Mario.",
436833616: "Reach the goal while wearing a Shoe.",
713979835: "Reach the goal as Fire Mario.",
744927294: "Reach the goal as Frog Mario.",
751004331: "Reach the goal after defeating at least/all (n) Larry(s).",
900050759: "Reach the goal as Raccoon Mario.",
947659466: "Reach the goal after defeating at least/all (n) Blooper(s).",
976173462: "Reach the goal as Propeller Mario.",
994686866: "Reach the goal while wearing a Propeller Box.",
998904081: "Reach the goal after defeating at least/all (n) Spike(s).",
1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).",
1051433633: "Reach the goal while holding a Koopa Shell.",
1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).",
1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).",
1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).",
1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.",
1151250770: "Reach the goal while wearing a Goomba Mask.",
1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.",
1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).",
1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).",
1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.",
1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).",
1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).",
1283945123: "Reach the goal on a Lakitu's Cloud.",
1344044032: "Reach the goal after defeating at least/all (n) Boo(s).",
1425973877: "Reach the goal after defeating at least/all (n) Roy(s).",
1429902736: "Reach the goal while holding a Trampoline.",
1431944825: "Reach the goal after defeating at least/all (n) Morton(s).",
1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).",
1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).",
1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).",
1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).",
1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.",
1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.",
1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).",
1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).",
1780278293: "Reach the goal as Superball Mario.",
1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).",
1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).",
2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).",
2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).",
2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).",
2076496776: "Reach the goal while wearing a Bullet Bill Mask.",
2089161429: "Reach the goal as Big Mario.",
2111528319: "Reach the goal as Cat Mario.",
2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).",
2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).",
2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).",
2549654281: "Reach the goal while wearing a Dry Bones Shell.",
2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).",
2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).",
2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).",
2855236681: "Reach the goal as Flying Squirrel Mario.",
3036298571: "Reach the goal as Buzzy Mario.",
3074433106: "Reach the goal as Builder Mario.",
3146932243: "Reach the goal as Cape Mario.",
3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).",
3206222275: "Reach the goal while wearing a Cannon Box.",
3314955857: "Reach the goal as Link.",
3342591980: "Reach the goal while you have Super Star invincibility.",
3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).",
3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).",
3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).",
3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).",
3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).",
3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).",
3466227835: "Reach the goal after defeating at least/all (n) Muncher(s).",
3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).",
3513732174: "Reach the goal as SMB2 Mario.",
3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.",
3725246406: "Reach the goal as Spiny Mario.",
3730243509: "Reach the goal in a Koopa Troopa Car.",
3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).",
3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.",
3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.",
3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).",
3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).",
3874680510: "Reach the goal after breaking at least/all (n) Crates(s).",
3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).",
3977257962: "Reach the goal as Super Mario.",
4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).",
4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).",
4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).",
4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).",
4153835197: "Reach the goal as Balloon Mario.",
4172105156: "Reach the goal while wearing a Red POW Box.",
4209535561: "Reach the Goal while riding Yoshi.",
4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).",
4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)."
}
```
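The enum tables can be applied directly to a level record's integer fields; a minimal sketch (repeating only a subset of the mappings so the snippet is self-contained):

```python
# Subset of the enum tables above, used to decode a record's integer fields
GameStyles = {0: "SMB1", 1: "SMB3", 2: "SMW", 3: "NSMBU", 4: "SM3DW"}
Difficulties = {0: "Easy", 1: "Normal", 2: "Expert", 3: "Super expert"}

# Integer fields taken from the example record earlier in the card
level = {"gamestyle": 4, "difficulty": 0}
print(GameStyles[level["gamestyle"]])     # SM3DW
print(Difficulties[level["difficulty"]])  # Easy
```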
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of levels from many different Mario Maker 2 players globally and as such their titles and descriptions could contain harmful language. Harmful depictions could also be present in the level data, should you choose to render it.
|
false |
# Dataset Card for ASQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/language/tree/master/language/asqa
- **Paper:** https://arxiv.org/abs/2204.06092
- **Leaderboard:** https://ambigqa.github.io/asqa_leaderboard.html
### Dataset Summary
ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Unlike previous long-form answer datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer is evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments.
### Supported Tasks and Leaderboards
Long-form Question Answering. [Leaderboard](https://ambigqa.github.io/asqa_leaderboard.html)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"ambiguous_question": "Where does the civil liberties act place the blame for the internment of u.s. citizens?",
"qa_pairs": [
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by apologizing on behalf of them?",
"short_answers": [
"the people of the United States"
],
"wikipage": None
},
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by making them pay reparations?",
"short_answers": [
"United States government"
],
"wikipage": None
}
],
"wikipages": [
{
"title": "Civil Liberties Act of 1988",
"url": "https://en.wikipedia.org/wiki/Civil%20Liberties%20Act%20of%201988"
}
],
"annotations": [
{
"knowledge": [
{
"content": "The Civil Liberties Act of 1988 (Pub.L. 100–383, title I, August 10, 1988, 102 Stat. 904, 50a U.S.C. § 1989b et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.",
"wikipage": "Civil Liberties Act of 1988"
}
],
"long_answer": "The Civil Liberties Act of 1988 is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II. In the act, the blame for the internment of U.S. citizens was placed on the people of the United States, by apologizing on behalf of them. Furthermore, the blame for the internment was placed on the United States government, by making them pay reparations."
}
],
"sample_id": -4557617869928758000
}
```
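Since the disambiguated QA pairs should be answerable from the long-form answer, a crude string-match coverage check can be sketched as follows (the instance here is a trimmed-down stand-in for a real example, not the official QA-accuracy metric):

```python
# Check that each QA pair's short answers appear in the long-form answer
instance = {
    "qa_pairs": [
        {"short_answers": ["the people of the United States"]},
        {"short_answers": ["United States government"]},
    ],
    "annotations": [
        {"long_answer": "The blame was placed on the people of the United States "
                        "and on the United States government."}
    ],
}
long_answer = instance["annotations"][0]["long_answer"].lower()
covered = [
    any(ans.lower() in long_answer for ans in qa["short_answers"])
    for qa in instance["qa_pairs"]
]
print(covered)  # [True, True]
```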
### Data Fields
- `ambiguous_question`: ambiguous question from AmbigQA.
- `annotations`: long-form answers to the ambiguous question constructed by ASQA annotators.
- `annotations/knowledge`: list of additional knowledge pieces.
- `annotations/knowledge/content`: a passage from Wikipedia.
- `annotations/knowledge/wikipage`: title of the Wikipedia page the passage was taken from.
- `annotations/long_answer`: the long-form answer written by the annotator.
- `qa_pairs`: Q&A pairs from AmbigQA which are used for disambiguation.
- `qa_pairs/context`: additional context provided.
- `qa_pairs/question`: disambiguated question from AmbigQA.
- `qa_pairs/short_answers`: list of short answers from AmbigQA.
- `qa_pairs/wikipage`: title of the Wikipedia page the additional context was taken from.
- `sample_id`: the unique id of the sample.
- `wikipages`: list of Wikipedia pages visited by AmbigQA annotators.
- `wikipages/title`: title of the Wikipedia page.
- `wikipages/url`: link to the Wikipedia page.
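As a minimal sketch of how these fields fit together (field names taken from the instance above, sample contents abbreviated), the disambiguated question/answer pairs for one ambiguous question can be pulled out like this:

```python
# Sketch: extract the disambiguated QA pairs from one sample.
# The dict mirrors the instance shown above; only the fields used here
# are reproduced, and the question texts are abbreviated.
sample = {
    "ambiguous_question": "Where does the civil liberties act place the blame "
                          "for the internment of u.s. citizens?",
    "qa_pairs": [
        {"question": "... by apologizing on behalf of them?",
         "short_answers": ["the people of the United States"]},
        {"question": "... by making them pay reparations?",
         "short_answers": ["United States government"]},
    ],
}

def disambiguations(sample):
    """Return (question, short_answers) tuples for one ambiguous question."""
    return [(qa["question"], qa["short_answers"]) for qa in sample["qa_pairs"]]

pairs = disambiguations(sample)
print(len(pairs))  # 2
```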
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 4353 |
| Dev | 948 |
## Additional Information
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. |
false |
# Dataset Card for samromur_children
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Samrómur Children Icelandic Speech 1.0](https://samromur.is/)
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2022S11)
- **Paper:** [Samrómur Children: An Icelandic Speech Corpus](https://aclanthology.org/2022.lrec-1.105.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org), [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
The Samrómur Children Corpus consists of audio recordings and metadata files containing prompts read by the participants. It contains more than 137000 validated speech-recordings uttered by Icelandic children.
The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021).
### Example Usage
The Samrómur Children Corpus is divided into three splits: train, validation and test. To load the complete dataset:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children")
```
To load a specific split (for example, the validation split), do:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
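Word error rate is the word-level edit distance between the reference transcription and the hypothesis, normalized by the reference length. A self-contained sketch of the metric, not tied to any particular ASR toolkit:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One word dropped out of a four-word reference -> WER of 0.25.
print(wer("hin unga rússneska bylting", "hin unga bylting"))  # 0.25
```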
### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
## Dataset Structure
### Data Instances
```python
{
'audio_id': '015652-0717240',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/2c6b0d82de2ef0dc0879732f726809cccbe6060664966099f43276e8c94b03f2/test/015652/015652-0717240.flac',
'array': array([ 0. , 0. , 0. , ..., -0.00311279,
-0.0007019 , 0.00128174], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': '015652',
'gender': 'female',
'age': '11',
'duration': 4.179999828338623,
'normalized_text': 'eiginlega var hann hin unga rússneska bylting lifandi komin'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - age of the speaker; participants are between 4 and 17 years old.
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
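As a quick sanity check, the `duration` field is consistent with the decoded audio: the number of samples divided by the sampling rate. A minimal sketch (the sample count is an assumption chosen to match the roughly 4.18-second instance above):

```python
# Sketch: recompute an utterance's duration from the decoded audio.
# num_samples is an assumed value matching the ~4.18 s instance above.
num_samples = 66_880
sampling_rate = 16_000  # the corpus audio is down-sampled to 16 kHz

duration = num_samples / sampling_rate
print(duration)  # 4.18
```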
### Data Splits
The corpus is split into train, dev, and test portions. The lengths of the portions are: train = 127h25m, test = 1h50m, dev = 1h50m.
To load a specific portion, please see the section "Example Usage" above.
## Dataset Creation
### Curation Rationale
In the field of Automatic Speech Recognition (ASR), it is a known fact that children's speech is particularly hard to recognise due to its high variability, produced by developmental changes in children's anatomy and speech production skills.
For this reason, the selection criteria for the train/dev/test portions have to take the children's age into account. Nevertheless, Samrómur Children is an unbalanced corpus in terms of the gender and age of the speakers: it contains, for example, a total of 1667 female speakers (73h38m) versus 1412 male speakers (52h26m).
These imbalances constrain the types of experiments that can be performed with the corpus. For example, an equal number of female and male speakers across certain age ranges is impossible. So, if one cannot have a perfectly balanced corpus in the training set, one can at least have it in the test portion.
The test portion of Samrómur Children was meticulously selected to cover ages between 6 and 16 years for both female and male speakers. Each of these age ranges, in both genders, has a total duration of 5 minutes.
The development portion of the corpus contains only speakers with unknown gender information. Both the test and dev sets have a total duration of 1h50m each.
In order to perform fairer experiments, no speakers are shared between the train and test sets. There is, however, a single speaker shared between the train and development sets, identifiable by the speaker ID 010363; no audio files are shared between these two sets.
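The selection idea described above can be sketched as a greedy fill: assign utterances to each (age, gender) cell of the test set until the cell holds roughly 5 minutes of audio. The records and the per-cell cap below are illustrative, not the actual selection script:

```python
from collections import defaultdict

CAP_SECONDS = 5 * 60  # ~5 minutes of audio per (age, gender) cell

def pick_test_set(utterances, cap=CAP_SECONDS):
    """Greedily assign utterances to test cells until each cell is full.

    `utterances` is an iterable of dicts with 'age', 'gender', 'duration'.
    Returns the chosen utterances; everything else would stay in train.
    """
    filled = defaultdict(float)  # (age, gender) -> seconds accumulated
    chosen = []
    for utt in utterances:
        cell = (utt["age"], utt["gender"])
        if filled[cell] + utt["duration"] <= cap:
            filled[cell] += utt["duration"]
            chosen.append(utt)
    return chosen

# Toy data: the female cell has more audio than fits under the cap.
utts = [{"age": 6, "gender": "female", "duration": 200.0} for _ in range(3)]
utts += [{"age": 6, "gender": "male", "duration": 100.0} for _ in range(2)]
test = pick_test_set(utts)
print(sum(u["duration"] for u in test))  # 400.0
```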
### Source Data
#### Initial Data Collection and Normalization
The data was collected using the website https://samromur.is, code of which is available at https://github.com/cadia-lvl/samromur. The age range selected for this corpus is between 4 and 17 years.
The original audio was collected at a 44.1 kHz or 48 kHz sampling rate as *.wav files, which were down-sampled to 16 kHz and converted to *.flac. Each recording contains one sentence read from a script. The script contains 85,080 unique sentences and 90,838 unique tokens.
There was no identifier other than the session ID, which is used as the speaker ID. The corpus is distributed with a metadata file containing detailed information on each utterance and speaker. The metadata file is encoded as UTF-8 Unicode.
The prompts were gathered from a variety of sources, mainly from The Icelandic Gigaword Corpus, which is available at http://clarin.is/en/resources/gigaword. The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
### Annotations
#### Annotation process
Prompts were pulled from these corpora if they met the criteria of having only letters which are present in the Icelandic alphabet, and if they are listed in the [DIM: Database Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).
There are also synthesised prompts consisting of a name followed by a question or a demand, in order to simulate a dialogue with a smart-device.
#### Who are the annotators?
The content of the audio files was manually verified against the prompts by one or more listeners (mainly summer students).
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This is the first ASR corpus of Icelandic children.
### Discussion of Biases
* The utterances were recorded by a smartphone or the web app.
* Participants self-reported their age group, gender, and native language.
* Participants are aged between 4 to 17 years.
* The corpus contains 137597 utterances from 3175 speakers, totalling 131 hours.
* The amount of data from female speakers is 73h38m, from male speakers 52h26m, and from speakers with unknown gender information 05h02m.
* There are 1667 female speakers, 1412 male speakers, and 96 speakers with unknown gender information.
* There are 78993 audio files from female speakers, 53927 from male speakers, and 4677 from speakers with unknown gender information.
### Other Known Limitations
"Samrómur Children: Icelandic Speech 21.09" by the Language and Voice Laboratory (LVL) at the Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021). The corpus was curated by Carlos Daniel Hernández Mena in 2021.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{menasamromurchildren2021,
title={Samrómur Children Icelandic Speech 1.0},
ldc_catalog_no={LDC2022S11},
DOI={https://doi.org/10.35111/frrj-qd60},
author={Hernández Mena, Carlos Daniel and Borsky, Michal and Mollberg, David Erik and Guðmundsson, Smári Freyr and Hedström, Staffan and Pálsson, Ragnar and Jónsson, Ólafur Helgi and Þorsteinsdóttir, Sunneva and Guðmundsdóttir, Jóhanna Vigdís and Magnúsdóttir, Eydís Huld and Þórhallsdóttir, Ragnheiður and Guðnason, Jón},
  publisher={Reykjavík University},
journal={Linguistic Data Consortium, Philadelphia},
year={2019},
url={https://catalog.ldc.upenn.edu/LDC2022S11},
}
```
### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
The verification for the dataset was funded by the Icelandic Directorate of Labour's Student Summer Job Program in 2020 and 2021.
Special thanks to the summer students for all their hard work.
|
false |
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "1863462668476003678",
"question": "Parmi les propositions suivantes, laquelle (lesquelles) est (sont) exacte(s) ? Les chylomicrons plasmatiques :",
"answers": {
"a": "Sont plus riches en cholestérol estérifié qu'en triglycérides",
"b": "Sont synthétisés par le foie",
"c": "Contiennent de l'apolipoprotéine B48",
"d": "Contiennent de l'apolipoprotéine E",
"e": "Sont transformés par action de la lipoprotéine lipase"
},
"correct_answers": [
"c",
"d",
"e"
],
"subject_name": "pharmacie",
"type": "multiple"
}
```
### Data Fields
- `id` : a string question identifier for each example
- `question` : question text (a string)
- `answers` : a dictionary of the five candidate answers, keyed `a` to `e`
- `correct_answers` : the list of correct option keys, e.g. `["c", "d", "e"]`
- `subject_name` : the exam subject the question was taken from (here, `pharmacie`)
- `type` ({"single", "multiple"}): question type.
  - "single": single-answer question, where exactly one option is correct.
  - "multiple": multiple-answer question, where several options are correct.
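With several correct options per question, exact match alone is a harsh criterion; a per-option overlap score (often called the Hamming score in multi-label evaluation) is a common complement. A minimal sketch of both, not the official evaluation script:

```python
def exact_match(pred, gold):
    """1.0 if the predicted option set equals the gold set exactly."""
    return float(set(pred) == set(gold))

def hamming_score(pred, gold):
    """|intersection| / |union| of the predicted and gold option sets."""
    pred, gold = set(pred), set(gold)
    return len(pred & gold) / len(pred | gold) if pred | gold else 1.0

gold = ["c", "d", "e"]            # from the instance above
pred = ["c", "d"]                 # hypothetical model output
print(exact_match(pred, gold))    # 0.0
print(hamming_score(pred, gold))  # 2/3, i.e. ~0.667
```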
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2171 | 312 | 622 | 3,105 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is of 13k words, of which 3.8k are estimated medical domain-specific words (i.e. a word related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17 % of the words) and 2 in each answer (36 % of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this dataset useful in your research, please consider citing the dataset paper:
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
|
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** : https://explainthejoke.com/
### Dataset Summary
Corpus for testing whether your LLM can explain a joke well. This is a rather small dataset; if someone can point to a larger one, that would be very nice.
### Languages
English
## Dataset Structure
### Data Fields
* url : link to the explanation
* joke : the original joke
* explaination : the explanation of the joke
### Data Splits
Since it's so small, there are no splits, just like gsm8k. |
false | # Dataset Card for "oig_small_chip2_python"
### Dataset Summary
From [LAION's Open Instruction Generalist (OIG) dataset](https://huggingface.co/datasets/laion/OIG), we use a 4775-prompt segment pertaining to Python code generation. OIG text elements are formatted as dialogue excerpts between a "human" and "bot" agent. The code generation prompt is parsed from the initial "human" agent's statement and the resultant response from the "bot" agent's statement. We then reformat the text/response pairs according to the format of the original Alpaca dataset; that is, instruction/input/output triplets. In cases where the instruction field does not specify the code language, we provide "Write the code in Python" in the input field. Otherwise, the input field is left blank.
The OIG dataset was prepared by LAION, and released under the Apache 2.0 license.
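The reformatting described above can be sketched as follows. The `<human>:`/`<bot>:` markers and the language check are assumptions about the raw OIG text, not a verified parser:

```python
def to_alpaca(dialogue: str) -> dict:
    """Convert one '<human>: ... <bot>: ...' excerpt into an
    instruction/input/output triplet, as described above.
    The tag format is an assumption about the raw OIG text."""
    human, _, bot = dialogue.partition("<bot>:")
    instruction = human.replace("<human>:", "").strip()
    # If the instruction never names the language, steer it via `input`.
    inp = "" if "python" in instruction.lower() else "Write the code in Python"
    return {"instruction": instruction, "input": inp, "output": bot.strip()}

ex = to_alpaca("<human>: Reverse a list. <bot>: items[::-1]")
print(ex["input"])  # Write the code in Python
```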
Numbers:
- **Prompts**: 4775
- **Tokens**: 578083 using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer (counting instruction+input+output) |
false |
# SummComparer - v0.1 version

> Comparative analysis of summarization models on a variety of everyday documents
<a href="https://colab.research.google.com/gist/pszemraj/915cc610a37ffce963993fd005cf6154/summcomparer-gauntlet-v0p1-basic-eda.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Dataset host/upload for [SummComparer](https://github.com/pszemraj/SummComparer). This is just a hosting page, check the repo for the latest info.
- This is a work in progress and will be updated over time.
- PRs/discussions **on this card** are disabled, but discussions/ideas/analysis etc are welcome, just post in the [github repo discussions](https://github.com/pszemraj/SummComparer/discussions) so things are all in one place.
- Please note that this is a dataset intended **for analyzing the summary quality of different models** rather than something to train more models on.
## EDA links
Outside of a basic EDA [colab notebook](https://colab.research.google.com/gist/pszemraj/915cc610a37ffce963993fd005cf6154/summcomparer-gauntlet-v0p1-basic-eda.ipynb) some static sites powered via `pandas-profiling`:
- [summary outputs](https://gauntlet-compiled-eda-v0p1.netlify.app/)
- [input docs](https://gauntlet-inputs-eda-v0p1.netlify.app/)
## Working with the dataset
> **Note:** The current version of the dataset is still largely in a "raw" format. It has seen some basic cleaning but may need more in the future.
**In the repo,** the dataset is split into two different tables. One contains the original documents with long text & IDs etc, and the other contains everything else.
- `input_documents.parquet`: This file contains the input documents for the gauntlet along with metadata/`id` fields as defined in `gauntlet_master_data.json`.
- `gauntlet_summaries.parquet`: This file contains the output summaries for the gauntlet with hyperparameters/models as columns. All summaries (rows) are mapped to their source documents (columns) by columns prefixed with `source_doc`.
If you are joining the two, join on `source_doc_id`. Here, they have already been merged for you. You can load and use the dataset from here:
```python
from datasets import load_dataset
dataset = load_dataset("pszemraj/summcomparer-gauntlet-v0p1")
dataset
```
which should output (for `v0.1.2`):
```
DatasetDict(
{
train: Dataset(
{
features: [
"GAUNTLET_PATH",
"file_name",
"summary",
"min_length",
"max_length",
"no_repeat_ngram_size",
"encoder_no_repeat_ngram_size",
"repetition_penalty",
"num_beams",
"num_beam_groups",
"length_penalty",
"early_stopping",
"do_sample",
"model_name",
"date",
"length",
"format",
"extractiveness",
"temperature",
"token_batch_length",
"penalty_alpha",
"top_k",
"batch_stride",
"max_len_ratio",
"directory-topic-tag",
"runtime",
"source_doc_filename",
"source_doc_id",
"source_doc_domain",
"document_text",
],
num_rows: 2043,
}
)
}
)
```
## OpenAI Terms of Use Notice
This dataset does contain reference summaries generated by GPT-4 and GPT-3.5-turbo. While it shouldn't be an issue as **this is meant for analysis and not training**, please note that the OpenAI generated text is subject to their terms of use.
This data can be filtered out/dropped if needed/relevant for your use of the data.
|
true | # Small-GPT-wiki-intro-features dataset
This dataset is based on [aadityaubhat/GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro).
It contains 150k short texts from Wikipedia (label 0) and corresponding texts generated by ChatGPT (label 1) (together 300k texts).
For each text, various complexity measures were calculated, including e.g. readability, lexical diversity etc.
It can be used for text classification or analysis of linguistic features of human-generated and ChatGPT-generated texts.
For a smaller version, check out [julia-lukasiewicz-pater/small-GPT-wiki-intro-features](https://huggingface.co/datasets/julia-lukasiewicz-pater/small-GPT-wiki-intro-features).
## Dataset structure
Features were calculated using various Python libraries, i.e. NLTK, [readability-metrics](https://pypi.org/project/py-readability-metrics/), [lexical-diversity](https://pypi.org/project/lexical-diversity/),
and [TextDescriptives](https://hlasse.github.io/TextDescriptives/). The list of all features and their corresponding sources can be found below:
| Column | Description |
| ------ | ----------- |
| text | human- or ChatGPT-generated text; taken from aadityaubhat/GPT-wiki-intro |
| normalized_bigram_entropy | bigram entropy normalized with estimated maximum entropy; nltk |
| mean_word_length | mean word length; nltk |
| mean_sent_length | mean sentence length; nltk |
| fog | Gunning-Fog; readability-metrics |
| ari | Automated Readability Index; readability-metrics |
| dale_chall | Dale Chall Readability; readability-metrics |
| hdd | Hypergeometric Distribution; lexical-diversity |
| mtld | Measure of lexical textual diversity; lexical-diversity |
| mattr | Moving average type-token ratio; lexical-diversity |
| number_of_ADJ | proportion of adjectives per word; nltk |
| number_of_ADP | proportion of adpositions per word; nltk |
| number_of_ADV | proportion of adverbs per word; nltk |
| number_of_CONJ | proportion of conjunctions per word; nltk |
| number_of_DET | proportion of determiners per word; nltk |
| number_of_NOUN | proportion of nouns per word; nltk |
| number_of_NUM | proportion of numerals per word; nltk |
| number_of_PRT | proportion of particles per word; nltk |
| number_of_PRON | proportion of pronouns per word; nltk |
| number_of_VERB | proportion of verbs per word; nltk |
| number_of_DOT | proportion of punctuation marks per word; nltk |
| number_of_X | proportion of POS tag 'Other' per word; nltk |
| class | binary class, 0 stands for Wikipedia, 1 stands for ChatGPT |
| spacy_perplexity | text perplexity; TextDescriptives |
| entropy | text entropy; TextDescriptives |
| automated_readability_index | Automated Readability Index; TextDescriptives |
| per_word_spacy_perplexity | text perplexity per word; TextDescriptives |
| dependency_distance_mean | mean distance from each token to their dependent; TextDescriptives |
| dependency_distance_std | standard deviation of distance from each token to their dependent; TextDescriptives |
| first_order_coherence | cosine similarity between consecutive sentences; TextDescriptives |
| second_order_coherence | cosine similarity between sentences that are two sentences apart; TextDescriptives |
| smog | SMOG; TextDescriptives |
| prop_adjacent_dependency_relation_mean | mean proportion adjacent dependency relations; TextDescriptives |
| prop_adjacent_dependency_relation_std | standard deviation of proportion adjacent dependency relations; TextDescriptives |
| syllables_per_token_mean | mean of syllables per token; TextDescriptives |
| syllables_per_token_median | median of syllables per token; TextDescriptives |
| token_length_std | standard deviation of token length; TextDescriptives |
| token_length_median | median of token length; TextDescriptives |
| sentence_length_median | median of sentence length; TextDescriptives |
| syllables_per_token_std | standard deviation of syllables per token; TextDescriptives |
| proportion_unique_tokens | proportion of unique tokens; TextDescriptives |
| top_ngram_chr_fraction_3 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| top_ngram_chr_fraction_2 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| top_ngram_chr_fraction_4 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| proportion_bullet_points | proportion of lines in the document that are bullet points; TextDescriptives |
| flesch_reading_ease | Flesch Reading ease ; TextDescriptives |
| flesch_kincaid_grade | Flesch Kincaid grade; TextDescriptives |
| gunning_fog | Gunning-Fog; TextDescriptives |
| coleman_liau_index | Coleman-Liau Index; TextDescriptives |
| oov_ratio| out-of-vocabulary ratio; TextDescriptives |
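As an illustration, a few of the simpler features above can be approximated in plain Python. The real dataset used NLTK and TextDescriptives; this sketch only mirrors the definitions with crude tokenization:

```python
def simple_features(text: str) -> dict:
    """Approximate mean word length, mean sentence length (in words),
    and proportion of unique tokens for one text."""
    words = text.split()
    # Crude sentence split on '.'; good enough for a sketch.
    sentences = [s for s in text.split(".") if s.strip()]
    return {
        "mean_word_length": sum(len(w) for w in words) / len(words),
        "mean_sent_length": len(words) / len(sentences),
        "proportion_unique_tokens": len(set(words)) / len(words),
    }

feats = simple_features("The cat sat. The cat ran.")
print(feats["mean_sent_length"])  # 3.0
```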
## Code
Code that was used to generate this dataset can be found on [Github](https://github.com/julia-lukasiewicz-pater/gpt-wiki-features/tree/main). |
false |
<img style="float:right; padding:1%" src="https://huggingface.co/datasets/mdroth/TinyGuanaco_DE/resolve/main/GuanacoBaby_SD2-1.jpg" alt="Picture of a young guanaco (thanks to Stable Diffusion 2.1)." width="25%">
# Dataset Card for _TinyGuanaco_DE_
**TinyGuanaco_DE**
- is intended for **development purposes**: use _TinyGuanaco_DE_ for prototyping your code
- is comprised of **German texts only** (hence _DE_)
- is really small: the `train` split has 4 instances and the `test` split has 2 instances
- has 3 columns: `index`, `query`, and `reply`
- the `query` column contains concatenations of a context ("Kontext:\n...") and a question ("Frage:\n...") that can be answered by knowing the context
- the `reply` column contains the according reply to that query
- features texts from the [`JosephusCheung/Guanaco`](https://huggingface.co/JosephusCheung/Guanaco) dataset and inherits its license from that dataset
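The `query` format described above can be reproduced like this; the exact separator between the context and question blocks is an assumption, not taken from the dataset:

```python
def build_query(kontext: str, frage: str) -> str:
    """Concatenate a context and a question in the card's described
    'Kontext:/Frage:' layout; the blank-line separator is an assumption."""
    return f"Kontext:\n{kontext}\n\nFrage:\n{frage}"

q = build_query("Guanakos leben in Südamerika.", "Wo leben Guanakos?")
print(q.startswith("Kontext:"))  # True
```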
License: [**gpl-3.0**](https://www.gnu.org/licenses/gpl-3.0.en.html) |
false |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# IWN Wordlists
[](https://creativecommons.org/licenses/by-nc-sa/4.0/) [](https://twitter.com/cfiltnlp) [](https://twitter.com/PeopleCentredAI)
We provide the unique word list from the [IndoWordnet (IWN)](https://www.cfilt.iitb.ac.in/indowordnet/) knowledge base.
## Usage
```python
from datasets import load_dataset
language = "hindi"  # supported languages: assamese, bengali, bodo, gujarati, hindi, kannada, kashmiri, konkani, malayalam, manipuri, marathi, meitei, nepali, oriya, punjabi, sanskrit, tamil, telugu, urdu
words = load_dataset("cfilt/iwn_wordlists", language)
word_list = words["train"]["word"]
```
## Citation
```latex
@inproceedings{bhattacharyya2010indowordnet,
title={IndoWordNet},
author={Bhattacharyya, Pushpak},
booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
year={2010}
}
``` |
false |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# Dataset Card for HiNER-original
[](https://twitter.com/cfiltnlp)
[](https://twitter.com/PeopleCentredAI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/cfiltnlp/HiNER
- **Repository:** https://github.com/cfiltnlp/HiNER
- **Paper:** https://arxiv.org/abs/2204.13743
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-collapsed
- **Point of Contact:** Rudra Murthy V
### Dataset Summary
This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.
**Note:** The dataset contains sentences from the ILCI corpus and other sources. The ILCI dataset requires a license from the Indian Language Consortium, so we do not distribute the ILCI portion of the data. Please send us an email with proof of ILCI data acquisition to obtain the full dataset.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
Hindi
## Dataset Structure
### Data Instances
{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}
### Data Fields
- `id`: The ID value of the data point.
- `tokens`: Raw tokens in the dataset.
- `ner_tags`: the integer NER tag IDs for each token in the sentence.
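As a quick illustration of how an instance like the one above can be decoded, the integer tags can be paired with string labels. The label inventory below is hypothetical and for illustration only; the authoritative mapping is exposed by the loaded dataset via `features['ner_tags'].feature.names`:

```python
def decode_tags(tokens, ner_tags, label_names):
    """Pair each token with its string NER label."""
    return [(token, label_names[tag]) for token, tag in zip(tokens, ner_tags)]

# Hypothetical inventory for illustration only; read the real one from the
# loaded dataset: hiner.features['ner_tags'].feature.names
label_names = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

# Under this assumed inventory, tag 3 in the example instance would map to "B-LOC".
pairs = decode_tags(["उड़ीसा", "को"], [3, 0], label_names)
```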
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| original | 76025 | 10861 | 21722|
| collapsed | 76025 | 10861 | 21722|
## About
This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743).
### Recent Updates
* Version 0.0.5: HiNER initial release
## Usage
You should have the `datasets` package installed to use the :rocket: HuggingFace datasets repository. You can install it via pip:
```shell
pip install datasets
```
To use the original dataset with all the tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-original')
```
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-collapsed')
```
However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder.
## Model(s)
Our best performing models are hosted on the HuggingFace models repository:
1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)
## Dataset Creation
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. The dataset was built for the task of Named Entity Recognition and introduces a new resource for Hindi, a language under-served in Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator over a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
doi = {10.48550/ARXIV.2204.13743},
url = {https://arxiv.org/abs/2204.13743},
author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
true |
# Dataset Card for CEDR-M7
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
```
### Contributions
Thanks to [@toiletsandpaper](https://github.com/toiletsandpaper) for adding this dataset.
|
false |
# Dataset Card for "BanglaParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglaparaphrase](https://github.com/csebuetnlp/banglaparaphrase)
- **Paper:** [BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset](https://arxiv.org/abs/2210.05109)
- **Point of Contact:** [Najrin Sultana](mailto:nazrinshukti@gmail.com)
### Dataset Summary
We present BanglaParaphrase, a high-quality synthetic Bangla paraphrase dataset containing about 466k paraphrase pairs.
The paraphrases are semantically coherent and syntactically diverse, ensuring high quality.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Languages
- `bengali`
## Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("csebuetnlp/BanglaParaphrase")
```
## Dataset Structure
### Data Instances
One example from the `train` part of the dataset is given below in JSON format.
```
{
"source": "বেশিরভাগ সময় প্রকৃতির দয়ার ওপরেই বেঁচে থাকতেন উপজাতিরা।",
"target": "বেশিরভাগ সময়ই উপজাতিরা প্রকৃতির দয়ার উপর নির্ভরশীল ছিল।"
}
```
### Data Fields
- 'source': A string representing the source sentence.
- 'target': A string representing the target sentence.
### Data Splits
Dataset with train-dev-test example counts are given below:
Language | ISO 639-1 Code | Train | Validation | Test |
-------------- | ---------------- | ------- | ----- | ------ |
Bengali | bn | 419,967 | 23,331 | 23,332 |
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Source Data
[Roar Bangla](https://roar.media/bangla)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
```
@article{akil2022banglaparaphrase,
title={BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset},
author={Akil, Ajwad and Sultana, Najrin and Bhattacharjee, Abhik and Shahriyar, Rifat},
journal={arXiv preprint arXiv:2210.05109},
year={2022}
}
```
### Contributions
|
false |
# UD_Spanish-AnCora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/UniversalDependencies/UD_Spanish-AnCora
- **Point of Contact:** [Daniel Zeman](zeman@ufal.mff.cuni.cz)
### Dataset Summary
This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
POS tagging
### Languages
The dataset is in Spanish (`es-ES`).
## Dataset Structure
### Data Instances
The dataset consists of three CoNLL-U files.
Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
2) Blank lines marking sentence boundaries.
3) Comment lines starting with hash (#).
### Data Fields
Word lines contain the following fields:
1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
2) FORM: Word form or punctuation symbol.
3) LEMMA: Lemma or stem of word form.
4) UPOS: Universal part-of-speech tag.
5) XPOS: Language-specific part-of-speech tag; underscore if not available.
6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
7) HEAD: Head of the current word, which is either a value of ID or zero (0).
8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
10) MISC: Any other annotation.
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html)
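The line types and fields described above can be read with a few lines of Python. This is an illustrative sketch, not an official CoNLL-U reader; dedicated libraries such as `conllu` handle edge cases like multiword token ranges and empty nodes more carefully:

```python
CONLLU_FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
                 "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_conllu(text):
    """Split a CoNLL-U document into sentences, each a list of field dicts."""
    sentences, current = [], []
    for line in text.splitlines():
        if line.startswith("#"):        # comment line
            continue
        if not line.strip():            # blank line marks a sentence boundary
            if current:
                sentences.append(current)
                current = []
            continue
        # word line: 10 fields separated by single tab characters
        current.append(dict(zip(CONLLU_FIELDS, line.split("\t"))))
    if current:
        sentences.append(current)
    return sentences
```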
### Data Splits
- es_ancora-ud-train.conllu
- es_ancora-ud-dev.conllu
- es_ancora-ud-test.conllu
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
[UD_Spanish-AnCora](https://github.com/UniversalDependencies/UD_Spanish-AnCora)
#### Initial Data Collection and Normalization
The original annotation was done in a constituency framework as a part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/).
To learn more about the Universal Dependencies project, visit the webpage [https://universaldependencies.org](https://universaldependencies.org)
#### Who are the source language producers?
For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Annotations
#### Annotation process
For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/).
#### Who are the annotators?
For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Citation Information
The following paper must be cited when using this corpus:
Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
To cite the Universal Dependencies project:
Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
### Contributions
[N/A]
|
false |
# Wikipedia (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ja)](https://ja.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
false |
# Dataset Card for Swiss Court View Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Court View Generation is a multilingual, diachronic dataset of 404K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.
This dataset contains court views for different languages and court chambers. It includes information such as decision id, language, chamber, file name, url, and the number of tokens in the facts and considerations sections.
Main (L1) contains all the data, Origin (L2) contains only data with complete origin facts & origin considerations.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, three of which (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Main |Number of Documents Origin|
|------------|------------|--------------------------|--------------------------|
| German | **de** | 197K | 49 |
| French | **fr** | 163K | 221 |
| Italian | **it** | 44K | 0 |
## Dataset Structure
### Data Fields
```
decision_id (string)
facts (string)
considerations (string)
origin_facts (string)
origin_considerations (string)
law_area (string)
language (string)
year (int32)
court (string)
chamber (string)
canton (string)
region (string)
```
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
[More Information Needed]
### Contributions
|
false |
# Dataset Card for Dataset Name
## Dataset Description
An open-source, large-scale, multi-round dialogue dataset powered by Turbo APIs. In consideration of factors such as safeguarding privacy, **we do not directly use any data available on the Internet as prompts**.
To ensure generation quality, two separate ChatGPT Turbo APIs are adopted in generation, where one plays the role of the user to generate queries and the other generates the response.
We instruct the user model with carefully designed prompts to mimic human user behavior and call the two APIs iteratively. The generated dialogues undergo further post-processing and filtering.
UltraChat is composed of three sectors:
- 🌏 **Questions about the World**: The dialogue data in this sector is derived from a wide range of inquiries related to concepts, entities, and objects from the real world. The topics covered are extensive, spanning areas such as technology, art, and entrepreneurship.
- ✍🏻 **Writing and Creation**: The dialogue data in this sector is driven by the demands for writing/creation from scratch, and encompasses any tasks that an AI assistant may aid within the creative process, spanning from email composition to crafting narratives and plays, and beyond.
- 📋 **Assistance on Existent Materials**: The dialogue data in this sector is generated based on existing materials, including but not limited to rewriting, continuation, summarization, and inference, covering a diverse range of topics.
- Repository: [UltraChat](https://github.com/thunlp/UltraChat)
- Explorer: [plain-explorer](http://39.101.77.220/), [Nomic-AI-Atlas-Explorer](https://atlas.nomic.ai/map/0ce65783-c3a9-40b5-895d-384933f50081/a7b46301-022f-45d8-bbf4-98107eabdbac)
## Dataset Structure
Each line in the downloaded data file is a json dict containing the data id and dialogue data in a list format. Below is an example line.
```
{
"id": "0",
"data": [
"How can cross training benefit groups like runners, swimmers, or weightlifters?",
"Cross training can benefit groups like runners, swimmers, or weightlifters in the following ways: ...",
"That makes sense. I've been wanting to improve my running time, but I never thought about incorporating strength training. Do you have any recommendations for specific exercises?",
"Sure, here are some strength training exercises that can benefit runners: ...",
"Hmm, I'm not really a fan of weightlifting though. Can I incorporate other forms of exercise into my routine to improve my running time?",
"Yes, absolutely! ...",
"..."
]
}
```
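Since the `data` list simply alternates user and assistant utterances, starting with the user, one line of the file can be converted to the common chat-message format. This is a sketch; the role names are a convention assumed here, not part of the release:

```python
import json

def to_chat_messages(jsonl_line):
    """Convert one UltraChat JSON line into role-tagged chat messages.

    Utterances in "data" alternate between user and assistant,
    with the user speaking first.
    """
    record = json.loads(jsonl_line)
    roles = ("user", "assistant")
    return [{"role": roles[i % 2], "content": text}
            for i, text in enumerate(record["data"])]
```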
### Citation Information
```bibtex
@misc{UltraChat,
author = {Ding, Ning and Chen, Yulin and Xu, Bokai and Hu, Shengding and Qin, Yujia and Liu, Zhiyuan and Sun, Maosong and Zhou, Bowen},
title = {UltraChat: A Large-scale Auto-generated Multi-round Dialogue Data},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/thunlp/ultrachat}},
}
``` |
false | # Dataset Card for "hent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for equity-evaluation-corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems and resources. Further, there is a lack of benchmark datasets for examining inappropriate biases in system predictions. Here, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We used the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 Affect in Tweets. We found that several of the systems showed statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available, and encourage its use to evaluate biases in sentiment and other NLP tasks.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `sentence`: a `string` feature.
- `template`: a `string` feature.
- `person`: a `string` feature.
- `race`: a `string` feature.
- `emotion`: a `string` feature.
- `emotion word`: a `string` feature.
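The relationship between these fields can be pictured with a small sketch: each sentence is an instantiation of a template with a person phrase and an emotion word. The placeholder syntax below is invented for illustration; the corpus itself defines the real templates:

```python
def fill_template(template, person, emotion_word):
    """Instantiate an EEC-style template (placeholder names are hypothetical)."""
    return (template
            .replace("<person>", person)
            .replace("<emotion word>", emotion_word))

# Hypothetical template; swapping the person phrase while holding the
# emotion word fixed is what enables the paper's controlled bias comparisons.
sentence = fill_template("<person> feels <emotion word>.", "My sister", "angry")
```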
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
|
false | # Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language and it has many ways of referring to people, neutralizing the genders and using some of the resources inside the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model for translating from gendered to neutral language, in order to produce more inclusive sentences.
### Compiled sources
One of the major challenges was to obtain a valuable dataset that would suit the gender-inclusion purpose; therefore, the team opted to dedicate a considerable amount of time to building it from scratch. You can find the results here.
The data used for the model training has been manually created from a compilation of sources, obtained from a series of guidelines and manuals issued by the Spanish Ministry of Health, Social Services and Equality on the usage of non-sexist language, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Apart from manually annotated samples, this dataset has been further extended by applying data augmentation so that a minimum number of training examples is generated.**
* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 |
false |
# Dataset Card for "lmqg/qg_itquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [SQuAD-it](https://huggingface.co/datasets/squad_it) for the question generation (QG) task.
Since the original dataset only contains training/validation sets, we manually sampled a test set from the training set; the test set
has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Italian (it)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'Carlo III',
'question': "Il figlio di chi è morto sulla strada per Palermo e vi è sepolto?",
'sentence': 'Carlo III scelse Palermo per la sua incoronazione come Re di Sicilia.',
'paragraph': 'Dopo il trattato di Utrecht (1713), la Sicilia fu consegnata ai Savoia, ma nel 1734 fu nuovamente posseduta dai...',
'sentence_answer': '<hl> Carlo III <hl> scelse Palermo per la sua incoronazione come Re di Sicilia.',
'paragraph_answer': "Dopo il trattato di Utrecht (1713), la Sicilia fu consegnata ai Savoia, ma nel 1734 fu nuovamente posseduta dai borbonici. <hl> Carlo III <hl> scelse Palermo per la sua incoronazione come Re di Sicilia. Charles fece costruire nuove case per la popolazione in crescita, mentre il commercio e l' industria crebbero. Tuttavia, ormai Palermo era ora solo un' altra città provinciale, dato che la Corte Reale risiedeva a Napoli. Il figlio di Carlo Ferdinando, anche se non gradito dalla popolazione, si rifugiò a Palermo dopo la Rivoluzione francese del 1798. Suo figlio Alberto è morto sulla strada per Palermo ed è sepolto in città. Quando fu fondato il Regno delle Due Sicilie, la capitale originaria era Palermo (1816) ma un anno dopo si trasferì a Napoli.",
'paragraph_sentence': "Dopo il trattato di Utrecht (1713), la Sicilia fu consegnata ai Savoia, ma nel 1734 fu nuovamente posseduta dai borbonici. <hl> Carlo III scelse Palermo per la sua incoronazione come Re di Sicilia. <hl> Charles fece costruire nuove case per la popolazione in crescita, mentre il commercio e l' industria crebbero. Tuttavia, ormai Palermo era ora solo un' altra città provinciale, dato che la Corte Reale risiedeva a Napoli. Il figlio di Carlo Ferdinando, anche se non gradito dalla popolazione, si rifugiò a Palermo dopo la Rivoluzione francese del 1798. Suo figlio Alberto è morto sulla strada per Palermo ed è sepolto in città. Quando fu fondato il Regno delle Due Sicilie, la capitale originaria era Palermo (1816) ma un anno dopo si trasferì a Napoli."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and
`paragraph_sentence` feature is for sentence-aware question generation.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|46550| 7609 |7609|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false | # Dataset Card for "simpsons-blip-captions"
|
true |
# Dataset Card for Law Area Prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains cases to be classified into the four main areas of law: Public, Civil, Criminal and Social.
These can be classified further into sub-areas:
```
"public": ["Tax", "Urban Planning and Environmental", "Expropriation", "Public Administration", "Other Fiscal"],
"civil": ["Rental and Lease", "Employment Contract", "Bankruptcy", "Family", "Competition and Antitrust", "Intellectual Property"],
"criminal": ["Substantive Criminal", "Criminal Procedure"]
```
### Supported Tasks and Leaderboards
Law Area Prediction can be used as a text classification task.
### Languages
Switzerland has four official languages, three of which (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents|
|------------|------------|--------------------|
| German | **de** | 127K |
| French | **fr** | 156K |
| Italian | **it** | 46K |
## Dataset Structure
- decision_id: unique identifier for the decision
- facts: facts section of the decision
- considerations: considerations section of the decision
- law_area: label of the decision (main area of law)
- law_sub_area: sub area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The dataset was split in a date-stratified fashion:
- Train: 2002-2015
- Validation: 2016-2017
- Test: 2018-2022
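The year boundaries above can be expressed as a small helper (a sketch; it assumes the `year` field listed under Dataset Structure):

```python
def split_for_year(year: int) -> str:
    """Assign a decision to a split using the date-stratified boundaries above."""
    if 2002 <= year <= 2015:
        return "train"
    if 2016 <= year <= 2017:
        return "validation"
    if 2018 <= year <= 2022:
        return "test"
    raise ValueError(f"year {year} is outside the dataset's 2002-2022 range")

print(split_for_year(2014), split_for_year(2016), split_for_year(2020))  # train validation test
```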
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
*Visu, Ronja, Joel*
*Title: Blabliblablu*
*Name of conference*
```
cit
```
### Contributions |
false | # Acute Inflammation
The [Acute Inflammation dataset](https://archive.ics.uci.edu/ml/datasets/Acute+Inflammations) from the [UCI ML repository](https://archive-beta.ics.uci.edu).
Predict whether the patient has an acute inflammation.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| inflammation | Binary classification | Does the patient have an acute inflammation? |
| nephritis         | Binary classification     | Does the patient have nephritis of the renal pelvis?          |
| bladder | Binary classification | Does the patient have bladder inflammation? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/acute_inflammation", "inflammation")["train"]
```
# Features
The target feature changes according to the selected configuration and is always the last column of the dataset.
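Since the target is always the last column, separating inputs from the target is mechanical. A minimal sketch with an invented record shaped like the feature table below (values are hypothetical):

```python
# Invented record following the schema of the "inflammation" configuration;
# the target (`has_acute_inflammation`) is the last column.
record = {
    "temperature": 38.5,
    "has_nausea": True,
    "has_lumbar_pain": False,
    "has_urine_pushing": True,
    "has_micturition_pains": True,
    "has_burnt_urethra": False,
    "has_inflammed_bladder": True,
    "has_nephritis_of_renal_pelvis": False,
    "has_acute_inflammation": 1,
}

*feature_names, target_name = record  # dicts preserve insertion order
x = [float(record[name]) for name in feature_names]
y = record[target_name]
print(x, y)
```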
| **Feature** | **Type** |
|---------------------------------------|---------------|
| `temperature` | `[float64]` |
| `has_nausea` | `[bool]` |
| `has_lumbar_pain` | `[bool]` |
| `has_urine_pushing` | `[bool]` |
| `has_micturition_pains` | `[bool]` |
| `has_burnt_urethra` | `[bool]` |
| `has_inflammed_bladder` | `[bool]` |
| `has_nephritis_of_renal_pelvis` | `[bool]` |
| `has_acute_inflammation` | `[int8]` | |
false | # Magic
The [Magic dataset](https://archive.ics.uci.edu/ml/datasets/Magic) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| magic | Binary classification | Classify the person's magic as over or under the threshold. |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/magic")["train"]
``` |
false |
# IVA Swift GitHub Code Dataset - Curated - Validation
## Dataset Description
This is the curated validation split of the IVA Swift dataset extracted from GitHub.
It contains curated Swift files gathered with the purpose of training and validating a code generation model.
The dataset only contains a validation split.
For the train and unsliced versions, please check the following links:
* Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-train
Information about the dataset structure, the data involved, licenses, and the standard Dataset Card sections is available in the repositories linked above and applies to this dataset as well.
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames. |
true | # PropSegmEnt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/PropSegmEnt
- **Repository:** https://github.com/google-research-datasets/PropSegmEnt
- **Paper:** https://arxiv.org/abs/2212.10750
- **Point of Contact:** sihaoc@seas.upenn.edu
### Dataset Summary
This is a reproduced (i.e. after web-crawling) and processed version of [the "PropSegment" dataset](https://github.com/google-research-datasets/PropSegmEnt) from Google Research.
Since the [`News`](https://github.com/google-research-datasets/NewSHead) portion of the dataset is released only via urls, we reconstruct the dataset by crawling.
Overall, ~96% of the dataset can be reproduced; for the remaining ~4%, the URLs are no longer valid or the sentences have been edited (i.e. they cannot be aligned with the original dataset).
PropSegment (Proposition-level Segmentation and Entailment) is a large-scale, human annotated dataset for segmenting English text into propositions, and recognizing proposition-level entailment relations --- whether a different, related document entails each proposition, contradicts it, or neither.
The original dataset features >45k human annotated propositions, i.e. individual semantic units within sentences, as well as >35k entailment labels between propositions and documents.
Check out more details in the [dataset paper](https://arxiv.org/abs/2212.10750).
## Dataset Structure
Here we provide processed versions of the dataset for seq2seq model inputs/outputs.
`proposition_segmentation.*.jsonl` contains data for the text segmentation task, i.e. split a sentence into propositions.
The output propositions are concatenated as one string (with no particular order between them) by a special token `[SEP]`.
Each proposition is annotated as spans enclosed by `[M]` and `[/M]`.
```
{
"sentence": "This film marks the directorial debut for production designer Robert Stromberg.",
"propositions": "This film marks the directorial debut for [M]production designer Robert Stromberg.[/M][SEP]This [M]film marks the directorial debut for[/M] production designer [M]Robert Stromberg[/M]."
}
```
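Given the `[SEP]` and `[M]`/`[/M]` conventions above, the annotated spans of each proposition can be recovered with a small sketch like the following (helper name is illustrative, not part of the dataset's tooling):

```python
import re

def parse_propositions(output: str) -> list[list[str]]:
    """Split a seq2seq output into propositions and extract each one's [M]...[/M] spans."""
    return [re.findall(r"\[M\](.*?)\[/M\]", prop) for prop in output.split("[SEP]")]

example = (
    "This film marks the directorial debut for "
    "[M]production designer Robert Stromberg.[/M][SEP]"
    "This [M]film marks the directorial debut for[/M] "
    "production designer [M]Robert Stromberg[/M]."
)
print(parse_propositions(example))
# [['production designer Robert Stromberg.'], ['film marks the directorial debut for', 'Robert Stromberg']]
```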
`propnli.*.jsonl` contains examples for the proposition-to-document entailment task, i.e. Given a proposition and a document, predict whether the proposition can be entailed/contradicted, or neutral with respect to the document.
```
{
"hypothesis": "[M]The Departed is[/M] a 2006 feature film [M]directed by Martin Scorsese.[/M]",
"premise": "The Departed is a 2006 American crime thriller film directed by Martin Scorsese and written by William Monahan. It starred Leonardo DiCaprio, Matt Damon, Jack Nicholson, and Mark Wahlberg, with Martin Sheen, Ray Winstone, Vera Farmiga, and Alec Baldwin in supporting roles. It is a remake of the Hong Kong film Infernal Affairs (2002).\nThe Departed won the Oscar for Best Picture at the 79th Academy Awards. Scorsese received the Oscar for Best Director, Thelma Schoonmaker the Oscar for Best Editing and William Monahan the Oscar for Best Adapted Screenplay.",
"label": "e"
}
```
### Citation
```
@inproceedings{chen2023propsegment,
title = "{PropSegmEnt}: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition",
author = "Chen, Sihao and Buthpitiya, Senaka and Fabrikant, Alex and Roth, Dan and Schuster, Tal",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
year = "2023",
}
```
|
false | |
false |
### Dataset Summary
This dataset is a DeepL-based machine translation of the English SQuAD2.0 dataset, which combines the 100,000 questions in
SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported
by the paragraph and abstain from answering.
### Data Fields
The data fields are the same among all splits.
#### Example Data
```
{
"title": "Victoria_(Australia)",
"paragraphs": [
{
"qas": [
{
"question": "Millainen talous Victoriassa on?",
"id": "570d2417fed7b91900d45c3d",
"answers": [
{
"text": "monipuolinen",
"answer_start": 26,
"texts": [
"monipuolinen"
],
"starts": [
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
}
],
"is_impossible": false
}
],
"context": "Victorian talous on hyvin monipuolinen: palvelualat, kuten rahoitus- ja kiinteistöpalvelut, terveydenhuolto, koulutus, tukkukauppa, vähittäiskauppa, majoitus- ja ravitsemistoiminta ja teollisuus muodostavat suurimman osan työllisyydestä. Victorian osavaltion bruttokansantuote on Australian toiseksi suurin, vaikka Victoria on asukaskohtaisen bruttokansantuotteen osalta neljäntenä, koska sen kaivostoiminta on vähäistä. Kulttuurin alalla Melbournessa on useita museoita, taidegallerioita ja teattereita, ja sitä kutsutaan myös \"Australian urheilupääkaupungiksi\". Melbournen krikettikenttä (Melbourne Cricket Ground) on Australian suurin stadion, ja siellä järjestettiin vuoden 1956 kesäolympialaiset ja vuoden 2006 Kansainyhteisön kisat. Kenttää pidetään myös australialaisen kriketin ja australialaisen jalkapallon \"henkisenä kotina\", ja se isännöi vuosittain Australian jalkapalloliigan (AFL) suurta loppuottelua, johon osallistuu yleensä yli 95 000 ihmistä. Victoriaan kuuluu kahdeksan julkista yliopistoa, joista vanhin, Melbournen yliopisto, on perustettu vuonna 1853."
}
]
}
```
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `texts`: a `string` feature.
- `starts`: a `int32` feature.
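Because the offsets were produced for machine-translated text, a useful sanity check is that each `answer_start` still points at its answer inside the context. A sketch using a truncated version of the example above:

```python
def answer_aligned(context: str, text: str, answer_start: int) -> bool:
    """Check that answer_start points exactly at the answer text inside the context."""
    return context[answer_start:answer_start + len(text)] == text

# Truncated context from the example record above.
context = "Victorian talous on hyvin monipuolinen: palvelualat, ..."
print(answer_aligned(context, "monipuolinen", 26))       # offset from the example
print(answer_aligned(context, "hyvin monipuolinen", 20))
```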
### Data Splits
| name | train | validation |
| -------- | -----: | ---------: |
| squad_v2 | 130319 | 11873 |
### Evaluation Results
Results from fine-tuning [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) for extractive question answering.
| dataset | F1 |
| -------------------- | ----: |
| TurkuNLP/squad_v2_fi | 73.66 |
| ilmariky/SQuAD_v2_fi | 61.87 |
### Considerations for Using the Data
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
system development and evaluation of any kind. In general, we ask that you do not pair the original English data with the translations
except when working on research unrelated to machine translation, so as not to infringe the terms and conditions.
### Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders. |
false | |
false |
# IVA Kotlin GitHub Code Dataset
## Dataset Description
This is the raw IVA Kotlin dataset extracted from GitHub.
It contains uncurated Kotlin files gathered with the purpose to train a code generation model.
The dataset consists of 464215 kotlin code files from GitHub totaling ~361 MB of data.
The dataset was created from the public GitHub dataset on Google BiqQuery.
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
```
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
print(dataset[723])
#OUTPUT:
{
"repo_name":"nemerosa/ontrack",
"path":"ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
"copies":"1",
"size":"3248",
"content":"...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
"license":"mit"
}
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|content|string|content of the source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
### Instance
```json
{
"repo_name":"nemerosa/ontrack",
"path":"ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
"copies":"1",
"size":"3248",
"content":"...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
"license":"mit"
}
```
## Languages
The dataset contains only Kotlin files.
```json
{
"Kotlin": [".kt"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0": 9146,
"apache-2.0": 272388,
"artistic-2.0": 219,
"bsd-2-clause": 896,
"bsd-3-clause": 12328,
"cc0-1.0": 411,
"epl-1.0": 2111,
"gpl-2.0": 11080,
"gpl-3.0": 48911,
"isc": 997,
"lgpl-2.1": 297,
"lgpl-3.0": 7749,
"mit": 92540,
"mpl-2.0": 3386,
"unlicense": 1756
}
```
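As a quick consistency check, the license occurrences above sum to the total number of files reported in the statistics below:

```python
# License occurrence counts copied from the listing above.
license_counts = {
    "agpl-3.0": 9146, "apache-2.0": 272388, "artistic-2.0": 219,
    "bsd-2-clause": 896, "bsd-3-clause": 12328, "cc0-1.0": 411,
    "epl-1.0": 2111, "gpl-2.0": 11080, "gpl-3.0": 48911,
    "isc": 997, "lgpl-2.1": 297, "lgpl-3.0": 7749,
    "mit": 92540, "mpl-2.0": 3386, "unlicense": 1756,
}
total = sum(license_counts.values())
print(total)  # 464215, matching the "Number of files" in the statistics
```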
## Dataset Statistics
```json
{
"Total size": "~361 MB",
"Number of files": 464215,
"Number of files under 500 bytes": 99845,
  "Average file size in bytes": 3252
}
```
## Dataset Creation
The dataset was created using Google Query for Github:
https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code
The following steps were pursued for data
gathering:
1. Creation of a dataset and a table in Google Big Query Project.
2. Creation of a bucket in Google Cloud Storage.
3. Creation of a query in Google Big Query Project.
4. Running the query with the setting to output the results in the dataset and table
created at step one.
5. Exporting the resulting dataset into the bucket created in step 2. Export format of JSON with gzip compression.
These steps led to the following results:
* 2.7 TB of data processed
* 464,215 extracted rows/files
* 1.46 GB of total logical bytes
* 7 json.gz files totaling 361 MB
The SQL Query used is:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
(select f.*, row_number() over (partition by id order by path desc) as seqnum from `bigquery-public-data.github_repos.files` AS f) f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id AND seqnum=1
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary AND ((f.path LIKE '%.kt') AND (c.size BETWEEN 0 AND 1048575))
```
## Data Splits
The dataset only contains a train split.
Using the curated version of this dataset, a split was made into multiple repositories:
* Clean Version: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
# Additional Information
## Dataset Curators
[mircea.dev@icloud.com](mailto:mircea.dev@icloud.com)
## Licensing Information
* The license of this open-source dataset is: other.
* The dataset is gathered from open-source repositories on [GitHub using BigQuery](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code).
* Find the license of each entry in the dataset in the corresponding license column.
## Citation Information
```json
@misc {mircea_vasiliniuc_2023,
author = { {Mircea Vasiliniuc} },
title = { iva-kotlin-codeint (Revision 1af5124) },
year = 2023,
url = { https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint },
doi = { 10.57967/hf/0779 },
publisher = { Hugging Face }
}
``` |
false | |
true | |
false | # sql-create-context_guanaco_style
### Dataset Summary
I would recommend using my other finetuning dataset [richardr1126/spider-context-alpaca-finetune](https://huggingface.co/datasets/richardr1126/spider-context-alpaca-finetune) as it was made directly from the [spider](https://huggingface.co/datasets/spider) dataset.
This dataset was created by reformatting [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context), a WikiSQL and Spider based dataset, to be in Guanaco-chat style format instead of Alpaca.
WikiSQL is an older, less complex dataset than Spider. |
false |
# Dataset Card for "squad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
### Dataset Summary
This dataset is a custom copy of the original SQuAD dataset. It is used to showcase dataset repositories. Data are the same as the original dataset.
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for the `plain_text` configuration of the dataset.
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
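Since `answer_start` is a character offset into `context`, an answer span can be recovered by slicing. A minimal sketch with a made-up record following the schema above (not taken from the actual data):

```python
# Hypothetical example following the field schema above; not a real SQuAD record.
example = {
    "id": "1",
    "title": "train test",
    "context": "This is a test context.",
    "question": "Is this a test?",
    "answers": {"answer_start": [8], "text": ["a test context"]},
}

def answer_span_matches(ex, i=0):
    """Check that the i-th answer text equals the context slice at answer_start."""
    start = ex["answers"]["answer_start"][i]
    text = ex["answers"]["text"][i]
    return ex["context"][start:start + len(text)] == text
```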
### Data Splits Sample Size
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
### Annotations
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
true |
# WNLI-ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Catalan of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Three tsv files.
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Example
| index | sentence 1 | sentence 2 | label |
| ------- |----------- | --------- | ----- |
| 0 | Vaig clavar una agulla en una pastanaga. Quan la vaig treure, tenia un forat. | La pastanaga tenia un forat. | 1 |
| 1 | En Joan no podia veure l’escenari amb en Guillem davant seu perquè és molt baix. | En Joan és molt baix. | 1 |
| 2 | Els policies van arrestar tots els membres de la banda. Volien aturar el tràfic de drogues del barri. | Els policies volien aturar el tràfic de drogues del barri. | 1 |
| 3 | L’Esteve segueix els passos d’en Frederic en tot. L’influencia moltíssim. | L’Esteve l’influencia moltíssim. | 0 |
### Data Splits
- wnli-train-ca.csv: 636
- wnli-dev-ca.csv: 72
- wnli-test-shuffled-ca.csv: 147
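A minimal sketch of parsing one of these splits, assuming tab-separated columns matching the field list above (the exact header names in the distributed files may differ):

```python
import csv
import io

# Hypothetical two-row sample mirroring the described layout; not real data.
SAMPLE = (
    "index\tsentence1\tsentence2\tlabel\n"
    "0\tFrase d'exemple u.\tInterpretació u.\t1\n"
    "1\tFrase d'exemple dos.\tInterpretació dos.\t0\n"
)

def read_split(fileobj):
    """Parse a WNLI-ca style TSV into ((sentence1, sentence2), label) pairs."""
    reader = csv.DictReader(fileobj, delimiter="\t")
    return [((row["sentence1"], row["sentence2"]), int(row["label"]))
            for row in reader]

pairs = read_split(io.StringIO(SAMPLE))
```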
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Catalan, a low-resource language, and to allow inter-lingual comparisons.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan, commissioned by BSC TeMU within the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan.
#### Who are the annotators?
The translation was commissioned to a professional translator.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Contributions
[N/A]
|
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used in your experiments. A minimal sketch of loading one of them with the `beir` package (using SciFact as an example; any `BEIR-Name` from the table below works):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one BEIR dataset into a local "datasets" folder
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: doc_id -> {"title", "text"}; queries: query_id -> text;
# qrels: query_id -> {doc_id: relevance score}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of information retrieval models, with nDCG@10 reported as the primary metric across all datasets.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
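With structures like the above, a toy relevance check against the qrels mapping might look like this (a minimal illustration, not the official BEIR evaluation code, which reports nDCG@10):

```python
# Relevance judgements in the qrels shape shown above.
qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}

def precision_at_k(query_id, retrieved_doc_ids, qrels, k=1):
    """Fraction of the top-k retrieved documents judged relevant for query_id."""
    judged = qrels.get(query_id, {})
    hits = sum(1 for doc_id in retrieved_doc_ids[:k] if judged.get(doc_id, 0) > 0)
    return hits / k
```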
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id.
  - `corpus-id`: a `string` feature representing the document id.
  - `score`: an `int32` feature denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false |
# Dataset Card for Quick, Draw!
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Quick, Draw! homepage](https://quickdraw.withgoogle.com/data)
- **Repository:** [Quick, Draw! repository](https://github.com/googlecreativelab/quickdraw-dataset)
- **Paper:** [A Neural Representation of Sketch Drawings](https://arxiv.org/abs/1704.03477v4)
- **Leaderboard:** [Quick, Draw! Doodle Recognition Challenge](https://www.kaggle.com/competitions/quickdraw-doodle-recognition/leaderboard)
- **Point of Contact:** [Quick, Draw! support](mailto:quickdraw-support@google.com)
### Dataset Summary
The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given sketch into one of 345 classes.
The (closed) leaderboard for this task is available [here](https://www.kaggle.com/competitions/quickdraw-doodle-recognition/leaderboard).
### Languages
English.
## Dataset Structure
### Data Instances
#### `raw`
A data point comprises a drawing and its metadata.
```
{
'key_id': '5475678961008640',
'word': 0,
'recognized': True,
'timestamp': datetime.datetime(2017, 3, 28, 13, 28, 0, 851730),
'countrycode': 'MY',
'drawing': {
'x': [[379.0, 380.0, 381.0, 381.0, 381.0, 381.0, 382.0], [362.0, 368.0, 375.0, 380.0, 388.0, 393.0, 399.0, 404.0, 409.0, 410.0, 410.0, 405.0, 397.0, 392.0, 384.0, 377.0, 370.0, 363.0, 356.0, 348.0, 342.0, 336.0, 333.0], ..., [477.0, 473.0, 471.0, 469.0, 468.0, 466.0, 464.0, 462.0, 461.0, 469.0, 475.0, 483.0, 491.0, 499.0, 510.0, 521.0, 531.0, 540.0, 548.0, 558.0, 566.0, 576.0, 583.0, 590.0, 595.0, 598.0, 597.0, 596.0, 594.0, 592.0, 590.0, 589.0, 588.0, 586.0]],
'y': [[1.0, 7.0, 15.0, 21.0, 27.0, 32.0, 32.0], [17.0, 17.0, 17.0, 17.0, 16.0, 16.0, 16.0, 16.0, 18.0, 23.0, 29.0, 32.0, 32.0, 32.0, 29.0, 27.0, 25.0, 23.0, 21.0, 19.0, 17.0, 16.0, 14.0], ..., [151.0, 146.0, 139.0, 131.0, 125.0, 119.0, 113.0, 107.0, 102.0, 99.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 98.0, 100.0, 102.0, 104.0, 105.0, 110.0, 115.0, 121.0, 126.0, 131.0, 137.0, 142.0, 148.0, 150.0]],
't': [[0, 84, 100, 116, 132, 148, 260], [573, 636, 652, 660, 676, 684, 701, 724, 796, 838, 860, 956, 973, 979, 989, 995, 1005, 1012, 1020, 1028, 1036, 1053, 1118], ..., [8349, 8446, 8468, 8484, 8500, 8516, 8541, 8557, 8573, 8685, 8693, 8702, 8710, 8718, 8724, 8732, 8741, 8748, 8757, 8764, 8773, 8780, 8788, 8797, 8804, 8965, 8996, 9029, 9045, 9061, 9076, 9092, 9109, 9167]]
}
}
```
#### `preprocessed_simplified_drawings`
A simplified version of the dataset, generated from the `raw` data: the vectors are simplified, timing information is removed, and the drawings are positioned and scaled into a 256x256 region.
The simplification process was:
1. Align the drawing to the top-left corner, to have minimum values of 0.
2. Uniformly scale the drawing, to have a maximum value of 255.
3. Resample all strokes with a 1 pixel spacing.
4. Simplify all strokes using the [Ramer-Douglas-Peucker algorithm](https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm) with an epsilon value of 2.0.
```
{
'key_id': '5475678961008640',
'word': 0,
'recognized': True,
'timestamp': datetime.datetime(2017, 3, 28, 15, 28),
'countrycode': 'MY',
'drawing': {
'x': [[31, 32], [27, 37, 38, 35, 21], [25, 28, 38, 39], [33, 34, 32], [5, 188, 254, 251, 241, 185, 45, 9, 0], [35, 35, 43, 125, 126], [35, 76, 80, 77], [53, 50, 54, 80, 78]],
'y': [[0, 7], [4, 4, 6, 7, 3], [5, 10, 10, 7], [4, 33, 44], [50, 50, 54, 83, 86, 90, 86, 77, 52], [85, 91, 92, 96, 90], [35, 37, 41, 47], [34, 23, 22, 23, 34]]
}
}
```
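Steps 1 and 2 of the simplification above (align to the top-left, then uniformly scale so the larger dimension spans 0-255) can be sketched as follows; this is an illustration, not the code used to produce the dataset:

```python
def align_and_scale(strokes):
    """strokes: list of (xs, ys) pairs, as in the `drawing` field.

    Returns strokes translated to a (0, 0) origin and uniformly scaled
    so the larger of the two dimensions reaches 255.
    """
    xs = [x for stroke in strokes for x in stroke[0]]
    ys = [y for stroke in strokes for y in stroke[1]]
    min_x, min_y = min(xs), min(ys)
    span = max(max(xs) - min_x, max(ys) - min_y) or 1  # avoid division by zero
    scale = 255 / span
    return [
        ([(x - min_x) * scale for x in stroke[0]],
         [(y - min_y) * scale for y in stroke[1]])
        for stroke in strokes
    ]
```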
#### `preprocessed_bitmaps` (default configuration)
This configuration contains the 28x28 grayscale bitmap images that were generated from the simplified data, but are aligned to the center of the drawing's bounding box rather than the top-left corner. The code that was used for generation is available [here](https://github.com/googlecreativelab/quickdraw-dataset/issues/19#issuecomment-402247262).
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x10B5B102828>,
'label': 0
}
```
#### `sketch_rnn` and `sketch_rnn_full`
The `sketch_rnn_full` configuration stores the data in a format suitable for input to a recurrent neural network and was used for training the [Sketch-RNN](https://arxiv.org/abs/1704.03477) model. Unlike `sketch_rnn`, where the samples have been randomly selected from each category, the `sketch_rnn_full` configuration contains the full data for each category.
```
{
'word': 0,
'drawing': [[132, 0, 0], [23, 4, 0], [61, 1, 0], [76, 0, 0], [22, -4, 0], [152, 0, 0], [50, -5, 0], [36, -10, 0], [8, 26, 0], [0, 69, 0], [-2, 11, 0], [-8, 10, 0], [-56, 24, 0], [-23, 14, 0], [-99, 40, 0], [-45, 6, 0], [-21, 6, 0], [-170, 2, 0], [-81, 0, 0], [-29, -9, 0], [-94, -19, 0], [-48, -24, 0], [-6, -16, 0], [2, -36, 0], [7, -29, 0], [23, -45, 0], [13, -6, 0], [41, -8, 0], [42, -2, 1], [392, 38, 0], [2, 19, 0], [11, 33, 0], [13, 0, 0], [24, -9, 0], [26, -27, 0], [0, -14, 0], [-8, -10, 0], [-18, -5, 0], [-14, 1, 0], [-23, 4, 0], [-21, 12, 1], [-152, 18, 0], [10, 46, 0], [26, 6, 0], [38, 0, 0], [31, -2, 0], [7, -2, 0], [4, -6, 0], [-10, -21, 0], [-2, -33, 0], [-6, -11, 0], [-46, 1, 0], [-39, 18, 0], [-19, 4, 1], [-122, 0, 0], [-2, 38, 0], [4, 16, 0], [6, 4, 0], [78, 0, 0], [4, -8, 0], [-8, -36, 0], [0, -22, 0], [-6, -2, 0], [-32, 14, 0], [-58, 13, 1], [-96, -12, 0], [-10, 27, 0], [2, 32, 0], [102, 0, 0], [1, -7, 0], [-27, -17, 0], [-4, -6, 0], [-1, -34, 0], [-64, 8, 1], [129, -138, 0], [-108, 0, 0], [-8, 12, 0], [-1, 15, 0], [12, 15, 0], [20, 5, 0], [61, -3, 0], [24, 6, 0], [19, 0, 0], [5, -4, 0], [2, 14, 1]]
}
```
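Each triplet in this format appears to be an offset from the previous point plus a pen state, following the stroke-3 format of the Sketch-RNN paper (Δx, Δy, pen lifted). A sketch of recovering absolute coordinates, under that assumption:

```python
def deltas_to_points(stroke3):
    """Convert (dx, dy, pen_state) triplets into absolute (x, y, pen_state).

    Assumes the stroke-3 convention from the Sketch-RNN paper: each triplet
    is an offset from the previous point, starting from the origin.
    """
    x, y, points = 0, 0, []
    for dx, dy, pen in stroke3:
        x += dx
        y += dy
        points.append((x, y, pen))
    return points
```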
### Data Fields
#### `raw`
- `key_id`: A unique identifier across all drawings.
- `word`: Category the player was prompted to draw.
- `recognized`: Whether the word was recognized by the game.
- `timestamp`: When the drawing was created.
- `countrycode`: A two letter country code ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) of where the player was located.
- `drawing`: A dictionary where `x` and `y` are the pixel coordinates, and `t` is the time in milliseconds since the first point. `x` and `y` are real-valued while `t` is an integer. `x`, `y` and `t` match in length and are represented as lists of lists where each sublist corresponds to a single stroke. The raw drawings can have vastly different bounding boxes and numbers of points due to the different devices used for display and input.
#### `preprocessed_simplified_drawings`
- `key_id`: A unique identifier across all drawings.
- `word`: Category the player was prompted to draw.
- `recognized`: Whether the word was recognized by the game.
- `timestamp`: When the drawing was created.
- `countrycode`: A two letter country code ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) of where the player was located.
- `drawing`: A simplified drawing represented as a dictionary where `x` and `y` are the pixel coordinates. The simplification process is described in the `Data Instances` section.
#### `preprocessed_bitmaps` (default configuration)
- `image`: A `PIL.Image.Image` object containing the 28x28 grayscale bitmap. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: Category the player was prompted to draw.
<details>
<summary>
Click here to see the full class labels mapping:
</summary>
|id|class|
|---|---|
|0|aircraft carrier|
|1|airplane|
|2|alarm clock|
|3|ambulance|
|4|angel|
|5|animal migration|
|6|ant|
|7|anvil|
|8|apple|
|9|arm|
|10|asparagus|
|11|axe|
|12|backpack|
|13|banana|
|14|bandage|
|15|barn|
|16|baseball bat|
|17|baseball|
|18|basket|
|19|basketball|
|20|bat|
|21|bathtub|
|22|beach|
|23|bear|
|24|beard|
|25|bed|
|26|bee|
|27|belt|
|28|bench|
|29|bicycle|
|30|binoculars|
|31|bird|
|32|birthday cake|
|33|blackberry|
|34|blueberry|
|35|book|
|36|boomerang|
|37|bottlecap|
|38|bowtie|
|39|bracelet|
|40|brain|
|41|bread|
|42|bridge|
|43|broccoli|
|44|broom|
|45|bucket|
|46|bulldozer|
|47|bus|
|48|bush|
|49|butterfly|
|50|cactus|
|51|cake|
|52|calculator|
|53|calendar|
|54|camel|
|55|camera|
|56|camouflage|
|57|campfire|
|58|candle|
|59|cannon|
|60|canoe|
|61|car|
|62|carrot|
|63|castle|
|64|cat|
|65|ceiling fan|
|66|cell phone|
|67|cello|
|68|chair|
|69|chandelier|
|70|church|
|71|circle|
|72|clarinet|
|73|clock|
|74|cloud|
|75|coffee cup|
|76|compass|
|77|computer|
|78|cookie|
|79|cooler|
|80|couch|
|81|cow|
|82|crab|
|83|crayon|
|84|crocodile|
|85|crown|
|86|cruise ship|
|87|cup|
|88|diamond|
|89|dishwasher|
|90|diving board|
|91|dog|
|92|dolphin|
|93|donut|
|94|door|
|95|dragon|
|96|dresser|
|97|drill|
|98|drums|
|99|duck|
|100|dumbbell|
|101|ear|
|102|elbow|
|103|elephant|
|104|envelope|
|105|eraser|
|106|eye|
|107|eyeglasses|
|108|face|
|109|fan|
|110|feather|
|111|fence|
|112|finger|
|113|fire hydrant|
|114|fireplace|
|115|firetruck|
|116|fish|
|117|flamingo|
|118|flashlight|
|119|flip flops|
|120|floor lamp|
|121|flower|
|122|flying saucer|
|123|foot|
|124|fork|
|125|frog|
|126|frying pan|
|127|garden hose|
|128|garden|
|129|giraffe|
|130|goatee|
|131|golf club|
|132|grapes|
|133|grass|
|134|guitar|
|135|hamburger|
|136|hammer|
|137|hand|
|138|harp|
|139|hat|
|140|headphones|
|141|hedgehog|
|142|helicopter|
|143|helmet|
|144|hexagon|
|145|hockey puck|
|146|hockey stick|
|147|horse|
|148|hospital|
|149|hot air balloon|
|150|hot dog|
|151|hot tub|
|152|hourglass|
|153|house plant|
|154|house|
|155|hurricane|
|156|ice cream|
|157|jacket|
|158|jail|
|159|kangaroo|
|160|key|
|161|keyboard|
|162|knee|
|163|knife|
|164|ladder|
|165|lantern|
|166|laptop|
|167|leaf|
|168|leg|
|169|light bulb|
|170|lighter|
|171|lighthouse|
|172|lightning|
|173|line|
|174|lion|
|175|lipstick|
|176|lobster|
|177|lollipop|
|178|mailbox|
|179|map|
|180|marker|
|181|matches|
|182|megaphone|
|183|mermaid|
|184|microphone|
|185|microwave|
|186|monkey|
|187|moon|
|188|mosquito|
|189|motorbike|
|190|mountain|
|191|mouse|
|192|moustache|
|193|mouth|
|194|mug|
|195|mushroom|
|196|nail|
|197|necklace|
|198|nose|
|199|ocean|
|200|octagon|
|201|octopus|
|202|onion|
|203|oven|
|204|owl|
|205|paint can|
|206|paintbrush|
|207|palm tree|
|208|panda|
|209|pants|
|210|paper clip|
|211|parachute|
|212|parrot|
|213|passport|
|214|peanut|
|215|pear|
|216|peas|
|217|pencil|
|218|penguin|
|219|piano|
|220|pickup truck|
|221|picture frame|
|222|pig|
|223|pillow|
|224|pineapple|
|225|pizza|
|226|pliers|
|227|police car|
|228|pond|
|229|pool|
|230|popsicle|
|231|postcard|
|232|potato|
|233|power outlet|
|234|purse|
|235|rabbit|
|236|raccoon|
|237|radio|
|238|rain|
|239|rainbow|
|240|rake|
|241|remote control|
|242|rhinoceros|
|243|rifle|
|244|river|
|245|roller coaster|
|246|rollerskates|
|247|sailboat|
|248|sandwich|
|249|saw|
|250|saxophone|
|251|school bus|
|252|scissors|
|253|scorpion|
|254|screwdriver|
|255|sea turtle|
|256|see saw|
|257|shark|
|258|sheep|
|259|shoe|
|260|shorts|
|261|shovel|
|262|sink|
|263|skateboard|
|264|skull|
|265|skyscraper|
|266|sleeping bag|
|267|smiley face|
|268|snail|
|269|snake|
|270|snorkel|
|271|snowflake|
|272|snowman|
|273|soccer ball|
|274|sock|
|275|speedboat|
|276|spider|
|277|spoon|
|278|spreadsheet|
|279|square|
|280|squiggle|
|281|squirrel|
|282|stairs|
|283|star|
|284|steak|
|285|stereo|
|286|stethoscope|
|287|stitches|
|288|stop sign|
|289|stove|
|290|strawberry|
|291|streetlight|
|292|string bean|
|293|submarine|
|294|suitcase|
|295|sun|
|296|swan|
|297|sweater|
|298|swing set|
|299|sword|
|300|syringe|
|301|t-shirt|
|302|table|
|303|teapot|
|304|teddy-bear|
|305|telephone|
|306|television|
|307|tennis racquet|
|308|tent|
|309|The Eiffel Tower|
|310|The Great Wall of China|
|311|The Mona Lisa|
|312|tiger|
|313|toaster|
|314|toe|
|315|toilet|
|316|tooth|
|317|toothbrush|
|318|toothpaste|
|319|tornado|
|320|tractor|
|321|traffic light|
|322|train|
|323|tree|
|324|triangle|
|325|trombone|
|326|truck|
|327|trumpet|
|328|umbrella|
|329|underwear|
|330|van|
|331|vase|
|332|violin|
|333|washing machine|
|334|watermelon|
|335|waterslide|
|336|whale|
|337|wheel|
|338|windmill|
|339|wine bottle|
|340|wine glass|
|341|wristwatch|
|342|yoga|
|343|zebra|
|344|zigzag|
</details>
#### `sketch_rnn` and `sketch_rnn_full`
- `word`: Category the player was prompted to draw.
- `drawing`: An array of strokes. Strokes are represented as 3-tuples consisting of x-offset, y-offset, and a binary variable which is 1 if the pen is lifted between this position and the next, and 0 otherwise.
<details>
<summary>
Click here to see the code for visualizing drawings in Jupyter Notebook or Google Colab:
</summary>
```python
import numpy as np
import svgwrite  # pip install svgwrite
from IPython.display import SVG, display


def draw_strokes(drawing, factor=0.045):
    """Displays a vector drawing as SVG.

    Args:
        drawing: a list of strokes represented as 3-tuples
        factor: scaling factor. The smaller the scaling factor, the bigger the SVG picture, and vice versa.
    """
    def get_bounds(data, factor):
        """Return bounds of data."""
        min_x, max_x = 0, 0
        min_y, max_y = 0, 0
        abs_x, abs_y = 0, 0
        for i in range(len(data)):
            x = float(data[i, 0]) / factor
            y = float(data[i, 1]) / factor
            abs_x += x
            abs_y += y
            min_x = min(min_x, abs_x)
            min_y = min(min_y, abs_y)
            max_x = max(max_x, abs_x)
            max_y = max(max_y, abs_y)
        return (min_x, max_x, min_y, max_y)

    data = np.array(drawing)
    min_x, max_x, min_y, max_y = get_bounds(data, factor)
    dims = (50 + max_x - min_x, 50 + max_y - min_y)
    dwg = svgwrite.Drawing(size=dims)
    dwg.add(dwg.rect(insert=(0, 0), size=dims, fill="white"))
    lift_pen = 1
    abs_x = 25 - min_x
    abs_y = 25 - min_y
    p = "M%s,%s " % (abs_x, abs_y)
    command = "m"
    for i in range(len(data)):
        if lift_pen == 1:
            command = "m"
        elif command != "l":
            command = "l"
        else:
            command = ""
        x = float(data[i, 0]) / factor
        y = float(data[i, 1]) / factor
        lift_pen = data[i, 2]
        p += command + str(x) + "," + str(y) + " "
    the_color = "black"
    stroke_width = 1
    dwg.add(dwg.path(p).stroke(the_color, stroke_width).fill("none"))
    display(SVG(dwg.tostring()))
```
</details>
> **Note**: Sketch-RNN takes as input strokes represented as 5-tuples, with drawings padded to a common maximum length and prefixed by the special start token `[0, 0, 1, 0, 0]`. The 5-tuple representation consists of an x-offset, a y-offset, and p_1, p_2, p_3, a binary one-hot vector of 3 possible pen states: pen down, pen up, end of sketch. More precisely, the first two elements are the offset distances in the x and y directions of the pen from the previous point. The last 3 elements represent a binary one-hot vector of 3 possible states. The first pen state, p_1, indicates that the pen is currently touching the paper, and that a line will be drawn connecting the next point with the current point. The second pen state, p_2, indicates that the pen will be lifted from the paper after the current point, and that no line will be drawn next. The final pen state, p_3, indicates that the drawing has ended, and subsequent points, including the current point, will not be rendered.
><details>
> <summary>
> Click here to see the code for converting drawings to Sketch-RNN input format:
> </summary>
>
> ```python
> import numpy as np
>
>
> def to_sketch_rnn_format(drawing, max_len):
>     """Converts a drawing to Sketch-RNN input format.
>
>     Args:
>         drawing: a list of strokes represented as 3-tuples
>         max_len: maximum common length of all drawings
>
>     Returns:
>         NumPy array of shape (max_len + 1, 5)
>     """
>     drawing = np.array(drawing)
>     result = np.zeros((max_len, 5), dtype=float)
>     l = len(drawing)
>     assert l <= max_len
>     result[0:l, 0:2] = drawing[:, 0:2]
>     result[0:l, 3] = drawing[:, 2]
>     result[0:l, 2] = 1 - result[0:l, 3]
>     result[l:, 4] = 1
>     # Prepend the special start token
>     result = np.vstack([[0, 0, 1, 0, 0], result])
>     return result
> ```
>
></details>
### Data Splits
In the configurations `raw`, `preprocessed_simplified_drawings` and `preprocessed_bitmaps` (default configuration), all the data is contained in the training set, which has 50426266 examples.
`sketch_rnn` and `sketch_rnn_full` have the data split into training, validation and test split. In the `sketch_rnn` configuration, 75K samples (70K Training, 2.5K Validation, 2.5K Test) have been randomly selected from each category. Therefore, the training set contains 24150000 examples, the validation set 862500 examples and the test set 862500 examples. The `sketch_rnn_full` configuration has the full (training) data for each category, which leads to the training set having 43988874 examples, the validation set 862500 and the test set 862500 examples.
## Dataset Creation
### Curation Rationale
From the GitHub repository:
> The Quick Draw Dataset is a collection of 50 million drawings across [345 categories](categories.txt), contributed by players of the game [Quick, Draw!](https://quickdraw.withgoogle.com). The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. You can browse the recognized drawings on [quickdraw.withgoogle.com/data](https://quickdraw.withgoogle.com/data).
>
> We're sharing them here for developers, researchers, and artists to explore, study, and learn from.
### Source Data
#### Initial Data Collection and Normalization
This dataset contains vector drawings obtained from [Quick, Draw!](https://quickdraw.withgoogle.com/), an online game where the players are asked to draw objects belonging to a particular object class in less than 20 seconds.
#### Who are the source language producers?
The participants in the [Quick, Draw!](https://quickdraw.withgoogle.com/) game.
### Annotations
#### Annotation process
The annotations are machine-generated and match the category the player was prompted to draw.
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
Some sketches are known to be problematic (see https://github.com/googlecreativelab/quickdraw-dataset/issues/74 and https://github.com/googlecreativelab/quickdraw-dataset/issues/18).
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim and Nick Fox-Gieg.
### Licensing Information
The data is made available by Google, Inc. under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```bibtex
@article{DBLP:journals/corr/HaE17,
author = {David Ha and
Douglas Eck},
title = {A Neural Representation of Sketch Drawings},
journal = {CoRR},
volume = {abs/1704.03477},
year = {2017},
url = {http://arxiv.org/abs/1704.03477},
archivePrefix = {arXiv},
eprint = {1704.03477},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/HaE17},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false |
# Dataset Card for MAFAND
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/masakhane-io/lafand-mt
- **Repository:** https://github.com/masakhane-io/lafand-mt
- **Paper:** https://aclanthology.org/2022.naacl-main.223/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [David Adelani](https://dadelani.github.io/)
### Dataset Summary
MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
## Dataset Structure
### Data Instances
```
>>> from datasets import load_dataset
>>> data = load_dataset('masakhane/mafand', 'en-yor')
{"translation": {"src": "President Buhari will determine when to lift lockdown – Minister", "tgt": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}
{"translation": {"en": "President Buhari will determine when to lift lockdown – Minister", "yo": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}
```
### Data Fields
- `translation`: a dictionary holding the parallel sentence pair
- `src`: key of the source-language text, e.g. `en`
- `tgt`: key of the target-language text, e.g. `yo`
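For many MT pipelines it is convenient to flatten the nested `translation` dictionaries into aligned source/target lists. A minimal sketch over plain dicts, with key names following the `en-yor` instance above (not an official MAFAND utility):

```python
def to_parallel_lists(examples, src="en", tgt="yo"):
    """Split MAFAND translation dicts into aligned source/target lists."""
    sources = [ex["translation"][src] for ex in examples]
    targets = [ex["translation"][tgt] for ex in examples]
    return sources, targets

# Tiny illustrative sample in the same shape as the dataset rows.
sample = [
    {"translation": {"en": "Good morning", "yo": "E kaaro"}},
    {"translation": {"en": "Thank you", "yo": "E se"}},
]
src_texts, tgt_texts = to_parallel_lists(sample)
print(src_texts)  # ['Good morning', 'Thank you']
```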
### Data Splits
The data is split into train, dev and test sets. A dash ("-") in the Train column means no training data is available for that language.
language| Train| Dev |Test
-|-|-|-
amh |-|899|1037
bam |3302|1484|1600
bbj |2232|1133|1430
ewe |2026|1414|1563
fon |2637|1227|1579
hau |5865|1300|1500
ibo |6998|1500|1500
kin |-|460|1006
lug |4075|1500|1500
luo |4262|1500|1500
mos |2287|1478|1574
nya |-|483|1004
pcm |4790|1484|1574
sna |-|556|1005
swa |30782|1791|1835
tsn |2100|1340|1835
twi |3337|1284|1500
wol |3360|1506|1500|
xho |-|486|1002|
yor |6644|1544|1558|
zul |3500|1239|998|
## Dataset Creation
### Curation Rationale
MAFAND was created from news-domain text, translated from English or French into the target African languages.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
- [Masakhane](https://github.com/masakhane-io/lafand-mt)
- [Igbo](https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt)
- [Swahili](https://opus.nlpl.eu/GlobalVoices.php)
- [Hausa](https://www.statmt.org/wmt21/translation-task.html)
- [Yoruba](https://github.com/uds-lsv/menyo-20k_MT)
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Masakhane members
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CC-BY-4.0-NC](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
false |
# Dataset Card for TellMeWhy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://stonybrooknlp.github.io/tellmewhy/
- **Repository:** https://github.com/StonyBrookNLP/tellmewhy
- **Paper:** https://aclanthology.org/2021.findings-acl.53/
- **Leaderboard:** None
- **Point of Contact:** [Yash Kumar Lal](mailto:ylal@cs.stonybrook.edu)
### Dataset Summary
TellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described.
### Supported Tasks and Leaderboards
The dataset is designed to test why-question answering abilities of models when bound by local context.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point consists of a story, a question and a crowdsourced answer to that question. Additionally, the instance also indicates whether the question's answer would be implicit or if it is explicitly stated in text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context.
```
{
"narrative":"Cam ordered a pizza and took it home. He opened the box to take out a slice. Cam discovered that the store did not cut the pizza for him. He looked for his pizza cutter but did not find it. He had to use his chef knife to cut a slice.",
"question":"Why did Cam order a pizza?",
"original_sentence_for_question":"Cam ordered a pizza and took it home.",
"narrative_lexical_overlap":0.3333333333,
"is_ques_answerable":"Not Answerable",
"answer":"Cam was hungry.",
"is_ques_answerable_annotator":"Not Answerable",
"original_narrative_form":[
"Cam ordered a pizza and took it home.",
"He opened the box to take out a slice.",
"Cam discovered that the store did not cut the pizza for him.",
"He looked for his pizza cutter but did not find it.",
"He had to use his chef knife to cut a slice."
],
"question_meta":"rocstories_narrative_41270_sentence_0_question_0",
"helpful_sentences":[
],
"human_eval":false,
"val_ann":[
],
"gram_ann":[
]
}
```
### Data Fields
- `question_meta` - Unique meta for each question in the corpus
- `narrative` - Full narrative from ROCStories. Used as the context with which the question and answer are associated
- `question` - Why question about an action or event in the narrative
- `answer` - Crowdsourced answer to the question
- `original_sentence_for_question` - Sentence in narrative from which question was generated
- `narrative_lexical_overlap` - Unigram overlap of answer with the narrative
- `is_ques_answerable` - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If "Not Answerable", it is part of the Implicit-Answer questions subset, which is harder for models.
- `is_ques_answerable_annotator` - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative.
- `original_narrative_form` - ROCStories narrative as an array of its sentences
- `human_eval` - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset.
- `val_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human_eval flag is False.
- `gram_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human_eval flag is False.
### Data Splits
The data is split into training, validation, and test sets.
| Train | Valid | Test |
| ------ | ----- | ----- |
| 23964 | 2992 | 3563 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
ROCStories corpus (Mostafazadeh et al, 2016)
#### Initial Data Collection and Normalization
ROCStories was used to create why-questions related to actions and events in the stories.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Amazon Mechanical Turk workers were provided a story and an associated why-question, and asked to answer. Three answers were collected for each question. For a small subset of questions, the quality of answers was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset.
#### Who are the annotators?
Amazon Mechanical Turk workers
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Evaluation
To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings [here](https://github.com/StonyBrookNLP/tellmewhy). Once inference on the test set has been completed, please filter out the answers on which human evaluation needs to be performed by selecting the questions (one answer per question, deduplication might be needed) in the test set where the `human_eval` flag is set to `True`. This subset can then be used to complete the requisite evaluation on TellMeWhy.
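The selection step described above can be sketched as follows. This is a minimal illustration over plain dicts (field names as in the Data Fields section), not the authors' evaluation code:

```python
def select_human_eval_subset(test_split):
    """Keep one answer per question flagged for human evaluation."""
    seen = set()
    subset = []
    for example in test_split:
        # Deduplicate on question_meta so each question appears once.
        if example["human_eval"] and example["question_meta"] not in seen:
            seen.add(example["question_meta"])
            subset.append(example)
    return subset

# Tiny illustrative rows in the same shape as the dataset.
rows = [
    {"question_meta": "q0", "human_eval": True, "answer": "first"},
    {"question_meta": "q0", "human_eval": True, "answer": "dup"},
    {"question_meta": "q1", "human_eval": False, "answer": "skip"},
]
print(len(select_human_eval_subset(rows)))  # 1
```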
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{lal-etal-2021-tellmewhy,
title = "{T}ell{M}e{W}hy: A Dataset for Answering Why-Questions in Narratives",
author = "Lal, Yash Kumar and
Chambers, Nathanael and
Mooney, Raymond and
Balasubramanian, Niranjan",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.53",
doi = "10.18653/v1/2021.findings-acl.53",
pages = "596--610",
}
```
### Contributions
Thanks to [@yklal95](https://github.com/ykl7) for adding this dataset. |
false |
# Dataset Card for Telugu ASR Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset. |
true | |
false | # Wine
The [Wine dataset](https://www.kaggle.com/datasets/ghassenkhaled/wine-quality-data) from Kaggle.
Classify wine as red or white.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| wine | Binary classification | Is this red wine? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/wine")["train"]
``` |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models across the 9 task types, with nDCG@10 as the primary metric.
The current best performing models can be found in the [official leaderboard spreadsheet](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` (jsonlines) file that contains a list of dictionaries, each with three fields: `_id` (unique document identifier), `title` (document title, optional) and `text` (document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (jsonlines) file that contains a list of dictionaries, each with two fields: `_id` (unique query identifier) and `text` (query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file with three columns, i.e. the `query-id`, `corpus-id` and `score` in this order; the first row is a header. For example: `q1 doc1 1`
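Given this layout, a minimal stdlib-only loader can be sketched as follows (the helper names and file paths are illustrative; the official `beir` package ships its own loader for the same layout):

```python
import csv
import json

def load_jsonl(path):
    # One JSON object per line, keyed by its "_id" field.
    with open(path, encoding="utf-8") as f:
        return {d["_id"]: d for d in (json.loads(line) for line in f if line.strip())}

def load_qrels(path):
    # Tab-separated columns: query-id, corpus-id, score; the first row is a header.
    qrels = {}
    with open(path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels
```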
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id.
- `corpus-id`: a `string` feature representing the document id.
- `score`: an `int32` feature denoting the relevance judgement between the query and the document.
### Data Splits
| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false |
# Dataset Card for "Amazon-QA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://jmcauley.ucsd.edu/data/amazon/qa/](http://jmcauley.ucsd.edu/data/amazon/qa/)
- **Repository:** [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [Julian McAuley](https://cseweb.ucsd.edu//~jmcauley/#)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:** 247 MB
### Dataset Summary
This dataset contains Question and Answer data from Amazon.
Disclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary:
```
{"query": [sentence_1], "pos": [sentence_2]}
{"query": [sentence_1], "pos": [sentence_2]}
...
{"query": [sentence_1], "pos": [sentence_2]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/Amazon-QA")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['query', 'pos'],
num_rows: 1095290
})
})
```
Review an example with:
```python
dataset["train"][0]
```
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
#### Who are the source language producers?
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Annotations
#### Annotation process
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
#### Who are the annotators?
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Personal and Sensitive Information
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Discussion of Biases
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Other Known Limitations
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
## Additional Information
### Dataset Curators
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Licensing Information
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/)
### Citation Information
### Contributions
|
false |
# lexica-aperture-v3
[scrape script](https://github.com/hlky/scrape/blob/main/lexica.py)
```
1105167 x 15 columns
'id', 'width', 'height', 'upscaled_width', 'upscaled_height', 'is_upscaled', 'url', 'upscaled_url', 'userid', 'prompt', 'negativePrompt', 'timestamp', 'seed', 'cfg', 'model'
```
`is_upscaled = width != upscaled_width`
If an image is upscaled, then its url is
```
https://image.lexica.art/md2/{id}
```
and upscaled url is
```
https://image.lexica.art/full_jpg/{id}
```
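Putting the two URL patterns together, a small helper for upscaled images might look like this (an illustrative sketch; the function name is not part of the scrape script):

```python
def upscaled_urls(image_id):
    # URL pattern for upscaled images, per the note above; the pattern
    # for non-upscaled images is not documented here.
    base = "https://image.lexica.art"
    return {
        "url": f"{base}/md2/{image_id}",
        "upscaled_url": f"{base}/full_jpg/{image_id}",
    }
```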
author's note: no upscaled v3 images discovered so far |
false | # Dataset Card for "cl-signal_processing_attacks_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for EmoWOZ Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [EmoWOZ Dataset repository](https://zenodo.org/record/6506504), [EmoWOZ Benchmark repository](https://gitlab.cs.uni-duesseldorf.de/general/dsml/emowoz-public)
- **Paper:** [EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems](https://aclanthology.org/2022.lrec-1.436/)
- **Leaderboard:** [Papers with Code leaderboard for EmoWOZ Dataset](https://paperswithcode.com/dataset/emowoz-1)
- **Point of Contact:** [Shutong Feng](mailto:shutong.feng@hhu.de)
### Dataset Summary
EmoWOZ is based on [MultiWOZ, a multi-domain task-oriented dialogue dataset](https://github.com/budzianowski/multiwoz). It contains more than 11K task-oriented dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues (DialMAGE) within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. There are 7 emotion labels, which are adapted from the OCC emotion models: _Neutral_, _Satisfied_, _Dissatisfied_, _Excited_, _Apologetic_, _Fearful_, _Abusive_.
Some of the statistics about the dataset:
| Metric | Value |
| ---------- | ---------------- |
| # Dialogues | 11434 |
| # Turns | 167234 |
| # Annotations | 83617 |
| # Unique Tokens | 28417 |
| Average Turns per Dialogue | 14.63 |
| Average Tokens per Turn | 12.78 |
Emotion Distribution in EmoWOZ and subsets:
| Emotion | EmoWOZ | MultiWOZ | DialMAGE |
| ---------- | ---------------- | ---------- | ---------------- |
| Neutral | 58,656 | 51,426 | 7,230 |
| Satisfied | 17,532 | 17,061 | 471 |
| Dissatisfied | 5,117 | 914 | 4,203 |
| Excited | 971 | 860 | 111 |
| Apologetic | 840 | 838 | 2 |
| Fearful | 396 | 381 | 15 |
| Abusive | 105 | 44 | 61 |
### Supported Tasks and Leaderboards
- 'Emotion Recognition in Conversations': See the [Papers With Code leaderboard](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-emowoz) for more models.
- 'Additional Classification Tasks': According to the initial benchmark [paper](https://aclanthology.org/2022.lrec-1.436/), emotion labels in EmoWOZ can be mapped to sentiment polarities. Therefore, sentiment classification and sentiment analysis can also be performed. Since EmoWOZ has two subsets: MultiWOZ (human-to-human) and DialMAGE (human-to-machine), it is also possible to perform cross-domain emotion/sentiment recognition.
### Languages
Only English is represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string id for the dialogue, a list of strings for the dialogue utterances, and a list of integers for the emotion labels.
```
{
'dialogue_id': 'PMUL4725.json',
'log': {
'text': [
'Hi, i am looking for some museums that I could visit when in town, could you help me find some?',
'Is there an area of town you prefer?',
"No, I don't care.",
"I recommend the Cafe Jello Gallery in the west. It's free to enter!",
'I also need a place to stay',
'Great! There are 33 hotels in the area. What area of town would you like to stay in? What is your preference on price?',
" The attraction should be in the type of museum. I don't care about the price range or the area",
'Just to clarify - did you need a different museum? Or a hotel?',
'That museum from earlier is fine, I just need their postalcode. I need a hotel two in the west and moderately priced. ',
"The postal code for Cafe Jello Gallery is cb30af. Okay, Hobson's House matches your request. ",
'Do they have internet?',
'Yes they do. Would you like me to book a room for you?',
"No thanks. I will do that later. Can you please arrange for taxi service from Cafe Jello to Hobson's House sometime after 04:00?",
'I was able to book that for you. Be expecting a grey Tesla. If you need to reach them, please call 07615015749. ',
'Well that you that is all i need for today',
'Your welcome. Have a great day!'
],
'emotion': [0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1]
}
}
```
### Data Fields
- `dialogue_id`: a string representing the unique id of the dialogue. For MultiWOZ dialogues, the original id is kept. For DialMAGE dialogues, ids are in the format DMAGExxx.json, where xxx is an integer with a variable number of digits.
- `text`: a list of strings containing the dialogue turns.
- `emotion`: a list of integers containing the sequence of emotion labels for the dialogue. Specifically,
- -1: system turns with unlabelled emotion
- 0: neutral, no emotion expressed
- 1: fearful, or sad/disappointed, negative emotion elicited by facts/events, which is out of the system's control
- 2: dissatisfied, negative emotion elicited by the system, usually after the system's poor performance
- 3: apologetic, negative emotion from the user, usually expressing apologies for causing confusion or changing search criteria
- 4: abusive, negative emotion elicited by the system, expressed in an impolite way
- 5: excited, positive emotion elicited by facts/events
- 6: satisfied, positive emotion elicited by the system
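The mapping above can be written as a small lookup table (a sketch; the names follow the list above, with `-1` marking unlabelled system turns):

```python
EMOWOZ_LABELS = {
    -1: "unlabelled (system turn)",
    0: "neutral",
    1: "fearful",
    2: "dissatisfied",
    3: "apologetic",
    4: "abusive",
    5: "excited",
    6: "satisfied",
}

def label_dialogue(emotions):
    # Map a dialogue's integer `emotion` sequence to label names.
    return [EMOWOZ_LABELS[e] for e in emotions]
```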
### Data Splits
The EmoWOZ dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Emotion Annotations in Split | Of Which from MultiWOZ | Of Which from DialMAGE |
| ------------- | -------------------------------------- | ---------------------- | ---------------------- |
| Train | 66,474 | 56,778 | 9,696 |
| Validation | 8,509 | 7,374 | 1,135 |
| Test | 8,634 | 7,372 | 1,262 |
## Dataset Creation
### Curation Rationale
EmoWOZ was built on top of MultiWOZ because MultiWOZ is a well-established dataset for task-oriented dialogue modelling, allowing further study of the impact of user emotions on downstream tasks. The additional 1,000 human-machine dialogues (DialMAGE) were collected to improve emotion coverage and the diversity of emotional expression.
### Source Data
#### Initial Data Collection and Normalization
MultiWOZ dialogues were inherited from the work of [MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling](https://aclanthology.org/D18-1547/).
DialMAGE dialogues were collected from a human evaluation of an RNN-based policy trained on MultiWOZ, conducted on the Amazon Mechanical Turk platform.
#### Who are the source language producers?
The text of both MultiWOZ and DialMAGE was written by workers on Amazon Mechanical Turk platform. For detailed data collection set-ups, please refer to their respective publications.
### Annotations
All dialogues take place between a _user_ and a _system_ (or an _operator_). The dialogue always starts with a user turn, which is always followed by a system response, and ends with a system turn. Only user turns are annotated with an emotion label.
#### Annotation process
Each user utterance was annotated by three annotators. The final label was determined by majority voting. If there was no agreement, the final label would be resolved manually.
For details such as annotator selection process and quality assurance methods, please refer to the EmoWOZ publication.
#### Who are the annotators?
Annotators are crowdsourced workers on the Amazon Mechanical Turk platform.
### Personal and Sensitive Information
All annotators are anonymised. There is no personal information in EmoWOZ.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop task-oriented dialogue systems that can perceive human emotions and avoid abusive behaviours. This task is useful for building more human-like dialogue agents.
### Discussion of Biases
There is bias in emotion distribution in the MultiWOZ (human-human) and DialMAGE (human-machine) subset of EmoWOZ. The linguistic styles are also different between the two subsets.
As pointed out in [Reevaluating Data Partitioning for Emotion Detection in EmoWOZ](https://arxiv.org/abs/2303.13364), there is also emotion shift across the train-dev-test split of the MultiWOZ subset. EmoWOZ keeps the original data split of MultiWOZ, which is suitable for task-oriented dialogue modelling, but the emotion distributions across these splits differ. Further investigation is needed.
### Other Known Limitations
The emotion distribution is unbalanced where _neutral_, _satisfied_, and _dissatisfied_ make up more than 95% of the labels.
## Additional Information
### Dataset Curators
The collection and annotation of EmoWOZ were conducted by the [Chair for Dialog Systems and Machine Learning at Heinrich Heine Universität Düsseldorf](https://www.cs.hhu.de/lehrstuehle-und-arbeitsgruppen/dialog-systems-and-machine-learning).
### Licensing Information
The EmoWOZ dataset is released under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{feng-etal-2022-emowoz,
title = "{E}mo{WOZ}: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems",
author = "Feng, Shutong and
Lubis, Nurul and
Geishauser, Christian and
Lin, Hsien-chin and
Heck, Michael and
van Niekerk, Carel and
Gasic, Milica",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.436",
pages = "4096--4113",
abstract = "The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.",
}
``` |
true |
# Dataset Card for RTE_TH
### Dataset Description
This dataset is a Thai-translated version of [RTE](https://huggingface.co/datasets/super_glue/viewer/rte), produced with Google Translate and scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to assess the quality of each Thai translation. |
false |
# MIRACL (bn) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
For each document, we computed the embedding of `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embeddings against the corpus embeddings either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-bn-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, embedding_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking based loss), as well as hit@3: Is at least one relevant document in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
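For reference, the hit@k metric reported in the tables above can be sketched as follows (illustrative function names; `runs` maps query ids to ranked document ids, `qrels` maps query ids to relevance judgements):

```python
def hit_at_k(ranked_ids, relevant_ids, k=3):
    # 1.0 if at least one relevant document appears in the top-k results.
    return float(any(doc_id in relevant_ids for doc_id in ranked_ids[:k]))

def mean_hit_at_k(runs, qrels, k=3):
    # runs:  {query_id: [doc_id, ...] ranked by descending score}
    # qrels: {query_id: {doc_id: relevance}}
    scores = [
        hit_at_k(ranked, {d for d, r in qrels[qid].items() if r > 0}, k)
        for qid, ranked in runs.items()
    ]
    return sum(scores) / len(scores)
```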
|
false |
# Stack Overflow Python Q&A Dataset
## Description
Python Q&A filtered to the API_Usage subcategory, with the following removed:
1. Images
2. Links
3. Blocks of code
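The score normalization applied to this card's score columns (MaxAbsScaler followed by tanh) can be sketched in plain Python (an illustrative, single-column equivalent of sklearn's `MaxAbsScaler`):

```python
import math

def max_abs_scale(values):
    # Single-column equivalent of sklearn's MaxAbsScaler:
    # divide every value by the largest absolute value.
    peak = max(abs(v) for v in values)
    return [v / peak for v in values] if peak else list(values)

def squash(values):
    # Tanh maps the joint scores into (-1, 1).
    return [math.tanh(v) for v in values]
```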
Scores in Q1-Q3 were scaled with MaxAbsScaler; a tanh function was then applied to the joint scores. |
false |
# Dataset Card for PMC Open Access Subset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [PubMed Central](mailto:pubmedcentral@ncbi.nlm.nih.gov)
### Dataset Summary
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse. Not all articles in PMC are available for text mining and other reuse; many are under
copyright protection. Articles in the PMC Open Access Subset, however, are made available under Creative Commons or
similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The
PMC Open Access Subset is one part of the PMC Article Datasets.
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
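As an illustration, records can be bucketed into these groupings via the `license` field described in the Data Fields section (the grouping map and sample records here are illustrative assumptions, not part of the dataset):

```python
COMMERCIAL = {"CC0", "CC BY", "CC BY-SA", "CC BY-ND"}
NON_COMMERCIAL = {"CC BY-NC", "CC BY-NC-SA", "CC BY-NC-ND"}

def license_group(license_str):
    """Map a license string to one of the three PMC OA groupings."""
    if license_str in COMMERCIAL:
        return "commercial"
    if license_str in NON_COMMERCIAL:
        return "non_commercial"
    return "other"

# Invented placeholder records mirroring the card's `license` field
articles = [
    {"accession_id": "PMC176545", "license": "CC BY"},
    {"accession_id": "PMC000001", "license": "CC BY-NC"},
    {"accession_id": "PMC000002", "license": "custom"},
]
commercial_ok = [a for a in articles if license_group(a["license"]) == "commercial"]
print([a["accession_id"] for a in commercial_ok])  # ['PMC176545']
```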
### Supported Tasks and Leaderboards
- Language modeling
### Languages
English (`en`).
## Dataset Structure
### Data Instances
```
{
'text': "==== Front\nPLoS BiolPLoS BiolpbioplosbiolPLoS Biology1544-91731545-7885Public Library of Science San Francisco, USA 10.1371/journal.pbio.0000005Research ArticleGenetics/Genomics/Gene TherapyInfectious DiseasesMicrobiologyPlasmodiumThe Transcriptome of the Intraerythrocytic Developmental Cycle of Plasmodium falciparum\n P. falciparum IDC TranscriptomeBozdech Zbynek \n1\nLlinás Manuel \n1\nPulliam Brian Lee \n1\nWong Edith D \n1\nZhu Jingchun \n2\nDeRisi Joseph L joe@derisilab.ucsf.edu\n1\n1Department of Biochemistry and Biophysics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America2Department of Biological and Medical Informatics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America10 2003 18 8 2003 18 8 2003 1 1 e512 6 2003 25 7 2003 Copyright: ©2003 Bozdech et al.2003This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.\nMicroarray Analysis: Genome-Scale Hypothesis Scanning \n\nMonitoring Malaria: Genomic Activity of the Parasite in Human Blood Cells \n\nPlasmodium falciparum is the causative agent of the most burdensome form of human malaria, affecting 200–300 million individuals per year worldwide. The recently sequenced genome of P. falciparum revealed over 5,400 genes, of which 60% encode proteins of unknown function. Insights into the biochemical function and regulation of these genes will provide the foundation for future drug and vaccine development efforts toward eradication of this disease. By analyzing the complete asexual intraerythrocytic developmental cycle (IDC) transcriptome of the HB3 strain of P. falciparum, we demonstrate that at least 60% of the genome is transcriptionally active during this stage. 
Our data demonstrate that this parasite has evolved an extremely specialized mode of transcriptional regulation that produces a continuous cascade of gene expression, beginning with genes corresponding to general cellular processes, such as protein synthesis, and ending with Plasmodium-specific functionalities, such as genes involved in erythrocyte invasion. The data reveal that genes contiguous along the chromosomes are rarely coregulated, while transcription from the plastid genome is highly coregulated and likely polycistronic. Comparative genomic hybridization between HB3 and the reference genome strain (3D7) was used to distinguish between genes not expressed during the IDC and genes not detected because of possible sequence variations...
'pmid': '12929205',
'accession_id': 'PMC176545',
'license': 'CC BY',
'last_updated': '2021-01-05 08:21:03',
'retracted': 'no',
'citation': 'PLoS Biol. 2003 Oct 18; 1(1):e5'
}
```
### Data Fields
- `text`: Text content.
- `pmid`: PubMed ID.
- `accession_id`: Unique identifier for a sequence record.
- `license`: License type.
- `last_updated`: Date of last update.
- `retracted`: Whether retracted or not.
- `citation`: Citation reference.
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License terms vary. Please refer to the license statement in each article for specific terms of use.
Within the PMC Open Access Subset, there are three groupings based on available license terms:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
```
PMC Open Access Subset [Internet]. Bethesda (MD): National Library of Medicine. 2003 - [cited YEAR MONTH DAY]. Available from https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
false |
# Wikinews-fr-100 Benchmark Dataset for Keyphrase Generation
## About
Wikinews-fr-100 is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 100 news articles in French collected from [wikinews](https://fr.wikinews.org/wiki/Accueil).
Keyphrases were annotated by readers (students in computer science) in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2013)][bougouin-2013].
Reference (reader-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Test | 100 | 306.9 | 9.64 | 95.91 | 1.40 | 0.85 | 1.84 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Bougouin et al., 2013) Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013.
[TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction][bougouin-2013].
In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[bougouin-2013]: https://aclanthology.org/I13-1062/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
false |
# Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/quran-qa-2022/home
- **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/
- **Paper:** https://dl.acm.org/doi/10.1145/3400396
- **Leaderboard:**
- **Point of Contact:** @piraka9011
### Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are
coupled with their extracted answers to constitute 1,337 question-passage-answer triplets.
### Supported Tasks and Leaderboards
This task is evaluated as a ranking task.
To give credit to a QA system that retrieves an answer (not necessarily at the first rank) that does not fully
match one of the gold answers but partially matches it, we use the partial Reciprocal Rank (pRR) measure.
It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.
pRR is the official evaluation measure of this shared task.
We also report Exact Match (EM) and F1@1, which are evaluation metrics applied only to the top predicted answer.
The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the
gold answers.
The F1@1 metric, in contrast, measures the token overlap between the top predicted answer and the best matching gold answer.
To get an overall evaluation score, each of the above measures is averaged over all questions.
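A minimal sketch of the EM and token-level F1 computations (whitespace tokenization is an assumption here, and pRR, whose partial-matching definition comes from the shared task's official evaluation script, is omitted):

```python
def exact_match(pred, gold_answers):
    """Binary: 1 if the top prediction exactly matches any gold answer."""
    return int(any(pred == g for g in gold_answers))

def f1_at_1(pred, gold_answers):
    """Token-overlap F1 between the top prediction and the best-matching gold answer."""
    def f1(p, g):
        p_toks, g_toks = p.split(), g.split()
        common = sum(min(p_toks.count(t), g_toks.count(t)) for t in set(p_toks))
        if common == 0:
            return 0.0
        precision = common / len(p_toks)
        recall = common / len(g_toks)
        return 2 * precision * recall / (precision + recall)
    return max(f1(pred, g) for g in gold_answers)

print(exact_match("أيوب", ["أيوب"]))                   # 1
print(round(f1_at_1("prophet ayyub", ["ayyub"]), 3))   # 0.667
```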
### Languages
Qur'anic Arabic
## Dataset Structure
### Data Instances
To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain
one or more answers to that question, as shown below:
```json
{
"pq_id": "38:41-44_105",
"passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.",
"surah": 38,
"verses": "41-44",
"question": "من هو النبي المعروف بالصبر؟",
"answers": [
{
"text": "أيوب",
"start_char": 12
}
]
}
```
Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different
question.
Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a
different Qur’anic passage.
The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the
Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2.
### Data Fields
* `pq_id`: Sample ID
* `passage`: Context text
* `surah`: Surah number
* `verses`: Verse range
* `question`: Question text
* `answers`: List of answers and their start character
### Data Splits
| **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** |
|-------------|:-----:|:-----------------------------:|:---------------------------------------:|
| Training | 65% | 710 | 861 |
| Development | 10% | 109 | 128 |
| Test | 25% | 274 | 348 |
| All | 100% | 1,093 | 1,337 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode
For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/
### Citation Information
```
@article{malhas2020ayatec,
author = {Malhas, Rana and Elsayed, Tamer},
title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an},
year = {2020},
issue_date = {November 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {19},
number = {6},
issn = {2375-4699},
url = {https://doi.org/10.1145/3400396},
doi = {10.1145/3400396},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
month = {oct},
articleno = {78},
numpages = {21},
keywords = {evaluation, Classical Arabic}
}
```
### Contributions
Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
|
false | # Dataset Card for Fashion12K DE
## Dataset Description
- **Repository:** [Fashion12K](https://github.com/Toloka/Fashion12K_german_queries)
### Dataset Summary
This dataset is a German-language dataset based on the [Fashion12K](https://github.com/Toloka/Fashion12K_german_queries) dataset, which originally contains both English and German text descriptions for each item.
This dataset was used to fine-tune CLIP using the [Finetuner](https://finetuner.jina.ai/) tool.
## Dataset Structure
### Data Instances
Each data point consists of a 'text' and an 'image' field, where the 'text' field describes an item of clothing in German, and the 'image' field contains an image of that item of clothing.
### Data Fields
- 'text': A string describing the item of clothing.
- 'image': A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
### Data Splits
| | train | test |
|------------|-------|------|
| # of items | 10000 | 2001 |
### Source Data
#### Initial Data Collection and Normalization
Images were sampled from the [Fashion200K dataset](https://github.com/xthan/fashion-200k).
### Annotations
#### Annotation process
Data was annotated using [Toloka](https://toloka.ai/). See their site for more details.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License.
### Contributions
Thanks to contributors from [Jina AI](https://jina.ai) and [Toloka](https://toloka.ai) for adding this dataset. |
false |
# Dataset Card for Multilingual Grammar Error Correction
## Dataset Description
- **Homepage:** https://juancavallotti.com
- **Paper:** https://blog.juancavallotti.com/2023/01/06/training-a-multi-language-grammar-error-correction-system/
- **Point of Contact:** Juan Alberto López Cavallotti
### Dataset Summary
This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German.
This dataset was developed as a component for the [Squidigies](https://squidgies.app/) platform.
### Supported Tasks and Leaderboards
* **Grammar Error Correction:** By appending the prefix *fix grammar:* to the prompt.
* **Language Detection:** By appending the prefix: *language:* to the prompt.
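A sketch of how these prefixes would be attached to model inputs before feeding them to the fine-tuned model (the helper function is ours, not part of the dataset):

```python
def make_prompt(task, text):
    """Build a T5-style model input using the task prefixes described above."""
    prefixes = {
        "fix_grammar": "fix grammar: ",  # grammar error correction
        "language": "language: ",        # language detection
    }
    return prefixes[task] + text

print(make_prompt("fix_grammar", "She go to school every days."))
# fix grammar: She go to school every days.
print(make_prompt("language", "Wie geht es dir?"))
# language: Wie geht es dir?
```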
### Languages
* English
* Spanish
* French
* German
## Dataset Structure
### Data Instances
The dataset contains the following instances for each language:
* German 32282 sentences.
* English 51393 sentences.
* Spanish 67672 sentences.
* French 67157 sentences.
### Data Fields
* `lang`: The language of the sentence
* `sentence`: The original sentence.
* `modified`: The corrupted sentence.
* `transformation`: The primary transformation used by the synthetic data generator.
* `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator.
### Data Splits
* `train`: No specific split is defined. I recommend evaluating on 1k sentences sampled randomly from each language, combined with the SacreBLEU metric.
## Dataset Creation
### Curation Rationale
This dataset was generated synthetically through code with the help of information of common grammar errors harvested throughout the internet.
### Source Data
#### Initial Data Collection and Normalization
The source grammatical sentences come from various open-source datasets, such as Tatoeba.
#### Who are the source language producers?
* Juan Alberto López Cavallotti
### Annotations
#### Annotation process
The annotation is automatic and produced by the generation script.
#### Who are the annotators?
* Data generation script by Juan Alberto López Cavallotti
### Other Known Limitations
The dataset doesn't cover all the possible grammar errors but serves as a starting point that generates fair results.
## Additional Information
### Dataset Curators
* Juan Alberto López Cavallotti
### Licensing Information
This dataset is distributed under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0)
### Citation Information
Please mention this original dataset and the author **Juan Alberto López Cavallotti**
### Contributions
* Juan Alberto López Cavallotti |
false |
This dataset is curated by [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html) in the SQuAD format, with features 'question', 'answers', 'answers_start' and 'context'. The source dataset
comes from [Climatewatchdata](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1),
where Climate Watch has analysed the Intended Nationally Determined Contributions (INDCs), NDCs and Revised/Updated NDCs of countries to answer some important questions related to climate change.
Specifications:
- Dataset size: 31,382
- Average context length: 50 words
- Language: English
The list of sectors covered includes: 'Agriculture', 'Coastal Zone', 'Cross-Cutting Area', 'Education', 'Energy', 'Environment', 'Water', 'Buildings', 'Economy-wide', 'Industries', 'Transport', 'Waste', 'Health', 'LULUCF/Forestry', 'Social Development', 'Disaster Risk Management (DRM)', 'Urban', 'Tourism'.
Some of the important question categories pertaining to climate change (adapted from Climatewatchdata) include:
- Sectoral Policies
- Sectoral Unconditional Actions
- Building on existing downstream actions
- Sectoral plans
- Sectoral targets
- Action and priority
- Adapt Now sector
- Emission reduction potential
- Capacity Building Needs for Sectoral Implementation
- Sectoral Conditional Actions
- Technology Transfer Needs for Sectoral Implementation
- Conditional part of mitigation target
- Capacity building needs
- Technology needs
- Unconditional part of mitigation target
- Time frame
A 'no answer' category like that of SQuAD 2.0 is not part of the dataset, but one can easily be curated from existing examples. |
false | # Iris
The [Iris dataset](https://archive-beta.ics.uci.edu/dataset/53/iris) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------------|
| iris | Multiclass classification | Classify iris type. |
| setosa | Binary classification | Is this an iris-setosa? |
| versicolor | Binary classification | Is this an iris-versicolor? |
| virginica | Binary classification | Is this an iris-virginica? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/iris", "iris")["train"]
``` |
true | # Dataset Card for A Benchmark Dataset for Identifying Machine-Generated Scientific Papers in the LLM Era
## Dataset Description
- **Repository:** https://github.com/qwenzo/-IDMGSP
- **Paper:** TODO
### Dataset Summary
A benchmark for detecting machine-generated scientific papers based on their abstract, introduction and conclusion sections.
### Supported Tasks and Leaderboards
TODO
Current benchmark results in terms of accuracy:
| Model | Train Dataset | TEST (classifier_input - test) | OOD-GPT3 (ood_gpt3) | OOD-REAL (ood_real) | TECG (tecg) |
|-----------------|---------------|--------|----------|----------|----------|
| GPT-3 (our) | TRAIN-SUB | 99.96% | 25.9% | 99.07% | 100% |
| [IDMGSP-Galactica-TRAIN](https://huggingface.co/tum-nlp/IDMGSP-Galactica-TRAIN) (our) | TRAIN (classifier_input - train) | 98.3% | 24.6% | 95.8% | 83% |
| [IDMGSP-Galactica-TRAIN_GPT3](https://huggingface.co/tum-nlp/IDMGSP-Galactica-TRAIN_GPT3) (our) | TRAIN+GPT3 (train+gpt3) | 98.5% | 70% | 92.1% | 87.2% |
| [IDMGSP-Galactica-TRAIN-CG](https://huggingface.co/tum-nlp/IDMGSP-Galactica-TRAIN-CG) (our) | TRAIN-CG (train-cg) | 95% | 11.1% | 96.9% | 42% |
| RoBERTa (TODO add link to model) (our) | TRAIN (classifier_input - train) | 86% | 23% | 76% | 100% |
| RoBERTa (TODO add link to model) (our) | TRAIN+GPT3 (train+gpt3) | 68% | 100% | 36% | 63% |
| RoBERTa (TODO add link to model) (our) | TRAIN-CG (train-cg) | 75% | 32% | 58% | 88% |
| DetectGPT | - | 61.5% | 0% | 99.92% | 68.7% |
### Languages
English
## Dataset Structure
### Data Instances
Each instance in the dataset corresponds to a row in a CSV file, encompassing the features of a paper, its label, and the paper's source.
### Data Fields
#### classifier_input
- name: id
description: The ID of the provided paper corresponds to the identifier assigned by the arXiv database if the paper's source is marked as "real".
dtype: string
- name: year
description: year of the publication as given by the arXiv database.
dtype: string
- name: title
description: title of the paper given by the arXiv database.
dtype: string
- name: abstract
description: abstract of the paper given by the arXiv database.
dtype: string
- name: introduction
description: introduction section of the paper. extracted by the PDF parser.
dtype: string
- name: conclusion
description: conclusion section of the paper. extracted by the PDF parser.
dtype: string
- name: categories
description: topics/domains of the paper given by the arXiv database. This field is null if the src field is not "real".
dtype: string
- name: src
description: indicator of the source of the paper. This can have the values "chatgpt", "gpt2", "real", "scigen" or "galactica".
dtype: string
- name: label
description: 0 for real/human-written papers and 1 for fake/machine-generated papers.
dtype: int64
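A sketch of loading a CSV with the `classifier_input` schema above and filtering the machine-generated rows (the sample rows below are invented placeholders, not real records from the dataset):

```python
import csv
import io

# In-memory sample mirroring the classifier_input column layout
sample = io.StringIO(
    "id,year,title,abstract,introduction,conclusion,categories,src,label\n"
    "2101.00001,2021,A real paper,abs,intro,concl,cs.CL,real,0\n"
    "x1,2021,A fake paper,abs,intro,concl,,chatgpt,1\n"
)
rows = list(csv.DictReader(sample))

# label == 1 marks fake/machine-generated papers
fake = [r for r in rows if int(r["label"]) == 1]
print([r["src"] for r in fake])  # ['chatgpt']
```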
#### train+gpt3
- name: id
description: The ID of the provided paper corresponds to the identifier assigned by the arXiv database if the paper's source is marked as "real".
dtype: string
- name: year
description: year of the publication as given by the arXiv database.
dtype: string
- name: title
description: title of the paper given by the arXiv database.
dtype: string
- name: abstract
description: abstract of the paper given by the arXiv database.
dtype: string
- name: introduction
description: introduction section of the paper. extracted by the PDF parser.
dtype: string
- name: conclusion
description: conclusion section of the paper. extracted by the PDF parser.
dtype: string
- name: categories
description: topics/domains of the paper given by the arXiv database. This field is null if the src field is not "real".
dtype: string
- name: src
description: indicator of the source of the paper. This can have the values "chatgpt", "gpt2", "real", "scigen" or "galactica", "gpt3".
dtype: string
- name: label
description: 0 for real/human-written papers and 1 for fake/machine-generated papers.
dtype: int64
#### tecg
- name: id
description: The ID of the provided paper corresponds to the identifier assigned by the arXiv database if the paper's source is marked as "real".
dtype: string
- name: year
description: year of the publication as given by the arXiv database.
dtype: string
- name: title
description: title of the paper given by the arXiv database.
dtype: string
- name: abstract
description: abstract of the paper given by the arXiv database.
dtype: string
- name: introduction
description: introduction section of the paper. extracted by the PDF parser.
dtype: string
- name: conclusion
description: conclusion section of the paper. extracted by the PDF parser.
dtype: string
- name: categories
description: topics/domains of the paper given by the arXiv database. This field is null if the src field is not "real".
dtype: string
- name: src
description: indicator of the source of the paper. Always has the value "chatgpt".
dtype: string
- name: label
description: always having the value 1.
dtype: int64
#### train-cg
- name: id
description: The ID of the provided paper corresponds to the identifier assigned by the arXiv database if the paper's source is marked as "real".
dtype: string
- name: year
description: year of the publication as given by the arXiv database.
dtype: string
- name: title
description: title of the paper given by the arXiv database.
dtype: string
- name: abstract
description: abstract of the paper given by the arXiv database.
dtype: string
- name: introduction
description: introduction section of the paper. extracted by the PDF parser.
dtype: string
- name: conclusion
description: conclusion section of the paper. extracted by the PDF parser.
dtype: string
- name: categories
description: topics/domains of the paper given by the arXiv database. This field is null if the src field is not "real".
dtype: string
- name: src
description: indicator of the source of the paper. This can have the values "gpt2", "real", "scigen" or "galactica".
dtype: string
- name: label
description: 0 for real/human-written papers and 1 for fake/machine-generated papers.
dtype: int64
#### ood_gpt3
- name: title
description: title of the paper given by the arXiv database.
dtype: string
- name: abstract
description: abstract of the paper given by the arXiv database.
dtype: string
- name: introduction
description: introduction section of the paper. extracted by the PDF parser.
dtype: string
- name: conclusion
description: conclusion section of the paper. extracted by the PDF parser.
dtype: string
- name: src
description: indicator of the source of the paper. Has the value "gpt3".
dtype: string
- name: label
description: always having the value 1.
dtype: int64
#### ood_real
- name: title
description: title of the paper given by the arXiv database.
dtype: string
- name: abstract
description: abstract of the paper given by the arXiv database.
dtype: string
- name: introduction
description: introduction section of the paper. extracted by the PDF parser.
dtype: string
- name: conclusion
description: conclusion section of the paper. extracted by the PDF parser.
dtype: string
- name: src
description: indicator of the source of the paper. Has the value "ood_real".
dtype: string
- name: label
description: always having the value 0.
dtype: int64
### Data Splits
Table: Overview of the datasets used to train and evaluate the classifiers.
| Dataset | arXiv (real) | ChatGPT (fake) | GPT-2 (fake) | SCIgen (fake) | Galactica (fake) | GPT-3 (fake) |
|------------------------------|--------------|----------------|--------------|----------------|------------------|--------------|
| Standard train (TRAIN) | 8k | 2k | 2k | 2k | 2k | - |
| Standard train subset (TRAIN-SUB) | 4k | 1k | 1k | 1k | 1k | - |
| TRAIN without ChatGPT (TRAIN-CG) | 8k | - | 2k | 2k | 2k | - |
| TRAIN plus GPT-3 (TRAIN+GPT3) | 8k | 2k | 2k | 2k | 2k | 1.2k |
| Standard test (TEST) | 4k | 1k | 1k | 1k | 1k | - |
| Out-of-domain GPT-3 only (OOD-GPT3) | - | - | - | - | - | 1k |
| Out-of-domain real (OOD-REAL) | 4k (parsing 2) | - | - | - | - | - |
| ChatGPT only (TECG) | - | 1k | - | - | - | - |
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for Janes-Preklop
### Dataset Summary
Janes-Preklop is a corpus of Slovene tweets that is manually annotated for code-switching: the use of words from two
or more languages within one sentence or utterance.
### Languages
Code-switched Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset - each word is annotated with its language, either `"default"`
(Slovenian/unclassifiable), `en` (English), `de` (German), `hbs` (Serbo-Croatian), `sp` (Spanish),
`la` (Latin), `ar` (Arabic), `fr` (French), `it` (Italian), or `pt` (Portuguese).
```
{
'id': 'tid.397447931558895616',
'words': ['Brad', 'Pitt', 'na', 'Planet', 'TV', '.', 'U', 'are', 'welcome', ';)'],
'language': ['default', 'default', 'default', 'default', 'default', 'default', 'B-en', 'I-en', 'I-en', 'I-en']
}
```
### Data Fields
- `id`: unique identifier of the example;
- `words`: words in the sentence;
- `language`: language of each word.
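The `language` field uses BIO-style tags: `B-en` marks the first word of an English span and `I-en` its continuation. A minimal sketch (not part of the official release) for collecting the code-switched spans from one instance:

```python
def extract_switched_spans(words, tags):
    """Group consecutive B-xx/I-xx tagged words into (language, phrase) spans."""
    spans = []
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            spans.append((tag[2:], [word]))   # start a new foreign-language span
        elif tag.startswith("I-") and spans:
            spans[-1][1].append(word)         # continue the current span
    return [(lang, " ".join(ws)) for lang, ws in spans]

words = ['Brad', 'Pitt', 'na', 'Planet', 'TV', '.', 'U', 'are', 'welcome', ';)']
tags = ['default'] * 6 + ['B-en', 'I-en', 'I-en', 'I-en']
print(extract_switched_spans(words, tags))  # [('en', 'U are welcome ;)')]
```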
## Additional Information
### Dataset Curators
Špela Reher, Tomaž Erjavec, Darja Fišer.
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{janes_preklop,
title = {Tweet code-switching corpus Janes-Preklop 1.0},
author = {Reher, {\v S}pela and Erjavec, Toma{\v z} and Fi{\v s}er, Darja},
url = {http://hdl.handle.net/11356/1154},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
issn = {2820-4042},
year = {2017}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. |
false | # Dataset Card for "KLUE_mrc_negative_train"
This dataset augments the KLUE MRC train set by adding 20 hard negative texts per question, retrieved with BM25.
The hard negatives were mined with BM25, and duplicate passages were removed as far as possible during preprocessing.
The retrieval accuracy of the BM25 setup used is as follows:
|top-k|top-10|top-20|top-50|top-100|
|-|-|-|-|-|
|accuracy(%)|92.1|95.0|97.1|98.8|
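The mining procedure described above can be sketched roughly as follows. This is an illustrative reconstruction (the function names and toy corpus are hypothetical, and a self-contained Okapi BM25 scorer stands in for whatever retrieval stack was actually used):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against a tokenized query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(term for d in docs for term in set(d))
    idf = {t: math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5)) for t in df}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = sum(
            idf.get(t, 0.0) * tf[t] * (k1 + 1)
            / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
            for t in query
        )
        scores.append(s)
    return scores

def hard_negatives(question, passages, gold_idx, top_n=20):
    """Indices of the top-n highest-BM25 passages that are not the gold passage."""
    scores = bm25_scores(question.split(), [p.split() for p in passages])
    ranked = sorted(range(len(passages)), key=lambda i: -scores[i])
    return [i for i in ranked if i != gold_idx][:top_n]
```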
# Citation
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
false |
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it by:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
false |
Queries for the LoTTE dataset from [ColBERTv2: Effective and Efficient Retrieval via
Lightweight Late Interaction](https://arxiv.org/abs/2112.01488)
false |
# Dataset Card for MUTAG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://pubs.acs.org/doi/abs/10.1021/jm00106a046)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/MUTAG.zip)**
- **Paper:** Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-mutag)
### Dataset Summary
The `MUTAG` dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.
### Supported Tasks and Leaderboards
`MUTAG` should be used for molecular property prediction (aiming to predict whether or not molecules have a mutagenic effect on a given bacterium), a binary classification task. The reported metric is accuracy, computed with 10-fold cross-validation.
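The 10-fold cross-validation protocol can be sketched independently of any particular model; `majority_baseline` below is only a stand-in for a real classifier, and the class balance is approximate:

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle range(n) and split it into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_accuracy(labels, predict_fold, k=10):
    """Average accuracy over k folds; predict_fold(train_idx, test_idx) -> predictions."""
    folds = k_fold_indices(len(labels), k)
    accs = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        preds = predict_fold(train_idx, test_idx)
        accs.append(sum(p == labels[j] for p, j in zip(preds, test_idx)) / len(test_idx))
    return sum(accs) / k

labels = [0] * 63 + [1] * 124   # roughly MUTAG's class balance

def majority_baseline(train_idx, test_idx):
    """Predict the majority training label for every test graph."""
    train_labels = [labels[j] for j in train_idx]
    majority = max(set(train_labels), key=train_labels.count)
    return [majority] * len(test_idx)

acc = cross_val_accuracy(labels, majority_baseline)  # ~0.66 for this class balance
```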
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed);
# each row's fields are mapped onto a PyG Data object.
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]),
         edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]),
         y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 187 |
| average #nodes | 18.03 |
| average #edges | 39.80 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by OGB and follows the provided data splits.
It can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="MUTAG")
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information.
### Citation Information
```
@article{doi:10.1021/jm00106a046,
author = {Debnath, Asim Kumar and Lopez de Compadre, Rosa L. and Debnath, Gargi and Shusterman, Alan J. and Hansch, Corwin},
title = {Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity},
journal = {Journal of Medicinal Chemistry},
volume = {34},
number = {2},
pages = {786-797},
year = {1991},
doi = {10.1021/jm00106a046},
URL = {
https://doi.org/10.1021/jm00106a046
},
eprint = {
https://doi.org/10.1021/jm00106a046
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
false |
# Dataset Card for EurlexResources: A Corpus Covering the Largest EURLEX Resources
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/eurlex)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~179GB in total) from EURLEX that can be used for pretraining language models.
Use the dataset like this:
```python
from datasets import load_dataset
config = "de_caselaw" # {lang}_{resource}
dataset = load_dataset("joelito/eurlex_resources", config, split='train', streaming=True)
```
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The following resource types are supported: caselaw, decision, directive, intagr, proposal, recommendation, regulation
More information about the resource types can be found here:
- Caselaw: [EU](https://eur-lex.europa.eu/collection/eu-law/eu-case-law.html)
- Decision: [EU](https://eur-lex.europa.eu/EN/legal-content/summary/european-union-decisions.html), [Wikipedia](https://en.wikipedia.org/wiki/Decision_(European_Union))
- Directive: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Directive_(European_Union))
- Recommendation: [EU](https://eur-lex.europa.eu/EN/legal-content/glossary/recommendation.html), [Wikipedia](https://en.wikipedia.org/wiki/Recommendation_(European_Union))
- Regulation: [EU](https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en), [Wikipedia](https://en.wikipedia.org/wiki/Regulation_(European_Union))
- Intagr: [EU](https://eur-lex.europa.eu/collection/eu-law/inter-agree.html), [Wikipedia](https://en.wikipedia.org/wiki/Treaties_of_the_European_Union)
- Proposal: No resource found
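Given the corpus size (~179GB), the streaming load shown above is the practical way to sample it. A small sketch for pulling a few documents out of the stream; the `text` field name is an assumption about the jsonl.xz layout, not documented here:

```python
from itertools import islice

def sample_stream(records, n=3, min_words=50):
    """Take the first n streamed documents with at least min_words words."""
    filtered = (r for r in records if len(r.get("text", "").split()) >= min_words)
    return list(islice(filtered, n))

# With a real streamed split (downloads lazily; field name assumed):
# from datasets import load_dataset
# ds = load_dataset("joelito/eurlex_resources", "de_caselaw", split="train", streaming=True)
# docs = sample_stream(ds, n=3)
```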
| Source | Size (MB) | Words | Documents | Words/Document |
|:-------------------|------------:|------------:|------------:|-----------------:|
| all_all | 180668 | 12106556233 | 8306749 | 1457 |
| all_caselaw | 34939 | 3413551598 | 2487794 | 1372 |
| all_decision | 28519 | 1698585620 | 1267402 | 1340 |
| all_directive | 4786 | 368577940 | 104187 | 3537 |
| all_intagr | 11421 | 743271516 | 274485 | 2707 |
| all_proposal | 26526 | 2087989530 | 702392 | 2972 |
| all_recommendation | 1886 | 164979037 | 80277 | 2055 |
| all_regulation | 72590 | 3629600992 | 3390212 | 1070 |
| bg_all | 7819 | 398067053 | 348691 | 1141 |
| bg_caselaw | 1588 | 109749174 | 104434 | 1050 |
| bg_decision | 1248 | 58817972 | 54075 | 1087 |
| bg_directive | 263 | 15731608 | 4388 | 3585 |
| bg_intagr | 603 | 31292848 | 11581 | 2702 |
| bg_proposal | 1083 | 60674956 | 29251 | 2074 |
| bg_recommendation | 89 | 5588991 | 3321 | 1682 |
| bg_regulation | 2943 | 116211504 | 141641 | 820 |
| cs_all | 8360 | 471961631 | 449793 | 1049 |
| cs_caselaw | 1163 | 110005022 | 104519 | 1052 |
| cs_decision | 1102 | 58921128 | 54075 | 1089 |
| cs_directive | 186 | 13951134 | 4388 | 3179 |
| cs_intagr | 449 | 28106332 | 11581 | 2426 |
| cs_proposal | 840 | 61838692 | 29252 | 2113 |
| cs_recommendation | 64 | 5416549 | 3323 | 1630 |
| cs_regulation | 4557 | 193722774 | 242655 | 798 |
| da_all | 8932 | 671484862 | 332500 | 2019 |
| da_caselaw | 1746 | 185589641 | 88234 | 2103 |
| da_decision | 1356 | 89498535 | 54085 | 1654 |
| da_directive | 207 | 17525792 | 4388 | 3994 |
| da_intagr | 506 | 35596169 | 11582 | 3073 |
| da_proposal | 1399 | 119759476 | 29257 | 4093 |
| da_recommendation | 100 | 9463897 | 3352 | 2823 |
| da_regulation | 3618 | 214051352 | 141602 | 1511 |
| de_all | 9607 | 695512401 | 348290 | 1996 |
| de_caselaw | 1930 | 193232441 | 104228 | 1853 |
| de_decision | 1449 | 93688222 | 53980 | 1735 |
| de_directive | 218 | 17337760 | 4385 | 3953 |
| de_intagr | 531 | 36791153 | 11580 | 3177 |
| de_proposal | 1556 | 126987454 | 29219 | 4346 |
| de_recommendation | 109 | 9608034 | 3318 | 2895 |
| de_regulation | 3813 | 217867337 | 141580 | 1538 |
| el_all | 12469 | 696216541 | 349667 | 1991 |
| el_caselaw | 2951 | 202027703 | 105138 | 1921 |
| el_decision | 1823 | 94919886 | 54150 | 1752 |
| el_directive | 321 | 19411959 | 4390 | 4421 |
| el_intagr | 701 | 38965777 | 11584 | 3363 |
| el_proposal | 2085 | 128005737 | 29290 | 4370 |
| el_recommendation | 145 | 9344866 | 3357 | 2783 |
| el_regulation | 4443 | 203540613 | 141758 | 1435 |
| en_all | 9217 | 769465561 | 348641 | 2207 |
| en_caselaw | 1846 | 222891827 | 104422 | 2134 |
| en_decision | 1504 | 114626013 | 54054 | 2120 |
| en_directive | 204 | 18860876 | 4388 | 4298 |
| en_intagr | 499 | 39029843 | 11581 | 3370 |
| en_proposal | 1538 | 140781768 | 29242 | 4814 |
| en_recommendation | 97 | 10091809 | 3320 | 3039 |
| en_regulation | 3530 | 223183425 | 141634 | 1575 |
| es_all | 8588 | 725125274 | 348443 | 2081 |
| es_caselaw | 1870 | 220621730 | 104312 | 2115 |
| es_decision | 1334 | 98163499 | 54001 | 1817 |
| es_directive | 221 | 21484479 | 4385 | 4899 |
| es_intagr | 516 | 41841805 | 11581 | 3612 |
| es_proposal | 1366 | 133674486 | 29224 | 4574 |
| es_recommendation | 82 | 8864018 | 3319 | 2670 |
| es_regulation | 3199 | 200475257 | 141621 | 1415 |
| et_all | 6090 | 328068754 | 349615 | 938 |
| et_caselaw | 1074 | 93096396 | 105111 | 885 |
| et_decision | 1069 | 50752324 | 54159 | 937 |
| et_directive | 177 | 11555930 | 4390 | 2632 |
| et_intagr | 436 | 24018147 | 11584 | 2073 |
| et_proposal | 810 | 51600852 | 29283 | 1762 |
| et_recommendation | 61 | 4451369 | 3355 | 1326 |
| et_regulation | 2464 | 92593736 | 141733 | 653 |
| fi_all | 7346 | 404265224 | 349633 | 1156 |
| fi_caselaw | 1596 | 126525296 | 105119 | 1203 |
| fi_decision | 1227 | 59659475 | 54163 | 1101 |
| fi_directive | 204 | 12766491 | 4389 | 2908 |
| fi_intagr | 463 | 25392311 | 11584 | 2192 |
| fi_proposal | 1075 | 69198401 | 29288 | 2362 |
| fi_recommendation | 73 | 5070392 | 3356 | 1510 |
| fi_regulation | 2707 | 105652858 | 141734 | 745 |
| fr_all | 9937 | 828959218 | 348295 | 2380 |
| fr_caselaw | 2158 | 246262666 | 104228 | 2362 |
| fr_decision | 1473 | 108648744 | 53981 | 2012 |
| fr_directive | 222 | 20308801 | 4385 | 4631 |
| fr_intagr | 536 | 41986012 | 11580 | 3625 |
| fr_proposal | 1592 | 149134298 | 29218 | 5104 |
| fr_recommendation | 112 | 11510415 | 3318 | 3469 |
| fr_regulation | 3845 | 251108282 | 141585 | 1773 |
| ga_all | 1028 | 65030095 | 349778 | 185 |
| ga_caselaw | 11 | 696305 | 105205 | 6 |
| ga_decision | 87 | 4415457 | 54189 | 81 |
| ga_directive | 18 | 1512027 | 4390 | 344 |
| ga_intagr | 19 | 1820723 | 11586 | 157 |
| ga_proposal | 289 | 26106889 | 29298 | 891 |
| ga_recommendation | 10 | 902390 | 3361 | 268 |
| ga_regulation | 594 | 29576304 | 141749 | 208 |
| hr_all | 4594 | 258816068 | 348691 | 742 |
| hr_caselaw | 617 | 62432734 | 104434 | 597 |
| hr_decision | 596 | 31911903 | 54075 | 590 |
| hr_directive | 156 | 10855913 | 4388 | 2474 |
| hr_intagr | 450 | 24962086 | 11581 | 2155 |
| hr_proposal | 552 | 33437815 | 29251 | 1143 |
| hr_recommendation | 40 | 3612247 | 3321 | 1087 |
| hr_regulation | 2183 | 91603370 | 141641 | 646 |
| hu_all | 6653 | 375253894 | 349605 | 1073 |
| hu_caselaw | 1278 | 110179375 | 105144 | 1047 |
| hu_decision | 1147 | 57108172 | 54156 | 1054 |
| hu_directive | 200 | 13568304 | 4389 | 3091 |
| hu_intagr | 470 | 27258501 | 11586 | 2352 |
| hu_proposal | 912 | 60882750 | 29291 | 2078 |
| hu_recommendation | 70 | 5312868 | 3357 | 1582 |
| hu_regulation | 2576 | 100943924 | 141682 | 712 |
| it_all | 9586 | 768605772 | 333631 | 2303 |
| it_caselaw | 1889 | 206117726 | 89560 | 2301 |
| it_decision | 1445 | 102848859 | 53983 | 1905 |
| it_directive | 217 | 19687773 | 4385 | 4489 |
| it_intagr | 528 | 40134330 | 11580 | 3465 |
| it_proposal | 1533 | 140713925 | 29218 | 4816 |
| it_recommendation | 109 | 10923431 | 3318 | 3292 |
| it_regulation | 3865 | 248179728 | 141587 | 1752 |
| lt_all | 6400 | 364361783 | 200565 | 1816 |
| lt_caselaw | 1137 | 101808706 | 105477 | 965 |
| lt_decision | 1096 | 55850308 | 21990 | 2539 |
| lt_directive | 185 | 13078983 | 3239 | 4037 |
| lt_intagr | 452 | 27009631 | 7481 | 3610 |
| lt_proposal | 850 | 58553579 | 29272 | 2000 |
| lt_recommendation | 64 | 5121089 | 3363 | 1522 |
| lt_regulation | 2617 | 102939487 | 29743 | 3460 |
| lv_all | 6349 | 363239195 | 349919 | 1038 |
| lv_caselaw | 1153 | 103456811 | 105242 | 983 |
| lv_decision | 1103 | 55512944 | 54224 | 1023 |
| lv_directive | 186 | 13023024 | 4392 | 2965 |
| lv_intagr | 452 | 26693107 | 11630 | 2295 |
| lv_proposal | 96 | 58176216 | 29298 | 1985 |
| lv_recommendation | 64 | 5074494 | 3361 | 1509 |
| lv_regulation | 2545 | 101302599 | 141772 | 714 |
| mt_all | 6540 | 367834815 | 350292 | 1050 |
| mt_caselaw | 1164 | 100423543 | 105479 | 952 |
| mt_decision | 1109 | 55239141 | 54280 | 1017 |
| mt_directive | 203 | 14355266 | 4392 | 3268 |
| mt_intagr | 470 | 27701991 | 11675 | 2372 |
| mt_proposal | 878 | 59749277 | 29274 | 2041 |
| mt_recommendation | 65 | 5039600 | 3363 | 1498 |
| mt_regulation | 2650 | 105325997 | 141829 | 742 |
| nl_all | 9586 | 770312808 | 349407 | 2204 |
| nl_caselaw | 1847 | 206271837 | 105005 | 1964 |
| nl_decision | 1456 | 104060901 | 54152 | 1921 |
| nl_directive | 217 | 19529361 | 4388 | 4450 |
| nl_intagr | 529 | 40247634 | 11584 | 3474 |
| nl_proposal | 1540 | 141258274 | 29279 | 4824 |
| nl_recommendation | 111 | 11002405 | 3355 | 3279 |
| nl_regulation | 3886 | 247942396 | 141644 | 1750 |
| pl_all | 6677 | 406648795 | 350349 | 1160 |
| pl_caselaw | 1231 | 115824759 | 105479 | 1098 |
| pl_decision | 1125 | 60407576 | 54287 | 1112 |
| pl_directive | 197 | 14672157 | 4392 | 3340 |
| pl_intagr | 466 | 28543668 | 11680 | 2443 |
| pl_proposal | 886 | 64728230 | 29317 | 2207 |
| pl_recommendation | 68 | 5769893 | 3363 | 1715 |
| pl_regulation | 2703 | 116702512 | 141831 | 822 |
| pt_all | 8450 | 675152149 | 348449 | 1937 |
| pt_caselaw | 1763 | 198084937 | 104312 | 1898 |
| pt_decision | 1327 | 93278293 | 54007 | 1727 |
| pt_directive | 217 | 19831549 | 4385 | 4522 |
| pt_intagr | 504 | 37999753 | 11581 | 3281 |
| pt_proposal | 1361 | 127461782 | 29224 | 4361 |
| pt_recommendation | 81 | 8396661 | 3319 | 2529 |
| pt_regulation | 3197 | 190099174 | 141621 | 1342 |
| ro_all | 6315 | 415038571 | 350300 | 1184 |
| ro_caselaw | 1110 | 114780999 | 105516 | 1087 |
| ro_decision | 1047 | 59479553 | 54281 | 1095 |
| ro_directive | 206 | 16101628 | 4392 | 3666 |
| ro_intagr | 481 | 31497000 | 11675 | 2697 |
| ro_proposal | 805 | 62130419 | 29274 | 2122 |
| ro_recommendation | 63 | 5977913 | 3363 | 1777 |
| ro_regulation | 2603 | 125071059 | 141799 | 882 |
| sk_all | 6484 | 392235510 | 350570 | 1118 |
| sk_caselaw | 1160 | 110125141 | 105608 | 1042 |
| sk_decision | 1111 | 59576875 | 54349 | 1096 |
| sk_directive | 188 | 14132755 | 4393 | 3217 |
| sk_intagr | 458 | 28298155 | 11676 | 2423 |
| sk_proposal | 859 | 63726047 | 29290 | 2175 |
| sk_recommendation | 66 | 5654790 | 3364 | 1680 |
| sk_regulation | 2642 | 110721747 | 141890 | 780 |
| sl_all | 6222 | 394814289 | 350574 | 1126 |
| sl_caselaw | 1071 | 111238184 | 105608 | 1053 |
| sl_decision | 1075 | 59454906 | 54349 | 1093 |
| sl_directive | 176 | 13908097 | 4393 | 3165 |
| sl_intagr | 441 | 28239078 | 11676 | 2418 |
| sl_proposal | 812 | 63391970 | 29290 | 2164 |
| sl_recommendation | 62 | 5628775 | 3364 | 1673 |
| sl_regulation | 2585 | 112953279 | 141894 | 796 |
| sv_all | 7419 | 500085970 | 351051 | 1424 |
| sv_caselaw | 1585 | 162108645 | 105980 | 1529 |
| sv_decision | 1213 | 71744934 | 54357 | 1319 |
| sv_directive | 195 | 15386273 | 4393 | 3502 |
| sv_intagr | 463 | 29845462 | 11676 | 2556 |
| sv_proposal | 1059 | 86016237 | 29292 | 2936 |
| sv_recommendation | 79 | 7152141 | 3366 | 2124 |
| sv_regulation | 2825 | 127832278 | 141987 | 900 |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data has been downloaded using the R package [eurlex](https://cran.r-project.org/web/packages/eurlex/vignettes/eurlexpkg.html) between June and August 2022.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
[see also the legal notice](https://eur-lex.europa.eu/content/legal-notice/legal-notice.html)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
|
false |
## NeQA: Can Large Language Models Understand Negation in Multi-choice Questions? (Zhengping Zhou and Yuhui Zhang)
### General description
This task takes an existing multiple-choice dataset and negates a part of each question to see if language models are sensitive to negation. The authors find that smaller language models display approximately random performance, whereas the performance of larger models becomes significantly worse than random.
Language models failing to follow instructions in the prompt could be a serious issue that only becomes apparent on a task once models are sufficiently capable of performing non-randomly on it.
### Example
```
The following are multiple choice questions (with answers) about common sense.
Question: If a cat has a body temp that is below average, it isn't in
A. danger
B. safe ranges
Answer:
```
(where the model should choose B.)
## Submission details
### Task description
Negation is a common linguistic phenomenon that can completely alter the semantics of a sentence by changing just a few words.
This task evaluates whether language models can understand negation, which is an important step towards true natural language understanding.
Specifically, we focus on negation in open-book multi-choice questions, considering its wide range of applications and the simplicity of evaluation.
We collect a multi-choice question answering dataset, NeQA, that includes questions with negations.
When negation is presented in the question, the original correct answer becomes wrong, and the wrong answer becomes correct.
We use the accuracy metric to examine whether the model can understand negation in the questions and select the correct answer given the presence of negation.
We observe a clear inverse scaling trend on GPT-3, demonstrating that larger language models can answer more complex questions but fail at the final step of understanding negation.
### Dataset generation procedure
The dataset is created by applying rules to transform questions from a publicly available multiple-choice question answering dataset, OpenBookQA. We use a simple rule: filter for questions containing "is" and insert "not" after it. For each question, we sample an incorrect answer to serve as the new correct answer and treat the original correct answer as incorrect. We randomly sample 300 questions and balance the label distribution (50% labeled "A" and 50% labeled "B", since each question has two choices).
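The transformation rule is simple enough to sketch. This is an illustrative reconstruction, not the authors' code; the item schema and function name are hypothetical:

```python
import random

def negate_question(item, seed=0):
    """Insert 'not' after the first standalone 'is' and swap in a distractor answer."""
    words = item["question"].split()
    if "is" not in words:
        return None  # the rule only covers questions containing "is"
    i = words.index("is")
    negated = " ".join(words[:i + 1] + ["not"] + words[i + 1:])
    distractors = [c for c in item["choices"] if c != item["answer"]]
    new_answer = random.Random(seed).choice(distractors)
    return {"question": negated, "choices": item["choices"], "answer": new_answer}

item = {
    "question": "If a cat's body temperature is below average, it is in",
    "choices": ["danger", "safe ranges"],
    "answer": "danger",
}
print(negate_question(item)["answer"])  # "safe ranges"
```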
### Why do you expect to see inverse scaling?
For open-book question answering, larger language models usually achieve better accuracy because more factual and commonsense knowledge is stored in the model parameters and can be used as a knowledge base to answer these questions without context.
A higher accuracy rate means a lower chance of choosing the wrong answer. Can we change the wrong answer to the correct one? A simple solution is to negate the original question. If the model cannot understand negation, it will still predict the same answer and, therefore, will exhibit an inverse scaling trend.
We expect that the model cannot understand negation because negation introduces only a small perturbation to the model input. It is difficult for the model to understand that this small perturbation leads to completely different semantics.
### Why is the task important?
This task is important because it demonstrates that current language models cannot understand negation, a very common linguistic phenomenon and a real-world challenge to natural language understanding.
### Why is the task novel or surprising?
To the best of our knowledge, no prior work shows that negation can cause inverse scaling. This finding should be surprising to the community, as large language models show an incredible variety of emergent capabilities, but still fail to understand negation, which is a fundamental concept in language.
## Results
[Inverse Scaling Prize: Round 1 Winners announcement](https://www.alignmentforum.org/posts/iznohbCPFkeB9kAJL/inverse-scaling-prize-round-1-winners#Zhengping_Zhou_and_Yuhui_Zhang__for_NeQA__Can_Large_Language_Models_Understand_Negation_in_Multi_choice_Questions_)
|
false |
<div align="center">
<img width="640" alt="keremberke/license-plate-object-detection" src="https://huggingface.co/datasets/keremberke/license-plate-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['license_plate']
```
### Number of Images
```json
{'train': 6176, 'valid': 1765, 'test': 882}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/license-plate-object-detection", name="full")
example = ds['train'][0]
```
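The per-example schema is not documented in this card. Datasets exported via roboflow2huggingface typically expose an `image` plus an `objects` dict whose `bbox` list holds COCO-style `[x, y, width, height]` boxes; under that assumption, converting them to corner coordinates for drawing looks like:

```python
def coco_to_corners(bbox):
    """COCO [x, y, width, height] -> (x0, y0, x1, y1) corner box."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)

def plate_boxes(example):
    """Corner boxes for every annotated license plate in one example."""
    return [coco_to_corners(b) for b in example["objects"]["bbox"]]

# e.g. with the dataset loaded above (field names assumed):
# boxes = plate_boxes(ds['train'][0])
```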
### Roboflow Dataset Page
[https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk/dataset/1](https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ vehicle-registration-plates-trudk_dataset,
title = { Vehicle Registration Plates Dataset },
type = { Open Source Dataset },
author = { Augmented Startups },
	howpublished = { \url{ https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk } },
url = { https://universe.roboflow.com/augmented-startups/vehicle-registration-plates-trudk },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on January 13, 2022 at 5:20 PM GMT
It includes 8823 images.
VRP are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
false |
### Roboflow Dataset Page
[https://universe.roboflow.com/riis/aerial-sheep/dataset/1](https://universe.roboflow.com/riis/aerial-sheep/dataset/1?ref=roboflow2huggingface)
### Dataset Labels
```
['sheep']
```
### Citation
```
@misc{ aerial-sheep_dataset,
title = { Aerial Sheep Dataset },
type = { Open Source Dataset },
author = { Riis },
	howpublished = { \url{ https://universe.roboflow.com/riis/aerial-sheep } },
url = { https://universe.roboflow.com/riis/aerial-sheep },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-02 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 2, 2022 at 4:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4133 images.
Sheep are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 600x600 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* 50% probability of horizontal flip
* 50% probability of vertical flip
* Randomly crop between 0 and 20 percent of the image
* Random brightness adjustment of between -15 and +15 percent
* Random exposure adjustment of between -10 and +10 percent
|
false |
<div align="center">
<img width="640" alt="keremberke/hard-hat-detection" src="https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['hardhat', 'no-hardhat']
```
### Number of Images
```json
{'test': 2001, 'train': 13782, 'valid': 3962}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/hard-hat-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2](https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ hard-hats-fhbh5_dataset,
title = { Hard Hats Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
	howpublished = { \url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:17 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 19745 images.
Hardhat-ppe are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://github.com/americanas-tech/b2w-reviews01
- **Paper:** http://comissoes.sbc.org.br/ce-pln/stil2019/proceedings-stil-2019-Final-Publicacao.pdf
- **Point of Contact:** Livy Real
### Dataset Summary
B2W-Reviews01 is an open corpus of product reviews. It contains more than 130k e-commerce customer reviews, collected from the Americanas.com website between January and May, 2018. B2W-Reviews01 offers rich information about the reviewer profile, such as gender, age, and geographical location. The corpus also has two different review rates:
* the usual 5-point scale rate, represented by stars in most e-commerce websites,
* a "recommend to a friend" label, a "yes or no" question representing the willingness of the customer to recommend the product to someone else.
### Supported Tasks and Leaderboards
* Sentiment Analysis
* Topic Modeling
### Languages
* Portuguese
## Dataset Structure
### Data Instances
```
{'submission_date': '2018-01-02 06:23:22',
'reviewer_id': '6adc7901926fc1697d34181fbd88895976b4f3f31f0102d90217d248a1fad156',
'product_id': '123911277',
'product_name': 'Triciclo Gangorra Belfix Cabeça Cachorro Rosa',
'product_brand': 'belfix',
'site_category_lv1': 'Brinquedos',
'site_category_lv2': 'Mini Veículos',
'review_title': 'O produto não foi entregue',
'overall_rating': 1,
'recommend_to_a_friend': 'Yes',
'review_text': 'Incrível o descaso com o consumidor. O produto não chegou, apesar de já ter sido pago. Não recebo qualquer informação sobre onde se encontra o produto, ou qualquer compensação do vendedor. Não recomendo.',
'reviewer_birth_year': 1981,
'reviewer_gender': 'M',
'reviewer_state': 'RJ'}
```
### Data Fields
* **submission_date**: the date and time when the review was submitted. `"%Y-%m-%d %H:%M:%S"`.
* **reviewer_id**: a unique identifier for the reviewer.
* **product_id**: a unique identifier for the product being reviewed.
* **product_name**: the name of the product being reviewed.
* **product_brand**: the brand of the product being reviewed.
* **site_category_lv1**: the highest level category for the product on the site where the review is being submitted.
* **site_category_lv2**: the second level category for the product on the site where the review is being submitted.
* **review_title**: the title of the review.
* **overall_rating**: the overall star rating given by the reviewer on a scale of 1 to 5.
* **recommend_to_a_friend**: whether or not the reviewer would recommend the product to a friend (Yes/No).
* **review_text**: the full text of the review.
* **reviewer_birth_year**: the birth year of the reviewer.
* **reviewer_gender**: the gender of the reviewer (F/M).
* **reviewer_state**: the Brazilian state of the reviewer (e.g. RJ).
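A short sketch of how the field formats above can be consumed (illustrative only; `example` mirrors the instance shown earlier):

```python
from datetime import datetime

example = {
    "submission_date": "2018-01-02 06:23:22",
    "overall_rating": 1,
    "recommend_to_a_friend": "Yes",
}

# submission_date follows the "%Y-%m-%d %H:%M:%S" format documented above
submitted = datetime.strptime(example["submission_date"], "%Y-%m-%d %H:%M:%S")

# overall_rating is a 1-5 star scale; recommend_to_a_friend is "Yes"/"No"
would_recommend = example["recommend_to_a_friend"] == "Yes"
```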
### Data Splits
| name |train|
|---------|----:|
|b2w-reviews01|132373|
### Citation Information
```
@inproceedings{real2019b2w,
title={B2W-reviews01: an open product reviews corpus},
author={Real, Livy and Oshiro, Marcio and Mafra, Alexandre},
booktitle={STIL-Symposium in Information and Human Language Technology},
year={2019}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. |
false | # Dataset Card for Active/Passive/Logical Transforms
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Subsets (Tasks)](#data-tasks)
- [Dataset Splits](#data-splits)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Roland Fernandez](mailto:rfernand@microsoft.com)
### Dataset Summary
This dataset is a synthetic dataset containing structure-to-structure transformation tasks between
English sentences in 3 forms: active, passive, and logical. The dataset also includes several
tree-transformation diagnostic/warm-up tasks.
### Supported Tasks and Leaderboards
[TBD]
### Languages
All data is in English.
## Dataset Structure
The dataset consists of several subsets, or tasks. Each task contains a train split, a validation split, and a
test split, with most tasks also containing two out-of-distribution splits (one for new adjectives and one for longer adjective phrases).
Each sample in a split contains a source string, a target string, and 0-2 annotation strings.
### Dataset Subsets (Tasks)
The dataset consists of diagnostic/warm-up tasks and core tasks. The core tasks represent the translation of English sentences between the active, passive, and logical forms.
The 12 diagnostic/warm-up tasks are:
```
- car_cdr_cons (small phrase translation tasks that require only: CAR, CDR, or CAR+CDR+CONS operations)
- car_cdr_cons_tuc (same task as car_cdr_cons, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_rcons (same task as car_cdr_cons, but the CONS samples have their left/right children swapped)
- car_cdr_rcons_tuc (same task as car_cdr_rcons, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_seq (each sample requires 1-4 combinations of CAR and CDR, as identified by the root filler token)
- car_cdr_seq_40k (same task as car_cdr_seq, but train samples increased from 10K to 40K)
- car_cdr_seq_tuc (same task as car_cdr_seq, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_seq_40k_tuc (same task as car_cdr_seq_tuc, but train samples increased from 10K to 40K)
- car_cdr_seq_path (similar to car_cdr_seq, but each needed operation is represented as a node in the left child of the root)
- car_cdr_seq_path_40k (same task as car_cdr_seq_path, but train samples increased from 10K to 40K)
- car_cdr_seq_path_40k_tuc (same task as car_cdr_seq_path_40k, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_seq_path_tuc (same task as car_cdr_seq_path, but requires mapping lowercase fillers to their uppercase tokens)
```
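For readers unfamiliar with the CAR/CDR/CONS operations named above, a minimal sketch on binary trees encoded as nested Python tuples (the dataset itself stores trees as parenthesized strings; this encoding is only for illustration):

```python
# Binary trees as nested 2-tuples; leaves are filler strings.
def car(tree):
    """Return the left child."""
    return tree[0]

def cdr(tree):
    """Return the right child."""
    return tree[1]

def cons(left, right):
    """Build a new tree from two children."""
    return (left, right)

tree = (("a", "b"), "c")
swapped = cons(cdr(tree), car(tree))  # the 'rcons' variants swap children like this
```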
The 22 core tasks are:
```
- active_active_stb (active sentence translation, from sentence to parenthesized tree form, both directions)
- active_active_stb_40k (same task as active_active_stb, but train samples increased from 10K to 40K)
- active_logical_ssb (active to logical sentence translation, in both directions)
- active_logical_ssb_40k (same task as active_logical_ssb, but train samples increased from 10K to 40K)
- active_logical_ttb (active to logical tree translation, in both directions)
- active_logical_ttb_40k (same task as active_logical_ttb, but train samples increased from 10K to 40K)
- active_passive_ssb (active to passive sentence translation, in both directions)
- active_passive_ssb_40k (same task as active_passive_ssb, but train samples increased from 10K to 40K)
- active_passive_ttb (active to passive tree translation, in both directions)
- active_passive_ttb_40k (same task as active_passive_ttb, but train samples increased from 10K to 40K)
- actpass_logical_ss (mixture of active to logical and passive to logical sentence translations, single direction)
- actpass_logical_ss_40k (same task as actpass_logical_ss, but train samples increased from 10K to 40K)
- actpass_logical_tt (mixture of active to logical and passive to logical tree translations, single direction)
- actpass_logical_tt_40k (same task as actpass_logical_tt, but train samples increased from 10K to 40K)
- logical_logical_stb (logical form sentence translation, from sentence to parenthesized tree form, both directions)
- logical_logical_stb_40k (same task as logical_logical_stb, but train samples increased from 10K to 40K)
- passive_logical_ssb (passive to logical sentence translation, in both directions)
- passive_logical_ssb_40k (same task as passive_logical_ssb, but train samples increased from 10K to 40K)
- passive_logical_ttb (passive to logical tree translation, in both directions)
- passive_logical_ttb_40k (same task as passive_logical_ttb, but train samples increased from 10K to 40K)
- passive_passive_stb (passive sentence translation, from sentence to parenthesized tree form, both directions)
- passive_passive_stb_40k (same task as passive_passive_stb, but train samples increased from 10K to 40K)
```
### Data Splits
Most tasks have the following splits:
- train
- validation
- test
- ood_new
- ood_long
- ood_all
Here is a table showing how the number of examples varies by split (for most tasks):
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| train | 10,000 |
| validation | 1,250 |
| test | 1,250 |
| ood_new | 1,250 |
| ood_long | 1,250 |
| ood_all | 1,250 |
### Data Instances
Each sample contains a source string and a target string. Depending on the task, each is either plain text or a parenthesized
version of a tree.
Here is an example from the *train* split of the *active_passive_ttb* task:
```
{
'source': '( S ( NP ( DET his ) ( AP ( N cat ) ) ) ( VP ( V discovered ) ( NP ( DET the ) ( AP ( ADJ blue ) ( AP ( N priest ) ) ) ) ) )',
'target': '( S ( NP ( DET the ) ( AP ( ADJ blue ) ( AP ( N priest ) ) ) ) ( VP ( AUXPS was ) ( VPPS ( V discovered ) ( PPPS ( PPS by ) ( NP ( DET his ) ( AP ( N cat ) ) ) ) ) ) )',
'direction': 'forward'
}
```
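The parenthesized tree strings above can be parsed into nested lists in a few lines (a sketch that assumes space-separated tokens, as in the example):

```python
def parse_tree(s: str):
    """Parse a space-tokenized parenthesized tree like '( S ( NP ... ) )'."""
    stack, current = [], []
    for tok in s.split():
        if tok == "(":
            stack.append(current)  # remember the parent, start a new node
            current = []
        elif tok == ")":
            node, current = current, stack.pop()
            current.append(node)   # attach the finished node to its parent
        else:
            current.append(tok)    # label or filler token
    return current[0]

tree = parse_tree("( S ( NP ( DET his ) ( AP ( N cat ) ) ) )")
```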
### Data Fields
- `source`: the string denoting the sequence or tree structure to be translated
- `target`: the string denoting the gold (aka label) sequence or tree structure
Optional annotation fields (their presence varies by task):
- `direction`: describes the direction of the translation (forward, backward), relative to the task name
- `count` : a string denoting the count of symbolic operations needed (e.g., "s3") to translate the source to the target
- `class` : a string denoting the type of translation needed
## Dataset Creation
### Curation Rationale
We wanted a dataset comprised of relatively simple English active/passive/logical form translations, where we could focus
on two types of out of distribution generalization: longer source sequences and new adjectives.
### Source Data
[N/A]
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The dataset was generated from templates designed by Paul Smolensky and Roland Fernandez.
### Annotations
Besides the source and target structured sequences, some of the subsets (tasks) contain 1-2 additional columns that
describe the category and tree depth of each sample.
#### Annotation process
The annotation columns were generated from each sample's template and source sequence.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No names or other sensitive information are included in the data.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can translate structured data from one form to another, in a
way that generalizes to out-of-distribution adjective values and lengths.
### Discussion of Biases
[TBD]
### Other Known Limitations
[TBD]
## Additional Information
The internal name of this dataset is nc_pat.
### Dataset Curators
The dataset was generated from templates designed by Paul Smolensky and Roland Fernandez.
### Licensing Information
This dataset is released under the [Permissive 2.0 license](https://cdla.dev/permissive-2-0/).
### Citation Information
[TBD]
### Contributions
Thanks to [The Neurocompositional AI group at Microsoft Research](https://www.microsoft.com/en-us/research/project/neurocompositional-ai/) for creating and adding this dataset.
|
false | Redistributed from http://weegee.vision.ucmerced.edu/datasets/landuse.html without modification. See https://www.usgs.gov/faqs/what-are-terms-uselicensing-map-services-and-data-national-map for license. |
false | # Dataset Card for "GID"
## Dataset Description
- **Paper** [Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
### Licensing Information
Public domain.
## Citation Information
[Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
```
@article{GID2020,
title = {Land-cover classification with high-resolution remote sensing images using transferable deep models},
author = {Tong, Xin-Yi and Xia, Gui-Song and Lu, Qikai and Shen, Huanfeng and Li, Shengyang and You, Shucheng and Zhang, Liangpei},
year = 2020,
journal = {Remote Sensing of Environment},
volume = 237,
pages = 111322
}
``` |
false |
# Stihi.ru dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** A subset of [Taiga](https://tatianashavrina.github.io/taiga_site/), uploaded here for convenience. Additional cleaning was performed.
**Script:** [create_stihi.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_stihi.py)
**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)
**Languages:** Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/stihi_ru', split="train", streaming=True)
for example in dataset:
print(example["text"])
```
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible. |
false | # Bank
The [Bank dataset](https://archive.ics.uci.edu/ml/datasets/bank+marketing) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Potential clients are contacted by a bank during a second advertisement campaign.
This dataset records the customer, the interaction with the ad campaign, and whether they subscribed to a proposed bank plan or not.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| subscription | Binary classification | Has the customer subscribed to a bank plan? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/bank", "subscription")["train"]
```
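The `encoding` configuration maps integer-coded features back to their original values. A sketch of how it might be used (the `encoding` dictionary below is hypothetical — the real one comes from `load_dataset("mstz/bank", "encoding")`):

```python
# Hypothetical excerpt of an encoding dictionary, for illustration only.
encoding = {
    "education": {0: "primary", 1: "secondary", 2: "tertiary"},
    "has_defaulted": {0: "no", 1: "yes"},
}

def decode_row(row: dict, encoding: dict) -> dict:
    """Replace encoded feature values with their original strings."""
    return {k: encoding.get(k, {}).get(v, v) for k, v in row.items()}

row = {"age": 41, "education": 2, "has_defaulted": 0}
decoded = decode_row(row, encoding)
```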
# Features
| **Name** |**Type** |
|-----------------------------------------------|-----------|
|`age` |`int64` |
|`job` |`string` |
|`marital_status` |`string` |
|`education` |`int8` |
|`has_defaulted` |`int8` |
|`account_balance` |`int64` |
|`has_housing_loan` |`int8` |
|`has_personal_loan` |`int8` |
|`month_of_last_contact` |`string` |
|`number_of_calls_in_ad_campaign` |`string` |
|`days_since_last_contact_of_previous_campaign` |`int16` |
|`number_of_calls_before_this_campaign` |`int16` |
|`successfull_subscription` |`int8` | |
false |
# Dataset Card for aerial-cows
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/aerial-cows
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
aerial-cows
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
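COCO boxes are `[x_min, y_min, width, height]`; converting to corner coordinates is straightforward (a convenience sketch, not part of the dataset):

```python
def coco_to_corners(bbox):
    """Convert a COCO [x_min, y_min, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the example instance above
corners = coco_to_corners([302.0, 109.0, 73.0, 52.0])
```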
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/aerial-cows
### Citation Information
```
@misc{ aerial-cows,
title = { aerial cows Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/aerial-cows } },
url = { https://universe.roboflow.com/object-detection/aerial-cows },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false | # Dataset Card for "thailaw"
## English
Thai Law Dataset (Act of Parliament)
- Data source from Office of the Council of State, Thailand. [https://www.krisdika.go.th/](https://www.krisdika.go.th/)
- This part of PyThaiNLP Project.
- License Dataset is public domain.
Download [https://github.com/PyThaiNLP/thai-law/releases](https://github.com/PyThaiNLP/thai-law/releases)
This hub based on [Thailaw v0.2](https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2).
## Thai
คลังข้อมูลกฎหมายไทย (พระราชบัญญัติ)
- ข้อมูลเก็บรวบรวมมาจากเว็บไซต์สำนักงานคณะกรรมการกฤษฎีกา [https://www.krisdika.go.th/](https://www.krisdika.go.th/)
- โครงการนี้เป็นส่วนหนึ่งในแผนพัฒนา [PyThaiNLP](https://github.com/PyThaiNLP/)
- ข้อมูลที่รวบรวมในคลังข้อความนี้เป็นสาธารณสมบัติ (public domain) ตามพ.ร.บ.ลิขสิทธิ์ พ.ศ. 2537 มาตรา 7 (สิ่งต่อไปนี้ไม่ถือว่าเป็นงานอันมีลิขสิทธิ์ตามพระราชบัญญัตินี้ (1) ข่าวประจำวัน และข้อเท็จจริงต่างๆ ที่มีลักษณะเป็นเพียงข่าวสารอันมิใช่งานในแผนกวรรณคดี แผนกวิทยาศาสตร์ หรือแผนกศิลปะ [...] (3) ระเบียบ ข้อบังคับ ประกาศ คำสั่ง คำชี้แจง และหนังสือตอบโต้ของกระทรวง ทบวง กรม หรือหน่วยงานอื่นใดของรัฐหรือของท้องถิ่น [...])
ดาวน์โหลดได้ที่ [https://github.com/PyThaiNLP/thai-law/releases](https://github.com/PyThaiNLP/thai-law/releases)
This dataset is Thai Law dataset v0.2
Datasize: 42,755 rows
GitHub: [https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2](https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2) |
false |
# 台灣正體中文維基百科 (zh-tw Wikipedia)
截至 2023 年 5 月,中文維基百科 2,533,212 篇條目的台灣正體文字內容。每篇條目為一列 (row),包含 HTML 以及 Markdown 兩種格式。
A nearly-complete collection of 2,533,212 Traditional Chinese (`zh-tw`) Wikipedia pages, gathered between May 1, 2023, and May 7, 2023. Includes both the original HTML format and an auto-converted Markdown version, which has been processed using [vinta/pangu.py](https://github.com/vinta/pangu.py).
於 2023 年 5 月 1 日至 5 月 7 日間取自維基百科 [`action=query`](https://zh.wikipedia.org/w/api.php?action=help&modules=query) & [`prop=extracts`](https://zh.wikipedia.org/w/api.php?action=help&modules=query%2Bextracts) API,內容皆與維基百科網站之台灣正體版本一致,沒有繁簡體混雜的問題。
For development usage, checkout [`zetavg/zh-tw-wikipedia-dev`](https://huggingface.co/datasets/zetavg/zh-tw-wikipedia-dev), which is a subset that contains only 1,000 randomly picked items.
## 資料內容
* `pageid` — 維基百科頁面 ID。
* `html` — 頁面原始的 HTML 匯出。
* `markdown` — 頁面轉換為 Markdown 格式,並以 [vinta/pangu.py](https://github.com/vinta/pangu.py) 於全形字與半形字之間加入空格後的版本。
* `coordinate` — 頁面主題的經緯度座標,例如 `{ "lat": 22.63333333, "lon": 120.26666667 }`。若無則為 `null`。
* `length` — 頁面內容長度。
* `touched` — 頁面的最後修訂時間。
* `lastrevid` — 最新修訂版本的修訂 ID。
* `original_title` — 維基百科未經轉換的原始頁面標題,可能為簡體中文。
## 已知問題
* 無法抽取為 *受限格式 HTML* 的內容皆會遺失,例如所有圖片、圖表、表格、參考資料列表,以及部分程式碼區塊。
* 極少數內容過長的條目沒有納入,大致上計有:`四千`、`五千`、`六千`、`英雄傳說VI`、`軌跡系列角色列表`、`碧之軌跡角色列表`、`零之軌跡角色列表`。
* 缺少頁面標題 `title` 欄位(原可透過 API `inprop=varianttitles` 取得,但資料抓取時程式撰寫遺漏了這個欄位)。 |
false |
# Dataset Card for PersiNLU (Sentiment Analysis)
## Table of Contents
- [Dataset Card for PersiNLU (Sentiment Analysis)](#dataset-card-for-persi_sentiment)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian sentiment analysis dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"review": "خوب بود ولی خیلی گرون شده دیگه...فک نکنم به این قیمت ارزش خرید داشته باشد",
"review_id": "1538",
"example_id": "4",
"excel_id": "food_194",
"question": "نظر شما در مورد بسته بندی و نگهداری این حلوا شکری، ارده و کنجد چیست؟",
"category": "حلوا شکری، ارده و کنجد",
"aspect": "بسته بندی",
"label": "-3",
"guid": "food-dev-r1538-e4"
}
```
### Data Fields
- `review`: the review text.
- `review_id`: a unique id associated with the review.
- `example_id`: a unique id associated with a particular attribute being addressed about the review.
- `question`: a natural language question about a particular attribute.
- `category`: the subject discussed in the review.
- `aspect`: the aspect mentioned in the input question.
- `label`: the overall sentiment towards this particular subject, in the context of the mentioned aspect. Here are the definition of the labels:
```
'-3': 'no sentiment expressed',
'-2': 'very negative',
'-1': 'negative',
'0': 'neutral',
'1': 'positive',
'2': 'very positive',
'3': 'mixed',
```
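For convenience, the mapping above as a Python dictionary (a sketch, not part of the official release):

```python
# Label strings as they appear in the `label` field, mapped to descriptions.
LABEL_NAMES = {
    "-3": "no sentiment expressed",
    "-2": "very negative",
    "-1": "negative",
    "0": "neutral",
    "1": "positive",
    "2": "very positive",
    "3": "mixed",
}

example_label = "-3"  # from the instance shown above
description = LABEL_NAMES[example_label]
```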
### Data Splits
See the data.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{khashabi2020parsinlu,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
true |
# Dataset Card for the EUR-Lex dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Paper:** https://www.aclweb.org/anthology/P19-1636/
- **Leaderboard:** N/A
### Dataset Summary
EURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.
EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains four major zones:
- the header, which includes the title and name of the legal body enforcing the legal act;
- the recitals, which are legal background references;
- the main body, usually organized in articles; and
- the attachments (e.g., appendices and annexes).
**Labeling / Annotation**
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/).
While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, of which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
### Supported Tasks and Leaderboards
The dataset supports:
**Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts.
**Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
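The partition above can be reproduced from training-set label counts (a sketch; `train` below is a toy stand-in for the per-document `eurovoc_concepts` lists):

```python
from collections import Counter

def partition_labels(train_label_lists, all_labels):
    """Split labels by training frequency: >50 frequent, 1-50 few-shot, 0 zero-shot."""
    counts = Counter(label for doc in train_label_lists for label in doc)
    frequent = {l for l in all_labels if counts[l] > 50}
    few_shot = {l for l in all_labels if 1 <= counts[l] <= 50}
    zero_shot = {l for l in all_labels if counts[l] == 0}
    return frequent, few_shot, zero_shot

# Toy example: "192" occurs 60 times, "2356" 30 times, "862" never.
train = [["192", "2356"], ["192"]] * 30
frequent, few_shot, zero_shot = partition_labels(train, {"192", "2356", "862"})
```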
### Languages
All documents are written in English.
## Dataset Structure
### Data Instances
```json
{
"celex_id": "31979D0509",
"title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain",
"text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation 
between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"eurovoc_concepts": ["192", "2356", "2560", "862", "863"]
}
```
### Data Fields
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`title`: (**str**) The title of the document.\
`text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\
`eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl
```python
import json

# Load the EUROVOC concept descriptors, one JSON object per line.
# (A set comprehension over the parsed objects would raise a TypeError,
# since dicts are unhashable, so we collect them in a list instead.)
with open('./eurovoc_concepts.jsonl') as jsonl_file:
    eurovoc_concepts = [json.loads(line) for line in jsonl_file]
```
### Data Splits
| Split | No. of Documents | Avg. words | Avg. labels |
| ----- | ---------------- | ---------- | ----------- |
| Train | 45,000 | 729 | 5 |
| Development | 6,000 | 714 | 5 |
| Test | 6,000 | 725 | 5 |
## Dataset Creation
### Curation Rationale
The dataset was curated by Chalkidis et al. (2019).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format.
The documents were downloaded from EUR-Lex portal in HTML format.
The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
* The original documents are available at EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML markup was stripped and the documents were split into sections.
* The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chalkidis et al. (2019)
### Licensing Information
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*
*Large-Scale Multi-Label Text Classification on EU Legislation.*
*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*
```
@inproceedings{chalkidis-etal-2019-large,
title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1636",
doi = "10.18653/v1/P19-1636",
pages = "6314--6322"
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
|
true |
# WNLI-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Spanish of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
* Spanish (es)
## Dataset Structure
### Data Instances
Three `.csv` files, one per split (see [Data Splits](#data-splits)).
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Data Splits
- wnli-train-es.csv: 636 sentence pairs
- wnli-dev-es.csv: 72 sentence pairs
- wnli-test-shuffled-es.csv: 147 sentence pairs
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Spanish.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish, commissioned by [BSC TeMU](https://temu.bsc.es/) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish.
#### Who are the annotators?
The translation was commissioned from a professional translation agency.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing Information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
|
true | # Dataset Card for "CHISTES_spanish_jokes"
Dataset from [Workshop for NLP introduction with Spanish jokes](https://github.com/liopic/chistes-nlp)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Dataset Card for "diffusiondb_2m_first_5k_canny"
Processes the first 5k images of [diffusiondb 2m](https://huggingface.co/datasets/poloclub/diffusiondb) into edge maps using the Canny algorithm.
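The full Canny pipeline (Gaussian smoothing, gradient computation, non-maximum suppression, hysteresis thresholding) is typically run via `cv2.Canny` in practice. As a rough illustration of the gradient step at its core, here is a minimal pure-Python Sobel sketch; the function name and threshold value are illustrative, not part of this dataset's processing code.

```python
def sobel_edges(img, threshold=100):
    """Return a binary edge map for a 2D grayscale image (list of lists),
    by thresholding the Sobel gradient magnitude at each interior pixel."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 255, 255] for _ in range(4)]
edge_map = sobel_edges(img)
```

Canny adds smoothing and thresholding stages on top of this gradient step to produce the thin, connected edges stored in this dataset.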
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
false | # Student performance
The [Student performance dataset](https://www.kaggle.com/datasets/ulrikthygepedersen/student_performances) from Kaggle.
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| math | Binary classification | Has the student passed the math exam? |
| writing | Binary classification | Has the student passed the writing exam? |
| reading | Binary classification | Has the student passed the reading exam? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/student_performance", "math")["train"]
```
# Features
|**Feature** |**Type** |
|-----------------------------------|-----------|
|`is_male` |`bool` |
|`ethnicity` |`string` |
|`parental_level_of_education` |`int8` |
|`has_standard_lunch` |`bool` |
|`has_completed_preparation_test` |`bool` |
|`reading_score` |`int64` |
|`writing_score` |`int64` |
|`math_score` |`int64` | |
false |
# Dataset Card for wall-damage
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/wall-damage
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
wall-damage
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
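Since the `bbox` field uses the COCO convention `[x_min, y_min, width, height]`, tools expecting corner coordinates (e.g. Pascal VOC style `[x_min, y_min, x_max, y_max]`) need a small conversion. A minimal sketch (the helper name is illustrative):

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the sample instance above
corners = coco_to_corners([302.0, 109.0, 73.0, 52.0])
print(corners)  # → [302.0, 109.0, 375.0, 161.0]
```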
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/wall-damage
### Citation Information
```
@misc{ wall-damage,
title = { wall damage Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/wall-damage } },
url = { https://universe.roboflow.com/object-detection/wall-damage },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false | # Arhythmia
The [Arrhythmia dataset](https://archive.ics.uci.edu/ml/datasets/Arrhythmia) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Does the patient have arrhythmia? If so, what type?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| arhytmia | Multiclass classification | What type of arrhythmia does the patient have? |
| has_arhytmia | Binary classification | Does the patient have arrhythmia? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/arhythmia", "arhythmia")["train"]
```
# Features
Target feature changes according to the selected configuration and is always in last position in the dataset. |
true |
# Dataset Card for climate_sentiment
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the sentiment of climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a ternary sentiment classification task: labeling a given climate-related paragraph as conveying opportunity, neutral, or risk sentiment.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> risk, 1 -> neutral, 2 -> opportunity)
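When inspecting predictions it is convenient to map the integer labels back to their names. A minimal sketch of that mapping (only the label-to-name correspondence comes from the card above; the dict and function names are illustrative):

```python
# Label mapping as documented in the Data Fields section
ID2LABEL = {0: "risk", 1: "neutral", 2: "opportunity"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

def label_name(label_id):
    """Return the human-readable name for an integer class label."""
    return ID2LABEL[label_id]

# The sample instance above has label 1
print(label_name(1))  # → neutral
```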
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |