id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
Twitter/SignedGraphs | 2022-11-22T03:32:19.000Z | [
"license:cc-by-4.0",
"arxiv:2201.11675",
"region:us"
] | Twitter | null | null | null | 0 | 3 | ---
license: cc-by-4.0
---
# Learning Stance Embeddings from Signed Social Graphs
This repo contains the datasets from our paper [Learning Stance Embeddings from Signed Social Graphs](https://arxiv.org/abs/2201.11675). <br />
[[PDF]](https://arxiv.org/pdf/2201.11675.pdf)
[[HuggingFace Datasets]](https://huggingface.co/Twitter)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## Overview
A key challenge in social network analysis is understanding the position, or stance, of people in the graph on a large set of topics. In such social graphs, modeling (dis)agreement patterns across a range of correlated topics may be beneficial. For example, disagreement on one topic may make disagreement (or agreement) more likely for related topics.
We open source **two Twitter signed, topical graph datasets**. One dataset, **TwitterSG**, labels (dis)agreements using engagements between users via tweets to derive topic-informed, signed edges. The other, **BirdwatchSG**, leverages community reports on misinformation and misleading content.
## Datasets
### TwitterSG
Twitter Signed Graph, or TwitterSG, is a signed, directed, edge-attributed graph of users, drawn from Twitter interactions. TwitterSG contains 753,944 nodes (users), 200 topics and 12,848,093 edges. It is the largest publicly available user-to-user signed social graph (∼6x larger than the Epinions graph).
A positive edge exists from user 𝐴 to user 𝐵 if user 𝐴 liked a tweet posted by user 𝐵. A negative edge exists from user 𝐴 to user 𝐵 if user 𝐴 expressed opposition towards user 𝐵’s tweet, e.g., by replying *I disagree with you*. The full list of opposition keywords is specified [here](https://github.com/lejohnyjohn/learning-stance-embeddings-from-signed-social-graphs/tree/main/datasets). The topic of an edge from user 𝐴 to user 𝐵 is determined by the topic of user 𝐵’s tweet.
Tweets' topics were inferred with a topic classifier used in production by Twitter. The topics provided in the dataset are all related to sports (e.g., sports teams, players, managers, or events), and the tweets related to these interactions were published between 20th May (Ice Hockey World Championships) and 8th August 2021 (closing date of the 2020 Tokyo Olympic Games).
9.6% of edges are negative (opposition) and 90.4% are positive. There may be several edges between two nodes (several interactions, several topics). The data format is displayed below.
| source_idx | target_idx | topic_idx | topic | rating |
| ------------- | ------------- | ---------- | ------ | ---- |
| 1 | 6 | 19 | Copa America | +1 |
| 1 | 6 | 97 | NFL | -1 |
| 4 | 5 | 23 |Kylian Mbappe | +1 |
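As a sketch of how these edges might be consumed (the tuple layout below simply mirrors the table columns above; it is an assumption, not a confirmed file format), one can compute sign statistics directly in Python:

```python
# Toy edge list mirroring the schema above:
# (source_idx, target_idx, topic_idx, topic, rating)
edges = [
    (1, 6, 19, "Copa America", +1),
    (1, 6, 97, "NFL", -1),
    (4, 5, 23, "Kylian Mbappe", +1),
]

def negative_share(edge_list):
    """Fraction of opposition (rating == -1) edges."""
    return sum(1 for e in edge_list if e[4] == -1) / len(edge_list)

def edges_by_topic(edge_list):
    """Group signed edges by topic name."""
    topics = {}
    for src, dst, _tid, topic, rating in edge_list:
        topics.setdefault(topic, []).append((src, dst, rating))
    return topics

print(negative_share(edges))  # 0.333... on the toy list (about 0.096 on the full TwitterSG)
```

On the real dataset the same computation should recover the roughly 9.6% negative-edge share reported above.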
### BirdwatchSG
Birdwatch Signed Graph, or BirdwatchSG, is a signed, directed, edge-attributed graph of users, drawn from note ratings on the Birdwatch pilot. The graph contains 2,987 nodes (users), 1,020 topics and 441,986 edges.
The [Birdwatch pilot](https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation) was launched by Twitter in January 2021 in the USA to address misleading information on the platform in a community-driven fashion: Birdwatch participants can identify information in tweets they believe is misleading and write notes that provide informative context. They can also rate the helpfulness (either *helpful*, *somewhat helpful*, or *not helpful*) of notes added by other contributors. All Birdwatch contributions are publicly available on the [Birdwatch site](https://twitter.github.io/birdwatch/) for anyone in the USA.
Using Birdwatch data from January to July 2021, a positive (negative) edge is created from participant 𝑈1 to 𝑈2 if participant 𝑈1 rated a note written by participant 𝑈2 as *helpful* (*not helpful*). The *somewhat helpful* ratings were filtered out. The topic associated with an edge is the topic inferred from the tweet the note refers to.
36.9% of edges are negative (opposition) and 63.1% are positive. There may be several edges between two nodes (several interactions, several topics).
| source_idx | target_idx | topic_idx | topic | rating |
| ------------- | ------------- | ---------- | ------ | ---- |
| 10 | 6 | 443 | US Politics | +1 |
| 7 | 14 | 12 | Ted Cruz | -1 |
| 1 | 11 | 1003 | COVID-19 | +1 |
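Because a pair of users can be connected by several edges (one per interaction and topic), a natural in-memory representation is a signed multigraph. A minimal pure-Python sketch, again assuming tuples laid out like the table columns above:

```python
from collections import defaultdict

def build_signed_multigraph(edges):
    """Adjacency map keeping parallel edges: node -> list of
    (neighbor, topic, rating) tuples. The edge tuple layout mirrors
    the table above: (source_idx, target_idx, topic_idx, topic, rating)."""
    adj = defaultdict(list)
    for src, dst, _tid, topic, rating in edges:
        adj[src].append((dst, topic, rating))
    return adj

g = build_signed_multigraph([
    (10, 6, 443, "US Politics", +1),
    (7, 14, 12, "Ted Cruz", -1),
    (10, 6, 1003, "COVID-19", +1),  # parallel edge on a different topic
])
print(g[10])  # two parallel edges from user 10 to user 6
```

Keeping parallel edges matters here, since collapsing them would discard the per-topic sign information that both datasets are built around.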
## Citation
If you use our datasets in your work, please cite the following:
```bib
@article{pougue2022learning,
title={Learning Stance Embeddings from Signed Social Graphs},
author={Pougu{\'e}-Biyong, John and Gupta, Akshay and Haghighi, Aria and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2201.11675},
year={2022}
}
``` |
MLRS/masri_test | 2023-03-30T11:08:22.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:mt",
"license:cc-by-nc-sa-4.0",
"masri",
"maltese",
"masri-project",
"malta",
"test... | MLRS | The MASRI-TEST CORPUS was created out of YouTube videos belonging to the channel of the University of Malta. It has a length of 1 hour and it is gender balanced, as it has the same number of male and female speakers. | @misc{carlosmenamasritest2020,
title={MASRI-TEST CORPUS: Audio and Transcriptions in Maltese extracted from the YouTube channel of the University of Malta.},
author={Hernandez Mena, Carlos Daniel and Brincat, Ayrton Didier and Gatt, Albert and DeMarco, Andrea and Borg, Claudia and van der Plas, Lonneke and Meza Ruiz, Iván Vladimir},
journal={MASRI Project, Malta},
year={2020},
url={https://www.um.edu.mt/projects/masri/},
} | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language:
- mt
language_creators:
- other
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: >-
MASRI-TEST CORPUS: Audio and Transcriptions in Maltese extracted from the
YouTube channel of the University of Malta.
size_categories:
- n<1K
source_datasets:
- original
tags:
- masri
- maltese
- masri-project
- malta
- test corpus
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for masri_test
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MASRI Project](https://www.um.edu.mt/projects/masri/)
- **Repository:** [MASRI Data Repo](https://github.com/UMSpeech/)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org), [Andrea De Marco](mailto:andrea.demarco@um.edu.mt), [Claudia Borg](mailto:claudia.borg@um.edu.mt)
### Dataset Summary
The MASRI-TEST CORPUS was created from YouTube videos belonging to the channel of the [University of Malta](https://www.youtube.com/user/universityofmalta). It has a length of 1 hour and is gender balanced, with the same number of male and female speakers.
### Example Usage
The MASRI-TEST contains only the test split:
```python
from datasets import load_dataset
masri_test = load_dataset("MLRS/masri_test")
```
It is also valid to do:
```python
from datasets import load_dataset
masri_test = load_dataset("MLRS/masri_test", split="test")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
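WER is the word-level edit distance between the reference transcription and the hypothesis, normalized by the reference length. A minimal reference implementation is sketched below (in practice a dedicated library such as `jiwer` is typically used instead):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / len(ref),
    computed with a standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("tliet volumi tal-biblijoteka", "tliet volum tal-biblijoteka"))  # 1 substitution / 3 words
```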
### Languages
The language of the corpus is Maltese.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'MSRTS_M_17_TS_00001',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/9158ecbeeb3532038f3fe3d53e0adda1f790c9363a613bac32c454a39d9c682c/test/male/M_17/MSRTS_M_17_TS_00001.flac',
'array': array([ 0.0020752 , 0.00283813, 0.00167847, ..., -0.0010376 ,
-0.00091553, -0.00100708], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'M_17',
'gender': 'male',
'duration': 5.920000076293945,
'normalized_text': 'ignazio saverio mifsud kien qed jippjana kien qed iħejji tliet volumi tal-biblijoteka maltese'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
The corpus contains only the test split, which has a total of 668 speech files from 17 male and 17 female speakers, with a total duration of 1 hour.
## Dataset Creation
### Curation Rationale
The MASRI-TEST CORPUS (MTSC) has the following characteristics:
* The MTSC has an exact duration of 1 hour and 0 minutes. It contains 668 audio files.
* The MTSC has recordings from 34 different speakers: 17 men and 17 women.
* Data in MTSC is classified by speaker. Therefore, all the recordings of each individual speaker are stored in one single directory.
* Data is also classified according to the gender (male/female) of the speakers.
* Every audio file in the MTSC has a duration between 3 and 10 seconds approximately.
* Audio files in the MTSC are distributed in 16 kHz, 16-bit mono format.
* Transcriptions in MTSC are in lowercase. No punctuation marks are permitted except for dashes (-) and apostrophes (') due to their importance in Maltese orthography.
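The transcription conventions above can be checked mechanically. Below is a sketch of such a validator; the exact allowed character set (lowercase Latin letters plus the Maltese letters ċ, ġ, ħ, ż and a few accented vowels, with dash and apostrophe) is an assumption based on the conventions listed above, not an official specification:

```python
import re

# Lowercase letters (including Maltese ċ ġ ħ ż and some accented vowels),
# spaces, dashes and apostrophes. Assumed character set, see lead-in above.
_ALLOWED = re.compile(r"^[a-zàèìòùâêîôûċġħżí' -]+$")

def is_normalized(transcription: str) -> bool:
    """True if the transcription obeys the MTSC conventions sketched above:
    lowercase, no punctuation other than dashes and apostrophes."""
    return bool(_ALLOWED.fullmatch(transcription))

print(is_normalized("kien qed iħejji tliet volumi tal-biblijoteka maltese"))  # True
print(is_normalized("Hello, world!"))                                         # False
```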
### Source Data
#### Initial Data Collection and Normalization
The MASRI-TEST CORPUS was possible due to a collaboration of two different Universities. The data selection and audio segmentation was performed by the [CIEMPIESS-UNAM Project](http://www.ciempiess.org/) at the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/) in Mexico City. The audio transcription and corpus edition was performed by the [MASRI Team](https://www.um.edu.mt/projects/masri/) at the [University of Malta](https://www.um.edu.mt/) in the Msida Campus.
### Annotations
#### Annotation process
Proper nouns and other words pronounced in languages other than Maltese (mainly from English, Italian, French and German) were transcribed in their respective orthographic system.
#### Who are the annotators?
The audio transcription was performed by expert native speakers at the [University of Malta](https://www.um.edu.mt/) in the Msida Campus.
### Personal and Sensitive Information
The dataset could contain names revealing the identity of some speakers. On the other hand, the recordings come from a public repository (YouTube), so there is no real expectation of anonymity among the participants. Nevertheless, by using this dataset you agree not to attempt to determine the identity of the speakers in it.
**Notice:** If you consider that this corpus contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
* Send the request to [Carlos Mena](mailto:carlos.mena@ciempiess.org)
**Take down:** We will comply with legitimate requests by removing the affected sources from the corpus.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is challenging because it contains spontaneous speech, which makes it useful for the ASR community to evaluate acoustic models for Maltese.
### Discussion of Biases
The dataset is intended to be gender balanced: it comprises 17 male and 17 female speakers.
### Other Known Limitations
Neither the MASRI Team nor the CIEMPIESS-UNAM Project guarantees the accuracy of this corpus or its suitability for any specific purpose. In fact, a number of errors, omissions and inconsistencies are expected to be found within the corpus.
### Dataset Curators
The audio recordings were collected and segmented by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html); the corpus was curated by Carlos Daniel Hernández Mena, and its transcriptions were manually produced by Ayrton-Didier Brincat during 2020.
### Licensing Information
[CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). The copyright remains with the original owners of the video.
As the data is taken from YouTube, we invoke the same argument of "fair use" as in the [Voxlingua107](http://bark.phon.ioc.ee/voxlingua107/) dataset, which is:
**"While YouTube users own the copyright to their own videos, using the audio in the videos for training speech recognition models has very limited and transformative purpose and qualifies thus as "fair use" of copyrighted materials. YouTube’s terms of service forbid downloading, storing and distribution of videos. However, the aim of this rule is clearly to forbid unfair monetization of the content by third-party sites and applications. Our dataset contains the videos in segmented audio-only form that makes the monetization of the actual distributed content extremely difficult."**
### Citation Information
```
@misc{carlosmenamasritest2020,
title={MASRI-TEST CORPUS: Audio and Transcriptions in Maltese extracted from the YouTube channel of the University of Malta.},
author={Hernandez Mena, Carlos Daniel and Brincat, Ayrton-Didier and Gatt, Albert and DeMarco, Andrea and Borg, Claudia and van der Plas, Lonneke and Meza Ruiz, Iván Vladimir},
journal={MASRI Project, Malta},
year={2020},
url={https://huggingface.co/datasets/MLRS/masri_test},
}
```
### Contributions
The authors would like to thank to Alberto Templos Carbajal, Elena Vera and Angélica Gutiérrez for their support to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) at the ["Facultad de Ingeniería (FI)"](https://www.ingenieria.unam.mx/) of the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/). We also thank to the social service students for all the hard work during the audio segmentation. |
nyanko7/yandere-images | 2022-12-02T09:34:14.000Z | [
"license:openrail",
"doi:10.57967/hf/0139",
"region:us"
] | nyanko7 | null | null | null | 6 | 3 | ---
license: openrail
---
yande.re sampled images 2019-2022
Estimated 500k images, including metadata (`.json`) and tags (`.txt`) |
maveriq/DocBank | 2023-01-05T20:41:27.000Z | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"document-ai",
"arxiv:2006.01038",
"region:us"
] | maveriq | null | null | null | 1 | 3 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: DocBank
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- document-ai
task_categories: []
task_ids: []
---
# Dataset Card for DocBank
## Table of Contents
- [Dataset Card for DocBank](#dataset-card-for-docbank)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doc-analysis.github.io/docbank-page/index.html
- **Repository:** https://github.com/doc-analysis/DocBank
- **Paper:** https://arxiv.org/abs/2006.01038
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocBank is a large-scale dataset constructed using a weak supervision approach. It enables models to integrate both textual and layout information for downstream tasks. The current DocBank dataset includes 500K document pages in total: 400K for training, 50K for validation and 50K for testing.
### Supported Tasks and Leaderboards
Document AI (text and layout)
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
```yaml
dataset_info:
  features:
    - name: image
      dtype: image
    - name: token
      dtype: string
    - name: bounding_box
      sequence:
        sequence: uint16
    - name: color
      sequence:
        sequence: uint8
    - name: font
      dtype: string
    - name: label
      dtype: string
```
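Bounding boxes in document-layout datasets are commonly rescaled to a fixed 0-1000 coordinate grid before being fed to layout-aware models. A sketch under the assumption that `bounding_box` holds `[x0, y0, x1, y1]` in absolute page pixel coordinates (the card does not confirm this layout):

```python
def normalize_bbox(bbox, page_width, page_height):
    """Scale an absolute [x0, y0, x1, y1] box to the 0-1000 grid used by
    many layout models. The input layout is an assumption, not confirmed
    by the dataset card."""
    x0, y0, x1, y1 = bbox
    return [
        round(1000 * x0 / page_width),
        round(1000 * y0 / page_height),
        round(1000 * x1 / page_width),
        round(1000 * y1 / page_height),
    ]

print(normalize_bbox([100, 50, 200, 100], 1000, 500))  # [100, 100, 200, 200]
```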
### Data Splits
```yaml
dataset_info:
  splits:
    - name: train
      num_bytes: 80004043
      num_examples: 400000
    - name: validation
      num_bytes: 9995812
      num_examples: 50000
    - name: test
      num_bytes: 9995812
      num_examples: 50000
  download_size: 0
  dataset_size: 99995667
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache 2.0 License
### Citation Information
```bibtex
@misc{li2020docbank,
    title={DocBank: A Benchmark Dataset for Document Layout Analysis},
    author={Minghao Li and Yiheng Xu and Lei Cui and Shaohan Huang and Furu Wei and Zhoujun Li and Ming Zhou},
    year={2020},
    eprint={2006.01038},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@doc-analysis](https://github.com/doc-analysis) for adding this dataset. |
mlxen/squad_1_1_smallcase_context | 2022-11-28T07:02:05.000Z | [
"region:us"
] | mlxen | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79346181
num_examples: 87599
download_size: 14361272
dataset_size: 79346181
---
# Dataset Card for "squad_1_1_smallcase_context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kasnerz/logic2text | 2023-03-14T15:08:47.000Z | [
"region:us"
] | kasnerz | null | null | null | 0 | 3 | Entry not found |
kasnerz/charttotext-s | 2023-03-14T15:08:25.000Z | [
"region:us"
] | kasnerz | null | null | null | 1 | 3 | Entry not found |
surrey-nlp/SAD | 2022-11-28T18:41:51.000Z | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | surrey-nlp | null | null | null | 0 | 3 | ---
annotations_creators:
- Jordan Painter, Diptesh Kanojia
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
---
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# SAD
The SAD dataset is our gold-standard dataset of tweets labelled for sarcasm. The tweets were scraped by observing the '#sarcasm' hashtag and then manually annotated by three annotators.
There are a total of 1,170 pairs of sarcastic and non-sarcastic tweets, each pair posted by the same user, resulting in a total of 2,340 tweets annotated for sarcasm.
These tweets can be retrieved with the Twitter API so that they can be used for other experiments.
# Data Fields
- Tweet ID: The ID of the labelled tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 1638
- Valid: 351
- Test: 351 |
surrey-nlp/S3D-v1 | 2022-11-28T18:46:48.000Z | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | surrey-nlp | null | null | null | 0 | 3 | ---
annotations_creators:
- Jordan Painter, Diptesh Kanojia
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
---
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# S3D Summary
The S3D dataset is our silver standard dataset of 100,000 tweets labelled for sarcasm using weak supervision by our **BERTweet-sarcasm-combined** model.
These tweets can be accessed by using the Twitter API so that they can be used for other experiments.
S3D contains 38879 tweets labelled as sarcastic, and 61211 tweets labelled as not being sarcastic.
# Data Fields
- Tweet ID: The ID of the labelled tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 70,000
- Valid: 15,000
- Test: 15,000 |
piuba-bigdata/contextualized_hate_speech | 2023-04-29T14:19:58.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:es",
"hate_speech",
"arxiv:2210.00465",
"region:us"
] | piuba-bigdata | null | null | null | 3 | 3 | ---
language:
- es
pretty_name: contextualized_hate_speech
task_categories:
- text-classification
tags:
- hate_speech
size_categories:
- 10K<n<100K
---
# Contextualized Hate Speech: A dataset of comments in news outlets on Twitter
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper**: ["Assessing the impact of contextual information in hate speech detection"](https://arxiv.org/abs/2210.00465), Juan Manuel Pérez, Franco Luque, Demian Zayat, Martín Kondratzky, Agustín Moro, Pablo Serrati, Joaquín Zajac, Paula Miguel, Natalia Debandi, Agustín Gravano, Viviana Cotik
- **Point of Contact**: jmperez (at) dc uba ar
### Dataset Summary

This dataset is a collection of tweets that were posted in response to news articles from five specific Argentinean news outlets: Clarín, Infobae, La Nación, Perfil and Crónica, during the COVID-19 pandemic. The comments were analyzed for hate speech across eight different characteristics: against women, racist content, class hatred, against LGBTQ+ individuals, against physical appearance, against people with disabilities, against criminals, and for political reasons. All the data is in Spanish.
Each comment is labeled with the following variables:
| Label | Description |
| :--------- | :---------------------------------------------------------------------- |
| HATEFUL | Contains hate speech (HS)? |
| CALLS | If it is hateful, is this message calling to (possibly violent) action? |
| WOMEN | Is this against women? |
| LGBTI | Is this against LGBTI people? |
| RACISM | Is this a racist message? |
| CLASS | Is this a classist message? |
| POLITICS | Is this HS due to political ideology? |
| DISABLED | Is this HS against disabled people? |
| APPEARANCE | Is this HS against people due to their appearance? (e.g. fatshaming) |
| CRIMINAL | Is this HS against criminals or people in conflict with law? |
There is an extra label `CALLS`, which represents whether a comment is a call to violent action or not.
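The label descriptions imply a hierarchy: `CALLS` and the category labels only make sense for comments already marked `HATEFUL`. A sketch of a consistency check over rows represented as dicts of binary labels (this row format is an assumption, not the dataset's confirmed schema):

```python
CATEGORY_LABELS = ["CALLS", "WOMEN", "LGBTI", "RACISM", "CLASS",
                   "POLITICS", "DISABLED", "APPEARANCE", "CRIMINAL"]

def hierarchy_violations(rows):
    """Return rows where a category (or CALLS) label is set on a
    non-hateful comment, which would contradict the descriptions above."""
    return [r for r in rows
            if r["HATEFUL"] == 0
            and any(r.get(c, 0) == 1 for c in CATEGORY_LABELS)]

rows = [
    {"HATEFUL": 1, "CALLS": 1, "POLITICS": 1},
    {"HATEFUL": 0, "CALLS": 0},
    {"HATEFUL": 0, "RACISM": 1},  # inconsistent: flagged below
]
print(len(hierarchy_violations(rows)))  # 1
```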
### Citation Information
```bibtex
@article{perez2022contextual,
author = {Pérez, Juan Manuel and Luque, Franco M. and Zayat, Demian and Kondratzky, Martín and Moro, Agustín and Serrati, Pablo Santiago and Zajac, Joaquín and Miguel, Paula and Debandi, Natalia and Gravano, Agustín and Cotik, Viviana},
journal = {IEEE Access},
title = {Assessing the Impact of Contextual Information in Hate Speech Detection},
year = {2023},
volume = {11},
number = {},
pages = {30575-30590},
doi = {10.1109/ACCESS.2023.3258973}
}
```
### Contributions
[More Information Needed] |
Matrix430/CONDA | 2022-11-30T07:03:52.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:afl-... | Matrix430 | null | null | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: CONDA
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- CONDA
task_categories:
- text-classification
- token-classification
task_ids:
- intent-classification
---
# Dataset Card for CONDA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Abstract](#dataset-summary)
- [Leaderboards](#leaderboards)
- [Evaluation Metrics](#evaluation-metrics)
- [Languages](#languages)
- [Video](#video)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [CONDA](https://github.com/usydnlp/CONDA)
- **Paper:** [CONDA: a CONtextual Dual-Annotated dataset for in-game toxicity understanding and detection](https://arxiv.org/abs/2106.06213)
- **Point of Contact:** [Caren Han](caren.han@sydney.edu.au)
## Dataset Summary
Traditional toxicity detection models have focused on the single utterance level without deeper understanding of context. We introduce CONDA, a new dataset for in-game toxic language detection enabling joint intent classification and slot filling analysis, which is the core task of Natural Language Understanding (NLU). The dataset consists of 45K utterances from 12K conversations from the chat logs of 1.9K completed Dota 2 matches. We propose a robust dual semantic-level toxicity framework, which handles utterance and token-level patterns, and rich contextual chatting history. Accompanying the dataset is a thorough in-game toxicity analysis, which provides comprehensive understanding of context at utterance, token, and dual levels. Inspired by NLU, we also apply its metrics to the toxicity detection tasks for assessing toxicity and game-specific aspects. We evaluate strong NLU models on CONDA, providing fine-grained results for different intent classes and slot classes. Furthermore, we examine the coverage of toxicity nature in our dataset by comparing it with other toxicity datasets.
## Leaderboards
The Codalab leaderboard can be found at: https://codalab.lisn.upsaclay.fr/competitions/7827
### Evaluation Metrics
**JSA** (Joint Semantic Accuracy) is used for ranking. An utterance is deemed correctly analysed only if both the utterance-level label and all of the token-level labels (including O) are correctly predicted.
In addition, the F1 scores of the **utterance-level** E(xplicit) and I(mplicit) classes and the **token-level** T(oxicity), D(ota-specific), and S(game Slang) classes are shown on the leaderboard (but are not used as the ranking metric).
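As a sketch of the ranking metric, assuming each gold and predicted analysis is given as an utterance-level label plus a token-label sequence (this representation is an assumption for illustration):

```python
def joint_semantic_accuracy(gold, pred):
    """JSA: an utterance counts as correct only when its utterance-level
    label AND its full token-label sequence (O labels included) match."""
    correct = sum(
        1
        for (g_utt, g_toks), (p_utt, p_toks) in zip(gold, pred)
        if g_utt == p_utt and g_toks == p_toks
    )
    return correct / len(gold)

gold = [("E", ["T", "O", "O"]), ("N", ["O", "O"])]
pred = [("E", ["T", "O", "O"]), ("N", ["O", "S"])]  # token mismatch on 2nd
print(joint_semantic_accuracy(gold, pred))  # 0.5
```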
## Languages
English
## Video
Please enjoy a video presentation covering the main points from our paper:
[Watch the video on YouTube](https://www.youtube.com/watch?v=qRCPSSUuf18)
## Citation Information
```
@inproceedings{weld-etal-2021-conda,
title = "{CONDA}: a {CON}textual Dual-Annotated dataset for in-game toxicity understanding and detection",
author = "Weld, Henry and
Huang, Guanghao and
Lee, Jean and
Zhang, Tongshu and
Wang, Kunze and
Guo, Xinghong and
Long, Siqu and
Poon, Josiah and
Han, Caren",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.213",
doi = "10.18653/v1/2021.findings-acl.213",
pages = "2406--2416",
}
```
|
pacovaldez/stackoverflow-questions-2016 | 2022-11-30T23:16:54.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technic... | pacovaldez | null | null | null | 0 | 3 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stackoverflow_post_questions
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- stackoverflow
- technical questions
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Stackoverflow Post Questions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is prioritizing the question. The classification scale usually consists of four values (P0, P1, P2, and P3), whose exact meanings vary across the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization of programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
- `title`: string
- `body`: string
- `label`: int
### Data Splits
The split is 40/40/20, and the classes have been balanced to be around the same size.
## Dataset Creation
The dataset was extracted and labeled with the following query in BigQuery:
```sql
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
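The same labeling thresholds can be expressed as a plain Python function; a sketch for reference (not part of the extraction pipeline):

```python
def priority_label(score: int, favorite_count: int, view_count: int) -> int:
    """Map Stack Overflow engagement metrics to a priority label (0 = highest)."""
    if score >= 100 or favorite_count >= 100 or view_count >= 10000:
        return 0
    if score >= 25 or favorite_count >= 25 or view_count >= 2500:
        return 1
    if score >= 10 or favorite_count >= 10 or view_count >= 1000:
        return 2
    return 3
```

A question is bumped to a higher priority if it crosses any one of the three thresholds, mirroring the `OR` conditions in the SQL `CASE` expression.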
### Source Data
The data was extracted from the Big Query public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset had a high class imbalance:

| label | count |
|------:|-----------:|
| 0 | 977,424 |
| 1 | 2,401,534 |
| 2 | 3,418,179 |
| 3 | 16,222,990 |
| **Total** | 23,020,127 |

Records were then sampled from each class so that every class has roughly the same number of records.
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
|
deutsche-telekom/NLU-Evaluation-Data-en-de | 2022-12-29T20:33:24.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|nlu_evaluation_data",
"language:en",
"language:de",
"license:cc-by-4.0",
"arxiv:1903.05566",
"region:us"
] | deutsche-telekom | null | null | null | 1 | 3 | ---
license: cc-by-4.0
source_datasets:
- extended|nlu_evaluation_data
multilinguality:
- multilingual
language:
- en
- de
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- intent-classification
---
# NLU Evaluation Data - English and German
A labeled English **and German** language multi-domain dataset (21 domains) with 25K user utterances for human-robot interaction.
This dataset is collected and annotated for evaluating NLU services and platforms.
The detailed paper on this dataset can be found at arXiv.org:
[Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/abs/1903.05566)
The dataset builds on the annotated data of the [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)
repository. We have added an additional column (`answer_de`)
by translating the texts in column `answer` into German.
The translation was made with [DeepL](https://www.deepl.com/translator).
## Labels
The columns `scenario` and `intent` can be used for classification tasks.
However, we recommend using even more fine-grained labels.
For this purpose, a new label can be derived by concatenating `scenario` and `intent`.
For example, this would turn "alarm" and "set" into "alarm_set".
## Dataset Quirks
The original dataset contains some `NaN` values in the `answer` column.
This means that there are also `NaN` values in the translations (`answer_de` column).
These rows should be filtered.
The dataset also contains duplicate values.
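A minimal sketch of the recommended preprocessing (the helper name and plain-dict row format are illustrative; the column names come from the description above):

```python
def prepare_rows(rows):
    """Drop rows with missing translations, remove duplicates,
    and derive the fine-grained `scenario_intent` label."""
    seen, out = set(), []
    for row in rows:
        if row.get("answer") is None or row.get("answer_de") is None:
            continue  # filter the NaN rows mentioned above
        key = (row["answer"], row["scenario"], row["intent"])
        if key in seen:
            continue  # drop duplicate utterances
        seen.add(key)
        out.append({**row, "label": f"{row['scenario']}_{row['intent']}"})
    return out
```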
## Copyright
Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/)
All data is released under the
[Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
|
lmqg/qg_tweetqa | 2022-12-02T19:11:42.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question generation dataset based on [TweetQA](https://huggingface.co/datasets/tweet_qa). | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | 0 | 3 | ---
license: cc-by-sa-4.0
pretty_name: TweetQA for question generation
language: en
multilinguality: monolingual
size_categories: 1K<n<10K
source_datasets: tweet_qa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'vine',
'paragraph_question': 'question: what site does the link take you to?, context:5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013',
'question': 'what site does the link take you to?',
'paragraph': '5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013'
}
```
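For reference, the `paragraph_question` field appears to be built by concatenating the question and the context (note there is no space after `context:` in the example above); a sketch:

```python
def make_paragraph_question(question: str, paragraph: str) -> str:
    """Combine question and context into the dataset's input format."""
    return f"question: {question}, context:{paragraph}"
```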
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `question_answer`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9489 | 1086| 1203|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
Kurokabe/Kimetsu-no-Yaiba-Image-Dataset-01 | 2022-12-04T13:37:58.000Z | [
"region:us"
] | Kurokabe | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 2005251870.0
num_examples: 6000
- name: validation
num_bytes: 207003826.0
num_examples: 809
download_size: 2135573514
dataset_size: 2212255696.0
---
# Dataset Card for "Kimetsu-no-Yaiba-Image-Dataset-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lucadiliello/bookcorpusopen | 2022-12-04T19:09:30.000Z | [
"region:us"
] | lucadiliello | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 6643459928
num_examples: 17868
download_size: 3940589290
dataset_size: 6643459928
---
# Dataset Card for "bookcorpusopen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vpetukhov/bible_tts_hausa | 2022-12-05T12:51:17.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ha",
"license:cc-by-sa-4.0",
"bible",
"arxiv:2207.03546",
"region:us"
] | vpetukhov | null | null | null | 1 | 3 | ---
annotations_creators: []
language:
- ha
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: BibleTTS Hausa
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- bible
task_categories:
- automatic-speech-recognition
- text-to-speech
task_ids: []
---
# Dataset Card for BibleTTS Hausa
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://masakhane-io.github.io/bibleTTS/
- **Repository:** http://www.openslr.org/129/
- **Paper:** https://arxiv.org/abs/2207.03546
### Dataset Summary
BibleTTS is a large high-quality open Text-to-Speech dataset with up to 80 hours of single speaker, studio quality 48kHz recordings.
This is the Hausa part of the dataset. Aligned hours: 86.6; aligned verses: 40,603.
### Languages
Hausa
## Dataset Structure
### Data Fields
- `audio`: audio path
- `sentence`: transcription of the audio
- `locale`: always set to `ha`
- `book`: 3-char book encoding
- `verse`: verse id
### Data Splits
- `dev`: Book of Ezra (264 verses)
- `test`: Book of Colossians (124 verses)
- `train`: all other books (40215 verses)
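The split assignment can be reproduced from the `book` field; a sketch (the 3-character codes `EZR` and `COL` are assumptions based on common Bible book abbreviations, not verified against the data):

```python
def assign_split(book: str) -> str:
    """Route a verse to dev/test/train based on its 3-char book code."""
    if book == "EZR":  # Book of Ezra
        return "dev"
    if book == "COL":  # Book of Colossians
        return "test"
    return "train"
```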
## Additional Information
See [this notebook](https://github.com/seads-org/hausa-speech-recognition/blob/6993c5c74379c93a2416acac6126b60ce6e52df8/notebooks/prepare_bible_dataset.ipynb) for the code used to process the dataset.
### Dataset Curators
The dataset was uploaded by [vpetukhov](https://github.com/VPetukhov/), who is not affiliated with the dataset authors. Please see the project page for more info.
### Licensing Information
The data is released under a commercial-friendly [CC-BY-SA](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
Meyer, Josh, et al. "BibleTTS: a large, high-fidelity, multilingual, and uniquely African speech corpus." arXiv preprint arXiv:2207.03546 (2022).
|
argilla/uber-reviews | 2022-12-06T12:00:28.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | argilla | null | null | null | 0 | 3 | ---
language:
- en
license:
- unknown
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 2761597
num_examples: 2347
download_size: 1691346
dataset_size: 2761597
---
# Dataset Card for "uber-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Using Python's Beautiful Soup library and the Scrapy framework, we scraped the date, star rating, and comment from all reviews from 2013 to 2019.
### Languages
English
### Citation Information
https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
https://www.sitejabber.com/reviews/uber.com
https://www.consumeraffairs.com/travel/uber.html
https://www.kaggle.com/purvank/uber-rider-reviews-dataset
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset.
|
Shularp/un_multi-ar-en | 2022-12-07T11:00:47.000Z | [
"region:us"
] | Shularp | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 4189844561
num_examples: 9759125
download_size: 1926773979
dataset_size: 4189844561
---
# Dataset Card for "un_multi-ar-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xusenlin/duie | 2022-12-07T14:49:54.000Z | [
"region:us"
] | xusenlin | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: spo_list
list:
- name: predicate
dtype: string
- name: object_type
dtype: string
- name: subject_type
dtype: string
- name: object
dtype: string
- name: subject
dtype: string
splits:
- name: train
num_bytes: 51849478
num_examples: 172983
- name: validation
num_bytes: 6512116
num_examples: 21626
download_size: 32568292
dataset_size: 58361594
---
# DuIE Relation Extraction Dataset
Field descriptions:
+ `text`: the text
+ `spo_list`: the relation triples contained in the text
  + `subject`: head entity (subject)
  + `subject_type`: type of the head entity
  + `object`: tail entity (object)
  + `object_type`: type of the tail entity
  + `predicate`: the relation
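A minimal sketch of reading the triples out of one example (the helper name is illustrative):

```python
def extract_triples(example):
    """Flatten spo_list into (subject, predicate, object) tuples."""
    return [
        (spo["subject"], spo["predicate"], spo["object"])
        for spo in example["spo_list"]
    ]
```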
|
graphs-datasets/deezer_ego_nets | 2023-02-07T16:36:48.000Z | [
"task_categories:graph-ml",
"license:gpl-3.0",
"region:us"
] | graphs-datasets | null | null | null | 0 | 3 | ---
license: gpl-3.0
task_categories:
- graph-ml
---
# Dataset Card for Deezer ego nets
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://snap.stanford.edu/data/deezer_ego_nets.html)**
- **Paper:** (see citation)
### Dataset Summary
The Deezer ego nets dataset contains ego-nets of Eastern European users collected from the music streaming service Deezer in February 2020. Nodes are users and edges are mutual follower relationships.
### Supported Tasks and Leaderboards
The related task is the binary classification to predict gender for the ego node in the graph.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under GPL-3.0 license.
### Citation Information
See also [github](https://github.com/benedekrozemberczki/karateclub).
```
@inproceedings{karateclub,
title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
year = {2020},
pages = {3125–3132},
booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
organization = {ACM},
}
``` |
graphs-datasets/AQSOL | 2023-02-07T16:36:58.000Z | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
] | graphs-datasets | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for AQSOL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)
### Dataset Summary
The AQSOL dataset comes "from the Benchmarking Graph Neural Networks paper based on AqSolDB, a standardized database of 9,982 molecular graphs with their aqueous solubility values, collected from 9 different data sources" (PyGeometric doc).
### Supported Tasks and Leaderboards
`AQSOL` should be used for graph regression, on aqueous solubility.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 9,833 |
| average #nodes | 17.6 |
| average #edges | 35.8 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This dataset comes pre-split, following the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
molamin/Kinyarwanda_Engligh_Multilingual_ASR | 2022-12-09T07:39:29.000Z | [
"size_categories:700K<n<800K",
"size_categories:~3120 hours",
"language:rw",
"language:en",
"license:cc-by-4.0",
"region:us"
] | molamin | null | null | null | 0 | 3 | ---
language:
- rw
- en
license:
- cc-by-4.0
size_categories:
- 700K<n<800K
- ~3120 hours
---
This dataset was created from Mozilla's Common Voice dataset for the purpose of multilingual ASR on Kinyarwanda and English.
The dataset contains 3,000 hours of multilingual training samples, 300 hours of validation samples, and 200 hours of testing samples.
|
asgaardlab/GameBugDescription | 2023-04-13T21:20:49.000Z | [
"size_categories:n<1K",
"language:en",
"license:creativeml-openrail-m",
"Bug Detection",
"arxiv:2210.02506",
"region:us"
] | asgaardlab | null | null | null | 3 | 3 | ---
license: creativeml-openrail-m
language:
- en
tags:
- Bug Detection
pretty_name: Game Bug Description
size_categories:
- n<1K
---
# `Game Bug Description` Dataset
<div>
[](https://asgaardlab.github.io/LLMxBugs/)
[](https://arxiv.org/abs/2210.02506)
</div>
## Sample video
<video src="https://asgaardlab.github.io/LLMxBugs/static/videos/video.mp4" controls="controls" muted="muted" playsinline="playsinline" width=480></video>
## Sample description
```
A person is parachuting in the air.
A plane approaches the parachuter.
The plane hits the cord and loses its right wing.
The plane falls from the sky.
```
## Citation information
```
@misc{taesiri2022large,
title={Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors},
author={Mohammad Reza Taesiri and Finlay Macklon and Yihe Wang and Hengshuo Shen and Cor-Paul Bezemer},
year={2022},
eprint={2210.02506},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Santarabantoosoo/hf_song_lyrics_with_names | 2022-12-13T05:37:02.000Z | [
"region:us"
] | Santarabantoosoo | null | null | null | 0 | 3 | Entry not found |
HuggingFaceM4/LocalizedNarratives | 2022-12-15T23:12:48.000Z | [
"license:cc-by-4.0",
"arxiv:1912.03098",
"region:us"
] | HuggingFaceM4 | Localized Narratives, a new form of multimodal image annotations connecting vision and language.
We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing.
Since the voice and the mouse pointer are synchronized, we can localize every single word in the description.
This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data.
We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. | @inproceedings{PontTuset_eccv2020,
author = {Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari},
title = {Connecting Vision and Language with Localized Narratives},
booktitle = {ECCV},
year = {2020}
} | null | 2 | 3 | ---
license: cc-by-4.0
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://google.github.io/localized-narratives/](https://google.github.io/localized-narratives/)
- **Repository:** [https://github.com/google/localized-narratives](https://github.com/google/localized-narratives)
- **Paper:** [Connecting Vision and Language with Localized Narratives](https://arxiv.org/pdf/1912.03098.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Localized Narratives, a new form of multimodal image annotations connecting vision and language.
We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing.
Since the voice and the mouse pointer are synchronized, we can localize every single word in the description.
This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data.
We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available.
As of now, only the `OpenImages` subset is available, but feel free to contribute the other subsets of Localized Narratives!
`OpenImages_captions` is similar to the `OpenImages` subset. The difference is that captions are grouped per image (images can have multiple captions). For this subset, `timed_caption`, `traces` and `voice_recording` are not available.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance has the following structure:
```
{
dataset_id: 'mscoco_val2017',
image_id: '137576',
annotator_id: 93,
caption: 'In this image there are group of cows standing and eating th...',
timed_caption: [{'utterance': 'In this', 'start_time': 0.0, 'end_time': 0.4}, ...],
traces: [[{'x': 0.2086, 'y': -0.0533, 't': 0.022}, ...], ...],
voice_recording: 'coco_val/coco_val_137576_93.ogg'
}
```
### Data Fields
Each line represents one Localized Narrative annotation on one image by one annotator and has the following fields:
- `dataset_id`: String identifying the dataset and split where the image belongs, e.g. mscoco_val2017.
- `image_id` String identifier of the image, as specified on each dataset.
- `annotator_id` Integer number uniquely identifying each annotator.
- `caption` Image caption as a string of characters.
- `timed_caption` List of timed utterances, i.e. {utterance, start_time, end_time} where utterance is a word (or group of words) and (start_time, end_time) is the time during which it was spoken, with respect to the start of the recording.
- `traces` List of trace segments, one between each time the mouse pointer enters the image and goes away from it. Each trace segment is represented as a list of timed points, i.e. {x, y, t}, where x and y are the normalized image coordinates (with origin at the top-left corner of the image) and t is the time in seconds since the start of the recording. Please note that the coordinates can go a bit beyond the image, i.e. <0 or >1, as we recorded the mouse traces including a small band around the image.
- `voice_recording` Relative URL path with respect to https://storage.googleapis.com/localized-narratives/voice-recordings where to find the voice recording (in OGG format) for that particular image.
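Because the utterances and the mouse traces share a common clock, the trace segment for any word can be recovered by filtering trace points on time; an illustrative sketch:

```python
def trace_points_for_word(timed_word, traces):
    """Return all mouse-trace points recorded while the word was spoken."""
    start, end = timed_word["start_time"], timed_word["end_time"]
    return [
        point
        for segment in traces
        for point in segment
        if start <= point["t"] <= end
    ]
```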
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
|
uoe-nlp/multi3-nlu | 2023-06-07T10:46:27.000Z | [
"task_categories:text-classification",
"multilinguality:multilingual",
"source_datasets:nluplusplus",
"language:multilingual",
"license:cc-by-4.0",
"arxiv:2212.10455",
"arxiv:2204.13021",
"region:us"
] | uoe-nlp | null | null | null | 1 | 3 | ---
language:
- multilingual
license:
- cc-by-4.0
multilinguality:
- multilingual
source_datasets:
- nluplusplus
task_categories:
- text-classification
pretty_name: multi3-nlu
---
# Dataset Card for Multi<sup>3</sup>NLU++
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Paper:** [arXiv](https://arxiv.org/abs/2212.10455)
### Dataset Summary
Please access the dataset using
```
git clone https://huggingface.co/datasets/uoe-nlp/multi3-nlu/
```
Multi<sup>3</sup>NLU++ consists of 3080 utterances per language representing challenges in building multilingual multi-intent multi-domain task-oriented dialogue systems. The domains include banking and hotels. There are 62 unique intents.
### Supported Tasks and Leaderboards
- multi-label intent detection
- slot filling
- cross-lingual language understanding for task-oriented dialogue
### Languages
The dataset covers four language pairs in addition to the source dataset in English:
Spanish, Turkish, Marathi, Amharic
## Dataset Structure
### Data Instances
Each data instance contains the following features: _text_, _intents_, _uid_, _lang_, and occasionally _slots_ and _values_
See the [Multi<sup>3</sup>NLU++ corpus viewer](https://huggingface.co/datasets/uoe-nlp/multi3-nlu/viewer/uoe-nlp--multi3-nlu/train) to explore more examples.
An example from the Multi<sup>3</sup>NLU++ looks like the following:
```
{
"text": "माझे उद्याचे रिझर्वेशन मला रद्द का करता येणार नाही?",
"intents": [
"why",
"booking",
"cancel_close_leave_freeze",
"wrong_notworking_notshowing"
],
"slots": {
"date_from": {
"text": "उद्याचे",
"span": [
5,
12
],
"value": {
"day": 16,
"month": 3,
"year": 2022
}
}
},
"uid": "hotel_1_1",
"lang": "mr"
}
```
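Slot annotations index directly into `text` via character offsets, so each slot's surface form can be recovered from its span; a sketch:

```python
def slot_surface_forms(example):
    """Map each slot name to the substring of `text` covered by its span."""
    text = example["text"]
    return {
        name: text[slot["span"][0]:slot["span"][1]]
        for name, slot in example["slots"].items()
    }
```

Applied to the example above, this returns the annotated date expression for the `date_from` slot.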
### Data Fields
- 'text': a string containing the utterance for which the intent needs to be detected
- 'intents': the corresponding intent labels
- 'uid': unique identifier per language
- 'lang': the language of the dataset
- 'slots': annotation of the span that needs to be extracted for value extraction with its label and _value_
### Data Splits
The experiments are done on different k-fold validation setups. The dataset has multiple types of data splits. Please see Section 4 of the paper.
## Dataset Creation
### Curation Rationale
Existing task-oriented dialogue datasets are 1) predominantly limited to detecting a single intent, 2) focused on a single domain, and 3) include a small set of slot types. Furthermore, the success of task-oriented dialogue is 4) often evaluated on a small set of higher-resource languages (i.e., typically English) which does not test how generalisable systems are to the diverse range of the world's languages.
Our proposed dataset addresses all these limitations
### Source Data
#### Initial Data Collection and Normalization
Please see Section 3 of the paper
#### Who are the source language producers?
The source language producers are authors of [NLU++ dataset](https://arxiv.org/abs/2204.13021). The dataset was professionally translated into our chosen four languages. We used Blend Express and Proz.com to recruit these translators.
### Personal and Sensitive Information
None. All names are fictional.
### Discussion of Biases
We have carefully vetted the examples to exclude problematic ones.
### Other Known Limitations
The dataset comprises utterances extracted from real dialogues between users and conversational agents as well as synthetic human-authored utterances constructed with the aim of introducing additional combinations of intents and slots. The utterances therefore lack the wider context that would be present in a complete dialogue. As such the dataset cannot be used to evaluate systems with respect to discourse-level phenomena present in dialogue.
## Additional Information
Baseline models:
Our MLP and QA models are based on the HuggingFace `transformers` library.
### QA
We use the following code snippet for our QA experiments. Please refer to the paper for more details
```
# QA baseline: the official `run_qa.py` example script from
# https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py
python run_qa.py config_qa.json
```
### Licensing Information
The dataset is Creative Commons Attribution 4.0 International (cc-by-4.0)
### Citation Information
Coming soon
### Contact
[Nikita Moghe](mailto:nikita.moghe@ed.ac.uk) and [Evgeniia Razumovskaia](mailto:er563@cam.ac.uk) and [Liane Guillou](mailto:lguillou@ed.ac.uk)
Dataset card based on [Allociné](https://huggingface.co/datasets/allocine) |
laion/laion2b-multi-vit-l-14-embeddings | 2022-12-16T17:53:54.000Z | [
"region:us"
] | laion | null | null | null | 0 | 3 | Entry not found |
ad321/test-tweets | 2022-12-17T14:34:45.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:gpl-3.0",
"region:us"
] | ad321 | null | null | null | 1 | 3 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: tweeter-dataset-sent-analysis
size_categories:
- 1M<n<10M
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
train-eval-index:
- col_mapping:
label: labels
metrics:
- name: Accuracy
type: accuracy
- args:
average: binary
name: F1 binary
type: f1
tweet: text
config: default
splits:
train_split: train
validation_split: validation
task: text-classification
task_id: binary_classification
---
Tweets in English, labelled as positive or negative. |
surrey-nlp/S3D-v2 | 2022-12-17T18:17:27.000Z | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | surrey-nlp | null | null | null | 0 | 3 | ---
annotations_creators:
- Jordan Painter, Diptesh Kanojia
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
---
## Table of Contents
- [Dataset Description](#dataset-description)
-
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# S3D-v2 Summary
The S3D-v2 dataset is our silver-standard dataset of 100,000 tweets labelled for sarcasm using weak supervision: a majority-voting system over fine-tuned sarcasm detection models. The models used are
our [roberta-large-finetuned-SARC-combined-DS](https://huggingface.co/surrey-nlp/roberta-large-finetuned-SARC-combined-DS), [bertweet-base-finetuned-SARC-DS](https://huggingface.co/surrey-nlp/bertweet-base-finetuned-SARC-DS)
and [bertweet-base-finetuned-SARC-combined-DS](https://huggingface.co/surrey-nlp/bertweet-base-finetuned-SARC-combined-DS) models.
S3D contains 13,016 tweets labelled as sarcastic and 86,904 tweets labelled as not sarcastic.
# Data Fields
- Text: The preprocessed tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 70,000
- Valid: 15,000
- Test: 15,000 |
lmqg/qag_jaquad | 2022-12-18T07:54:08.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_jaquad",
"language:ja",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | Question & answer generation dataset based on SQuAD. | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | 0 | 3 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ja
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_jaquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_jaquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on JaQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
### Languages
Japanese (ja)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""Nerdilinga"は898年にカロリング朝の王領として初めて文献に記録されている。レーゲンスブルク司教の統治下でネルトリンゲンは市場町に成長していった。1215年にネルトリンゲンは皇帝フリードリヒ2世から都市権を与えられ、帝国自由都市となった。この年に最初の市壁が築かれた。その縄張りは現在も街の地図に見て取れる。1219年、ネルトリンゲンの聖霊降臨祭についての最も古い文献上の記録が遺されている。重要な交易路が交差するこの都市は穀物、家畜、織物、毛皮、金属製品の主要な集散地に発展していった。ネルトリンゲンはフランクフルトと並ぶドイツで最も重要な遠距離交易都市の一つとなったのである。",
"questions": [ "1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。", "\"Nerdilinga\"の最初の記録は何年のものですか。" ],
"answers": [ "皇帝フリードリヒ2世", "898年" ],
"questions_answers": "question: 1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。, answer: 皇帝フリードリヒ2世 | question: "Nerdilinga"の最初の記録は何年のものですか。, answer: 898年"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
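The `questions_answers` field serializes all pairs into one string (`question: ..., answer: ...` segments joined by `|`). A minimal sketch of recovering the individual pairs, assuming every segment follows the format shown in the example above:

```python
# Parse the serialized `questions_answers` string back into (question, answer) pairs.
# Assumes each segment follows the "question: ..., answer: ..." format shown above.
def parse_qa(questions_answers):
    pairs = []
    for chunk in questions_answers.split(" | "):
        question, answer = chunk.split(", answer: ", 1)
        pairs.append((question.removeprefix("question: "), answer))
    return pairs

qa = parse_qa("question: Q1?, answer: A1 | question: Q2?, answer: A2")
print(qa)  # [('Q1?', 'A1'), ('Q2?', 'A2')]
```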
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9508| 1431 | 3050|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
fewshot-goes-multilingual/sk_csfd-movie-reviews | 2022-12-18T21:30:31.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sk",
"license:cc-by-sa-4.0",
"movie reviews",
"rat... | fewshot-goes-multilingual | null | null | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language:
- sk
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: CSFD movie reviews (Slovak)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- movie reviews
- rating prediction
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for CSFD movie reviews (Slovak)
## Dataset Description
The dataset contains user reviews from the Czech/Slovak movie database website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset has in total (train+validation+test) 30,000 reviews. The data is balanced - each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (just date, no time nor timezone)
- `comment_language`: language of the review (always "sk")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actress in the item
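The two rating fields are redundant encodings of the same value, so one can be derived from the other. A minimal sketch of the string-to-integer conversion, assuming `rating_str` always has the `N/5` form described above:

```python
# Convert the "N/5" rating string into its integer form.
# Assumes `rating_str` always follows the "N/5" pattern described above.
def rating_to_int(rating_str):
    return int(rating_str.split("/")[0])

print(rating_to_int("0/5"), rating_to_int("5/5"))  # 0 5
```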
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the website operator's terms and conditions when using the data.
|
noahkim/Kor_Jpn_Translation_Dataset | 2022-12-20T12:03:22.000Z | [
"task_categories:translation",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:other",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:kor",
"language:jpn",
"license:mit",
"region:us"
] | noahkim | null | null | null | 2 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- kor
- jpn
license:
- mit
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids:
- language-modeling
paperswithcode_id: null
pretty_name: Kor-Jpn-Translation
---
# Dataset Card for "Kor_Jpn_Translation_Dataset"
### Dataset Summary
This is a cleaned, easy-to-use version of the Korean-Japanese translation corpus provided by [AI-Hub](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=127).
- Provider: [AI-Hub](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=127)
- Title: Korean-Japanese bilingual corpus in the culture domain
- Domains: cultural heritage/local culture/K-Food, K-POP (Korean Wave)/pop culture & performance content, IT/computers/mobile, finance/stock market, society/labor/welfare, education, patents/technology, automobiles
- Size: 1.5 million sentence pairs
- Applications: language models, machine translation
- Languages: source language - Korean, target language - Japanese
### Supported Tasks and Leaderboards
- Translation
### Languages
- Korean (kor)
- Japanese (jpn)
## Dataset Structure
```
features:
- name: KOR
  dtype: string
- name: JPN
  dtype: string
splits:
- name: train
  num_bytes: 294787449
  num_examples: 840000
- name: val
  num_bytes: 88406929
  num_examples: 252000
- name: test
  num_bytes: 37964427
  num_examples: 108000
download_size: 289307354
dataset_size: 421158805
```
### Data Splits
|split|num_examples|num_bytes|
|----:|-----------:|--------:|
|train|840,000|294,787,449|
|val|252,000|88,406,929|
|test|108,000|37,964,427|
### Contributions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fewshot-goes-multilingual/cs_facebook-comments | 2022-12-20T21:56:09.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-3.0",
"region:us"
] | fewshot-goes-multilingual | null | null | null | 0 | 3 | ---
annotations_creators:
- found
language:
- cs
language_creators:
- found
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
pretty_name: Czech Facebook comments
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for Czech Facebook comments
## Dataset Description
The dataset contains user comments from Facebook. Each comment contains text and a sentiment label (positive/negative/neutral).
The dataset has in total (train+validation+test) 6,600 comments. The data is balanced.
## Dataset Features
Each sample contains:
- `comment_id`: unique string identifier of the comment.
- `sentiment_str`: string representation of the rating - "pozitivní" / "neutrální" / "negativní"
- `sentiment_int`: integer representation of the rating (1=positive, 0=neutral, -1=negative)
- `comment`: the string of the comment
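The mapping between the Czech sentiment strings and their integer codes can be sketched as a small lookup table, using the labels listed above:

```python
# Map the Czech sentiment strings to their integer codes, as described above.
SENTIMENT_TO_INT = {
    "pozitivní": 1,   # positive
    "neutrální": 0,   # neutral
    "negativní": -1,  # negative
}

print(SENTIMENT_TO_INT["pozitivní"])  # 1
```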
## Dataset Source
The data is a processed adaptation of [Facebook CZ Corpus](https://liks.fav.zcu.cz/sentiment/).
This adaptation is label-balanced.
|
Crosstyan/BPDataset | 2022-12-21T05:26:01.000Z | [
"license:openrail",
"region:us"
] | Crosstyan | null | null | null | 4 | 3 | ---
license: openrail
---
For the sake of full disclosure, I publish the dataset that I used to train [Crosstyan/BPModel](https://huggingface.co/Crosstyan/BPModel).
NSFW content is included. Watch with your parents if you don't feel comfortable about that.
|
diltdicker/romance_novel_data-2022 | 2023-01-07T21:40:31.000Z | [
"license:openrail",
"region:us"
] | diltdicker | null | null | null | 7 | 3 | ---
license: openrail
---
Dataset Summary
---
Collection of romance novels featuring `title`, `description`, and `genres`. Created with the intention of building a "Romance Novel Generator."
Data Fields
---
- `id` : unique integer identifying the book in the dataset
- `pub_month` : string indicating the month the book was published, in the form `YEAR_MONTH`
- `title` : title of the book
- `author` : the author's name, comma-separated (`last-name, first-name`)
- `isbn13` : 13-digit ISBN of the book (note: not all books have an ISBN)
- `description` : text description of the book; may contain quoted lines, a brief teaser of the plot, etc.
- `genres` : dictionary of all genres, with 1 or 0 indicating whether each genre is present
- `womens-fiction` : 1 or 0 indicating if genre is present
- `abuse` : 1 or 0 indicating if genre is present
- `accidental-pregnancy` : 1 or 0 indicating if genre is present
- `action-adventure` : 1 or 0 indicating if genre is present
- `actor-actress-dancer-model` : 1 or 0 indicating if genre is present
- `adoption` : 1 or 0 indicating if genre is present
- `adultery` : 1 or 0 indicating if genre is present
- `african-american` : 1 or 0 indicating if genre is present
- `alcoholism` : 1 or 0 indicating if genre is present
- `aliens` : 1 or 0 indicating if genre is present
- `alpha-hero` : 1 or 0 indicating if genre is present
- `alternative-history` : 1 or 0 indicating if genre is present
- `amateur-sleuth` : 1 or 0 indicating if genre is present
- `americana` : 1 or 0 indicating if genre is present
- `amish` : 1 or 0 indicating if genre is present
- `amnesia` : 1 or 0 indicating if genre is present
- `angels` : 1 or 0 indicating if genre is present
- `animals` : 1 or 0 indicating if genre is present
- `anthropologists-archeologists` : 1 or 0 indicating if genre is present
- `apocalypse` : 1 or 0 indicating if genre is present
- `arranged-marriage` : 1 or 0 indicating if genre is present
- `arthurian-legend` : 1 or 0 indicating if genre is present
- `asian-american` : 1 or 0 indicating if genre is present
- `astrology` : 1 or 0 indicating if genre is present
- `bbw-heroines` : 1 or 0 indicating if genre is present
- `bad-boy` : 1 or 0 indicating if genre is present
- `best-friends` : 1 or 0 indicating if genre is present
- `beta-hero` : 1 or 0 indicating if genre is present
- `biographical` : 1 or 0 indicating if genre is present
- `blackmail` : 1 or 0 indicating if genre is present
- `boarding-school` : 1 or 0 indicating if genre is present
- `captor-captive` : 1 or 0 indicating if genre is present
- `category-romance` : 1 or 0 indicating if genre is present
- `celebrities` : 1 or 0 indicating if genre is present
- `celts` : 1 or 0 indicating if genre is present
- `chefs-foodies` : 1 or 0 indicating if genre is present
- `chick-lit` : 1 or 0 indicating if genre is present
- `christian` : 1 or 0 indicating if genre is present
- `clean-&-wholesome` : 1 or 0 indicating if genre is present
- `clones` : 1 or 0 indicating if genre is present
- `comedy-humor` : 1 or 0 indicating if genre is present
- `coming-of-age` : 1 or 0 indicating if genre is present
- `contemporary-romance` : 1 or 0 indicating if genre is present
- `cowboys` : 1 or 0 indicating if genre is present
- `cozy-mystery` : 1 or 0 indicating if genre is present
- `crime` : 1 or 0 indicating if genre is present
- `dark-fantasy` : 1 or 0 indicating if genre is present
- `death-dying` : 1 or 0 indicating if genre is present
- `debutante-heiress` : 1 or 0 indicating if genre is present
- `demons` : 1 or 0 indicating if genre is present
- `disabilities` : 1 or 0 indicating if genre is present
- `divorce` : 1 or 0 indicating if genre is present
- `doctor-nurse` : 1 or 0 indicating if genre is present
- `dragons` : 1 or 0 indicating if genre is present
- `dystopian` : 1 or 0 indicating if genre is present
- `elves` : 1 or 0 indicating if genre is present
- `enemies-to-lovers` : 1 or 0 indicating if genre is present
- `epic-fantasy` : 1 or 0 indicating if genre is present
- `erotica` : 1 or 0 indicating if genre is present
- `espionage-spies-cia` : 1 or 0 indicating if genre is present
- `fairies-fae` : 1 or 0 indicating if genre is present
- `fairy-tales-folklore` : 1 or 0 indicating if genre is present
- `fake-relationship` : 1 or 0 indicating if genre is present
- `falsely-accused` : 1 or 0 indicating if genre is present
- `family-siblings` : 1 or 0 indicating if genre is present
- `famous-characters` : 1 or 0 indicating if genre is present
- `fantasy` : 1 or 0 indicating if genre is present
- `fantasy-romance` : 1 or 0 indicating if genre is present
- `feminism` : 1 or 0 indicating if genre is present
- `firefighters` : 1 or 0 indicating if genre is present
- `forced-proximity` : 1 or 0 indicating if genre is present
- `forensics` : 1 or 0 indicating if genre is present
- `friends-to-lovers` : 1 or 0 indicating if genre is present
- `general-fiction` : 1 or 0 indicating if genre is present
- `ghosts` : 1 or 0 indicating if genre is present
- `gothic` : 1 or 0 indicating if genre is present
- `graphic-novel` : 1 or 0 indicating if genre is present
- `guardian-ward` : 1 or 0 indicating if genre is present
- `hard-boiled` : 1 or 0 indicating if genre is present
- `heroic-fantasy-sword-&-sorcery` : 1 or 0 indicating if genre is present
- `hidden-identity` : 1 or 0 indicating if genre is present
- `hispanic-&-latino` : 1 or 0 indicating if genre is present
- `historical` : 1 or 0 indicating if genre is present
- `historical-mystery` : 1 or 0 indicating if genre is present
- `historical-romance` : 1 or 0 indicating if genre is present
- `holidays` : 1 or 0 indicating if genre is present
- `horror` : 1 or 0 indicating if genre is present
- `infidelity` : 1 or 0 indicating if genre is present
- `jane-austen` : 1 or 0 indicating if genre is present
- `jewish` : 1 or 0 indicating if genre is present
- `kidnapping` : 1 or 0 indicating if genre is present
- `kids-(12-&-under)` : 1 or 0 indicating if genre is present
- `kids:-middle-grade` : 1 or 0 indicating if genre is present
- `lgbtq` : 1 or 0 indicating if genre is present
- `law-enforcement` : 1 or 0 indicating if genre is present
- `lawyers` : 1 or 0 indicating if genre is present
- `legal-thriller` : 1 or 0 indicating if genre is present
- `literary` : 1 or 0 indicating if genre is present
- `magic` : 1 or 0 indicating if genre is present
- `magical-realism` : 1 or 0 indicating if genre is present
- `mail-order-brides` : 1 or 0 indicating if genre is present
- `manga` : 1 or 0 indicating if genre is present
- `marriage-of-convenience` : 1 or 0 indicating if genre is present
- `mashup` : 1 or 0 indicating if genre is present
- `mature-(18-&-over)` : 1 or 0 indicating if genre is present
- `may-december` : 1 or 0 indicating if genre is present
- `medical` : 1 or 0 indicating if genre is present
- `medical-thriller` : 1 or 0 indicating if genre is present
- `mermaids` : 1 or 0 indicating if genre is present
- `military` : 1 or 0 indicating if genre is present
- `mistaken-identity` : 1 or 0 indicating if genre is present
- `monsters` : 1 or 0 indicating if genre is present
- `motorcycle-club-bikers` : 1 or 0 indicating if genre is present
- `moviestv` : 1 or 0 indicating if genre is present
- `multicultural-&-interracial-romance` : 1 or 0 indicating if genre is present
- `music` : 1 or 0 indicating if genre is present
- `mystery` : 1 or 0 indicating if genre is present
- `mythology` : 1 or 0 indicating if genre is present
- `native-americans` : 1 or 0 indicating if genre is present
- `nautical` : 1 or 0 indicating if genre is present
- `navy-seals` : 1 or 0 indicating if genre is present
- `new-adult-(18-25)` : 1 or 0 indicating if genre is present
- `noir` : 1 or 0 indicating if genre is present
- `occult-&-supernatural` : 1 or 0 indicating if genre is present
- `office-romance` : 1 or 0 indicating if genre is present
- `opposites-attract` : 1 or 0 indicating if genre is present
- `orphans` : 1 or 0 indicating if genre is present
- `paranormal` : 1 or 0 indicating if genre is present
- `paranormal-romance` : 1 or 0 indicating if genre is present
- `pirates` : 1 or 0 indicating if genre is present
- `police-lawmen-fbi-agents` : 1 or 0 indicating if genre is present
- `police-procedural` : 1 or 0 indicating if genre is present
- `political` : 1 or 0 indicating if genre is present
- `political-thriller` : 1 or 0 indicating if genre is present
- `post-apocalyptic` : 1 or 0 indicating if genre is present
- `pregnancy` : 1 or 0 indicating if genre is present
- `private-investigator` : 1 or 0 indicating if genre is present
- `psychological-suspense` : 1 or 0 indicating if genre is present
- `rags-to-riches` : 1 or 0 indicating if genre is present
- `rakes` : 1 or 0 indicating if genre is present
- `reincarnation` : 1 or 0 indicating if genre is present
- `revenge` : 1 or 0 indicating if genre is present
- `robin-hood` : 1 or 0 indicating if genre is present
- `rock-stars` : 1 or 0 indicating if genre is present
- `romance` : 1 or 0 indicating if genre is present
- `romantic-elements` : 1 or 0 indicating if genre is present
- `romantic-suspense` : 1 or 0 indicating if genre is present
- `royalty` : 1 or 0 indicating if genre is present
- `saga` : 1 or 0 indicating if genre is present
- `schools` : 1 or 0 indicating if genre is present
- `science-fiction` : 1 or 0 indicating if genre is present
- `science-fiction-fantasy` : 1 or 0 indicating if genre is present
- `scottish-highlands` : 1 or 0 indicating if genre is present
- `second-chance-romance` : 1 or 0 indicating if genre is present
- `secret-baby` : 1 or 0 indicating if genre is present
- `serial-killers` : 1 or 0 indicating if genre is present
- `servants-slaves` : 1 or 0 indicating if genre is present
- `shakespeare` : 1 or 0 indicating if genre is present
- `sheikhs` : 1 or 0 indicating if genre is present
- `sherlock-holmes` : 1 or 0 indicating if genre is present
- `single-parent` : 1 or 0 indicating if genre is present
- `small-town` : 1 or 0 indicating if genre is present
- `space-opera` : 1 or 0 indicating if genre is present
- `speculative-fiction` : 1 or 0 indicating if genre is present
- `sports` : 1 or 0 indicating if genre is present
- `steampunk` : 1 or 0 indicating if genre is present
- `superheroes` : 1 or 0 indicating if genre is present
- `suspense` : 1 or 0 indicating if genre is present
- `tear-jerker` : 1 or 0 indicating if genre is present
- `technology` : 1 or 0 indicating if genre is present
- `terrorists` : 1 or 0 indicating if genre is present
- `thriller` : 1 or 0 indicating if genre is present
- `time-travel` : 1 or 0 indicating if genre is present
- `tortured-hero` : 1 or 0 indicating if genre is present
- `tortured-heroine` : 1 or 0 indicating if genre is present
- `traditional-british` : 1 or 0 indicating if genre is present
- `traditional-regency` : 1 or 0 indicating if genre is present
- `twins` : 1 or 0 indicating if genre is present
- `tycoons` : 1 or 0 indicating if genre is present
- `ugly-duckling` : 1 or 0 indicating if genre is present
- `unicorns` : 1 or 0 indicating if genre is present
- `urban-fantasy` : 1 or 0 indicating if genre is present
- `vampires` : 1 or 0 indicating if genre is present
- `vikings` : 1 or 0 indicating if genre is present
- `virgin-hero` : 1 or 0 indicating if genre is present
- `virgins` : 1 or 0 indicating if genre is present
- `visionary-&-metaphysical` : 1 or 0 indicating if genre is present
- `wagon-train` : 1 or 0 indicating if genre is present
- `werewolves-shapeshifters` : 1 or 0 indicating if genre is present
- `western` : 1 or 0 indicating if genre is present
- `widow-widower` : 1 or 0 indicating if genre is present
- `witch-warlock-mage-wizard` : 1 or 0 indicating if genre is present
- `women-sleuths` : 1 or 0 indicating if genre is present
- `young-adult-teens` : 1 or 0 indicating if genre is present
- `zombies` : 1 or 0 indicating if genre is present
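Since `genres` is a dictionary of 0/1 flags, a common first step is to reduce it to the list of genres actually present in a book. A minimal sketch (the example dictionary below is a small hypothetical subset of the full genre list):

```python
# Reduce the 0/1 genre flags to the list of genres that are present.
def active_genres(genres):
    return [name for name, flag in genres.items() if flag == 1]

example = {"romance": 1, "vampires": 1, "western": 0}
print(active_genres(example))  # ['romance', 'vampires']
```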
Languages
---
- en |
bond005/rulibrispeech | 2023-01-18T19:38:48.000Z | [
"region:us"
] | bond005 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 11165185580.744
num_examples: 54472
- name: test
num_bytes: 306649969.0
num_examples: 1352
- name: validation
num_bytes: 321842480.0
num_examples: 1400
download_size: 10689335725
dataset_size: 11793678029.744
---
# Dataset Card for "rulibrispeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
larrylawl/opus | 2023-01-17T03:03:16.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:translation",
"parallel-corpus",
"region:us"
] | larrylawl | Downloads OPUS data using `opustools`. | null | null | 0 | 3 | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
- expert-generated
license: []
multilinguality:
- translation
pretty_name: opus
size_categories: []
source_datasets: []
tags:
- parallel-corpus
task_categories:
- translation
task_ids: []
---
# Dataset Card for [opus]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
**Disclaimer.** Loading this dataset is slow, so it may not be feasible at scale. I'd suggest using the other OPUS datasets on Huggingface, which each load a specific corpus.
Loads [OPUS](https://opus.nlpl.eu/) as a HuggingFace dataset. OPUS is an open parallel corpus covering 700+ languages and 1100+ datasets.
Given a `src` and `tgt` language, this repository can load *all* available parallel corpora. To my knowledge, the other OPUS datasets on Huggingface each load a specific corpus.
**Requirements**.
```
pip install pandas
# pip install my fork of `opustools`
git clone https://github.com/larrylawl/OpusTools.git
pip install -e OpusTools/opustools_pkg
```
**Example Usage**.
```
# args follow `opustools`: https://pypi.org/project/opustools/
from datasets import load_dataset

src = "en"
tgt = "id"
download_dir = "data"  # dir to save downloaded files
corpus = "bible-uedin"  # corpus name. Leave as `None` to download all available corpora for the src-tgt pair.
dataset = load_dataset("larrylawl/opus",
                       src=src,
                       tgt=tgt,
                       download_dir=download_dir,
                       corpus=corpus)
```
**Disclaimer**.
This repository is still in active development. Do make a PR if there are any issues!
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Available languages can be viewed on the [OPUS API](https://opus.nlpl.eu/opusapi/?languages=True)
## Dataset Structure
### Data Instances
```
{'src': 'In the beginning God created the heavens and the earth .',
'tgt': 'Pada mulanya , waktu Allah mulai menciptakan alam semesta'}
```
### Data Fields
```
features = {
"src": datasets.Value("string"),
"tgt": datasets.Value("string"),
}
```
### Data Splits
Merged all data into train split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@larrylawl](https://larrylawl.github.io/) for adding this dataset. |
jordyvl/RVL-CDIP-N | 2023-01-02T14:25:47.000Z | [
"license:cc-by-3.0",
"region:us"
] | jordyvl | null | null | null | 1 | 3 | ---
license: cc-by-3.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': budget
'1': email
'2': form
'3': handwritten
'4': invoice
'5': letter
'6': memo
'7': news_article
'8': questionnaire
'9': resume
'10': scientific_publication
'11': specification
splits:
- name: test
num_bytes: 2272995060.864
num_examples: 1002
download_size: 544832160
dataset_size: 2272995060.864
---
This dataset was created in https://openreview.net/pdf?id=uDlkiCI5N7Y
The original source is here: https://drive.google.com/drive/folders/1VDnwRhmguvhKUCZ0_nv54RMGgqfYHGfz
Many thanks to Stefan Larson! |
joelniklaus/mining_legal_arguments_agent | 2023-01-02T20:51:41.000Z | [
"license:apache-2.0",
"arxiv:2208.06178",
"region:us"
] | joelniklaus | null | null | null | 1 | 3 | ---
license: apache-2.0
---
# Dataset Card for MiningLegalArguments
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/trusthlt/mining-legal-arguments)
- **Repository:**
- **Paper:** [ArXiv](https://arxiv.org/pdf/2208.06178.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
|
joelniklaus/mining_legal_arguments_argType | 2023-01-02T20:51:23.000Z | [
"license:apache-2.0",
"arxiv:2208.06178",
"region:us"
] | joelniklaus | null | null | null | 2 | 3 | ---
license: apache-2.0
---
# Dataset Card for MiningLegalArguments
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/trusthlt/mining-legal-arguments)
- **Repository:**
- **Paper:** [ArXiv](https://arxiv.org/pdf/2208.06178.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
|
DavidVivancos/MindBigData2022_Imagenet_IN | 2023-01-03T21:16:12.000Z | [
"license:odbl",
"region:us"
] | DavidVivancos | null | null | null | 0 | 3 | ---
license: odbl
---
|
and-effect/mdk_gov_data_titles_clf | 2023-05-25T12:43:42.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:de",
"license:cc-by-4.0",
"region:us"
] | and-effect | null | null | null | 1 | 3 | ---
annotations_creators: crowdsourced
language_creators: other
language: de
multilinguality: monolingual
size_categories:
- 1K<n<10K
source_datasets: extended
task_categories:
- text-classification
pretty_name: GOVDATA dataset titles labelled
license: cc-by-4.0
---
# Dataset Card for MDK
This dataset was created as part of the [Bertelsmann Foundation's](https://www.bertelsmann-stiftung.de/de/startseite)
[Musterdatenkatalog (MDK)](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) project. The MDK provides an overview of Open Data in municipalities in Germany. It is intended to help municipalities in Germany, as well as data analysts and journalists, to get an overview of the topics and the extent to which cities have already published data sets.
## Dataset Description
### Dataset Summary
The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from [GOVDATA](https://www.govdata.de/). GovData is a data portal that aims to make cities' data available in a standardized way.
The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the datasets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category comprises two levels:
- Level 1: "Thema" (topic)

- Level 2: "Bezeichnung" (label).
The first dash divides the levels. For example:

You can find an interactive view of the taxonomy with all labels [here](https://huggingface.co/spaces/and-effect/Musterdatenkatalog).
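Since the first dash separates the two taxonomy levels, a label string can be split programmatically. A minimal sketch (the helper name is illustrative and not part of the dataset):

```python
def split_label(labels_name):
    """Split a taxonomy label into its two levels.

    The card states the first dash separates "Thema" (level 1)
    from "Bezeichnung" (level 2).
    """
    thema, _, bezeichnung = labels_name.partition(" - ")
    return thema, bezeichnung

print(split_label("Verkehr - KFZ - Messung"))
# -> ('Verkehr', 'KFZ - Messung')
```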
The repository contains a small and a large version of the data. The small version is intended for testing purposes only; the large dataset contains all 1258 entries. Both versions are split into a training and a test dataset. In addition, the large dataset folder contains a separately annotated validation dataset, which we created for evaluating the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training datasets.
### Languages
The language data is German.
## Dataset Structure
### Data Fields
| dataset | size |
|-----|-----|
| small/train | 18.96 KB |
| small/test | 6.13 KB |
| large/train | 517.77 KB |
| large/test | 118.66 KB |
An example looks as follows:
```json
{
"doc_id": "a063d3b7-4c09-421e-9849-073dc8939e76",
"title": "Dienstleistungen Alphabetisch sortiert April 2019",
"description": "CSV-Datei mit allen Dienstleistungen der Kreisverwaltung Kleve. Sortiert nach AlphabetStand 01.04.2019",
"labels_name": "Sonstiges - Sonstiges",
"labels": 166
}
```
The data fields are the same among all splits:
- doc_id (uuid): identifier for each document
- title (str): dataset title from GOVDATA
- description (str): description of the dataset
- labels_name (str): annotation with labels from taxonomy
- labels (int): labels indexed from 0 to 250
### Data Splits
| dataset_name | dataset_splits | train_size | test_size | validation_size
|-----|-----|-----|-----|-----|
| dataset_large | train, test, validation | 1009 | 249 | 101
| dataset_small | train, test | 37 | 13 | None
## Dataset Creation
The dataset was created through multiple manual annotation rounds.
### Source Data
The data comes from [GOVDATA](https://www.govdata.de/), an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments. Their aim is to make data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance, to international topics, health, education and science and technology. [GOVDATA](https://www.govdata.de/) offers a [CKAN API](https://ckan.govdata.de/) to make requests and provides metadata for each data entry.
#### Initial Data Collection and Normalization
Several sources were used for the annotation process. A sample was collected from [GOVDATA](https://www.govdata.de/) with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) that contain older data from [GOVDATA](https://www.govdata.de/). Some of the datasets from the old [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.
### Annotations
#### Annotation process
The data was annotated in four rounds plus one additional test round. In each round, a percentage of the data was allocated to all annotators in order to calculate the inter-annotator agreement using Cohen's Kappa.
The following table shows the results of the annotation rounds:
| | **Cohens Kappa** | **Number of Annotators** | **Number of Documents** |
| ------------------ | :--------------: | ------------------------ | ----------------------- |
| **Test Round** | .77 | 6 | 50 |
| **Round 1** | .41 | 2 | 120 |
| **Round 2** | .76 | 4 | 480 |
| **Round 3** | .71 | 3 | 420 |
| **Round 4** | .87 | 2 | 416 |
| **Validation set** | - | 1 | 177 |
In addition, a validation set was generated by the dataset curators.
#### Who are the annotators?
Annotators are all employees of [&effect data solutions GmbH](https://www.and-effect.com/). In advance of the taxonomy development and the annotation, the taxonomy as well as rules and problems in the assignment of datasets were discussed in two workshops with experts and representatives of the open data community and local governments, and with the project members of the [Musterdatenkatalog](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) at the Bertelsmann Foundation. On this basis, the [&effect](https://www.and-effect.com/) employees were instructed in the annotation by the dataset curators.
## Considerations for Using the Data
The dataset for the annotation process was generated by sampling from [GOVDATA](https://www.govdata.de/) and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.
### Social Impact of Dataset
Since 2017, the German government has been promoting systematic and free access to public administration data, starting with the first laws on open data in municipalities. This is intended to contribute to the development of a [knowledge society](https://www.verwaltung-innovativ.de/DE/Startseite/startseite_node.html). Categorizing the open data of cities in a standardized and detailed taxonomy supports this process of making municipal data freely, openly and in a structured form accessible.
### Discussion of Biases (non-ethical)
The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example, entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Manual selection of data was also used, but data entries could not be found for all previously defined concepts. However, at least one data entry is available for 95% of the concepts.
## Additional Information
### Dataset Curators
Friederike Bauer
Rahkakavee Baskaran
### Licensing Information
CC BY 4.0 |
irds/trec-arabic_ar2002 | 2023-01-05T03:51:37.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/trec-arabic",
"region:us"
] | irds | null | null | null | 0 | 3 | ---
pretty_name: '`trec-arabic/ar2002`'
viewer: false
source_datasets: ['irds/trec-arabic']
task_categories:
- text-retrieval
---
# Dataset Card for `trec-arabic/ar2002`
The `trec-arabic/ar2002` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic/ar2002).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=38,432
- For `docs`, use [`irds/trec-arabic`](https://huggingface.co/datasets/irds/trec-arabic)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-arabic_ar2002', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/trec-arabic_ar2002', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Gey2002Arabic,
title={The TREC-2002 Arabic/English CLIR Track},
author={Fredric Gey and Douglas Oard},
booktitle={TREC},
year={2002}
}
@misc{Graff2001Arabic,
title={Arabic Newswire Part 1 LDC2001T55},
author={Graff, David, and Walker, Kevin},
year={2001},
url={https://catalog.ldc.upenn.edu/LDC2001T55},
publisher={Linguistic Data Consortium}
}
```
|
irds/trec-cast_v1 | 2023-01-05T04:03:19.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | null | 1 | 3 | ---
pretty_name: '`trec-cast/v1`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-cast/v1`
The `trec-cast/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=38,622,444
This dataset is used by: [`trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020), [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-cast_v1', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2019Cast,
title={CAsT 2019: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2019}
}
```
|
thsant/wgisd | 2023-01-05T17:24:09.000Z | [
"task_categories:object-detection",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-nc-4.0",
"agriculture",
"viticulture",
"fruit detection",
"arxiv:1803.09010",
... | thsant | null | null | null | 1 | 3 | ---
viewer: false
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: "Embrapa Wine Grape Instance Segmentation Dataset \u2013 Embrapa WGISD "
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- agriculture
- viticulture
- fruit detection
task_categories:
- object-detection
task_ids: []
---
Embrapa Wine Grape Instance Segmentation Dataset – Embrapa WGISD
================================================================
[](https://zenodo.org/badge/latestdoi/199083745)
This is a detailed description of the dataset, a
*datasheet for the dataset* as proposed by [Gebru *et al.*](https://arxiv.org/abs/1803.09010)
Motivation for Dataset Creation
-------------------------------
### Why was the dataset created?
Embrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created
to provide images and annotation to study *object detection and instance
segmentation* for image-based monitoring and field robotics in
viticulture. It provides instances from five different grape varieties
taken on field. These instances shows variance in grape pose,
illumination and focus, including genetic and phenological variations
such as shape, color and compactness.
### What (other) tasks could the dataset be used for?
Possible uses include relaxations of the instance segmentation problem:
classification (Is a grape in the image?), semantic segmentation (What
are the "grape pixels" in the image?), object detection (Where are
the grapes in the image?), and counting (How many berries are there
per cluster?). The WGISD can also be used in grape variety
identification.
### Who funded the creation of the dataset?
The building of the WGISD dataset was supported by the Embrapa SEG
Project 01.14.09.001.05.04, *Image-based metrology for Precision
Agriculture and Phenotyping*, and the CNPq PIBIC Program (grants
161165/2017-6 and 125044/2018-6).
Dataset Composition
-------------------
### What are the instances?
Each instance consists in a RGB image and an annotation describing grape
clusters locations as bounding boxes. A subset of the instances also
contains binary masks identifying the pixels belonging to each grape
cluster. Each image presents at least one grape cluster. Some grape
clusters can appear far at the background and should be ignored.
### Are relationships between instances made explicit in the data?
File names prefixes identify the variety observed in the instance.
| Prefix | Variety |
| --- | --- |
| CDY | *Chardonnay* |
| CFR | *Cabernet Franc* |
| CSV | *Cabernet Sauvignon*|
| SVB | *Sauvignon Blanc* |
| SYH | *Syrah* |
### How many instances of each type are there?
The dataset consists of 300 images containing 4,432 grape clusters
identified by bounding boxes. A subset of 137 images also contains
binary masks identifying the pixels of each cluster. This means that,
of the 4,432 clusters, 2,020 present binary masks for instance
segmentation, as summarized in the following table.
|Prefix | Variety | Date | Images | Boxed clusters | Masked clusters|
| --- | --- | --- | --- | --- | --- |
|CDY | *Chardonnay* | 2018-04-27 | 65 | 840 | 308|
|CFR | *Cabernet Franc* | 2018-04-27 | 65 | 1,069 | 513|
|CSV | *Cabernet Sauvignon* | 2018-04-27 | 57 | 643 | 306|
|SVB | *Sauvignon Blanc* | 2018-04-27 | 65 | 1,316 | 608|
|SYH | *Syrah* | 2017-04-27 | 48 | 563 | 285|
|Total | | | 300 | 4,431 | 2,020|
*General information about the dataset: the grape varieties and the associated identifying prefix, the date of image capture on field, number of images (instances) and the identified grapes clusters.*
#### Contributions
Another subset of 111 images with separated and non-occluded grape
clusters was annotated with point annotations for every berry by F. Khoroshevsky and S. Khoroshevsky ([Khoroshevsky *et al.*, 2021](https://doi.org/10.1007/978-3-030-65414-6_19)). These annotations are available in `test_berries.txt`, `train_berries.txt` and `val_berries.txt`.
|Prefix | Variety | Berries |
| --- | --- | --- |
|CDY | *Chardonnay* | 1,102 |
|CFR | *Cabernet Franc* | 1,592 |
|CSV | *Cabernet Sauvignon* | 1,712 |
|SVB | *Sauvignon Blanc* | 1,974 |
|SYH | *Syrah* | 969 |
|Total | | 7,349 |
*Berries annotations by F. Khoroshevsky and S. Khoroshevsky.*
Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66))
provided point-based annotations for berries in all 300 images, summing 187,374 berries.
These annotations are available in `contrib/berries`.
Daniel Angelov (@23pointsNorth) provided a version for the annotations in [COCO format](https://cocodataset.org/#format-data). See `coco_annotations` directory.
### What data does each instance consist of?
Each instance contains a 8-bits RGB image and a text file containing one
bounding box description per line. These text files follows the "YOLO
format"
CLASS CX CY W H
*class* is an integer defining the object class – the dataset presents
only the grape class that is numbered 0, so every line starts with this
“class zero” indicator. The center of the bounding box is the point
*(c_x, c_y)*, represented as float values because this format normalizes
the coordinates by the image dimensions. To get the absolute position,
use *(2048 c_x, 1365 c_y)*. The bounding box dimensions are
given by *W* and *H*, also normalized by the image size.
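The denormalization described above can be sketched as follows. The function name is illustrative, and the default dimensions correspond to the rescaled REBEL resolution; Z2 images would use a height of 1536 instead:

```python
def yolo_to_absolute(line, img_w=2048, img_h=1365):
    """Convert one YOLO-format annotation line to absolute pixel coordinates.

    Returns (class_id, x_min, y_min, x_max, y_max).
    """
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h   # box center, in pixels
    w, h = float(w) * img_w, float(h) * img_h       # box size, in pixels
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

print(yolo_to_absolute("0 0.5 0.5 0.25 0.25"))
# -> (0, 768.0, 511.875, 1280.0, 853.125)
```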
The instances presenting mask data for instance segmentation contain
files presenting the `.npz` extension. These files are compressed
archives for NumPy $n$-dimensional arrays. Each array is a
*H X W X n_clusters* three-dimensional array where
*n_clusters* is the number of grape clusters observed in the
image. After assigning the NumPy array to a variable `M`, the mask for
the *i*-th grape cluster can be found in `M[:,:,i]`. The *i*-th mask
corresponds to the *i*-th line in the bounding boxes file.
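Loading and indexing the masks might look like the following sketch. It builds a small synthetic archive rather than a real WGISD file, and the array key is read from the archive instead of being assumed:

```python
import numpy as np

# Synthetic stand-in for a WGISD .npz archive: a H x W x n_clusters
# boolean array (real files are loaded the same way with np.load).
H, W, n_clusters = 4, 6, 2
M = np.zeros((H, W, n_clusters), dtype=bool)
M[1:3, 2:5, 0] = True  # pixels belonging to the first cluster

np.savez_compressed("example_masks.npz", M)
loaded = np.load("example_masks.npz")
masks = loaded[loaded.files[0]]
first_cluster = masks[:, :, 0]   # mask matching line 0 of the .txt file
print(int(first_cluster.sum()))  # -> 6
```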
The dataset also includes the original image files, presenting the full
original resolution. The normalized annotation for bounding boxes allows
easy identification of clusters in the original images, but the mask
data will need to be properly rescaled if users wish to work on the
original full resolution.
#### Contributions
*For `test_berries.txt` , `train_berries.txt` and `val_berries.txt`*:
The berry annotations follow a similar notation, the only exception
being that each text file (train/val/test) also includes the instance
file name.
FILENAME CLASS CX CY
where *filename* stands for instance file name, *class* is an integer
defining the object class (0 for all instances) and the point *(c_x, c_y)*
indicates the absolute position of each "dot" indicating a single berry in
a well defined cluster.
*For `contrib/berries`*:
The annotations provide the *(x, y)* point position for each berry center, in a tabular form:
X Y
These point-based annotations can be easily loaded using, for example, `numpy.loadtxt`. See `WGISD.ipynb` for examples.
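A minimal sketch of loading such a point file with `numpy.loadtxt`, using an in-memory stand-in instead of a real `contrib/berries` file:

```python
import io
import numpy as np

# Small in-memory stand-in for one berry annotation file,
# one "X Y" berry center per line.
berries_txt = io.StringIO("512.0 340.5\n1024.0 700.0\n150.25 90.75\n")
points = np.loadtxt(berries_txt)
print(points.shape)  # -> (3, 2)
```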
[Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version for the annotations in [COCO format](https://cocodataset.org/#format-data). See `coco_annotations` directory. Also see [COCO format](https://cocodataset.org/#format-data) for the JSON-based format.
### Is everything included or does the data rely on external resources?
Everything is included in the dataset.
### Are there recommended data splits or evaluation measures?
The dataset comes with specified train/test splits. The splits are found
in lists stored as text files. There are also lists referring only to
instances presenting binary masks.
| | Images | Boxed clusters | Masked clusters |
| ---------------------| -------- | ---------------- | ----------------- |
| Training/Validation | 242 | 3,581 | 1,612 |
| Test | 58 | 850 | 408 |
| Total | 300 | 4,431 | 2,020 |
*Dataset recommended split.*
Standard measures from the information retrieval and computer vision
literature should be employed: precision and recall, *F1-score* and
average precision as seen in [COCO](http://cocodataset.org)
and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC).
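These detection metrics are typically computed on top of the intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal IoU sketch for axis-aligned boxes given as `(x_min, y_min, x_max, y_max)`:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned bounding boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 6))  # -> 0.142857
```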
### What experiments were initially run on this dataset?
The first experiments run on this dataset are described in [*Grape detection, segmentation and tracking using deep neural networks and three-dimensional association*](https://arxiv.org/abs/1907.11819) by Santos *et al.*. See also the following video demo:
[](http://www.youtube.com/watch?v=1Hji3GS4mm4 "Grape detection, segmentation and tracking")
**UPDATE**: The JPG files corresponding to the video frames in the [video demo](http://www.youtube.com/watch?v=1Hji3GS4mm4) are now available in the `extras` directory.
Data Collection Process
-----------------------
### How was the data collected?
Images were captured at the vineyards of Guaspari Winery, located at
Espírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon
-46.741618). The winery staff performs dual pruning: one for shaping
(after previous year harvest) and one for production, resulting in
canopies of lower density. The image capturing was realized in April
2017 for *Syrah* and in April 2018 for the other varieties.
A Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were
used to capture the images. The cameras were located between the vines
lines, facing the vines at distances around 1-2 meters. The EOS REBEL
T3i camera captured 240 images, including all *Syrah* pictures. The Z2
smartphone grabbed 60 images covering all varieties except *Syrah* . The
REBEL images were scaled to *2048 X 1365* pixels and the Z2 images
to *2048 X 1536* pixels. More data about the capture process can be found
in the Exif data found in the original image files, included in the dataset.
### Who was involved in the data collection process?
T. T. Santos, A. A. Santos and S. Avila captured the images in
field. T. T. Santos, L. L. de Souza and S. Avila performed the
annotation for bounding boxes and masks.
### How was the data associated with each instance acquired?
The rectangular bounding boxes identifying the grape clusters were
annotated using the [`labelImg` tool](https://github.com/tzutalin/labelImg).
The clusters can be under
severe occlusion by leaves, trunks or other clusters. Considering the
absence of 3-D data and on-site annotation, the clusters locations had
to be defined using only a single-view image, so some clusters could be
incorrectly delimited.
A subset of the bounding boxes was selected for mask annotation, using a
novel tool developed by the authors and presented in this work. This
interactive tool lets the annotator mark grape and background pixels
using scribbles, and a graph matching algorithm developed by [Noma *et al.*](https://doi.org/10.1016/j.patcog.2011.08.017)
is employed to perform image segmentation to every pixel in the bounding
box, producing a binary mask representing grape/background
classification.
#### Contributions
A subset of the bounding boxes of well-defined (separated and non-occluded
clusters) was used for "dot" (berry) annotations of each grape to
serve for counting applications as described in [Khoroshevsky *et
al.*](https://doi.org/10.1007/978-3-030-65414-6_19). The berries
annotation was performed by F. Khoroshevsky and S. Khoroshevsky.
Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66))
provided point-based annotations for berries in all 300 images, summing
187,374 berries. These annotations are available in `contrib/berries`.
Deng *et al.* employed [Huawei ModelArt](https://www.huaweicloud.com/en-us/product/modelarts.html),
for their annotation effort.
Data Preprocessing
------------------
### What preprocessing/cleaning was done?
The following steps were taken to process the data:
1. Bounding boxes were annotated for each image using the `labelImg`
tool.
2. Images were resized to *W = 2048* pixels. This resolution proved to
be practical to mask annotation, a convenient balance between grape
detail and time spent by the graph-based segmentation algorithm.
3. A randomly selected subset of images were employed on mask
annotation using the interactive tool based on graph matching.
4. All binaries masks were inspected, in search of pixels attributed to
more than one grape cluster. The annotator assigned the disputed
pixels to the most likely cluster.
5. The bounding boxes were fitted to the masks, which provided a fine
tuning of grape clusters locations.
### Was the “raw” data saved in addition to the preprocessed data?
The original resolution images, containing the Exif data provided by the
cameras, is available in the dataset.
Dataset Distribution
--------------------
### How is the dataset distributed?
The dataset is [available at GitHub](https://github.com/thsant/wgisd).
### When will the dataset be released/first distributed?
The dataset was released in July, 2019.
### What license (if any) is it distributed under?
The data is released under [**Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license)**](https://creativecommons.org/licenses/by-nc/4.0/).
There is a request to cite the corresponding paper if the dataset is used. For
commercial use, contact Embrapa Agricultural Informatics business office.
### Are there any fees or access/export restrictions?
There are no fees or restrictions. For commercial use, contact Embrapa
Agricultural Informatics business office.
Dataset Maintenance
-------------------
### Who is supporting/hosting/maintaining the dataset?
The dataset is hosted at Embrapa Agricultural Informatics and all
comments or requests can be sent to [Thiago T. Santos](https://github.com/thsant)
(maintainer).
### Will the dataset be updated?
There are no scheduled updates.
* In May, 2022, [Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version for the annotations in [COCO format](https://cocodataset.org/#format-data). See `coco_annotations` directory.
* In February, 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berries ("dot")
annotations.
* In April, 2021, Geng Deng provided point annotations for berries. T. Santos converted Deng's XML files to
easier-to-load text files now available in `contrib/berries` directory.
In case of further updates, releases will be properly tagged at GitHub.
### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?
Contributors should contact the maintainer by e-mail.
### No warranty
The maintainers and their institutions are *exempt from any liability,
judicial or extrajudicial, for any losses or damages arising from the
use of the data contained in the image database*.
|
hsong1101/news_summarization | 2023-01-05T22:22:21.000Z | [
"license:pddl",
"region:us"
] | hsong1101 | null | null | null | 0 | 3 | ---
license: pddl
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 4643521852
num_examples: 696389
- name: test
num_bytes: 1160885464
num_examples: 174098
download_size: 978222798
dataset_size: 5804407316
---
|
pinkmooncake/rico-screen2words | 2023-01-07T04:18:11.000Z | [
"region:us"
] | pinkmooncake | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: test
num_bytes: 454423304.26
num_examples: 4310
- name: dev
num_bytes: 246957743.116
num_examples: 2364
- name: train
num_bytes: 1737030544.084
num_examples: 15743
download_size: 1897987283
dataset_size: 2438411591.46
---
# Dataset Card for "rico-screen2words"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DFKI-SLT/gids | 2023-01-11T10:06:07.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:1804... | DFKI-SLT | Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus. | @inproceedings{bassignana-plank-2022-crossre,
title = "Cross{RE}: A {C}ross-{D}omain {D}ataset for {R}elation {E}xtraction",
author = "Bassignana, Elisa and Plank, Barbara",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
year = "2022",
publisher = "Association for Computational Linguistics"
} | null | 0 | 3 | ---
annotations_creators:
- other
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Google-IISc Distant Supervision (GIDS) dataset for distantly-supervised
relation extraction
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: gids
features:
- name: sentence
dtype: string
- name: subj_id
dtype: string
- name: obj_id
dtype: string
- name: subj_text
dtype: string
- name: obj_text
dtype: string
- name: relation
dtype:
class_label:
names:
'0': NA
'1': /people/person/education./education/education/institution
'2': /people/person/education./education/education/degree
'3': /people/person/place_of_birth
'4': /people/deceased_person/place_of_death
splits:
- name: train
num_bytes: 5088421
num_examples: 11297
- name: validation
num_bytes: 844784
num_examples: 1864
- name: test
num_bytes: 2568673
num_examples: 5663
download_size: 8941490
dataset_size: 8501878
- config_name: gids_formatted
features:
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: relation
dtype:
class_label:
names:
'0': NA
'1': /people/person/education./education/education/institution
'2': /people/person/education./education/education/degree
'3': /people/person/place_of_birth
'4': /people/deceased_person/place_of_death
splits:
- name: train
num_bytes: 7075362
num_examples: 11297
- name: validation
num_bytes: 1173957
num_examples: 1864
- name: test
num_bytes: 3573706
num_examples: 5663
download_size: 8941490
dataset_size: 11823025
---
# Dataset Card for "gids"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [RE-DS-Word-Attention-Models](https://github.com/SharmisthaJat/RE-DS-Word-Attention-Models/tree/master/Data/GIDS)
- **Paper:** [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
### Dataset Summary
The Google-IISc Distant Supervision (GIDS) dataset is a dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus.
See the paper for full details: [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
Note:
- There is a formatted version that you can load with `datasets.load_dataset('DFKI-SLT/gids', name='gids_formatted')`. This version is tokenized with spaCy, removes the underscores in the entities, and provides entity offsets.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### gids
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 8.5 MB
An example of 'train' looks as follows:
```json
{
"sentence": "War as appropriate. Private Alfred James_Smurthwaite Sample. 26614. 2nd Battalion Yorkshire Regiment. Son of Edward James Sample, of North_Ormesby , Yorks. Died 2 April 1917. Aged 29. Born Ormesby, Enlisted Middlesbrough. Buried BUCQUOY ROAD CEMETERY, FICHEUX. Not listed on the Middlesbrough War Memorial Private Frederick Scott. 46449. 4th Battalion Yorkshire Regiment. Son of William and Maria Scott, of 25, Aspinall St., Heywood, Lancs. Born at West Hartlepool. Died 27 May 1918. Aged 24.",
"subj_id": "/m/02qt0sv",
"obj_id": "/m/0fnhl9",
"subj_text": "James_Smurthwaite",
"obj_text": "North_Ormesby",
"relation": 4
}
```
#### gids_formatted
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
An example of 'train' looks as follows:
```json
{
"token": ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.", "Crisp", "Coyle", "opened", "in", "1951", ".", "Stoffey", ",", "a", "Maricopa", "County", "/", "Phoenix", "city", "resident", "and", "longtime", "customer", ",", "bought", "the", "business", "in", "2011", ",", "when", "then", "owners", "were", "facing", "closure", ".", "He", "renovated", "the", "diner", "is", "interior", ",", "increased", "training", "for", "staff", "and", "expanded", "the", "menu", "."],
"subj_start": 6,
"subj_end": 9,
"obj_start": 17,
"obj_end": 22,
"relation": 4
}
```
### Data Fields
The data fields are the same among all splits.
#### gids
- `sentence`: the sentence, a `string` feature.
- `subj_id`: the id of the relation subject mention, a `string` feature.
- `obj_id`: the id of the relation object mention, a `string` feature.
- `subj_text`: the text of the relation subject mention, a `string` feature.
- `obj_text`: the text of the relation object mention, a `string` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
#### gids_formatted
- `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
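Since the end offsets are exclusive, mention strings can be recovered with a plain list slice. A minimal sketch using the `gids_formatted` training example shown earlier:

```python
# Recover entity mention strings from token offsets (end indices are exclusive).
# Tokens are taken from the gids_formatted example above, truncated for brevity.
token = ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.", "Crisp",
         "Coyle", "opened", "in", "1951", ".", "Stoffey", ",", "a", "Maricopa",
         "County", "/", "Phoenix", "city", "resident"]
subj_start, subj_end = 6, 9
obj_start, obj_end = 17, 22

subj_mention = " ".join(token[subj_start:subj_end])
obj_mention = " ".join(token[obj_start:obj_end])
print(subj_mention)  # Mary D. Crisp
print(obj_mention)   # Maricopa County / Phoenix city
```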
### Data Splits
| | Train | Dev | Test |
|------|-------|------|------|
| GIDS | 11297 | 1864 | 5663 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1804-06987,
author = {Sharmistha Jat and
Siddhesh Khandelwal and
Partha P. Talukdar},
title = {Improving Distantly Supervised Relation Extraction using Word and
Entity Based Attention},
journal = {CoRR},
volume = {abs/1804.06987},
year = {2018},
url = {http://arxiv.org/abs/1804.06987},
eprinttype = {arXiv},
eprint = {1804.06987},
timestamp = {Fri, 15 Nov 2019 17:16:02 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-06987.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
Poulpidot/FrenchHateSpeechSuperset | 2023-02-04T21:17:04.000Z | [
"license:unknown",
"doi:10.57967/hf/0284",
"region:us"
] | Poulpidot | null | null | null | 0 | 3 | ---
license: unknown
---
### FrenchHateSpeechSuperset
This dataset is a superset of multiple datasets covering hate speech, harassment, sexism, racism, and related messages from various platforms.
Included datasets :
- MLMA dataset
- CAA dataset
- FTR dataset
- "An Annotated Corpus for Sexism Detection in French Tweets" dataset
- UC-Berkeley-Measuring-Hate-Speech dataset (translated from English)
#### References
```
@inproceedings{chiril2020annotated,
title={An Annotated Corpus for Sexism Detection in French Tweets},
author={Chiril, Patricia and Moriceau, V{\'e}ronique and Benamara, Farah and Mari, Alda and Origgi, Gloria and Coulomb-Gully, Marl{\`e}ne},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={1397--1403},
year={2020}
}
```
```
@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,
title = "Multilingual and Multi-Aspect Hate Speech Analysis",
author = "Ousidhoum, Nedjma
and Lin, Zizheng
and Zhang, Hongming
and Song, Yangqiu
and Yeung, Dit-Yan",
booktitle = "Proceedings of EMNLP",
year = "2019",
publisher = "Association for Computational Linguistics",
}
```
```
Vanetik, N.; Mimoun, E. Detection of Racist Language in French Tweets. Information 2022, 13, 318. https://doi.org/10.3390/info13070318
```
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
```
Anaïs Ollagnier, Elena Cabrio, Serena Villata, Catherine Blaya. CyberAgressionAdo-v1: a Dataset of Annotated Online Aggressions in French Collected through a Role-playing Game. Language Resources and Evaluation Conference, Jun 2022, Marseille, France. ⟨hal-03765860⟩
```
### Translation
French hate speech datasets are quite rare. To augment the dataset, messages from other languages (currently English only) have been integrated.
To integrate datasets in other languages, machine translation (MT) models were used, manually selected for each dataset.
- UC-Berkeley-Measuring-Hate-Speech dataset : Abelll/marian-finetuned-kde4-en-to-fr
### Language verification
Since MT models are not perfect, some messages are only partially translated, or not translated at all.
To catch obvious errors in the pipeline, a general language detection model is used to prune non-French texts.
Language detection model : papluca/xlm-roberta-base-language-detection
### Annotation
Since the "hate speech" dimension is highly subjective, and the source datasets come with different annotation schemes, a common labeling strategy is required.
Each sample is labeled "0" (negative) or "1" (positive).
### Filtering rules :
- FTR dataset : [wip]
- MLMA dataset : [wip]
- CAA dataset : [wip]
- "Annotated Corpus" dataset : [wip]
- UC-Berkeley Measuring Hate Speech dataset : average hate_speech_score > 0 -> 1
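The UC-Berkeley rule above can be sketched in plain Python; the field names are assumptions for illustration, not the dataset's actual column names:

```python
# Illustrative sketch: a comment is labeled positive (1) when its average
# hate_speech_score across annotator ratings is above 0, otherwise negative (0).
# The (comment_id, score) layout is an assumption for illustration.
from collections import defaultdict

rows = [  # one row per annotator rating
    ("c1", 1.2), ("c1", -0.3),   # mean 0.45 -> 1
    ("c2", -0.8), ("c2", -0.1),  # mean -0.45 -> 0
]

scores = defaultdict(list)
for comment_id, score in rows:
    scores[comment_id].append(score)

labels = {cid: int(sum(s) / len(s) > 0) for cid, s in scores.items()}
print(labels)  # {'c1': 1, 'c2': 0}
```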
|
Multimodal-Fatima/OxfordPets_train | 2023-05-04T04:54:38.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': abyssinian
'1': american bulldog
'2': american pit bull terrier
'3': basset hound
'4': beagle
'5': bengal
'6': birman
'7': bombay
'8': boxer
'9': british shorthair
'10': chihuahua
'11': egyptian mau
'12': english cocker spaniel
'13': english setter
'14': german shorthaired
'15': great pyrenees
'16': havanese
'17': japanese chin
'18': keeshond
'19': leonberger
'20': maine coon
'21': miniature pinscher
'22': newfoundland
'23': persian
'24': pomeranian
'25': pug
'26': ragdoll
'27': russian blue
'28': saint bernard
'29': samoyed
'30': scottish terrier
'31': shiba inu
'32': siamese
'33': sphynx
'34': staffordshire bull terrier
'35': wheaten terrier
'36': yorkshire terrier
- name: species
dtype:
class_label:
names:
'0': Cat
'1': Dog
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: clip_tags_ViT_L_14with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_oxfordpets
sequence: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: train
num_bytes: 386730161.36
num_examples: 3680
download_size: 378295172
dataset_size: 386730161.36
---
# Dataset Card for "OxfordPets_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sakharamg/AviationQA | 2023-04-06T19:08:21.000Z | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"Question Answering",
"Aviation",
"Knowledge Graphs",
"region:us"
] | sakharamg | null | null | null | 3 | 3 | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- Question Answering
- Aviation
- Knowledge Graphs
pretty_name: AviationQA
---
AviationQA is introduced in the paper titled "There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering":
https://aclanthology.org/2022.icon-main.26/
The paper was accepted at the main conference of ICON 2022.
We create a synthetic dataset, AviationQA, a set of 1 million factoid QA pairs from 12,000 National Transportation Safety Board (NTSB) reports using templates. These QA pairs contain questions such that answers to them are entities occurring in the AviationKG (Agarwal et al., 2022). AviationQA will be helpful to researchers in finding insights into aircraft accidents and their prevention.
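The template-based construction can be sketched as follows; the report field names and template wording here are illustrative assumptions, not the paper's actual templates:

```python
# Illustrative sketch of template-based QA-pair generation from NTSB report fields.
# Field names and template wording are assumptions, not the paper's templates.
TEMPLATES = {
    "aircraft_damage": "What was the Aircraft Damage of the accident no. {acc}?",
    "destination": "Where was the Destination of the accident no. {acc}?",
}

report = {
    "acc": "ERA22LA162",
    "aircraft_damage": "Substantial",
    "destination": "Naples, GA (APH)",
}

qa_pairs = [
    (TEMPLATES[field].format(acc=report["acc"]), report[field])
    for field in ("aircraft_damage", "destination")
]
print(qa_pairs[0])  # ('What was the Aircraft Damage of the accident no. ERA22LA162?', 'Substantial')
```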
Examples from the dataset:
What was the Aircraft Damage of the accident no. ERA22LA162? Answer: Substantial
Where was the Destination of the accident no. ERA22LA162? Answer: Naples, GA (APH) |
Dahoas/first-instruct-human-assistant-prompt | 2023-01-11T19:15:52.000Z | [
"region:us"
] | Dahoas | null | null | null | 1 | 3 | Entry not found |
metaeval/lingnli | 2023-05-31T08:40:53.000Z | [
"task_categories:text-classification",
"language:en",
"license:unknown",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
language:
- en
task_categories:
- text-classification
license: unknown
---
https://github.com/Alicia-Parrish/ling_in_loop/
```bib
@inproceedings{parrish-etal-2021-putting-linguist,
title = "Does Putting a Linguist in the Loop Improve {NLU} Data Collection?",
author = "Parrish, Alicia and
Huang, William and
Agha, Omar and
Lee, Soo-Hwan and
Nangia, Nikita and
Warstadt, Alexia and
Aggarwal, Karmanya and
Allaway, Emily and
Linzen, Tal and
Bowman, Samuel R.",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.421",
doi = "10.18653/v1/2021.findings-emnlp.421",
pages = "4886--4901",
}
``` |
archanatikayatray/aeroBERT-classification | 2023-05-20T22:40:37.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"sentence classification",
"aerospace requirements",
"design",
"functional",
"performance",
"requirements",
"NLP4RE",
"doi:10.57967/hf/0433",
"region:us"
] | archanatikayatray | null | null | null | 2 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- sentence classification
- aerospace requirements
- design
- functional
- performance
- requirements
- NLP4RE
pretty_name: requirements_classification_dataset.txt
size_categories:
- n<1K
language:
- en
---
# Dataset Card for aeroBERT-classification
## Dataset Description
- **Paper:** aeroBERT-Classifier: Classification of Aerospace Requirements using BERT
- **Point of Contact:** archanatikayatray@gmail.com
### Dataset Summary
This dataset contains requirements from the aerospace domain. The requirements are tagged based on the "type"/category of requirement they belong to.
The creation of this dataset is aimed at - <br>
(1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br>
(2) Fine-tuning language models for **requirements classification** specific to the aerospace domain <br>
This dataset can be used for training or fine-tuning language models for the identification of the following types of requirements - <br>
<br>
**Design Requirement** - Dictates "how" a system should be designed given certain technical standards and specifications;
**Example:** Trim control systems must be designed to prevent creeping in flight.<br>
<br>
**Functional Requirement** - Defines the functions that need to be performed by a system in order to accomplish the desired system functionality;
**Example:** Each cockpit voice recorder shall record the voice communications of flight crew members on the flight deck.<br>
<br>
**Performance Requirement** - Defines "how well" a system needs to perform a certain function;
**Example:** The airplane must be free from flutter, control reversal, and divergence for any configuration and condition of operation.<br>
## Dataset Structure
The tagging scheme followed: <br>
(1) Design requirements: 0 (Count = 149) <br>
(2) Functional requirements: 1 (Count = 99) <br>
(3) Performance requirements: 2 (Count = 62) <br>
<br>
The dataset is of the format: ``requirements | label`` <br>
| requirements | label |
| :----: | :----: |
| Each cockpit voice recorder shall record voice communications transmitted from or received in the airplane by radio.| 1 |
| Each recorder container must be either bright orange or bright yellow.| 0 |
| Single-engine airplanes, not certified for aerobatics, must not have a tendency to inadvertently depart controlled flight. | 2|
| Each part of the airplane must have adequate provisions for ventilation and drainage. | 0 |
| Each baggage and cargo compartment must have a means to prevent the contents of the compartment from becoming a hazard by impacting occupants or shifting. | 1 |
## Dataset Creation
### Source Data
A total of 325 aerospace requirements were collected from Parts 23 and 25 of Title 14 of the Code of Federal Regulations (CFRs) and annotated (refer to the paper for more details). <br>
### Importing dataset into Python environment
Use the following code chunk to import the dataset into Python environment as a DataFrame.
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/aeroBERT-classification")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)
#Getting the headers from the first row
header = dataset.iloc[0]
#Excluding the first row since it contains the headers
dataset = dataset[1:]
#Assigning the header to the DataFrame
dataset.columns = header
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
### Annotations
#### Annotation process
A Subject Matter Expert (SME) was consulted for deciding on the annotation categories for the requirements.
The final classification dataset had 149 Design requirements, 99 Functional requirements, and 62 Performance requirements.
Lastly, the 'labels' attached to the requirements (design requirement, functional requirement, and performance requirement) were converted into numeric values: 0, 1, and 2 respectively.
### Limitations
(1) The dataset is imbalanced (there are more design requirements than the other types). Hence, using ``Accuracy`` as a metric for model performance is
NOT a good idea. Precision, Recall, and F1 scores are suggested for model performance evaluation.
(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/test sets after importing the data into a Python environment.
Please refer to the Appendix of the paper for information on the test set.
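A minimal sketch of such a three-way split (plain Python on toy data; given the class imbalance, a stratified split, e.g. scikit-learn's `train_test_split` with `stratify=labels`, is preferable in practice):

```python
# Illustrative 70/15/15 train/validation/test split of (text, label) pairs.
# The toy examples stand in for the requirements DataFrame built above.
import random

examples = [(f"requirement {i}", i % 3) for i in range(100)]  # (text, label) pairs
random.seed(42)          # fixed seed for reproducibility
random.shuffle(examples)

n = len(examples)
train = examples[: int(0.7 * n)]
validation = examples[int(0.7 * n): int(0.85 * n)]
test = examples[int(0.85 * n):]
print(len(train), len(validation), len(test))  # 70 15 15
```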
### Citation Information
```
@Article{aeroBERT-Classifier,
AUTHOR = {Tikayat Ray, Archana and Cole, Bjorn F. and Pinon Fischer, Olivia J. and White, Ryan T. and Mavris, Dimitri N.},
TITLE = {aeroBERT-Classifier: Classification of Aerospace Requirements Using BERT},
JOURNAL = {Aerospace},
VOLUME = {10},
YEAR = {2023},
NUMBER = {3},
ARTICLE-NUMBER = {279},
URL = {https://www.mdpi.com/2226-4310/10/3/279},
ISSN = {2226-4310},
DOI = {10.3390/aerospace10030279}
}
@phdthesis{tikayatray_thesis,
author = {Tikayat Ray, Archana},
title = {Standardization of Engineering Requirements Using Large Language Models},
school = {Georgia Institute of Technology},
year = {2023},
doi = {10.13140/RG.2.2.17792.40961},
URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
``` |
rahmanfadhil/squad_v2_id | 2023-01-12T11:14:51.000Z | [
"region:us"
] | rahmanfadhil | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int32
- name: text
sequence: string
splits:
- name: train
num_bytes: 121632833
num_examples: 130318
- name: validation
num_bytes: 12218827
num_examples: 11858
download_size: 0
dataset_size: 133851660
---
# Dataset Card for "squad_id"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AI4EPS/quakeflow_nc | 2023-08-09T18:13:03.000Z | [
"license:mit",
"doi:10.57967/hf/0716",
"region:us"
] | AI4EPS | A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format. | @InProceedings{huggingface:dataset,
title = {NCEDC dataset for QuakeFlow},
author={Zhu et al.},
year={2023}
} | null | 0 | 3 | ---
license: mit
---
# Quakeflow_NC
## Introduction
This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).
Cite the NCEDC and PhaseNet:
Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
Acknowledge the NCEDC:
Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
```
Group: / len:16227
|- Group: /nc71111584 len:2
| |-* begin_time = 2020-01-02T07:01:19.620
| |-* depth_km = 3.69
| |-* end_time = 2020-01-02T07:03:19.620
| |-* event_id = nc71111584
| |-* event_time = 2020-01-02T07:01:48.240
| |-* event_time_index = 2862
| |-* latitude = 37.6545
| |-* longitude = -118.8798
| |-* magnitude = -0.15
| |-* magnitude_type = D
| |-* num_stations = 2
| |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
| | | |-* distance_km = 1.9
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 2391.0
| | | |-* emergence_angle = 159.0
| | | |-* event_id = ['nc71111584' 'nc71111584']
| | | |-* latitude = 37.6444
| | | |-* location =
| | | |-* longitude = -118.8968
| | | |-* network = NC
| | | |-* phase_index = [3000 3101]
| | | |-* phase_polarity = ['U' 'N']
| | | |-* phase_remark = ['IP' 'ES']
| | | |-* phase_score = [1 2]
| | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
| | | |-* phase_type = ['P' 'S']
| | | |-* snr = [2.82143 3.055604 1.8412642]
| | | |-* station = MCB
| | | |-* unit = 1e-6m/s
| |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
......
```
## How to use
### Requirements
- datasets
- h5py
- fsspec
- torch (for PyTorch)
### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset
```
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"
"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations without a suffix cover the full dataset, while the configurations with the "_train" or "_test" suffix contain only the corresponding split. The train split contains data from 1970 to 2019, while the test split contains data from 2020.
The sample of `station` is a dictionary with the following keys:
- `data`: the waveform with shape `(3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(3, nt)`, the first dimension is noise, P and S
- `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
- `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
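PhaseNet-style pipelines commonly construct the pick channels as narrow Gaussian bumps centered on the `phase_index` values, with the noise channel as the complement. A hedged sketch of that common construction (the width `sigma` and the exact shape are assumptions, not taken from this dataset):

```python
import numpy as np

def build_phase_pick(nt, p_index, s_index, sigma=10.0):
    """Sketch of a PhaseNet-style (3, nt) label array: [noise, P, S].
    Gaussian bumps around the pick indices; sigma is an assumed width."""
    t = np.arange(nt)
    p = np.exp(-0.5 * ((t - p_index) / sigma) ** 2)
    s = np.exp(-0.5 * ((t - s_index) / sigma) ** 2)
    noise = np.clip(1.0 - p - s, 0.0, 1.0)  # complement of the pick channels
    return np.stack([noise, p, s])

# phase_index values taken from the example event above
label = build_phase_pick(nt=8192, p_index=3000, s_index=3101)
print(label.shape)  # (3, 8192)
```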
The sample of `event` is a dictionary with the following keys:
- `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(n_station, 3, nt)`, the first dimension is noise, P and S
- `event_center`: the probability of the event time with shape `(n_station, feature_nt)`, default feature time length is 512
- `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
- `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
- `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
The default configuration is `station_test`. You can specify the configuration with the `name` argument. For example:
```python
# load dataset
# ATTENTION: Streaming(Iterable Dataset) is difficult to support because of the feature of HDF5
# So we recommend to directly load the dataset and convert it into iterable later
# The dataset is very large, so you need to wait for some time at the first time
# to load "station_test" with test split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# to load "event" with train split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
```
#### Usage for `station`
Then you can change the dataset into PyTorch format iterable dataset, and view the first sample:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
# formatting examples as tensors with the "torch" format is not implemented yet for
# iterable datasets, so we manually convert the arrays to tensors with map().
# if you want to use the (non-iterable) dataset directly, just use
# quakeflow_nc.with_format("torch")
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
# note: the original try/except around isinstance() was a no-op, since isinstance() never raises
if not isinstance(quakeflow_nc, torch.utils.data.IterableDataset):
    raise TypeError("quakeflow_nc is not an IterableDataset")
# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break
dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)
for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
```
#### Usage for `event`
Then you can convert the dataset into a PyTorch-compatible iterable dataset and view the first sample (don't forget to reorder the keys):
```python
quakeflow_nc = datasets.load_dataset("AI4EPS/quakeflow_nc", split="test", name="event_test")
# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
# note: the original try/except around isinstance() was a no-op, since isinstance() never raises
if not isinstance(quakeflow_nc, torch.utils.data.IterableDataset):
    raise TypeError("quakeflow_nc is not an IterableDataset")
# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break
dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)
for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
``` |
metaeval/cycic_classification | 2023-05-31T08:47:48.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
] | metaeval | null | null | null | 1 | 3 | ---
license: apache-2.0
task_categories:
- question-answering
- text-classification
language:
- en
---
https://storage.googleapis.com/ai2-mosaic/public/cycic/CycIC-train-dev.zip
https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
added for
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` |
clip-benchmark/wds_imagenet1k | 2023-01-20T00:57:51.000Z | [
"region:us"
] | clip-benchmark | null | null | null | 0 | 3 | Entry not found |
zpn/GRCh38 | 2023-01-22T00:32:15.000Z | [
"license:mit",
"region:us"
] | zpn | A dataset of all autosomal and sex chromosomes sequences from reference assembly GRCh38/hg38 1 and reached a total of 3.2 billion nucleotides. | null | null | 0 | 3 | ---
license: mit
dataset_info:
features:
- name: chr
dtype: string
- name: description
dtype: string
- name: seq
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 3158692879
num_examples: 510445
download_size: 3166859999
dataset_size: 3158692879
---
|
mwz/ur_para | 2023-06-24T13:06:04.000Z | [
"task_categories:text2text-generation",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ur",
"license:mit",
"region:us"
] | mwz | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text2text-generation
- summarization
- text-generation
language:
- ur
pretty_name: ur_para
size_categories:
- 100K<n<1M
---
# Paraphrase Dataset (Urdu)
This dataset contains paraphrases in Urdu. It is provided in Parquet format and consists of a single training split with 393,000 rows.
## Dataset Details
- Columns:
- `sentence1`: The first sentence in a pair of paraphrases (string).
- `sentence2`: The second sentence in a pair of paraphrases (string).
## Usage
You can use this dataset for various natural language processing tasks such as text similarity, paraphrase identification, and language generation.
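For example, the sentence pairs can be flattened into directed input/target examples for a sequence-to-sequence paraphrase model. The sketch below uses toy in-memory rows in place of the real Parquet data:

```python
# Toy stand-in for rows loaded from the parquet files (hypothetical content)
rows = [
    {"sentence1": "pair 1, sentence A", "sentence2": "pair 1, sentence B"},
    {"sentence1": "pair 2, sentence A", "sentence2": "pair 2, sentence B"},
]

def to_seq2seq(rows):
    """Flatten paraphrase pairs into (input, target) examples, in both directions."""
    examples = []
    for r in rows:
        examples.append({"input": r["sentence1"], "target": r["sentence2"]})
        examples.append({"input": r["sentence2"], "target": r["sentence1"]})
    return examples

print(len(to_seq2seq(rows)))  # 2 pairs -> 4 directed examples
```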
|
Norod78/jojo-stone-ocean-blip-captions-512 | 2023-07-13T11:27:31.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-4.0",
"text-to-image",
"region:us"
] | Norod78 | null | null | null | 0 | 3 | ---
language: en
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
pretty_name: 'JoJo''s Bizarre Adventure: Stone Ocean - Blip captions'
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 94744425.832
num_examples: 1376
download_size: 94450521
dataset_size: 94744425.832
tags:
- text-to-image
---
# Dataset Card for "jojo-stone-ocean-blip-captions-512"
## JoJo's Bizarre Adventure: Stone Ocean with Blip captions.
## Dataset contains 512x512 cropped images whose source is [jojowiki](https://jojowiki.com/Stone_Ocean_(Anime)) |
jrahn/yolochess_deepblue | 2023-02-03T21:29:20.000Z | [
"task_categories:text-classification",
"task_categories:reinforcement-learning",
"size_categories:n<1K",
"license:gpl-3.0",
"chess",
"region:us"
] | jrahn | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: fen
dtype: string
- name: move
dtype: string
- name: result
dtype: string
- name: eco
dtype: string
splits:
- name: train
num_bytes: 45608.0
num_examples: 511
download_size: 18295
dataset_size: 45608.0
license: gpl-3.0
task_categories:
- text-classification
- reinforcement-learning
tags:
- chess
size_categories:
- n<1K
---
# Dataset Card for "yolochess_deepblue"
Source: https://github.com/niklasf/python-chess/tree/master/data/pgn
Features:
- fen = Chess board position in [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format
- move = Move played by a strong human player in this position
- result = Final result of the match
- eco = Opening [ECO](https://en.wikipedia.org/wiki/Encyclopaedia_of_Chess_Openings)-code
Deduplicated on (fen, move) pairs.
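For illustration (this is not part of the dataset's own tooling), a FEN string splits into six space-separated fields, which is often enough to key, filter, or inspect positions without a chess library:

```python
def parse_fen(fen: str) -> dict:
    """Split a FEN string into its six standard fields."""
    board, side, castling, en_passant, halfmove, fullmove = fen.split()
    return {
        "board": board,            # piece placement, ranks 8..1 separated by '/'
        "side_to_move": side,      # 'w' or 'b'
        "castling": castling,      # e.g. 'KQkq' or '-'
        "en_passant": en_passant,  # target square or '-'
        "halfmove_clock": int(halfmove),
        "fullmove_number": int(fullmove),
    }

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(parse_fen(start)["side_to_move"])  # w
```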
Samples: 511 |
lorenzoscottb/PLANE-ood | 2023-01-25T09:51:09.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-2.0",
"region:us"
] | lorenzoscottb | null | null | null | 0 | 3 | ---
license: cc-by-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---
# PLANE Out-of-Distribution Sets
PLANE (phrase-level adjective-noun entailment) is a benchmark to test models on fine-grained compositional inference.
The current dataset contains five sampled splits, used in the supervised experiments of [Bertolini et al., 22](https://aclanthology.org/2022.coling-1.359/).
## Data Structure
The dataset is organised into five `Train/test_split#` folds, each containing a training set of approximately 60K rows and a test set of approximately 2K rows.
### Features
Each entry has six features: `seq, label, Adj_Class, Adj, Nn, Hy`
- `seq`: the test sequence
- `label`: the ground truth (1: entailment, 0: no-entailment)
- `Adj_Class`: the class of the sequence's adjective (I: intersective, S: subsective, O: intensional)
- `Adj`: the adjective of the sequence
- `Nn`: the noun
- `Hy`: the noun's hypernym
Each sample in `seq` can take one of three forms (or inference types, in paper):
- An *Adjective-Noun* is a *Noun* (e.g. A red car is a car)
- An *Adjective-Noun* is a *Hypernym(Noun)* (e.g. A red car is a vehicle)
- An *Adjective-Noun* is an *Adjective-Hypernym(Noun)* (e.g. A red car is a red vehicle)
Please note that, as specified in the paper, the ground truth is automatically assigned based on the linguistic rule that governs the interaction between each adjective class and inference type – see the paper for more detail.
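The three inference types can be generated from the `Adj`, `Nn`, and `Hy` fields with simple templates, sketched below (a hypothetical reconstruction ignoring a/an agreement; the released splits contain the actual surface forms):

```python
def build_sequences(adj: str, noun: str, hypernym: str) -> list[str]:
    """Instantiate the three phrase-level entailment templates for one (Adj, Nn, Hy) triple."""
    return [
        f"A {adj} {noun} is a {noun}",            # AN is a N
        f"A {adj} {noun} is a {hypernym}",        # AN is a Hy(N)
        f"A {adj} {noun} is a {adj} {hypernym}",  # AN is a Adj-Hy(N)
    ]

print(build_sequences("red", "car", "vehicle"))
```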
### Trained Model
You can find a tuned BERT-base model (tuned and validated using the 2nd split) [here](https://huggingface.co/lorenzoscottb/bert-base-cased-PLANE-ood-2?text=A+fake+smile+is+a+smile).
### Cite
If you use PLANE for your work, please cite the main COLING 2022 paper.
```
@inproceedings{bertolini-etal-2022-testing,
title = "Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment",
author = "Bertolini, Lorenzo and
Weeds, Julie and
Weir, David",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.359",
pages = "4084--4100",
}
``` |
fathyshalab/atis-flight | 2023-01-23T17:55:20.000Z | [
"region:us"
] | fathyshalab | null | null | null | 1 | 3 | Entry not found |
liyucheng/chinese_metaphor_dataset | 2023-07-06T20:29:33.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-nc-sa-4.0",
"metaphor",
"figurative language",
"region:us"
] | liyucheng | Chinese Metaphor Corpus
The first Chinese metaphor corpus serving both metaphor identification and generation.
首个中文比喻数据集,可以用于中文比喻识别与中文比喻生成。 | @inproceedings{li-etal-2022-cm,
title = "{CM}-Gen: A Neural Framework for {C}hinese Metaphor Generation with Explicit Context Modelling",
author = "Li, Yucheng and
Lin, Chenghua and
Guerin, Frank",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.563",
pages = "6468--6479",
}
@misc{li-inlg-2022-nominal,
doi = {10.48550/ARXIV.2206.05195},
url = {https://arxiv.org/abs/2206.05195},
author = {Li, Yucheng and Lin, Chenghua and Guerin, Frank},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Nominal Metaphor Generation with Multitask Learning},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 7 | 3 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- zh
tags:
- metaphor
- figurative language
pretty_name: CMC
size_categories:
- 1K<n<10K
---
# Chinese Metaphor Corpus (CMC)
## Dataset Description
- **Homepage:** https://github.com/liyucheng09/Metaphor_Generator
- **Repository:** https://github.com/liyucheng09/Metaphor_Generator
- **Paper:** CM-Gen: A Neural Framework for Chinese Metaphor Generation with Explicit Context Modelling
- **Leaderboard:**
- **Point of Contact:** liyucheng09@gmail.com
### Dataset Summary
The first Chinese metaphor corpus serving both metaphor identification and generation. We construct a large metaphor resource in Chinese with around 9000 metaphorical sentences with tenor and vehicle annotated. Check out more details in the [github repo](https://github.com/liyucheng09/Metaphor_Generator) and our [paper](https://aclanthology.org/2022.coling-1.563/) presented at COLING 2022.
首个中文比喻数据集,可以用于中文比喻识别与中文比喻生成。在[知乎](https://zhuanlan.zhihu.com/p/572740322)查看更多细节。
### Languages
Chinese
### Citation Information
```
@inproceedings{li-etal-2022-cm,
title = "{CM}-Gen: A Neural Framework for {C}hinese Metaphor Generation with Explicit Context Modelling",
author = "Li, Yucheng and
Lin, Chenghua and
Guerin, Frank",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.563",
pages = "6468--6479",
}
``` |
metaeval/naturallogic | 2023-01-26T09:51:03.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
https://github.com/feng-yufei/Neural-Natural-Logic
```bib
@inproceedings{feng2020exploring,
title={Exploring End-to-End Differentiable Natural Logic Modeling},
author={Feng, Yufei and Zheng, Ziou and Liu, Quan and Greenspan, Michael and Zhu, Xiaodan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={1172--1185},
year={2020}
}
``` |
metaeval/arct2 | 2023-01-26T10:15:21.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
https://github.com/IKMLab/arct2
```bib
@inproceedings{niven-kao-2019-probing,
title = "Probing Neural Network Comprehension of Natural Language Arguments",
author = "Niven, Timothy and
Kao, Hung-Yu",
booktitle = "Proceedings of the 57th Conference of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1459",
pages = "4658--4664",
abstract = "We are surprised to find that BERT{'}s peak performance of 77{\%} on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy. Our adversarial dataset provides a more robust assessment of argument comprehension and should be adopted as the standard in future work.",
}
``` |
pooyaphoenix/hystoclass | 2023-02-10T09:55:36.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:fa",
"license:openrail",
"tabular_data",
"Text Classification",
"Social Networks",
"Ensemble Learning",
"region:us"
] | pooyaphoenix | null | null | null | 1 | 3 | ---
license: openrail
task_categories:
- text-classification
- token-classification
language:
- fa
tags:
- tabular_data
- Text Classification
- Social Networks
- Ensemble Learning
pretty_name: hystoclass
size_categories:
- 1K<n<10K
---
# Dataset Summary
**hystoclass** (hybrid social text and tabular classification) was collected from Instagram stories with privacy in mind. In addition to the texts published in the stories, this dataset has graphical features such as background color, text color, and font, plus a textual feature named `content` in the Persian language.
# Classes
This dataset is divided into **18 classes** under human supervision:
Event, Political, Advertising and business, Romantic, Motivational, Literature, Social Networks, Scientific, Social, IT, Advices, Academic, Cosmetic and Feminine, Religious, Sport, Property and housing, Tourism and Medical.
[Github](https://github.com/pooyaphoenix/hystoclass)
[Email](mailto:pooyachavoshi@gmail.com)
|
metaeval/acceptability-prediction | 2023-03-24T13:42:37.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
task_ids:
- acceptability-classification
language:
- en
---
```bib
@inproceedings{lau-etal-2015-unsupervised,
title = "Unsupervised Prediction of Acceptability Judgements",
author = "Lau, Jey Han and
Clark, Alexander and
Lappin, Shalom",
booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = jul,
year = "2015",
address = "Beijing, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P15-1156",
doi = "10.3115/v1/P15-1156",
pages = "1618--1628",
}
``` |
MtCelesteMa/multiglue | 2023-01-30T17:24:52.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|glue",
"language:en",
"license:cc-by-4.0",
"region:us"
] | MtCelesteMa | null | null | null | 0 | 3 | ---
license: cc-by-4.0
task_categories:
- text-classification
size_categories:
- 100K<n<1M
language:
- en
multilinguality:
- monolingual
pretty_name: MultiGLUE
source_datasets:
- extended|glue
language_creators:
- found
annotations_creators:
- found
---
# Dataset Card for MultiGLUE
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a combination of the cola, mrpc, qnli, qqp, rte, sst2, and wnli subsets of the GLUE dataset. Its intended use is to benchmark language models on multitask binary classification.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Like the GLUE dataset, this dataset is in English.
## Dataset Structure
### Data Instances
An example instance looks like this:
```
{
"label": 1,
"task": "cola",
"sentence1": "The sailors rode the breeze clear of the rocks.",
"sentence2": null
}
```
### Data Fields
- `task`: A `string` feature, indicating the GLUE task the instance is from.
- `sentence1`: A `string` feature.
- `sentence2`: A `string` feature.
- `label`: A classification label, either 0 or 1.
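One common way to feed such mixed single- and paired-sentence instances to a single binary classifier is to join the fields into one input string, handling the absent `sentence2`. The sketch below is an illustration, not an official preprocessing recipe; the separator token and the task-name prefix are placeholder choices:

```python
def to_model_input(instance: dict, sep: str = " [SEP] ") -> str:
    """Join task, sentence1, and (when present) sentence2 into one classifier input."""
    parts = [instance["task"], instance["sentence1"]]
    if instance.get("sentence2") is not None:
        parts.append(instance["sentence2"])
    return sep.join(parts)

example = {
    "label": 1,
    "task": "cola",
    "sentence1": "The sailors rode the breeze clear of the rocks.",
    "sentence2": None,
}
print(to_model_input(example))
```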
### Data Splits
- `train`: 551,282 instances
- `validation`: 48,564 instances
- `test`: 404,183 instances, no classification label (same as GLUE)
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
This dataset is created by combining the cola, mrpc, qnli, qqp, rte, sst2, and wnli subsets of the GLUE dataset.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
gokuls/glue_augmented_cola | 2023-01-30T00:37:42.000Z | [
"license:apache-2.0",
"region:us"
] | gokuls | null | null | null | 0 | 3 | ---
license: apache-2.0
---
# Dataset Card for glue_augmented_cola
## Dataset Description
Augmented COLA dataset
**Reference:** https://huggingface.co/datasets/glue |
Cohere/miracl-th-corpus-22-12 | 2023-02-06T12:01:08.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:th",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language:
- th
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (th) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12).
For the orginal datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train", streaming=True)
for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
## Search
Have a look at [miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-th-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape (1, dim), only the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
rcds/swiss_court_view_generation | 2023-07-20T07:35:29.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | rcds | This dataset contains court decision for court view generation task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 2 | 3 | ---
task_categories:
- text-generation
language:
- de
- fr
- it
size_categories:
- 100K<n<1M
license: cc-by-sa-4.0
pretty_name: Swiss Court View Generation
---
# Dataset Card for Swiss Court View Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Court View Generation is a multilingual, diachronic dataset of 404K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.
This dataset contains court views for different languages and court chambers. It includes information such as decision id, language, chamber, file name, url, and the number of tokens in the facts and considerations sections.
Main (L1) contains all the data; Origin (L2) contains only data with complete origin facts and origin considerations.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages; three of them (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Main |Number of Documents Origin|
|------------|------------|--------------------------|--------------------------|
| German | **de** | 197K | 49 |
| French | **fr** | 163K | 221 |
| Italian | **it** | 44K | 0 |
## Dataset Structure
### Data Fields
```
decision_id (string)
facts (string)
considerations (string)
origin_facts (string)
origin_considerations (string)
law_area (string)
language (string)
year (int32)
court (string)
chamber (string)
canton (string)
region (string)
```
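As an illustration of the Main/Origin distinction described above, selecting only rows with complete origin fields could look like this (toy records with hypothetical values):

```python
# Toy stand-in for loaded rows (hypothetical values)
rows = [
    {"decision_id": "a", "language": "de", "origin_facts": "...", "origin_considerations": "..."},
    {"decision_id": "b", "language": "fr", "origin_facts": "", "origin_considerations": ""},
]

def has_complete_origin(row: dict) -> bool:
    """True when both origin fields are non-empty, i.e. the row qualifies for Origin (L2)."""
    return bool(row["origin_facts"]) and bool(row["origin_considerations"])

origin_subset = [r for r in rows if has_complete_origin(r)]
print([r["decision_id"] for r in origin_subset])  # ['a']
```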
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
|
biu-nlp/alsqa | 2023-02-15T07:46:52.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:<1000",
"source_datasets:original",
"language:en"... | biu-nlp | To test the lexical overlap heuristic utilization in Reading Comprehension models, we create a new test set: Analyzing Lexically Similar QA (ALSQA).
We augment the SQuAD 2.0 dataset (Rajpurkar et al., 2018) by asking crowdworkers to generate questions with high context-overlap from questions with low overlap (These questions are paraphrases of the original questions).
In the case of unanswerable questions, annotators were asked to re-write the question without changing its meaning and maintain the unanswerability reason. ALSQA contains 365 question pairs, 190 with an answer and 174 without an answer. | @misc{https://doi.org/10.48550/arxiv.2210.12673,
doi = {10.48550/ARXIV.2210.12673},
url = {https://arxiv.org/abs/2210.12673},
author = {Bandel, Elron and Goldberg, Yoav and Elazar, Yanai},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Lexical Generalization Improves with Larger Models and Longer Training},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 2 | 3 | ---
pretty_name: ALSQA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- <1000
source_datasets:
- original
task_categories:
- question-answering
- text-classification
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: alsqa
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: alsqa
---
# Dataset Card for "alsqa"
## Table of Contents
- [Dataset Card for "alsqa"](#dataset-card-for-alsqa)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [squad_v2](#squad_v2)
- [Data Fields](#data-fields)
- [squad_v2](#squad_v2-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elronbandel/lexical-generalization](https://github.com/elronbandel/lexical-generalization)
- **Repository:** [https://github.com/elronbandel/lexical-generalization](https://github.com/elronbandel/lexical-generalization)
- **Paper:** [Lexical Generalization Improves with Larger Models and Longer Training](https://arxiv.org/abs/2210.12673)
- **Point of Contact:** [https://github.com/elronbandel/lexical-generalization](https://github.com/elronbandel/lexical-generalization)
- **Size of downloaded dataset files:** 100 KB
- **Size of the generated dataset:** 1 MB
- **Total amount of disk used:** 1 MB
### Dataset Summary
To test the extent to which Reading Comprehension models exploit the lexical overlap heuristic, we create a new test set: Analyzing Lexically Similar QA (ALSQA).
We augment the SQuAD 2.0 dataset (Rajpurkar et al., 2018) by asking crowdworkers to generate questions with high context-overlap from questions with low overlap (these questions are paraphrases of the original questions).
In the case of unanswerable questions, annotators were asked to re-write the question without changing its meaning and to maintain the unanswerability reason. ALSQA contains 365 question pairs, 190 with answer and 174 without answer.
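The "context-overlap" above can be made concrete with a rough sketch (an illustration only, not the paper's exact overlap measure): treat overlap as the fraction of question tokens that also appear in the context.

```python
def lexical_overlap(question: str, context: str) -> float:
    """Fraction of lowercased, whitespace-split question tokens that also occur in the context."""
    q_tokens = set(question.lower().split())
    c_tokens = set(context.lower().split())
    return len(q_tokens & c_tokens) / len(q_tokens) if q_tokens else 0.0

# A paraphrase that reuses the context's wording scores higher:
print(lexical_overlap("when was the treaty signed", "the treaty was signed in 1648"))  # 0.8
```

Under this view, crowdworkers rewrote low-overlap questions into high-overlap paraphrases, so each ALSQA pair differs mainly in this score.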
## Dataset Structure
Identical to SQuAD v2.
### Data Fields
The data fields are the same among all splits.
#### alsqa
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | test |
| -------- | -----: |
| alsqa    |    365 |
## Dataset Creation
### Curation Rationale
### Source Data
squad_v2
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2210.12673,
doi = {10.48550/ARXIV.2210.12673},
url = {https://arxiv.org/abs/2210.12673},
author = {Bandel, Elron and Goldberg, Yoav and Elazar, Yanai},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Lexical Generalization Improves with Larger Models and Longer Training},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset. |
Cohere/miracl-ko-corpus-22-12 | 2023-02-06T11:58:37.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language:
- ko
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ko-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim), so torch.mm works and only this query is scored
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-quality metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
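The hit@3 metric can be sketched as follows (a hypothetical helper for illustration, not the evaluation code used to produce the tables below):

```python
def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    """Return 1.0 if at least one of the top-k retrieved documents is relevant, else 0.0."""
    return float(any(doc_id in relevant_ids for doc_id in ranked_doc_ids[:k]))

def mean_hit_at_k(runs, qrels, k=3):
    """Average hit@k over all queries; `runs`: qid -> ranked doc ids, `qrels`: qid -> set of relevant ids."""
    return sum(hit_at_k(runs[qid], qrels[qid], k) for qid in runs) / len(runs)
```

Averaging this 0/1 outcome over all queries yields the hit@3 numbers reported below.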
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
metaeval/rankme-nlg-acceptability | 2023-02-01T14:27:06.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
task_ids:
- acceptability-classification
size_categories:
- 1K<n<10K
---
```bib
@inproceedings{novikova-etal-2018-rankme,
title = "RankME: Reliable Human Ratings for Natural Language Generation",
author = "Novikova, Jekaterina and
Dušek, Ondřej and
Rieser, Verena",
booktitle = "Proceedings of the NAACL2018",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2012",
doi = "10.18653/v1/N18-2012",
pages = "72--78",
}
``` |
LLukas22/scidocs | 2023-04-30T19:45:23.000Z | [
"task_categories:sentence-similarity",
"task_categories:feature-extraction",
"language:en",
"license:cc-by-4.0",
"region:us"
] | LLukas22 | null | null | null | 0 | 3 | ---
license: cc-by-4.0
task_categories:
- sentence-similarity
- feature-extraction
language:
- en
---
# Dataset Card for "scidocs"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/allenai/scidocs](https://github.com/allenai/scidocs)
### Dataset Summary
This is a modified version of the original scidocs dataset for retrieval tasks. The original is available [here](https://github.com/allenai/scidocs).
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"title": "Discovery of inference rules for question-answering",
"abstract": "One of the main challenges in question-answering is the potential mismatch between the expressions in questions and ...",
}
```
### Data Fields
The data fields are the same among all splits.
- `title`: a `string` feature.
- `abstract`: a `string` feature.
## Additional Information
### Licensing Information
This dataset is distributed under the cc-by-4.0 license.
### Citation Information
BibTeX:
```bibtex
@inproceedings{specter2020cohan,
title={SPECTER: Document-level Representation Learning using Citation-informed Transformers},
author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
booktitle={ACL},
year={2020}
}
``` |
huggingface/adult-census-competition | 2023-02-01T17:14:34.000Z | [
"region:us"
] | huggingface | null | null | null | 0 | 3 | Entry not found |
SDbiaseval/professions | 2023-02-03T20:16:58.000Z | [
"region:us"
] | SDbiaseval | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: adjective
dtype: string
- name: profession
dtype: string
- name: 'no'
dtype: int32
- name: image_path
dtype: string
- name: image
dtype: image
- name: model
dtype: string
splits:
- name: train
num_bytes: 3088839692.5
num_examples: 94500
download_size: 3075495491
dataset_size: 3088839692.5
---
# Dataset Card for "professions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MtCelesteMa/fstdt-quotes | 2023-02-03T19:40:44.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | MtCelesteMa | null | null | null | 0 | 3 | ---
license: cc-by-4.0
annotations_creators:
- found
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: FSTDT Quotes
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for FSTDT Quotes
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
FSTDT Quotes is a snapshot of the [Fundies Say the Darndest Things](https://fstdt.com/) website taken on 2023/02/03 14:16. It is intended for hate and fringe speech detection and classification.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
FSTDT Quotes is in English.
## Dataset Structure
### Data Instances
An example instance looks like this:
```
{
"id": "G",
"submitter": "anonymous",
"timestamp": "2005-05-21 00:00:00+00:00",
"name": "Jack777 ",
"src_url": "http://www.theologyweb.com/forum/showpost.php?p=1034624&postcount=10",
"tags": ["#fundie"],
"quote": "As long as evolutionists deny their theory is a theory and point out ID or whatever is bunk, people like me will pester them til they drop."
}
```
### Data Fields
- `id`: A `string` feature, the ID of the post on FSTDT.
- `submitter`: A `string` feature, the submitter of the post.
- `timestamp`: A `string` feature, the time of submission.
- `name`: A `string` feature, the (user)name of the person who is being quoted.
- `src_url`: A `string` feature, the source URL of the quote.
- `tags`: A sequence of `string` features, the tags the post has been tagged with.
- `quote`: A `string` feature, the quote itself.
### Data Splits
- `train`: 56,448 instances
- `validation`: 7,111 instances
- `test`: 7,131 instances
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The quotes are collected from all over the internet.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data is annotated by users on FSTDT.
### Personal and Sensitive Information
The dataset contains the usernames of submitters as well as those quoted. However, this information is publicly available.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
This dataset contains large amounts of hate speech as well as pseudoscience and quackery.
### Other Known Limitations
Some quotes in the dataset are taken from news articles depicting acts of hate, which could cause misclassifications by models trained on this dataset.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
sileod/attempto-nli | 2023-05-31T08:29:58.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:apache-2.0",
"region:us"
] | sileod | null | null | null | 0 | 3 | ---
license: apache-2.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
Natural language inference using Attempto Controlled English.
Paper to come.
```
@inproceedings{fuchs2012first,
title={First-order reasoning for attempto controlled english},
author={Fuchs, Norbert E},
booktitle={Controlled Natural Language: Second International Workshop, CNL 2010, Marettimo Island, Italy, September 13-15, 2010. Revised Papers 2},
pages={73--94},
year={2012},
organization={Springer}
}
``` |
ericyu3/openassistant_inpainted_dialogs_5k_biomedical | 2023-02-06T00:26:21.000Z | [
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] | ericyu3 | null | null | null | 2 | 3 | ---
license: apache-2.0
size_categories:
- 1K<n<10K
---
This dataset was created by:
* Starting with the [Dialog Inpainting](https://github.com/google-research/dialog-inpainting) dataset
* Labeling the turns of each dialog with `User: ` and `Assistant: `
* Filtering using spaCy, using code similar to the following (written by https://huggingface.co/ontocord):
```
import pandas as pd
import spacy

# Load the scispaCy CRAFT NER model only once
try:
    sci
except NameError:
    sci = spacy.load("en_ner_craft_md")

data = pd.read_parquet('data.parquet', engine='pyarrow')
for a in data['labeleddialog']:
    a = a.replace("this article", "this subject").replace("()", "").replace("  ", " ")
    if 'novel' in a or ' story' in a or 'movie' in a or 'film' in a or 'music' in a:
        #print('###arts\n', a)
        continue
    if ' game' in a or 'sports' in a or 'football' in a or 'soccer' in a or 'baseball' in a or 'basketball' in a:
        #print('###sports\n', a)
        continue
    if 'population' in a or 'territory' in a or 'village' in a or 'country' in a or 'county' in a:
        #print('###place\n', a)
        continue
    if 'ingredient' in a or 'food' in a or 'recipe' in a:
        #print('###recipe\n', a)
        continue
    if ' rights' in a or ' court ' in a or ' criminal ' in a or ' verdict ' in a or ' guilt ' in a or ' legislat' in a:
        #print('###law\n', a)
        continue
    doc = sci(a)
    j = 0
    for ent in doc.ents:
        # ent.label_ is the string label; ent.label is the integer hash
        if ent.label_ == 'SO' or (ent.label_ == 'CHEBI' and len(ent.text) > 5):
            j += 1
        if j > 3:
            print('###biomed\n', a)
            break
        #print(ent.label_, ent.text)
```
* Filtering using BERT, using the following code:
```
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Classify each dialog's page title; dialogs whose title scored `prob < 0.7`
# for "Biomedical" were dropped.
for page_title in page_titles:
    classification_result = classifier(page_title, ["Biomedical", "Non-biomedical"])
    prob = classification_result["scores"][classification_result["labels"].index("Biomedical")]
``` |
metaeval/nli-veridicality-transitivity | 2023-02-04T18:10:09.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:cc",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: cc
task_categories:
- text-classification
language:
- en
task_ids:
- natural-language-inference
---
```bib
@inproceedings{yanaka-etal-2021-exploring,
title = "Exploring Transitivity in Neural {NLI} Models through Veridicality",
author = "Yanaka, Hitomi and
Mineshima, Koji and
Inui, Kentaro",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
year = "2021",
pages = "920--934",
}
``` |
metaeval/help-nli | 2023-05-31T08:57:01.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:cc",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: cc
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/verypluming/HELP
```bib
@InProceedings{yanaka-EtAl:2019:starsem,
author = {Yanaka, Hitomi and Mineshima, Koji and Bekki, Daisuke and Inui, Kentaro and Sekine, Satoshi and Abzianidze, Lasha and Bos, Johan},
title = {HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning},
booktitle = {Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM2019)},
year = {2019},
}
``` |
danielpleus/wiki-nds | 2023-02-04T19:06:48.000Z | [
"region:us"
] | danielpleus | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 92432660
num_examples: 84158
download_size: 47740161
dataset_size: 92432660
---
# Dataset Card for "wiki-nds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
metaeval/tomi-nli | 2023-02-09T21:05:13.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:gpl-3.0",
"arxiv:2301.05948",
"region:us"
] | metaeval | null | null | null | 4 | 3 | ---
license: gpl-3.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
The ToMi dataset (theory-of-mind question answering) recast as natural language inference.
https://colab.research.google.com/drive/1J_RqDSw9iPxJSBvCJu-VRbjXnrEjKVvr?usp=sharing
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
@inproceedings{le-etal-2019-revisiting,
title = "Revisiting the Evaluation of Theory of Mind through Question Answering",
author = "Le, Matthew and
Boureau, Y-Lan and
Nickel, Maximilian",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1598",
doi = "10.18653/v1/D19-1598",
pages = "5872--5877"
}
``` |
danielpleus/tatoeba-nds | 2023-02-05T16:52:14.000Z | [
"region:us"
] | danielpleus | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 692245
num_examples: 18101
download_size: 478178
dataset_size: 692245
---
# Dataset Card for "tatoeba-nds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dangrebenkin/sova_rudevices_audiobooks | 2023-02-06T19:22:04.000Z | [
"license:apache-2.0",
"region:us"
] | dangrebenkin | null | null | null | 1 | 3 | ---
license: apache-2.0
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 29311788948.030003
num_examples: 182835
- name: test
num_bytes: 3181370673.15
num_examples: 20315
download_size: 29701298876
dataset_size: 32493159621.180004
---
## Dataset instance structure
```python
{'audio': {'path': '/path/to/wav.wav',
           'array': array([...], dtype=float32),
           'sampling_rate': 16000},
 'transcription': 'транскрипция'}
```
## Dataset audio info
- 16000 Hz
- wav
- mono
- Russian speech from audiobooks
## Citation
```
@misc{sova2021rudevices,
    author = {Zubarev, Egor and Moskalets, Timofey and SOVA.ai},
    title = {SOVA RuDevices Dataset: free public STT/ASR dataset with manually annotated live speech},
    publisher = {GitHub},
    journal = {GitHub repository},
    year = {2021},
    howpublished = {\url{https://github.com/sovaai/sova-dataset}},
}
``` |
ml4pubmed/pubmed-text-classification-cased | 2023-02-06T16:43:19.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"source_datasets:pubmed",
"language:en",
"license:apache-2.0",
"pubmed",
"region:us"
] | ml4pubmed | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- pubmed
size_categories:
- 1M<n<10M
source_datasets: pubmed
---
# ml4pubmed/pubmed-text-classification-cased
A parsed/cleaned version of the source data retaining case. |
chenghao/quora_questions | 2023-02-06T17:23:12.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | chenghao | null | null | null | 2 | 3 | ---
license: other
dataset_info:
features:
- name: questions
dtype: string
splits:
- name: train
num_bytes: 51635953
num_examples: 808580
download_size: 31079310
dataset_size: 51635953
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: Quora Questions
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
---
# Dataset Card for "quora"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/c/quora-question-pairs](https://www.kaggle.com/c/quora-question-pairs)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.48 MB
- **Size of the generated dataset:** 55.46 MB
- **Total amount of disk used:** 110.94 MB
### Dataset Summary
The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each other (have the same meaning).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 55.48 MB
- **Size of the generated dataset:** 55.46 MB
- **Total amount of disk used:** 110.94 MB
### Data Fields
The data fields are the same among all splits.
### Data Splits
| name |train |
|-------|-----:|
|default|404290|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Quora Terms of Service](https://www.quora.com/about/tos), no commercial use.
### Citation Information
Unknown.
|
metaeval/autotnli | 2023-05-31T08:55:41.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 3 | ---
license: apache-2.0
language:
- en
task_ids:
- natural-language-inference
task_categories:
- text-classification
---
https://github.com/Dibyakanti/AutoTNLI-code
```
@inproceedings{kumar-etal-2022-autotnli,
title = "Realistic Data Augmentation Framework for Enhancing Tabular Reasoning",
author = "Kumar, Dibyakanti and
Gupta, Vivek and
Sharma, Soumya and
Zhang, Shuo",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Online and Abu Dhabi",
publisher = "Association for Computational Linguistics",
url = "https://vgupta123.github.io/docs/autotnli.pdf",
pages = "",
abstract = "Existing approaches to constructing training data for Natural Language Inference (NLI) tasks, such as for semi-structured table reasoning, are either via crowdsourcing or fully automatic methods. However, the former is expensive and time-consuming and thus limits scale, and the latter often produces naive examples that may lack complex reasoning. This paper develops a realistic semi-automated framework for data augmentation for tabular inference. Instead of manually generating a hypothesis for each table, our methodology generates hypothesis templates transferable to similar tables. In addition, our framework entails the creation of rational counterfactual tables based on human written logical constraints and premise paraphrasing. For our case study, we use the InfoTabS (Gupta et al., 2020), which is an entity-centric tabular inference dataset. We observed that our framework could generate human-like tabular inference examples, which could benefit training data augmentation, especially in the scenario with limited supervision.",
}
``` |
threite/Bundestag-v2 | 2023-02-14T13:08:49.000Z | [
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:de",
"license:cc0-1.0",
"Bundestag",
"ParlSpeech",
"region:us"
] | threite | null | null | null | 2 | 3 | ---
annotations_creators: []
language:
- de
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Bundestag-v2
size_categories:
- 100K<n<1M
source_datasets: []
tags: ['Bundestag', 'ParlSpeech']
task_categories:
- text-classification
task_ids:
- entity-linking-classification
---
# Dataset Card for Bundestag-v2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://doi.org/10.7910/DVN/L4OAKN
### Dataset Summary
This dataset was generated from the [ParlSpeech V2](https://doi.org/10.7910/DVN/L4OAKN) dataset. It contains speeches from the German parliament from 1990 to 2020, labelled with the party of the speaker.
### Supported Tasks
Text Classification
### Languages
German
## Dataset Structure
### Data Fields
- text: Transcript of the speech in german
- party: Party of the speaker
### Data Splits
- train
- validation
- test
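A minimal sketch of a classification baseline over the fields above; the toy speeches and party labels below are invented, not taken from the dataset:

```python
from collections import Counter

# Toy records mimicking the card's fields (text, party); the speeches and
# parties are invented placeholders.
train = [
    {"text": "Wir fordern mehr Investitionen in Bildung.", "party": "SPD"},
    {"text": "Steuersenkungen staerken die Wirtschaft.", "party": "FDP"},
    {"text": "Der Klimaschutz muss Prioritaet haben.", "party": "GRUENE"},
    {"text": "Bildung ist der Schluessel zur Zukunft.", "party": "SPD"},
]

# Majority-class baseline: always predict the most frequent party in training.
majority_party, _ = Counter(r["party"] for r in train).most_common(1)[0]
accuracy = sum(r["party"] == majority_party for r in train) / len(train)
print(majority_party, accuracy)
```

Any trained classifier should beat this majority-class accuracy to be useful.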
## Dataset Creation
### Curation Rationale
Created to train a language model that can classify speeches by party.
### Source Data
#### Initial Data Collection and Normalization
- [ParlSpeech V2](https://doi.org/10.7910/DVN/L4OAKN)
## Considerations for Using the Data
### Social Impact of Dataset
These are political speeches; therefore the content can be controversial and potentially harmful.
## Additional Information
### Licensing Information
[CC0 1.0](http://creativecommons.org/publicdomain/zero/1.0)
### Citation Information
Bibtex entry:
```
@data{DVN/L4OAKN_2020,
author = {Rauh, Christian and Schwalbach, Jan},
publisher = {Harvard Dataverse},
title = {{The ParlSpeech V2 data set: Full-text corpora of 6.3 million parliamentary speeches in the key legislative chambers of nine representative democracies}},
year = {2020},
version = {V1},
doi = {10.7910/DVN/L4OAKN},
url = {https://doi.org/10.7910/DVN/L4OAKN}
}
``` |
fathyshalab/massive_weather-de | 2023-02-08T12:34:40.000Z | [
"region:us"
] | fathyshalab | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 31902
num_examples: 573
- name: validation
num_bytes: 7264
num_examples: 126
- name: test
num_bytes: 8886
num_examples: 156
download_size: 25436
dataset_size: 48052
---
# Dataset Card for "massive_weather-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/instruction-pilot-outputs-filtered | 2023-02-10T04:32:26.000Z | [
"license:apache-2.0",
"region:us"
] | HuggingFaceH4 | null | null | null | 9 | 3 | ---
license: apache-2.0
---
|
kentsui/squad_v2_factuality_v1 | 2023-02-13T04:06:43.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | kentsui | null | null | null | 0 | 3 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
# squad_v2_factuality_v1
This dataset is derived from the "squad_v2" training "context" field with the following steps.
1. NER is run to extract entities.
2. Lexicons of person names, dates, organisation names and locations are collected.
3. 20% of the time, one of the text attributes (person name, date, organisation name or location) is randomly replaced. For consistency of the context, every other occurrence of the same name is also replaced.
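A minimal sketch of the replacement step, assuming an invented lexicon and context (the dataset itself draws entities from NER output and uses a 20% replacement rate):

```python
import random

# Sketch of the corruption step described above. The lexicon, context, and
# helper below are invented for illustration only.
person_lexicon = ["Marie Curie", "Alan Turing", "Ada Lovelace"]

def corrupt_context(context, entity, lexicon, rng, p=0.2):
    """With probability p, swap `entity` for another lexicon entry,
    replacing every occurrence so the context stays consistent."""
    if rng.random() < p:
        replacement = rng.choice([e for e in lexicon if e != entity])
        return context.replace(entity, replacement), True
    return context, False

context = "Alan Turing worked at Bletchley Park. Alan Turing broke Enigma."
# Force a swap (p=1.0) here to show that all mentions are replaced together.
corrupted, swapped = corrupt_context(
    context, "Alan Turing", person_lexicon, random.Random(0), p=1.0
)
```

Replacing every occurrence (rather than just the first) is what keeps the corrupted context internally consistent, so a detector must rely on world knowledge rather than in-document contradictions.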
# Purpose of the Dataset
The purpose of this dataset is to assess whether a language model can detect factuality. |
cahya/instructions-test | 2023-02-10T01:11:45.000Z | [
"region:us"
] | cahya | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16048
num_examples: 22
download_size: 15127
dataset_size: 16048
---
# Dataset Card for "instructions-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigcode/jupyter-code-text-pairs | 2023-02-21T20:05:33.000Z | [
"region:us"
] | bigcode | null | null | null | 5 | 3 | ---
dataset_info:
features:
- name: markdown
dtype: string
- name: code
dtype: string
- name: output
dtype: string
- name: license
dtype: string
- name: path
dtype: string
- name: repo_name
dtype: string
splits:
- name: train
num_bytes: 13985979285
num_examples: 9305991
download_size: 6176464336
dataset_size: 13985979285
---
# Dataset Card for "jupyter-code-text-pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
VLyb/DBpedia500 | 2023-02-16T08:39:36.000Z | [
"license:unlicense",
"region:us"
] | VLyb | null | null | null | 0 | 3 | ---
license: unlicense
---
|
Mediocreatmybest/John_Gould_Birds_of_Australia | 2023-02-25T10:50:57.000Z | [
"language:en",
"license:cc0-1.0",
"region:us"
] | Mediocreatmybest | null | null | null | 0 | 3 | ---
license: cc0-1.0
language:
- en
---
# Birds of Australia
As described on RAWPIXEL:
Considered the “Father of bird study in Australia”, John Gould (1804–1881) authored some of the most celebrated publications on ornithology worldwide. His book "The Birds of Australia" (1840–1848), illustrated by his wife Elizabeth Gould (1804–1841), introduced more than 300 new birds to the world. His work also contributed to Charles Darwin’s much revered book ‘On the Origin of Species’. Available under the Creative Commons 0 license.
Created from CC-0 files on RawPixel.com
Image files can either be downloaded with your own script using the direct url column, or read from the image data saved directly in the image column.
<https://www.rawpixel.com/search?page=1&sort=curated&tags=%24thebirdsofaustralia&topic=%24thebirdsofaustralia&topic_group=%24publicdomain>
Parquet file created here: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/images2parq.py>
File can also be extracted from here: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/parq2folder.py>
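A minimal sketch of the second access pattern, writing the embedded image bytes straight to disk; the column names (url, image) follow the card's description, and the row content below is an invented placeholder, not real data from the parquet file:

```python
import os
import tempfile

# Invented placeholder row mimicking the described parquet columns.
rows = [
    {"url": "https://www.rawpixel.com/image/example-bird",
     "image": b"\x89PNG\r\n\x1a\n...placeholder bytes..."},
]

out_dir = tempfile.mkdtemp()
for i, row in enumerate(rows):
    # Option 1: fetch row["url"] with your own HTTP client (omitted here).
    # Option 2: write the embedded image bytes straight to disk.
    path = os.path.join(out_dir, f"bird_{i:04d}.png")
    with open(path, "wb") as f:
        f.write(row["image"])
```

With the real file, the same loop would follow a `pandas.read_parquet(...)` over the downloaded parquet instead of the hand-made `rows` list.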
|