nayohan/conversation_chronicles | nayohan | 2023-11-23T18:09:53Z | 46 | 0 | null | [
"region:us"
] | 2023-11-23T18:09:53Z | 2023-11-23T18:09:38.000Z | 2023-11-23T18:09:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: dataset
dtype: string
- name: data_id
dtype: string
- name: dialogue_id
dtype: int64
- name: session_id
dtype: int64
- name: relationship
dtype: string
- name: time_interval
dtype: string
- name: summarization
dtype: string
- name: dialogue
sequence: string
- name: speaker
sequence: string
splits:
- name: train
num_bytes: 66878033
num_examples: 40000
- name: validation
num_bytes: 8358511
num_examples: 5000
- name: test
num_bytes: 8375545
num_examples: 5000
download_size: 39941247
dataset_size: 83612089
---
# Dataset Card for "conversation_chronicles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
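In the features declared above, `speaker` and `dialogue` are parallel sequences, so a session can be rendered turn by turn. A minimal sketch in plain Python (the field names come from the YAML; the values below are invented for illustration):

```python
# Illustrative record shaped like the declared features (values are invented):
example = {
    "dialogue_id": 0,
    "session_id": 1,
    "relationship": "Classmates",
    "time_interval": "A few hours after",
    "speaker": ["Alice", "Bob"],
    "dialogue": ["Did you finish the assignment?", "Almost, one question left."],
}

def render_session(record):
    """Pair each utterance with its speaker, one line per turn."""
    return "\n".join(
        f"{who}: {turn}" for who, turn in zip(record["speaker"], record["dialogue"])
    )

print(render_session(example))
```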
lukesjordan/worldbank-project-documents | lukesjordan | 2022-10-24T20:10:40Z | 45 | 2 | null | [
"task_categories:table-to-text",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:abstractive-qa",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"task_ids:language-modeling",
"task_ids:named-entity-recognition",
"task_ids:... | 2022-10-24T20:10:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
- question-answering
- summarization
- text-generation
task_ids:
- abstractive-qa
- closed-domain-qa
- extractive-qa
- language-modeling
- named-entity-recognition
- text-simplification
pretty_name: worldbank_project_documents
language_bcp47:
- en-US
tags:
- conditional-text-generation
- structure-prediction
---
# Dataset Card for World Bank Project Documents
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/luke-grassroot/aid-outcomes-ml
- **Paper:** Forthcoming
- **Point of Contact:** Luke Jordan (lukej at mit)
### Dataset Summary
This is a dataset of documents related to World Bank development projects in the period 1947-2020. The dataset includes
the documents used to propose or describe projects when they are launched, and those produced during project review. The documents are indexed
by the World Bank project ID, which can be used to join features from multiple publicly available tabular datasets.
### Supported Tasks and Leaderboards
No leaderboard yet. A wide range of possible supported tasks, including varieties of summarization, QA, and language modelling. To date, the datasets have been used primarily in conjunction with tabular data (via BERT embeddings) to predict project outcomes.
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
* World Bank project ID
* Document text
* Document type: "APPROVAL" for documents written at the beginning of a project, when it is approved; and "REVIEW" for documents written at the end of a project
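Because documents are indexed by World Bank project ID, records can be grouped per project and document type. A sketch of that grouping (the exact column names here are assumptions, since the card does not state them):

```python
from collections import defaultdict

# Invented sample rows following the three fields listed above;
# the field names "project_id", "document_type", "document_text" are assumptions.
rows = [
    {"project_id": "P000001", "document_type": "APPROVAL", "document_text": "proposal text"},
    {"project_id": "P000001", "document_type": "REVIEW", "document_text": "review text"},
    {"project_id": "P000002", "document_type": "APPROVAL", "document_text": "another proposal"},
]

def group_by_project(records):
    """Index documents by project ID, then by document type."""
    projects = defaultdict(lambda: defaultdict(list))
    for r in records:
        projects[r["project_id"]][r["document_type"]].append(r["document_text"])
    return projects

grouped = group_by_project(rows)
print(sorted(grouped))
```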
### Data Splits
To allow open exploration, and because different applications will want to split with different sampling weights, we have not performed a train/test split; all files are in the train split.
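One way to build a reproducible custom split while keeping all documents for a project in the same side is to hash the project ID. A sketch under that assumption (this is not an official split of the dataset):

```python
import hashlib

def split_for(project_id: str, test_fraction: float = 0.2) -> str:
    """Deterministically assign a project to train or test by hashing its ID,
    so all documents for one project land in the same split across runs."""
    digest = hashlib.sha256(project_id.encode()).digest()
    bucket = digest[0] / 255.0  # roughly uniform in [0, 1]
    return "test" if bucket < test_fraction else "train"

splits = {pid: split_for(pid) for pid in ["P000001", "P000002", "P000003"]}
print(splits)
```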
## Dataset Creation
### Source Data
Documents were scraped from the World Bank's public project archive, following links through to specific project pages and then collecting the text files made available by the [World Bank](https://projects.worldbank.org/en/projects-operations/projects-home).
### Annotations
This dataset is not annotated.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset concerns development projects, which can have large-scale consequences for many millions of people.
### Discussion of Biases
The documents reflect the history of development, which has well-documented and well-studied issues with the imposition of developed-world ideas on developing-world countries. The documents provide a way to study those issues in the field of development, but should not be relied on for their descriptions of the recipient countries, since that language reflects a multitude of biases, especially in the older projects in the archive.
## Additional Information
### Dataset Curators
Luke Jordan, Busani Ndlovu.
### Licensing Information
MIT +no-false-attribs license (MITNFA).
### Citation Information
```
@dataset{world-bank-project-documents,
  author = {Jordan, Luke and Ndlovu, Busani and Shenk, Justin},
  title  = {World Bank Project Documents Dataset},
  year   = {2021}
}
```
### Contributions
Thanks to [@luke-grassroot](https://github.com/luke-grassroot), [@FRTNX](https://github.com/FRTNX/) and [@justinshenk](https://github.com/justinshenk) for adding this dataset.
projecte-aina/teca | projecte-aina | 2023-11-25T05:30:02Z | 45 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-nc-nd-4.0",
"arxiv:2107.07903",
"region:us"
] | 2023-11-25T05:30:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: teca
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for TE-ca
## Dataset Description
- **Website:** https://zenodo.org/record/4761458
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
TE-ca is a dataset of textual entailment in Catalan, which contains 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction or neutral).
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
### Supported Tasks and Leaderboards
Textual entailment, text classification, language modelling
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Example:
<pre>
{
"id": 3247,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
"label": "0"
},
{
"id": 2825,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "Les persones migrades seran acollides a Marràqueix",
"label": "1"
},
{
"id": 2431,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se",
"label": "2"
},
</pre>
### Data Fields
- premise: text
- hypothesis: text related to the premise
- label: relation between premise and hypothesis:
* 0: entailment
* 1: neutral
* 2: contradiction
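A small sketch applying this label mapping to the first example record above; note that the JSON stores the label as a string, so it must be cast to an integer before lookup:

```python
# Mapping from the field list above.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

# First record from the example section; the label is stored as a string.
record = {
    "id": 3247,
    "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
    "hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
    "label": "0",
}

label_name = LABELS[int(record["label"])]
print(label_name)
```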
### Data Splits
* dev.json: 2116 examples
* test.json: 2117 examples
* train.json: 16930 examples
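As a quick sanity check, the split sizes sum to the 21,163 premise-hypothesis pairs quoted in the summary:

```python
# Split sizes as listed above.
splits = {"train": 16930, "dev": 2116, "test": 2117}
total = sum(splits.values())
print(total)
```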
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
Source sentences are extracted from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) and from [VilaWeb](https://www.vilaweb.cat) newswire.
#### Initial Data Collection and Normalization
12,000 sentences from the BSC [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349), together with 6,200 headlines from the Catalan news site [VilaWeb](https://www.vilaweb.cat), were chosen at random. We filtered them by different criteria, such as length and stand-alone intelligibility. For each selected text, we commissioned 3 hypotheses (one for each entailment category) to be written by a team of native annotators.
Some sentence pairs were excluded because of inconsistencies.
#### Who are the source language producers?
The [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) consists of several corpora gathered from web crawling and public corpora; more information can be found at the linked record.
[VilaWeb](https://www.vilaweb.cat) is a Catalan newswire.
### Annotations
#### Annotation process
We commissioned 3 hypotheses (one for each entailment category) to be written by a team of annotators.
#### Who are the annotators?
Annotators are a team of native language collaborators from two independent companies.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529183)
thomwolf/codeparrot-valid | thomwolf | 2021-07-28T08:46:43Z | 45 | 0 | null | [
"region:us"
] | 2021-07-28T08:46:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
tomekkorbak/pile-toxic-chunk-0 | tomekkorbak | 2022-03-27T07:45:48Z | 45 | 0 | null | [
"region:us"
] | 2022-03-27T07:45:48Z | 2022-03-27T07:45:35.000Z | 2022-03-27T07:45:35 | Entry not found
stepp1/tweet_emotion_intensity | stepp1 | 2022-04-18T20:49:56Z | 45 | 4 | null | [
"region:us"
] | 2022-04-18T20:49:56Z | 2022-04-18T17:32:33.000Z | 2022-04-18T17:32:33 | # Tweet Emotion Intensity Dataset
## Papers:
* Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the sixth joint conference on lexical and computational semantics (*Sem), August 2017, Vancouver, Canada.
* WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017, Copenhagen, Denmark.
HuggingFaceM4/charades | HuggingFaceM4 | 2022-10-20T21:35:42Z | 45 | 2 | charades | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1604.01753",
"region:us"
] | 2022-10-20T21:35:42Z | 2022-05-11T07:07:47.000Z | 2022-05-11T07:07:47 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: charades
pretty_name: Charades
tags: []
---
# Dataset Card for Charades
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://prior.allenai.org/projects/charades
- **Repository:** https://github.com/gsig/charades-algorithms
- **Paper:** https://arxiv.org/abs/1604.01753
- **Leaderboard:** https://paperswithcode.com/sota/action-classification-on-charades
- **Point of Contact:** mailto: vision.amt@allenai.org
### Dataset Summary
Charades is a dataset of 9,848 videos of daily indoor activities collected through Amazon Mechanical Turk. 267 different users were each presented with a sentence that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (as in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos.
### Supported Tasks and Leaderboards
- `multilabel-action-classification`: The goal of this task is to classify actions happening in a video. This is a multilabel classification. The leaderboard is available [here](https://paperswithcode.com/sota/action-classification-on-charades)
### Languages
The annotations in the dataset are in English.
## Dataset Structure
### Data Instances
```
{
"video_id": "46GP8",
"video": "/home/amanpreet_huggingface_co/.cache/huggingface/datasets/downloads/extracted/3f022da5305aaa189f09476dbf7d5e02f6fe12766b927c076707360d00deb44d/46GP8.mp4",
"subject": "HR43",
"scene": "Kitchen",
"quality": 6,
"relevance": 7,
"verified": "Yes",
"script": "A person cooking on a stove while watching something out a window.",
"objects": ["food", "stove", "window"],
"descriptions": [
"A person cooks food on a stove before looking out of a window."
],
"labels": [92, 147],
"action_timings": [
[11.899999618530273, 21.200000762939453],
[0.0, 12.600000381469727]
],
"length": 24.829999923706055
}
```
### Data Fields
- `video_id`: `str` Unique identifier for each video.
- `video`: `str` Path to the video file
- `subject`: `str` Unique identifier for each subject in the dataset
- `scene`: `str` One of 15 indoor scenes in the dataset, such as Kitchen
- `quality`: `int` The quality of the video judged by an annotator (7-point scale, 7=high quality), -100 if missing
- `relevance`: `int` The relevance of the video to the script, judged by an annotator (7-point scale, 7=very relevant), -100 if missing
- `verified`: `str` 'Yes' if an annotator successfully verified that the video matches the script, else 'No'
- `script`: `str` The human-generated script used to generate the video
- `descriptions`: `List[str]` List of descriptions by annotators watching the video
- `labels`: `List[int]` Multi-label actions found in the video. Indices from 0 to 156.
- `action_timings`: `List[Tuple[int, int]]` Timing where each of the above actions happened.
- `length`: `float` The length of the video in seconds
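The `labels` and `action_timings` fields are parallel lists, and integer label indices correspond to the `cNNN` class codes. A sketch pairing them for the instance above (the class-name table here is abridged to the two labels used):

```python
# Values copied from the data instance above; timings rounded for readability.
labels = [92, 147]
action_timings = [(11.9, 21.2), (0.0, 12.6)]

# Abridged from the full 157-class table.
CLASS_NAMES = {
    92: "Watching/Looking outside of a window",
    147: "Someone is cooking something",
}

def actions(labels, timings):
    """Pair each action label with its cNNN code, name, and (start, end) timing."""
    return [
        (f"c{label:03d}", CLASS_NAMES.get(label, "?"), start, end)
        for label, (start, end) in zip(labels, timings)
    ]

out = actions(labels, action_timings)
print(out)
```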
<details>
<summary>
Click here to see the full list of Charades class labels mapping:
</summary>
|id|Class|
|--|-----|
|c000 | Holding some clothes |
|c001 | Putting clothes somewhere |
|c002 | Taking some clothes from somewhere |
|c003 | Throwing clothes somewhere |
|c004 | Tidying some clothes |
|c005 | Washing some clothes |
|c006 | Closing a door |
|c007 | Fixing a door |
|c008 | Opening a door |
|c009 | Putting something on a table |
|c010 | Sitting on a table |
|c011 | Sitting at a table |
|c012 | Tidying up a table |
|c013 | Washing a table |
|c014 | Working at a table |
|c015 | Holding a phone/camera |
|c016 | Playing with a phone/camera |
|c017 | Putting a phone/camera somewhere |
|c018 | Taking a phone/camera from somewhere |
|c019 | Talking on a phone/camera |
|c020 | Holding a bag |
|c021 | Opening a bag |
|c022 | Putting a bag somewhere |
|c023 | Taking a bag from somewhere |
|c024 | Throwing a bag somewhere |
|c025 | Closing a book |
|c026 | Holding a book |
|c027 | Opening a book |
|c028 | Putting a book somewhere |
|c029 | Smiling at a book |
|c030 | Taking a book from somewhere |
|c031 | Throwing a book somewhere |
|c032 | Watching/Reading/Looking at a book |
|c033 | Holding a towel/s |
|c034 | Putting a towel/s somewhere |
|c035 | Taking a towel/s from somewhere |
|c036 | Throwing a towel/s somewhere |
|c037 | Tidying up a towel/s |
|c038 | Washing something with a towel |
|c039 | Closing a box |
|c040 | Holding a box |
|c041 | Opening a box |
|c042 | Putting a box somewhere |
|c043 | Taking a box from somewhere |
|c044 | Taking something from a box |
|c045 | Throwing a box somewhere |
|c046 | Closing a laptop |
|c047 | Holding a laptop |
|c048 | Opening a laptop |
|c049 | Putting a laptop somewhere |
|c050 | Taking a laptop from somewhere |
|c051 | Watching a laptop or something on a laptop |
|c052 | Working/Playing on a laptop |
|c053 | Holding a shoe/shoes |
|c054 | Putting shoes somewhere |
|c055 | Putting on shoe/shoes |
|c056 | Taking shoes from somewhere |
|c057 | Taking off some shoes |
|c058 | Throwing shoes somewhere |
|c059 | Sitting in a chair |
|c060 | Standing on a chair |
|c061 | Holding some food |
|c062 | Putting some food somewhere |
|c063 | Taking food from somewhere |
|c064 | Throwing food somewhere |
|c065 | Eating a sandwich |
|c066 | Making a sandwich |
|c067 | Holding a sandwich |
|c068 | Putting a sandwich somewhere |
|c069 | Taking a sandwich from somewhere |
|c070 | Holding a blanket |
|c071 | Putting a blanket somewhere |
|c072 | Snuggling with a blanket |
|c073 | Taking a blanket from somewhere |
|c074 | Throwing a blanket somewhere |
|c075 | Tidying up a blanket/s |
|c076 | Holding a pillow |
|c077 | Putting a pillow somewhere |
|c078 | Snuggling with a pillow |
|c079 | Taking a pillow from somewhere |
|c080 | Throwing a pillow somewhere |
|c081 | Putting something on a shelf |
|c082 | Tidying a shelf or something on a shelf |
|c083 | Reaching for and grabbing a picture |
|c084 | Holding a picture |
|c085 | Laughing at a picture |
|c086 | Putting a picture somewhere |
|c087 | Taking a picture of something |
|c088 | Watching/looking at a picture |
|c089 | Closing a window |
|c090 | Opening a window |
|c091 | Washing a window |
|c092 | Watching/Looking outside of a window |
|c093 | Holding a mirror |
|c094 | Smiling in a mirror |
|c095 | Washing a mirror |
|c096 | Watching something/someone/themselves in a mirror |
|c097 | Walking through a doorway |
|c098 | Holding a broom |
|c099 | Putting a broom somewhere |
|c100 | Taking a broom from somewhere |
|c101 | Throwing a broom somewhere |
|c102 | Tidying up with a broom |
|c103 | Fixing a light |
|c104 | Turning on a light |
|c105 | Turning off a light |
|c106 | Drinking from a cup/glass/bottle |
|c107 | Holding a cup/glass/bottle of something |
|c108 | Pouring something into a cup/glass/bottle |
|c109 | Putting a cup/glass/bottle somewhere |
|c110 | Taking a cup/glass/bottle from somewhere |
|c111 | Washing a cup/glass/bottle |
|c112 | Closing a closet/cabinet |
|c113 | Opening a closet/cabinet |
|c114 | Tidying up a closet/cabinet |
|c115 | Someone is holding a paper/notebook |
|c116 | Putting their paper/notebook somewhere |
|c117 | Taking paper/notebook from somewhere |
|c118 | Holding a dish |
|c119 | Putting a dish/es somewhere |
|c120 | Taking a dish/es from somewhere |
|c121 | Wash a dish/dishes |
|c122 | Lying on a sofa/couch |
|c123 | Sitting on sofa/couch |
|c124 | Lying on the floor |
|c125 | Sitting on the floor |
|c126 | Throwing something on the floor |
|c127 | Tidying something on the floor |
|c128 | Holding some medicine |
|c129 | Taking/consuming some medicine |
|c130 | Putting groceries somewhere |
|c131 | Laughing at television |
|c132 | Watching television |
|c133 | Someone is awakening in bed |
|c134 | Lying on a bed |
|c135 | Sitting in a bed |
|c136 | Fixing a vacuum |
|c137 | Holding a vacuum |
|c138 | Taking a vacuum from somewhere |
|c139 | Washing their hands |
|c140 | Fixing a doorknob |
|c141 | Grasping onto a doorknob |
|c142 | Closing a refrigerator |
|c143 | Opening a refrigerator |
|c144 | Fixing their hair |
|c145 | Working on paper/notebook |
|c146 | Someone is awakening somewhere |
|c147 | Someone is cooking something |
|c148 | Someone is dressing |
|c149 | Someone is laughing |
|c150 | Someone is running somewhere |
|c151 | Someone is going from standing to sitting |
|c152 | Someone is smiling |
|c153 | Someone is sneezing |
|c154 | Someone is standing up from somewhere |
|c155 | Someone is undressing |
|c156 | Someone is eating something |
</details>
### Data Splits
| |train|test|
|-------------|----:|----:|
|# of examples| 7985| 1863|
## Dataset Creation
### Curation Rationale
> Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation.
### Source Data
#### Initial Data Collection and Normalization
> Similar to filming, we have a three-step process for generating a video. The first step is generating the script of the indoor video. The key here is to allow workers to generate diverse scripts yet ensure that we have enough data for each category. The second step in the process is to use the script and ask workers to record a video of that sentence being acted out. In the final step, we ask the workers to verify if the recorded video corresponds to script, followed by an annotation procedure.
#### Who are the source language producers?
Amazon Mechanical Turk workers
### Annotations
#### Annotation process
> Similar to filming, we have a three-step process for generating a video. The first step is generating the script of the indoor video. The key here is to allow workers to generate diverse scripts yet ensure that we have enough data for each category. The second step in the process is to use the script and ask workers to record a video of that sentence being acted out. In the final step, we ask the workers to verify if the recorded video corresponds to script, followed by an annotation procedure.
#### Who are the annotators?
Amazon Mechanical Turk workers
### Personal and Sensitive Information
Nothing specifically mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
AMT annotators
### Licensing Information
License for Non-Commercial Use
If this software is redistributed, this license must be included. The term software includes any source files, documentation, executables, models, and data.
This software and data is available for general use by academic or non-profit, or government-sponsored researchers. It may also be used for evaluation purposes elsewhere. This license does not grant the right to use this software or any derivation of it in a for-profit enterprise. For commercial use, please contact The Allen Institute for Artificial Intelligence.
This license does not grant the right to modify and publicly release the data in any form.
This license does not grant the right to distribute the data to a third party in any form.
The subjects in this data should be treated with respect and dignity. This license only grants the right to publish short segments or still images in an academic publication where necessary to present examples, experimental results, or observations.
This software comes with no warranty or guarantee of any kind. By using this software, the user accepts full liability.
The Allen Institute for Artificial Intelligence (C) 2016.
### Citation Information
```bibtex
@article{sigurdsson2016hollywood,
author = {Gunnar A. Sigurdsson and G{\"u}l Varol and Xiaolong Wang and Ivan Laptev and Ali Farhadi and Abhinav Gupta},
title = {Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding},
journal = {ArXiv e-prints},
eprint = {1604.01753},
year = {2016},
url = {http://arxiv.org/abs/1604.01753},
}
```
### Contributions
Thanks to [@apsdehal](https://github.com/apsdehal) for adding this dataset.
Blaise-g/SumPubmed | Blaise-g | 2022-07-28T19:53:40Z | 45 | 0 | null | [
"language:en",
"region:us"
] | 2022-07-28T19:53:40Z | 2022-07-16T15:09:11.000Z | 2022-07-16T15:09:11 | ---
language:
- en
paperswithcode_id:
pretty_name: SumPubmed
train-eval-index:
- config: Blaise-g--SumPubmed
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
text: text
abstract: target
---
# Dataset Card for "SumPubmed"
## Original Dataset Description
- **Repository:** [https://github.com/vgupta123/sumpubmed](https://github.com/vgupta123/sumpubmed)
- **Paper:** [More Information Needed](https://vgupta123.github.io/docs/121_paper.pdf)
## Description of dataset processing
5 rows were dropped from the original dataset (taken from Kaggle), as they were missing their 'shorter_abstract' entries.
The 'line_text' and 'filename_text' columns were left untouched, while the remaining ones were processed to remove the '\n' (many repetitions of which were present in the original dataset), '\<dig\>', '\<cit\>', 'BACKGROUND', 'RESULTS' and 'CONCLUSIONS' strings, which were deemed unnecessary for the purpose of summarization. Additionally, extra spaces were removed and spacing around punctuation was fixed.
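A rough, illustrative sketch of that cleaning pipeline; the exact rules used by the authors are not published, so the patterns below are assumptions:

```python
import re

def clean(text: str) -> str:
    """Sketch of the processing described above: drop placeholder tokens and
    section markers, collapse whitespace, and fix spacing before punctuation."""
    text = re.sub(r"<dig>|<cit>|BACKGROUND|RESULTS|CONCLUSIONS", " ", text)
    text = text.replace("\n", " ")
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)  # no space before punctuation
    text = re.sub(r"\s{2,}", " ", text)           # collapse repeated spaces
    return text.strip()

sample = "BACKGROUND\n\nthe study <cit> enrolled <dig> patients .\nresults were stable ."
print(clean(sample))
```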
bigbio/mednli | bigbio | 2022-12-22T15:24:43Z | 45 | 5 | mednli | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:24:43Z | 2022-09-26T03:08:16.000Z | 2022-09-26T03:08:16 | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_short_name: PHYSIONET_LICENSE_1p5
pretty_name: MedNLI
homepage: https://physionet.org/content/mednli/1.0.0/
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- TEXTUAL_ENTAILMENT
paperswithcode_id: mednli
---
# Dataset Card for MedNLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli/1.0.0/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
State-of-the-art models using deep neural networks have become very good at learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
knowledge-intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors, performing a natural language inference (NLI) task
grounded in the medical history of patients. As the source of premise sentences, we used
MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical
notes corresponding to deceased patients. The clinicians on our team suggested the Past Medical
History as the most informative section of a clinical note, from which useful inferences can be
drawn about the patient.
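Access to MedNLI requires PhysioNet credentials, so no real data is shown here; the sketch below uses an entirely synthetic record to illustrate the textual-entailment format (a premise, a hypothesis, and one of the three standard NLI labels):

```python
# Hypothetical, synthetic example illustrating the NLI record structure;
# real MedNLI data is gated on PhysioNet and is not reproduced here.
example = {
    "premise": "Patient has a history of type 2 diabetes.",
    "hypothesis": "The patient has an endocrine disorder.",
    "label": "entailment",  # one of: entailment, contradiction, neutral
}

LABELS = {"entailment", "contradiction", "neutral"}
assert example["label"] in LABELS
print(example["label"])  # entailment
```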
## Citation Information
```
@misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
}
```
Dialogue-Model-Research-Group/v2ex | Dialogue-Model-Research-Group | 2022-11-15T14:52:02Z | 45 | 2 | null | [
"license:cc",
"region:us"
] | 2022-11-15T14:52:02Z | 2022-10-26T07:13:27.000Z | 2022-10-26T07:13:27 | ---
license: cc
dataset_info:
- config_name: topic
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: content
dtype: string
- name: content_rendered
dtype: string
- name: syntax
dtype: int64
- name: url
dtype: string
- name: replies
dtype: int64
- name: last_reply_by
dtype: string
- name: created
dtype: int64
- name: last_modified
dtype: int64
- name: last_touched
dtype: int64
- name: member
struct:
- name: id
dtype: int64
- name: username
dtype: string
- name: bio
dtype: string
- name: website
dtype: string
- name: github
dtype: string
- name: url
dtype: string
- name: avatar
dtype: string
- name: created
dtype: int64
- name: node
struct:
- name: id
dtype: int64
- name: url
dtype: string
- name: name
dtype: string
- name: title
dtype: string
- name: header
dtype: string
- name: footer
dtype: string
- name: avatar
dtype: string
- name: topics
dtype: int64
- name: created
dtype: int64
- name: last_modified
dtype: int64
- name: supplements
sequence:
- name: id
dtype: int64
- name: content
dtype: string
- name: content_rendered
dtype: string
- name: syntax
dtype: int64
- name: created
dtype: int64
splits:
- name: train
num_bytes: 522790208
num_examples: 262120
download_size: 153558181
dataset_size: 522790208
- config_name: replies
features:
- name: id
dtype: int64
- name: content
dtype: string
- name: content_rendered
dtype: string
- name: created
dtype: int64
- name: member
struct:
- name: id
dtype: int64
- name: username
dtype: string
- name: bio
dtype: string
- name: website
dtype: string
- name: github
dtype: string
- name: url
dtype: string
- name: avatar
dtype: string
- name: created
dtype: int64
- name: topic_id
dtype: int64
splits:
- name: train
num_bytes: 1554954801
num_examples: 3553953
download_size: 462827899
dataset_size: 1554954801
---
Dizex/FoodBase | Dizex | 2022-10-31T12:48:53Z | 45 | 1 | null | [
"region:us"
] | 2022-10-31T12:48:53Z | 2022-10-31T12:42:55.000Z | 2022-10-31T12:42:55 | ---
dataset_info:
features:
- name: nltk_tokens
sequence: string
- name: iob_tags
sequence: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2040036
num_examples: 600
- name: val
num_bytes: 662190
num_examples: 200
download_size: 353747
dataset_size: 2702226
---
# Dataset Card for "FoodBase"
Dataset for the FoodBase corpus, introduced in [this paper](https://academic.oup.com/database/article/doi/10.1093/database/baz121/5611291).
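FoodBase is annotated with IOB tags over NLTK tokens (`nltk_tokens` / `iob_tags`). Below is a hedged sketch of decoding such tags into entity spans; the tokens and the `FOOD` tag name are hypothetical illustrations, not taken from the corpus:

```python
# Hypothetical tokens/tags shaped like the `nltk_tokens` and `iob_tags` columns.
tokens = ["Add", "olive", "oil", "and", "garlic", "."]
tags = ["O", "B-FOOD", "I-FOOD", "O", "B-FOOD", "O"]

entities, current = [], []
for tok, tag in zip(tokens, tags):
    if tag.startswith("B-"):        # a new entity begins
        if current:
            entities.append(" ".join(current))
        current = [tok]
    elif tag.startswith("I-") and current:  # entity continues
        current.append(tok)
    else:                            # outside any entity
        if current:
            entities.append(" ".join(current))
            current = []
if current:
    entities.append(" ".join(current))

print(entities)  # ['olive oil', 'garlic']
```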
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lawcompany/KLAID | lawcompany | 2022-11-17T07:09:10Z | 45 | 7 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"language:ko",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-11-17T07:09:10Z | 2022-11-13T05:21:05.000Z | 2022-11-13T05:21:05 | ---
pretty_name: KLAID
viewer: true
language: ko
multilinguality:
- monolingual
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for KLAID
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Other Inquiries](#other_inquiries)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://klaid.net](https://klaid.net)
- **Leaderboard:** [https://klaid.net](https://klaid.net)
- **Point of Contact:** [klaid@lawcompany.co.kr](klaid@lawcompany.co.kr)
### Dataset Summary
Korean Legal Artificial Intelligence Datasets (KLAID) is a dataset for the development of Korean legal artificial intelligence technology. We currently offer one task: legal judgment prediction (LJP).
### Supported Tasks and Leaderboards
Legal Judgment Prediction(LJP)
### Languages
`korean`
### How to use
```python
from datasets import load_dataset
# legal judgment prediction
dataset = load_dataset("lawcompany/KLAID", 'ljp')
```
## Dataset Structure
### Data Instances
#### ljp
An example of 'train' looks as follows.
```
{
'fact': '피고인은 2022. 11. 14. 혈중알콜농도 0.123%의 술에 취한 상태로 승용차를 운전하였다.',
'laws_service': '도로교통법 제148조의2 제3항 제2호,도로교통법 제44조 제1항',
'laws_service_id': 7
}
```
Other References
You can refer to each label's 'laws service content' [here](https://storage.googleapis.com/klaid/ljp/dataset/ljp_laws_service_content.json).
The 'laws service content' is the statute ([source](https://www.law.go.kr/)) corresponding to each label.
### Data Fields
#### ljp
+ "fact": a `string` feature
+ "laws_service": a `string` feature
+ "laws_service_id": a classification label, with 177 legal judgment values
[More Information Needed](https://klaid.net/tasks-1)
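As a minimal sketch, a single record (copied from the 'ljp' example above) can be inspected like this; splitting `laws_service` on commas is an assumption based on that example, not a documented format:

```python
# Record copied from the 'ljp' example in this card.
record = {
    "fact": "피고인은 2022. 11. 14. 혈중알콜농도 0.123%의 술에 취한 상태로 승용차를 운전하였다.",
    "laws_service": "도로교통법 제148조의2 제3항 제2호,도로교통법 제44조 제1항",
    "laws_service_id": 7,
}

# `laws_service` appears to list the applied statutes separated by commas.
statutes = record["laws_service"].split(",")
print(len(statutes), record["laws_service_id"])  # 2 7
```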
### Data Splits
#### ljp
+ train: 161,192
## Dataset Creation
### Curation Rationale
The legal domain is arguably one of the fields that most requires expert knowledge to comprehend. Progress in natural language processing depends on many factors, and we focus on the dataset requirements. As a gold standard is necessary for testing and training a neural model, we hope that our dataset release will help advance natural language processing in the legal domain, especially for the Korean legal system.
### Source Data
These are datasets based on Korean legal case data.
### Personal and Sensitive Information
Due to the nature of legal case data, personal and sensitive information may be included. Therefore, to prevent problems that could arise from personal and sensitive information, we de-identified the legal cases.
## Considerations for Using the Data
### Other Known Limitations
We plan to upload more data and keep it updated, as some of the court records may be revised over time in line with the ever-evolving legal system.
## Additional Information
### Other Inquiries
[klaid@lawcompany.co.kr](klaid@lawcompany.co.kr)
### Licensing Information
Copyright 2022-present [Law&Company Co. Ltd.](https://career.lawcompany.co.kr/)
Licensed under the CC-BY-NC-ND-4.0
### Contributions
[More Information Needed]
tonytan48/Re-DocRED | tonytan48 | 2022-11-25T02:48:32Z | 45 | 0 | null | [
"license:mit",
"arxiv:2205.12696",
"region:us"
] | 2022-11-25T02:48:32Z | 2022-11-25T02:42:48.000Z | 2022-11-25T02:42:48 | ---
license: mit
---
# Re-DocRED Dataset
This repository contains the dataset of our EMNLP 2022 research paper [Revisiting DocRED – Addressing the False Negative Problem
in Relation Extraction](https://arxiv.org/pdf/2205.12696.pdf).
DocRED is a widely used benchmark for document-level relation extraction. However, the DocRED dataset contains a significant percentage of false negative examples (incomplete annotation). We revised 4,053 documents in the DocRED dataset and resolved its problems. We released this dataset as: Re-DocRED dataset.
The Re-DocRED Dataset resolved the following problems of DocRED:
1. Resolved the incompleteness problem by supplementing large amounts of relation triples.
2. Addressed the logical inconsistencies in DocRED.
3. Corrected the coreferential errors within DocRED.
# Statistics of Re-DocRED
The Re-DocRED dataset is located as ./data directory, the statistics of the dataset are shown below:
| | Train | Dev | Test |
| :---: | :-: | :-: |:-: |
| # Documents | 3,053 | 500 | 500 |
| Avg. # Triples | 28.1 | 34.6 | 34.9 |
| Avg. # Entities | 19.4 | 19.4 | 19.6 |
| Avg. # Sents | 7.9 | 8.2 | 7.9 |
# Citation
If you find our work useful, please cite our work as:
```bibtex
@inproceedings{tan2022revisiting,
title={Revisiting DocRED – Addressing the False Negative Problem in Relation Extraction},
author={Tan, Qingyu and Xu, Lu and Bing, Lidong and Ng, Hwee Tou and Aljunied, Sharifah Mahani},
booktitle={Proceedings of EMNLP},
url={https://arxiv.org/abs/2205.12696},
year={2022}
}
```
deutsche-telekom/NLU-Evaluation-Data-en-de | deutsche-telekom | 2022-12-29T20:33:24Z | 45 | 1 | null | [
"task_categories:text-classification",
"task_ids:intent-classification",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|nlu_evaluation_data",
"language:en",
"language:de",
"license:cc-by-4.0",
"arxiv:1903.05566",
"region:us"
] | 2022-12-29T20:33:24Z | 2022-12-01T16:54:19.000Z | 2022-12-01T16:54:19 | ---
license: cc-by-4.0
source_datasets:
- extended|nlu_evaluation_data
multilinguality:
- multilingual
language:
- en
- de
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- intent-classification
---
# NLU Evaluation Data - English and German
A labeled English **and German** language multi-domain dataset (21 domains) with 25K user utterances for human-robot interaction.
This dataset is collected and annotated for evaluating NLU services and platforms.
The detailed paper on this dataset can be found at arXiv.org:
[Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/abs/1903.05566)
The dataset builds on the annotated data of the [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)
repository. We have added an additional column (`answer_de`)
by translating the texts in column `answer` into German.
The translation was made with [DeepL](https://www.deepl.com/translator).
## Labels
The columns `scenario` and `intent` can be used for classification tasks.
However, we recommend to use even more fine-grained labels.
For this purpose, a new label can be derived by concatenating `scenario` and `intent`.
For example, this would turn "alarm" and "set" into "alarm_set".
## Dataset Quirks
The original dataset contains some `NaN` values in the `answer` column.
This means that there are also `NaN` values in the translations (`answer_de` column).
These rows should be filtered.
The dataset also contains duplicate values.
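The recommended label derivation and the filtering of `NaN` rows and duplicates can be sketched as follows (the sample rows are hypothetical and only mimic the dataset schema; the real data is loaded via the `datasets` library):

```python
# Hypothetical sample rows mimicking the dataset schema.
rows = [
    {"scenario": "alarm", "intent": "set", "answer": "wake me up at 7 am", "answer_de": "weck mich um 7 Uhr"},
    {"scenario": "alarm", "intent": "set", "answer": "wake me up at 7 am", "answer_de": "weck mich um 7 Uhr"},  # duplicate
    {"scenario": "audio", "intent": "volume_up", "answer": None, "answer_de": None},  # missing answer
]

cleaned, seen = [], set()
for row in rows:
    if row["answer"] is None:  # drop rows with NaN answers
        continue
    key = (row["scenario"], row["intent"], row["answer"])
    if key in seen:            # drop duplicate rows
        continue
    seen.add(key)
    # Derive the fine-grained label by concatenating scenario and intent.
    row = dict(row, label=f"{row['scenario']}_{row['intent']}")
    cleaned.append(row)

print([r["label"] for r in cleaned])  # ['alarm_set']
```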
## Copyright
Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/)
All data is released under the
[Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
parambharat/tamil_asr_corpus | parambharat | 2022-12-07T17:32:59Z | 45 | 1 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|common_voice",
"source_datasets:extended|openslr",
"language:ta",
"license:cc-by-4.0",
"region:us"
] | 2022-12-07T17:32:59Z | 2022-12-07T16:36:05.000Z | 2022-12-07T16:36:05 | ---
annotations_creators:
- found
language:
- ta
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Tamil ASR Corpus
size_categories:
- 100K<n<1M
source_datasets:
- extended|common_voice
- extended|openslr
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for Tamil ASR Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
dipesh/Intent-Classification-small | dipesh | 2023-01-27T22:08:26Z | 45 | 0 | null | [
"region:us"
] | 2023-01-27T22:08:26Z | 2023-01-27T22:08:07.000Z | 2023-01-27T22:08:07 | ---
dataset_info:
features:
- name: text
dtype: string
- name: intent
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: label
dtype:
class_label:
names:
'0': goodbye
'1': volume control
'2': play games
'3': covid cases
'4': open website
'5': tell me joke
'6': play on youtube
'7': places near me
'8': greet and hello hi kind of things, general check in
'9': asking time
'10': asking date
'11': tell me news
'12': asking weather
'13': download youtube video
'14': what can you do
'15': take screenshot
'16': send email
'17': i am bored
'18': click photo
'19': tell me about
'20': send whatsapp message
splits:
- name: train
num_bytes: 630723
num_examples: 6153
- name: validation
num_bytes: 71230
num_examples: 684
download_size: 201336
dataset_size: 701953
---
# Dataset Card for "Intent-Classification-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/VQAv2_sample_validation | Multimodal-Fatima | 2023-06-09T00:06:10Z | 45 | 0 | null | [
"region:us"
] | 2023-06-09T00:06:10Z | 2023-02-10T17:59:57.000Z | 2023-02-10T17:59:57 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: image
dtype: image
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: new_info_captions3
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence:
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: blip_caption_beam_5_Salesforce_blip2_flan_t5_xxl
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_topk_50_Salesforce_blip_image_captioning_base_multiple
sequence: string
- name: DETA_detections_deta_swin_large_o365_clip_caption_all_patches_Salesforce_blip_image_captioning_large__ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_Salesforce_blip_image_captioning_large_intensive
sequence: string
- name: blip_caption_Salesforce_blip_image_captioning_base_intensive
sequence: string
splits:
- name: validation
num_bytes: 511357022.0
num_examples: 1000
download_size: 293191811
dataset_size: 511357022.0
---
# Dataset Card for "VQAv2_sample_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
voiceintelligenceresearch/MOCKS | voiceintelligenceresearch | 2023-10-27T15:55:12Z | 45 | 0 | null | [
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"language:de",
"language:es",
"language:fr",
"language:it",
"license:cc-by-4.0",
"license:mpl-2.0",
"region:us"
] | 2023-10-27T15:55:12Z | 2023-02-20T13:40:22.000Z | 2023-02-20T13:40:22 | ---
annotations_creators:
- expert-generated
language:
- en
- de
- es
- fr
- it
license:
- cc-by-4.0
- mpl-2.0
multilinguality:
- multilingual
dataset_info:
- config_name: config
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
---
# MOCKS: Multilingual Open Custom Keyword Spotting Testset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [MOCKS 1.0: Multilingual Open Custom Keyword Spotting Testset](https://www.isca-speech.org/archive/pdfs/interspeech_2023/pudo23_interspeech.pdf)
### Dataset Summary
Multilingual Open Custom Keyword Spotting Testset (MOCKS) is a comprehensive audio testset for evaluating and benchmarking
Open-Vocabulary Keyword Spotting (OV-KWS) models. It supports multiple OV-KWS problems:
both text-based and audio-based keyword spotting, as well as offline and online (streaming) modes.
It is based on the LibriSpeech and Mozilla Common Voice datasets and contains
almost 50,000 keywords, with audio data available in English, French, German, Italian, and Spanish.
The testset was generated using automatically produced alignments, which were used to extract parts of the recordings; these parts were then split into keywords and test samples.
MOCKS contains both positive and challenging negative examples, selected based on phonetic transcriptions, and should allow for in-depth OV-KWS model evaluation.
Please refer to our [paper](https://www.isca-speech.org/archive/pdfs/interspeech_2023/pudo23_interspeech.pdf) for further details.
### Supported Tasks and Leaderboards
The MOCKS dataset can be used for the Open-Vocabulary Keyword Spotting (OV-KWS) task. It supports two OV-KWS types:
- Query-by-Text, where the keyword is provided by text and needs to be detected in the audio stream.
- Query-by-Example, where the keyword is provided with enrollment audio for detection in the audio stream.
It also allows for:
- offline keyword detection, where test audio is trimmed to contain only keywords of interest.
- online (streaming) keyword detection, where test audio has past and future context besides keywords of interest.
### Languages
The MOCKS incorporates 5 languages:
- English - primary and largest test set,
- German,
- Spanish,
- French,
- Italian.
## Dataset Structure
The MOCKS testset is split by language, source dataset, and OV-KWS type:
```
MOCKS
│
└───de
│ └───MCV
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ all.pair.positive.tsv
│ │ │ │ │ all.pair.similar.tsv
│ │ │ │ │ data.tar.gz
│ │ │ │ │ subset.pair.different.tsv
│ │ │ │ │ subset.pair.positive.tsv
│ │ │ │ │ subset.pair.similar.tsv
│ │ │ │
│ │ │ └───online
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ data.offline.transcription.tsv
│ │ │ │ data.online.transcription.tsv
│
└───en
│ └───LS-clean
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ ...
│ │
│ └───LS-other
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ ...
│ │
│ └───MCV
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ ...
│
└───...
```
Each split is divided into:
- positive examples (`all.pair.positive.tsv`) - test examples with true keywords, 5000-8000 keywords in each subset,
- similar examples (`all.pair.similar.tsv`) - test examples with phrases similar to the keyword, selected based on phonetic transcription distance,
- different examples (`all.pair.different.tsv`) - test examples with completely different phrases.
All those files contain tab-separated columns:
- `keyword_path` - path to audio containing keyword phrase.
- `adversary_keyword_path` - path to test audio.
- `adversary_keyword_timestamp_start` - start time (in seconds) of the phrase of interest for the given keyword from `keyword_path`; this field is only available in the **offline** split.
- `adversary_keyword_timestamp_end` - end time (in seconds) of the phrase of interest for the given keyword from `keyword_path`; this field is only available in the **offline** split.
- `label` - whether the audio at `adversary_keyword_path` contains the keyword from `keyword_path` (1 - contains the keyword, 0 - does not).
Each split also contains a subset of the whole data with the same field structure, to allow faster evaluation (`subset.pair.*.tsv`).
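As an illustration, the pair files can be read with the standard library's `csv` module. This is a hedged sketch: the column names come from the list above, but the sample row content and the presence of a header line are assumptions for demonstration.

```python
import csv
import io

# A tiny in-memory stand-in for an offline `all.pair.positive.tsv` file;
# real files would be opened with open(path, newline="") instead.
sample_tsv = (
    "keyword_path\tadversary_keyword_path\t"
    "adversary_keyword_timestamp_start\tadversary_keyword_timestamp_end\tlabel\n"
    "kw_0001.wav\ttest_0042.wav\t0.50\t1.25\t1\n"
)

rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
# Keep only test cases whose audio actually contains the keyword (label == 1)
positives = [r for r in rows if r["label"] == "1"]
print(positives[0]["keyword_path"])  # kw_0001.wav
```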
Transcriptions are also provided for each audio file in:
- `data.offline.transcription.tsv` - transcriptions for **offline** examples and for `keyword_path` files from the **online** scenario,
- `data.online.transcription.tsv` - transcriptions for the adversary (test) examples from the **online** scenario.
Each file contains three columns:
- `path_to_keyword`/`path_to_adversary_keyword` - path to the audio file,
- `keyword_transcription`/`adversary_keyword_transcription` - audio transcription,
- `keyword_phonetic_transcription`/`adversary_keyword_phonetic_transcription` - audio phonetic transcription.
## Using the Dataset
The dataset can be used by:
- downloading the archive and constructing all the test cases based on the provided `tsv` files,
- `datasets` package.
In the latter case, the following should work:
```
load_dataset(path="voiceintelligenceresearch/MOCKS", name="en.LS-clean", split="offline")
```
The allowed values for `name` are:
- `en.LS-{clean,other}`,
- `en.LS-{clean,other}.positive`,
- `en.LS-{clean,other}.similar`,
- `en.LS-{clean,other}.different`,
- `en.LS-{clean,other}.subset`,
- `en.LS-{clean,other}.positive_subset`,
- `en.LS-{clean,other}.similar_subset`,
- `en.LS-{clean,other}.different_subset`,
- `{de,en,es,fr,it}.MCV.positive`,
- `{de,en,es,fr,it}.MCV.positive.similar`,
- `{de,en,es,fr,it}.MCV.positive.different`,
- `{de,en,es,fr,it}.MCV.positive.subset`,
- `{de,en,es,fr,it}.MCV.positive.positive_subset`,
- `{de,en,es,fr,it}.MCV.positive.similar_subset`,
- `{de,en,es,fr,it}.MCV.positive.different_subset`.
The allowed values for `split` are:
- `offline`,
- `online`.
`load_dataset` provides a list of dictionary objects with the following contents:
```
{
"keyword_id": datasets.Value("string"),
"keyword_transcription": datasets.Value("string"),
"test_id": datasets.Value("string"),
"test_transcription": datasets.Value("string"),
"test_audio": datasets.Audio(sampling_rate=16000),
"label": datasets.Value("bool"),
}
```
Each element of this list represents a single test case for the QbyT KWS:
- `keyword_id` - the name of the keyword audio file in `data.tar.gz` (not used in QbyT KWS),
- `keyword_transcription` - transcription of the keyword,
- `test_id` - the name of the test audio file in `data.tar.gz`,
- `test_transcription` - transcription of the test sample,
- `test_audio` - raw data of the test audio,
- `label` - `True` if the test case is positive (`keyword_transcription` is a substring of the `test_transcription`), `False` otherwise (`similar` and `different` subsets).
Note that each test case can be extended to QbyE KWS by reading the proper `keyword_id` file. Unfortunately, there is no easy way to do that in the loading script.
All the test files are provided in 16 kHz, even though `{de,en,es,fr,it}.MCV` files are stored in the original sampling (usually 48 kHz) in the `data.tar.gz` archives.
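Since all test files are provided at 16 kHz, the offline split's timestamp fields can be converted directly into sample indices for trimming. A minimal illustrative sketch (not part of the dataset tooling):

```python
SAMPLE_RATE = 16_000  # all test files are provided in 16 kHz

def timestamps_to_samples(start_s: float, end_s: float, sample_rate: int = SAMPLE_RATE):
    """Convert start/end times in seconds into sample indices for trimming."""
    return int(round(start_s * sample_rate)), int(round(end_s * sample_rate))

# e.g. a keyword spanning 0.5 s - 1.25 s of the test audio
start_idx, end_idx = timestamps_to_samples(0.5, 1.25)
print(start_idx, end_idx)  # 8000 20000
```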
## Dataset Creation
The MOCKS testset was created from LibriSpeech and Mozilla Common Voice (MCV) datasets that are publicly available. To create it:
- the [MFA](https://mfa-models.readthedocs.io/en/latest/acoustic/index.html) aligner with publicly available models was used to extract word-level alignments,
- an internally developed, rule-based grapheme-to-phoneme (G2P) algorithm was used to prepare phonetic transcriptions for each sample.
The data is stored in 16-bit, single-channel WAV format. A 16 kHz sampling rate is used for the LibriSpeech-based testset
and 48 kHz for the MCV-based testset.
The offline testset contains an additional 0.1 seconds at the beginning and end of the extracted audio sample to mitigate the cut-speech effect.
The online version contains approximately 1 additional second at the beginning and end of the extracted audio sample.
The MOCKS testset is gender balanced.
## Citation Information
```bibtex
@inproceedings{pudo23_interspeech,
author={Mikołaj Pudo and Mateusz Wosik and Adam Cieślak and Justyna Krzywdziak and Bożena Łukasiak and Artur Janicki},
title={{MOCKS} 1.0: Multilingual Open Custom Keyword Spotting Testset},
year={2023},
booktitle={Proc. Interspeech 2023},
}
```
vietgpt/binhvq_news_vi | vietgpt | 2023-03-30T18:58:53Z | 45 | 0 | null | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:vi",
"LM",
"region:us"
] | 2023-03-30T18:58:53Z | 2023-02-21T20:08:06.000Z | 2023-02-21T20:08:06 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8211350978.574438
num_examples: 19365593
download_size: 4780706833
dataset_size: 8211350978.574438
task_categories:
- text-generation
language:
- vi
tags:
- LM
size_categories:
- 10M<n<100M
---
# Binhvq News
- Source: https://github.com/binhvq/news-corpus
- Num examples: 19,365,593
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/binhvq_news_vi")
```
Supermaxman/esa-hubble | Supermaxman | 2023-02-26T13:20:26Z | 45 | 9 | null | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"space",
"region:us"
] | 2023-02-26T13:20:26Z | 2023-02-22T22:03:08.000Z | 2023-02-22T22:03:08 |
---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: credits
dtype: string
- name: url
dtype: string
- name: Id
dtype: string
- name: Type
dtype: string
- name: Release date
dtype: string
- name: Related releases
dtype: string
- name: Size
dtype: string
- name: Name
dtype: string
- name: Distance
dtype: string
- name: Constellation
dtype: string
- name: Category
dtype: string
- name: Position (RA)
dtype: string
- name: Position (Dec)
dtype: string
- name: Field of view
dtype: string
- name: Orientation
dtype: string
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_size
dtype: int64
- name: crop_w
dtype: int64
- name: crop_h
dtype: int64
- name: cropped
dtype: bool
- name: Related science announcements
dtype: string
- name: Related announcements
dtype: string
splits:
- name: train
num_bytes: 94474695584.124
num_examples: 2706
download_size: 61236366105
dataset_size: 94474695584.124
license: cc-by-4.0
task_categories:
- text-to-image
language:
- en
tags:
- space
pretty_name: ESA Hubble Deep Space Images & Captions
size_categories:
- 1K<n<10K
---
# Dataset Card for ESA Hubble Deep Space Images & Captions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Examples](#examples)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ESA Hubble](https://esahubble.org/)
- **Repository:** [Hubble Diffusion repository](https://github.com/Supermaxman/hubble-diffusion)
- **Point of Contact:** [Maxwell Weinzierl](mailto:maxwell.weinzierl@utdallas.edu)
### Dataset Summary
The ESA Hubble Deep Space Images & Captions dataset is composed primarily of Hubble deep space scans as high-resolution images,
along with textual descriptions written by ESA/Hubble. Metadata is also included, which enables more detailed filtering and understanding of massive space scans.
The purpose of this dataset is to enable text-to-image generation methods for generating high-quality deep space scans from prompts.
Check out [Hubble Diffusion v2](https://huggingface.co/Supermaxman/hubble-diffusion-2) for an example of a model trained on this dataset!
### Examples
#### A grazing encounter between two spiral galaxies
> In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.
>
>
> Credit: NASA/ESA and The Hubble Heritage Team (STScI)
#### The magnificent starburst galaxy Messier 82
> This mosaic image of the magnificent starburst galaxy, Messier 82 (M82) is the sharpest wide-angle view ever obtained of M82. It is a galaxy remarkable for its webs of shredded clouds and flame-like plumes of glowing hydrogen blasting out from its central regions where young stars are being born 10 times faster than they are inside in our Milky Way Galaxy.
>
>
> Credit: NASA, ESA and the Hubble Heritage Team (STScI/AURA). Acknowledgment: J. Gallagher (University of Wisconsin), M. Mountain (STScI) and P. Puxley (NSF).
#### Extreme star cluster bursts into life in new Hubble image
> The star-forming region NGC 3603 - seen here in the latest Hubble Space Telescope image - contains one of the most impressive massive young star clusters in the Milky Way. Bathed in gas and dust the cluster formed in a huge rush of star formation thought to have occurred around a million years ago. The hot blue stars at the core are responsible for carving out a huge cavity in the gas seen to the right of the star cluster in NGC 3603's centre.
>
>
> Credit: NASA, ESA and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration
#### Statistics
- There are a total of 2,706 deep space images
- The complete uncompressed size of the dataset is 120 GB, so definitely make use of [Streaming](https://huggingface.co/docs/datasets/stream)
- The average image is 44 MB, while the max image size is 432 MB
- The average image has a height of 2,881 pixels, and an average width of 3,267 pixels
### Supported Tasks and Leaderboards
- `text-to-image`: The dataset can be used to train a model for conditional image generation from text. A conditional text-to-image generation model is presented with a text prompt, and is asked to generate an image which aligns with that text prompt. Model performance is typically measured by human judgement, as it is difficult to automatically measure the quality of generated images and how closely they match the text prompt. An example of a text-to-image model is [Stable Diffusion v2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1). An example of a text-to-image model trained on this dataset is [Hubble Diffusion v2](https://huggingface.co/Supermaxman/hubble-diffusion-2).
### Languages
The text describing the images in the dataset is in English, as written by the writers from ESA/Hubble at [https://esahubble.org/](https://esahubble.org/). The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a high-quality deep space scan as an image, along with a textual description of that image produced by ESA/Hubble.
The textual description was derived by combining the `title` and the `description` of the image from the ESA/Hubble website.
Additionally, each data point also contains significant metadata about the image, such as the type of image, credits, the URL, the release date, and more.
An example looks as follows:
```json
{
"image": "<encoded image>",
"text":"A grazing encounter between two spiral galaxies: In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.",
"id":"opo9941a",
"title":"A grazing encounter between two spiral galaxies",
"description":"In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.",
"credits":"NASA/ESA and The Hubble Heritage Team (STScI)",
"url":"https://esahubble.org/images/opo9941a/",
"Id":"opo9941a",
"Type":"Local Universe : Galaxy : Type : Interacting",
"Release date":"4 November 1999, 07:00",
"Size":"2907 x 1486 px",
"Name":"IC 2163, NGC 2207",
"Distance":"110 million light years",
"Constellation":"Canis Major",
"Category":"Galaxies",
"Position (RA)":"6 16 25.10",
"Position (Dec)":"-21° 22' 34.62\"",
"Field of view":"4.82 x 2.47 arcminutes",
"Orientation":"North is 191.2\u00b0 right of vertical",
"width":2907,
"height":1486,
"file_size":12959406,
"crop_w":0,
"crop_h":0,
"cropped":false
}
```
### Data Fields
- `image`: encoded RGB `.png` image of the deep space scan
- `text`: text description of image, a combination of `title` + ': ' + `description`
- `id`: id of the image from ESA/Hubble
- `title`: textual title of image from ESA/Hubble URL
- `description`: textual description of image from ESA/Hubble URL
- `credits`: required credits for each image from ESA/Hubble URL
- `url`: ESA/Hubble URL
- `Id`: id of the image from ESA/Hubble (from website metadata)
- `Type`: type of deep space scan
- `Release date`: release date of deep space scan
- `Size`: size of original image
- `Name`: name of celestial entities present in image
- `Distance`: distance from celestial entities present in image
- `Constellation`: constellation of celestial entities present in image
- `Category`: category of celestial entities present in image
- `Position (RA)`: coordinates for deep space scan used by Hubble telescope
- `Position (Dec)`: coordinates for deep space scan used by Hubble telescope
- `Field of view`: coordinates for deep space scan used by Hubble telescope
- `Orientation`: coordinates for deep space scan used by Hubble telescope
- `width`: width of image, same if the image did not need to be cropped, but otherwise could differ from `Size`
- `height`: height of image, same if the image did not need to be cropped, but otherwise could differ from `Size`
- `file_size`: `width` x `height` x 3 bytes, used to estimate size of raw images
- `crop_w`: width starting point of image if cropped, otherwise 0
- `crop_h`: height starting point of image if cropped, otherwise 0
- `cropped`: whether this image needed to be cropped or not
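The size-related fields above can be sanity-checked programmatically. This sketch reuses the example record shown earlier and is illustrative only:

```python
record = {
    "width": 2907,
    "height": 1486,
    "file_size": 12959406,
    "crop_w": 0,
    "crop_h": 0,
    "cropped": False,
}

# `file_size` estimates the raw RGB size: width x height x 3 bytes
assert record["file_size"] == record["width"] * record["height"] * 3

# For cropped images, (crop_w, crop_h) is the starting point of the crop
# within the original scan; for uncropped images both are 0.
origin = (record["crop_w"], record["crop_h"])
print(origin)  # (0, 0)
```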
### Data Splits
The data is only provided in a single training split, as the purpose of the dataset is additional fine-tuning for the task of `text-to-image` generation.
## Dataset Creation
### Curation Rationale
The ESA Hubble Deep Space Images & Captions dataset was built to provide ease of access to extremely high-quality Hubble deep space scans.
Images from the Hubble telescope have already inspired millions, and the hope is that this dataset can be used to create inspiring models and approaches to further push interest in space & cosmology.
### Source Data
#### Initial Data Collection
All images were collected from [https://esahubble.org/](https://esahubble.org/).
Fullsize Original images & metadata were crawled from the ESA Hubble website using [Scrapy](https://scrapy.org/).
Images were downloaded as `.tiff` files, while
additional metadata was later collected for each image using the following [code](https://github.com/Supermaxman/hubble-diffusion).
As the ESA Hubble website collects images from a wide variety of sources, images were filtered to try to avoid any non-space scan images as follows:
The ESA Hubble [Advanced Image Search](http://esahubble.org/images/archive/search) enables the following filtering parameters:
- images with Minimum size greater than or equal to 400x300
- Ranking greater than or equal to Fair or better
- Type containing 'Observation'
This significantly reduced the number of images which had nothing to do with Hubble deep space scans.
A total of around 3,000 space images were collected with this method.
#### Filtering
Further automatic and manual filtering was performed to remove the following:
- improperly classified images
- space renders
- diagrams with text
- images of celestial bodies within our solar system
- images with too low a resolution
This brought the total number of deep space images down to 2,593.
This process was not perfect, and there likely remain some images in the dataset that should be removed in the future.
#### Preprocessing
Some of the deep space scans were as large as 34,372x19,345, with a bit depth of 24 (nearly 2 GB).
Unfortunately, these images were too large to upload easily.
Therefore, images were automatically subdivided in half if they were above 12,000 pixels in either height or width.
Subdivided images were also tagged with additional metadata, such that users can reconstruct the original images if they would prefer.
Otherwise, metadata was copied across subdivided images.
Additionally, images were converted from RGB/RGBX `.tiff` to RGB `.png` files to avoid encoding issues.
This process resulted in 2,706 final deep space images.
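The halving rule described above can be sketched on image dimensions alone. This is illustrative logic only, not the exact preprocessing code (the real pipeline also slices pixel data and copies metadata across the resulting tiles):

```python
def subdivide(width: int, height: int, limit: int = 12_000):
    """Halve an image's larger side until both sides fit under the limit.

    Returns the (width, height) of the resulting tiles.
    """
    if width <= limit and height <= limit:
        return (width, height)
    if width >= height:
        return subdivide(width // 2, height, limit)
    return subdivide(width, height // 2, limit)

# e.g. the largest scan mentioned above, 34,372 x 19,345 pixels
print(subdivide(34372, 19345))
```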
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help inspire people to be interested in astronomy.
A system that succeeds at text-to-image generation would be able to generate inspiring deep space scans, providing interesting and inspiring art for those interested in space. This dataset provides a starting-point for building such a system by providing text and image pairs for Hubble deep space scans.
### Discussion of Biases
It is unfortunate that we currently only have English captions for these deep space scans.
In the future, expanding these captions to more languages could help spread interest in astronomy far and wide.
Additionally, these captions may be too technical for the average person to effectively utilize for a text-to-image model.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The dataset was initially created by all the wonderful researchers, engineers, scientists, and more behind the Hubble Telescope, NASA, and the ESA.
Maxwell Weinzierl collected, filtered, and preprocessed this data for ease of use.
### Licensing Information
ESA/Hubble images, videos and web texts are released under the [Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/)
and may on a non-exclusive basis be reproduced without fee provided they are clearly and visibly credited.
See [https://esahubble.org/copyright/](https://esahubble.org/copyright/) for additional conditions for reproduction and copyright.
### Citation Information
If you use this dataset, please cite it as:
```bibtex
@misc{weinzierl2023hubble,
author = {Weinzierl, Maxwell A.},
title = {ESA Hubble Deep Space Images & Captions},
year={2023},
howpublished= {\url{https://huggingface.co/datasets/Supermaxman/esa-hubble}}
}
```
### Contributions
Thanks to [@supermaxman](https://github.com/supermaxman) for adding this dataset.
Bingsu/ko_alpaca_data | Bingsu | 2023-03-30T23:21:40Z | 45 | 12 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-03-30T23:21:40Z | 2023-03-20T05:36:21.000Z | 2023-03-20T05:36:21 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13791136
num_examples: 49620
download_size: 8491044
dataset_size: 13791136
license: cc-by-nc-4.0
language:
- ko
pretty_name: ko-alpaca-data
size_categories:
- 10K<n<100K
task_categories:
- text-generation
---
# Dataset Card for "ko_alpaca_data"
## Dataset Description
- **Repository:** [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
- **Huggingface:** [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
- **Size of downloaded dataset files:** 8.10 MB
- **Size of the generated dataset:** 13.15 MB
### Dataset Summary
Korean translation of [alpaca data](https://huggingface.co/datasets/tatsu-lab/alpaca).
repository: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)<br>
huggingface: [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
1. Translate dataset
We translated the 'instruction' and 'input' fields of the dataset via the DeepL API. The 'output' field was not translated, because it is the output of OpenAI's `text-davinci-003` model.
2. Generate output data
Then, using the translated instruction and input, we generated new output data via the OpenAI ChatGPT API (`gpt-3.5-turbo`).
Below is the prompt we used to generate the answer.
```python
PROMPT = """\
다양한 작업에 대한 답변을 생성해주세요. 이러한 작업 지침은 ChatGPT 모델에 주어지며, ChatGPT 모델이 지침을 완료하는지 평가합니다.
요구 사항은 다음과 같습니다:
1. 다양성을 극대화하기 위해 각 지시에 대해 동사를 반복하지 않도록 하세요.
2. 지시에 사용되는 언어도 다양해야 합니다. 예를 들어, 질문과 명령형 지시를 결합해야 합니다.
3. 지시 사항의 유형이 다양해야 합니다. 목록에는 개방형 생성, 분류, 편집 등과 같은 다양한 유형의 작업이 포함되어야 합니다.
2. GPT 언어 모델은 지시를 완료할 수 있어야 합니다. 예를 들어 어시스턴트에게 시각적 또는 오디오 출력을 생성하도록 요청하지 마세요. 또 다른 예로, 어시스턴트가 어떤 작업도 수행할 수 없으므로 오후 5시에 깨우거나 미리 알림을 설정하도록 요청하지 마세요.
3. 답변은 한국어로 작성해야 합니다.
4. 답변을 1~2문장으로 작성하세요. 명령문이나 질문도 허용됩니다.
5. 지시 사항에 대한 적절한 입력을 생성해야 합니다. 입력 필드에는 지시에 대한 구체적인 예가 포함되어야 합니다. 실제 데이터를 포함해야 하며 단순한 자리 표시자를 포함해서는 안 됩니다. 입력은 지시 사항을 어렵게 만들 수 있는 상당한 내용을 제공해야 하지만 100단어를 넘지 않는 것이 이상적입니다.
6. 일부 지시사항은 추가 입력이 있고, 일부 지시에는 입력 필드가 비어있습니다. 예를 들어 "세계에서 가장 높은 봉우리는 무엇인가?"라는 일반적인 정보를 묻는 지시의 경우 구체적인 맥락을 제공할 필요가 없어, 입력 필드가 비어있을 수 있습니다.
7. 출력은 명령어와 입력에 대한 적절한 응답이어야 합니다.
아래에 10개의 명령어와 입력(옵션)에 따라 적절한 응답을 생성하세요.
응답은 아래와 같은 형식으로 10가지를 0번 부터 9번 까지, 번호에 따라 해당 번호의 명령어와 입력에 알맞게 작성하세요.
각 응답 사이는 ### 으로 내용을 분리해주세요.
응답0: 첫 번째 응답내용###
응답1: 두 번째 응답내용###
...
응답9: 마지막 응답내용"""
```
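The prompt above asks the model to return ten answers separated by `###`, each prefixed with `응답N:`. A minimal parser for that format might look like this; it is an illustrative sketch, since the actual parsing code used by the authors is not shown in the card:

```python
import re

def split_batched_responses(raw: str):
    """Split a '응답0: ...###응답1: ...' style reply into individual answers."""
    parts = [p.strip() for p in raw.split("###") if p.strip()]
    answers = []
    for part in parts:
        m = re.match(r"응답\d+:\s*(.*)", part, flags=re.S)
        # Fall back to the raw chunk if the numbered prefix is missing
        answers.append(m.group(1).strip() if m else part)
    return answers
```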
### License
CC-BY-NC-4.0
### Data Splits
| | train |
| --------- | -------- |
| # of data | 49620 |
\# Note that the number of examples is not the same as in the original data (52,002).
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("Bingsu/ko_alpaca_data", split="train")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 49620
})
```
```python
>>> ds[0]
{'instruction': '건강을 유지하기 위한 세 가지 팁을 알려주세요.',
'input': '',
'output': '세 가지 팁은 아침식사를 꼭 챙기며, 충분한 수면을 취하고, 적극적으로 운동을 하는 것입니다.'}
```
mstz/wine | mstz | 2023-04-07T15:11:56Z | 45 | 2 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"wine",
"tabular_classification",
"binary_classification",
"region:us"
] | 2023-04-07T15:11:56Z | 2023-03-24T00:29:02.000Z | 2023-03-24T00:29:02 |
---
language:
- en
tags:
- wine
- tabular_classification
- binary_classification
pretty_name: Wine quality
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- wine
license: cc
---
# Wine
The [Wine dataset](https://www.kaggle.com/datasets/ghassenkhaled/wine-quality-data) from Kaggle.
Classify wine as red or white.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| wine | Binary classification | Is this red wine? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/wine")["train"]
```
pszemraj/scientific_lay_summarisation-plos-norm | pszemraj | 2023-06-20T01:06:39Z | 45 | 3 | null | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:tomasg25/scientific_lay_summarisation",
"language:en",
"license:mit",
"arxiv:2210.09932",
"region:us"
] | 2023-06-20T01:06:39Z | 2023-03-29T16:24:26.000Z | 2023-03-29T16:24:26 |
---
license: mit
task_categories:
- summarization
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
source_datasets: tomasg25/scientific_lay_summarisation
---
# scientific_lay_summarisation - PLOS - normalized
This dataset is a modified version of [tomasg25/scientific_lay_summarization](https://huggingface.co/datasets/tomasg25/scientific_lay_summarisation) and contains scientific lay summaries that have been preprocessed [with this code](https://gist.github.com/pszemraj/bd344637af7c0c10ecf4ab62c4d0ce91). The preprocessing includes fixing punctuation and whitespace problems, and calculating the token length of each text sample using a tokenizer from the T5 model.
Original dataset details:
- **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation
- **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)
## Data Cleaning
The text in both the "article" and "summary" columns was processed to ensure that punctuation and whitespace were consistent. The `fix_punct_whitespace` function was applied to each text sample to:
- Remove spaces before punctuation marks (except for parentheses)
- Add a space after punctuation marks (except for parentheses) if missing
- Handle spaces around parentheses
- Add a space after a closing parenthesis if followed by a word or opening parenthesis
- Handle spaces around quotation marks
- Handle spaces around single quotes
- Handle comma in numbers
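A condensed sketch of a few of these rules follows. The full `fix_punct_whitespace` implementation is linked above; this partial version covers only the space-before/after-punctuation and number-comma cases, for illustration:

```python
import re

def fix_punct_whitespace(text: str) -> str:
    # Remove spaces before punctuation marks (except parentheses)
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    # Add a space after punctuation if a letter follows directly;
    # the letter-only lookahead leaves numbers like "3,000" untouched
    text = re.sub(r"([.,;:!?])(?=[A-Za-z])", r"\1 ", text)
    # Collapse runs of whitespace
    text = re.sub(r"\s{2,}", " ", text)
    return text.strip()

print(fix_punct_whitespace("Cells divide ,and grow.Fast"))  # Cells divide, and grow. Fast
```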
## Tokenization
The length of each text sample was calculated in terms of tokens using the T5 tokenizer. The `calculate_token_length` function was used to encode each text sample using the tokenizer and return the number of resulting tokens. The resulting token lengths were added as new columns to the dataframes.
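In code, the counting step reduces to encoding the text and measuring the result. A hedged sketch follows; the tokenizer is passed in so the helper stays library-agnostic — with Hugging Face `transformers` one would pass `AutoTokenizer.from_pretrained("t5-base")` instead of the stand-in class below:

```python
def calculate_token_length(text, tokenizer) -> int:
    """Count the tokens `tokenizer` produces for `text`."""
    return len(tokenizer.encode(text))

# Stand-in tokenizer for demonstration only; a real T5 tokenizer exposes
# the same `encode` method but returns subword token ids.
class WhitespaceTokenizer:
    def encode(self, text):
        return text.split()

print(calculate_token_length("fixing punctuation and whitespace", WhitespaceTokenizer()))  # 4
```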
## Data Format
The resulting processed data files are stored in Apache Parquet format and can be loaded using the `pandas` library or the `datasets` library from Hugging Face. The relevant column names and data types for summarization are:
```python
DatasetDict({
train: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 24773
})
test: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 1376
})
validation: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 1376
})
})
```
## Usage
Load the desired parquet file(s) using `pandas` or `datasets`. Here is an example using `pandas`:
```python
# download the dataset files by clicking on 'use in datasets' and cloning
import pandas as pd
# Load train set
df = pd.read_parquet("scientific_lay_summarisation-plos-norm/train.parquet")
print(df.info())
```
And here is an example using `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")
train_set = dataset['train']
# Print the first few samples
for i in range(5):
print(train_set[i])
```
## Token Lengths
For train split:

---
james-burton/product_sentiment_machine_hack_all_text | james-burton | 2023-05-02T15:59:37Z | 45 | 0 | null | [
"region:us"
] | 2023-05-02T15:59:37Z | 2023-05-01T08:39:19.000Z | 2023-05-01T08:39:19 |
---
dataset_info:
features:
- name: Product_Description
dtype: string
- name: Product_Type
dtype: string
- name: Sentiment
dtype: int64
splits:
- name: train
num_bytes: 526902
num_examples: 4327
- name: validation
num_bytes: 92808
num_examples: 764
- name: test
num_bytes: 155969
num_examples: 1273
download_size: 0
dataset_size: 775679
---
# Dataset Card for "product_sentiment_machine_hack_all_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ma2za/many_emotions | ma2za | 2023-06-10T02:18:01Z | 45 | 1 | null | [
"task_categories:text-classification",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:dair-ai/emotion",
"source_datasets:daily_dialog",
"source_datasets:go_emotions",
"language:en",
"license:apache-2.0",
"emotion",
"region:us"
] | 2023-06-10T02:18:01Z | 2023-05-20T21:59:41.000Z | 2023-05-20T21:59:41 |
---
license:
apache-2.0
task_categories:
- text-classification
multilinguality:
- multilingual
source_datasets:
- dair-ai/emotion
- daily_dialog
- go_emotions
language:
- en
size_categories:
- 100K<n<1M
tags:
- emotion
---
# Dataset Card for "many_emotions"
## Dataset Description
- **Homepage:**
### Dataset Summary
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The data fields are:
- `id`: unique identifier
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `anger` (0), `fear` (1), `joy` (2), `love` (
3), `sadness` (4), `surprise` (5), `neutral` (6).
- `license`: inherited license from source dataset
- `dataset`: source dataset
- `language`: text language
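The integer-to-name label mapping above can be sketched as a small helper (the list order mirrors the integer values listed above; the function names are illustrative, not part of the dataset):

```python
# Illustrative label mapping for many_emotions; the order follows the
# integer values in the field description above.
LABELS = ["anger", "fear", "joy", "love", "sadness", "surprise", "neutral"]

def id2label(idx: int) -> str:
    """Map an integer class label to its emotion name."""
    return LABELS[idx]

def label2id(name: str) -> int:
    """Map an emotion name back to its integer class label."""
    return LABELS.index(name)
```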
### Data Splits
The dataset has 2 configurations:
- raw: with 5 configurations, one for each language
- split: with configurations train, validation, test
## Dataset Creation
### Curation Rationale
The raw split contains duplicates.
In the "split" configuration, identical rows may appear with different labels.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
## Additional Information
### Licensing Information
Each row has its own license which is inherited from the source dataset. | [
-0.6917296051979065,
-0.3479698598384857,
0.0018969086231663823,
0.5096390247344971,
-0.4686914384365082,
-0.05833573266863823,
-0.2297293096780777,
-0.23606812953948975,
0.2856925129890442,
0.34698301553726196,
-0.9155187606811523,
-0.9079881310462952,
-0.5138071775436401,
0.2853602468967... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mmenendezg/pneumonia_x_ray | mmenendezg | 2023-06-21T23:07:12Z | 45 | 1 | null | [
"region:us"
] | 2023-06-21T23:07:12Z | 2023-06-07T02:00:53.000Z | 2023-06-07T02:00:53 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': normal
'1': pneumonia
splits:
- name: train
num_bytes: 126684250.689
num_examples: 4187
- name: validation
num_bytes: 27444182.485
num_examples: 1045
- name: test
num_bytes: 16275660.0
num_examples: 624
download_size: 153021953
dataset_size: 170404093.174
---
# Chest X-Ray Pneumonia Dataset
This dataset contains chest x-ray images of independent patients, each classified as `normal` (healthy) or `pneumonia` (diseased).
This dataset is a processed version of the original `Large Dataset of Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images` dataset provided by the *University of California San Diego*.
The dataset contains three splits:
- **Train**: 4187 images
- **Validation**: 1045 images
- **Test**: 624 images
The shape of the images is `[500, 500, 3]`, and the labels have two possible values:
- 0: **Normal**
- 1: **Pneumonia**
>**References**:
>
> - Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), “Large Dataset of Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images”, Mendeley Data, V3, doi: 10.17632/rscbjbr9sj.3 | [
-0.1445472687482834,
-0.051834918558597565,
0.38458073139190674,
0.0744842067360878,
-0.5665343999862671,
-0.221145361661911,
0.4520253539085388,
-0.14006777107715607,
0.42575743794441223,
1.0554803609848022,
-0.4665261507034302,
-0.5249135494232178,
-0.8535184264183044,
0.1113065853714943... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DISCOX/DISCO-200K-random | DISCOX | 2023-06-20T14:26:06Z | 45 | 0 | null | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"music",
"region:us"
] | 2023-06-20T14:26:06Z | 2023-06-10T19:17:08.000Z | 2023-06-10T19:17:08 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: preview_url_spotify
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: video_duration_youtube_sec
dtype: float64
- name: primary_artist_name_spotify
dtype: string
- name: search_query_youtube
dtype: string
- name: first_artist_follower_spotify
dtype: float64
- name: artist_genres_spotify
sequence: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 965534426.0
num_examples: 200000
download_size: 1160459401
dataset_size: 965534426.0
language:
- en
tags:
- music
size_categories:
- 100K<n<1M
---
### Getting Started
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-200K-random")
```
The dataset contains 200,000 random samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
## Dataset Structure
The dataset contains the following features:
```json
[
    "video_url_youtube",
    "video_title_youtube",
    "track_name_spotify",
    "video_duration_youtube_sec",
    "preview_url_spotify",
    "video_view_count_youtube",
    "video_thumbnail_url_youtube",
    "search_query_youtube",
    "video_description_youtube",
    "track_id_spotify",
    "album_id_spotify",
    "artist_id_spotify",
    "track_duration_spotify_ms",
    "primary_artist_name_spotify",
    "track_release_date_spotify",
    "explicit_content_spotify",
    "similarity_duration",
    "similarity_query_video_title",
    "similarity_query_description",
    "similarity_audio",
    "audio_embedding_spotify",
    "audio_embedding_youtube"
]
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
<!--
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> | [
-0.7476478219032288,
-0.5810661911964417,
0.03412673994898796,
0.49132728576660156,
-0.035560112446546555,
0.06556860357522964,
-0.12882599234580994,
0.025192616507411003,
0.7197262644767761,
0.6568111777305603,
-1.1969293355941772,
-0.779358983039856,
-0.38806039094924927,
0.1652992814779... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kjj0/4chanpol | kjj0 | 2023-06-23T21:10:50Z | 45 | 2 | null | [
"arxiv:2001.07487",
"region:us"
] | 2023-06-23T21:10:50Z | 2023-06-23T20:50:43.000Z | 2023-06-23T20:50:43 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17193959653
num_examples: 114647404
download_size: 11559500898
dataset_size: 17193959653
---
# Dataset Card for "kjj0/4chanpol"
This dataset contains 114M unique posts made between June 2016 and November 2019.
This is a variant of the dataset provided by [Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board](https://arxiv.org/abs/2001.07487).
We have deduplicated posts and stripped metadata to create an easily accessible collection of unique texts.
We additionally provide a variant which includes OpenAI moderation scores at [kjj0/4chanpol-openaimod](https://huggingface.co/datasets/kjj0/4chanpol-openaimod).
```
@inproceedings{papasavva2020raiders,
title={Raiders of the lost kek: 3.5 years of augmented 4chan posts from the politically incorrect board},
author={Papasavva, Antonis and Zannettou, Savvas and De Cristofaro, Emiliano and Stringhini, Gianluca and Blackburn, Jeremy},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={14},
pages={885--894},
year={2020}
}
``` | [
-0.42281797528266907,
-0.6284178495407104,
0.1945066601037979,
0.2668393850326538,
-0.4723407030105591,
0.14490914344787598,
0.0774766355752945,
-0.20808090269565582,
0.7014063000679016,
0.6670699119567871,
-0.716363787651062,
-0.5606767535209656,
-0.5988016724586487,
0.33210939168930054,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AhmedBou/French_quotes | AhmedBou | 2023-07-21T15:50:55Z | 45 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-07-21T15:50:55Z | 2023-07-19T11:44:35.000Z | 2023-07-19T11:44:35 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- fr
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HydraLM/partitioned_v2_standardized_00 | HydraLM | 2023-07-29T07:58:48Z | 45 | 0 | null | [
"region:us"
] | 2023-07-29T07:58:48Z | 2023-07-29T07:58:35.000Z | 2023-07-29T07:58:35 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
splits:
- name: train
num_bytes: 45106590.68385069
num_examples: 88244
download_size: 34576110
dataset_size: 45106590.68385069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v2_standardized_00"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44189414381980896,
-0.12282711267471313,
0.2420099526643753,
0.6280385255813599,
-0.35241737961769104,
-0.24097301065921783,
0.42885884642601013,
-0.032524723559617996,
0.7585597038269043,
0.6135780215263367,
-0.8760404586791992,
-0.6929711103439331,
-0.43910276889801025,
-0.32884255051... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Universal-NER/Pile-NER-type | Universal-NER | 2023-08-07T17:07:30Z | 45 | 6 | null | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-08-07T17:07:30Z | 2023-08-07T15:09:00.000Z | 2023-08-07T15:09:00 | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Intro
Pile-NER-type is a set of GPT-generated data for named entity recognition using the type-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our [project page](https://universal-ner.github.io/) for more information.
# License
Attribution-NonCommercial 4.0 International | [
-0.8448711037635803,
-0.9191727638244629,
0.3145509660243988,
-0.2265968918800354,
-0.34141576290130615,
0.1950278878211975,
0.47616076469421387,
-0.15867818892002106,
0.6302787661552429,
0.7057790756225586,
-0.32132306694984436,
-0.4109741151332855,
-0.5618138313293457,
0.1916494518518448... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Norquinal/claude_multiround_chat_1k | Norquinal | 2023-08-11T01:40:28Z | 45 | 5 | null | [
"region:us"
] | 2023-08-11T01:40:28Z | 2023-08-11T01:38:09.000Z | 2023-08-11T01:38:09 | This dataset is ~1k random samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset.
The instructions were generated synthetically using a method that can be tentatively described as "multi-instruct." These instructions consist of numerous discrete tasks that the AI has to work its way through, thereby hopefully increasing its comprehension and awareness of complex instructions.
The topics of the instruction ranged from STEM, Arts & Humanities, Social Knowledge, and General Knowledge. | [
-0.4838593006134033,
-0.9203652739524841,
0.2595384120941162,
0.27789148688316345,
0.13628748059272766,
-0.1463862657546997,
-0.04008658602833748,
-0.27956750988960266,
0.2955191731452942,
0.6646099090576172,
-1.1152222156524658,
-0.5672241449356079,
-0.4124327003955841,
-0.284244090318679... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PL-MTEB/polemo2_in | PL-MTEB | 2023-08-11T12:40:43Z | 45 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-08-11T12:40:43Z | 2023-08-11T12:40:29.000Z | 2023-08-11T12:40:29 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
asyafiqe/orca_mini_v1_indonesia | asyafiqe | 2023-08-27T10:54:58Z | 45 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-27T10:54:58Z | 2023-08-27T10:53:05.000Z | 2023-08-27T10:53:05 | ---
license: apache-2.0
---
This dataset is a modified version of psmathur's [orca_mini_v1](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset) dataset, translated into Bahasa Indonesia by Google Translate.
-0.2555829882621765,
-0.6932159662246704,
-0.17493438720703125,
0.15533988177776337,
-0.5612382292747498,
-0.14063479006290436,
0.04764656722545624,
-0.46024006605148315,
0.984289824962616,
0.904986560344696,
-1.1614384651184082,
-0.22472073137760162,
-0.4720091223716736,
0.256111830472946... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
187ro/incelset | 187ro | 2023-10-30T12:51:07Z | 45 | 2 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
] | 2023-10-30T12:51:07Z | 2023-09-13T20:45:08.000Z | 2023-09-13T20:45:08 | ---
license: apache-2.0
task_categories:
- text-generation
- fill-mask
tags:
- not-for-all-audiences
pretty_name: Incel Dataset 🎭
size_categories:
- 100K<n<1M
language:
- en
---
# Dataset Card for IncelSet
### Dataset Summary
This dataset is based on the incels.is forum and is ⚠️HIGHLY OFFENSIVE⚠️
A compilation of almost 3 years' worth of posts, highlighting topics such as (self-described) celibacy, self-views, life-improvement (attempts or advice), suicide, perceived failure, views on women, views on society, and views on politics - from the members' perspective.
Co-Authored by inmate & curly for Universiteit van Amsterdam
[Politics, Psychology, Law and Economics (PPLE)](https://pple.uva.nl)
### Languages
English with a lot of racial slurs, misogyny, mentions of sexual assault and general hatred - do not view or use if easily offended.
## Dataset Structure
The dataset consists of 2 columns: "title" - representing the thread title - and "text" - representing the user replies (posts) under the thread title.
### Source Data
Incels.is Forum.
#### Initial Data Collection and Normalization
1. We first built a script in GoLang that scrapes all the content of the incels.is Forum.
We downloaded roughly 150.000 threads - containing almost 2.1 Million posts - in approximately 9 hours from start to finish - using a dedicated server with 72 cores.
2. We then took the scraped data and started processing it, firstly building a script in Python that processed the data & formatted it into the JSON data format according to (RFC 8259) standards.
3. We then started the removal process of PII (Personal Identifiable Information) - thus anonymizing user posts in the dataset. This wasn't hard to do, as users already set up monikers for themselves & never gave out personal information such as full names, addresses or social security numbers; nevertheless, we still validated the removal of such data.
4. We then proceeded to remove leftover non-human readable text such as HTML tags or base64 encodings, along URLs users may have posted in their discussions.
5. We now begin the dataset formatting process, compiling all 143.501 remaining files (threads) & ~2.1M posts into Parquet.
6. The final results yield approximately 1 billion characters across ~144k rows.
#### Who are the source language producers?
Self-described incels / members of the incels.is website (not to be taken in the literal, word-for-word sense)
### Personal and Sensitive Information
Includes details of the users' (tragic & tragically self-perceived) lives. No personal information contained in itself, but touches on many sensitive subjects.
## Considerations for Using the Data
Go wild with it. Keep in mind that we are not trying to expose, radicalize or even remotely harm this community.
We have compiled almost 3 years' worth of posts on this forum so we could better study this phenomenon for a University project.
We will be taking into consideration the actual publishing of the model trained on this data, but we do not see a potential scientific gain that would convince us to do so.
### Social Impact of Dataset
Public Awareness and Education:
Pro: Publishing a dataset might bring greater public awareness to the issue and could be used for educational purposes, enlightening people about the intricacies of this community. Greater understanding might foster empathy and encourage supportive interventions.
Con: It might also inadvertently glamorize or sensationalize the community, leading to an increased interest in and potential growth of such ideologies.
Source: Marwick, A., & Caplan, R. (2018). Drinking male tears: Language, the manosphere, and networked harassment. Feminist Media Studies, 18(4), 543-559.
Potential Stigmatization and Alienation:
Pro: Identifying problematic behaviors and attitudes can help professionals develop targeted interventions.
Con: Generalizing or pathologizing the behaviors of this community might further stigmatize and alienate its members. Labeling can reinforce undesirable behavior if individuals internalize these negative identities.
Source: Dovidio, J. F., Major, B., & Crocker, J. (2000). Stigma: Introduction and overview. In T. F. Heatherton, R. E. Kleck, M. R. Hebl, & J. G. Hull (Eds.), The social psychology of stigma (p. 1–28).
Misuse of Data:
Pro: When used responsibly, such a dataset can be a treasure trove for academic research.
Con: However, there's always a risk of data being misused, misinterpreted, or cherry-picked to support harmful narratives or agendas.
Source: boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662-679.
Ethical Concerns:
Pro: Revealing problematic beliefs might serve a greater good.
Con: There are ethical concerns, especially if data was collected without consent. Respect for individuals' autonomy and privacy is paramount in research ethics. (Data is collected under anonymity from a free-to-view, no-signup required, non-scrape blocking Forum - as per their ToS)
Source: National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research.
Psychological Impact on Incels:
Pro: Confronting one's views might lead to self-reflection and change.
Con: Conversely, it might entrench their beliefs further if they feel attacked or misunderstood, a phenomenon supported by the backfire effect.
Source: Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.
### Discussion of Biases
The authors compiled only the first 150.000 of the 270.000 threads in the "Inceldom discussion" part of the forum. As a consequence, older posts have been left out and the dataset may not thoroughly represent the full extent of incel discourse. The authors declare no further biases or conflicts of interest - the data was scraped and processed as it appears on the forum. | [
-0.3523346483707428,
-0.7046692371368408,
0.2059081494808197,
0.33025041222572327,
-0.1235087588429451,
-0.2316095381975174,
-0.10076161473989487,
-0.4575252830982208,
0.29538172483444214,
0.3959214985370636,
-0.5302639007568359,
-0.4290270209312439,
-0.5984331369400024,
0.2308066785335540... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/indo_general_mt_en_id | SEACrowd | 2023-09-26T12:30:08Z | 45 | 0 | null | [
"language:ind",
"machine-translation",
"region:us"
] | 2023-09-26T12:30:08Z | 2023-09-26T11:14:14.000Z | 2023-09-26T11:14:14 | ---
tags:
- machine-translation
language:
- ind
---
# indo_general_mt_en_id
"In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language,
and therefore applying Neural Machine Translation (NMT) which typically requires large training dataset proves to be problematic.
In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and
conversation, to train and benchmark some variants of transformer-based NMT models across the domains.
We show using BLEU that our models perform well across them, outperform the baseline Statistical Machine Translation (SMT) models,
and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data."
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{guntara-etal-2020-benchmarking,
title = "Benchmarking Multidomain {E}nglish-{I}ndonesian Machine Translation",
author = "Guntara, Tri Wahyu and
Aji, Alham Fikri and
Prasojo, Radityo Eko",
booktitle = "Proceedings of the 13th Workshop on Building and Using Comparable Corpora",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.bucc-1.6",
pages = "35--43",
language = "English",
ISBN = "979-10-95546-42-9",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/gunnxx/indonesian-mt-data](https://github.com/gunnxx/indonesian-mt-data)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.39967870712280273,
-0.4652979075908661,
-0.08239974826574326,
0.3600543737411499,
-0.3910924792289734,
-0.052466508001089096,
-0.7100613117218018,
-0.16876326501369476,
0.10950688272714615,
0.4466409981250763,
-0.4197061359882355,
-0.4492289125919342,
-0.8173296451568604,
0.540314435958... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GIZ/policy_qa_v2 | GIZ | 2023-09-27T00:17:49Z | 45 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-09-27T00:17:49Z | 2023-09-26T23:45:55.000Z | 2023-09-26T23:45:55 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai2lumos/lumos_complex_qa_plan_onetime | ai2lumos | 2023-10-23T22:34:12Z | 45 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"reasoning",
"question-answering",
"planning",
"region:us"
] | 2023-10-23T22:34:12Z | 2023-10-23T05:36:48.000Z | 2023-10-23T05:36:48 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
- question-answering
language:
- en
tags:
- language-agent
- reasoning
- question-answering
- planning
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
- **Lumos** is trained with ~40K high-quality annotations from ground-truth reasoning steps in existing benchmarks with GPT-4.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_complex_qa_plan_onetime` is the data for training **planning** module on **complex QA** task in **Lumos-Onetime (Lumos-O)** formulation.
The source of the training annotation data is shown below:
| Datasets | Number |
|---|---|
|StrategyQA|1777|
|Musique|17632|
## Models Trained with the Data
`lumos_complex_qa_plan_onetime` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_complex_qa_plan_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_complex_qa_plan_onetime) |
|`lumos_unified_plan_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_plan_onetime) |
## Citation
If you find this work is relevant with your research, please feel free to cite our work!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | [
-0.08535274863243103,
-0.5409595966339111,
0.37070366740226746,
0.3218940496444702,
-0.20252113044261932,
0.000679529330227524,
-0.3149079382419586,
-0.593716025352478,
0.43407773971557617,
0.45932427048683167,
-0.6327213048934937,
-0.541822075843811,
-0.2281562089920044,
-0.07634410262107... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Weni/Dataset_semantic_alignment_translation_en-es-direction_en-pt_br-direction | Weni | 2023-11-03T14:14:39Z | 45 | 0 | null | [
"region:us"
] | 2023-11-03T14:14:39Z | 2023-10-27T17:13:41.000Z | 2023-10-27T17:13:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: string
dtype: string
- name: string_translation
dtype: string
splits:
- name: train
num_bytes: 9521461
num_examples: 40001
download_size: 3814409
dataset_size: 9521461
---
# Dataset Card for "Dataset_semantic_alignment_translation_en-es-direction_en-pt_br-direction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.278119832277298,
-0.3476215898990631,
0.42767924070358276,
0.3332378566265106,
-0.44378459453582764,
-0.1406620889902115,
-0.0700295940041542,
-0.14689670503139496,
0.7463265657424927,
0.5139663815498352,
-1.046044111251831,
-1.2012321949005127,
-0.9635058045387268,
0.07699277997016907,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lukemann/baby-agi-dataset-v0 | lukemann | 2023-10-30T09:16:19Z | 45 | 0 | null | [
"region:us"
] | 2023-10-30T09:16:19Z | 2023-10-30T05:36:34.000Z | 2023-10-30T05:36:34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction
dtype: string
- name: trajectory
list:
- name: image_id
dtype: string
- name: action_options
list:
- name: index
dtype: int32
- name: top_left
sequence: int32
- name: bottom_right
sequence: int32
- name: action_taken
struct:
- name: type
dtype: string
- name: value
dtype: string
- name: action_option_index
dtype: int32
splits:
- name: train
num_bytes: 722
num_examples: 1
download_size: 1432409
dataset_size: 722
---
# BabyAGI (Dataset)
The initial demonstration dataset follows the Huggingface dataset spec, with the raw data split into two components, trajectory images and trajectory metadata. The metadata is stored in the raw dataset, and the images are stored on S3. The data is loaded using the dataloader defined in [baby_agi_dataset.py](./baby_agi_dataset.py).
**Data Layout:**
```plaintext
├── data
│ ├── metadata_0.json
│ ├── metadata_1.json
│ └── ...
└── baby_agi_dataset.py
```
### Metadata Format (.json)
```json
[
{
"id": "<trajectory_id_hash>",
"instruction": "<some instruction>",
"trajectory": [
{
"image_id": "image_id",
"action_options": [
{
"index": 0,
"top_left": [120, 340],
"bottom_right": [140, 440],
},
...
],
"action_taken": {
"type": "click",
"value": "value (only for type and scroll)",
"action_option_index": 0
}
},
...
]
},
]
```
## Action Types
The dataset metadata includes three types of actions: "click", "type", and "scroll". The `action_option_index` field indicates the index of the clicked element within the `action_options` list.
1. **Click**: Represents a user clicking on an element.
2. **Type**: Represents a user typing into an input field.
3. **Scroll**: Represents a user scrolling the viewport. The `value` field indicates the direction of the scroll, with "up" corresponding to a 200px scroll upwards and "down" corresponding to a 200px scroll downwards. Note that `top_left` and `bottom_right` will always be zero-arrays for these.
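A minimal sketch of checking `action_taken` records against the three types above (the validator is illustrative, not part of the dataset loader):

```python
# Hypothetical validator for action_taken records, following the schema
# described above: three action types, and scroll values must be "up"/"down".
VALID_TYPES = {"click", "type", "scroll"}

def validate_action(action: dict) -> bool:
    """Return True if the action dict matches the documented schema."""
    if action.get("type") not in VALID_TYPES:
        return False
    if action["type"] == "scroll" and action.get("value") not in ("up", "down"):
        return False
    return True
```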
## Dataset Generation Pipeline
The dataset is generated through the following steps:
1. **Load Demo**: The demo is loaded from the Hugging Face dataset.
2. **Load Trace**: The trace is loaded from the Globus dataset.
3. **Process Trajectories**: For each Mind2Web (M2W) trajectory:
a) **Map Actions**: M2W actions are mapped to Playwright trace actions using the timestamp in `dom_content.json`.
   b) **Screenshot DOM**: The DOM is "screenshotted" just before the action.
c) **Map Candidates**: `pos_candidates` and `neg_candidates` from the M2W action metadata are mapped to HTML bounding boxes via class+id matching from the action metadata. New bounding box coordinates are obtained for each.
d) **Craft Meta + Screenshot Pair**: The pair of metadata and screenshots is crafted and saved/appended.
4. **Save Data**: The updated data directory is saved to S3 and Hugging Face.
### Screenshots
Screenshots in this dataset are generated from the before states of Mind2Web trajectory traces. Each image has a width of 2036 and a height of 1144. For alternate screen sizes (via augmentation), padding is added to maintain the aspect ratio. This ensures that the content of the screenshot remains consistent across different screen sizes.
### Options Generation
Options in this dataset are generated from `positive_candidates` (always one) and `negative_candidates` in the Mind2Web (M2W) dataset. The M2W dataset labels *all* possible interactions on the DOM. Therefore, the 50 largest area-wise options within the viewport containing the positive candidate are selected.
### Scrolling
The Mind2Web (M2W) dataset captures the entire DOM, so when the selected option action is not in the viewport, artificial scroll actions are created. This action has two possible values: "up" and "down". Each of which corresponds to a 200px scroll in the respective direction.
### Selecting
The "Select" action in the Mind2Web (M2W) dataset is recorded when a user makes a selection from a dropdown list. In this dataset, we represent it as a sequence of two distinct actions in a trajectory:
1. **Click**: The user clicks on the dropdown element.
2. **Type**: The user types the desired value followed by Enter.
## Usage
To use the dataset in your Python program, you can load it using the `load_dataset` function from the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("lukemann/baby-agi-dataset-v0")
first_row = dataset['train'][0]
print(first_row)
```
This will load the dataset and print the first row of the training set.
For a short demo, refer to the [demo.py](./demo.py) file. | [
-0.39678147435188293,
-0.4312717020511627,
0.37089788913726807,
0.035284701734781265,
-0.010607484728097916,
-0.20881831645965576,
0.08019670844078064,
-0.1910703480243683,
0.5388989448547363,
0.3100324869155884,
-1.2287712097167969,
-0.4654761552810669,
-0.4754243493080139,
-0.40690204501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sunghuncsa/wow | sunghuncsa | 2023-11-01T07:49:46Z | 45 | 0 | null | [
"region:us"
] | 2023-11-01T07:49:46Z | 2023-11-01T07:49:28.000Z | 2023-11-01T07:49:28 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chreh/train_data_dms_preprocessed | chreh | 2023-11-03T15:55:36Z | 45 | 0 | null | [
"region:us"
] | 2023-11-03T15:55:36Z | 2023-11-03T15:53:27.000Z | 2023-11-03T15:53:27 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EfazAhmed/asapp2a | EfazAhmed | 2023-11-08T18:40:33Z | 45 | 0 | null | [
"region:us"
] | 2023-11-08T18:40:33Z | 2023-11-08T02:13:42.000Z | 2023-11-08T02:13:42 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Keynote-Technology/PLANE-2K | Keynote-Technology | 2023-11-11T01:08:25Z | 45 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-11T01:08:25Z | 2023-11-11T01:00:42.000Z | 2023-11-11T01:00:42 | ---
license: apache-2.0
---
# Dataset Card for PLANE-2K
PLANE Dataset stands for Pre-model Luxury Accurate Next-level Evaluation Dataset.
## Dataset Details
This contains all of the files from the [alpaca-2k-test](https://huggingface.co/datasets/mhenrichsen/alpaca_2k_test) dataset, and then some.
-0.5712071657180786,
-0.3704831600189209,
-0.22012095153331757,
0.3559674322605133,
-0.6096281409263611,
-0.06277159601449966,
0.6825909614562988,
-0.14508496224880219,
0.5335187315940857,
0.6765772104263306,
-0.8147634863853455,
-0.34843507409095764,
-0.4476151466369629,
-0.41454091668128... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minh009/test1 | minh009 | 2023-11-13T08:20:30Z | 45 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:openrail",
"region:us"
] | 2023-11-13T08:20:30Z | 2023-11-11T03:54:56.000Z | 2023-11-11T03:54:56 | ---
license: openrail
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
baptistecolle/voyager-fine-tuning | baptistecolle | 2023-11-16T17:32:22Z | 45 | 0 | null | [
"region:us"
] | 2023-11-16T17:32:22Z | 2023-11-11T09:49:11.000Z | 2023-11-11T09:49:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: system_prompt
dtype: string
- name: human_prompt
dtype: string
- name: assistant_response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2320086
num_examples: 292
- name: test
num_bytes: 528104
num_examples: 73
download_size: 355674
dataset_size: 2848190
---
# Dataset Card for "voyager-fine-tuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8591095805168152,
-0.38664746284484863,
0.3678959906101227,
0.30594900250434875,
-0.3103936016559601,
-0.11949940770864487,
0.15652790665626526,
-0.04397143796086311,
0.668747067451477,
0.6562620401382446,
-1.1482038497924805,
-0.5100775957107544,
-0.32081326842308044,
-0.20332540571689... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CardinalityLM/imdb-card-pred-decimal | CardinalityLM | 2023-11-20T03:52:43Z | 45 | 0 | null | [
"region:us"
] | 2023-11-20T03:52:43Z | 2023-11-20T03:52:39.000Z | 2023-11-20T03:52:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: true_cardinality
dtype: int64
splits:
- name: train
num_bytes: 39101954.4
num_examples: 80000
- name: test
num_bytes: 9775488.6
num_examples: 20000
download_size: 8380198
dataset_size: 48877443.0
---
# Dataset Card for "imdb-card-pred-decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8244408965110779,
-0.18544697761535645,
0.03292844444513321,
0.20434650778770447,
-0.6008744835853577,
0.000012268622413103003,
0.08009583503007889,
0.05527135729789734,
1.0293127298355103,
0.48280400037765503,
-0.8595274090766907,
-0.7329005002975464,
-0.7949085831642151,
-0.0688439011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OK-ok1212/dataset | OK-ok1212 | 2023-11-22T12:20:35Z | 45 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-22T12:20:35Z | 2023-11-21T15:54:53.000Z | 2023-11-21T15:54:53 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lecslab/glosslm | lecslab | 2023-11-28T18:58:15Z | 45 | 0 | null | [
"region:us"
] | 2023-11-28T18:58:15Z | 2023-11-23T02:52:57.000Z | 2023-11-23T02:52:57 | ---
dataset_info:
features:
- name: ID
dtype: string
- name: glottocode
dtype: string
- name: transcription
dtype: string
- name: glosses
dtype: string
- name: translation
dtype: string
- name: metalang_glottocode
dtype: string
- name: is_segmented
dtype: string
- name: source
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 90204033
num_examples: 430298
download_size: 29324846
dataset_size: 90204033
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "glosslm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6309642195701599,
-0.39550015330314636,
0.2458118200302124,
0.09296073764562607,
-0.1218029111623764,
0.10848017781972885,
0.22385047376155853,
-0.22489529848098755,
0.8694933652877808,
0.5623450875282288,
-0.806986391544342,
-0.8659901022911072,
-0.5683221220970154,
-0.4972915053367615... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bh8648/general_query | bh8648 | 2023-11-23T12:09:06Z | 45 | 0 | null | [
"region:us"
] | 2023-11-23T12:09:06Z | 2023-11-23T12:08:55.000Z | 2023-11-23T12:08:55 | ---
dataset_info:
features:
- name: Instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 941023
num_examples: 334
download_size: 398211
dataset_size: 941023
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "general_query"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5879003405570984,
-0.7117015719413757,
0.32753753662109375,
0.18027035892009735,
-0.35433876514434814,
-0.2857186496257782,
0.24000024795532227,
-0.20212644338607788,
1.1036477088928223,
0.735714316368103,
-0.798812985420227,
-1.0101341009140015,
-0.4442923367023468,
-0.3062079846858978... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/few-shot-obama | huggan | 2022-04-12T14:05:43Z | 44 | 0 | null | [
"arxiv:2101.04775",
"region:us"
] | 2022-04-12T14:05:43Z | 2022-04-01T11:33:51.000Z | 2022-04-01T11:33:51 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.5524431467056274,
-0.8028349280357361,
0.018525345250964165,
0.33572760224342346,
-0.09379876405000687,
-0.17921070754528046,
-0.08067688345909119,
-0.28826087713241577,
0.07932981103658676,
-0.041977155953645706,
-0.35484322905540466,
-0.3427698016166687,
-0.3939037024974823,
0.0571840... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SocialGrep/the-reddit-dataset-dataset | SocialGrep | 2022-07-01T17:55:48Z | 44 | 1 | null | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-01T17:55:48Z | 2022-04-04T20:47:35.000Z | 2022-04-04T20:47:35 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-dataset-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-dataset-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
### Dataset Summary
A meta dataset of Reddit's own /r/datasets community.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Because the two differ structurally, they exist in two different files, even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
| [
-0.6671146750450134,
-0.8244843482971191,
0.34735769033432007,
0.5550833344459534,
-0.5296772122383118,
0.15245606005191803,
-0.18687905371189117,
-0.32456254959106445,
0.8312574625015259,
0.3865739703178406,
-1.0886123180389404,
-1.0436360836029053,
-0.6842780709266663,
0.3546139597892761... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mwong/fever-evidence-related | mwong | 2022-10-25T10:06:51Z | 44 | 1 | fever | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:... | 2022-10-25T10:06:51Z | 2022-04-12T08:39:59.000Z | 2022-04-12T08:39:59 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the FEVER dataset (https://fever.ai), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and evidence, predict whether the evidence is related to the claim.
-0.06297941505908966,
-0.35902613401412964,
0.03145207464694977,
0.01528127584606409,
-0.20486029982566833,
-0.22105950117111206,
-0.09295039623975754,
-0.3599342405796051,
0.3064511716365814,
0.647220253944397,
-0.44894373416900635,
-0.4529801607131958,
-0.8511182069778442,
0.226974263787... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
M-CLIP/ImageCaptions-7M-Translations | M-CLIP | 2022-05-16T21:03:28Z | 44 | 2 | null | [
"region:us"
] | 2022-05-16T21:03:28Z | 2022-05-16T21:02:40.000Z | 2022-05-16T21:02:40 | Found. Redirecting to https://cdn-lfs.huggingface.co/ | [
-0.6234498023986816,
-0.7985750436782837,
0.6919617056846619,
0.1474178582429886,
-0.5462004542350769,
0.0701640248298645,
0.1782171130180359,
-0.269149512052536,
0.8716624975204468,
0.7707958221435547,
-1.2360848188400269,
-0.8913058638572693,
-0.5352397561073303,
0.5056501626968384,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arize-ai/beer_reviews_label_drift_neutral | arize-ai | 2022-10-19T13:19:17Z | 44 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | 2022-10-19T13:19:17Z | 2022-10-19T13:16:00.000Z | 2022-10-19T13:16:00 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large movie-review dataset mixed with some reviews from a hotel-review dataset. The training/validation sets are obtained purely from the movie-review dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | [
-0.6274514198303223,
-0.45711129903793335,
0.25526872277259827,
0.13153021037578583,
-0.3834383487701416,
0.16594748198986053,
-0.3392210900783539,
-0.1997963786125183,
0.6283900141716003,
0.629920482635498,
-1.0277609825134277,
-0.9896200299263,
-0.5473517179489136,
0.03818346932530403,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RaphaelOlivier/whisper_adversarial_examples | RaphaelOlivier | 2022-11-03T21:48:16Z | 44 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-03T21:48:16Z | 2022-10-26T20:29:43.000Z | 2022-10-26T20:29:43 | ---
license: cc-by-4.0
---
# Description
This dataset is a subset of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) and multilingual [CommonVoice](https://commonvoice.mozilla.org/) that has been adversarially modified to fool the [Whisper](https://huggingface.co/openai/whisper-medium) ASR model.
Original [source code](https://github.com/RaphaelOlivier/whisper_attack).
The raw [tar files](https://data.mendeley.com/datasets/96dh52hz9r).
# Configurations and splits
* The `targeted` config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence `OK Google, browse to evil.com` even if the input is entirely different. We provide a split for each Whisper model, plus one containing the original, unmodified inputs.
* The `untargeted-35` and `untargeted-40` configs contain untargeted adversarial examples, with average signal-to-noise ratios of 35dB and 40dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, plus one containing the original, unmodified inputs.
* The `language-<lang>` configs contain adversarial examples in language `<lang>` that fool Whisper into predicting the wrong language. Split `<lang>.<target_lang>` contains inputs that Whisper perceives as `<target_lang>`, and split `<lang>.original` contains the original inputs in language `<lang>`. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).
# Usage
Here is an example of code using this dataset:
```python
from datasets import load_dataset
from evaluate import load
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_name="whisper-medium"
config_name="targeted"
split_name="whisper.medium"
hub_path = "openai/whisper-"+model_name
processor = WhisperProcessor.from_pretrained(hub_path)
model = WhisperForConditionalGeneration.from_pretrained(hub_path).to("cuda")
dataset = load_dataset("RaphaelOlivier/whisper_adversarial_examples",config_name ,split=split_name)
def map_to_pred(batch):
input_features = processor(batch["audio"][0]["array"], return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to("cuda"))
transcription = processor.batch_decode(predicted_ids, normalize = True)
batch['text'][0] = processor.tokenizer._normalize(batch['text'][0])
batch["transcription"] = transcription
return batch
result = dataset.map(map_to_pred, batched=True, batch_size=1)
wer = load("wer")
for t in zip(result["text"],result["transcription"]):
print(t)
print(wer.compute(predictions=result["text"], references=result["transcription"]))
``` | [
-0.09224466979503632,
-0.7260974049568176,
0.14568732678890228,
0.29971379041671753,
-0.0749027356505394,
-0.10681449621915817,
-0.4250412583351135,
-0.3743680715560913,
0.23322270810604095,
0.5483216047286987,
-0.732719898223877,
-0.5842159390449524,
-0.8200058937072754,
-0.16834688186645... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/pii-pile-chunk3-50000-100000 | tomekkorbak | 2022-11-08T22:27:43Z | 44 | 0 | null | [
"region:us"
] | 2022-11-08T22:27:43Z | 2022-11-08T22:27:36.000Z | 2022-11-08T22:27:36 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2013_ge | bigbio | 2022-12-22T15:43:59Z | 44 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:59Z | 2022-11-13T22:07:06.000Z | 2022-11-13T22:07:06 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 GE
homepage: https://github.com/openbiocorpora/bionlp-st-2013-ge
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2013 GE
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-ge
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE,COREF
The BioNLP-ST GE task has promoted the development of fine-grained
information extraction (IE) from biomedical
documents since 2009. In particular, it has focused on the domain of
NFkB as a model domain of biomedical IE.
## Citation Information
```
@inproceedings{kim-etal-2013-genia,
title = "The {G}enia Event Extraction Shared Task, 2013 Edition - Overview",
author = "Kim, Jin-Dong and
Wang, Yue and
Yasunori, Yamamoto",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2002",
pages = "8--15",
}
```
| [
-0.3045884966850281,
-0.7470449805259705,
0.23386597633361816,
0.012830880470573902,
-0.3532699942588806,
-0.16471163928508759,
-0.13794520497322083,
-0.8170090913772583,
0.5272710919380188,
0.2379894256591797,
-0.43259939551353455,
-0.7462931871414185,
-0.5715659856796265,
0.3096242249011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/pico_extraction | bigbio | 2022-12-22T15:46:16Z | 44 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:46:16Z | 2022-11-13T22:11:27.000Z | 2022-11-13T22:11:27 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: PICO Annotation
homepage: https://github.com/Markus-Zlabinger/pico-annotation
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for PICO Annotation
## Dataset Description
- **Homepage:** https://github.com/Markus-Zlabinger/pico-annotation
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
This dataset contains annotations for Participants, Interventions, and Outcomes (referred to as PICO task).
For 423 sentences, annotations collected from 3 medical experts are available.
To get the final annotations, we perform majority voting.
## Citation Information
```
@inproceedings{zlabinger-etal-2020-effective,
title = "Effective Crowd-Annotation of Participants, Interventions, and Outcomes in the Text of Clinical Trial Reports",
author = {Zlabinger, Markus and
Sabou, Marta and
Hofst{\"a}tter, Sebastian and
Hanbury, Allan},
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.274",
doi = "10.18653/v1/2020.findings-emnlp.274",
pages = "3064--3074",
}
```
| [
-0.2884720265865326,
-0.47103723883628845,
0.29768505692481995,
0.44532808661460876,
-0.4306175708770752,
-0.10836653411388397,
-0.33130162954330444,
-0.51731938123703,
0.5687592029571533,
0.29337266087532043,
-0.3275209367275238,
-0.7256530523300171,
-0.6849868297576904,
0.356388717889785... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luigisaetta/atco2_normalized_augmented | luigisaetta | 2022-11-19T12:41:21Z | 44 | 0 | null | [
"region:us"
] | 2022-11-19T12:41:21Z | 2022-11-19T12:35:08.000Z | 2022-11-19T12:35:08 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
proteinea/deeploc | proteinea | 2023-01-16T14:59:58Z | 44 | 0 | null | [
"doi:10.57967/hf/1105",
"region:us"
] | 2023-01-16T14:59:58Z | 2022-12-12T15:48:32.000Z | 2022-12-12T15:48:32 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
irds/msmarco-document | irds | 2023-01-05T03:39:55Z | 44 | 0 | null | [
"task_categories:text-retrieval",
"region:us"
] | 2023-01-05T03:39:55Z | 2023-01-05T03:39:49.000Z | 2023-01-05T03:39:49 | ---
pretty_name: '`msmarco-document`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `msmarco-document`
The `msmarco-document` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=3,213,835
This dataset is used by: [`msmarco-document_trec-dl-hard`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard), [`msmarco-document_trec-dl-hard_fold1`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold1), [`msmarco-document_trec-dl-hard_fold2`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold2), [`msmarco-document_trec-dl-hard_fold3`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold3), [`msmarco-document_trec-dl-hard_fold4`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold4), [`msmarco-document_trec-dl-hard_fold5`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold5)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/msmarco-document', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'title': ..., 'body': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
```
| [
-0.3512464463710785,
-0.11431699246168137,
0.061704207211732864,
0.13836230337619781,
-0.122396320104599,
-0.024517476558685303,
-0.14157003164291382,
-0.2475622445344925,
0.27947184443473816,
0.4736345708370209,
-0.5861659646034241,
-0.8617376089096069,
-0.5365769267082214,
0.640694737434... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keremberke/painting-style-classification | keremberke | 2023-01-18T09:30:28Z | 44 | 3 | null | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"region:us"
] | 2023-01-18T09:30:28Z | 2023-01-18T09:27:05.000Z | 2023-01-18T09:27:05 | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/painting-style-classification" src="https://huggingface.co/datasets/keremberke/painting-style-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Realism', 'Art_Nouveau_Modern', 'Analytical_Cubism', 'Cubism', 'Expressionism', 'Action_painting', 'Synthetic_Cubism', 'Symbolism', 'Ukiyo_e', 'Naive_Art_Primitivism', 'Post_Impressionism', 'Impressionism', 'Fauvism', 'Rococo', 'Minimalism', 'Mannerism_Late_Renaissance', 'Color_Field_Painting', 'High_Renaissance', 'Romanticism', 'Pop_Art', 'Contemporary_Realism', 'Baroque', 'New_Realism', 'Pointillism', 'Northern_Renaissance', 'Early_Renaissance', 'Abstract_Expressionism']
```
### Number of Images
```json
{'valid': 1295, 'train': 4493, 'test': 629}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/painting-style-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/art-dataset/wiki-art/dataset/1](https://universe.roboflow.com/art-dataset/wiki-art/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ wiki-art_dataset,
title = { wiki art Dataset },
type = { Open Source Dataset },
author = { Art Dataset },
howpublished = { \url{ https://universe.roboflow.com/art-dataset/wiki-art } },
url = { https://universe.roboflow.com/art-dataset/wiki-art },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 9, 2022 at 1:47 AM GMT
It includes 6417 images.
Images are annotated into 27 style classes in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
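The "Stretch" resize ignores aspect ratio. As a rough illustration, here is a pure-Python nearest-neighbour sketch of stretching (not Roboflow's actual implementation):

```python
def stretch_resize(pixels, out_w, out_h):
    """Nearest-neighbour 'Stretch' resize: no aspect-ratio preservation,
    the source pixel grid is simply remapped onto the target grid."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Tiny stand-in "image" (2x3 grid of pixel values).
img = [[1, 2, 3],
       [4, 5, 6]]
out = stretch_resize(img, 416, 416)
print(len(out), len(out[0]))  # -> 416 416
```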
| [
-0.5873245000839233,
-0.2900996506214142,
0.25074249505996704,
-0.061281196773052216,
-0.2157258540391922,
-0.02656177431344986,
-0.07264942675828934,
-0.5181522369384766,
0.44523337483406067,
0.4500700533390045,
-0.6730436086654663,
-0.8628050088882446,
-0.5979973673820496,
0.270622074604... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/syntactic-augmentation-nli | metaeval | 2023-06-13T07:28:15Z | 44 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
] | 2023-06-13T07:28:15Z | 2023-01-30T10:35:09.000Z | 2023-01-30T10:35:09 | ---
license: mit
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/Aatlantise/syntactic-augmentation-nli/tree/master/datasets
```
@inproceedings{min-etal-2020-syntactic,
title = "Syntactic Data Augmentation Increases Robustness to Inference Heuristics",
author = "Min, Junghyun and
McCoy, R. Thomas and
Das, Dipanjan and
Pitler, Emily and
Linzen, Tal",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.212",
doi = "10.18653/v1/2020.acl-main.212",
pages = "2339--2352",
}
``` | [
-0.3145707845687866,
-0.6040135622024536,
0.3223954439163208,
0.13268157839775085,
-0.13808277249336243,
0.08185233175754547,
-0.4928404986858368,
-0.6464857459068298,
0.27856341004371643,
0.13068120181560516,
-0.8578640222549438,
-0.7765789031982422,
-0.4543858468532562,
0.476507633924484... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gonzalobenegas/clinvar | gonzalobenegas | 2023-02-09T23:32:45Z | 44 | 0 | null | [
"region:us"
] | 2023-02-09T23:32:45Z | 2023-02-09T23:32:39.000Z | 2023-02-09T23:32:39 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SirNeural/flan_v2 | SirNeural | 2023-02-24T19:05:00Z | 44 | 157 | null | [
"license:apache-2.0",
"flan",
"flan 2022",
"flan v2",
"arxiv:2301.13688",
"region:us"
] | 2023-02-24T19:05:00Z | 2023-02-13T23:02:33.000Z | 2023-02-13T23:02:33 | ---
license: apache-2.0
tags:
- flan
- flan 2022
- flan v2
pretty_name: Flan v2
---
# Dataset Card for Flan V2
## Dataset Description
- **Homepage:** https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html
- **Repository:** https://github.com/google-research/FLAN/tree/main/flan/v2
- **Paper:** https://arxiv.org/abs/2301.13688
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a processed version of the Flan V2 dataset.
I'm not affiliated with the creators, I'm just releasing the files in an easier-to-access format after processing.
The authors of the Flan Collection recommend experimenting with different mixing ratios of tasks to get optimal results downstream.
## Setup Instructions
Here are the steps I followed to get everything working:
### Build AESLC and WinoGrande datasets manually
The repos for these datasets were updated recently and checksums need to be recomputed in TFDS
- `tfds build --dataset aeslc --register_checksums`
- `tfds build --dataset winogrande --register_checksums`
### Fix dataset versions
I've opened a PR [here](https://github.com/google-research/FLAN/pull/20) to get these updated in the upstream FLAN repo; until that gets merged in, run these locally to fix any dataset version errors.
- `sed -i 's/glue\/cola:1.0.0/glue\/cola:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/common_gen:1.0.0/gem\/common_gen:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/dart:1.0.0/gem\/dart:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/e2e_nlg:1.0.0/gem\/e2e_nlg:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/web_nlg_en:1.0.0/gem\/web_nlg_en:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/paws_wiki:1.0.0/paws_wiki:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/mrpc:1.0.0/glue\/mrpc:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/qqp:1.0.0/glue\/qqp:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/sst2:1.0.0/glue\/sst2:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/mnli:1.0.0/glue\/mnli:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/qnli:1.0.0/glue\/qnli:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/wnli:1.0.0/glue\/wnli:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/stsb:1.0.0/glue\/stsb:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/hellaswag:0.0.1/hellaswag:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/xsum:1.0.0/huggingface:xsum/g' flan/v2/task_configs_v1.py`
### Download and install manual steps
Save these to `~/tensorflow_datasets/downloads/manual`.
- [CzEng (deduped ignoring sections)](https://ufal.mff.cuni.cz/czeng/czeng16pre)
- [Newsroom (extract)](https://lil.nlp.cornell.edu/newsroom/download/index.html)
- [Yandex 1M Corpus](https://translate.yandex.ru/corpus?lang=en)
- [Story Cloze (extract and rename to cloze_test_test__spring2016.csv and cloze_test_val__spring2016.csv)](https://cs.rochester.edu/nlp/)
### Finally, export tasks
```python
import tensorflow as tf
tf.config.set_visible_devices([], 'GPU')
from flan.v2 import constants
from flan.v2 import constants_t0
from flan.v2 import mixtures_utils
from flan.v2 import mixtures
from flan.v2 import tasks
import json
import t5
import seqio
import itertools
from multiprocessing import Pool
seqio.add_global_cache_dirs(constants.CACHE_DIRS)
seqio.set_global_cache_dirs(constants.CACHE_DIRS)
vocab = t5.data.get_default_vocabulary()
def prepare_task(split, shots, opt, task):
dataset = seqio.get_mixture_or_task(f'palmflan_{task}_{shots}_{opt}').get_dataset(
split=split,
num_epochs=1,
sequence_length={'inputs':4096,'targets':4096}
)
print("starting", task, shots, opt, split)
with open(f'./data/{task}_{shots}_{opt}_{split}.jsonl', 'w') as f:
for ex in dataset.as_numpy_iterator():
f.write(
json.dumps({
"inputs": vocab.decode(ex["inputs"]),
"targets": vocab.decode(ex["targets"]),
"task": task,
}))
f.write("\n")
print("done with", task, shots, opt, split)
# prepare_task("train", "zs", "noopt", "dialog") # use this to export a single task
tasks = itertools.product(["train"], ["zs", "fs"], ["opt", "noopt"], ["dialog", "t0", "niv2", "flan", "cot"])
with Pool(5) as p:
p.starmap(prepare_task, [(task[0], task[1], task[2], task[3]) for task in tasks])
```
## Dataset Structure
### Data Instances
Flan 2021 (flan), P3 (t0), Super-Natural Instructions (niv2), Chain-of-thought (cot), and Dialog (dialog)
### Data Fields
Instruction data comes in a few formats:
- Few Shot (fs)
- Zero Shot (zs)
- Options Provided in context (i.e. multiple choice pick one) (opt)
- No Options Provided (noopt)
Each combination of the above tasks + formats is saved as a JSONL file with the following schema `{"inputs": ..., "targets": ..., "task": ...}`
### Data Splits
Everything is saved as a train split
Note: FLAN-fs-opt-train is too big to be uploaded even when gzipped, so it's split into 45 GB chunks. To combine and recover, run `cat flan_fs_opt_train_*.gz | gunzip -c > flan_fs_opt_train.jsonl`
| [
-0.5453934073448181,
-0.6336572766304016,
0.3109883666038513,
0.1698775589466095,
0.04426082223653793,
-0.06902170181274414,
-0.30706384778022766,
-0.30198439955711365,
0.39305415749549866,
0.4648960530757904,
-0.6861298680305481,
-0.43824952840805054,
-0.5406635403633118,
0.14468255639076... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gooogr/pie_idioms | Gooogr | 2023-07-19T12:22:56Z | 44 | 0 | null | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"PIE",
"idioms",
"region:us"
] | 2023-07-19T12:22:56Z | 2023-03-24T16:17:22.000Z | 2023-03-24T16:17:22 | ---
license: cc-by-4.0
dataset_info:
features:
- name: idiom
dtype: string
- name: is_pie
dtype: bool
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PIE
'2': I-PIE
splits:
- name: train
num_bytes: 82950018
num_examples: 46090
- name: validation
num_bytes: 10420303
num_examples: 5761
- name: test
num_bytes: 10376839
num_examples: 5762
download_size: 19258913
dataset_size: 103747160
task_categories:
- token-classification
language:
- en
tags:
- PIE
- idioms
size_categories:
- 10K<n<100K
pretty_name: Corpus of potentially idiomatic expressions (PIEs)
---
# Dataset Card for PIEs corpus
### Dataset Summary
This corpus is a collection of 57170 potentially idiomatic expressions (PIEs) based on the British National Corpus, prepared for the NER task.
Each instance comes with a contextual set of tokens, BIO tags, and a boolean label.
The data sources are:
* [MAGPIE corpus](https://github.com/hslh/magpie-corpus)
* [PIE corpus](https://github.com/hslh/pie-annotation)
Detailed data preparation pipeline can be found [here](https://github.com/Gooogr/Idioms_spotter)
### Supported Tasks and Leaderboards
Token classification (NER)
### Languages
English
## Dataset Structure
### Data Instances
Each instance contains a string with the target idiom, the word-tokenized text giving the context of the idiom's usage, the corresponding BIO tags,
and a boolean label `is_pie`. This label determines whether or not the collocation is considered an idiom in the given context.
For the PIE dataset the label was taken from the original PIE label; for MAGPIE, a confidence threshold of 0.75 was chosen.
An example from the train set looks like the following:
```
{'idiom': "go public",
 'is_pie': True,
 'tokens': [ "Private", "dealers", "in", "the", "States", "go", "public" ],
 'ner_tags': [ 0, 0, 0, 0, 0, 1, 2 ]
}
```
Where the NER tag mapping is {0: 'O', 1: 'B-PIE', 2: 'I-PIE'}
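A minimal sketch (pure Python, no download required) recovering the PIE span from the BIO tags of the example above; `label_names` restates the class-label mapping from this card:

```python
# Label names from the dataset's ClassLabel feature, as listed in this card.
label_names = ["O", "B-PIE", "I-PIE"]

# The example instance shown above.
example = {
    "idiom": "go public",
    "is_pie": True,
    "tokens": ["Private", "dealers", "in", "the", "States", "go", "public"],
    "ner_tags": [0, 0, 0, 0, 0, 1, 2],
}

# Recover the PIE span by keeping tokens whose tag is not 'O'.
pie_span = " ".join(
    tok for tok, tag in zip(example["tokens"], example["ner_tags"])
    if label_names[tag] != "O"
)
print(pie_span)  # -> go public
```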
### Data Fields
* idiom: a string containing the original PIE
* is_pie: a boolean label determining whether the PIE can be considered an idiom in the given context
* tokens: a sequence of word-tokenized strings giving the PIE usage context
* ner_tags: the corresponding BIO tags for the word tokens
### Data Splits
The PIEs corpus has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 45,736 |
| Validation | 5,717 |
| Test | 5,717 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
* [MAGPIE corpus](https://github.com/hslh/magpie-corpus)
* [PIE English corpus](https://github.com/hslh/pie-annotation)
## Additional Information
### Licensing Information
The corpus and its sources are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
### Citation Information
[PIE Corpus](https://github.com/hslh/pie-annotation) (Haagsma, H. (Creator), Bos, J. (Contributor), Plank, B. (Contributor), University of Groningen.)<br>
[MAGPIE: A Large Corpus of Potentially Idiomatic Expressions](https://aclanthology.org/2020.lrec-1.35) (Haagsma et al., LREC 2020) | [
-0.5043803453445435,
-0.6069198250770569,
0.005282655358314514,
0.22215530276298523,
-0.2302861511707306,
-0.006023048423230648,
-0.34446752071380615,
-0.17025919258594513,
0.5049701929092407,
0.44081947207450867,
-0.4045759439468384,
-0.6722902059555054,
-0.45298561453819275,
0.4742777049... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gaussalgo/Canard_Wiki-augmented | gaussalgo | 2023-04-12T13:35:37Z | 44 | 0 | null | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-04-12T13:35:37Z | 2023-04-11T12:49:22.000Z | 2023-04-11T12:49:22 | ---
dataset_info:
features:
- name: History
sequence: string
- name: QuAC_dialog_id
dtype: string
- name: Question
dtype: string
- name: Question_no
dtype: int64
- name: Rewrite
dtype: string
- name: true_page_title
dtype: string
- name: true_contexts
dtype: string
- name: answer
dtype: string
- name: true_contexts_wiki
dtype: string
- name: extractive
dtype: bool
- name: retrieved_contexts
sequence: string
splits:
- name: train
num_bytes: 1353765609
num_examples: 31526
- name: test
num_bytes: 252071528
num_examples: 5571
download_size: 231554886
dataset_size: 1605837137
license: cc-by-sa-4.0
task_categories:
- question-answering
- conversational
- text2text-generation
language:
- en
pretty_name: Canard Wikipedia-augmented
size_categories:
- 10K<n<100K
---
# Dataset Card for Canard_Wiki-augmented
### Summary
This is a dataset of fact-retrieving conversations about Wikipedia articles, with all responses grounded in a specific segment of text in the referenced Wikipedia article.
It is an extended version of [Canard](https://sites.google.com/view/qanta/projects/canard)
and [QuAC](https://huggingface.co/datasets/quac) datasets,
augmented with the contexts of [English Wikipedia](https://huggingface.co/datasets/wikipedia).
### Supported Tasks
The dataset is intended to train a factually-consistent conversational model able to ground all its responses to the corresponding source(s).
However, the data can also be used to evaluate an information retrieval (IR) system on the given queries, for contextual disambiguation of queries from a conversation, etc.
## Dataset Structure
The dataset can be loaded by simply choosing a split (`train` or `test`) and calling:
```python
import datasets
canard_augm_test = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
print(canard_augm_test[0]) # print the first sample
```
### Data Instances
The samples of Canard_Wiki-augmented have this format:
```python
{'History': ['Anna Politkovskaya', 'The murder remains unsolved, 2016'],
 'QuAC_dialog_id': 'C_0aaa843df0bd467b96e5a496fc0b033d_1',
 'Question': 'Did they have any clues?',
 'Question_no': 1,
 'answer': 'Her colleagues at Novaya gazeta protested that until the instigator or sponsor of the crime was identified, arrested and prosecuted the case was not closed.',
 'Rewrite': 'Did investigators have any clues in the unresolved murder of Anna Politkovskaya?',
 'true_page_title': 'Anna Politkovskaya',
 'true_contexts': 'In September 2016 Vladimir Markin, official spokesman for (...)',
 'true_contexts_wiki': 'Anna Stepanovna Politkovskaya was a US-born Russian journalist (...)',
 'extractive': True,
 'retrieved_contexts': ['Clues was an indie rock band from Montreal, Canada formed by Alden Penner (...)',
  'High Stakes is a British game show series hosted by Jeremy Kyle, in which (...)']}
```
### Data Fields
* **History**: History of the conversation from Canard. The first two entries of the conversation are always synthetic.
* **QuAC_dialog_id**: Dialogue ID mapping the conversation to the original QuAC dataset (*dialogue_id* in QuAC).
* **Question**: Current question of the user from Canard.
* **Question_no**: Ordering of the user's question from the conversation, originally from Canard.
* **answer**: Correctly extracted answer to a given question from a relevant Wikipedia article (*true_contexts*). Note that some of the questions are open, thus the listed answer is not the only correct possibility.
* **Rewrite**: A rephrased version of *Question*, manually disambiguated from the context of *History* by the annotators of Canard.
* **true_page_title**: Title of the Wikipedia article containing *answer*. *wikipedia_page_title* from QuAC.
* **true_contexts**: An excerpt of the paragraph with an answer from the Wikipedia article titled *true_page_title*.
* **true_contexts_wiki**: A full contents of Wikipedia article (*text* from Wikipedia dataset), where *true_page_title* matches Wikipedia *title*. Note that the Wikipedia dataset was retrieved on 2nd of April, 2023.
* **extractive**: A flag whether the *answer* in this sample can be found as an exact-match in *true_contexts_wiki*.
* **retrieved_contexts**: "Distractor" contexts retrieved from the full Wikipedia dataset using the okapi-BM25 IR system on a **Rewrite** question.
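The distractor contexts were retrieved with okapi-BM25. As a self-contained illustration of that scoring scheme (a simplified sketch with toy documents, not the tokenization or implementation used to build the dataset):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Okapi BM25 score of each document against the query."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # document frequency of each term
    df = Counter(t for d in corpus_tokens for t in set(d))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(score)
    return scores

corpus = [doc.lower().split() for doc in [
    "Anna Stepanovna Politkovskaya was a US-born Russian journalist",
    "Clues was an indie rock band from Montreal Canada",
]]
query = "murder of anna politkovskaya".split()
scores = bm25_scores(query, corpus)
print(scores)  # the on-topic document scores higher
```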
### Data Splits
* **train** split is aligned with the training splits of Canard and QuAC.
* **test** split matches the validation split of QuAC and the test split of Canard (where the conversation ids match).
## Licensing
This dataset is composed of [QuAC](https://huggingface.co/datasets/quac) (MIT),
[Canard](https://sites.google.com/view/qanta/projects/canard) (CC BY-SA 4.0)
and [Wikipedia](https://huggingface.co/datasets/wikipedia) (CC BY SA 3.0).
Canard_Wiki-augmented is therefore licensed under CC BY-SA 4.0 as well, allowing it to be also commercially used.
## Cite
If you use this dataset in research, do not forget to cite the authors of the original datasets that Canard_Wiki-augmented is derived from:
[QuAC](https://huggingface.co/datasets/quac), [Canard](https://sites.google.com/view/qanta/projects/canard).
-0.5061105489730835,
-0.7138932943344116,
0.404680073261261,
0.03381075710058212,
-0.2691103219985962,
-0.23367470502853394,
-0.2118137627840042,
-0.5308120846748352,
0.556096613407135,
0.4099668264389038,
-0.7420892119407654,
-0.5603928565979004,
-0.4229830205440521,
0.20458126068115234,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nan-Do/instructional_code-search-net-ruby | Nan-Do | 2023-05-20T05:25:23Z | 44 | 1 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Ruby",
"Code Generation",
"Instruction Response",
"region:us"
] | 2023-05-20T05:25:23Z | 2023-05-19T03:40:00.000Z | 2023-05-19T03:40:00 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 30679722
num_examples: 51470
download_size: 12427089
dataset_size: 30679722
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Ruby
- Code Generation
- Instruction Response
pretty_name: Instructional Ruby Dataset
---
# Dataset Card for "instructional_code-search-net-ruby"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-ruby
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Ruby.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-ruby
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
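As a rough illustration of template-based instruction generation (the templates and `SOURCE` value below are hypothetical, not the exact ones used to build this dataset):

```python
import random

# Hypothetical templates for illustration only -- not the exact templates
# used during annotation.
templates = [
    "Write a Ruby function that {summary}.",
    "Can you implement Ruby code that {summary}?",
]

def make_pair(summary, code):
    """Pair a code summary with its snippet as an INSTRUCTION/RESPONSE row."""
    instruction = random.choice(templates).format(summary=summary)
    # "codesearchnet" is an illustrative SOURCE value.
    return {"INSTRUCTION": instruction, "RESPONSE": code, "SOURCE": "codesearchnet"}

pair = make_pair("sums the elements of an array",
                 "def sum(a)\n  a.sum\nend")
print(pair["INSTRUCTION"])
```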
### Licensing Information
Apache 2.0 | [
-0.2311146855354309,
-0.6190532445907593,
0.09787331521511078,
0.41273489594459534,
-0.11833180487155914,
0.07826662063598633,
-0.2901802659034729,
0.013339165598154068,
0.5351495146751404,
0.4227679669857025,
-0.7014173269271851,
-0.7349927425384521,
-0.42334190011024475,
0.09414900094270... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saattrupdan/womens-clothing-ecommerce-reviews | saattrupdan | 2023-05-25T20:18:53Z | 44 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"multimodal",
"region:us"
] | 2023-05-25T20:18:53Z | 2023-05-25T20:04:03.000Z | 2023-05-25T20:04:03 | ---
dataset_info:
features:
- name: review_text
dtype: string
- name: age
dtype: int64
- name: rating
dtype: int64
- name: positive_feedback_count
dtype: int64
- name: division_name
dtype: string
- name: department_name
dtype: string
- name: class_name
dtype: string
- name: recommended_ind
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 7811312.540347158
num_examples: 20641
- name: val
num_bytes: 378436.72982642107
num_examples: 1000
- name: test
num_bytes: 378436.72982642107
num_examples: 1000
download_size: 4357015
dataset_size: 8568186.0
task_categories:
- text-classification
language:
- en
tags:
- multimodal
pretty_name: Women's Clothing E-Commerce Reviews
size_categories:
- 1K<n<10K
---
# Dataset Card for "womens-clothing-ecommerce-reviews"
Processed version of [this dataset](https://github.com/ya-stack/Women-s-Ecommerce-Clothing-Reviews). | [
-0.16928353905677795,
-0.6354963183403015,
-0.07435999065637589,
0.15742622315883636,
-0.6554502844810486,
0.16544687747955322,
0.19751742482185364,
-0.5558286905288696,
0.6865033507347107,
0.902938723564148,
-1.2963151931762695,
-1.0861090421676636,
-0.15102331340312958,
0.036633301526308... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KaiLv/UDR_MR | KaiLv | 2023-06-21T12:42:19Z | 44 | 0 | null | [
"region:us"
] | 2023-06-21T12:42:19Z | 2023-06-21T12:42:08.000Z | 2023-06-21T12:42:08 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1164193
num_examples: 8662
- name: test
num_bytes: 266849
num_examples: 2000
- name: debug
num_bytes: 672162
num_examples: 5000
download_size: 1379605
dataset_size: 2103204
---
# Dataset Card for "UDR_MR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5956734418869019,
-0.20133064687252045,
0.08234503120183945,
0.03011227957904339,
-0.18432573974132538,
0.10683398693799973,
0.3758663833141327,
-0.07269365340471268,
0.8131023645401001,
0.449473112821579,
-0.8256275653839111,
-0.667310357093811,
-0.5526999831199646,
-0.0285718012601137... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
raygx/CORONA_en2np | raygx | 2023-07-09T23:49:57Z | 44 | 0 | null | [
"region:us"
] | 2023-07-09T23:49:57Z | 2023-07-09T13:56:28.000Z | 2023-07-09T13:56:28 | ---
dataset_info:
features:
- name: Sentences
dtype: string
- name: Sentiment
dtype: int64
splits:
- name: train
num_bytes: 3052582
num_examples: 5755
download_size: 1231706
dataset_size: 3052582
---
# Dataset Card for "CORONA_en2np"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.41724374890327454,
-0.16058728098869324,
0.15396450459957123,
0.6370474696159363,
-0.12816745042800903,
0.20078563690185547,
0.19867323338985443,
-0.27191680669784546,
0.8558788895606995,
0.5760993957519531,
-0.5736939311027527,
-0.6755121946334839,
-0.4720870852470398,
-0.2315988540649... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
raygx/CORONA_arabic2np | raygx | 2023-07-10T02:31:59Z | 44 | 0 | null | [
"region:us"
] | 2023-07-10T02:31:59Z | 2023-07-10T02:09:07.000Z | 2023-07-10T02:09:07 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 20119541
num_examples: 35676
download_size: 7926342
dataset_size: 20119541
---
# Dataset Card for "CORONA_arabic2np"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4949709475040436,
-0.13932062685489655,
0.0710424929857254,
0.6084275841712952,
-0.24947839975357056,
0.2994145154953003,
0.21071790158748627,
-0.2684188485145569,
0.7608805298805237,
0.47198382019996643,
-0.5351711511611938,
-0.983177900314331,
-0.6717553734779358,
-0.36785653233528137... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sehyun66/Finnhub-News | sehyun66 | 2023-10-12T11:55:56Z | 44 | 2 | null | [
"region:us"
] | 2023-10-12T11:55:56Z | 2023-09-28T13:37:56.000Z | 2023-09-28T13:37:56 | ---
configs:
- config_name: clean
data_files:
- split: clean
path: clean/clean-*
- config_name: default
data_files:
- split: finbert
path: data/finbert-*
- split: train
path: data/train-*
dataset_info:
config_name: clean
features:
- name: datetime
dtype: int64
- name: image
dtype: string
- name: related
dtype: string
- name: source
dtype: string
- name: summary
dtype: string
- name: url
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
- name: headline
dtype: string
splits:
- name: clean
num_bytes: 150902085
num_examples: 316086
download_size: 78262136
dataset_size: 150902085
---
---
configs:
- config_name: clean
data_files:
- split: clean
path: clean/clean-*
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
config_name: distill_bert
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 131086592
num_examples: 316086
download_size: 0
dataset_size: 131086592
dataset_info:
- config_name: default
features:
- name: related
dtype: string
- name: datetime
dtype: int64
- name: image
dtype: string
- name: url
dtype: string
- name: headline
dtype: string
- name: finbert_sentiment
struct:
- name: negative
dtype: float64
- name: neutral
dtype: float64
- name: postive
dtype: float64
- name: source
dtype: string
- name: summary
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
splits:
- name: train
num_bytes: 251731744
num_examples: 515851
download_size: 113022298
dataset_size: 251731744
tags:
- finance
---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9132819771766663,
-0.4154289960861206,
0.3084089159965515,
0.25616252422332764,
-0.4726601243019104,
0.10742653161287308,
-0.17725566029548645,
0.027685033157467842,
0.5460200905799866,
0.3875557780265808,
-0.8623704314231873,
-0.6785838603973389,
-0.6566535830497742,
0.0422362387180328... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anhz/finetune_data | anhz | 2023-11-03T03:40:02Z | 44 | 0 | null | [
"region:us"
] | 2023-11-03T03:40:02Z | 2023-10-20T02:41:16.000Z | 2023-10-20T02:41:16 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
am96149/guanaco-llama2-1k | am96149 | 2023-11-01T10:37:23Z | 44 | 0 | null | [
"region:us"
] | 2023-11-01T10:37:23Z | 2023-10-26T09:39:00.000Z | 2023-10-26T09:39:00 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 195589
num_examples: 2000
- name: test
num_bytes: 87745
num_examples: 900
download_size: 175131
dataset_size: 283334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.31712618470191956,
-0.1850079894065857,
0.25064390897750854,
0.5434027910232544,
-0.5531401634216309,
0.012613247148692608,
0.37307268381118774,
-0.27480971813201904,
0.9305322766304016,
0.4307297170162201,
-0.7881224751472473,
-0.966692328453064,
-0.7247744202613831,
-0.231430932879447... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xu3kev/proof-pile-2-proofsteps | xu3kev | 2023-10-28T21:45:17Z | 44 | 0 | null | [
"arxiv:2310.10631",
"region:us"
] | 2023-10-28T21:45:17Z | 2023-10-28T20:03:34.000Z | 2023-10-28T20:03:34 | ---
configs:
- config_name: default
data_files:
- split: lean_proofsteps
path: "lean_proofsteps/*.parquet"
- split: isa_proofsteps
path: "isa_proofsteps/*.parquet"
---
Proofsteps data from Proof-Pile-2, containing proof steps for Lean and Isabelle.
```python
from datasets import load_dataset
ds = load_dataset(
"xu3kev/proof-pile-2-proofsteps"
)
ds
DatasetDict({
lean_proofsteps: Dataset({
features: ['text', 'meta'],
num_rows: 3432
})
isa_proofsteps: Dataset({
features: ['text', 'meta'],
num_rows: 260726
})
})
```
Quoting from the appendix of [LLEMMA: AN OPEN LANGUAGE MODEL FOR MATHEMATICS](https://arxiv.org/pdf/2310.10631.pdf):
```
B.1.2 LEAN PROOFSTEPS
We extract a dataset of (tactic state, next tactic) pairs from Mathlib 4 (mathlib Community, 2020)
using the lean-training-data (Morrison, 2023) tool. We use Mathlib 4 commit c779bd5,
which was created on August 20th 2023.
B.1.3 ISABELLE PROOFSTEPS
We construct a dataset of Isabelle proofs, building upon the PISA dataset Jiang et al. (2021). Isabelle
Proofsteps comprises proofs from the Archive of Formal Proofs and Isabelle Standard Library, scraped
with PISA Jiang et al. (2021). Each entry in the dataset includes the theorem statement, the proof
states and the proof steps, separated by specific tags. To maintain the integrity of evaluations using
the PISA test set, we decontaminate Isabelle Proofsteps by removing theorems whose names overlap
with those in the PISA test set. Although this approach results in a strict filtering – removing more
than 10,000 theorems although there are only 3600 in the PISA test set – we consider it acceptable in
order to mitigate data contamination. After filtering, Isabelle Proofsteps contains 251,000 theorems.
```
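For downstream fine-tuning, each record's `text` field already contains a flattened proof step. As a hedged sketch (the `[GOAL]`/`[PROOFSTEP]` tags below are an assumption for illustration, not guaranteed to match the dataset's actual formatting), a (tactic state, next tactic) pair could be flattened like this:

```python
def format_proofstep(tactic_state: str, next_tactic: str) -> str:
    # Hypothetical prompt layout; the real dataset stores pre-rendered 'text' fields
    return f"[GOAL]\n{tactic_state}\n[PROOFSTEP]\n{next_tactic}"

example = format_proofstep("⊢ 1 + 1 = 2", "norm_num")
print(example)
```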
| [
-0.29933854937553406,
-0.26321545243263245,
0.331882119178772,
-0.00879148580133915,
-0.09684962034225464,
-0.40716102719306946,
0.1124689057469368,
-0.3858800232410431,
-0.025028955191373825,
0.6549554467201233,
-0.468692809343338,
-0.5459831953048706,
-0.5571776628494263,
-0.086176782846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Argen7um/restrant-qa | Argen7um | 2023-10-30T15:40:22Z | 44 | 1 | null | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"legal",
"region:us"
] | 2023-10-30T15:40:22Z | 2023-10-30T15:28:34.000Z | 2023-10-30T15:28:34 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- legal
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cxllin/medinstructv2 | cxllin | 2023-11-01T22:18:16Z | 44 | 1 | null | [
"region:us"
] | 2023-11-01T22:18:16Z | 2023-11-01T22:17:35.000Z | 2023-11-01T22:17:35 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gowitheflow/wiki1M-character-level-all | gowitheflow | 2023-11-04T00:39:15Z | 44 | 0 | null | [
"region:us"
] | 2023-11-04T00:39:15Z | 2023-11-04T00:28:38.000Z | 2023-11-04T00:28:38 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MU-NLPC/Calc-X | MU-NLPC | 2023-11-08T09:35:11Z | 44 | 0 | null | [
"arxiv:2305.15017",
"arxiv:2110.14168",
"region:us"
] | 2023-11-08T09:35:11Z | 2023-11-05T21:27:34.000Z | 2023-11-05T21:27:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: source_ds
dtype: string
splits:
- name: train
num_bytes: 156447731.0
num_examples: 319169
- name: validation
num_bytes: 1428917
num_examples: 3277
- name: test
num_bytes: 2787009
num_examples: 6096
download_size: 73015819
dataset_size: 160663657.0
---
# Dataset Card for "Calc-X"
This dataset concatenates all arithmetical reasoning datasets in the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
It can be used without data leakage for training, validating, and testing models for arithmetical reasoning.
Find more details in the following resources:
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper (EMNLP 2023)**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
## How was this dataset created
Below is the code that was used to generate this dataset.
```python
calcx_ds_names = ["gsm8k", "ape210k", "aqua_rat", "math_qa", "svamp", "asdiv_a", "mawps"]
all_ds = {
ds_name: datasets.load_dataset(f"MU-NLPC/calc-{ds_name}")
for ds_name in calcx_ds_names
}
common_cols = ["id", "question", "chain", "result"]
calcx = datasets.DatasetDict({
split: datasets.concatenate_datasets([
(all_ds[ds_name][split]
.select_columns(common_cols)
.add_column("source_ds", [ds_name] * len(all_ds[ds_name][split]))
)
for ds_name in calcx_ds_names
if split in all_ds[ds_name]
])
for split in ["train", "validation", "test"]
})
calcx["train"] = calcx["train"].shuffle(seed=0)
```
## Cite
If you use this version of the dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
| [
-0.36171409487724304,
-0.30790871381759644,
0.2165023237466812,
0.06005888059735298,
-0.0683642327785492,
0.020472140982747078,
-0.02863972820341587,
-0.23791728913784027,
0.34914857149124146,
0.4672003388404846,
-0.7037509083747864,
-0.44135189056396484,
-0.5213414430618286,
0.30431386828... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zynab/sts-arabic-translated-modified | Zynab | 2023-11-14T08:19:28Z | 44 | 0 | null | [
"region:us"
] | 2023-11-14T08:19:28Z | 2023-11-10T12:27:43.000Z | 2023-11-10T12:27:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
upaya07/NeurIPS-LLM-data | upaya07 | 2023-11-16T11:45:08Z | 44 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-16T11:45:08Z | 2023-11-16T11:35:13.000Z | 2023-11-16T11:35:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: train_dataset.json
- split: test
path: eval_dataset.json
license: mit
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SonaliBandi/FictionalCharacters | SonaliBandi | 2023-11-17T21:28:03Z | 44 | 0 | null | [
"region:us"
] | 2023-11-17T21:28:03Z | 2023-11-17T20:58:12.000Z | 2023-11-17T20:58:12 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
showchen/MakiseKurisu | showchen | 2023-11-21T17:17:51Z | 44 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-21T17:17:51Z | 2023-11-21T17:17:33.000Z | 2023-11-21T17:17:33 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wza/finfive | wza | 2023-11-22T08:48:22Z | 44 | 0 | null | [
"region:us"
] | 2023-11-22T08:48:22Z | 2023-11-22T08:34:23.000Z | 2023-11-22T08:34:23 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
justinphan3110/sharegpt_instructions_small | justinphan3110 | 2023-11-24T00:17:34Z | 44 | 0 | null | [
"region:us"
] | 2023-11-24T00:17:34Z | 2023-11-24T00:17:26.000Z | 2023-11-24T00:17:26 | ---
dataset_info:
features:
- name: instructions
dtype: string
splits:
- name: train
num_bytes: 58210
num_examples: 424
download_size: 40903
dataset_size: 58210
---
# Dataset Card for "sharegpt_instructions_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6189532279968262,
-0.39838534593582153,
0.3167915642261505,
0.3899967074394226,
-0.287785142660141,
-0.4310876727104187,
0.04021446779370308,
0.15685534477233887,
0.6775376796722412,
0.3006245493888855,
-1.091958999633789,
-0.6115641593933105,
-0.6897212266921997,
-0.4482437074184418,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
petrpan26/typescript-small | petrpan26 | 2023-11-24T14:02:34Z | 44 | 0 | null | [
"region:us"
] | 2023-11-24T14:02:34Z | 2023-11-24T14:02:31.000Z | 2023-11-24T14:02:31 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 7493766
num_examples: 100
download_size: 3088671
dataset_size: 7493766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
peixian/equity_evaluation_corpus | peixian | 2022-10-20T23:35:15Z | 43 | 3 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"gender-classification",
"region:us"
] | 2022-10-20T23:35:15Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
tags:
- gender-classification
---
# Dataset Card for equity-evaluation-corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems and resources. Further, there is a lack of benchmark datasets for examining inappropriate biases in system predictions. Here, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We used the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 Affect in Tweets. We found that several of the systems showed statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available, and encourage its use to evaluate biases in sentiment and other NLP tasks.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `sentence`: a `string` feature.
- `template`: a `string` feature.
- `person`: a `string` feature.
- `race`: a `string` feature.
- `emotion`: a `string` feature.
- `emotion word`: a `string` feature.
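As an illustrative sketch of how the `template`, `person`, and `emotion word` fields relate to the `sentence` field (the `<...>` placeholder syntax here is an assumption for illustration, not taken from the corpus files):

```python
def fill_template(template: str, person: str, emotion_word: str) -> str:
    # Substitute the hypothetical placeholders used here for illustration
    return (template
            .replace("<person>", person)
            .replace("<emotion word>", emotion_word))

sentence = fill_template("<person> feels <emotion word>.", "Alonzo", "angry")
print(sentence)  # -> Alonzo feels angry.
```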
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
| [
-0.5642895698547363,
-0.23190394043922424,
0.0534810945391655,
0.32096895575523376,
0.0406620167195797,
0.22857098281383514,
-0.15552453696727753,
-0.24101415276527405,
0.4049907326698303,
0.30812543630599976,
-0.6214627027511597,
-0.9671770334243774,
-0.8556435704231262,
-0.05236926674842... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qanastek/WMT-16-PubMed | qanastek | 2022-10-22T15:20:12Z | 43 | 2 | null | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bg",
"language:cs",
"language:da",
"language:de",
"lan... | 2022-10-22T15:20:12Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: WMT-16-PubMed
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# WMT-16-PubMed : Parallel biomedical translation corpus from PubMed scientific abstracts
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.statmt.org/wmt16/biomedical-translation-task.html
- **Repository:** https://github.com/biomedical-translation-corpora/corpora
- **Paper:** https://aclanthology.org/W16-2301/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`WMT-16-PubMed` is a parallel corpus for neural machine translation collected and aligned for ACL 2016 during the [WMT'16 Shared Task: Biomedical Translation Task](https://www.statmt.org/wmt16/biomedical-translation-task.html).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of source and target sentence pairs covering 4 languages:
**List of languages:** `English (en)`, `Spanish (es)`, `French (fr)`, `Portuguese (pt)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/WMT-16-PubMed", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
lang doc_id workshop publisher source_text target_text
0 en-fr 26839447 WMT'16 Biomedical Translation Task - PubMed pubmed Global Health: Where Do Physiotherapy and Reha... La place des cheveux et des poils dans les rit...
1 en-fr 26837117 WMT'16 Biomedical Translation Task - PubMed pubmed Carabin Les Carabins
2 en-fr 26837116 WMT'16 Biomedical Translation Task - PubMed pubmed In Process Citation Le laboratoire d'Anatomie, Biomécanique et Org...
3 en-fr 26837115 WMT'16 Biomedical Translation Task - PubMed pubmed Comment on the misappropriation of bibliograph... Du détournement des références bibliographique...
4 en-fr 26837114 WMT'16 Biomedical Translation Task - PubMed pubmed Anti-aging medicine, a science-based, essentia... La médecine anti-âge, une médecine scientifiqu...
... ... ... ... ... ... ...
973972 en-pt 20274330 WMT'16 Biomedical Translation Task - PubMed pubmed Myocardial infarction, diagnosis and treatment Infarto do miocárdio; diagnóstico e tratamento
973973 en-pt 20274329 WMT'16 Biomedical Translation Task - PubMed pubmed The health areas politics A política dos campos de saúde
973974 en-pt 20274328 WMT'16 Biomedical Translation Task - PubMed pubmed The role in tissue edema and liquid exchanges ... O papel dos tecidos nos edemas e nas trocas lí...
973975 en-pt 20274327 WMT'16 Biomedical Translation Task - PubMed pubmed About suppuration of the wound after thoracopl... Sôbre as supurações da ferida operatória após ...
973976 en-pt 20274326 WMT'16 Biomedical Translation Task - PubMed pubmed Experimental study of liver lesions in the tre... Estudo experimental das lesões hepáticas no tr...
```
### Data Fields
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
### Data Splits
`en-es` : 285,584
`en-fr` : 614,093
`en-pt` : 74,300
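A minimal pure-Python sketch for tallying rows per language pair (operating on plain row dictionaries, so it runs without downloading the corpus; the sample rows are illustrative):

```python
from collections import Counter

def pair_counts(rows):
    """Count how many aligned sentence pairs exist for each 'lang' value."""
    return Counter(r["lang"] for r in rows)

rows = [
    {"lang": "en-fr", "source_text": "Carabin", "target_text": "Les Carabins"},
    {"lang": "en-fr", "source_text": "sample", "target_text": "exemple"},
    {"lang": "en-pt", "source_text": "sample", "target_text": "exemplo"},
]
print(pair_counts(rows))  # -> Counter({'en-fr': 2, 'en-pt': 1})
```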
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://www.statmt.org/wmt16/biomedical-translation-task.html).
### Source Data
<!-- #### Initial Data Collection and Normalization
ddd -->
#### Who are the source language producers?
The shared task as been organized by :
* Antonio Jimeno Yepes (IBM Research Australia)
* Aurélie Névéol (LIMSI, CNRS, France)
* Mariana Neves (Hasso-Plattner Institute, Germany)
* Karin Verspoor (University of Melbourne, Australia)
### Personal and Sensitive Information
The corpora is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face WMT-16-PubMed__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__WMT'16 Shared Task: Biomedical Translation Task__:
* Antonio Jimeno Yepes (IBM Research Australia)
* Aurélie Névéol (LIMSI, CNRS, France)
* Mariana Neves (Hasso-Plattner Institute, Germany)
* Karin Verspoor (University of Melbourne, Australia)
<!-- ### Licensing Information
ddd -->
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{bojar-etal-2016-findings,
    title = {Findings of the 2016 Conference on Machine Translation},
    author = {
        Bojar, Ondrej and
        Chatterjee, Rajen and
        Federmann, Christian and
        Graham, Yvette and
        Haddow, Barry and
        Huck, Matthias and
        Jimeno Yepes, Antonio and
        Koehn, Philipp and
        Logacheva, Varvara and
        Monz, Christof and
        Negri, Matteo and
        Neveol, Aurelie and
        Neves, Mariana and
        Popel, Martin and
        Post, Matt and
        Rubino, Raphael and
        Scarton, Carolina and
        Specia, Lucia and
        Turchi, Marco and
        Verspoor, Karin and
        Zampieri, Marcos
    },
    booktitle = {Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers},
    month = aug,
    year = {2016},
    address = {Berlin, Germany},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/W16-2301},
    doi = {10.18653/v1/W16-2301},
    pages = {131--198},
}
```
| [
-0.24051785469055176,
-0.48691001534461975,
0.5126746296882629,
0.185383141040802,
-0.47024035453796387,
0.04447581619024277,
-0.26118364930152893,
-0.6576642990112305,
0.3945351541042328,
0.4192836880683899,
-0.5320634841918945,
-0.8832994103431702,
-0.8090114593505859,
0.8597100973129272... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SocialGrep/the-reddit-place-dataset | SocialGrep | 2022-07-01T17:51:57Z | 43 | 1 | null | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-01T17:51:57Z | 2022-04-05T21:25:45.000Z | 2022-04-05T21:25:45 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-place-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-place-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
### Dataset Summary
The written history of /r/Place, in posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
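A minimal pure-Python sketch of working with these shared fields (operating on plain dictionaries so it runs without the dataset files; the sample records are made up for illustration):

```python
from datetime import datetime, timezone

def split_by_type(points):
    """Separate a mixed stream of data points into posts and comments."""
    posts = [p for p in points if p["type"] == "post"]
    comments = [p for p in points if p["type"] == "comment"]
    return posts, comments

def point_time(point):
    """Interpret 'created_utc' as a timezone-aware UTC datetime."""
    return datetime.fromtimestamp(point["created_utc"], tz=timezone.utc)

points = [
    {"type": "post", "id": "abc123", "created_utc": 1648771200, "title": "r/place is back"},
    {"type": "comment", "id": "def456", "created_utc": 1648771260, "body": "place a pixel!"},
]
posts, comments = split_by_type(points)
print(len(posts), len(comments), point_time(posts[0]).year)  # -> 1 1 2022
```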
## Additional Information
### Licensing Information
CC-BY v4.0
| [
-0.6352067589759827,
-0.8446881175041199,
0.4715960621833801,
0.5468260645866394,
-0.47609978914260864,
0.04757776856422424,
-0.19469456374645233,
-0.23093244433403015,
0.8225982785224915,
0.32585471868515015,
-0.9982806444168091,
-1.137129545211792,
-0.6337556838989258,
0.2439914792776107... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
student/celebA | student | 2022-04-09T16:38:37Z | 43 | 0 | null | [
"region:us"
] | 2022-04-09T16:38:37Z | 2022-04-09T12:17:51.000Z | 2022-04-09T12:17:51 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taln-ls2n/semeval-2010-pre | taln-ls2n | 2022-09-23T07:37:43Z | 43 | 1 | null | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-09-23T07:37:43Z | 2022-04-22T12:10:54.000Z | 2022-04-22T12:10:54 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license: cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- n<1K
pretty_name: Preprocessed SemEval-2010 Benchmark dataset
---
# Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation
## About
SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 244 **full-text** scientific papers collected from the [ACM Digital Library](https://dl.acm.org/).
Keyphrases were annotated by readers and combined with those provided by the authors.
Details about the SemEval-2010 dataset can be found in the original paper [(kim et al., 2010)][kim-2010].
This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016] and provides four increasingly sophisticated levels of document preprocessing:
* `lvl-1`: default text files provided by the SemEval-2010 organizers.
* `lvl-2`: for each file, we manually retrieved the original PDF file from the ACM Digital Library.
We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505.
We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements: title, headers, abstract, introduction, related work, body text and conclusion.
We finally apply a systematic dehyphenation at line breaks.
* `lvl-3`: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.
* `lvl-4`: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique.
We keep the title and abstract and select the most content bearing sentences from the remaining contents.
Titles and abstracts, collected from the [SciCorefCorpus](https://github.com/melsk125/SciCorefCorpus), are also provided.
Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)][chaimongkol-2014].
Reference keyphrases are provided in stemmed form (because they were provided like this for the test split in the competition).
They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
The <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and text (lvl-1).
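As a rough illustration of the PRMU scheme described above, the categorization logic can be sketched in plain Python. This is an assumption-laden simplification: whitespace-split, already-stemmed tokens stand in for the spacy/Porter pipeline actually used in `prmu.py`.

```python
def prmu_category(keyphrase_tokens, text_tokens):
    """Categorize a (stemmed) keyphrase against the (stemmed) source text.

    Present   : tokens appear contiguously, in order, in the text.
    Reordered : all tokens appear in the text, but not as a contiguous sequence.
    Mixed     : some (not all) tokens appear in the text.
    Unseen    : no token appears in the text.
    """
    n = len(keyphrase_tokens)
    # Present: look for the exact token sequence in the text.
    for i in range(len(text_tokens) - n + 1):
        if text_tokens[i:i + n] == keyphrase_tokens:
            return "P"
    vocab = set(text_tokens)
    matched = [t for t in keyphrase_tokens if t in vocab]
    if len(matched) == n:
        return "R"
    if matched:
        return "M"
    return "U"

# Toy stemmed text, not actual dataset content.
text = "graph-based rank method for keyphras extract".split()
print(prmu_category("keyphras extract".split(), text))  # contiguous -> P
print(prmu_category("extract keyphras".split(), text))  # all tokens, reordered -> R
print(prmu_category("keyphras generat".split(), text))  # partial overlap -> M
print(prmu_category("neural network".split(), text))    # no overlap -> U
```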
## Content and statistics
The dataset is divided into the following two splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-------:|-------------:|----------:|------------:|--------:|---------:|
| Train | 144 | 184.6 | 15.44 | 42.16 | 7.36 | 26.85 | 23.63 |
| Test | 100 | 203.1 | 14.66 | 40.11 | 8.34 | 27.12 | 24.43 |
Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers.
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **lvl-1**: content of the document with no text processing.
- **lvl-2**: content of the document retrieved from original PDF files and cleaned up.
- **lvl-3**: content of the document further abridged to relevant sections.
- **lvl-4**: content of the document further abridged using an unsupervised summarization technique.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
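For illustration, a single record with the fields above might look as follows; the values are made-up placeholders, not actual dataset content. Note that `keyphrases` and `prmu` are aligned by position.

```python
# A made-up record mirroring the data fields listed above (placeholder values).
record = {
    "id": "C-41",
    "title": "A placeholder paper title",
    "abstract": "A placeholder abstract.",
    "lvl-1": "raw text ...",
    "lvl-2": "cleaned full text ...",
    "lvl-3": "abridged text ...",
    "lvl-4": "summarized text ...",
    "keyphrases": ["keyphras extract", "document preprocess"],
    "prmu": ["P", "U"],
}

# Each keyphrase is paired with its PRMU category by index.
for keyphrase, category in zip(record["keyphrases"], record["prmu"]):
    print(f"{category}\t{keyphrase}")
```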
## References
- (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010.
[SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010].
In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics.
- (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014.
[Corpus for Coreference Resolution on Scientific Papers][chaimongkol-2014].
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA).
- (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016.
[How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016].
In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[kim-2010]: https://aclanthology.org/S10-1004/
[chaimongkol-2014]: https://aclanthology.org/L14-1259/
[boudin-2016]: https://aclanthology.org/W16-3917/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
searle-j/kote | searle-j | 2022-10-20T19:16:24Z | 43 | 3 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:kor",
"license:mit",
"region:us"
] | 2022-10-20T19:16:24Z | 2022-05-06T05:55:04.000Z | 2022-05-06T05:55:04 | ---
annotations_creators:
- crowdsourced
language:
- kor
license:
- mit
multilinguality:
- monolingual
pretty_name: kote
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
- text-classification-other-emotion
---
psyche/kowiki | psyche | 2023-11-09T08:34:05Z | 43 | 1 | null | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2023-11-09T08:34:05Z | 2022-06-12T04:14:40.000Z | 2022-06-12T04:14:40 | ---
language:
- ko
license:
- apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1142558231.8083806
num_examples: 531002
- name: validation
num_bytes: 126952588.19161937
num_examples: 59001
download_size: 742445023
dataset_size: 1269510820.0
---
autoevaluate/autoeval-staging-eval-project-57377e87-7975069 | autoevaluate | 2022-06-28T01:17:06Z | 43 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-06-28T01:17:06Z | 2022-06-28T01:10:25.000Z | 2022-06-28T01:10:25 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- food101
eval_info:
task: image_multi_class_classification
model: nateraw/food
metrics: []
dataset_name: food101
dataset_config: default
dataset_split: validation
col_mapping:
image: image
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: nateraw/food
* Dataset: food101
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
naver-clova-ix/synthdog-zh | naver-clova-ix | 2022-07-22T06:43:28Z | 43 | 3 | null | [
"region:us"
] | 2022-07-22T06:43:28Z | 2022-07-20T00:42:55.000Z | 2022-07-20T00:42:55 | Entry not found
SALT-NLP/FLUE-FiQA | SALT-NLP | 2022-10-21T17:29:14Z | 43 | 2 | null | [
"license:cc-by-3.0",
"region:us"
] | 2022-10-21T17:29:14Z | 2022-10-19T23:39:48.000Z | 2022-10-19T23:39:48 | ---
license: cc-by-3.0
---
## Dataset Summary
- **Homepage:** https://sites.google.com/view/salt-nlp-flang
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark built from 5 diverse financial domain-specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://huggingface.co/datasets/SALT-NLP/FLUE-NER)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Dataset Structure
The FiQA dataset has a corpus, queries and qrels (relevance judgments file). They are in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
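A minimal sketch of reading the three files into Python dictionaries; the file names and tiny sample records below are assumptions for illustration, but any BEIR-style layout parses the same way.

```python
import json
import os
import tempfile

def load_jsonl(path):
    """Read a .jsonl file into a dict keyed by the '_id' field."""
    with open(path, encoding="utf-8") as f:
        return {rec["_id"]: rec for rec in (json.loads(line) for line in f if line.strip())}

def load_qrels(path):
    """Read a tab-separated qrels file (with header row) into {query-id: {corpus-id: score}}."""
    qrels = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the "query-id  corpus-id  score" header row
        for line in f:
            qid, did, score = line.rstrip("\n").split("\t")
            qrels.setdefault(qid, {})[did] = int(score)
    return qrels

# Demonstrate on tiny made-up records written to a temporary directory.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "corpus.jsonl"), "w", encoding="utf-8") as f:
    f.write(json.dumps({"_id": "doc1", "title": "", "text": "Some passage."}) + "\n")
with open(os.path.join(tmp, "queries.jsonl"), "w", encoding="utf-8") as f:
    f.write(json.dumps({"_id": "q1", "text": "Some question?"}) + "\n")
with open(os.path.join(tmp, "qrels.tsv"), "w", encoding="utf-8") as f:
    f.write("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")

corpus = load_jsonl(os.path.join(tmp, "corpus.jsonl"))
queries = load_jsonl(os.path.join(tmp, "queries.jsonl"))
qrels = load_qrels(os.path.join(tmp, "qrels.tsv"))
print(qrels)  # {'q1': {'doc1': 1}}
```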
leslyarun/c4_200m_gec_train100k_test25k | leslyarun | 2022-10-26T07:59:31Z | 43 | 3 | null | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | 2022-10-26T07:59:31Z | 2022-10-26T07:21:21.000Z | 2022-10-26T07:21:21 | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
A sample of the C4 200M dataset, adapted from https://huggingface.co/datasets/liweili/c4_200m
C4_200M is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
# Description
As discussed above, this dataset contains 185 million sentence pairs. Each sentence pair has two attributes, `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
```
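The actual C4_200M corruptions come from a learned tagged-corruption model (see the linked repository). Purely as a toy illustration of turning a clean sentence into a GEC source side, a rule-based corruption could look like this; it is an assumption for demonstration, not the real synthesis pipeline.

```python
import random

def corrupt(sentence, seed=0):
    """Toy corruption: swap one adjacent word pair and drop one word.

    This is NOT the C4_200M corruption model -- just a minimal
    illustration of synthesizing an ungrammatical 'input' from a clean 'output'.
    """
    rng = random.Random(seed)
    words = sentence.split()
    if len(words) > 2:
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]  # swap an adjacent pair
        words.pop(rng.randrange(len(words)))             # drop one word
    return " ".join(words)

clean = "Bitcoin goes for $7,094 this morning, according to CoinDesk."
pair = {"input": corrupt(clean), "output": clean}
print(pair)
```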
bigbio/cpi | bigbio | 2023-01-06T03:46:05Z | 43 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2023-01-06T03:46:05Z | 2023-01-06T03:44:03.000Z | 2023-01-06T03:44:03 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: ISC
pretty_name: CPI
homepage: https://github.com/KerstenDoering/CPI-Pipeline
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for CPI
## Dataset Description
- **Homepage:** https://github.com/KerstenDoering/CPI-Pipeline
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE
The compound-protein relationship (CPI) dataset consists of 2,613 sentences
from abstracts containing annotations of proteins, small molecules, and their
relationships.
## Citation Information
```
@article{doring2020automated,
title={Automated recognition of functional compound-protein relationships in literature},
author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others},
journal={Plos one},
volume={15},
number={3},
pages={e0220925},
year={2020},
publisher={Public Library of Science San Francisco, CA USA}
}
```