MicPie/unpredictable_mmo-champion-com | MicPie | 2022-08-04T20:09:49Z | 24 | 0 | null | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | 2022-08-04T20:09:49Z | 2022-07-03T08:15:38.000Z | 2022-07-03T08:15:38 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-mmo-champion-com
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-mmo-champion-com" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide: we have thousands of tasks, each with only a few examples, whereas most current NLP datasets are very deep, with tens of tasks and many examples per task. This means our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve the few-shot performance of language models by fine-tuning or pre-training on it.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary with a 'task' field that identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same table row, while the 'output' field is the target, an individual column of that row. Each task contains several such examples, which can be concatenated into a few-shot task. For multiple-choice classification, the 'options' field lists the possible classes that a model needs to choose from.
There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', and 'wdcFile'.
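For illustration, here is a minimal sketch of loading this subset and concatenating one task's examples into a few-shot prompt. The prompt template is an assumption for demonstration purposes, not the format used in our paper.
```python
from datasets import load_dataset

dataset = load_dataset("MicPie/unpredictable_mmo-champion-com", split="train")

# Collect a few examples belonging to the same task.
task_name = dataset[0]["task"]
examples = [ex for ex in dataset if ex["task"] == task_name][:4]

# Concatenate them into a few-shot prompt; the last example is the query.
prompt = "\n\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples[:-1])
prompt += f"\n\nInput: {examples[-1]['input']}\nOutput:"
print(prompt)
```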
### Data Fields
- `task`: task identifier.
- `input`: column elements of a specific row in the table.
- `options`: for multiple-choice classification, the options to choose from.
- `output`: target column element of the same row as the input.
- `pageTitle`: the title of the page containing the table.
- `outputColName`: the name of the output column.
- `url`: the URL of the website containing the table.
- `wdcFile`: the source file in the WDC Web Table Corpus.
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. The detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
embedding-data/coco_captions_quintets | embedding-data | 2022-08-02T02:18:54Z | 24 | 3 | embedding-data/coco_captions | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"arxiv:1405.0312",
"region:us"
] | 2022-08-02T02:18:54Z | 2022-07-07T23:12:19.000Z | 2022-07-07T23:12:19 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/coco_captions
pretty_name: coco_captions
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "coco_captions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cocodataset.org/#home](https://cocodataset.org/#home)
- **Repository:** [https://github.com/cocodataset/cocodataset.github.io](https://github.com/cocodataset/cocodataset.github.io)
- **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
- **Point of Contact:** [info@cocodataset.org](mailto:info@cocodataset.org)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:** 6.32 MB
### Dataset Summary
COCO is a large-scale object detection, segmentation, and captioning dataset. This repo contains five captions per image; useful for sentence similarity tasks.
Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a quintet of similar sentences and is formatted as a dictionary with a single key, "set", whose value is the list of sentences:
```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```
This dataset is useful for training Sentence Transformers models on sets of similar sentences; a minimal training sketch follows the usage example below.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/coco_captions")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 82783
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
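Below is a minimal training sketch using the `sentence-transformers` library, where any two captions from the same set are treated as a positive pair for `MultipleNegativesRankingLoss`. The base model and hyperparameters are placeholder choices, not recommendations from the dataset authors.
```python
from itertools import combinations

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

dataset = load_dataset("embedding-data/coco_captions_quintets", split="train")

# Any two captions of the same image are treated as semantically similar.
train_examples = [
    InputExample(texts=[a, b])
    for row in dataset.select(range(1000))  # small subset for a quick demo
    for a, b in combinations(row["set"], 2)
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder base model
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```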
### Data Instances
[More Information Needed](https://cocodataset.org/#format-data)
### Data Splits
[More Information Needed](https://cocodataset.org/#format-data)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://cocodataset.org/#home)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://cocodataset.org/#home)
#### Who are the source language producers?
[More Information Needed](https://cocodataset.org/#home)
### Annotations
#### Annotation process
[More Information Needed](https://cocodataset.org/#home)
#### Who are the annotators?
[More Information Needed](https://cocodataset.org/#home)
### Personal and Sensitive Information
[More Information Needed](https://cocodataset.org/#home)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://cocodataset.org/#home)
### Discussion of Biases
[More Information Needed](https://cocodataset.org/#home)
### Other Known Limitations
[More Information Needed](https://cocodataset.org/#home)
## Additional Information
### Dataset Curators
[More Information Needed](https://cocodataset.org/#home)
### Licensing Information
The annotations in this dataset along with this website belong to the COCO Consortium
and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
[More Information Needed](https://cocodataset.org/#home)
### Contributions
Thanks to:
- Tsung-Yi Lin - Google Brain
- Genevieve Patterson - MSR, Trash TV
- Matteo R. Ronchi - Caltech
- Yin Cui - Google
- Michael Maire - TTI-Chicago
- Serge Belongie - Cornell Tech
- Lubomir Bourdev - WaveOne, Inc.
- Ross Girshick - FAIR
- James Hays - Georgia Tech
- Pietro Perona - Caltech
- Deva Ramanan - CMU
- Larry Zitnick - FAIR
- Piotr Dollár - FAIR
for adding this dataset.
readerbench/ro-fb-offense | readerbench | 2023-02-20T13:26:28Z | 24 | 2 | null | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"regio... | 2023-02-20T13:26:28Z | 2022-07-10T17:53:14.000Z | 2022-07-10T17:53:14 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ro
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
pretty_name: RO-FB-Offense
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language,
hate speech).'
tags:
- hate-speech-detection
---
# Dataset Card for "RO-FB-Offense"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Repository:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Paper:** FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 user-generated comments, in Romanian, from Facebook live broadcasts.
The annotation follows the hierarchical tagset proposed in the GermEval 2018 dataset.
The following classes are available:
* OTHER: Non-Offensive Language
* OFFENSIVE:
- PROFANITY
- INSULT
- ABUSE
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'sender': '$USER1208',
'no_reacts': 1,
'text': 'PLACEHOLDER TEXT',
'label': 'OTHER',
}
```
### Data Fields
- `sender`: a `string` feature.
- `no_reacts`: an `integer` feature.
- `text`: a `string` feature.
- `label`: a categorical label, one of `OTHER`, `PROFANITY`, `INSULT`, `ABUSE`.
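A minimal loading sketch using the standard `datasets` API; the split name is an assumption, since the split table below is unfilled, and the repository is gated, so authentication may be required:
```python
from collections import Counter

from datasets import load_dataset

# "train" is an assumption; the card's split table below is unfilled.
dataset = load_dataset("readerbench/ro-fb-offense", split="train")

# Distribution over the four offensiveness classes
# (labels may be strings or class indices, depending on the loader).
print(Counter(example["label"] for example in dataset))
```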
### Data Splits
| name |train|test|
|---------|----:|---:|
|ro|x|x|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification in the Romanian language.
### Source Data
Facebook comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. It could be misused to develop and propagate offensive language against any of the target groups involved, e.g., via ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under the Apache-2.0 license.
### Citation Information
```
@inproceedings{busuioc2022fb-ro-offense,
  title={FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments},
  author={Busuioc, Gabriel-Razvan and Paraschiv, Andrei and Dascalu, Mihai},
  booktitle={International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) 2022},
  year={2022}
}
```
### Contributions
MariaIsabel/FR_NFR_Spanish_requirements_classification | MariaIsabel | 2022-07-22T07:19:16Z | 24 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-07-22T07:19:16Z | 2022-07-15T12:01:21.000Z | 2022-07-15T12:01:21 | ---
annotations_creators:
- other
language:
- es
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Spanish requirements labeled in functional and non-functional classes.
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Published version of the dataset used for the paper 'Towards an automatic requirements classification in a new Spanish dataset'.
### Languages
Spanish
## Dataset Structure
### Data Fields
- `Project`: identifier of the project from which the requirements were obtained.
- `Requirement`: description of the software requirement.
- `Final label`: label of the requirement, either F (functional requirement) or NF (non-functional requirement).
## Dataset Creation
### Initial Data Collection and Normalization
This dataset was created from a collection of functional and non-functional requirements extracted from 13 final-degree and 2 master's projects carried out at the University of A Coruña. It consists of 300 functional and 89 non-functional requirements.
## Additional Information
### Citation Information
https://doi.org/10.5281/zenodo.6556541
ChristophSchuhmann/improved_aesthetics_6.25plus | ChristophSchuhmann | 2022-08-10T11:33:42Z | 24 | 8 | null | [
"region:us"
] | 2022-08-10T11:33:42Z | 2022-08-10T11:33:29.000Z | 2022-08-10T11:33:29 | Entry not found
anandu/eurostat_demo | anandu | 2022-08-13T19:38:55Z | 24 | 0 | null | [
"region:us"
] | 2022-08-13T19:38:55Z | 2022-08-13T19:38:32.000Z | 2022-08-13T19:38:32 | Entry not found
Norod78/EmojiFFHQAlignedFaces | Norod78 | 2022-08-16T13:40:19Z | 24 | 1 | null | [
"region:us"
] | 2022-08-16T13:40:19Z | 2022-08-16T13:39:41.000Z | 2022-08-16T13:39:41 | Entry not found
Bingsu/Gameplay_Images | Bingsu | 2022-08-26T05:31:58Z | 24 | 2 | null | [
"task_categories:image-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-08-26T05:31:58Z | 2022-08-26T04:42:10.000Z | 2022-08-26T04:42:10 | ---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Gameplay Images
size_categories:
- 1K<n<10K
task_categories:
- image-classification
---
# Gameplay Images
## Dataset Description
- **Homepage:** [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images)
- **Download Size:** 2.50 GiB
- **Generated Size:** 1.68 GiB
- **Total Size:** 4.19 GiB
A dataset from [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images).
It contains screenshots from 10 of the most famous video games in the world:
- Among Us
- Apex Legends
- Fortnite
- Forza Horizon
- Free Fire
- Genshin Impact
- God of War
- Minecraft
- Roblox
- Terraria
There are 1000 images per class and all are sized `640 x 360`. They are in the `.png` format.
This dataset was made by saving frames every few seconds from famous gameplay videos on YouTube.
※ This dataset was uploaded in January 2022. Game content updated after that will not be included.
### License
CC-BY-4.0
## Dataset Structure
### Data Instance
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Gameplay_Images")
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 10000
})
})
```
```python
>>> dataset["train"].features
{'image': Image(decode=True, id=None),
'label': ClassLabel(num_classes=10, names=['Among Us', 'Apex Legends', 'Fortnite', 'Forza Horizon', 'Free Fire', 'Genshin Impact', 'God of War', 'Minecraft', 'Roblox', 'Terraria'], id=None)}
```
### Data Size
download: 2.50 GiB<br>
generated: 1.68 GiB<br>
total: 4.19 GiB
### Data Fields
- image: `Image`
- A `PIL.Image.Image object` containing the image. size=640x360
- Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: an int classification label.
Class Label Mappings:
```json
{
"Among Us": 0,
"Apex Legends": 1,
"Fortnite": 2,
"Forza Horizon": 3,
"Free Fire": 4,
"Genshin Impact": 5,
"God of War": 6,
"Minecraft": 7,
"Roblox": 8,
"Terraria": 9
}
```
```python
>>> dataset["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x360>,
'label': 0}
```
### Data Splits
| | train |
| ---------- | -------- |
| # of data | 10000 |
### Note
#### train_test_split
```python
>>> ds_new = dataset["train"].train_test_split(0.2, seed=42, stratify_by_column="label")
>>> ds_new
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8000
})
test: Dataset({
features: ['image', 'label'],
num_rows: 2000
})
})
```
Pratha1m/euroSAT-convnext | Pratha1m | 2022-09-09T09:58:25Z | 24 | 0 | null | [
"region:us"
] | 2022-09-09T09:58:25Z | 2022-09-04T17:45:23.000Z | 2022-09-04T17:45:23 | Entry not found
open-source-metrics/text-to-speech-checkpoint-downloads | open-source-metrics | 2022-10-06T19:27:52Z | 24 | 1 | null | [
"region:us"
] | 2022-10-06T19:27:52Z | 2022-09-18T01:29:42.000Z | 2022-09-18T01:29:42 | Entry not found
datnth1709/VLSP2016-NER-data | datnth1709 | 2022-09-27T08:53:25Z | 24 | 0 | null | [
"region:us"
] | 2022-09-27T08:53:25Z | 2022-09-27T08:51:50.000Z | 2022-09-27T08:51:50 | Entry not found
heegyu/namuwiki-sentences | heegyu | 2022-10-14T07:55:44Z | 24 | 1 | null | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | 2022-10-14T07:55:44Z | 2022-10-01T04:48:22.000Z | 2022-10-01T04:48:22 | ---
license: cc-by-nc-sa-2.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- other
---
- 38,015,081 rows
ihassan1/auditor-sentiment | ihassan1 | 2022-10-02T08:44:54Z | 24 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"auditor",
"financial",
"sentiment",
"markets",
"region:us"
] | 2022-10-02T08:44:54Z | 2022-10-01T15:10:00.000Z | 2022-10-01T15:10:00 | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license: []
multilinguality:
- monolingual
pretty_name: auditor-sentiment
size_categories: []
source_datasets: []
tags:
- auditor
- financial
- sentiment
- markets
task_categories:
- text-classification
task_ids:
- sentiment-scoring
---
# Dataset Card for Auditor Sentiment
RamAnanth1/lex-fridman-podcasts | RamAnanth1 | 2022-12-17T21:39:56Z | 24 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_ids:sentiment-analysis",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"... | 2022-12-17T21:39:56Z | 2022-10-03T18:24:26.000Z | 2022-10-03T18:24:26 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Lex Fridman Podcasts
size_categories:
- n<1K
task_categories:
- text-classification
- text-generation
- summarization
task_ids:
- sentiment-analysis
- dialogue-modeling
- language-modeling
---
# Dataset Card for Lex Fridman Podcasts Dataset
This dataset is sourced from Andrej Karpathy's [Lexicap website](https://karpathy.ai/lexicap/), which contains English transcripts of Lex Fridman's wonderful podcast episodes. The transcripts were generated using OpenAI's large-sized [Whisper model](https://github.com/openai/whisper).
TalTechNLP/ERRnews | TalTechNLP | 2023-04-10T13:17:48Z | 24 | 0 | err-news | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:et",
"license:cc-by-4.0",
"region:us"
] | 2023-04-10T13:17:48Z | 2022-10-06T15:28:35.000Z | 2022-10-06T15:28:35 | ---
pretty_name: ERRnews
annotations_creators:
- expert-generated
language:
- et
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: err-news
---
# Dataset Card for "ERRnews"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/10_3_23_Harm.pdf
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
ERRnews is an Estonian-language summarization dataset of ERR news broadcasts scraped from the ERR audio archive (https://arhiiv.err.ee/err-audioarhiiv). The dataset consists of news story transcripts generated by an ASR pipeline, paired with the human-written summaries from the archive. To leverage larger English models, the dataset also includes machine-translated (https://neurotolge.ee/) transcript and summary pairs.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Estonian
## Dataset Structure
### Data Instances
```
{'name': 'Kütuseaktsiis Balti riikides on erinev.', 'summary': 'Eestis praeguse plaani järgi järgmise aasta maini kehtiv madalam diislikütuse aktsiis ei ajenda enam tankima Lätis, kuid bensiin on seal endiselt odavam. Peaminister Kaja Kallas ja kütusemüüjad on eri meelt selles, kui suurel määral mõjutab aktsiis lõpphinda tanklais.', 'transcript': 'Eesti-Läti piiri alal on kütusehinna erinevus eriti märgatav ja ka tuntav. Õigema pildi saamiseks tuleks võrrelda ühe keti keskmist hinda, kuna tanklati võib see erineda Circle K. [...] Olulisel määral mõjutab hinda kütuste sisseost, räägib kartvski. On selge, et maailmaturuhinna põhjal tehtud ost Tallinnas erineb kütusehinnast Riias või Vilniuses või Varssavis. Kolmas mõjur ja oluline mõjur on biolisandite kasutamise erinevad nõuded riikide vahel.', 'url': 'https://arhiiv.err.ee//vaata/uudised-kutuseaktsiis-balti-riikides-on-erinev', 'meta': '\n\n\nSarja pealkiri:\nuudised\n\n\nFonoteegi number:\nRMARH-182882\n\n\nFonogrammi tootja:\n2021 ERR\n\n\nEetris:\n16.09.2021\n\n\nSalvestuskoht:\nRaadiouudised\n\n\nKestus:\n00:02:34\n\n\nEsinejad:\nKond Ragnar, Vahtrik Raimo, Kallas Kaja, Karcevskis Ojars\n\n\nKategooria:\nUudised → uudised, muu\n\n\nPüsiviide:\n\nvajuta siia\n\n\n\n', 'audio': {'path': 'recordings/12049.ogv', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 2.44576868e-06, 6.38223427e-06, 0.00000000e+00]), 'sampling_rate': 16000}, 'recording_id': 12049}
```
### Data Fields
```
name: News story headline
summary: Hand written summary.
transcript: Automatically generated transcript from the audio file with an ASR system.
url: ERR archive URL.
meta: ERR archive metadata.
en_summary: Machine translated English summary.
en_transcript: Machine translated English transcript.
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
recording_id: Audio file id.
```
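A minimal loading sketch that follows the decoding advice above (the split name matches the split table below; downloading the audio may take a while):
```python
from datasets import load_dataset

dataset = load_dataset("TalTechNLP/ERRnews", split="train")

# Query the sample index first, then the "audio" column, so that only
# one audio file is decoded and resampled.
sample = dataset[0]
print(sample["summary"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```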
### Data Splits
|train|validation|test|
|:----|:---------|:---|
|10420|523|523|
### BibTeX entry and citation info
```bibtex
@article{henryabstractive,
  title={Abstractive Summarization of Broadcast News Stories for {Estonian}},
  author={H{\"a}rm, Henry and Alum{\"a}e, Tanel},
  journal={Baltic J. Modern Computing},
  volume={10},
  number={3},
  pages={511--524},
  year={2022}
}
```
arbml/Arabic_Hate_Speech | arbml | 2022-10-21T20:22:02Z | 24 | 2 | null | [
"region:us"
] | 2022-10-21T20:22:02Z | 2022-10-21T20:21:56.000Z | 2022-10-21T20:21:56 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet
dtype: string
- name: is_off
dtype: string
- name: is_hate
dtype: string
- name: is_vlg
dtype: string
- name: is_vio
dtype: string
splits:
- name: train
num_bytes: 1656540
num_examples: 8557
- name: validation
num_bytes: 234165
num_examples: 1266
download_size: 881261
dataset_size: 1890705
---
# Dataset Card for "Arabic_Hate_Speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
findzebra/queries | findzebra | 2022-10-25T10:02:34Z | 24 | 0 | null | [
"region:us"
] | 2022-10-25T10:02:34Z | 2022-10-25T09:58:49.000Z | 2022-10-25T09:58:49 | # FindZebra Queries
A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)). In a retrieval setting, the task consists of retrieving an article from the [FindZebra corpus](https://huggingface.co/datasets/findzebra/corpus) with a CUI that matches the query CUI.
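As an illustration of that retrieval setting, here is a hedged BM25 sketch. The split names and the field names (`query` and `cui` on the queries, `text` and `cui` on the corpus) are assumptions about the schema, and `rank_bm25` is simply a convenient scorer, not the method used by the FindZebra authors.
```python
from datasets import load_dataset
from rank_bm25 import BM25Okapi  # pip install rank-bm25

queries = load_dataset("findzebra/queries", split="test")   # split name assumed
corpus = load_dataset("findzebra/corpus", split="train")    # split name assumed

# Index the corpus with BM25 over whitespace tokens.
docs = [doc["text"].lower().split() for doc in corpus]      # field name assumed
bm25 = BM25Okapi(docs)

# Recall@1: does the top-ranked article carry the query's CUI?
hits = 0
for q in queries:
    scores = bm25.get_scores(q["query"].lower().split())    # field name assumed
    best = max(range(len(scores)), key=scores.__getitem__)
    hits += corpus[best]["cui"] == q["cui"]
print(f"recall@1: {hits / len(queries):.3f}")
```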
juanhebert/sv_corpora_parliament_processed | juanhebert | 2022-11-03T10:21:27Z | 24 | 0 | null | [
"region:us"
] | 2022-11-03T10:21:27Z | 2022-10-25T10:51:07.000Z | 2022-10-25T10:51:07 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 292359009
num_examples: 1892723
download_size: 158940474
dataset_size: 292359009
---
# Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vonewman/word-embeddings-dataset | vonewman | 2022-10-25T13:07:40Z | 24 | 0 | null | [
"license:mit",
"region:us"
] | 2022-10-25T13:07:40Z | 2022-10-25T13:06:02.000Z | 2022-10-25T13:06:02 | ---
license: mit
---
marianna13/laion2B-multi-joined-translated-to-en-ultra-hr | marianna13 | 2022-11-07T14:26:15Z | 24 | 0 | null | [
"region:us"
] | 2022-11-07T14:26:15Z | 2022-11-04T14:53:22.000Z | 2022-11-04T14:53:22 | Found. Redirecting to README.md
bigbio/bionlp_st_2013_gro | bigbio | 2022-12-22T15:44:01Z | 24 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:44:01Z | 2022-11-13T22:07:10.000Z | 2022-11-13T22:07:10 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 GRO
homepage: https://github.com/openbiocorpora/bionlp-st-2013-gro
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for BioNLP 2013 GRO
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-gro
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE
GRO Task: Populating the Gene Regulation Ontology with events and
relations. A dataset from the BioNLP Shared Task 2013 competition.
## Citation Information
```
@inproceedings{kim-etal-2013-gro,
title = "{GRO} Task: Populating the Gene Regulation Ontology with events and relations",
author = "Kim, Jung-jae and
Han, Xu and
Lee, Vivian and
Rebholz-Schuhmann, Dietrich",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2007",
pages = "50--57",
}
```
bigbio/spl_adr_200db | bigbio | 2022-12-22T15:46:56Z | 24 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-12-22T15:46:56Z | 2022-11-13T22:12:21.000Z | 2022-11-13T22:12:21 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: SPL ADR
homepage: https://bionlp.nlm.nih.gov/tac2017adversereactions/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for SPL ADR
## Dataset Description
- **Homepage:** https://bionlp.nlm.nih.gov/tac2017adversereactions/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED,RE
The United States Food and Drug Administration (FDA) partnered with the National Library
of Medicine to create a pilot dataset containing standardised information about known
adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),
the documents FDA uses to exchange information about drugs and other products, were
manually annotated for adverse reactions at the mention level to facilitate development
and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were
then normalised to the Unified Medical Language System (UMLS) and to the Medical
Dictionary for Regulatory Activities (MedDRA).
## Citation Information
```
@article{demner2018dataset,
author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson,
Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph},
title = {A dataset of 200 structured product labels annotated for adverse drug reactions},
journal = {Scientific Data},
volume = {5},
year = {2018},
month = {01},
pages = {180001},
url = {
https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions
},
doi = {10.1038/sdata.2018.1}
}
```
olm/olm-october-2022-tokenized-512 | olm | 2022-11-16T01:47:11Z | 24 | 0 | null | [
"region:us"
] | 2022-11-16T01:47:11Z | 2022-11-16T01:24:02.000Z | 2022-11-16T01:24:02 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 79589759460
num_examples: 25807315
download_size: 21375344353
dataset_size: 79589759460
---
# Dataset Card for "olm-october-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
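Given the schema above (pre-tokenized 512-token sequences with `input_ids`, `attention_mask`, and `special_tokens_mask`), here is a hedged sketch of feeding the data to a masked-language-modeling collator. The card does not say which tokenizer produced the ids, so `roberta-base` is a placeholder.
```python
from itertools import islice

from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Stream to avoid the ~21 GB download.
dataset = load_dataset("olm/olm-october-2022-tokenized-512", split="train", streaming=True)

# Placeholder tokenizer: the card does not state which one produced the ids.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

batch = collator(list(islice(dataset, 8)))
print(batch["input_ids"].shape, batch["labels"].shape)
```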
galman33/gal_yair_83000_256x256_fixed | galman33 | 2022-11-26T13:37:24Z | 24 | 0 | null | [
"region:us"
] | 2022-11-26T13:37:24Z | 2022-11-26T13:33:24.000Z | 2022-11-26T13:33:24 | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype:
class_label:
names:
'0': ad
'1': ae
'2': al
'3': aq
'4': ar
'5': au
'6': bd
'7': be
'8': bg
'9': bm
'10': bo
'11': br
'12': bt
'13': bw
'14': ca
'15': ch
'16': cl
'17': co
'18': cz
'19': de
'20': dk
'21': ec
'22': ee
'23': es
'24': fi
'25': fr
'26': gb
'27': gh
'28': gl
'29': gr
'30': gt
'31': hk
'32': hr
'33': hu
'34': id
'35': ie
'36': il
'37': is
'38': it
'39': ix
'40': jp
'41': kg
'42': kh
'43': kr
'44': la
'45': lk
'46': ls
'47': lt
'48': lu
'49': lv
'50': me
'51': mg
'52': mk
'53': mn
'54': mo
'55': mt
'56': mx
'57': my
'58': nl
'59': 'no'
'60': nz
'61': pe
'62': ph
'63': pl
'64': pt
'65': ro
'66': rs
'67': ru
'68': se
'69': sg
'70': si
'71': sk
'72': sn
'73': sz
'74': th
'75': tn
'76': tr
'77': tw
'78': ua
'79': ug
'80': us
'81': uy
'82': za
- name: image
dtype: image
splits:
- name: train
num_bytes: 8075723633.0
num_examples: 83000
download_size: 8055991198
dataset_size: 8075723633.0
---
# Dataset Card for "gal_yair_83000_256x256_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5873787999153137,
-0.3502446115016937,
0.04338135942816734,
0.21962390840053558,
-0.23490455746650696,
-0.20399075746536255,
0.3051719665527344,
-0.17879854142665863,
0.7889426350593567,
0.6847435832023621,
-0.9150981903076172,
-0.6222034096717834,
-0.46970334649086,
-0.1164649352431297... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_got | liuyanchen1015 | 2022-11-28T22:30:04Z | 24 | 0 | null | [
"region:us"
] | 2022-11-28T22:30:04Z | 2022-11-28T22:29:42.000Z | 2022-11-28T22:29:42 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 6007046
num_examples: 25203
- name: dev_matched
num_bytes: 136053
num_examples: 611
- name: dev_mismatched
num_bytes: 130788
num_examples: 511
- name: test_matched
num_bytes: 152545
num_examples: 644
- name: test_mismatched
num_bytes: 113320
num_examples: 482
download_size: 4055143
dataset_size: 6539752
---
# Dataset Card for "VALUE2_mnli_got"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2565159201622009,
-0.2742760181427002,
0.0970163568854332,
0.07381292432546616,
-0.3090258240699768,
-0.13686048984527588,
0.3742266595363617,
-0.24992527067661285,
0.8421169519424438,
0.5241866111755371,
-0.7089740037918091,
-0.5191863775253296,
-0.7225742936134338,
-0.3027681410312652... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_negative_concord | liuyanchen1015 | 2022-11-28T22:31:52Z | 24 | 0 | null | [
"region:us"
] | 2022-11-28T22:31:52Z | 2022-11-28T22:31:29.000Z | 2022-11-28T22:31:29 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 11131248
num_examples: 49529
- name: dev_matched
num_bytes: 266084
num_examples: 1192
- name: dev_mismatched
num_bytes: 272231
num_examples: 1203
- name: test_matched
num_bytes: 255070
num_examples: 1140
- name: test_mismatched
num_bytes: 282348
num_examples: 1214
download_size: 7641405
dataset_size: 12206981
---
# Dataset Card for "VALUE2_mnli_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5332750082015991,
-0.14228618144989014,
0.2143496572971344,
0.18916304409503937,
-0.4134552776813507,
-0.2692071795463562,
0.3680807948112488,
-0.23022785782814026,
0.8771419525146484,
0.3604344427585602,
-0.6915951371192932,
-0.7721416354179382,
-0.557410717010498,
-0.27206388115882874... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_null_relcl | liuyanchen1015 | 2022-11-28T22:33:19Z | 24 | 0 | null | [
"region:us"
] | 2022-11-28T22:33:19Z | 2022-11-28T22:32:57.000Z | 2022-11-28T22:32:57 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 12182834
num_examples: 45899
- name: dev_matched
num_bytes: 297057
num_examples: 1123
- name: dev_mismatched
num_bytes: 365012
num_examples: 1361
- name: test_matched
num_bytes: 303649
num_examples: 1153
- name: test_mismatched
num_bytes: 344268
num_examples: 1329
download_size: 8501673
dataset_size: 13492820
---
# Dataset Card for "VALUE2_mnli_null_relcl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.18104879558086395,
-0.32862168550491333,
0.03217916563153267,
0.14003442227840424,
-0.4217124879360199,
-0.05921272560954094,
0.4262823760509491,
-0.18182998895645142,
0.8869449496269226,
0.492306649684906,
-0.7406244874000549,
-0.6835967898368835,
-0.5511884689331055,
-0.26722821593284... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deancgarcia/cs4990_hw3 | deancgarcia | 2022-11-30T03:14:44Z | 24 | 0 | null | [
"region:us"
] | 2022-11-30T03:14:44Z | 2022-11-28T22:33:11.000Z | 2022-11-28T22:33:11 | [Needs More Information]
# Dataset Card for Trains
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Each record in the file contains information about one particular shift that an engineer or conductor worked. Clock-in and clock-out information, plus many statistics, are provided. One column, named 'class', is actually the target. This column contains an integer that can have one of three values:
- 0: No accident occurred during this shift
- 1: An accident of type '1' occurred during this shift
- 2: An accident of type '2' occurred during this shift
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- FIRM
- Class
- Start
- End
- Length
- Night
- Gap
- WS
- idx
- Base
- StartAdj
- LenAdj
- Comp
- Trans
- Press
- p1s
- p1l
- p2s
- p2l
- MalAdj
- NFZ
- AFZ
- MFZ
### Data Splits
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | [
-0.33901742100715637,
-0.2173268049955368,
0.252930611371994,
0.4533343017101288,
-0.20640285313129425,
-0.010427407920360565,
-0.12921755015850067,
-0.34707048535346985,
0.536580502986908,
0.6200581789016724,
-0.7408204078674316,
-0.8172300457954407,
-0.5682580471038818,
0.042031437158584... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_uninflect | liuyanchen1015 | 2022-11-28T22:34:03Z | 24 | 0 | null | [
"region:us"
] | 2022-11-28T22:34:03Z | 2022-11-28T22:33:39.000Z | 2022-11-28T22:33:39 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 29268351
num_examples: 124447
- name: dev_matched
num_bytes: 703766
num_examples: 3056
- name: dev_mismatched
num_bytes: 768556
num_examples: 3170
- name: test_matched
num_bytes: 714516
num_examples: 3095
- name: test_mismatched
num_bytes: 790706
num_examples: 3309
download_size: 20940263
dataset_size: 32245895
---
# Dataset Card for "VALUE2_mnli_uninflect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3195030391216278,
-0.1302141696214676,
-0.106561079621315,
0.049458298832178116,
-0.4145963788032532,
-0.02207779698073864,
0.297651082277298,
-0.2047419250011444,
0.8218830227851868,
0.5581382513046265,
-0.7706625461578369,
-0.33079245686531067,
-0.5433617234230042,
-0.3083249032497406... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarkecody/everettImage | clarkecody | 2022-11-28T23:14:45Z | 24 | 0 | null | [
"region:us"
] | 2022-11-28T23:14:45Z | 2022-11-28T23:06:45.000Z | 2022-11-28T23:06:45 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eminecg/petitions_29-ds | eminecg | 2022-11-29T00:08:59Z | 24 | 0 | null | [
"region:us"
] | 2022-11-29T00:08:59Z | 2022-11-29T00:08:53.000Z | 2022-11-29T00:08:53 | ---
dataset_info:
features:
- name: petition
dtype: string
- name: petition_length
dtype: int64
splits:
- name: train
num_bytes: 30457698.3
num_examples: 2475
- name: validation
num_bytes: 3384188.7
num_examples: 275
download_size: 15645193
dataset_size: 33841887.0
---
# Dataset Card for "petitions_29-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5790992379188538,
-0.09975740313529968,
0.3836326599121094,
0.33390334248542786,
-0.3412802815437317,
0.1072080209851265,
0.25154292583465576,
0.14466987550258636,
0.929023265838623,
0.7397525906562805,
-0.8069413900375366,
-0.8510161638259888,
-0.8819711804389954,
-0.310145765542984,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-1fbe7e90-eada-4d68-89d2-f46803a319c3-101100 | autoevaluate | 2022-11-29T05:48:28Z | 24 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T05:48:28Z | 2022-11-29T05:47:49.000Z | 2022-11-29T05:47:49 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361605286598206,
-0.33383142948150635,
0.2989133596420288,
0.17618133127689362,
-0.16354314982891083,
0.03615495190024376,
0.020895475521683693,
-0.39217695593833923,
0.12184618413448334,
0.3618122935295105,
-0.9186378717422485,
-0.21669870615005493,
-0.770520806312561,
-0.01348786149... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/test-doc-assets | nateraw | 2022-11-29T07:34:58Z | 24 | 0 | null | [
"region:us"
] | 2022-11-29T07:34:58Z | 2022-11-29T06:34:13.000Z | 2022-11-29T06:34:13 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biomegix/soap_inital | biomegix | 2022-11-29T07:36:46Z | 24 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-11-29T07:36:46Z | 2022-11-29T07:33:15.000Z | 2022-11-29T07:33:15 | ---
license: apache-2.0
---
Initial version of the SOAP dataset. | [
-0.29858365654945374,
0.2657223343849182,
0.041347164660692215,
0.5031360387802124,
-0.73179692029953,
-0.03788340836763382,
-0.056779682636260986,
-0.4884548783302307,
-0.2462991625070572,
1.0260063409805298,
-0.5692119002342224,
-0.5434007048606873,
-0.6395899653434753,
0.052976600825786... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
m-aliabbas/idrak_unsplitted | m-aliabbas | 2022-11-30T05:17:29Z | 24 | 0 | null | [
"region:us"
] | 2022-11-30T05:17:29Z | 2022-11-29T08:33:37.000Z | 2022-11-29T08:33:37 | This dataset is an unsplit version of the idrak dataset. | [
-0.2467723935842514,
-0.2686721086502075,
-0.1235620304942131,
0.2611215114593506,
-0.7757191061973572,
0.5948682427406311,
-0.07615390419960022,
-0.11759629845619202,
0.8754916787147522,
1.2728869915008545,
-0.6975119113922119,
-0.5589945316314697,
-0.22598174214363098,
-0.386831581592559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-1eb4eb5e-abe1-49b9-90a0-e2c93c094b24-104103 | autoevaluate | 2022-11-29T09:07:05Z | 24 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T09:07:05Z | 2022-11-29T09:06:28.000Z | 2022-11-29T09:06:28 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361605286598206,
-0.33383142948150635,
0.2989133596420288,
0.17618133127689362,
-0.16354314982891083,
0.03615495190024376,
0.020895475521683693,
-0.39217695593833923,
0.12184618413448334,
0.3618122935295105,
-0.9186378717422485,
-0.21669870615005493,
-0.770520806312561,
-0.01348786149... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/trivia_as2 | lucadiliello | 2022-11-29T11:25:26Z | 24 | 0 | null | [
"region:us"
] | 2022-11-29T11:25:26Z | 2022-11-29T11:20:09.000Z | 2022-11-29T11:20:09 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 419044714
num_examples: 1843349
- name: dev
num_bytes: 26773779
num_examples: 117012
- name: test
num_bytes: 26061784
num_examples: 114853
download_size: 184246492
dataset_size: 471880277
---
# Dataset Card for "trivia_as2"
Answer Sentence Selection version of the TriviaQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection). | [
-0.35976409912109375,
-0.48050475120544434,
0.12720850110054016,
0.15487411618232727,
-0.452940970659256,
0.18678756058216095,
0.17295213043689728,
-0.3347873091697693,
0.5089443325996399,
0.6799176931381226,
-0.7282316088676453,
-0.3922843933105469,
-0.2010209709405899,
-0.041814424097537... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/search_as2 | lucadiliello | 2022-11-29T11:25:45Z | 24 | 0 | null | [
"region:us"
] | 2022-11-29T11:25:45Z | 2022-11-29T11:20:42.000Z | 2022-11-29T11:20:42 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 758023208
num_examples: 3281909
- name: dev
num_bytes: 55656603
num_examples: 236360
- name: test
num_bytes: 55473661
num_examples: 236792
download_size: 332417156
dataset_size: 869153472
---
# Dataset Card for "search_as2"
Answer Sentence Selection version of the SearchQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection). | [
-0.38201195001602173,
-0.39637964963912964,
0.1405663788318634,
0.06261789798736572,
-0.2319328486919403,
-0.06974628567695618,
0.21569912135601044,
-0.1800236850976944,
0.5290212631225586,
0.6649927496910095,
-0.7804542779922485,
-0.20841442048549652,
-0.2395046055316925,
-0.0731019154191... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alecsharpie/nailbiting_classification | alecsharpie | 2022-11-30T07:12:04Z | 24 | 0 | acronym-identification | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"nailbiting",
"image",
"preprocesses",
"region:us"
] | 2022-11-30T07:12:04Z | 2022-11-30T06:02:22.000Z | 2022-11-30T06:02:22 | ---
annotations_creators:
- expert-generated
- machine-generated
language:
- en
language_creators: []
license:
- mit
multilinguality: []
paperswithcode_id: acronym-identification
pretty_name: Nailbiting Classification
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- nailbiting
- image
- preprocesses
task_categories:
- image-classification
task_ids: []
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biting
'1': no_biting
splits:
- name: train
num_bytes: 11965731.715
num_examples: 6629
- name: test
num_bytes: 1485426.0
num_examples: 736
download_size: 11546517
dataset_size: 13451157.715
---
# Dataset Card for Nail Biting Classification
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://huggingface.co/datasets/alecsharpie/nailbiting_classification](https://huggingface.co/datasets/alecsharpie/nailbiting_classification)
- **Repository:** [https://github.com/alecsharpie/nomo_nailbiting](https://github.com/alecsharpie/nomo_nailbiting)
- **Point of Contact:** [alecsharpie@gmail.com](mailto:alecsharpie@gmail.com)
### Dataset Summary
A binary image dataset for classifying nail biting. Images are cropped to show only the mouth area.
The "no biting" category includes edge cases such as drinking water, talking on the phone, and scratching the chin.
## Dataset Structure
### Data Instances
- 7147 Images
- 14879790 bytes total
- 12332617 bytes download
### Data Fields
- Image size: 128 x 64 pixels (w x h)
- Colour: black and white
- Labels:
  - '0': biting
  - '1': no_biting
### Data Splits
- train: 6629 (11965737 bytes)
- test: 1471 (2914053 bytes)
## Dataset Creation
### Curation Rationale
I wanted to create a notification system to help me stop biting my nails. The dataset needed to cover many possible no-biting scenarios, e.g. talking on the phone.
### Source Data
#### Initial Data Collection and Normalization
The data was scraped from stock image sites, and photos of myself were taken with my webcam.
MTCNN (https://github.com/ipazc/mtcnn) was then used to crop the images down to show only the mouth area.
The images were then converted to a black-and-white colour scheme.
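A minimal sketch of this cropping step, assuming the `mtcnn` package and OpenCV; the padding and output handling are illustrative guesses, not the exact pipeline used:
```python
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def crop_mouth(image_bgr, pad=20):
    # MTCNN expects RGB; it returns face boxes plus keypoints, including the mouth corners.
    faces = detector.detect_faces(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not faces:
        return None
    kp = faces[0]["keypoints"]
    (x1, y1), (x2, y2) = kp["mouth_left"], kp["mouth_right"]
    left, right = max(min(x1, x2) - pad, 0), max(x1, x2) + pad
    top, bottom = max(min(y1, y2) - pad, 0), max(y1, y2) + pad
    mouth = image_bgr[top:bottom, left:right]
    # Match the card's format: black and white, 128 x 64 (w x h).
    gray = cv2.cvtColor(mouth, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (128, 64))
```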
### Annotations
#### Annotation process
During the scraping process, images were labelled with a description, which I then manually sanity-checked. I labelled the photos of myself by hand.
#### Who are the annotators?
Alec Sharp
## Considerations for Using the Data
### Discussion of Biases & Limitations
I tried to make the dataset diverse in terms of age and skin tone. However, the dataset contains a large number of images of one subject (me), so it is biased towards lower-quality webcam pictures of a white male with a short beard.
### Dataset Curators
Alec Sharp
### Licensing Information
MIT
### Contributions
Thanks to [@alecsharpie](https://github.com/alecsharpie) for adding this dataset. | [
-0.18486613035202026,
-0.729896068572998,
0.25581082701683044,
0.40130457282066345,
-0.4753198027610779,
0.16453547775745392,
-0.08942489326000214,
-0.5615441799163818,
1.0238845348358154,
0.22900496423244476,
-0.4594082832336426,
-1.2219551801681519,
-0.5384485721588135,
0.079688161611557... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
memray/krapivin | memray | 2022-12-31T06:14:07Z | 24 | 1 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-12-31T06:14:07Z | 2022-12-31T06:13:03.000Z | 2022-12-31T06:13:03 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
memray/semeval | memray | 2022-12-31T06:16:14Z | 24 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-12-31T06:16:14Z | 2022-12-31T06:15:55.000Z | 2022-12-31T06:15:55 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aashsach/legaleval_rr | aashsach | 2023-01-26T08:06:40Z | 24 | 0 | null | [
"region:us"
] | 2023-01-26T08:06:40Z | 2023-01-05T10:20:02.000Z | 2023-01-05T10:20:02 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ziyang/yfcc15m | Ziyang | 2023-01-06T10:38:29Z | 24 | 1 | null | [
"region:us"
] | 2023-01-06T10:38:29Z | 2023-01-06T09:35:17.000Z | 2023-01-06T09:35:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GBaker/MedQA-USMLE-4-options-hf | GBaker | 2023-01-30T22:57:33Z | 24 | 3 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-01-30T22:57:33Z | 2023-01-24T20:32:54.000Z | 2023-01-24T20:32:54 | ---
license: cc-by-sa-4.0
---
Original dataset introduced by Jin et al. in [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://paperswithcode.com/paper/what-disease-does-this-patient-have-a-large)
#### Citation Information
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
} | [
-0.25376224517822266,
-0.7809584140777588,
0.6673122644424438,
-0.4187665283679962,
0.09132136404514313,
-0.6195076107978821,
-0.11089139431715012,
-0.45610755681991577,
0.532005250453949,
0.7576548457145691,
-0.5569379329681396,
-0.4520467221736908,
-0.3654973804950714,
0.1213310807943344... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fscheffczyk/20newsgroups_embeddings | fscheffczyk | 2023-02-05T17:59:34Z | 24 | 0 | null | [
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"task_categories:question-answering",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"news",
"20newsgroups",
"region:us"
] | 2023-02-05T17:59:34Z | 2023-02-05T17:48:30.000Z | 2023-02-05T17:48:30 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: Feature vector embeddings of the 20newsgroup dataset
size_categories:
- unknown
source_datasets:
- 20newsgroups dataset: http://qwone.com/~jason/20Newsgroups/
tags:
- news
- 20newsgroups
task_categories:
- feature-extraction
- sentence-similarity
- question-answering
task_ids: []
---
# Dataset Card for feature vector embeddings of the 20newsgroups dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains vector embeddings of the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
The embeddings were created with the [Sentence Transformers library](https://www.sbert.net/index.html) using the `multi-qa-MiniLM-L6-cos-v1` model.
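A minimal sketch of how such embeddings can be reproduced, assuming the `sentence-transformers` and `scikit-learn` packages (the exact preprocessing used for this dataset is not documented here):
```python
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer

# Load the raw 20 newsgroups posts and the model named in this card.
texts = fetch_20newsgroups(subset="all").data
model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

# encode() returns one fixed-size vector per input text.
embeddings = model.encode(texts[:100], show_progress_bar=True)
print(embeddings.shape)  # (100, 384) for this model
```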
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.7143865823745728,
-0.43799838423728943,
0.07579035311937332,
0.39784863591194153,
-0.14884454011917114,
0.2561889588832855,
-0.2787840664386749,
-0.18345090746879578,
0.5127387046813965,
0.5248367190361023,
-0.7724593877792358,
-1.0544801950454712,
-0.6999971270561218,
0.095562785863876... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fscheffczyk/2D_20newsgroups_embeddings | fscheffczyk | 2023-02-05T18:57:29Z | 24 | 0 | null | [
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"task_categories:question-answering",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|fscheffczyk/20newsgroups_embeddings",
"language:en",
"news",
"20newsgroups",
"region:us"
] | 2023-02-05T18:57:29Z | 2023-02-05T18:52:06.000Z | 2023-02-05T18:52:06 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: Dimensional reduced feature vector embeddings of the 20newsgroup dataset
size_categories:
- unknown
source_datasets:
- extended|fscheffczyk/20newsgroups_embeddings
tags:
- news
- 20newsgroups
task_categories:
- feature-extraction
- sentence-similarity
- question-answering
task_ids: []
---
# Dataset Card for dimensionality-reduced feature vector embeddings of the 20newsgroups dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains dimensionality-reduced vector embeddings of the [20newsgroups dataset](http://qwone.com/~jason/20Newsgroups/); each embedding has two dimensions.
The dimensionality-reduced embeddings were created with the [TruncatedSVD function](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD) from the [scikit-learn library](https://scikit-learn.org/stable/index.html).
These reduced feature vectors are based on the [fscheffczyk/20newsgroups_embeddings dataset](https://huggingface.co/datasets/fscheffczyk/20newsgroups_embeddings).
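A minimal sketch of the reduction step; the stand-in array below is an illustrative placeholder for the source dataset's vectors:
```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Stand-in for the source embeddings (fscheffczyk/20newsgroups_embeddings vectors).
embeddings = np.random.rand(100, 384)

# Project down to the two dimensions stored in this dataset.
svd = TruncatedSVD(n_components=2)
embeddings_2d = svd.fit_transform(embeddings)
print(embeddings_2d.shape)  # (100, 2)
```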
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.6921308040618896,
-0.3822716772556305,
0.08390817046165466,
0.3990531861782074,
-0.2147091031074524,
0.23453602194786072,
-0.41358625888824463,
-0.17840448021888733,
0.5594889521598816,
0.4483714699745178,
-0.7278729677200317,
-1.08460533618927,
-0.6772922873497009,
-0.01807671412825584... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shahules786/OA-cornell-movies-dialog | shahules786 | 2023-02-10T05:34:43Z | 24 | 3 | null | [
"region:us"
] | 2023-02-10T05:34:43Z | 2023-02-07T15:21:28.000Z | 2023-02-07T15:21:28 | ---
dataset_info:
features:
- name: conversation
dtype: string
splits:
- name: train
num_bytes: 9476338
num_examples: 20959
download_size: 4859997
dataset_size: 9476338
---
# Dataset Card for Open Assistant Cornell Movies Dialog
## Dataset Summary
The dataset was created using [Cornell Movies Dialog Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) which contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts.
Dialogs and metadata from the underlying corpus were used to design a dataset that can be used to train InstructGPT-style models on movie scripts.
Example :
```
User: Assume RICK and ALICE are characters from a fantasy-horror movie, continue the conversation between them
RICK: I heard you screaming. Was it a bad one?
ALICE: It was bad.
RICK: Doesn't the dream master work for you anymore?
Assistant: Sure
ALICE: I can't find him.
RICK: Hey, since when do you play Thomas Edison? This looks like Sheila's.
ALICE: It is...was. It's a zapper, it might help me stay awake.
RICK: Yeah, or turn you into toast.
```
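Each `conversation` is stored as a single string; a minimal sketch for splitting it back into speaker turns, assuming the `User:`/`Assistant:` markers shown above (the function name is illustrative):
```python
import re

def split_turns(conversation: str):
    # Split on the speaker markers while keeping them, then pair each marker with its text.
    parts = re.split(r"(User:|Assistant:)", conversation)
    markers, texts = parts[1::2], parts[2::2]
    return [(m.rstrip(":"), t.strip()) for m, t in zip(markers, texts)]
```
Character lines such as `RICK:` stay inside the enclosing turn, since only the two top-level speaker markers are split on.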
## Citations
```
@InProceedings{Danescu-Niculescu-Mizil+Lee:11a,
author={Cristian Danescu-Niculescu-Mizil and Lillian Lee},
title={Chameleons in imagined conversations:
A new approach to understanding coordination of linguistic style in dialogs.},
booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011},
year={2011}
}
``` | [
-0.3101365864276886,
-0.9496034383773804,
0.24769166111946106,
-0.31955745816230774,
-0.08581972122192383,
0.07610790431499481,
-0.40705180168151855,
-0.09493564069271088,
0.2941287159919739,
0.48713141679763794,
-0.5320670008659363,
-0.488018661737442,
-0.15639029443264008,
0.164104729890... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IlyaGusev/habr | IlyaGusev | 2023-03-09T23:16:35Z | 24 | 13 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"language:en",
"region:us"
] | 2023-03-09T23:16:35Z | 2023-02-10T20:36:09.000Z | 2023-02-10T20:36:09 | ---
dataset_info:
features:
- name: id
dtype: uint32
- name: language
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: original_author
dtype: string
- name: original_url
dtype: string
- name: lead_html
dtype: string
- name: lead_markdown
dtype: string
- name: type
dtype: string
- name: time_published
dtype: uint64
- name: statistics
struct:
- name: commentsCount
dtype: uint32
- name: favoritesCount
dtype: uint32
- name: readingCount
dtype: uint32
- name: score
dtype: int32
- name: votesCount
dtype: int32
- name: votesCountPlus
dtype: int32
- name: votesCountMinus
dtype: int32
- name: labels
sequence: string
- name: hubs
sequence: string
- name: flows
sequence: string
- name: tags
sequence: string
- name: reading_time
dtype: uint32
- name: format
dtype: string
- name: complexity
dtype: string
- name: comments
sequence:
- name: id
dtype: uint64
- name: parent_id
dtype: uint64
- name: level
dtype: uint32
- name: time_published
dtype: uint64
- name: score
dtype: int32
- name: votes
dtype: uint32
- name: message_html
dtype: string
- name: message_markdown
dtype: string
- name: author
dtype: string
- name: children
sequence: uint64
splits:
- name: train
num_bytes: 19968161329
num_examples: 302049
download_size: 3485570346
dataset_size: 19968161329
task_categories:
- text-generation
language:
- ru
- en
size_categories:
- 100K<n<1M
---
# Habr dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of posts and comments from [habr.com](https://habr.com/ru/all/), a Russian collaborative blog about IT, computer science and anything related to the Internet.
**Script:** [create_habr.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py)
**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)
**Languages:** Russian, English, some programming code.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/habr', split="train", streaming=True)
for example in dataset:
print(example["text_markdown"])
```
## Data Instances
```
{
"id": 12730,
"language": "ru",
"url": "https://habr.com/ru/post/12730/",
"text_markdown": "...",
"text_html": "...",
"lead_markdown": "...",
"lead_html": "...",
"type": "article",
"labels": [],
"original_author": null,
"original_url": null,
"time_published": 1185962380,
"author": "...",
"title": "Хочешь в университет — сделай презентацию",
"statistics": {
"commentsCount": 23,
"favoritesCount": 1,
"readingCount": 1542,
"score": 7,
"votesCount": 15,
"votesCountPlus": 11,
"votesCountMinus": 4
},
"hubs": [
"itcompanies"
],
"flows": [
"popsci"
],
"tags": [
"PowerPoint",
"презентация",
"абитуриенты",
],
"reading_time": 1,
"format": null,
"complexity": null,
"comments": {
"id": [11653537, 11653541],
"parent_id": [null, 11653537],
"level": [0, 1],
"time_published": [1185963192, 1185967886],
"score": [-1, 0],
"votes": [1, 0],
"message_html": ["...", "..."],
"author": ["...", "..."],
"children": [[11653541], []]
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
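For example, applied to the flattened `comments` field of an `example` from the iteration snippet above:
```python
comments = revert_flattening(example["comments"])
print(comments[0]["message_html"])
```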
The original JSONL is already unflattened.
## Source Data
* The data source is the [Habr](https://habr.com/) website.
* API call example: [post 709430](https://habr.com/kek/v2/articles/709430).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
| [
-0.3920712471008301,
-0.6313307881355286,
0.06829362362623215,
0.3128443956375122,
-0.27561596035957336,
0.08144842833280563,
-0.28537264466285706,
0.037497617304325104,
0.3443523645401001,
0.3406999409198761,
-0.46140357851982117,
-0.8029592633247375,
-0.3093869686126709,
0.21272046864032... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/Brazilian_Cerrado-Savanna_Scenes | jonathan-roberts1 | 2023-03-31T15:28:58Z | 24 | 0 | null | [
"task_categories:zero-shot-image-classification",
"task_categories:image-classification",
"license:other",
"region:us"
] | 2023-03-31T15:28:58Z | 2023-02-14T18:28:02.000Z | 2023-02-14T18:28:02 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': agriculture
'1': arboreal vegetation
'2': herbaceous vegetation
'3': shrubby vegetation
splits:
- name: train
num_bytes: 16933385.557
num_examples: 1311
download_size: 14574976
dataset_size: 16933385.557
license: other
task_categories:
- zero-shot-image-classification
- image-classification
---
# Dataset Card for "Brazilian_Cerrado-Savanna_Scenes"
## Dataset Description
- **Paper:** [Towards vegetation species discrimination by using data-driven descriptors](https://vision.unipv.it/CV/materiale2016-17/3rd%20Choice/0022.pdf)
### Licensing Information
[CC BY-NC]
## Citation Information
[Towards vegetation species discrimination by using data-driven descriptors](https://vision.unipv.it/CV/materiale2016-17/3rd%20Choice/0022.pdf)
```
@inproceedings{nogueira2016towards,
title = {Towards vegetation species discrimination by using data-driven descriptors},
author = {Nogueira, Keiller and Dos Santos, Jefersson A and Fornazari, Tamires and Silva, Thiago Sanna Freire and Morellato, Leonor Patricia and Torres, Ricardo da S},
year = 2016,
booktitle = {2016 9th IAPR Workshop on Pattern Recogniton in Remote Sensing (PRRS)},
pages = {1--6},
organization = {Ieee}
}
``` | [
-0.5005304217338562,
-0.4491141140460968,
0.15396015346050262,
0.4505213797092438,
-0.37656456232070923,
0.04019315168261528,
-0.09325115382671356,
-0.6164532899856567,
0.2228473722934723,
0.3568689823150635,
-0.3013319671154022,
-0.773417592048645,
-0.45931655168533325,
0.3339221775531769... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/GID | jonathan-roberts1 | 2023-03-31T15:38:31Z | 24 | 0 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | 2023-03-31T15:38:31Z | 2023-02-15T16:42:03.000Z | 2023-02-15T16:42:03 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': arbor woodland
'1': artificial grassland
'2': dry cropland
'3': garden plot
'4': industrial land
'5': irrigated land
'6': lake
'7': natural grassland
'8': paddy field
'9': pond
'10': river
'11': rural residential
'12': shrub land
'13': traffic land
'14': urban residential
splits:
- name: train
num_bytes: 1777210275
num_examples: 30000
download_size: 1263253291
dataset_size: 1777210275
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "GID"
## Dataset Description
- **Paper:** [Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
### Licensing Information
Public domain.
## Citation Information
[Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
```
@article{GID2020,
title = {Land-cover classification with high-resolution remote sensing images using transferable deep models},
author = {Tong, Xin-Yi and Xia, Gui-Song and Lu, Qikai and Shen, Huanfeng and Li, Shengyang and You, Shucheng and Zhang, Liangpei},
year = 2020,
journal = {Remote Sensing of Environment},
volume = 237,
pages = 111322
}
``` | [
-0.542556643486023,
-0.27971649169921875,
0.09729309380054474,
-0.13363489508628845,
-0.3181321918964386,
-0.005632339511066675,
-0.053463879972696304,
-0.32323572039604187,
-0.16717025637626648,
0.6692365407943726,
-0.3218107223510742,
-0.8204640746116638,
-0.7710310816764832,
-0.37705749... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/MLRSNet | jonathan-roberts1 | 2023-04-03T16:34:12Z | 24 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-04-03T16:34:12Z | 2023-02-27T18:19:58.000Z | 2023-02-27T18:19:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
sequence:
class_label:
names:
'0': airplane
'1': airport
'2': bare soil
'3': baseball diamond
'4': basketball court
'5': beach
'6': bridge
'7': buildings
'8': cars
'9': chaparral
'10': cloud
'11': containers
'12': crosswalk
'13': dense residential area
'14': desert
'15': dock
'16': factory
'17': field
'18': football field
'19': forest
'20': freeway
'21': golf course
'22': grass
'23': greenhouse
'24': gully
'25': habor
'26': intersection
'27': island
'28': lake
'29': mobile home
'30': mountain
'31': overpass
'32': park
'33': parking lot
'34': parkway
'35': pavement
'36': railway
'37': railway station
'38': river
'39': road
'40': roundabout
'41': runway
'42': sand
'43': sea
'44': ships
'45': snow
'46': snowberg
'47': sparse residential area
'48': stadium
'49': swimming pool
'50': tanks
'51': tennis court
'52': terrace
'53': track
'54': trail
'55': transmission tower
'56': trees
'57': water
'58': wetland
'59': wind turbine
splits:
- name: train
num_bytes: 1327782862.875
num_examples: 109161
download_size: 1304951717
dataset_size: 1327782862.875
license: cc-by-4.0
---
# Dataset Card for "MLRSNet"
## Dataset Description
- **Paper:** [MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding](https://www.sciencedirect.com/science/article/pii/S0924271620302677)
### Licensing Information
CC BY 4.0
## Citation Information
[MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding](https://www.sciencedirect.com/science/article/pii/S0924271620302677)
```
@article{qi2020mlrsnet,
title = {MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding},
author = {Qi, Xiaoman and Zhu, Panpan and Wang, Yuebin and Zhang, Liqiang and Peng, Junhuan and Wu, Mengfan and Chen, Jialong and Zhao, Xudong and Zang, Ning and Mathiopoulos, P Takis},
year = 2020,
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
publisher = {Elsevier},
volume = 169,
pages = {337--350}
}
``` | [
-0.5979433059692383,
-0.29603636264801025,
0.07871547341346741,
0.08017546683549881,
-0.12578268349170685,
-0.4020402133464813,
-0.05008561909198761,
-0.4880200922489166,
0.052737362682819366,
0.48813122510910034,
-0.8451499938964844,
-0.8537752032279968,
-0.7209334373474121,
0.13527925312... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
urialon/summ_screen_validation | urialon | 2023-02-28T16:39:22Z | 24 | 0 | null | [
"region:us"
] | 2023-02-28T16:39:22Z | 2023-02-28T16:39:07.000Z | 2023-02-28T16:39:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
martinjosifoski/SynthIE | martinjosifoski | 2023-03-06T21:59:52Z | 24 | 4 | null | [
"language:en",
"license:mit",
"arxiv:2303.04132",
"region:us"
] | 2023-03-06T21:59:52Z | 2023-03-03T12:15:35.000Z | 2023-03-03T12:15:35 | ---
license: mit
language:
- en
pretty_name: SynthIE
---
# Dataset Card for SynthIE
## Dataset Description
- **Homepage and Repository:** https://github.com/epfl-dlab/SynthIE
- **Paper:** https://arxiv.org/abs/2303.04132
### Dataset Summary
[Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction](https://arxiv.org/abs/2303.04132) builds on the idea that even for hard tasks of interest (with input X and output Y) -- for which human annotation is not practical and high-quality annotated data is not available -- useful data can be generated synthetically by reversing the task (from Y to X), even when the original task cannot be solved directly by the LLM. This process enables the creation of a high-quality dataset of X-Y pairs that supports training/fine-tuning models for the original task of interest.
In particular, the paper studies the idea in the context of closed information extraction (IE), where a model is tasked with extracting the exhaustive set of facts expressed in natural language text. The synthetic data generation pipeline proposed in the paper comprises three primary components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. For more details regarding the dataset construction procedure, see the [paper](https://arxiv.org/abs/2303.04132).
We used this pipeline to generate two large high-quality datasets:<br>
**SynthIE-code**: consisting of around 1.8M training, 10K validation, and 50K test samples generated with [code-davinci-002](https://platform.openai.com/docs/models/gpt-3-5) <br>
**SynthIE-text**: consisting of 10K validation and 50K test samples generated with [text-davinci-003](https://platform.openai.com/docs/models/gpt-3-5) <br>
The text for the validation and test data points in SynthIE-code and SynthIE-text corresponds to the same triplet sets.
The resulting data is then used to train [SynthIE](https://github.com/epfl-dlab/SynthIE), a series of T5-based versions of [GenIE](https://github.com/epfl-dlab/GenIE) -- a recently proposed autoregressive closed IE system -- and to enable a more accurate evaluation. As a baseline, T5 versions of GenIE are trained on [REBEL](https://aclanthology.org/2021.findings-emnlp.204.pdf), the same dataset as in the original work. The (processed) version of this dataset, suitable for closed IE and used in the paper's experiments, is provided in this repository.
According to the human evaluation conducted in the paper, the synthetically generated data is substantially more faithful than the distantly supervised REBEL and contains around 15% false negative (as opposed to REBEL's 70%) and 22% false positive (as opposed to REBEL's 56%) annotations while uniformly covering all relations (see the paper for more details).
### Languages
To stay comparable to GenIE, [SynthIE](https://github.com/epfl-dlab/SynthIE) considers only English. Therefore, the text in SynthIE-code and SynthIE-text is generated in English only. However, the triplets' constituents come from WikiData and are language invariant. Therefore, triplet sets with labels for many languages can easily be obtained.
## Dataset Structure
The SynthIE meta-dataset comprises three datasets:
- **SynthIE-code** (`synthie_code`)
- **SynthIE-text** (`synthie_text`)
- **REBEL** (`rebel`)
**SynthIE-code**
The samples in this dataset were generated with `code-davinci-002`.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | 1,815,378 | 10,000 | 50,286 |
| Triplets | 6,055,911 | 34,262 | 172,991 |
| Entities | 1,806,126 | 27,553 | 105,176 |
| Relations | 888 | 883 | 888 |
**SynthIE-text**
The samples in this dataset were generated with `text-davinci-003`.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | -- | 10,000 | 50,286 |
| Triplets | -- | 34,262 | 172,991 |
| Entities | -- | 27,553 | 105,176 |
| Relations | -- | 883 | 888 |
**REBEL**
The samples in this dataset are processed and further annotated versions of the existing [REBEL](https://huggingface.co/datasets/Babelscape/rebel-dataset) dataset.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | 2,813,210 | 155,926 | 156,449 |
| Triplets | 7,187,915 | 397,326 | 398,252 |
| Entities | 2,038,741 | 205,080 | 205,549 |
| Relations | 1071 | 691 | 690 |
Note that REBEL is substantially more skewed than SynthIE-code and SynthIE-text. Here are the relation-frequency statistics (in terms of data points) for REBEL and SynthIE-code.
| | min | 1st quantile | median | 3rd quantile | max |
| ----- | ----- | ----- | ----- | ----- | ----- |
| SynthIE-code | 61 | 1043 | 1691 | 3944 | 499,783 |
| REBEL | 1 | 7 | 47 | 625 | 1,202,489 |
**SynthIE-code/SynthIE-text/REBEL processed**
Additionally, we provide a processed version (that was used in the paper) of each dataset. The processing consists of pre-computations that speed up data loading for the experiments. The key difference is that in the processed versions of SynthIE-code and SynthIE-text, the target triplets are consistently ordered according to a heuristic detecting the constituent entities' appearance position in the text, with triplets whose entities appear earlier in the text placed earlier in the output linearization (cf. paper). The triplets for REBEL are ordered even in the "unprocessed" version. To load the processed version of a dataset, add the suffix "_pc" to the original identifier (i.e., synthie_code_pc, synthie_text_pc, rebel_pc). The processing is performed by applying [this](https://github.com/epfl-dlab/SynthIE/blob/main/scripts/pre_computing.py) script to the original data.
### Data Fields
All of the datasets share the same schema. Here is a list of the fields paired with a description.
- `id`: A unique numeric identifier, starting from 0 for each dataset.
- `text`: A string expressing the text corresponding to this sample.
- `triplets`: A list of triplets that are expressed in the text. Each triplet corresponds to a dictionary
- `subject`: The subject refers to an entity. It is a dictionary of:
- `surfaceform`: A textual label corresponding to the title of the entity's English Wikipedia page
- `uri`: A string corresponding to the entity's WikiData identifier
- `relation`: The relation refers to a relation. It is a dictionary of:
- `surfaceform`: The textual label assigned to the WikiData item corresponding to the given relation.
- `uri`: A string corresponding to the relation's WikiData identifier
- `object`: Same as the subject, the object refers to an entity and corresponds to a dictionary with the same structure.
- `entities`: A list comprising all the entities expressed in the text (appearing as a subject or an object in any of the triplets). Each entity is expressed as a dictionary following the same structure as the `subject` and `object` entities in the triplet list.
- `relations`: A list comprising all the relations expressed in the text (appearing as the relation in any of the triplets). Each relation is expressed as a dictionary following the same structure as the `relation` in the triplet list.
Here is an example of a data point:
```
{'id': 1,
'text': 'The Journal of Colloid and Interface Science is a bibliographic '
'review indexed in Scopus and published by Elsevier. Its main subject '
'is chemical engineering, and it is written in the English language. '
'It is based in the United States, and is owned by Elsevier, the same '
'company that owns Scopus.',
'triplets': [{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'indexed in bibliographic "
"review', 'uri': 'P8875'}",
'object': "{'surfaceform': 'Scopus', 'uri': 'Q371467'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'main subject', 'uri': 'P921'}",
'object': "{'surfaceform': 'Chemical_engineering', 'uri': "
"'Q83588'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'language of work or name', "
"'uri': 'P407'}",
'object': "{'surfaceform': 'English_language', 'uri': 'Q1860'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'publisher', 'uri': 'P123'}",
'object': "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'country of origin', 'uri': "
"'P495'}",
'object': "{'surfaceform': 'United_States', 'uri': 'Q30'}"},
{'subject': "{'surfaceform': 'Scopus', 'uri': 'Q371467'}",
'predicate': "{'surfaceform': 'owned by', 'uri': 'P127'}",
'object': "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}"}],
'entities': [{'surfaceform': 'Journal_of_Colloid_and_Interface_Science',
'uri': 'Q3902043'},
{'surfaceform': 'Scopus', 'uri': 'Q371467'},
{'surfaceform': 'Chemical_engineering', 'uri': 'Q83588'},
{'surfaceform': 'English_language', 'uri': 'Q1860'},
{'surfaceform': 'Elsevier', 'uri': 'Q746413'},
{'surfaceform': 'United_States', 'uri': 'Q30'}],
'relations': [{'surfaceform': 'indexed in bibliographic review',
'uri': 'P8875'},
{'surfaceform': 'main subject', 'uri': 'P921'},
{'surfaceform': 'language of work or name', 'uri': 'P407'},
{'surfaceform': 'publisher', 'uri': 'P123'},
{'surfaceform': 'country of origin', 'uri': 'P495'},
{'surfaceform': 'owned by', 'uri': 'P127'}]}
```
### Data Splits
Each dataset (except SynthIE-text, which does not have a train set) has the same 4 splits:
- `train`
- `validation`
- `test`
- `test_small`
The first three are self-explanatory; the `test_small` split is a randomly sampled subset of the `test` split in which each data point keeps the ID it had in the test set from which it was sampled (i.e., after sampling, the IDs are not reset to 0 and reassigned).
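For illustration, here is a minimal sketch of such ID-preserving subsampling, assuming the test split is a list of dictionaries with an `id` field (the actual sampling script may differ):
```python
import random

def sample_test_small(test_set, k, seed=0):
    """Subsample k data points from `test_set`, keeping their original IDs."""
    rng = random.Random(seed)
    subset = rng.sample(list(test_set), k)
    # IDs are deliberately NOT reset to 0..k-1; each sampled point keeps the
    # `id` it had in the full test split.
    return sorted(subset, key=lambda x: x["id"])
```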
## Dataset Creation
Collecting datasets for the closed IE task is time-consuming, expensive, and even hardly feasible, as it requires annotators to know the entire entity and relation catalogs and reason about all possible facts expressed in the text. As a result, only small or noisy datasets exist. The only large dataset available, REBEL, suffers from several problems: (i) Noise: it is constructed based on distant supervision, and for many data points, the target set does not contain all the facts expressed in the text or is partially incorrect; (ii) Skewness: most relations appear only a few times in the dataset, resulting in models that ignore most of the information when used for training and poor estimates of performance when used for evaluation.
This dataset is constructed using a synthetic data generation pipeline, proposed in the paper [Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction](https://arxiv.org/abs/2303.04132), and serves as a use case for a task for which (i) high-quality annotated data is not available; (ii) human-annotation is not practical; (iii) the direct task (closed IE) is challenging for an LLM. Concretely, by reversing the task and generating the data in the opposite direction -- going from triplets to text -- high-quality useful data can be generated. The pipeline used to construct the dataset comprises three components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. For more details regarding the dataset construction procedure and considerations for using the data, see the "Synthetic Data Generation", "Discussion", and "Limitations" sections of the [paper](https://arxiv.org/abs/2303.04132).
## Additional Information
### Licensing Information
The dataset is licensed under the terms of the MIT license.
### Citation Information
```
@article{josifoski2023exploiting,
title={Exploiting Asymmetry for Synthetic Training Data Generation: {S}ynth{IE} and The Case of Information Extraction},
author={Josifoski, Martin and Sakota, Marija and Peyrard, Maxime and West, Robert},
journal={arXiv preprint arXiv:2303.04132},
year={2023}
}
```
| [
-0.39447036385536194,
-0.4867859482765198,
0.45618075132369995,
0.00010321482113795355,
-0.08740600198507309,
0.1282700151205063,
-0.34008175134658813,
-0.5202536582946777,
0.2575762867927551,
0.4528491795063019,
-0.7123393416404724,
-0.6619703769683838,
-0.3751378059387207,
0.347317546606... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IlyaGusev/ru_news | IlyaGusev | 2023-03-20T23:05:08Z | 24 | 3 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:ru",
"region:us"
] | 2023-03-20T23:05:08Z | 2023-03-12T20:56:14.000Z | 2023-03-12T20:56:14 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: source
dtype: string
- name: timestamp
dtype: uint64
splits:
- name: train
num_bytes: 12858731888
num_examples: 4137525
download_size: 3669747077
dataset_size: 12858731888
task_categories:
- text-generation
language:
- ru
size_categories:
- 1M<n<10M
---
# RuNews dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of news from several sources:
* [Lenta.ru by yutkin](https://github.com/yutkin/Lenta.Ru-News-Dataset)
* [Several sources by buriy](https://github.com/buriy/russian-nlp-datasets/releases)
* [ODS Newsviz Tass](https://github.com/newsviz/newsviz)
* [Taiga fontanka](https://tatianashavrina.github.io/taiga_site/)
* [News from Telegram contest](https://github.com/IlyaGusev/tgcontest)
**Script:** [create_ru_news.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_ru_news.py)
**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)
**Languages:** Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_news', split="train", streaming=True)
for example in dataset:
print(example["text"])
```
## Data Instances
```
{
"title": "Заместитель главы района в Якутии пожаловался на пьянство начальника",
"text": "Заместитель главы Нерюнгринского района Якутии Геннадий Ленц пожаловался руководителю республики Егору Борисову на своего начальника. Как рассказал Ленц 'Интерфаксу', Андрей Фитисов пьет на рабочем месте и 'уходит в многодневные загулы'...",
"timestamp": 1346284800,
"url": "https://lenta.ru/news/2012/08/30/alco/",
"source": "lenta"
}
```
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible. | [
-0.20195290446281433,
-0.3982933461666107,
0.387959361076355,
0.037391096353530884,
-0.47496190667152405,
-0.05503952503204346,
-0.3219624161720276,
-0.1800474226474762,
0.2837512493133545,
0.4085281491279602,
-0.70511394739151,
-0.96303790807724,
-0.5344417095184326,
0.21523430943489075,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/COCO_captions_test | Multimodal-Fatima | 2023-03-17T21:23:22Z | 24 | 0 | null | [
"region:us"
] | 2023-03-17T21:23:22Z | 2023-03-17T21:22:46.000Z | 2023-03-17T21:22:46 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences_tokens
list:
list: string
- name: sentences_raw
list: string
- name: sentences_sentid
list: int32
- name: cocoid
dtype: int32
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: test
num_bytes: 831189492.0
num_examples: 5000
download_size: 823516792
dataset_size: 831189492.0
---
# Dataset Card for "COCO_captions_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6476593017578125,
-0.3486635684967041,
-0.03662817180156708,
0.5402343273162842,
-0.36217373609542847,
0.33529964089393616,
0.0682811513543129,
-0.1329222172498703,
0.733666181564331,
0.5646895170211792,
-0.7930801510810852,
-0.740593671798706,
-0.5792741179466248,
0.004482598509639502,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
neuclir/csl | neuclir | 2023-07-05T20:02:54Z | 24 | 4 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"size_categories:100K<n<1M",
"source_datasets:extended|csl",
"language:zh",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-07-05T20:02:54Z | 2023-03-20T21:17:19.000Z | 2023-03-20T21:17:19 | ---
annotations_creators:
- no-annotation
language:
- zh
- en
license:
- apache-2.0
pretty_name: CSL
size_categories:
- 100K<n<1M
source_datasets:
- extended|csl
tags: []
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for CSL
## Dataset Description
CSL is the Chinese Scientific Literature Dataset.
- **Paper:** https://aclanthology.org/2022.coling-1.344
- **Repository:** https://github.com/ydli-ai/CSL
### Dataset Summary
The dataset contains titles, abstracts, and keywords of papers written in Chinese from several academic fields.
### Languages
- Chinese
- English (translation)
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `csl` | 396k |
| `en_translation`| 396k |
### Data Fields
- `doc_id`: unique identifier for this document
- `title`: title of the paper
- `abstract`: abstract of the paper
- `keywords`: keywords associated with the paper
- `category`: the broad category of the paper
- `category_eng`: English translation of the broad category (e.g., Engineering)
- `discipline`: academic discipline of the paper
- `discipline_eng`: English translation of the academic discipline (e.g., Agricultural Engineering)
The `en_translation` split contains documents translated using the Google Translate service.
All text is in English, so the fields `category_eng` and `discipline_eng` are omitted.
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/csl')['csl']
```
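The English translations can be loaded the same way through the `en_translation` split listed above (a minimal sketch):
```python
from datasets import load_dataset

dataset = load_dataset('neuclir/csl')
chinese_docs = dataset['csl']             # original Chinese documents
english_docs = dataset['en_translation']  # Google-translated English versions
print(english_docs[0]['title'])
```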
## License & Citation
This dataset is based on the [Chinese Scientific Literature Dataset](https://github.com/ydli-ai/CSL), released under Apache 2.0.
The primary changes are the addition of `doc_id`s, English translations of the category and discipline descriptions by a native speaker,
and basic de-duplication. The code that performed these modifications is available in [this repository](https://github.com/NeuCLIR/csl-preprocess).
If you use this data, please cite:
```
@inproceedings{li-etal-2022-csl,
title = "{CSL}: A Large-scale {C}hinese Scientific Literature Dataset",
author = "Li, Yudong and
Zhang, Yuqing and
Zhao, Zhe and
Shen, Linlin and
Liu, Weijie and
Mao, Weiquan and
Zhang, Hui",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.344",
pages = "3917--3923",
}
```
| [
-0.07074613124132156,
-0.24474546313285828,
0.17278265953063965,
0.39496126770973206,
-0.22786171734333038,
0.06337834894657135,
-0.41857197880744934,
-0.5056363940238953,
0.19862999022006989,
0.20631395280361176,
-0.39871641993522644,
-0.8048483729362488,
-0.2106688916683197,
0.3968123197... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
breadlicker45/musenet-encoders-12k | breadlicker45 | 2023-03-21T22:03:18Z | 24 | 1 | null | [
"region:us"
] | 2023-03-21T22:03:18Z | 2023-03-21T21:54:26.000Z | 2023-03-21T21:54:26 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-source-metrics/preprocessed_stars | open-source-metrics | 2023-11-23T14:39:39Z | 24 | 0 | null | [
"region:us"
] | 2023-11-23T14:39:39Z | 2023-03-24T22:41:01.000Z | 2023-03-24T22:41:01 | ---
dataset_info:
features:
- name: huggingface_hub
dtype: int64
- name: text_generation_inference
dtype: int64
- name: safetensors
dtype: int64
- name: tokenizers
dtype: int64
- name: transformers
dtype: int64
- name: diffusers
dtype: int64
- name: accelerate
dtype: int64
- name: chat_ui
dtype: int64
- name: candle
dtype: int64
- name: gradio
dtype: int64
- name: evaluate
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: peft
dtype: int64
- name: optimum
dtype: int64
- name: datasets
dtype: int64
- name: hub_docs
dtype: int64
- name: langchain
dtype: int64
- name: stable_diffusion_webui
dtype: int64
- name: tensorflow
dtype: int64
- name: pytorch
dtype: int64
- name: openai_python
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 135794080
num_examples: 698170
- name: wow
num_bytes: 584288
num_examples: 3004
download_size: 14749222
dataset_size: 136378368
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: wow
path: data/wow-*
---
# Dataset Card for "preprocessed_stars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6682730317115784,
-0.2875848710536957,
0.3030940890312195,
0.2312907576560974,
-0.26579394936561584,
0.12029533088207245,
0.04510357603430748,
-0.25054970383644104,
0.9268016219139099,
0.7264907956123352,
-0.965297281742096,
-0.8169506788253784,
-0.6030730605125427,
-0.10287775099277496... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NeuraXenetica/managpt-4080-nlp-prompts-and-generated-texts | NeuraXenetica | 2023-03-29T17:52:49Z | 24 | 1 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-03-29T17:52:49Z | 2023-03-26T19:25:25.000Z | 2023-03-26T19:25:25 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
pretty_name: 'ManaGPT: 4,080 NLP prompts and generated texts'
size_categories:
- 1K<n<10K
---
This dataset includes 4,080 texts that were generated by the [**ManaGPT-1020**](https://huggingface.co/NeuraXenetica/ManaGPT-1020) large language model, in response to particular input sequences.
ManaGPT-1020 is a free, open-source model available for download and use via Hugging Face’s “transformers” Python package. The model is a 1.5-billion-parameter LLM that’s capable of generating text in order to complete a sentence whose first words have been provided via a user-supplied input sequence. The model represents an elaboration of GPT-2 that has been fine-tuned (using Python and TensorFlow) on a specialized English-language corpus of over 509,000 words from the domain of organizational futures studies. In particular, the model has been trained to generate analysis, predictions, and recommendations regarding the emerging role of advanced AI, social robotics, ubiquitous computing, virtual reality, neurocybernetic augmentation, and other “posthumanizing” technologies in organizational life.
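As a rough usage sketch of that workflow, the model can be loaded through the Transformers text-generation pipeline (the prompt and `max_new_tokens` value below are illustrative choices, not taken from the original generation setup):
```python
from transformers import pipeline

# Minimal sketch; generation settings are illustrative.
generator = pipeline("text-generation", model="NeuraXenetica/ManaGPT-1020")
result = generator("The workplace of tomorrow will be", max_new_tokens=60)
print(result[0]["generated_text"])
```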
In generating the texts, 204 different prompts were used, each of which was employed to generate 20 responses. The 204 input sequences were created by concatenating 12 different "subjects" with 17 different "modal variants," in every possible combination. The subjects included 6 grammatically singular subjects:
- "The workplace of tomorrow"
- "Technological posthumanization"
- "The organizational use of AI"
- "A robotic boss"
- "An artificially intelligent coworker"
- "Business culture within Society 5.0"
Also included were 6 grammatically plural subjects:
- "Social robots"
- "Hybrid human-robotic organizations"
- "Artificially intelligent businesses"
- "The posthumanized workplaces of the future"
- "Cybernetically augmented workers"
- "Organizations in Society 5.0"
For the 6 grammatically singular subjects, the 17 modal variants included one "blank" variant (an empty string) and 16 phrases that lend the input sequence diverse forms of "modal shading" by indicating varying degrees of certainty, probability, predictability, logical necessity, or moral obligation or approbation. These modal variants were:
- ""
- " is"
- " is not"
- " will"
- " will be"
- " may"
- " might never"
- " is likely to"
- " is unlikely to"
- " should"
- " can"
- " cannot"
- " can never"
- " must"
- " must not"
- " is like"
- " will be like"
The variants used with grammatically plural subjects were identical, apart from the fact that the word “is” was changed to “are,” wherever it appeared.
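For concreteness, the full prompt set can be reproduced with a simple cross product; the sketch below mirrors the description above (this is not the original generation script, and variable names are illustrative):
```python
singular_subjects = [
    "The workplace of tomorrow", "Technological posthumanization",
    "The organizational use of AI", "A robotic boss",
    "An artificially intelligent coworker", "Business culture within Society 5.0",
]
plural_subjects = [
    "Social robots", "Hybrid human-robotic organizations",
    "Artificially intelligent businesses",
    "The posthumanized workplaces of the future",
    "Cybernetically augmented workers", "Organizations in Society 5.0",
]
modal_variants = [
    "", " is", " is not", " will", " will be", " may", " might never",
    " is likely to", " is unlikely to", " should", " can", " cannot",
    " can never", " must", " must not", " is like", " will be like",
]

prompts = [s + m for s in singular_subjects for m in modal_variants]
# Plural subjects take the same variants, with "is" swapped for "are".
prompts += [s + m.replace(" is", " are")
            for s in plural_subjects for m in modal_variants]
assert len(prompts) == 12 * 17  # 204 input sequences
```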
In a small number of cases (only occurring when the empty string "" was used as part of the input sequence), the model failed to generate any output beyond the input sequence itself. | [
-0.22590306401252747,
-0.949921727180481,
0.6347519159317017,
0.279421329498291,
-0.10615655779838562,
-0.29176121950149536,
-0.0734843760728836,
-0.5107941031455994,
0.17915883660316467,
0.4726797342300415,
-0.8308073878288269,
-0.16508613526821136,
-0.37956520915031433,
0.400485992431640... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nimaster/Devign_for_VD | nimaster | 2023-03-27T20:21:00Z | 24 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"region:us"
] | 2023-03-27T20:21:00Z | 2023-03-27T20:02:36.000Z | 2023-03-27T20:02:36 | ---
task_categories:
- text-classification
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nkasmanoff/nasa_earth_instagram | nkasmanoff | 2023-03-30T11:04:45Z | 24 | 0 | null | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"size_categories:n<1K",
"region:us"
] | 2023-03-30T11:04:45Z | 2023-03-29T17:45:06.000Z | 2023-03-29T17:45:06 | ---
task_categories:
- image-to-text
- text-to-image
size_categories:
- n<1K
---
# NASA Earth Instagram
This dataset is a moderately curated subset of the posts shown on [NASA Earth's Instagram](https://www.instagram.com/nasaearth/), with an emphasis
on image-text pairs whose associated text is as close as possible to a direct caption of the image in question.
This dataset has a variety of use cases, but it is primarily intended as a fine-tuning dataset for image captioning models,
making them better equipped to describe the exact phenomena in satellite imagery.
The owner of all images and text in this data is NASA. | [
-0.37608224153518677,
-0.3511102497577667,
0.6938673257827759,
0.04725637286901474,
-0.7475016117095947,
0.37592360377311707,
0.1255248785018921,
-0.18322473764419556,
0.36697500944137573,
0.9401817321777344,
-0.8911440968513489,
-0.7003931403160095,
-0.3982050120830536,
0.3060790896415710... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/covertype | mstz | 2023-05-29T10:09:11Z | 24 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"biology",
"UCI",
"binary_classification",
"multiclass_classification",
"region:us"
] | 2023-05-29T10:09:11Z | 2023-03-31T13:33:53.000Z | 2023-03-31T13:33:53 | ---
task_categories:
- tabular-classification
language:
- en
tags:
- biology
- UCI
- binary_classification
- multiclass_classification
pretty_name: Covertype
size_categories:
- 100K<n<1M
license: cc
---
# Covertype
Classification of pixels into 7 forest cover types based on attributes such as elevation, aspect, slope, hillshade, soil-type, and more.
The [Covertype dataset](https://archive-beta.ics.uci.edu/dataset/31/covertype) from the [UCI ML repository](https://archive-beta.ics.uci.edu).
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| covertype | Multiclass classification | Classify the area as one of 7 cover classes. |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/covertype")["train"]
``` | [
-0.45550552010536194,
-0.15909087657928467,
0.1750383824110031,
0.30345669388771057,
-0.07276319712400436,
0.16441787779331207,
0.13514980673789978,
-0.2500070333480835,
0.164889395236969,
0.6230996251106262,
-0.6786755919456482,
-0.884933590888977,
-0.6537315249443054,
-0.0400518178939819... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yulong-W/squadpara | Yulong-W | 2023-04-01T10:28:25Z | 24 | 0 | null | [
"region:us"
] | 2023-04-01T10:28:25Z | 2023-04-01T10:28:03.000Z | 2023-04-01T10:28:03 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HAERAE-HUB/KoInstruct-QA | HAERAE-HUB | 2023-05-05T13:28:25Z | 24 | 0 | null | [
"region:us"
] | 2023-05-05T13:28:25Z | 2023-05-05T11:28:02.000Z | 2023-05-05T11:28:02 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: type
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 237493038
num_examples: 50276
download_size: 113325801
dataset_size: 237493038
---
# Dataset Card for "ko_instruct_ki_v0.1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5119810700416565,
-0.12589064240455627,
0.1942143738269806,
0.260431170463562,
-0.2989089787006378,
-0.1939254105091095,
0.39395564794540405,
-0.10767822712659836,
0.8662791848182678,
0.6423568725585938,
-0.925910234451294,
-0.816448986530304,
-0.5307112336158752,
-0.3921820819377899,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
innermost47/alpaca-fr | innermost47 | 2023-05-05T13:02:58Z | 24 | 1 | null | [
"task_categories:text-generation",
"language:fr",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-05-05T13:02:58Z | 2023-05-05T12:49:20.000Z | 2023-05-05T12:49:20 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- fr
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
claritylab/utcd | claritylab | 2023-05-24T17:27:42Z | 24 | 4 | null | [
"task_categories:text-classification",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"arxiv:2005.00547",
"arxiv:2010.12421",
"arxiv:1509.01626",
"arxiv:1307.5336",
"arxiv:1909.05855",
"arxiv:1909.02027",
"arxiv:... | 2023-05-24T17:27:42Z | 2023-05-11T16:17:23.000Z | 2023-05-11T16:17:23 | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1M<n<10M
annotations_creators:
- no-annotation
multilinguality:
- monolingual
pretty_name: UTCD
dataset_info:
- config_name: in-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add Alarm
'1': Album
'2': Animal
'3': Artist
'4': Athlete
'5': Book Appointment
'6': Book House
'7': Building
'8': Business
'9': Business & Finance
'10': Buy Bus Ticket
'11': Buy Event Tickets
'12': Buy Movie Tickets
'13': Check Balance
'14': Company
'15': Computers & Internet
'16': Education & Reference
'17': Educational Institution
'18': Entertainment & Music
'19': Family & Relationships
'20': Film
'21': Find Apartment
'22': Find Attractions
'23': Find Bus
'24': Find Events
'25': Find Home By Area
'26': Find Movies
'27': Find Provider
'28': Find Restaurants
'29': Find Trains
'30': Get Alarms
'31': Get Available Time
'32': Get Cars Available
'33': Get Event Dates
'34': Get Events
'35': Get Ride
'36': Get Times For Movie
'37': Get Weather
'38': Health
'39': Lookup Music
'40': Lookup Song
'41': Make Payment
'42': Mean Of Transportation
'43': Natural Place
'44': Office Holder
'45': Plant
'46': Play Media
'47': Play Movie
'48': Play Song
'49': Politics & Government
'50': Request Payment
'51': Reserve Car
'52': Reserve Hotel
'53': Reserve One way Flight
'54': Reserve Restaurant
'55': Reserve Round trip Flights
'56': Schedule Visit
'57': Science & Mathematics
'58': Science & Technology
'59': Search Hotel
'60': Search House
'61': Search One way Flight
'62': Search Round trip Flights
'63': Society & Culture
'64': Sports
'65': Transfer Money
'66': Village
'67': World News
'68': Written Work
'69': accept reservations
'70': account blocked
'71': add contact
'72': admiration
'73': alarm
'74': alarm query
'75': alarm remove
'76': alarm set
'77': amusement
'78': anger
'79': annoyance
'80': application status
'81': approval
'82': apr
'83': are you a bot
'84': audio volume down
'85': audio volume mute
'86': audio volume other
'87': audio volume up
'88': balance
'89': bill balance
'90': bill due
'91': book flight
'92': book hotel
'93': calculator
'94': calendar
'95': calendar query
'96': calendar remove
'97': calendar set
'98': calendar update
'99': calories
'100': cancel
'101': cancel reservation
'102': car rental
'103': card declined
'104': caring
'105': carry on
'106': change accent
'107': change ai name
'108': change language
'109': change speed
'110': change user name
'111': change volume
'112': cleaning
'113': coffee
'114': confirm reservation
'115': confusion
'116': convert
'117': cook time
'118': cooking query
'119': cooking recipe
'120': create or add
'121': credit limit
'122': credit limit change
'123': credit score
'124': curiosity
'125': currency
'126': current location
'127': damaged card
'128': date
'129': date time convert
'130': date time query
'131': definition
'132': desire
'133': direct deposit
'134': directions
'135': disappointment
'136': disapproval
'137': disgust
'138': distance
'139': do you have pets
'140': email add contact
'141': email query
'142': email query contact
'143': email send email
'144': embarrassment
'145': events
'146': exchange rate
'147': excitement
'148': expiration date
'149': factoid
'150': fear
'151': find phone
'152': flight status
'153': flip coin
'154': food last
'155': freeze account
'156': fun fact
'157': game
'158': gas
'159': gas type
'160': general greet
'161': general joke
'162': general quirky
'163': goodbye
'164': gratitude
'165': greet
'166': greeting
'167': grief
'168': how busy
'169': how old are you
'170': hue light dim
'171': hue light off
'172': hue light up
'173': improve credit score
'174': income
'175': ingredient substitution
'176': ingredients list
'177': insurance
'178': insurance change
'179': interest rate
'180': international fees
'181': international visa
'182': iot cleaning
'183': iot coffee
'184': iot hue light change
'185': iot hue light dim
'186': iot hue light off
'187': iot hue light on
'188': iot hue light up
'189': iot wemo on
'190': iot wemo plug off
'191': joke
'192': joy
'193': jump start
'194': last maintenance
'195': lists create or add
'196': lists query
'197': lists remove
'198': lost luggage
'199': love
'200': make call
'201': maybe
'202': meal suggestion
'203': meaning of life
'204': measurement conversion
'205': meeting schedule
'206': min payment
'207': mpg
'208': music
'209': music dislike ness
'210': music likeness
'211': music query
'212': music settings
'213': negative
'214': nervousness
'215': neutral
'216': new card
'217': news query
'218': next holiday
'219': next song
'220': 'no'
'221': nutrition info
'222': oil change how
'223': oil change when
'224': optimism
'225': order
'226': order checks
'227': order status
'228': paid time off request status
'229': paid time off used
'230': pay bill
'231': payday
'232': pin change
'233': play audiobook
'234': play game
'235': play music
'236': play podcasts
'237': play radio
'238': plug type
'239': podcasts
'240': positive
'241': post
'242': pride
'243': pto balance
'244': pto request
'245': qa currency
'246': qa definition
'247': qa factoid
'248': qa maths
'249': qa stock
'250': query
'251': query contact
'252': quirky
'253': radio
'254': realization
'255': recipe
'256': recommendation events
'257': recommendation locations
'258': recommendation movies
'259': redeem rewards
'260': relief
'261': reminder
'262': reminder update
'263': remorse
'264': remove
'265': repeat
'266': replacement card duration
'267': report fraud
'268': report lost card
'269': reset settings
'270': restaurant reservation
'271': restaurant reviews
'272': restaurant suggestion
'273': rewards balance
'274': roll dice
'275': rollover 401k
'276': routing
'277': sadness
'278': schedule maintenance
'279': schedule meeting
'280': send email
'281': set
'282': settings
'283': share location
'284': shopping list
'285': shopping list update
'286': smart home
'287': social post
'288': social query
'289': spelling
'290': spending history
'291': surprise
'292': sync device
'293': take away order
'294': take away query
'295': taxes
'296': tell joke
'297': text
'298': thank you
'299': ticket
'300': time
'301': timer
'302': timezone
'303': tire change
'304': tire pressure
'305': todo list
'306': todo list update
'307': traffic
'308': transactions
'309': transfer
'310': translate
'311': transport query
'312': transport taxi
'313': transport ticket
'314': transport traffic
'315': travel alert
'316': travel notification
'317': travel suggestion
'318': uber
'319': update playlist
'320': user name
'321': vaccines
'322': volume other
'323': w2 wage and tax statement
'324': weather
'325': weather query
'326': wemo off
'327': wemo plug on
'328': what are your hobbies
'329': what can i ask you
'330': what is your name
'331': what song
'332': where are you from
'333': whisper mode
'334': who do you work for
'335': who made you
'336': 'yes'
- name: dataset_name
dtype:
class_label:
names:
'0': go_emotion
'1': sentiment_tweets_2020
'2': emotion
'3': sgd
'4': clinc_150
'5': slurp
'6': ag_news
'7': dbpedia
'8': yahoo
splits:
- name: train
num_bytes: 347382307
num_examples: 2192703
- name: test
num_bytes: 36063588
num_examples: 168365
download_size: 1744258165
dataset_size: 383445895
- config_name: aspect-normalized-in-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add Alarm
'1': Album
'2': Animal
'3': Artist
'4': Athlete
'5': Book Appointment
'6': Book House
'7': Building
'8': Business
'9': Business & Finance
'10': Buy Bus Ticket
'11': Buy Event Tickets
'12': Buy Movie Tickets
'13': Check Balance
'14': Company
'15': Computers & Internet
'16': Education & Reference
'17': Educational Institution
'18': Entertainment & Music
'19': Family & Relationships
'20': Film
'21': Find Apartment
'22': Find Attractions
'23': Find Bus
'24': Find Events
'25': Find Home By Area
'26': Find Movies
'27': Find Provider
'28': Find Restaurants
'29': Find Trains
'30': Get Alarms
'31': Get Available Time
'32': Get Cars Available
'33': Get Event Dates
'34': Get Events
'35': Get Ride
'36': Get Times For Movie
'37': Get Weather
'38': Health
'39': Lookup Music
'40': Lookup Song
'41': Make Payment
'42': Mean Of Transportation
'43': Natural Place
'44': Office Holder
'45': Plant
'46': Play Media
'47': Play Movie
'48': Play Song
'49': Politics & Government
'50': Request Payment
'51': Reserve Car
'52': Reserve Hotel
'53': Reserve One way Flight
'54': Reserve Restaurant
'55': Reserve Round trip Flights
'56': Schedule Visit
'57': Science & Mathematics
'58': Science & Technology
'59': Search Hotel
'60': Search House
'61': Search One way Flight
'62': Search Round trip Flights
'63': Society & Culture
'64': Sports
'65': Transfer Money
'66': Village
'67': World News
'68': Written Work
'69': accept reservations
'70': account blocked
'71': add contact
'72': admiration
'73': alarm
'74': alarm query
'75': alarm remove
'76': alarm set
'77': amusement
'78': anger
'79': annoyance
'80': application status
'81': approval
'82': apr
'83': are you a bot
'84': audio volume down
'85': audio volume mute
'86': audio volume other
'87': audio volume up
'88': balance
'89': bill balance
'90': bill due
'91': book flight
'92': book hotel
'93': calculator
'94': calendar
'95': calendar query
'96': calendar remove
'97': calendar set
'98': calendar update
'99': calories
'100': cancel
'101': cancel reservation
'102': car rental
'103': card declined
'104': caring
'105': carry on
'106': change accent
'107': change ai name
'108': change language
'109': change speed
'110': change user name
'111': change volume
'112': cleaning
'113': coffee
'114': confirm reservation
'115': confusion
'116': convert
'117': cook time
'118': cooking query
'119': cooking recipe
'120': create or add
'121': credit limit
'122': credit limit change
'123': credit score
'124': curiosity
'125': currency
'126': current location
'127': damaged card
'128': date
'129': date time convert
'130': date time query
'131': definition
'132': desire
'133': direct deposit
'134': directions
'135': disappointment
'136': disapproval
'137': disgust
'138': distance
'139': do you have pets
'140': email add contact
'141': email query
'142': email query contact
'143': email send email
'144': embarrassment
'145': events
'146': exchange rate
'147': excitement
'148': expiration date
'149': factoid
'150': fear
'151': find phone
'152': flight status
'153': flip coin
'154': food last
'155': freeze account
'156': fun fact
'157': game
'158': gas
'159': gas type
'160': general greet
'161': general joke
'162': general quirky
'163': goodbye
'164': gratitude
'165': greet
'166': greeting
'167': grief
'168': how busy
'169': how old are you
'170': hue light dim
'171': hue light off
'172': hue light up
'173': improve credit score
'174': income
'175': ingredient substitution
'176': ingredients list
'177': insurance
'178': insurance change
'179': interest rate
'180': international fees
'181': international visa
'182': iot cleaning
'183': iot coffee
'184': iot hue light change
'185': iot hue light dim
'186': iot hue light off
'187': iot hue light on
'188': iot hue light up
'189': iot wemo on
'190': iot wemo plug off
'191': joke
'192': joy
'193': jump start
'194': last maintenance
'195': lists create or add
'196': lists query
'197': lists remove
'198': lost luggage
'199': love
'200': make call
'201': maybe
'202': meal suggestion
'203': meaning of life
'204': measurement conversion
'205': meeting schedule
'206': min payment
'207': mpg
'208': music
'209': music dislike ness
'210': music likeness
'211': music query
'212': music settings
'213': negative
'214': nervousness
'215': neutral
'216': new card
'217': news query
'218': next holiday
'219': next song
'220': 'no'
'221': nutrition info
'222': oil change how
'223': oil change when
'224': optimism
'225': order
'226': order checks
'227': order status
'228': paid time off request status
'229': paid time off used
'230': pay bill
'231': payday
'232': pin change
'233': play audiobook
'234': play game
'235': play music
'236': play podcasts
'237': play radio
'238': plug type
'239': podcasts
'240': positive
'241': post
'242': pride
'243': pto balance
'244': pto request
'245': qa currency
'246': qa definition
'247': qa factoid
'248': qa maths
'249': qa stock
'250': query
'251': query contact
'252': quirky
'253': radio
'254': realization
'255': recipe
'256': recommendation events
'257': recommendation locations
'258': recommendation movies
'259': redeem rewards
'260': relief
'261': reminder
'262': reminder update
'263': remorse
'264': remove
'265': repeat
'266': replacement card duration
'267': report fraud
'268': report lost card
'269': reset settings
'270': restaurant reservation
'271': restaurant reviews
'272': restaurant suggestion
'273': rewards balance
'274': roll dice
'275': rollover 401k
'276': routing
'277': sadness
'278': schedule maintenance
'279': schedule meeting
'280': send email
'281': set
'282': settings
'283': share location
'284': shopping list
'285': shopping list update
'286': smart home
'287': social post
'288': social query
'289': spelling
'290': spending history
'291': surprise
'292': sync device
'293': take away order
'294': take away query
'295': taxes
'296': tell joke
'297': text
'298': thank you
'299': ticket
'300': time
'301': timer
'302': timezone
'303': tire change
'304': tire pressure
'305': todo list
'306': todo list update
'307': traffic
'308': transactions
'309': transfer
'310': translate
'311': transport query
'312': transport taxi
'313': transport ticket
'314': transport traffic
'315': travel alert
'316': travel notification
'317': travel suggestion
'318': uber
'319': update playlist
'320': user name
'321': vaccines
'322': volume other
'323': w2 wage and tax statement
'324': weather
'325': weather query
'326': wemo off
'327': wemo plug on
'328': what are your hobbies
'329': what can i ask you
'330': what is your name
'331': what song
'332': where are you from
'333': whisper mode
'334': who do you work for
'335': who made you
'336': 'yes'
- name: dataset_name
dtype:
class_label:
names:
'0': go_emotion
'1': sentiment_tweets_2020
'2': emotion
'3': sgd
'4': clinc_150
'5': slurp
'6': ag_news
'7': dbpedia
'8': yahoo
splits:
- name: train
num_bytes: 28974188
num_examples: 115127
- name: validation
num_bytes: 3213586
num_examples: 12806
- name: test
num_bytes: 36063590
num_examples: 168365
download_size: 1744258165
dataset_size: 68251364
- config_name: out-of-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add To Playlist
'1': Bank account or service
'2': Book Restaurant
'3': Checking or savings account
'4': Chemistry; Metallurgy
'5': Consumer Loan
'6': Credit card
'7': Credit card or prepaid card
'8': Credit reporting
'9': Credit reporting, credit repair services, or other personal consumer
reports
'10': Debt collection
'11': EUROPEAN UNION
'12': Electricity
'13': Fixed Constructions
'14': General tagging of new or cross-sectional technology
'15': Get Weather
'16': Human Necessities
'17': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
'18': Money transfer, virtual currency, or money service
'19': Money transfers
'20': Mortgage
'21': Other financial service
'22': Payday loan
'23': Payday loan, title loan, or personal loan
'24': Performing Operations; Transporting
'25': Physics
'26': Play Music
'27': Prepaid card
'28': Rate Book
'29': Refund not showing up
'30': Search Creative Work
'31': Search Screening Event
'32': Student loan
'33': Textiles; Paper
'34': Vehicle loan or lease
'35': Virtual currency
'36': activate my card
'37': age limit
'38': agri-foodstuffs
'39': agriculture, forestry and fisheries
'40': alarm query
'41': alarm remove
'42': alarm set
'43': apple pay or google pay
'44': atm support
'45': audio volume down
'46': audio volume mute
'47': audio volume other
'48': audio volume up
'49': automatic top up
'50': balance not updated after bank transfer
'51': balance not updated after cheque or cash deposit
'52': beneficiary not allowed
'53': business and competition
'54': calendar query
'55': calendar remove
'56': calendar set
'57': cancel transfer
'58': card about to expire
'59': card acceptance
'60': card arrival
'61': card delivery estimate
'62': card linking
'63': card not working
'64': card payment fee charged
'65': card payment not recognised
'66': card payment wrong exchange rate
'67': card swallowed
'68': cash withdrawal charge
'69': cash withdrawal not recognised
'70': change pin
'71': compromised card
'72': contactless not working
'73': cooking query
'74': cooking recipe
'75': country support
'76': datetime convert
'77': datetime query
'78': declined card payment
'79': declined cash withdrawal
'80': declined transfer
'81': direct debit payment not recognised
'82': disposable card limits
'83': economics
'84': edit personal details
'85': education and communications
'86': email addcontact
'87': email query
'88': email querycontact
'89': email sendemail
'90': employment and working conditions
'91': energy
'92': environment
'93': exchange charge
'94': exchange rate
'95': exchange via app
'96': extra charge on statement
'97': failed transfer
'98': fiat currency support
'99': finance
'100': general affirm
'101': general commandstop
'102': general confirm
'103': general dontcare
'104': general explain
'105': general greet
'106': general joke
'107': general negate
'108': general praise
'109': general quirky
'110': general repeat
'111': geography
'112': get disposable virtual card
'113': get physical card
'114': getting spare card
'115': getting virtual card
'116': industry
'117': international organisations
'118': international relations
'119': iot cleaning
'120': iot coffee
'121': iot hue lightchange
'122': iot hue lightdim
'123': iot hue lightoff
'124': iot hue lighton
'125': iot hue lightup
'126': iot wemo off
'127': iot wemo on
'128': law
'129': lists createoradd
'130': lists query
'131': lists remove
'132': lost or stolen card
'133': lost or stolen phone
'134': music dislikeness
'135': music likeness
'136': music query
'137': music settings
'138': negative
'139': neutral
'140': news query
'141': order physical card
'142': passcode forgotten
'143': pending card payment
'144': pending cash withdrawal
'145': pending top up
'146': pending transfer
'147': pin blocked
'148': play audiobook
'149': play game
'150': play music
'151': play podcasts
'152': play radio
'153': politics
'154': positive
'155': production, technology and research
'156': qa currency
'157': qa definition
'158': qa factoid
'159': qa maths
'160': qa stock
'161': receiving money
'162': recommendation events
'163': recommendation locations
'164': recommendation movies
'165': request refund
'166': reverted card payment?
'167': science
'168': social post
'169': social query
'170': social questions
'171': supported cards and currencies
'172': takeaway order
'173': takeaway query
'174': terminate account
'175': top up by bank transfer charge
'176': top up by card charge
'177': top up by cash or cheque
'178': top up failed
'179': top up limits
'180': top up reverted
'181': topping up by card
'182': trade
'183': transaction charged twice
'184': transfer fee charged
'185': transfer into account
'186': transfer not received by recipient
'187': transfer timing
'188': transport
'189': transport query
'190': transport taxi
'191': transport ticket
'192': transport traffic
'193': unable to verify identity
'194': verify my identity
'195': verify source of funds
'196': verify top up
'197': virtual card not working
'198': visa or mastercard
'199': weather query
'200': why verify identity
'201': wrong amount of cash received
'202': wrong exchange rate for cash withdrawal
- name: dataset_name
dtype:
class_label:
names:
'0': amazon_polarity
'1': finance_sentiment
'2': yelp
'3': banking77
'4': snips
'5': nlu_evaluation
'6': multi_eurlex
'7': patent
'8': consumer_finance
splits:
- name: train
num_bytes: 3608196895
num_examples: 4996673
- name: test
num_bytes: 541174753
num_examples: 625911
download_size: 1744258165
dataset_size: 4149371648
- config_name: aspect-normalized-out-of-domain
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Add To Playlist
'1': Bank account or service
'2': Book Restaurant
'3': Checking or savings account
'4': Chemistry; Metallurgy
'5': Consumer Loan
'6': Credit card
'7': Credit card or prepaid card
'8': Credit reporting
'9': Credit reporting, credit repair services, or other personal consumer
reports
'10': Debt collection
'11': EUROPEAN UNION
'12': Electricity
'13': Fixed Constructions
'14': General tagging of new or cross-sectional technology
'15': Get Weather
'16': Human Necessities
'17': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
'18': Money transfer, virtual currency, or money service
'19': Money transfers
'20': Mortgage
'21': Other financial service
'22': Payday loan
'23': Payday loan, title loan, or personal loan
'24': Performing Operations; Transporting
'25': Physics
'26': Play Music
'27': Prepaid card
'28': Rate Book
'29': Refund not showing up
'30': Search Creative Work
'31': Search Screening Event
'32': Student loan
'33': Textiles; Paper
'34': Vehicle loan or lease
'35': Virtual currency
'36': activate my card
'37': age limit
'38': agri-foodstuffs
'39': agriculture, forestry and fisheries
'40': alarm query
'41': alarm remove
'42': alarm set
'43': apple pay or google pay
'44': atm support
'45': audio volume down
'46': audio volume mute
'47': audio volume other
'48': audio volume up
'49': automatic top up
'50': balance not updated after bank transfer
'51': balance not updated after cheque or cash deposit
'52': beneficiary not allowed
'53': business and competition
'54': calendar query
'55': calendar remove
'56': calendar set
'57': cancel transfer
'58': card about to expire
'59': card acceptance
'60': card arrival
'61': card delivery estimate
'62': card linking
'63': card not working
'64': card payment fee charged
'65': card payment not recognised
'66': card payment wrong exchange rate
'67': card swallowed
'68': cash withdrawal charge
'69': cash withdrawal not recognised
'70': change pin
'71': compromised card
'72': contactless not working
'73': cooking query
'74': cooking recipe
'75': country support
'76': datetime convert
'77': datetime query
'78': declined card payment
'79': declined cash withdrawal
'80': declined transfer
'81': direct debit payment not recognised
'82': disposable card limits
'83': economics
'84': edit personal details
'85': education and communications
'86': email addcontact
'87': email query
'88': email querycontact
'89': email sendemail
'90': employment and working conditions
'91': energy
'92': environment
'93': exchange charge
'94': exchange rate
'95': exchange via app
'96': extra charge on statement
'97': failed transfer
'98': fiat currency support
'99': finance
'100': general affirm
'101': general commandstop
'102': general confirm
'103': general dontcare
'104': general explain
'105': general greet
'106': general joke
'107': general negate
'108': general praise
'109': general quirky
'110': general repeat
'111': geography
'112': get disposable virtual card
'113': get physical card
'114': getting spare card
'115': getting virtual card
'116': industry
'117': international organisations
'118': international relations
'119': iot cleaning
'120': iot coffee
'121': iot hue lightchange
'122': iot hue lightdim
'123': iot hue lightoff
'124': iot hue lighton
'125': iot hue lightup
'126': iot wemo off
'127': iot wemo on
'128': law
'129': lists createoradd
'130': lists query
'131': lists remove
'132': lost or stolen card
'133': lost or stolen phone
'134': music dislikeness
'135': music likeness
'136': music query
'137': music settings
'138': negative
'139': neutral
'140': news query
'141': order physical card
'142': passcode forgotten
'143': pending card payment
'144': pending cash withdrawal
'145': pending top up
'146': pending transfer
'147': pin blocked
'148': play audiobook
'149': play game
'150': play music
'151': play podcasts
'152': play radio
'153': politics
'154': positive
'155': production, technology and research
'156': qa currency
'157': qa definition
'158': qa factoid
'159': qa maths
'160': qa stock
'161': receiving money
'162': recommendation events
'163': recommendation locations
'164': recommendation movies
'165': request refund
'166': reverted card payment?
'167': science
'168': social post
'169': social query
'170': social questions
'171': supported cards and currencies
'172': takeaway order
'173': takeaway query
'174': terminate account
'175': top up by bank transfer charge
'176': top up by card charge
'177': top up by cash or cheque
'178': top up failed
'179': top up limits
'180': top up reverted
'181': topping up by card
'182': trade
'183': transaction charged twice
'184': transfer fee charged
'185': transfer into account
'186': transfer not received by recipient
'187': transfer timing
'188': transport
'189': transport query
'190': transport taxi
'191': transport ticket
'192': transport traffic
'193': unable to verify identity
'194': verify my identity
'195': verify source of funds
'196': verify top up
'197': virtual card not working
'198': visa or mastercard
'199': weather query
'200': why verify identity
'201': wrong amount of cash received
'202': wrong exchange rate for cash withdrawal
- name: dataset_name
dtype:
class_label:
names:
'0': amazon_polarity
'1': finance_sentiment
'2': yelp
'3': banking77
'4': snips
'5': nlu_evaluation
'6': multi_eurlex
'7': patent
'8': consumer_finance
splits:
- name: train
num_bytes: 109566474
num_examples: 119167
- name: validation
num_bytes: 12432497
num_examples: 13263
- name: test
num_bytes: 541174753
num_examples: 625911
download_size: 1744258165
dataset_size: 663173724
---
# Universal Text Classification Dataset (UTCD)
## Load dataset
```python
from datasets import load_dataset
dataset = load_dataset('claritylab/utcd', name='in-domain')
```
## Description
UTCD is a curated compilation of 18 datasets revised for zero-shot text classification, spanning 3 aspect categories: Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.
UTCD was introduced in the Findings of ACL'23 Paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***. [Project Homepage](https://github.com/ChrisIsKing/zero-shot-text-classification/tree/master).
UTCD Datasets & Principles:
In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:
- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized such that the labels are descriptive of the text in natural language.
- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, and Legal, each comprising sequences of varied length (long and short). The datasets are listed below.
- Sentiment
- GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
- TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
- Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
- Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
- Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Intent/Dialogue
- Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
- Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
- SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
    - Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
- Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
- NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
- Topic
- AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
- Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
- BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
- Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)
## Structure
### Data Samples
Each dataset sample contains the text, the labels encoded as a list of integers, and the dataset name encoded as an integer.
```python
{
'text': "My favourite food is anything I didn't have to cook myself.",
'labels': [215],
'dataset_name': 0
}
```
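Because `labels` and `dataset_name` are stored as class indices, they can be decoded back to their textual names via the features metadata; here is a minimal sketch using the standard 🤗 Datasets `ClassLabel` API:
```python
from datasets import load_dataset

dataset = load_dataset('claritylab/utcd', name='in-domain', split='test')

label_names = dataset.features['labels'].feature  # ClassLabel for the labels
source_names = dataset.features['dataset_name']   # ClassLabel for dataset names

sample = dataset[0]
print([label_names.int2str(i) for i in sample['labels']])
print(source_names.int2str(sample['dataset_name']))
```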
### Datasets Contained
The UTCD dataset contains 18 datasets, 9 `in-domain`, 9 `out-of-domain`, spanning 3 aspects: `sentiment`, `intent` and `topic`.
Below are statistics on the datasets.
**In-Domain Datasets**
| Dataset | Aspect | #Samples in Train/Test | #labels | average #token in text in Train/Test |
| ---------- | --------- | ---------------------- | ------- | ------------------------------------ |
| GoEmotions | sentiment | 43K/5.4K | 28 | 12/12 |
| TweetEval | sentiment | 45K/12K | 3 | 19/14 |
| Emotion | sentiment | 16K/2K | 6 | 17/17 |
| SGD | intent | 16K/4.2K | 26 | 8/9 |
| Clinc-150 | intent | 15K/4.5K | 150 | 8/8 |
| SLURP | intent | 12K/2.6K | 75 | 7/7 |
| AG News    | topic     | 120K/7.6K              | 4       | 38/37                                |
| DBpedia | topic | 560K/70K | 14 | 45/45 |
| Yahoo | topic | 1.4M/60K | 10 | 10/10 |
**Out-of-Domain Datasets**
| Dataset | Aspect | #Samples in Train/Test | #labels | average #token in text |
| --------------------- | --------- | ---------------------- | ------- | ---------------------- |
| Amazon Polarity | sentiment | 3.6M/400K | 2 | 71/71 |
| Financial Phrase Bank | sentiment | 1.8K/453 | 3 | 19/19 |
| Yelp | sentiment | 650K/50K | 3 | 128/128 |
| Banking77 | intent | 10K/3.1K | 77 | 11/10 |
| SNIPS | intent | 14K/697 | 7 | 8/8 |
| NLU Eval | intent | 21K/5.2K | 68 | 7/7 |
| MultiEURLEX | topic | 55K/5K | 21 | 1198/1853 |
| Big Patent | topic | 25K/5K | 9 | 2872/2892 |
| Consumer Finance | topic | 630K/160K | 18 | 190/189 |
### Configurations
The `in-domain` and `out-of-domain` configurations have 2 splits: `train` and `test`.
The aspect-normalized configurations (`aspect-normalized-in-domain`, `aspect-normalized-out-of-domain`) have 3 splits: `train`, `validation` and `test`. A loading sketch follows the statistics tables below.
Below are statistics on the configuration splits.
**In-Domain Configuration**
| Split | #samples |
| ----- | --------- |
| Train | 2,192,703 |
| Test | 168,365 |
**Out-of-Domain Configuration**
| Split | #samples |
| ----- | --------- |
| Train | 4,996,673 |
| Test | 625,911 |
**Aspect-Normalized In-Domain Configuration**
| Split | #samples |
| ---------- | -------- |
| Train | 115,127 |
| Validation | 12,806 |
| Test | 168,365 |
**Aspect-Normalized Out-of-Domain Configuration**
| Split | #samples |
| ---------- | -------- |
| Train | 119,167 |
| Validation | 13,263 |
| Test | 625,911 |
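A minimal loading sketch with `datasets` (the Hub repository id is an assumption; the configuration names are those listed above):
```python
from datasets import load_dataset

# Repo id is an assumption -- substitute the actual Hub identifier for UTCD.
utcd = load_dataset("claritylab/UTCD", "in-domain")

sample = utcd["train"][0]
print(sample["text"])          # the input text
print(sample["labels"])        # label(s), encoded as integers
print(sample["dataset_name"])  # source dataset, encoded as an integer
```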
| [
-0.6001265048980713,
-0.7928016781806946,
0.14411714673042297,
0.2875387966632843,
-0.1253141164779663,
0.10119331628084183,
-0.335606187582016,
-0.4773140549659729,
0.24874773621559143,
0.44800782203674316,
-0.4012722074985504,
-0.8544034957885742,
-0.5759957432746887,
0.08382035046815872... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/Bhasha-Abhijnaanam | ai4bharat | 2023-06-22T08:01:44Z | 24 | 1 | null | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:original",
"language:asm",
"language:ben",
"lan... | 2023-06-22T08:01:44Z | 2023-05-17T04:43:57.000Z | 2023-05-17T04:43:57 | ---
license: cc0-1.0
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- found
- other
language:
- asm
- ben
- brx
- guj
- hin
- kan
- kas
- kok
- mai
- mal
- mar
- mni
- nep
- ori
- pan
- san
- sat
- sid
- snd
- tam
- tel
- urd
multilinguality:
- multilingual
pretty_name: Bhasha-Abhijnaanam
size_categories: []
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for Bhasha-Abhijnaanam
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/AI4Bharat/IndicLID
- **Paper:** [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Bhasha-Abhijnaanam is a language identification test set for native-script as well as Romanized text which spans 22 Indic languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
| <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> |
| -------------- | -------------- | -------------- | --------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin) | Maithili (mai) | Nepali (nep) | Sanskrit (san) | Tamil (tam) |
| Bengali (ben) | Kannada (kan) | Malayalam (mal)| Oriya (ori) | Santali (sat) | Telugu (tel) |
| Bodo (brx)     | Kashmiri (kas) | Manipuri (mni) | Punjabi (pan)   | Sindhi (snd)   | Urdu (urd)    |
| Gujarati (guj) | Konkani (kok)  | Marathi (mar)  |                 |                |               |
## Dataset Structure
### Data Instances
```
A random sample from Hindi (hin) Test dataset.
{
"unique_identifier": "hin1",
"native sentence": "",
"romanized sentence": "",
"language": "Hindi",
"script": "Devanagari",
"source": "Dakshina",
}
```
### Data Fields
- `unique_identifier` (string): 3-letter language code followed by a unique number in Test set.
- `native sentence` (string): A sentence in Indic language.
- `romanized sentence` (string): Transliteration of native sentence in English (Romanized sentence).
- `language` (string): Language of native sentence.
- `script` (string): Script in which native sentence is written.
- `source` (string): Source of the data.
For created data sources, depending on the collection or sampling method used for a given language, it will be one of the following (a loading sketch follows this list):
- Dakshina Dataset
- Flores-200
- Manually Romanized
- Manually generated
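A minimal sketch of reading these fields with `datasets` (the configuration and split names are assumptions; check the repository for the exact ones):
```python
from datasets import load_dataset

# Config/split names are assumptions -- consult the repository for the exact names.
bhasha = load_dataset("ai4bharat/Bhasha-Abhijnaanam", "hin", split="test")

for example in bhasha.select(range(3)):
    print(example["unique_identifier"], example["language"],
          example["script"], example["source"])
```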
### Data Splits
| Subset | asm | ben | brx | guj | hin | kan | kas (Perso-Arabic) | kas (Devanagari) | kok | mai | mal | mni (Bengali) | mni (Meetei Mayek) | mar | nep | ori | pan | san | sat | snd | tam | tel | urd |
|:------:|:---:|:---:|:---:|:---:|:---:|:------------------:|:----------------:|:---:|:---:|:---:|:-------------:|:------------------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Native | 1012 | 5606 | 1500 | 5797 | 5617 | 5859 | 2511 | 1012 | 1500 | 2512 | 5628 | 1012 | 1500 | 5611 | 2512 | 1012 | 5776 | 2510 | 2512 | 5893 | 5779 | 5751 | 6883 |
| Romanized | 512 | 4595 | 433 | 4785 | 4606 | 4848 | 450 | 0 | 444 | 439 | 4617 | 0 | 442 | 4603 | 423 | 512 | 4765 | 448 | 0 | 4881 | 4767 | 4741 | 4371 |
## Dataset Creation
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the source language producers?
[More Information Needed]
### Annotations
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the annotators?
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
This data is released under the following licensing scheme:
- Manually collected data: Released under CC0 license.
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of manually collected data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Bhasha-Abhijnaanam</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
```
@misc{madhani2023bhashaabhijnaanam,
title={Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages},
author={Yash Madhani and Mitesh M. Khapra and Anoop Kunchukuttan},
year={2023},
eprint={2305.15814},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
---
| [
-0.3444356918334961,
-0.44214698672294617,
-0.17261964082717896,
0.3228897154331207,
-0.4060436189174652,
0.39133942127227783,
-0.4312240183353424,
-0.47262558341026306,
0.402069091796875,
0.17193536460399628,
-0.41025474667549133,
-0.8010491728782654,
-0.5377219319343567,
0.40152993798255... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RossVermouth/chensu_test_dataset1 | RossVermouth | 2023-05-19T08:26:12Z | 24 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-05-19T08:26:12Z | 2023-05-19T08:25:25.000Z | 2023-05-19T08:25:25 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
voidful/IIRC | voidful | 2023-05-20T16:50:36Z | 24 | 0 | null | [
"region:us"
] | 2023-05-20T16:50:36Z | 2023-05-20T16:50:00.000Z | 2023-05-20T16:50:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juletxara/mgsm_mt | juletxara | 2023-07-21T10:18:37Z | 24 | 0 | multi-task-language-understanding-on-mgsm | [
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|gsm8k",
"language:en",
"license:cc-by-sa-4.0",
"math-word-problems",
"arxiv:... | 2023-07-21T10:18:37Z | 2023-05-22T13:42:59.000Z | 2023-05-22T13:42:59 | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|gsm8k
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: multi-task-language-understanding-on-mgsm
pretty_name: Multilingual Grade School Math Benchmark (MGSM)
tags:
- math-word-problems
dataset_info:
- config_name: nllb-200-distilled-600M
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 56237
num_examples: 250
- name: fr
num_bytes: 55054
num_examples: 250
- name: de
num_bytes: 58288
num_examples: 250
- name: ru
num_bytes: 52498
num_examples: 250
- name: zh
num_bytes: 55255
num_examples: 250
- name: ja
num_bytes: 44046
num_examples: 250
- name: th
num_bytes: 51445
num_examples: 250
- name: sw
num_bytes: 50844
num_examples: 250
- name: bn
num_bytes: 46158
num_examples: 250
- name: te
num_bytes: 49928
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 495413
dataset_size: 522435
- config_name: nllb-200-distilled-1.3B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 61011
num_examples: 250
- name: fr
num_bytes: 60127
num_examples: 250
- name: de
num_bytes: 61658
num_examples: 250
- name: ru
num_bytes: 58766
num_examples: 250
- name: zh
num_bytes: 55451
num_examples: 250
- name: ja
num_bytes: 51409
num_examples: 250
- name: th
num_bytes: 49158
num_examples: 250
- name: sw
num_bytes: 57085
num_examples: 250
- name: bn
num_bytes: 54208
num_examples: 250
- name: te
num_bytes: 52710
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 537237
dataset_size: 564265
- config_name: nllb-200-1.3B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 60524
num_examples: 250
- name: fr
num_bytes: 59673
num_examples: 250
- name: de
num_bytes: 60375
num_examples: 250
- name: ru
num_bytes: 57837
num_examples: 250
- name: zh
num_bytes: 58165
num_examples: 250
- name: ja
num_bytes: 58423
num_examples: 250
- name: th
num_bytes: 51044
num_examples: 250
- name: sw
num_bytes: 58507
num_examples: 250
- name: bn
num_bytes: 53901
num_examples: 250
- name: te
num_bytes: 51593
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 545702
dataset_size: 572724
- config_name: nllb-200-3.3B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 62012
num_examples: 250
- name: fr
num_bytes: 60219
num_examples: 250
- name: de
num_bytes: 61821
num_examples: 250
- name: ru
num_bytes: 58382
num_examples: 250
- name: zh
num_bytes: 58931
num_examples: 250
- name: ja
num_bytes: 58752
num_examples: 250
- name: th
num_bytes: 57139
num_examples: 250
- name: sw
num_bytes: 60391
num_examples: 250
- name: bn
num_bytes: 55057
num_examples: 250
- name: te
num_bytes: 54888
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 563242
dataset_size: 590274
- config_name: xglm-564M
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 42608
num_examples: 250
- name: fr
num_bytes: 45691
num_examples: 250
- name: de
num_bytes: 51470
num_examples: 250
- name: ru
num_bytes: 60715
num_examples: 250
- name: zh
num_bytes: 45629
num_examples: 250
- name: ja
num_bytes: 43786
num_examples: 250
- name: th
num_bytes: 35269
num_examples: 250
- name: sw
num_bytes: 37892
num_examples: 250
- name: bn
num_bytes: 51002
num_examples: 250
- name: te
num_bytes: 98158
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 487886
dataset_size: 514902
- config_name: xglm-1.7B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 59727
num_examples: 250
- name: fr
num_bytes: 59811
num_examples: 250
- name: de
num_bytes: 60222
num_examples: 250
- name: ru
num_bytes: 58039
num_examples: 250
- name: zh
num_bytes: 44307
num_examples: 250
- name: ja
num_bytes: 40936
num_examples: 250
- name: th
num_bytes: 44383
num_examples: 250
- name: sw
num_bytes: 53708
num_examples: 250
- name: bn
num_bytes: 76978
num_examples: 250
- name: te
num_bytes: 56112
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 529882
dataset_size: 556905
- config_name: xglm-2.9B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 60811
num_examples: 250
- name: fr
num_bytes: 58777
num_examples: 250
- name: de
num_bytes: 60297
num_examples: 250
- name: ru
num_bytes: 58133
num_examples: 250
- name: zh
num_bytes: 43453
num_examples: 250
- name: ja
num_bytes: 48201
num_examples: 250
- name: th
num_bytes: 39620
num_examples: 250
- name: sw
num_bytes: 56296
num_examples: 250
- name: bn
num_bytes: 50937
num_examples: 250
- name: te
num_bytes: 46948
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 499131
dataset_size: 526155
- config_name: xglm-4.5B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 68793
num_examples: 250
- name: fr
num_bytes: 68088
num_examples: 250
- name: de
num_bytes: 76522
num_examples: 250
- name: ru
num_bytes: 63439
num_examples: 250
- name: zh
num_bytes: 58577
num_examples: 250
- name: ja
num_bytes: 56872
num_examples: 250
- name: th
num_bytes: 58692
num_examples: 250
- name: sw
num_bytes: 72348
num_examples: 250
- name: bn
num_bytes: 63835
num_examples: 250
- name: te
num_bytes: 58979
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 621817
dataset_size: 648827
- config_name: xglm-7.5B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 56510
num_examples: 250
- name: fr
num_bytes: 56170
num_examples: 250
- name: de
num_bytes: 56587
num_examples: 250
- name: ru
num_bytes: 55870
num_examples: 250
- name: zh
num_bytes: 53385
num_examples: 250
- name: ja
num_bytes: 51831
num_examples: 250
- name: th
num_bytes: 49858
num_examples: 250
- name: sw
num_bytes: 55484
num_examples: 250
- name: bn
num_bytes: 51975
num_examples: 250
- name: te
num_bytes: 51737
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 515073
dataset_size: 542089
- config_name: bloom-560m
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 47987
num_examples: 250
- name: fr
num_bytes: 43992
num_examples: 250
- name: de
num_bytes: 56995
num_examples: 250
- name: ru
num_bytes: 72240
num_examples: 250
- name: zh
num_bytes: 61450
num_examples: 250
- name: ja
num_bytes: 73445
num_examples: 250
- name: th
num_bytes: 180123
num_examples: 250
- name: sw
num_bytes: 50369
num_examples: 250
- name: bn
num_bytes: 86465
num_examples: 250
- name: te
num_bytes: 75244
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 724012
dataset_size: 750992
- config_name: bloom-1b1
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 56625
num_examples: 250
- name: fr
num_bytes: 53998
num_examples: 250
- name: de
num_bytes: 56874
num_examples: 250
- name: ru
num_bytes: 32323
num_examples: 250
- name: zh
num_bytes: 50902
num_examples: 250
- name: ja
num_bytes: 38347
num_examples: 250
- name: th
num_bytes: 20754
num_examples: 250
- name: sw
num_bytes: 27779
num_examples: 250
- name: bn
num_bytes: 34663
num_examples: 250
- name: te
num_bytes: 24958
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 372897
dataset_size: 399905
- config_name: bloom-1b7
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 44595
num_examples: 250
- name: fr
num_bytes: 48809
num_examples: 250
- name: de
num_bytes: 57435
num_examples: 250
- name: ru
num_bytes: 45954
num_examples: 250
- name: zh
num_bytes: 47375
num_examples: 250
- name: ja
num_bytes: 51493
num_examples: 250
- name: th
num_bytes: 24154
num_examples: 250
- name: sw
num_bytes: 41557
num_examples: 250
- name: bn
num_bytes: 37503
num_examples: 250
- name: te
num_bytes: 42682
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 417273
dataset_size: 444239
- config_name: bloom-3b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 60956
num_examples: 250
- name: fr
num_bytes: 61243
num_examples: 250
- name: de
num_bytes: 60337
num_examples: 250
- name: ru
num_bytes: 61329
num_examples: 250
- name: zh
num_bytes: 57078
num_examples: 250
- name: ja
num_bytes: 64180
num_examples: 250
- name: th
num_bytes: 24167
num_examples: 250
- name: sw
num_bytes: 45735
num_examples: 250
- name: bn
num_bytes: 45720
num_examples: 250
- name: te
num_bytes: 40840
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 497369
dataset_size: 524267
- config_name: bloom-7b1
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63425
num_examples: 250
- name: fr
num_bytes: 61340
num_examples: 250
- name: de
num_bytes: 61858
num_examples: 250
- name: ru
num_bytes: 60070
num_examples: 250
- name: zh
num_bytes: 59410
num_examples: 250
- name: ja
num_bytes: 57485
num_examples: 250
- name: th
num_bytes: 24974
num_examples: 250
- name: sw
num_bytes: 58232
num_examples: 250
- name: bn
num_bytes: 57178
num_examples: 250
- name: te
num_bytes: 57703
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 537348
dataset_size: 564357
- config_name: llama-7B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 55313
num_examples: 250
- name: fr
num_bytes: 61302
num_examples: 250
- name: de
num_bytes: 62152
num_examples: 250
- name: ru
num_bytes: 60929
num_examples: 250
- name: zh
num_bytes: 59157
num_examples: 250
- name: ja
num_bytes: 57356
num_examples: 250
- name: th
num_bytes: 41148
num_examples: 250
- name: sw
num_bytes: 56414
num_examples: 250
- name: bn
num_bytes: 52156
num_examples: 250
- name: te
num_bytes: 7360
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 488983
dataset_size: 515969
- config_name: llama-13B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 62592
num_examples: 250
- name: fr
num_bytes: 61965
num_examples: 250
- name: de
num_bytes: 62148
num_examples: 250
- name: ru
num_bytes: 61099
num_examples: 250
- name: zh
num_bytes: 59858
num_examples: 250
- name: ja
num_bytes: 55759
num_examples: 250
- name: th
num_bytes: 51280
num_examples: 250
- name: sw
num_bytes: 56081
num_examples: 250
- name: bn
num_bytes: 48204
num_examples: 250
- name: te
num_bytes: 6128
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 500978
dataset_size: 527796
- config_name: llama-30B
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 36577
num_examples: 250
- name: fr
num_bytes: 50763
num_examples: 250
- name: de
num_bytes: 63141
num_examples: 250
- name: ru
num_bytes: 58198
num_examples: 250
- name: zh
num_bytes: 61880
num_examples: 250
- name: ja
num_bytes: 55989
num_examples: 250
- name: th
num_bytes: 53253
num_examples: 250
- name: sw
num_bytes: 59724
num_examples: 250
- name: bn
num_bytes: 51345
num_examples: 250
- name: te
num_bytes: 6546
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 473194
dataset_size: 500098
- config_name: RedPajama-INCITE-Base-3B-v1
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 61548
num_examples: 250
- name: fr
num_bytes: 61357
num_examples: 250
- name: de
num_bytes: 58325
num_examples: 250
- name: ru
num_bytes: 61655
num_examples: 250
- name: zh
num_bytes: 61669
num_examples: 250
- name: ja
num_bytes: 59500
num_examples: 250
- name: th
num_bytes: 31415
num_examples: 250
- name: sw
num_bytes: 72056
num_examples: 250
- name: bn
num_bytes: 26241
num_examples: 250
- name: te
num_bytes: 26116
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 495561
dataset_size: 522564
- config_name: RedPajama-INCITE-7B-Base
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63198
num_examples: 250
- name: fr
num_bytes: 61124
num_examples: 250
- name: de
num_bytes: 60728
num_examples: 250
- name: ru
num_bytes: 60378
num_examples: 250
- name: zh
num_bytes: 50030
num_examples: 250
- name: ja
num_bytes: 57939
num_examples: 250
- name: th
num_bytes: 25615
num_examples: 250
- name: sw
num_bytes: 60635
num_examples: 250
- name: bn
num_bytes: 18704
num_examples: 250
- name: te
num_bytes: 21116
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 455157
dataset_size: 482149
- config_name: open_llama_3b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 59734
num_examples: 250
- name: fr
num_bytes: 59925
num_examples: 250
- name: de
num_bytes: 60270
num_examples: 250
- name: ru
num_bytes: 62725
num_examples: 250
- name: zh
num_bytes: 34013
num_examples: 250
- name: ja
num_bytes: 28163
num_examples: 250
- name: th
num_bytes: 13190
num_examples: 250
- name: sw
num_bytes: 46125
num_examples: 250
- name: bn
num_bytes: 5721
num_examples: 250
- name: te
num_bytes: 5605
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 351125
dataset_size: 378153
- config_name: open_llama_7b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 61962
num_examples: 250
- name: fr
num_bytes: 60687
num_examples: 250
- name: de
num_bytes: 60474
num_examples: 250
- name: ru
num_bytes: 61525
num_examples: 250
- name: zh
num_bytes: 36631
num_examples: 250
- name: ja
num_bytes: 29926
num_examples: 250
- name: th
num_bytes: 11176
num_examples: 250
- name: sw
num_bytes: 61601
num_examples: 250
- name: bn
num_bytes: 5080
num_examples: 250
- name: te
num_bytes: 5899
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 370615
dataset_size: 397643
- config_name: open_llama_13b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63245
num_examples: 250
- name: fr
num_bytes: 61569
num_examples: 250
- name: de
num_bytes: 62071
num_examples: 250
- name: ru
num_bytes: 60086
num_examples: 250
- name: zh
num_bytes: 37475
num_examples: 250
- name: ja
num_bytes: 32072
num_examples: 250
- name: th
num_bytes: 12902
num_examples: 250
- name: sw
num_bytes: 58870
num_examples: 250
- name: bn
num_bytes: 5624
num_examples: 250
- name: te
num_bytes: 5647
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 375230
dataset_size: 402243
- config_name: open_llama_7b_v2
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 62306
num_examples: 250
- name: fr
num_bytes: 61168
num_examples: 250
- name: de
num_bytes: 60439
num_examples: 250
- name: ru
num_bytes: 60916
num_examples: 250
- name: zh
num_bytes: 57891
num_examples: 250
- name: ja
num_bytes: 53155
num_examples: 250
- name: th
num_bytes: 34743
num_examples: 250
- name: sw
num_bytes: 58901
num_examples: 250
- name: bn
num_bytes: 34548
num_examples: 250
- name: te
num_bytes: 5253
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 464986
dataset_size: 492002
- config_name: falcon-7b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 46760
num_examples: 250
- name: fr
num_bytes: 33877
num_examples: 250
- name: de
num_bytes: 51277
num_examples: 250
- name: ru
num_bytes: 59591
num_examples: 250
- name: zh
num_bytes: 37624
num_examples: 250
- name: ja
num_bytes: 46601
num_examples: 250
- name: th
num_bytes: 37107
num_examples: 250
- name: sw
num_bytes: 31857
num_examples: 250
- name: bn
num_bytes: 18472
num_examples: 250
- name: te
num_bytes: 18376
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 357224
dataset_size: 384224
- config_name: xgen-7b-4k-base
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63837
num_examples: 250
- name: fr
num_bytes: 62076
num_examples: 250
- name: de
num_bytes: 62146
num_examples: 250
- name: ru
num_bytes: 61401
num_examples: 250
- name: zh
num_bytes: 60295
num_examples: 250
- name: ja
num_bytes: 57008
num_examples: 250
- name: th
num_bytes: 18524
num_examples: 250
- name: sw
num_bytes: 56158
num_examples: 250
- name: bn
num_bytes: 25948
num_examples: 250
- name: te
num_bytes: 5803
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 448853
dataset_size: 475878
- config_name: xgen-7b-8k-base
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63243
num_examples: 250
- name: fr
num_bytes: 60948
num_examples: 250
- name: de
num_bytes: 61832
num_examples: 250
- name: ru
num_bytes: 59217
num_examples: 250
- name: zh
num_bytes: 60354
num_examples: 250
- name: ja
num_bytes: 57012
num_examples: 250
- name: th
num_bytes: 28194
num_examples: 250
- name: sw
num_bytes: 56686
num_examples: 250
- name: bn
num_bytes: 27221
num_examples: 250
- name: te
num_bytes: 5460
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 455836
dataset_size: 482849
- config_name: xgen-7b-8k-inst
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63113
num_examples: 250
- name: fr
num_bytes: 60264
num_examples: 250
- name: de
num_bytes: 59762
num_examples: 250
- name: ru
num_bytes: 59374
num_examples: 250
- name: zh
num_bytes: 62900
num_examples: 250
- name: ja
num_bytes: 60877
num_examples: 250
- name: th
num_bytes: 26089
num_examples: 250
- name: sw
num_bytes: 57640
num_examples: 250
- name: bn
num_bytes: 24301
num_examples: 250
- name: te
num_bytes: 5290
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 455320
dataset_size: 482292
- config_name: polylm-1.7b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 55706
num_examples: 250
- name: fr
num_bytes: 55751
num_examples: 250
- name: de
num_bytes: 54071
num_examples: 250
- name: ru
num_bytes: 37159
num_examples: 250
- name: zh
num_bytes: 47577
num_examples: 250
- name: ja
num_bytes: 38931
num_examples: 250
- name: th
num_bytes: 40203
num_examples: 250
- name: sw
num_bytes: 20814
num_examples: 250
- name: bn
num_bytes: 24317
num_examples: 250
- name: te
num_bytes: 7420
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 357603
dataset_size: 384631
- config_name: polylm-13b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63444
num_examples: 250
- name: fr
num_bytes: 62136
num_examples: 250
- name: de
num_bytes: 63002
num_examples: 250
- name: ru
num_bytes: 62522
num_examples: 250
- name: zh
num_bytes: 59722
num_examples: 250
- name: ja
num_bytes: 55541
num_examples: 250
- name: th
num_bytes: 57684
num_examples: 250
- name: sw
num_bytes: 46889
num_examples: 250
- name: bn
num_bytes: 28704
num_examples: 250
- name: te
num_bytes: 7883
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 483392
dataset_size: 510209
- config_name: polylm-multialpaca-13b
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 62502
num_examples: 250
- name: fr
num_bytes: 60978
num_examples: 250
- name: de
num_bytes: 62310
num_examples: 250
- name: ru
num_bytes: 60440
num_examples: 250
- name: zh
num_bytes: 57642
num_examples: 250
- name: ja
num_bytes: 55315
num_examples: 250
- name: th
num_bytes: 59002
num_examples: 250
- name: sw
num_bytes: 51728
num_examples: 250
- name: bn
num_bytes: 31947
num_examples: 250
- name: te
num_bytes: 12891
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 490498
dataset_size: 517437
- config_name: open_llama_3b_v2
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 62474
num_examples: 250
- name: fr
num_bytes: 60493
num_examples: 250
- name: de
num_bytes: 59760
num_examples: 250
- name: ru
num_bytes: 57592
num_examples: 250
- name: zh
num_bytes: 54634
num_examples: 250
- name: ja
num_bytes: 53936
num_examples: 250
- name: th
num_bytes: 38960
num_examples: 250
- name: sw
num_bytes: 57320
num_examples: 250
- name: bn
num_bytes: 27394
num_examples: 250
- name: te
num_bytes: 4680
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 452910
dataset_size: 479925
- config_name: Llama-2-7b-hf
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63035
num_examples: 250
- name: fr
num_bytes: 61128
num_examples: 250
- name: de
num_bytes: 61496
num_examples: 250
- name: ru
num_bytes: 59918
num_examples: 250
- name: zh
num_bytes: 59415
num_examples: 250
- name: ja
num_bytes: 54466
num_examples: 250
- name: th
num_bytes: 37269
num_examples: 250
- name: sw
num_bytes: 53461
num_examples: 250
- name: bn
num_bytes: 42955
num_examples: 250
- name: te
num_bytes: 7122
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 475925
dataset_size: 502947
- config_name: Llama-2-13b-hf
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63347
num_examples: 250
- name: fr
num_bytes: 62187
num_examples: 250
- name: de
num_bytes: 63309
num_examples: 250
- name: ru
num_bytes: 62772
num_examples: 250
- name: zh
num_bytes: 62210
num_examples: 250
- name: ja
num_bytes: 59083
num_examples: 250
- name: th
num_bytes: 57690
num_examples: 250
- name: sw
num_bytes: 57538
num_examples: 250
- name: bn
num_bytes: 54947
num_examples: 250
- name: te
num_bytes: 7062
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 525803
dataset_size: 552827
- config_name: Llama-2-7b-chat-hf
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 58203
num_examples: 250
- name: fr
num_bytes: 40149
num_examples: 250
- name: de
num_bytes: 57587
num_examples: 250
- name: ru
num_bytes: 47777
num_examples: 250
- name: zh
num_bytes: 50018
num_examples: 250
- name: ja
num_bytes: 54107
num_examples: 250
- name: th
num_bytes: 41549
num_examples: 250
- name: sw
num_bytes: 61414
num_examples: 250
- name: bn
num_bytes: 37996
num_examples: 250
- name: te
num_bytes: 10156
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 434632
dataset_size: 461638
- config_name: Llama-2-13b-chat-hf
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_number
dtype: int32
- name: equation_solution
dtype: string
splits:
- name: es
num_bytes: 63304
num_examples: 250
- name: fr
num_bytes: 61708
num_examples: 250
- name: de
num_bytes: 63291
num_examples: 250
- name: ru
num_bytes: 62305
num_examples: 250
- name: zh
num_bytes: 61994
num_examples: 250
- name: ja
num_bytes: 58226
num_examples: 250
- name: th
num_bytes: 60256
num_examples: 250
- name: sw
num_bytes: 58108
num_examples: 250
- name: bn
num_bytes: 55180
num_examples: 250
- name: te
num_bytes: 6525
num_examples: 250
- name: train
num_bytes: 2682
num_examples: 8
download_size: 526574
dataset_size: 553579
---
# Dataset Card for MGSM MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://openai.com/blog/grade-school-math/
- **Repository:** https://github.com/openai/grade-school-math
- **Paper:** https://arxiv.org/abs/2110.14168
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language models are multilingual chain-of-thought reasoners](http://arxiv.org/abs/2210.03057). This dataset is a machine-translated version of MGSM, translated from each language back into English (one configuration per translation model, as listed in the metadata above).
The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) were each translated by human annotators into 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu
GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
You can find the input and targets for each of the ten languages (and English) as `.tsv` files.
We also include few-shot exemplars that are also manually translated from each language in `exemplars.py`.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) are each translated via human annotators in 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu
This dataset is a machine-translated version of MGSM, translated from each language back into English.
## Dataset Structure
### Data Instances
Each instance in the train split contains:
- a string for the grade-school level math question
- a string for the corresponding answer with chain-of-thought steps.
- the numeric solution to the question
- the equation solution to the question
```python
{'question': 'Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?',
'answer': 'Step-by-Step Answer: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.',
'answer_number': 11,
'equation_solution': '5 + 6 = 11.'}
```
Each instance in the test split contains:
- a string for the grade-school level math question
- the numeric solution to the question
```python
{'question': "Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?",
'answer': None,
'answer_number': 18,
'equation_solution': None}
```
### Data Fields
The data fields are the same among `train` and `test` splits.
- question: The question string to a grade school math problem.
- answer: The full solution string to the `question`. It contains multiple steps of reasoning with calculator annotations and the final numeric solution.
- answer_number: The numeric solution to the `question`.
- equation_solution: The equation solution to the `question`.
### Data Splits
- The train split includes 8 few-shot exemplars that are also manually translated from each language.
- The test split includes the same 250 problems from GSM8K translated via human annotators in 10 languages.
| name |train|test |
|--------|----:|---------:|
|en | 8 | 250 |
|es | 8 | 250 |
|fr | 8 | 250 |
|de | 8 | 250 |
|ru | 8 | 250 |
|zh | 8 | 250 |
|ja | 8 | 250 |
|th | 8 | 250 |
|sw | 8 | 250 |
|bn | 8 | 250 |
|te | 8 | 250 |
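A minimal loading sketch: each configuration is named after the translation model (see the metadata above), and the language splits use the two-letter codes from the table plus a `train` split of few-shot exemplars.
```python
from datasets import load_dataset

# "Llama-2-7b-hf" is one of the configurations listed in the metadata above.
mgsm_mt = load_dataset("juletxara/mgsm_mt", "Llama-2-7b-hf")

example = mgsm_mt["es"][0]      # Spanish problems machine-translated into English
print(example["question"], example["answer_number"])

exemplar = mgsm_mt["train"][0]  # few-shot exemplar with a chain-of-thought answer
print(exemplar["answer"])
```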
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (upwork.com). We then worked with Surge AI (surgehq.ai), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Surge AI (surgehq.ai)
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The GSM8K dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```bibtex
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
@misc{shi2022language,
title={Language Models are Multilingual Chain-of-Thought Reasoners},
author={Freda Shi and Mirac Suzgun and Markus Freitag and Xuezhi Wang and Suraj Srivats and Soroush Vosoughi and Hyung Won Chung and Yi Tay and Sebastian Ruder and Denny Zhou and Dipanjan Das and Jason Wei},
year={2022},
eprint={2210.03057},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@juletx](https://github.com/juletx) for adding this dataset. | [
-0.43329471349716187,
-0.6835206747055054,
0.3572288453578949,
0.23227082192897797,
-0.11462344229221344,
0.012457751668989658,
-0.15465818345546722,
-0.218763068318367,
0.1769639551639557,
0.42711156606674194,
-0.773747444152832,
-0.5676177144050598,
-0.5672430396080017,
0.385069578886032... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dbdu/ShareGPT-74k-ko | dbdu | 2023-08-19T07:00:39Z | 24 | 11 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-2.0",
"conversation",
"chatgpt",
"gpt-3.5",
"region:us"
] | 2023-08-19T07:00:39Z | 2023-05-23T16:30:43.000Z | 2023-05-23T16:30:43 | ---
language:
- ko
pretty_name: ShareGPT-74k-ko
tags:
- conversation
- chatgpt
- gpt-3.5
license: cc-by-2.0
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---
# ShareGPT-ko-74k
ShareGPT 90k의 cleaned 버전을 구글 번역기를 이용하여 번역하였습니다.\
원본 데이터셋은 [여기](https://github.com/lm-sys/FastChat/issues/90)에서 확인하실 수 있습니다.
Korean-translated version of ShareGPT-90k, translated using Google Translate.\
You can check the original dataset [here](https://github.com/lm-sys/FastChat/issues/90).
## Dataset Description
json 파일의 구조는 원본 데이터셋과 동일합니다.\
`*_uncleaned.json`은 원본 데이터셋을 번역하고 따로 후처리하지 않은 데이터셋입니다. (총 74k)\
`*_cleaned.json`은 위의 데이터에서 코드가 포함된 데이터를 러프하게 제거한 데이터셋입니다. (총 55k)\
**주의**: 코드는 번역되었을 수 있으므로 cleaned를 쓰시는 걸 추천합니다.
The structure of the dataset is the same with the original dataset.\
`*_uncleaned.json` are Korean-translated data, without any post-processing. (total 74k dialogues)\
`*_cleaned.json` are the post-processed version, from which dialogues containing code snippets have been removed. (total 55k dialogues)\
**WARNING**: Code snippets might have been translated into Korean. I recommend you use cleaned files.
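Since the json structure matches the original release, a record can be read as below (a sketch only: the file name is illustrative, and the layout assumes the standard ShareGPT `conversations` format):
```python
import json

# File name is illustrative; the record layout assumes the standard ShareGPT format.
with open("sharegpt_ko_cleaned.json", encoding="utf-8") as f:
    dialogues = json.load(f)

dialogue = dialogues[0]
print(dialogue["id"])
for turn in dialogue["conversations"]:
    print(turn["from"], ":", turn["value"][:60])
```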
## Licensing Information
GPT를 이용한 데이터셋이므로 OPENAI의 [약관](https://openai.com/policies/terms-of-use)을 따릅니다.\
그 외의 경우 [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/)을 따릅니다.
The licensing status of the datasets follows [OPENAI Licence](https://openai.com/policies/terms-of-use) as it contains GPT-generated sentences.\
For all the other cases, the licensing status follows [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/).
## Code
번역에 사용한 코드는 아래 리포지토리에서 확인 가능합니다. Check out the following repository to see the translation code used.\
https://github.com/dubuduru/ShareGPT-translation
You can use the repository to translate ShareGPT-like dataset into your preferred language. | [
-0.2797664999961853,
-0.70662921667099,
0.2736070454120636,
0.41922515630722046,
-0.6505972146987915,
-0.13851796090602875,
-0.4094012975692749,
-0.24410724639892578,
0.42049798369407654,
0.5210071802139282,
-0.687443733215332,
-0.7538614869117737,
-0.6882467865943909,
0.17457285523414612,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
winddude/reddit_finance_43_250k | winddude | 2023-05-25T23:06:03Z | 24 | 25 | null | [
"language:en",
"license:gpl-3.0",
"finance",
"investing",
"crypto",
"reddit",
"region:us"
] | 2023-05-25T23:06:03Z | 2023-05-25T21:31:02.000Z | 2023-05-25T21:31:02 | ---
license: gpl-3.0
language:
- en
tags:
- finance
- investing
- crypto
- reddit
---
# reddit finance 43 250k
`reddit_finance_43_250k` is a collection of 250k post/comment pairs from 43 financial, investing and crypto subreddits. Posts must all have been text-only, with a length of at least 250 characters and a positive score. Each subreddit is narrowed down to its 70th score quantile before being merged with its top 3 comments and then with the other subreddits. Further score-based methods are used to select the top 250k post/comment pairs.
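As a rough illustrative sketch of that score-based filtering (not the actual pipeline, which is linked below; the file and column names here are hypothetical):
```python
import pandas as pd

# Hypothetical input and column names; see the linked repository for the real builder.
posts = pd.read_json("subreddit_posts.jsonl", lines=True)

# Keep text-only posts of sufficient length with a positive score.
posts = posts[(posts["selftext"].str.len() >= 250) & (posts["score"] > 0)]

# Narrow the subreddit down to its 70th score quantile before merging.
threshold = posts["score"].quantile(0.70)
posts = posts[posts["score"] >= threshold]
```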
The code to recreate the dataset is here: <https://github.com/getorca/ProfitsBot_V0_OLLM/tree/main/ds_builder>
The trained lora model is here: <https://huggingface.co/winddude/pb_lora_7b_v0.1> | [
-0.5921700596809387,
-0.723108172416687,
0.26471036672592163,
0.4189414083957672,
-0.6760358214378357,
0.10731832683086395,
-0.09404793381690979,
-0.6692920327186584,
0.6658473014831543,
0.49526581168174744,
-0.8206382393836975,
-0.6527250409126282,
-0.6587677001953125,
-0.0840518623590469... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distilled-one-sec-cv12-each-chunk-uniq/chunk_125 | distilled-one-sec-cv12-each-chunk-uniq | 2023-05-29T03:11:17Z | 24 | 0 | null | [
"region:us"
] | 2023-05-29T03:11:17Z | 2023-05-29T03:08:56.000Z | 2023-05-29T03:08:56 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1310969400.0
num_examples: 255450
download_size: 1342343695
dataset_size: 1310969400.0
---
# Dataset Card for "chunk_125"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6207195520401001,
-0.4221693277359009,
0.21213653683662415,
0.4314512312412262,
-0.5048896074295044,
0.06397652626037598,
0.3161798119544983,
-0.283447265625,
1.1365838050842285,
0.47997552156448364,
-0.7759819626808167,
-0.5531481504440308,
-0.7326419949531555,
-0.2680428922176361,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/PRM800K | tasksource | 2023-05-31T21:22:16Z | 24 | 4 | null | [
"license:mit",
"region:us"
] | 2023-05-31T21:22:16Z | 2023-05-31T21:18:25.000Z | 2023-05-31T21:18:25 | ---
license: mit
---
https://github.com/openai/prm800k/tree/main
| [
-0.6723081469535828,
-0.22507834434509277,
0.10818281024694443,
0.06443436443805695,
-0.647950291633606,
-0.012359149754047394,
0.08882009983062744,
-0.22811932861804962,
0.7216116786003113,
0.46765923500061035,
-0.7970585823059082,
-0.5334869623184204,
-0.1398032158613205,
-0.038546662777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cdminix/libritts-r-aligned | cdminix | 2023-07-02T15:13:39Z | 24 | 5 | null | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"speech",
"audio",
"automatic-speech-recognition",
"text-to-speech",
"arxiv:1904.02882",
"arxiv:2211.16049",
"region:us"
] | 2023-07-02T15:13:39Z | 2023-06-07T08:35:07.000Z | 2023-06-07T08:35:07 | ---
pretty_name: LibriTTS Corpus with Forced Alignments
annotations_creators:
- crowdsourced
language: en
tags:
- speech
- audio
- automatic-speech-recognition
- text-to-speech
license:
- cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
extra_gated_prompt: "When using this dataset to download LibriTTS, you agree to the terms on https://www.openslr.org"
---
> This dataset is identical to **[cdminix/libritts-aligned](https://huggingface.co/datasets/cdminix/libritts-aligned)** except it uses the newly released LibriTTS-R corpus. Please cite **[Y. Koizumi, et al., "LibriTTS-R: Restoration of a Large-Scale Multi-Speaker TTS Corpus", Interspeech 2023](https://google.github.io/df-conformer/librittsr/)**
*When using this dataset to download LibriTTS-R, make sure you agree to the terms on https://www.openslr.org*
# Dataset Card for LibriTTS-R with Forced Alignments (and Measures)
This dataset downloads LibriTTS-R and preprocesses it on your machine to create alignments using [montreal forced aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/).
You need to run ``pip install alignments phones`` before using this dataset.
When running this the first time, it can take an hour or two, but subsequent runs will be lightning fast.
## Requirements
- ``pip install alignments phones`` **(required)**
- ``pip install speech-collator`` (optional)
*Note: version >=0.0.15 of alignments is required for this corpus*
## Example Item
```json
{
'id': '100_122655_000073_000002.wav',
'speaker': '100',
'text': 'the day after, diana and mary quitted it for distant b.',
'start': 0.0,
'end': 3.6500000953674316,
'phones': ['[SILENCE]', 'ð', 'ʌ', '[SILENCE]', 'd', 'eɪ', '[SILENCE]', 'æ', 'f', 't', 'ɜ˞', '[COMMA]', 'd', 'aɪ', 'æ', 'n', 'ʌ', '[SILENCE]', 'æ', 'n', 'd', '[SILENCE]', 'm', 'ɛ', 'ɹ', 'i', '[SILENCE]', 'k', 'w', 'ɪ', 't', 'ɪ', 'd', '[SILENCE]', 'ɪ', 't', '[SILENCE]', 'f', 'ɜ˞', '[SILENCE]', 'd', 'ɪ', 's', 't', 'ʌ', 'n', 't', '[SILENCE]', 'b', 'i', '[FULL STOP]'],
'phone_durations': [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3, 5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0, 7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9],
'audio': '/dev/shm/metts/train-clean-360-alignments/100/100_122655_000073_000002.wav'
}
```
The phones are IPA phones, and the phone durations are in frames (assuming a hop length of 256, sample rate of 22050 and window length of 1024). These attributes can be changed using the ``hop_length``, ``sample_rate`` and ``window_length`` arguments to ``LibriTTSAlign``.
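For reference, frame counts convert to seconds as `frames * hop_length / sample_rate`; at the defaults above, a 13-frame phone lasts 13 * 256 / 22050 ≈ 0.151 seconds.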
## Data Collator
This dataset comes with a data collator which can be used to create batches of data for training.
It can be installed using ``pip install speech-collator`` ([MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator)) and can be used as follows:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator
from torch.utils.data import DataLoader
dataset = load_dataset('cdminix/libritts-r-aligned', split="train")
speaker2idx = json.load(open("speaker2idx.json"))
phone2idx = json.load(open("phone2idx.json"))
collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
)
dataloader = DataLoader(dataset, collate_fn=collator.collate_fn, batch_size=8)
```
You can either download the ``speaker2idx.json`` and ``phone2idx.json`` files from [here](https://huggingface.co/datasets/cdminix/libritts-aligned/tree/main/data) or create them yourself using the following code:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
dataset = load_dataset("cdminix/libritts-aligned", split="train")
# Create speaker2idx and phone2idx
speaker2idx = create_speaker2idx(dataset, unk_idx=0)
phone2idx = create_phone2idx(dataset, unk_idx=0)
# save to json
with open("speaker2idx.json", "w") as f:
json.dump(speaker2idx, f)
with open("phone2idx.json", "w") as f:
json.dump(phone2idx, f)
```
### Measures
When using ``speech-collator`` you can also use the ``measures`` argument to specify which measures to use. The following example extracts Pitch and Energy on the fly.
```python
import json
from torch.utils.data import DataLoader
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
from speech_collator.measures import PitchMeasure, EnergyMeasure
dataset = load_dataset("cdminix/libritts-aligned", split="train")
speaker2idx = json.load(open("data/speaker2idx.json"))
phone2idx = json.load(open("data/phone2idx.json"))
# Create SpeechCollator
speech_collator = SpeechCollator(
speaker2idx=speaker2idx,
phone2idx=phone2idx,
measures=[PitchMeasure(), EnergyMeasure()],
return_keys=["measures"]
)
# Create DataLoader
dataloader = DataLoader(
dataset,
batch_size=8,
collate_fn=speech_collator.collate_fn,
)
```
COMING SOON: Detailed documentation on how to use the measures at [MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator).
## Splits
This dataset has the following splits:
- ``train``: All the training data, except one sample per speaker which is used for validation.
- ``dev``: The validation data, one sample per speaker.
- ``train.clean.100``: Training set derived from the original materials of the train-clean-100 subset of LibriSpeech.
- ``train.clean.360``: Training set derived from the original materials of the train-clean-360 subset of LibriSpeech.
- ``train.other.500``: Training set derived from the original materials of the train-other-500 subset of LibriSpeech.
- ``dev.clean``: Validation set derived from the original materials of the dev-clean subset of LibriSpeech.
- ``dev.other``: Validation set derived from the original materials of the dev-other subset of LibriSpeech.
- ``test.clean``: Test set derived from the original materials of the test-clean subset of LibriSpeech.
- ``test.other``: Test set derived from the original materials of the test-other subset of LibriSpeech.
## Environment Variables
There are a few environment variables which can be set, as shown in the sketch below.
- ``LIBRITTS_VERBOSE``: If set, will print out more information about the dataset creation process.
- ``LIBRITTS_MAX_WORKERS``: The number of workers to use when creating the alignments. Defaults to ``cpu_count()``.
- ``LIBRITTS_PATH``: The path to download LibriTTS to. Defaults to the value of ``HF_DATASETS_CACHE``.
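A minimal sketch of setting these before loading (the values shown are placeholders):
```python
import os

# Configure before the first call to load_dataset, since the variables
# are read during dataset preparation.
os.environ["LIBRITTS_VERBOSE"] = "1"            # print extra progress information
os.environ["LIBRITTS_MAX_WORKERS"] = "4"        # limit alignment workers
os.environ["LIBRITTS_PATH"] = "/data/libritts"  # hypothetical download path

from datasets import load_dataset
dataset = load_dataset("cdminix/libritts-aligned", split="dev")
```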
## Citation
When using LibriTTS-R please cite the following papers:
- [LibriTTS-R: Restoration of a Large-Scale Multi-Speaker TTS Corpus](https://google.github.io/df-conformer/librittsr/)
- [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://arxiv.org/abs/1904.02882)
- [Montreal Forced Aligner: Trainable text-speech alignment using Kaldi](https://www.researchgate.net/publication/319185277_Montreal_Forced_Aligner_Trainable_Text-Speech_Alignment_Using_Kaldi)
When using the Measures please cite the following paper (ours):
- [Evaluating and reducing the distance between synthetic and real speech distributions](https://arxiv.org/abs/2211.16049) | [
-0.28354817628860474,
-0.39054834842681885,
0.04939340427517891,
0.015563486143946648,
-0.08678394556045532,
-0.011530006304383278,
-0.35177916288375854,
-0.176126629114151,
0.31132274866104126,
0.2803652286529541,
-0.6073612570762634,
-0.5121657252311707,
-0.19671113789081573,
-0.05063414... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
binhgiangnguyendanh/reddit_casual_conversation_for_alpaca_lora | binhgiangnguyendanh | 2023-06-26T10:20:53Z | 24 | 0 | null | [
"region:us"
] | 2023-06-26T10:20:53Z | 2023-06-12T07:01:00.000Z | 2023-06-12T07:01:00 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7138483
num_examples: 8686
download_size: 2583834
dataset_size: 7138483
---
# Dataset Card for "reddit_casual_conversation_for_alpaca_lora"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6601298451423645,
-0.9497694373130798,
0.20214241743087769,
0.4460426867008209,
-0.5845410227775574,
-0.1485850214958191,
-0.05132230743765831,
-0.4305800795555115,
1.2327046394348145,
0.44066688418388367,
-0.8580188751220703,
-0.9497030973434448,
-0.6433477997779846,
-0.156369224190711... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tianyang/repobench-c | tianyang | 2023-06-24T01:37:41Z | 24 | 4 | null | [
"task_categories:text-generation",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:cc-by-nc-nd-4.0",
"code",
"arxiv:2306.03091",
"region:us"
] | 2023-06-24T01:37:41Z | 2023-06-16T07:18:00.000Z | 2023-06-16T07:18:00 | ---
language_creators:
- found
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Completion
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- document-retrieval
tags:
- code
size_categories:
- 100K<n<1M
---
# Dataset Card for RepoBench-C
## Dataset Description
- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091
## Dataset Summary
**RepoBench-C (Completion)** is a subtask of **RepoBench** ([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), focusing on the prediction of the next line of code, given in-file context (including several preceding lines and import statements) and cross-file context.
## Settings
- `cff`: short for cross_file_first, indicating that the cross-file module in the next line is used for the first time in the current file.
- `cfr`: short for cross_file_random, indicating that the cross-file module in the next line has already been used in the current file.
- `if`: short for in_file, indicating that the next line does not contain any cross-file module.
## Supported Tasks
- `python_cff`: python code prediction with cross-file-first setting.
- `python_cfr`: python code prediction with cross-file-random setting.
- `python_if`: python code prediction with in-file setting.
- `java_cff`: java code prediction with cross-file-first setting.
- `java_cfr`: java code prediction with cross-file-random setting.
- `java_if`: java code prediction with in-file setting.
## Loading Data
For example, if you want to load the `test` set to test your model on `Python` code prediction with the `cff` setting, you can do the following:
```python
from datasets import load_dataset
dataset = load_dataset("tianyang/repobench-c", "python_cff", split="test")
```
> Note: The `split` argument is optional. If not provided, the entire dataset will be loaded.
## Dataset Structure
```json
{
"repo_name": "repository name of the data point",
"file_path": "path/to/file",
"context": "commented and concatenated cross-file context",
"import_statement": "all import statements in the file",
"code": "the code for next-line prediction",
"prompt": "cross-file context + import statements + in-file code",
"next_line": "the next line of the code"
}
```
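Given this structure, evaluation typically amounts to feeding `prompt` to a model and comparing its generation against `next_line`. A minimal exact-match sketch (the `generate_next_line` function is a placeholder, not part of RepoBench; a fuller evaluation might also use metrics such as edit similarity):
```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-c", "python_cff", split="test")

def generate_next_line(prompt: str) -> str:
    # Placeholder: substitute a real model call (e.g. a code LLM) here.
    return ""

correct = 0
for example in dataset:
    prediction = generate_next_line(example["prompt"])
    # Exact match on stripped lines is the simplest possible metric.
    correct += prediction.strip() == example["next_line"].strip()
print(f"Exact match: {correct / len(dataset):.2%}")
```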
## Licensing Information
CC BY-NC-ND 4.0
## Citation Information
```bibtex
@misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset. | [
-0.4237877130508423,
-0.178151935338974,
-0.014108787290751934,
0.2080245465040207,
-0.11293083429336548,
0.036662954837083817,
-0.09345805644989014,
-0.45815420150756836,
0.16714780032634735,
0.47899940609931946,
-0.653777003288269,
-0.5613077282905579,
-0.3405420482158661,
0.138750523328... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PNLPhub/DigiMag | PNLPhub | 2023-06-20T09:39:05Z | 24 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-06-20T09:39:05Z | 2023-06-20T08:51:20.000Z | 2023-06-20T08:51:20 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BoyuanJackchen/mlee_he_starcoder_javascript | BoyuanJackchen | 2023-06-23T04:12:28Z | 24 | 0 | null | [
"region:us"
] | 2023-06-23T04:12:28Z | 2023-06-22T23:36:04.000Z | 2023-06-22T23:36:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinmang2/ucf_crime | jinmang2 | 2023-07-11T03:46:53Z | 24 | 0 | null | [
"region:us"
] | 2023-07-11T03:46:53Z | 2023-06-30T07:00:20.000Z | 2023-06-30T07:00:20 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awettig/Pile-YoutubeSubtitles-0.5B-6K-opt | awettig | 2023-07-10T19:35:45Z | 24 | 0 | null | [
"region:us"
] | 2023-07-10T19:35:45Z | 2023-07-10T19:34:17.000Z | 2023-07-10T19:34:17 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500643383
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1594423762
dataset_size: 6565589075
---
# Dataset Card for "Pile-YoutubeSubtitles-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.840127170085907,
-0.14372782409191132,
-0.15376047790050507,
0.18472571671009064,
-0.513096809387207,
0.10285825282335281,
0.32304510474205017,
0.13899867236614227,
0.9760262966156006,
0.6444172263145447,
-0.7991006970405579,
-0.5063397288322449,
-0.7455399036407471,
-0.2229600399732589... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/roleplay_instruct_v2_final | dim | 2023-10-04T14:15:48Z | 24 | 0 | null | [
"region:us"
] | 2023-10-04T14:15:48Z | 2023-08-19T17:55:17.000Z | 2023-08-19T17:55:17 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4382098
num_examples: 7188
download_size: 2880335
dataset_size: 4382098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "roleplay_instruct_v2_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.24759061634540558,
-0.19675563275814056,
0.11130595207214355,
0.2813774049282074,
-0.09313696622848511,
-0.2801879048347473,
0.36528831720352173,
-0.1930777132511139,
0.5355757474899292,
0.822071373462677,
-0.9913845658302307,
-0.6557160019874573,
-0.46844473481178284,
-0.47700431942939... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
renumics/emodb-enriched | renumics | 2023-09-23T08:54:14Z | 24 | 0 | null | [
"size_categories:n<1K",
"region:us"
] | 2023-09-23T08:54:14Z | 2023-08-25T12:59:02.000Z | 2023-08-25T12:59:02 | ---
size_categories:
- n<1K
dataset_info:
features:
- name: age
dtype: float32
- name: gender
dtype:
class_label:
names:
'0': female
'1': male
- name: emotion
dtype:
class_label:
names:
'0': anger
'1': boredom
'2': disgust
'3': fear
'4': happiness
'5': neutral
'6': sadness
- name: audio
dtype: audio
- name: m1_gender_prediction
dtype:
class_label:
names:
'0': female
'1': male
- name: m2_gender_prediction
dtype:
class_label:
names:
'0': female
'1': male
- name: m1_embedding
sequence: float32
length: 1028
- name: m2_embedding
sequence: float32
length: 1028
- name: emotion_embedding
sequence: float32
length: 1024
- name: m1_correct
dtype:
class_label:
names:
'0': wrong
'1': correct
- name: m2_correct
dtype:
class_label:
names:
'0': wrong
'1': correct
splits:
- name: train
num_bytes: 54231717.0
num_examples: 535
download_size: 56965550
dataset_size: 54231717.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
## Dataset Description
Emo-DB Database
The EMODB database is a freely available German emotional speech database, created by the Institute of Communication Science, Technical University of Berlin, Germany. Ten professional speakers (five male and five female) participated in the recordings. The database contains a total of 535 utterances and covers seven emotions: 1) anger; 2) boredom; 3) anxiety; 4) happiness; 5) sadness; 6) disgust; and 7) neutral. The data was recorded at a 48-kHz sampling rate and then down-sampled to 16 kHz.
Additional Information
Original URL: https://www.tu.berlin/en/kw/research/projects/emotional-speech
Every utterance is named according to the same scheme:
Positions 1-2: speaker number
Positions 3-5: code for the text
Position 6: emotion (the letter stands for the German emotion word)
Position 7: if there are more than two versions these are numbered a, b, c ....
Example: 03a01Fa.wav is the audio file from Speaker 03 speaking text a01 with the emotion "Freude" (Happiness).
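A minimal sketch of parsing this naming scheme (the letter-to-emotion mapping is an assumption reconstructed from the German emotion words, consistent with the "F = Freude" example above):
```python
# Parse an Emo-DB filename like "03a01Fa.wav" into its components.
# Assumed letter-to-emotion mapping from the German emotion words
# (W=Wut/anger, L=Langeweile/boredom, E=Ekel/disgust, A=Angst/fear,
#  F=Freude/happiness, T=Trauer/sadness, N=neutral).
EMOTIONS = {"W": "anger", "L": "boredom", "E": "disgust", "A": "fear",
            "F": "happiness", "T": "sadness", "N": "neutral"}

def parse_emodb_filename(name: str) -> dict:
    stem = name.removesuffix(".wav")
    return {
        "speaker": stem[0:2],          # positions 1-2
        "text": stem[2:5],             # positions 3-5
        "emotion": EMOTIONS[stem[5]],  # position 6
        "version": stem[6:] or None,   # position 7, if present
    }

print(parse_emodb_filename("03a01Fa.wav"))
# {'speaker': '03', 'text': 'a01', 'emotion': 'happiness', 'version': 'a'}
```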
Information about the speakers
03 - male, 31 years
08 - female, 34 years
09 - female, 21 years
10 - male, 32 years
11 - male, 26 years
12 - male, 30 years
13 - female, 32 years
14 - female, 35 years
15 - male, 25 years
16 - female, 31 years
| [
-0.7148935794830322,
-0.8834722638130188,
0.45816656947135925,
0.40114453434944153,
-0.30193182826042175,
-0.19051417708396912,
-0.15396840870380402,
-0.4172471761703491,
0.5174921154975891,
0.22834421694278717,
-0.8908725380897522,
-0.9908404350280762,
-0.4158773720264435,
0.4790742099285... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kant1/French_Wikipedia_articles | Kant1 | 2023-08-29T17:09:13Z | 24 | 0 | null | [
"task_categories:text-generation",
"language:fr",
"region:us"
] | 2023-08-29T17:09:13Z | 2023-08-29T16:59:23.000Z | 2023-08-29T16:59:23 | ---
task_categories:
- text-generation
language:
- fr
---
Dump of 2023-08-20 of all French articles in Wikipedia:
https://dumps.wikimedia.org/frwiki/20230820/frwiki-20230820-pages-articles.xml.bz2 | [
-0.5906935334205627,
-0.4634683132171631,
0.8187386989593506,
0.9070637822151184,
0.013876586221158504,
-0.7042540311813354,
0.3847454786300659,
-0.6320598721504211,
0.37287959456443787,
1.1563806533813477,
-0.4905036389827728,
-0.13487394154071808,
-0.5190807580947876,
0.38654541969299316... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BEE-spoke-data/bees-internal | BEE-spoke-data | 2023-11-24T23:58:42Z | 24 | 1 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-24T23:58:42Z | 2023-09-17T20:59:41.000Z | 2023-09-17T20:59:41 | ---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- text-generation
- fill-mask
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: section
dtype: string
- name: filename
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 124476110.64634146
num_examples: 934
- name: validation
num_bytes: 3331801.676829268
num_examples: 25
- name: test
num_bytes: 3331801.676829268
num_examples: 25
download_size: 77991240
dataset_size: 131139713.99999999
thumbnail: https://i.ibb.co/DCjs6R2/bessinternal.png
---
# Dataset Card for "bees-internal"
Full-length OCR of bee material. Documents are split into multiple chunks if they contain more than 0.5 MB of text, to avoid overloading the CPU during tokenization.
Tokens (tiktoken):
<pre> "metadata": {
"model": "gpt-3.5-turbo",
"clean_text": true,
"extension": "mmd",
"recursive": true,
"global_token_count": 30608761
}
</pre>
Files:
<pre>INFO:__main__:Found 984 text files.
INFO:__main__:Performing train-test split...
INFO:__main__:Performing validation-test split...
INFO:__main__:Train size: 934
INFO:__main__:Validation size: 25
INFO:__main__:Test size: 25
</pre>
| [
-0.4629153907299042,
-0.6322008967399597,
0.4129643440246582,
-0.0974944680929184,
-0.7683532238006592,
0.03372976928949356,
-0.36393412947654724,
-0.4444884657859802,
0.10359730571508408,
0.421813040971756,
-0.6841639280319214,
-0.6584274172782898,
-0.5751610398292542,
0.4019488990306854,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BAAI/COIG-PC-core | BAAI | 2023-09-25T10:33:33Z | 24 | 10 | null | [
"language:zh",
"license:unknown",
"region:us"
] | 2023-09-25T10:33:33Z | 2023-09-19T06:24:01.000Z | 2023-09-19T06:24:01 | ---
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: |
The Beijing Academy of Artificial Intelligence (hereinafter "we" or "the Academy") provides open-source datasets (hereinafter also "the datasets") to you through BAAI DataHub (data.baai.ac.cn) and the COIG-PC HuggingFace repository (https://huggingface.co/datasets/BAAI/COIG-PC). You may obtain the open-source datasets you need by downloading them and, provided you comply with the usage rules of each original dataset, use them for learning, research, commercial, and other purposes.
Before you obtain the open-source datasets (including but not limited to accessing, downloading, copying, distributing, using, or otherwise processing them), you should carefully read and understand this "COIG-PC Open-Source Dataset Usage Notice and Disclaimer" (hereinafter "this Statement"). Once you obtain the open-source datasets, by whatever means, your act of obtaining them will be deemed acceptance of the entire contents of this Statement.
1. Ownership and Operation of the Platform
You fully understand and acknowledge that ownership and the right to operate BAAI DataHub and the COIG-PC HuggingFace repository (including the current version and all historical versions) belong to the Beijing Academy of Artificial Intelligence, and that the Academy holds the final right of interpretation and decision over this platform/tool and the open-source dataset release program.
You acknowledge and understand that, in view of updates to and refinement of relevant laws and regulations, as well as objective changes in the legal compliance obligations we must fulfill, we reserve the right to update or maintain this platform/tool from time to time, or to suspend or even permanently terminate its services. We will notify you of any such circumstance within a reasonable time by announcement, email, or other reasonable means, and you should make the corresponding adjustments and arrangements in a timely manner; we bear no liability for any losses you incur as a result of any of the foregoing.
2. Rights Claims over the Open-Source Datasets
To facilitate your acquisition and use of the datasets for learning, research, and commercial purposes, we have carried out the necessary format integration, data cleaning, labeling, classification, annotation, and related processing of the third-party original datasets to form the open-source datasets available to users of this platform/tool.
You acknowledge and understand that we do not claim the property rights within intellectual property over the open-source datasets, and we therefore have no corresponding obligation to proactively identify and protect any intellectual property the datasets may contain; this does not, however, mean that we waive the personal rights of attribution, publication, modification, and integrity of the work (if any) with respect to the open-source datasets. Any intellectual property and corresponding legitimate rights and interests that may subsist in the original datasets belong to the original rights holders.
Furthermore, opening up and letting you use the open-source datasets after reasonable arrangement, processing, and handling does not imply that we endorse the intellectual property or informational content of the original datasets as true, accurate, or undisputed; you should screen and carefully verify the open-source datasets you choose to use. You acknowledge and agree that the Academy undertakes no commitment or warranty that the original datasets you choose to use are free of defects or flaws.
3. Restrictions on the Use of the Open-Source Datasets
Your use of the datasets must not infringe the legitimate rights and interests of us or any third party (including but not limited to copyright, patent rights, trademark rights, and other intellectual property and other rights and interests).
After obtaining the open-source datasets, you should ensure that your use does not exceed the usage rules expressly specified by the rights holders of the original datasets, whether by public notice, agreement, or otherwise, including the scope of use, purposes, and lawful uses of the original data. We remind you in good faith that if your use of an open-source dataset exceeds the originally intended scope and uses of the original dataset, you may face the risk of infringing the legitimate rights and interests (such as intellectual property) of the original dataset's rights holders, and you may bear the corresponding legal liability.
4. Protection of Personal Information
Owing to technical limitations and the public-interest nature of the open-source datasets, among other objective reasons, we cannot guarantee that the open-source datasets contain no personal information, and we bear no legal liability for any personal information that may be involved in them.
If the open-source datasets involve personal information, we bear no legal liability for any personal information processing that your use of the open-source datasets may entail. We remind you in good faith that you should handle personal information in accordance with the Personal Information Protection Law and other relevant laws and regulations.
To safeguard the legitimate rights and interests of information subjects and to comply with any applicable laws and administrative regulations, if you discover content involving or possibly involving personal information while using the open-source datasets, you should immediately stop using the parts of the datasets that involve personal information and contact us promptly through the channel given in "6. Complaints and Notices".
5. Information Content Management
We bear no legal liability for any illegal or harmful information that the open-source datasets may involve.
If, while using the open-source datasets, you discover that they involve or may involve any illegal or harmful information, you should immediately stop using the parts of the datasets that involve such information and contact us promptly through the channel given in "6. Complaints and Notices".
6. Complaints and Notices
If you believe the open-source datasets infringe your legitimate rights and interests, you may contact us at 010-50955974, and we will handle your claims and complaints promptly and in accordance with the law.
To handle your claims and complaints, we may need you to provide contact information, evidence of infringement, proof of identity, and other materials. Please note that if you complain in bad faith or misstate the facts, you will bear all legal liability arising therefrom (including but not limited to reasonable compensation of expenses).
7. Disclaimer
You understand and agree that, given the nature of open-source datasets, they may contain data from different sources and contributors whose authenticity, accuracy, and objectivity may vary, and we cannot make any commitment as to the availability or reliability of any dataset.
Under no circumstances do we bear any legal liability for any risks the open-source datasets may present, such as infringement of personal information rights, dissemination of illegal or harmful information, or infringement of intellectual property.
Under no circumstances do we bear any legal liability for any losses you suffer arising from or in connection with the open-source datasets (including but not limited to direct losses, indirect losses, and loss of anticipated profits).
8. Miscellaneous
The open-source datasets are continually developing and changing; we may, for reasons such as business development, third-party cooperation, or changes in laws and regulations, update or adjust the scope of the open-source datasets provided, or suspend, pause, or terminate the open-source dataset service.
extra_gated_fields:
Name: text
Affiliation: text
Country: text
I agree to use this model for non-commercial use ONLY: checkbox
extra_gated_button_content: "Acknowledge license"
license: unknown
language:
- zh
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: task_type
struct:
- name: major
sequence: string
- name: minor
sequence: string
- name: domain
sequence: string
- name: other
dtype: string
- name: task_name_in_eng
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 1053129000
num_examples: 744592
download_size: 416315627
dataset_size: 1053129000
---
# COIG Prompt Collection
## License
**Default Licensing for Sub-Datasets Without Specific License Declaration**: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default.
**Precedence of Declared Licensing for Sub-Datasets**: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the declared license shall take precedence and govern the usage of that particular sub-dataset.
Users and developers utilizing the COIG-PC Dataset must ensure compliance with the licensing terms as outlined above. It is imperative to review and adhere to the specified licensing conditions of each sub-dataset, as they may vary.
## What is COIG-PC?
The COIG-PC Dataset is a meticulously curated and comprehensive collection of Chinese tasks and data, designed to facilitate the fine-tuning and optimization of language models for Chinese natural language processing (NLP). The dataset aims to provide researchers and developers with a rich set of resources to improve the capabilities of language models in handling Chinese text, which can be utilized in various fields such as text generation, information extraction, sentiment analysis, machine translation, among others.
If you think COIG-PC is too huge, please refer to [COIG-PC-Lite](https://huggingface.co/datasets/BAAI/COIG-PC-Lite) which is a subset of COIG-PC with only 200 samples from each task file.
## Why COIG-PC?
The COIG-PC Dataset is an invaluable resource for the domain of natural language processing (NLP) for various compelling reasons:
**Addressing Language Complexity**: Chinese is known for its intricacy, with a vast array of characters and diverse grammatical structures. A specialized dataset like COIG-PC, which is tailored for the Chinese language, is essential to adequately address these complexities during model training.
**Comprehensive Data Aggregation**: The COIG-PC Dataset is a result of an extensive effort in integrating almost all available Chinese datasets in the market. This comprehensive aggregation makes it one of the most exhaustive collections for Chinese NLP.
**Data Deduplication and Normalization**: The COIG-PC Dataset underwent rigorous manual processing to eliminate duplicate data and perform normalization. This ensures that the dataset is free from redundancy, and the data is consistent and well-structured, making it more user-friendly and efficient for model training.
**Fine-tuning and Optimization**: The dataset’s instruction-based phrasing facilitates better fine-tuning and optimization of language models. This structure allows models to better understand and execute tasks, which is particularly beneficial in improving performance on unseen or novel tasks.
The COIG-PC Dataset, with its comprehensive aggregation, meticulous selection, deduplication, and normalization of data, stands as an unmatched resource for training and optimizing language models tailored for the Chinese language and culture. It addresses the unique challenges of Chinese language processing and serves as a catalyst for advancements in Chinese NLP.
## Who builds COIG-PC?
The bedrock of COIG-PC is the dataset furnished by stardust.ai, an aggregation of data collected from the Internet.
And COIG-PC is the result of a collaborative effort involving engineers and experts from over twenty distinguished universities both domestically and internationally. Due to space constraints, it is not feasible to list all of them; however, the following are a few notable institutions among the collaborators:
- Beijing Academy of Artificial Intelligence, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/baai.png" alt="BAAI" height="100" width="150">
- Peking University, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/pku.png" alt="PKU" height="100" width="200">
- The Hong Kong University of Science and Technology (HKUST), China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/hkust.png" alt="HKUST" height="100" width="200">
- The University of Waterloo, Canada
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/waterloo.png" alt="Waterloo" height="100" width="150">
- The University of Sheffield, United Kingdom
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/sheffield.png" alt="Sheffield" height="100" width="200">
- Beijing University of Posts and Telecommunications, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/bupt.png" alt="BUPT" height="100" width="200">
- [Multimodal Art Projection](https://huggingface.co/m-a-p)
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/map.png" alt="M.A.P" height="100" width="200">
- stardust.ai, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/stardust.png" alt="stardust.ai" height="100" width="200">
- LinkSoul.AI, China
<img src="https://huggingface.co/datasets/BAAI/COIG-PC-core/resolve/main/assets/linksoul.png" alt="linksoul.ai" height="100" width="200">
For the detailed list of engineers involved in the creation and refinement of COIG-PC, please refer to the paper that will be published subsequently. This paper will provide in-depth information regarding the contributions and the specifics of the dataset’s development process.
## How to use COIG-PC?
COIG-PC is structured in a **.jsonl** file format. Each line in the file represents a single data record and is structured in JSON (JavaScript Object Notation) format. Below is a breakdown of the elements within each line:
**instruction**: This is a text string that provides the instruction for the task. For example, it might tell the model what to do with the input data.
**input**: This is the input data that the model needs to process. In the context of translation, it would be the text that needs to be translated.
**output**: This contains the expected output data after processing the input. In the context of translation, it would be the translated text.
**split**: Indicates the official split of the original dataset, which is used to categorize data for different phases of model training and evaluation. It can be 'train', 'test', 'valid', etc.
**task_type**: Contains major and minor categories for the dataset. Major categories are broader, while minor categories can be more specific subcategories.
**domain**: Indicates the domain or field to which the data belongs.
**other**: This field can contain additional information or metadata regarding the data record. If there is no additional information, it may be set to null.
### Example
Here is an example of how a line in the COIG-PC dataset might be structured:
```
{
"instruction": "请把下面的中文句子翻译成英文",
"input": "我爱你。",
"output": "I love you.",
"split": "train",
"task_type": {
"major": ["翻译"],
"minor": ["翻译", "中译英"]
},
"domain": ["通用"],
"other": null
}
```
In this example:
**instruction** tells the model to translate the following Chinese sentence into English.
**input** contains the Chinese text "我爱你" which means "I love you".
**output** contains the expected translation in English: "I love you".
**split** indicates that this data record is part of the training set.
**task_type** specifies that the major category is "Translation" and the minor categories are "Translation" and "Chinese to English".
**domain** specifies that this data record belongs to the general domain.
**other** is set to null as there is no additional information for this data record.
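A minimal sketch of reading records in this format with Python's standard `json` module (the filename is a placeholder; the hosted dataset is gated and can also be loaded through the `datasets` library):
```python
import json

# Iterate over a local COIG-PC .jsonl file (filename is a placeholder).
with open("coig_pc_core.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Build a single prompt string, e.g. for supervised fine-tuning.
        prompt = f"{record['instruction']}\n{record['input']}"
        target = record["output"]
        task_major = record["task_type"]["major"]
```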
## Update: Aug. 30, 2023
- v1.0: First version of COIG-PC-core.
## COIG-PC Citation
If you want to cite the COIG-PC-core dataset, you can use this:
```
```
## Contact Us
To contact us feel free to create an Issue in this repository.
| [
-0.49273011088371277,
-0.6924780011177063,
-0.08462495356798172,
0.3152913451194763,
-0.25919902324676514,
-0.12982547283172607,
-0.2837918996810913,
-0.5741947293281555,
0.18172746896743774,
0.21909065544605255,
-0.7840288877487183,
-0.5402434468269348,
-0.296442449092865,
0.0589323453605... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/forum_uristov_rf_prompts | dim | 2023-09-21T23:06:22Z | 24 | 0 | null | [
"region:us"
] | 2023-09-21T23:06:22Z | 2023-09-21T23:06:19.000Z | 2023-09-21T23:06:19 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 3043144
num_examples: 1849
download_size: 1343977
dataset_size: 3043144
---
# Dataset Card for "forum_uristov_rf_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7068154811859131,
-0.30101287364959717,
0.27617964148521423,
0.3462051749229431,
-0.28737595677375793,
-0.07588211447000504,
0.1490594446659088,
0.21728022396564484,
0.7259324193000793,
0.5624160766601562,
-1.2252082824707031,
-0.8874285221099854,
-0.2576162815093994,
0.0243832990527153... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yashnbx/iamgroot | yashnbx | 2023-10-28T11:09:20Z | 24 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-28T11:09:20Z | 2023-09-25T08:38:18.000Z | 2023-09-25T08:38:18 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/HC3_ru | dim | 2023-09-25T14:51:34Z | 24 | 0 | null | [
"region:us"
] | 2023-09-25T14:51:34Z | 2023-09-25T14:50:00.000Z | 2023-09-25T14:50:00 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: human_answers
sequence: string
- name: chatgpt_answers
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 135406074
num_examples: 24322
download_size: 62378894
dataset_size: 135406074
---
# Dataset Card for "HC3_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.421507865190506,
-0.24344506859779358,
0.3483074903488159,
0.30149394273757935,
-0.23736387491226196,
-0.11236131191253662,
0.39485400915145874,
-0.36149659752845764,
0.6158235669136047,
0.39214813709259033,
-0.7548444271087646,
-0.8159950375556946,
-0.44627660512924194,
-0.090406060218... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mmathys/profanity | mmathys | 2023-09-27T09:01:04Z | 24 | 0 | null | [
"license:mit",
"region:us"
] | 2023-09-27T09:01:04Z | 2023-09-27T08:59:08.000Z | 2023-09-27T08:59:08 | ---
license: mit
---
# The Obscenity List
*by [Surge AI, the world's most powerful NLP data labeling platform and workforce](https://www.surgehq.ai)*
Ever wish you had a ready-made list of profanity? Maybe you want to remove NSFW comments, filter offensive usernames, or build content moderation tools, and you can't dream up enough obscenities on your own.
At Surge AI, we help companies build human-powered datasets to train stunning AI and NLP, and we're creating the world's largest profanity list in 20+ languages.
## Dataset
This repo contains 1600+ popular English profanities and their variations.
**Columns** (a filtering sketch follows this list)
* `text`: the profanity
* `canonical_form_1`: the profanity's canonical form
* `canonical_form_2`: an additional canonical form, if applicable
* `canonical_form_3`: an additional canonical form, if applicable
* `category_1`: the profanity's primary category (see below for list of categories)
* `category_2`: the profanity's secondary category, if applicable
* `category_3`: the profanity's tertiary category, if applicable
* `severity_rating`: We asked 5 [Surge AI](https://www.surgehq.ai) data labelers to rate how severe they believed each profanity to be, on a 1-3 point scale. This is the mean of those 5 ratings.
* `severity_description`: We rounded `severity_rating` to the nearest integer. `Mild` corresponds to a rounded mean rating of `1`, `Strong` to `2`, and `Severe` to `3`.
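Based on these columns, a minimal filtering sketch, assuming the list is loaded from the CSV in the original repo (the filename `profanity_en.csv` is an assumption; adjust to the actual file):
```python
import pandas as pd

# Load the list from the original repo's CSV (filename assumed).
df = pd.read_csv("profanity_en.csv")

# Keep only the most severe terms, using the columns described above.
severe = df[df["severity_description"] == "Severe"]
print(severe[["text", "canonical_form_1", "severity_rating"]].head())
```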
## Categories
We organized the profanity into the following categories:
- sexual anatomy / sexual acts (ass kisser, dick, pigfucker)
- bodily fluids / excrement (shit, cum)
- sexual orientation / gender (faggot, tranny, bitch, whore)
- racial / ethnic (chink, n3gro)
- mental disability (retard, dumbass)
- physical disability (quadriplegic bitch)
- physical attributes (fatass, ugly whore)
- animal references (pigfucker, jackass)
- religious offense (goddamn)
- political (China virus)
## Future
We'll be adding more languages and profanity annotations (e.g., augmenting each profanity with its severity level, type, and other variations) over time.
Check out our other [free datasets](https://www.surgehq.ai/datasets).
Sign up [here](https://forms.gle/u1SKL4zySK2wMp1r7) to receive updates on this dataset and be the first to learn about new datasets we release!
## Contact
Need a larger set of expletives and slurs, or a list of swear words in other languages (Spanish, French, German, Japanese, Portuguese, etc)? We work with top AI and content moderation companies around the world, and we love feedback. Post an issue or reach out to team@surgehq.ai!

Follow us on Twitter at [@HelloSurgeAI](https://www.twitter.com/@HelloSurgeAI).
## Original Repo
You can find the original repository here: https://github.com/surge-ai/profanity/ | [
-0.17022980749607086,
-0.6728180646896362,
-0.026861831545829773,
0.05826292559504509,
-0.2240239381790161,
-0.0012451299699023366,
-0.1718222051858902,
-0.5100680589675903,
0.10038398206233978,
0.608437180519104,
-0.06952358782291412,
-0.6171681880950928,
-0.5664091110229492,
0.0982170924... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
japanese-denim/naga-eng | japanese-denim | 2023-09-29T01:36:09Z | 24 | 0 | null | [
"license:mit",
"region:us"
] | 2023-09-29T01:36:09Z | 2023-09-29T01:34:12.000Z | 2023-09-29T01:34:12 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shyam-incedoinc/qa-finetune-dataset | shyam-incedoinc | 2023-10-02T10:33:15Z | 24 | 0 | null | [
"region:us"
] | 2023-10-02T10:33:15Z | 2023-10-02T10:32:57.000Z | 2023-10-02T10:32:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SniiKz/llama2_Chat_trainingsetv2 | SniiKz | 2023-10-03T06:37:07Z | 24 | 0 | null | [
"region:us"
] | 2023-10-03T06:37:07Z | 2023-10-03T06:37:05.000Z | 2023-10-03T06:37:05 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 837513
num_examples: 2645
download_size: 196452
dataset_size: 837513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2_Chat_trainingsetv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.28751322627067566,
-0.20095236599445343,
0.013115093111991882,
0.5393026471138,
-0.3267895579338074,
0.20651881396770477,
0.2488083392381668,
-0.2367285043001175,
0.7904217839241028,
0.43160077929496765,
-0.8328253626823425,
-0.6631131768226624,
-0.7127610445022583,
-0.4218936860561371,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roszcz/giant-midi-masked-v3 | roszcz | 2023-10-03T18:34:23Z | 24 | 0 | null | [
"region:us"
] | 2023-10-03T18:34:23Z | 2023-10-03T16:25:29.000Z | 2023-10-03T16:25:29 | ---
dataset_info:
features:
- name: pitch
sequence: int8
length: 90
- name: start
sequence: float64
length: 90
- name: dstart
sequence: float64
length: 90
- name: end
sequence: float64
length: 90
- name: duration
sequence: float64
length: 90
- name: velocity
sequence: int8
length: 90
- name: source
dtype: string
- name: masking_space
struct:
- name: <Random Mask>
sequence: bool
length: 90
- name: <LH Mask>
sequence: bool
length: 90
- name: <RH Mask>
sequence: bool
length: 90
- name: <Harmonic Root Mask>
sequence: bool
length: 90
- name: <Harmonic Outliers Mask>
sequence: bool
length: 90
splits:
- name: train
num_bytes: 24181696800
num_examples: 7140520
download_size: 23770439021
dataset_size: 24181696800
---
# Dataset Card for "giant-midi-masked-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7411597967147827,
-0.17718720436096191,
0.3960992991924286,
0.3150392770767212,
-0.28749606013298035,
0.0843900665640831,
0.3254093527793884,
-0.3853146433830261,
1.0226411819458008,
0.7851777076721191,
-0.8755708932876587,
-0.7630628943443298,
-0.606722354888916,
-0.317326158285141,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingSara/medqa | HuggingSara | 2023-10-05T14:12:30Z | 24 | 0 | null | [
"region:us"
] | 2023-10-05T14:12:30Z | 2023-10-05T14:10:17.000Z | 2023-10-05T14:10:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: meta_info
dtype: string
- name: answer_idx
dtype: string
splits:
- name: train
num_bytes: 9470204
num_examples: 10178
- name: validation
num_bytes: 1184039
num_examples: 1272
- name: test
num_bytes: 1211382
num_examples: 1273
download_size: 6952745
dataset_size: 11865625
---
# Dataset Card for "Med_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4288010597229004,
-0.17482569813728333,
0.5430204272270203,
0.021848924458026886,
-0.2505159080028534,
0.014026293531060219,
0.5809940099716187,
-0.1748923659324646,
0.9390375018119812,
0.4708394408226013,
-0.8378549814224243,
-0.8098991513252258,
-0.45234981179237366,
-0.18429532647132... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Luciya/llama-2-nuv-intent-noE | Luciya | 2023-10-10T06:04:10Z | 24 | 0 | null | [
"region:us"
] | 2023-10-10T06:04:10Z | 2023-10-10T06:02:19.000Z | 2023-10-10T06:02:19 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 711010
num_examples: 1585
download_size: 0
dataset_size: 711010
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2815101146697998,
-0.23800821602344513,
0.32548773288726807,
0.44408196210861206,
-0.4857393205165863,
-0.1671268343925476,
0.4260723292827606,
-0.052694763988256454,
1.0148574113845825,
0.6475266814231873,
-0.8965411186218262,
-0.9198005199432373,
-0.7257035970687866,
-0.17385178804397... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/2012i2b2_NER_train | hippocrates | 2023-10-17T20:21:29Z | 24 | 0 | null | [
"region:us"
] | 2023-10-17T20:21:29Z | 2023-10-17T20:21:26.000Z | 2023-10-17T20:21:26 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackaprompt/hackaprompt-dataset | hackaprompt | 2023-11-16T19:04:38Z | 24 | 8 | null | [
"size_categories:100K<n<1M",
"language:en",
"code",
"region:us"
] | 2023-11-16T19:04:38Z | 2023-10-19T03:01:52.000Z | 2023-10-19T03:01:52 | ---
language:
- en
tags:
- code
pretty_name: HackAPrompt Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for HackAPrompt 💻🔍
This dataset contains submissions from a prompt hacking competition. An in-depth analysis of the dataset has been accepted at the EMNLP 2023 conference. 📊👾
Submissions were sourced from two environments: a playground for experimentation and an official submissions platform.
The playground itself can be accessed [here](https://huggingface.co/spaces/hackaprompt/playground) 🎮
More details about the competition itself [here](http://paper.hackaprompt.com) 🏆
## Dataset Details 📋
### Dataset Description 📄
We conducted a prompt hacking competition where users were competing to "hack" different large language models (LLMs). Different levels were proposed, with varying degrees of difficulty, and for each level, 3 LLMs were evaluated: GPT-3 (`text-davinci-003`), FlanT5-XXL (`philschmid/flan-t5-xxl-sharded-fp16`), and ChatGPT (`gpt-3.5-turbo`).
We anonymously collected user submissions throughout the competition and also had users submit their best attempts via an online platform for a chance to win the competition. Users submitted their prompts, and our servers automatically evaluated their attempts. Ties were broken by token count, with lower counts receiving better scores.
This dataset releases all submissions sent to both our playground and submission servers. 📤📥
### Columns Description 🧾
- **level**: A numerical value indicating the difficulty or complexity of the prompt.
- **user_input**: The input provided by the user or participant in response to the given challenge.
- **prompt**: The full prompt that was used to query the model, this includes the user's input.
- **completion**: The output or completion generated by the model based on the user's input.
- **model**: The type or version of the model that generated the completion. For example, "gpt-3.5-turbo" or "FlanT5-XXL".
- **expected_completion**: The expected or ideal output that should have been generated by the model for the given user input.
- **token_count**: The number of tokens present in the user's input. This serves as a measure of the input's length.
- **correct**: A boolean value indicating whether the model's completion was correct or not, based on the expected output.
- **error**: A boolean value indicating if there was an error during the model's processing of the user input. Note: we did not include submissions that triggered errors in this dataset.
- **score**: A numerical value representing the score assigned to the model's completion based on its accuracy, correctness, and other evaluation metrics. (Only available for prompts on the submissions platform)
- **dataset**: A categorical variable indicating the source of the submission. The two categories are "playground_data" (for submissions from the playground environment) and "submission_data" (for official submissions).
- **timestamp**: The date and time when the submission was made. (Only available for playground dataset)
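A minimal sketch of slicing the dataset by these columns, for example to compare attack success rates across models (loading with `split="train"` is an assumption; the card does not state the split names):
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("hackaprompt/hackaprompt-dataset", split="train")

# Count correct completions per model to compare attack success rates.
totals, successes = Counter(), Counter()
for row in ds:
    totals[row["model"]] += 1
    successes[row["model"]] += bool(row["correct"])

for model in totals:
    print(f"{model}: {successes[model] / totals[model]:.2%} successful")
```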
## Uses 🧑🔬
This dataset is meant to be used in a research context to better understand the different types of attacks "in the wild" on LLMs. 📚🔬
#### Personal and Sensitive Information 🔒
We did not release directly any personal or sensitive information explicitly. On the playground, users could submit anonymously, and we did not collect information about the users directly.
For the submissions data, teams did submit in their names, but that information has not been made available in this version of the dataset to preserve participants' privacy.
## Bias, Risks, and Limitations ⚠️
The data was submitted via a public portal hosted on Hugging Face.
We did not curate the data before publishing it.
The data may contain offensive material.
Please use at your own risk.
### Recommendations 🚀
Users should be made aware of the risks, biases, and limitations of the dataset and should use it at their own risk.
## Citation 📝
**BibTeX:**
```
@inproceedings{Schulhoff:Pinto:Khan:Bouchard:Si:Boyd-Graber:Anati:Tagliabue:Kost:Carnahan-2023,
Title = {Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs Through a Global Prompt Hacking Competition},
Author = {Sander V Schulhoff and Jeremy Pinto and Anaum Khan and Louis-François Bouchard and Chenglei Si and Jordan Lee Boyd-Graber and Svetlina Anati and Valen Tagliabue and Anson Liu Kost and Christopher R Carnahan},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = {2023},
Location = {Singapore}
}
```
| [
-0.27316343784332275,
-0.8186498880386353,
0.29435789585113525,
0.3719060719013214,
-0.11161022633314133,
0.2683804929256439,
-0.051152151077985764,
-0.5786558389663696,
0.4398474395275116,
0.3721151053905487,
-0.7214993834495544,
-0.7403557896614075,
-0.5324589014053345,
0.331397861242294... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kardosdrur/hestenet-qa | kardosdrur | 2023-10-23T14:16:16Z | 24 | 1 | null | [
"license:mit",
"region:us"
] | 2023-10-23T14:16:16Z | 2023-10-23T13:37:15.000Z | 2023-10-23T13:37:15 | ---
license: mit
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1144206.5903728174
num_examples: 1695
- name: test
num_bytes: 286220.40962718264
num_examples: 424
download_size: 936129
dataset_size: 1430427.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Hestenet Question-Answer
The dataset is based on data from Hestenettet in the Danish Gigaword corpus.
Question-answer pairs were extracted purely on the basis of heuristics, and have not been manually evaluated.
The dataset was created for aiding the training of sentence transformer models in the Danish Foundation Models project.
The dataset is currently not production-ready.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5699905753135681,
-0.7262397408485413,
0.3409099578857422,
0.10743395984172821,
0.033262841403484344,
-0.00891057588160038,
-0.3254016935825348,
-0.4244886636734009,
0.4141174852848053,
0.6729192137718201,
-0.8456782102584839,
-0.34258100390434265,
-0.5890116095542908,
0.139008700847625... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sam-bha/un-general-assembly-votes-2000-2023 | sam-bha | 2023-11-01T14:56:11Z | 24 | 0 | null | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"language:en",
"license:cc-by-nc-4.0",
"politics",
"region:us"
] | 2023-11-01T14:56:11Z | 2023-10-30T02:36:34.000Z | 2023-10-30T02:36:34 | ---
license: cc-by-nc-4.0
task_categories:
- tabular-regression
- tabular-classification
language:
- en
tags:
- politics
pretty_name: UN General Assembly Votes from 2000 to 2023
---
# UN General Assembly Votes from 2000 to 2023
The following is a cleaned and compiled version of all of the UN General Assembly votes, from [the UN Digital Library](https://digitallibrary.un.org/), which includes ~1800 different resolutions and votes by the 196 voting members.
Fields include **Title**, **Resolution Number**, and the actual votes.
The votes are in a dict format, keyed by country name. Countries have changed names over the period (such as Turkey -> Türkiye, Swaziland -> Eswatini), so we use the latest name each country has used as of 2023. One voting member country (Serbia and Montenegro) split into two voting member countries during the time period in question, and is not considered. South Sudan, Serbia, and Montenegro only came into existence in the middle of the time period in question, and so we consider them as not voting / null votes before they became voting members.
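A minimal sketch of the name normalization described above (only the two renames mentioned are shown; the dict-of-votes shape is as described, but the field name is an assumption):
```python
# Normalize historical country names to their latest (as of 2023) form.
# Only the two renames mentioned above are shown; extend as needed.
LATEST_NAMES = {"Turkey": "Türkiye", "Swaziland": "Eswatini"}

def normalize_votes(votes: dict) -> dict:
    """Re-key a {country: vote} dict onto current country names."""
    return {LATEST_NAMES.get(country, country): vote
            for country, vote in votes.items()}

print(normalize_votes({"Turkey": "YES", "France": "NO"}))
# {'Türkiye': 'YES', 'France': 'NO'}
```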
Please follow the [UN Digital Library terms of service](https://digitallibrary.un.org/pages/?ln=en&page=tos) (e.g. non-commercial use)
© United Nations, 2023, https://digitallibrary.un.org, downloaded on 10/29/2023 | [
-0.6553323864936829,
-0.04701720178127289,
0.8900007009506226,
0.06828591972589493,
-0.6846373081207275,
-0.03158928453922272,
0.4840412735939026,
-0.41356053948402405,
0.2707478702068329,
0.49729475378990173,
-0.5583656430244446,
-0.5963014364242554,
-0.6957727074623108,
0.541119158267974... | null | null | null | null | null | null | null | null | null | null | null | null | null |