id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
hackathon-pln-es/comentarios_depresivos | 2022-04-01T01:40:06.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | hackathon-pln-es | null | null | null | 4 | 10 |
---
license: cc-by-sa-4.0
---
The database consists of 192,347 rows of data for training, 33,944 for testing, and 22,630 for validation. It is composed of suicidal comments and ordinary comments from the social network Reddit, translated into Spanish and obtained from the Suicide and Depression Detection dataset by Nikhileswar Komati, which can be viewed at the following address: https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch
Authors
- Danny Vásquez
- César Salazar
- Alexis Cañar
- Yannela Castro
- Daniel Patiño
|
nreimers/trec-covid | 2022-03-23T12:55:44.000Z | [
"region:us"
] | nreimers | null | null | null | 0 | 10 | This is the corpus file from the [BEIR benchmark](https://github.com/beir-cellar/beir) for the [TREC-COVID 19 dataset](https://ir.nist.gov/trec-covid/).
|
atenglens/taiwanese_english_translation | 2022-10-24T19:51:45.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"language_creators:other",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|other",
"lang... | atenglens | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 2 | 10 | ---
annotations_creators: []
language_creators:
- other
language:
- tw
- en
license: []
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- question-answering
- text2text-generation
- text-generation
- translation
task_ids:
- language-modeling
pretty_name: taiwanese_english_translation
tags:
- conditional-text-generation
---
# Dataset Card for taiwanese_english_translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://taigi.fhl.net/list.html
### Dataset Summary
[More Information Needed]
### Languages
Source Language: Taiwanese (Tailo romanization system)
Target Language: English
## Dataset Structure
The data is provided as a CSV file with two columns: `Tailo,English`.
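Since the card does not document a header row or quoting conventions, a minimal sketch for reading such a file might look like this (the Tailo sample rows below are invented for illustration, not taken from the dataset):

```python
# Minimal sketch: parse a two-column Tailo,English CSV.
# The sample rows are hypothetical, not from the dataset.
import csv
import io

sample = "Li2-ho2,Hello\nTo-sia7,Thank you\n"
pairs = list(csv.reader(io.StringIO(sample)))
print(pairs)  # [['Li2-ho2', 'Hello'], ['To-sia7', 'Thank you']]
```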
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@atenglens](https://github.com/atenglens) for adding this dataset. |
deepakvk/squad2_valdn | 2022-04-08T09:46:56.000Z | [
"region:us"
] | deepakvk | null | null | null | 0 | 10 | Entry not found |
surrey-nlp/PLOD-filtered | 2023-01-14T23:30:12.000Z | [
"task_categories:token-classification",
"annotations_creators:Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
... | surrey-nlp | This is the dataset repository for the PLOD dataset, accepted for publication at LREC 2022.
The dataset can help build sequence labelling models for the task of Abbreviation Detection. | null | 0 | 10 | ---
annotations_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
language_creators:
- found
language:
- en
license: cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: plod-filtered
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
tags:
- abbreviation-detection
---
# PLOD: An Abbreviation Detection Dataset
This is the repository for the PLOD dataset, published at LREC 2022. The dataset can help build sequence labelling models for the task of Abbreviation Detection.
### Dataset
We provide two variants of our dataset, Filtered and Unfiltered. Both are described in our paper (see the link below).
1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test).
# Dataset Card for PLOD-filtered
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
- **Paper:** https://arxiv.org/abs/2204.12061
- **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-filtered
- **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk)
### Dataset Summary
The PLOD dataset is an English-language dataset of abbreviations and their long-forms tagged in text. It was collected for research from PLOS journals, which index abbreviations and long-forms in their text. The dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.
### Supported Tasks and Leaderboards
This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises an ID, the `tokens` present in the text, the corresponding `pos_tags` obtained via spaCy, and the `ner_tags`, which are limited to `AC` for `Acronym` and `LF` for `long-forms`.
An example from the dataset:
{'id': '1',
'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
### Data Fields
- id: the row identifier for the data point.
- tokens: the tokens contained in the text.
- pos_tags: the part-of-speech tags obtained for the corresponding tokens via spaCy.
- ner_tags: the tags for abbreviations and long-forms.
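The card does not state the integer-to-label mapping for `ner_tags`; judging from the example above, a plausible mapping is `["O", "B-AC", "I-AC", "B-LF", "I-LF"]` (check `dataset.features["ner_tags"].feature.names` for the authoritative list). Under that assumption, a minimal sketch for grouping tagged tokens into abbreviation and long-form spans:

```python
# Sketch: group IOB2-tagged tokens into labelled spans.
# ASSUMPTION: the integer ner_tags map to the labels below;
# verify against the dataset's features before relying on this.
NER_LABELS = ["O", "B-AC", "I-AC", "B-LF", "I-LF"]

def extract_spans(tokens, ner_tag_ids, labels=NER_LABELS):
    """Return (entity_type, text) pairs for each IOB2 span."""
    spans, current_type, current_tokens = [], None, []
    for token, tag_id in zip(tokens, ner_tag_ids):
        tag = labels[tag_id]
        if tag.startswith("B-"):
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# Fragment of the example instance above:
tokens = ["risk", "ratios", "(", "RRs", ")"]
tags = [3, 4, 0, 1, 0]
print(extract_spans(tokens, tags))  # [('LF', 'risk ratios'), ('AC', 'RRs')]
```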
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Filtered | 112652 | 24140 | 24140|
| Unfiltered | 113860 | 24399 | 24399|
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was extracted from PLOS journals online, then tokenized and normalized.
#### Who are the source language producers?
PLOS journals
## Additional Information
### Dataset Curators
The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,
Diptesh Kanojia, Constantin Orasan.
### Licensing Information
CC-BY-SA 4.0
### Citation Information
[Needs More Information]
### Installation
We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions on these websites to set up your own custom training with our dataset and reproduce the experiments using spaCy.
Alternatively, you can reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model readme cards linked below. Before starting, please perform the following steps:
```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```
Now, you can use the notebook to reproduce the experiments.
### Model(s)
Our best-performing models are hosted on the HuggingFace models repository:
| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |
Via the links provided above, the model(s) can be used through the Inference API directly in the web browser. We have placed some examples with the API for testing.<br/>
### Usage
You can use the HuggingFace model links above, together with the notebook provided in the Git repo, for instructions on using these models locally in Python.
| |
ysharma/eurosat-demo | 2022-04-23T14:19:14.000Z | [
"region:us"
] | ysharma | null | null | null | 0 | 10 | Entry not found |
EMBO/sd-nlp-non-tokenized | 2023-01-19T10:12:45.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
size_categorie... | EMBO | This dataset is based on the SourceData database and is intended to facilitate training of NLP tasks in the cell and molecular biology domain. | @Unpublished{
huggingface: dataset,
title = {SourceData NLP},
authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO},
year={2021}
} | null | 0 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- token-classification
- text-classification
task_ids:
- multi-class-classification
- named-entity-recognition
- parsing
---
# Dataset Card for sd-nlp
## Table of Contents
- [Dataset Card for EMBO/sd-nlp-non-tokenized](#dataset-card-for-sd-nlp)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org
### Dataset Summary
This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
Unlike the [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not pre-tokenized; it is only split into words. Users can therefore use it to fine-tune other models.
Additional details at https://github.com/source-data/soda-roberta
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (`B-PANEL_START`) of these segments and allows training for recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically, the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL`: cell types and cell lines
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `DISEASE`: diseases (see limitations)
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the variables measured and are the object of the measurements.
`BORING`: entities are marked with the tag `BORING` when they are mostly of descriptive value and not directly associated with the causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' gene products, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc.).
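As an illustrative sketch (not part of the card), the `PANELIZATION` tags can be used to split a tokenized legend into its panel legends; the sample words and tags below are invented:

```python
# Sketch: split a tokenized figure legend into panel segments
# at each B-PANEL_START boundary, as described above.
# The sample words/tags are invented for illustration.
def split_panels(words, panel_start_tags):
    """Split words into panel legends at each B-PANEL_START tag."""
    panels, current = [], []
    for word, tag in zip(words, panel_start_tags):
        if tag == "B-PANEL_START" and current:
            panels.append(current)
            current = []
        current.append(word)
    if current:
        panels.append(current)
    return panels

words = ["Figure", "1", "(", "A", ")", "Dose", "curves", "(", "B", ")", "Survival"]
tags = ["O", "O", "B-PANEL_START", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O"]
print([" ".join(p) for p in split_panels(words, tags)])
# ['Figure 1', '( A ) Dose curves', '( B ) Survival']
```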
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
```json
{
"words": [
".", "Figure", "6", "(", "A", ")", "Cisplatin", "dose", "response", "curves", "of", "(", "i", ")", "MB002", ",", "(", "ii", ")", "Daoy", ",", "and", "(", "iii", ")", "MIC", "in", "the", "absence", "(", "EV", ")", "or", "presence", "of", "SOX9", "by", "Alamar", "blue", ".", "Cells", "were", "pre", "-", "conditioned", "with", "doxycycline", "to", "induce", "expression", "of", "SOX9", "(", "or", "EV", ")", "prior", "to", "treatment", "with", "increasing", "concentrations", "of", "cisplatin", ".", "The", "IC50", "were", "calculated", "following", "5", "(", "MB002", "and", "MIC", ")", "or", "3", "days", "(", "Daoy", ")", "of", "treatment", ".", "Data", "are", "mean", "+", "standard", "deviation", "from", "3", "independent", "repeats", ",", "each", "containing", "5", "technical", "replicates", ".", "(", "B", ")", "Cisplatin", "dose", "response", "curves", "of", "SOX9", "-", "expressing", "(", "i", ")", "Daoy", "and", "(", "ii", ")", "MIC", "in", "the", "absence", "or", "presence", "of", "FBW7\u03b1", ".", "Experiments", "and", "data", "analysis", "were", "performed", "as", "described", "in", "(", "A", ")", "(", "C", ")", "Overall", "survival", "analysis", "of", "mice", "bearing", "Daoy", "or", "Daoy", "-", "expressing", "dox", "-", "inducible", "SOX9", "treated", "with", "cisplatin", ".", "The", "dox", "-", "preconditioned", "cells", "(", "105", "cells", ")", "were", "orthotopically", "xenografted", "to", "Nude", "-", "Foxn1nu", "mice", "and", "left", "for", "1", "week", "to", "prior", "to", "being", "treated", "with", "vehicle", "control", "or", "cisplatin", "(", "2mg", "/", "kg", ")", "intraperitoneally", "for", "every", "other", "day", "for", "a", "total", "of", "6", "doses", ".", "(", "D", ")", "Heat", "map", "of", "the", "row", "-", "wise", "z", "-", "scores", "of", "11", "genes", "associated", "with", "cisplatin", "resistance", "in", "MB002", "expressing", "Sox9", "-", "WT", "or", "Sox9", "-", "T236", "/", "T240A", ".", "Heat", "map", "was", "generated", "using", 
"the", "GenePattern", "software", ".", "(", "E", ")", "Quantitative", "analysis", "of", "ATP7A", ",", "DUSP2", ",", "and", "TTK", "mRNAs", "in", "MB002", "following", "expression", "of", "SOX9", "-", "WT", "or", "SOX9", "-", "T236", "/", "240A", ".", "Total", "RNA", "were", "collected", "24", "hours", "following", "doxycycline", "treatment", ",", "from", "which", "cDNA", "were", "generated", "for", "qPCR", ".", "Data", "are", "mean", "mRNA", "level", "(", "normalized", "to", "B2M", "transcript", ")", "+", "standard", "deviation", "from", "3", "independent", "experiments", "with", "statistical", "significance", "were", "determined", "by", "Multiple", "comparisons", "2", "-", "way", "ANOVA", "with", "Bonferroni", "'", "s", "post", "-", "test", ".", "(", "F", ")", "Time", "course", "western", "blotting", "of", "HA", "-", "SOX9", ",", "ATP7A", ",", "DUSP2", ",", "ERK1", "/", "2", "pThr202", "/", "Tyr204", "and", "total", "ERK1", "/", "2", "in", "MB002", "cells", "following", "doxycycline", "induction", "of", "either", "EV", ",", "SOX9", "-", "WT", "or", "SOX9", "-", "T236", "/", "240A", ".", "GAPDH", "was", "used", "as", "a", "loading", "control", "."
],
"panel_id": "12345",
"label_ids": {
"entity_types": [
"O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "B-CELL", "O", "B-CELL", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "B-CELL", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "O", "O", "B-ORGANISM", "O", "B-CELL", "O", "B-CELL", "O", "O", "B-SMALL_MOLECULE", "O", "O", "B-GENEPROD", "O", "O", "B-SMALL_MOLECULE", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-ORGANISM", "O", "O", "O", "B-GENEPROD", "B-ORGANISM", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "B-CELL", "O", "B-GENEPROD", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "O", "B-GENEPROD", "O", "O", "B-CELL", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-EXP_ASSAY", "I-EXP_ASSAY", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "B-GENEPROD", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "O", "O", "O", "O", "B-GENEPROD", "I-GENEPROD", "I-GENEPROD", "O", "B-CELL", "O", "O", "B-SMALL_MOLECULE", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "B-GENEPROD", "O", "O", "O", "O", "O", "O", "O"
],
"geneprod_roles": [
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "O", "O", "B-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", "O", "B-MEASURED_VAR", 
"I-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "B-MEASURED_VAR", "I-MEASURED_VAR", "I-MEASURED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
],
"boring": [
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "O", "O", "O", "O", "B-BORING", "O", "O", "O", "O", "O", "O", "O"
],
"panel_start": [
"O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PANEL_START", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "O", "O", "O", "O"
],
"small_mol_roles": ["O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-CONTROLLED_VAR", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", 
"O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]
}
}
```
### Data Fields
- `words`: `list` of `strings` text tokenized into words.
- `panel_id`: ID of the panel to which the example belongs to in the SourceData database.
- `label_ids`:
- `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible value in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `geneprod_roles`: `list` of `strings` for the IOB2 tags for experimental roles; values in `["O", "I-CONTROLLED_VAR", "B-CONTROLLED_VAR", "I-MEASURED_VAR", "B-MEASURED_VAR"]`
- `boring`: `list` of `strings` for IOB2 tags for entities unrelated to causal design; values in `["O", "I-BORING", "B-BORING"]`
- `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`
- `small_mol_roles`: `list` of `strings` for IOB2 tags showing whether the entity is the variable being measured or the control variable `["O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "B-MEASURED_VAR", "I-MEASURED_VAR",]`
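Since all label fields above use IOB2 tagging, entity spans can be recovered by scanning for `B-`/`I-` prefixes. A minimal, illustrative helper (not part of the dataset loader):

```python
def iob2_to_spans(tags):
    """Convert a list of IOB2 tags (e.g. ["O", "B-CELL", "I-CELL"]) into
    (start, end, entity_type) spans, with `end` exclusive."""
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):            # a new entity begins here
            if start is not None:
                spans.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            continue                        # the current entity continues
        else:                               # "O" or an inconsistent I- tag
            if start is not None:
                spans.append((start, i, etype))
            start, etype = None, None
    if start is not None:                   # close an entity ending at the last token
        spans.append((start, len(tags), etype))
    return spans

tags = ["O", "B-GENEPROD", "I-GENEPROD", "O", "B-CELL"]
print(iob2_to_spans(tags))  # [(1, 3, 'GENEPROD'), (4, 5, 'CELL')]
```

The same helper applies unchanged to `entity_types`, `geneprod_roles`, `small_mol_roles` and the other IOB2 fields.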
### Data Splits
- train:
- features: ['words', 'labels', 'tag_mask', 'panel_id'],
- num_rows: 50_198
- validation:
- features: ['words', 'labels', 'tag_mask', 'panel_id'],
- num_rows: 5_946
- test:
- features: ['words', 'labels', 'tag_mask', 'panel_id'],
- num_rows: 6_222
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train models for text segmentation, named entity recognition and semantic role labeling.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from the figure legends from scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org).
The annotation of diseases was added to the dataset only recently. Disease entities do appear, but their number is very low and they are not tagged consistently throughout the dataset.
We therefore recommend filtering for the examples that contain disease annotations before using them.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
Jorge Abreu Vicente, EMBO
### Licensing Information
CC BY 4.0
### Citation Information
We are currently working on a paper to present the dataset; it is expected to be ready by spring 2023. In the meantime, please cite the following paper.
```latex
@article {Liechti2017,
author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas},
title = {SourceData - a semantic platform for curating and searching figures},
year = {2017},
volume = {14},
number = {11},
doi = {10.1038/nmeth.4471},
URL = {https://doi.org/10.1038/nmeth.4471},
eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf},
journal = {Nature Methods}
}
```
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset.
|
Lehrig/GTZAN-Collection | 2022-06-13T13:54:08.000Z | [
"license:apache-2.0",
"region:us"
] | Lehrig | The dataset consists of 1000 audio tracks each 30 seconds long.
It contains 10 genres, each represented by 100 tracks.
The tracks are all 22050Hz Mono 16-bit audio files in .wav format.
The genres are:
* blues
* classical
* country
* disco
* hiphop
* jazz
* metal
* pop
* reggae
* rock
This collection includes the following GTZAN variants:
* raw (original WAV files)
* melspectrograms (from each WAV file, contiguous 2-second windows at 4 random locations are sampled and transformed to Mel Spectrograms, resulting in 8000 Mel Spectrograms) | @ARTICLE{1021072,
author={Tzanetakis, G. and Cook, P.},
journal={IEEE Transactions on Speech and Audio Processing},
title={Musical genre classification of audio signals},
year={2002},
volume={10},
number={5},
pages={293-302},
doi={10.1109/TSA.2002.800560}} | null | 1 | 10 | ---
license: apache-2.0
---
# Dataset Card for GTZAN Collection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/derekahuang/Music-Classification
- **Repository:** https://github.com/derekahuang/Music-Classification
- **Paper:** [Musical genre classification of audio signals](https://ieeexplore.ieee.org/document/1021072)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dataset consists of 1000 audio tracks each 30 seconds long.
It contains 10 genres, each represented by 100 tracks.
The tracks are all 22050Hz Mono 16-bit audio files in .wav format.
The genres are:
* blues
* classical
* country
* disco
* hiphop
* jazz
* metal
* pop
* reggae
* rock
This collection includes the following GTZAN variants:
* raw (original WAV files)
* melspectrograms (from each WAV file, contiguous 2-second windows at 4 random locations are sampled and transformed to Mel Spectrograms, resulting in 8000 Mel Spectrograms)
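The window-sampling step behind the melspectrogram variant can be sketched as follows — picking 4 random contiguous 2-second windows from each 22050 Hz track. This is a pure-Python illustration, not the collection's actual preprocessing code; the Mel-spectrogram transform itself (e.g. with librosa) is left out:

```python
import random

SAMPLE_RATE = 22050          # GTZAN tracks are 22050 Hz mono
WINDOW_SECONDS = 2
N_WINDOWS = 4

def sample_windows(signal, sample_rate=SAMPLE_RATE,
                   window_seconds=WINDOW_SECONDS, n_windows=N_WINDOWS):
    """Return `n_windows` contiguous windows of `window_seconds` seconds,
    taken at random locations in `signal` (a 1-D sequence of samples)."""
    win = window_seconds * sample_rate
    if len(signal) < win:
        raise ValueError("signal shorter than one window")
    starts = [random.randrange(len(signal) - win + 1) for _ in range(n_windows)]
    return [signal[s:s + win] for s in starts]

track = [0.0] * (30 * SAMPLE_RATE)        # stand-in for a 30-second track
windows = sample_windows(track)
print(len(windows), len(windows[0]))      # 4 44100
```

Applied to all 1000 tracks, 4 windows each gives the 4000 windows per epoch that, over train/test variants, yield the 8000 Mel Spectrograms mentioned above.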
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
Abdelrahman-Rezk/Arabic_Poem_Comprehensive_Dataset_APCD | 2022-05-27T19:01:21.000Z | [
"region:us"
] | Abdelrahman-Rezk | null | null | null | 0 | 10 | Entry not found |
arize-ai/ecommerce_reviews_with_language_drift | 2022-07-01T17:26:03.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training and validation sets are
composed entirely of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | null | 1 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|imdb
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` indicating when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
AlekseyKorshuk/mystery-crime-books | 2022-06-11T10:54:38.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 1 | 10 | Entry not found |
Paul/hatecheck-german | 2022-07-05T10:38:52.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 0 | 10 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: German HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
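As an illustration of how these columns can be used together, the sketch below filters out contested cases — rows where the annotator majority disagrees with the gold label. The rows are made up for illustration; only the column names follow the documentation above:

```python
import csv
import io

# Toy rows mirroring the documented columns (values are invented).
data = io.StringIO(
    "mhc_case_id,functionality,test_case,label_gold,disagreement_in_case\n"
    "german-1,derog_neg_emote_h,...,hateful,False\n"
    "german-2,target_obj_nh,...,non-hateful,True\n"
)

# Keep only the cases flagged as disagreements (CSV values are strings).
disagreements = [row for row in csv.DictReader(data)
                 if row["disagreement_in_case"] == "True"]
print([r["mhc_case_id"] for r in disagreements])  # ['german-2']
```

The same pattern extends to `disagreement_in_template` for excluding entire templates from MHC.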
rongzhangibm/NaturalQuestionsV2 | 2022-07-07T05:22:20.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | rongzhangibm | null | null | null | 5 | 10 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: Natural Questions
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: natural-questions
---
# Dataset Card for Natural Questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 42981 MB
- **Size of the generated dataset:** 139706 MB
- **Total amount of disk used:** 182687 MB
### Dataset Summary
The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions should read an entire page to find the answer, cause
NQ to be a more realistic and challenging task than prior QA datasets.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 42981 MB
- **Size of the generated dataset:** 139706 MB
- **Total amount of disk used:** 182687 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### default
```
"id": datasets.Value("string"),
"document": {
"title": datasets.Value("string"),
"url": datasets.Value("string"),
"html": datasets.Value("string"),
"tokens": datasets.features.Sequence(
{
"token": datasets.Value("string"),
"is_html": datasets.Value("bool"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
}
),
},
"question": {
"text": datasets.Value("string"),
"tokens": datasets.features.Sequence(datasets.Value("string")),
},
"long_answer_candidates": datasets.features.Sequence(
{
"start_token": datasets.Value("int64"),
"end_token": datasets.Value("int64"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
"top_level": datasets.Value("bool"),
}
),
"annotations": datasets.features.Sequence(
{
"id": datasets.Value("string"),
"long_answer": {
"start_token": datasets.Value("int64"),
"end_token": datasets.Value("int64"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
"candidate_index": datasets.Value("int64")
},
"short_answers": datasets.features.Sequence(
{
"start_token": datasets.Value("int64"),
"end_token": datasets.Value("int64"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
"text": datasets.Value("string"),
}
),
"yes_no_answer": datasets.features.ClassLabel(
names=["NO", "YES"]
), # Can also be -1 for NONE.
}
)
```
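As a sketch of how these token offsets are meant to be used: a long answer is a token span into `document.tokens`, so its text can be reconstructed by slicing that range and dropping HTML tokens. The toy `example` below only mirrors the schema above; real examples are loaded with 🤗 Datasets, and the exact nesting of the sequence features (dict-of-lists vs. list-of-dicts) may differ slightly:

```python
def long_answer_text(example):
    """Reconstruct the text of the first annotated long answer by slicing
    the document tokens with the annotation's start/end token indices."""
    tokens = example["document"]["tokens"]
    ann = example["annotations"]["long_answer"][0]
    start, end = ann["start_token"], ann["end_token"]
    words = [
        tok for tok, is_html in zip(tokens["token"][start:end],
                                    tokens["is_html"][start:end])
        if not is_html                      # skip HTML markup tokens
    ]
    return " ".join(words)

# Toy example shaped like the schema above (fields invented for illustration).
example = {
    "document": {"tokens": {
        "token": ["<p>", "The", "answer", "is", "42", "</p>"],
        "is_html": [True, False, False, False, False, True],
    }},
    "annotations": {"long_answer": [{"start_token": 0, "end_token": 6}]},
}
print(long_answer_text(example))  # The answer is 42
```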
### Data Splits
| name | train | validation |
|---------|-------:|-----------:|
| default | 307373 | 7830 |
| dev | N/A | 7830 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association of Computational Linguistics}
}
```
### Contributions
|
embedding-data/SPECTER | 2022-08-02T03:45:52.000Z | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"arxiv:2004.07180",
"region:us"
] | embedding-data | null | null | null | 0 | 10 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/SPECTER
pretty_name: SPECTER
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "SPECTER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/allenai/specter](https://github.com/allenai/specter)
- **Repository:** [More Information Needed](https://github.com/allenai/specter/blob/master/README.md)
- **Paper:** [More Information Needed](https://arxiv.org/pdf/2004.07180.pdf)
- **Point of Contact:** [@armancohan](https://github.com/armancohan), [@sergeyf](https://github.com/sergeyf), [@haroldrubio](https://github.com/haroldrubio), [@jinamshah](https://github.com/jinamshah)
### Dataset Summary
Dataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers.
Disclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
## Dataset Structure
Each example in the dataset contains a triplet of sentences and is formatted as a dictionary with the key "set" whose value is a list of the three sentences (anchor, positive, and negative):
```
{"set": [anchor, positive, negative]}
{"set": [anchor, positive, negative]}
...
{"set": [anchor, positive, negative]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets.
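For intuition on how such triplets are used in training: a triplet loss drives the anchor embedding closer to the positive than to the negative by at least a margin. A toy pure-Python sketch on made-up 2-D "embeddings" (actual training would use e.g. Sentence Transformers' built-in triplet loss on real sentence embeddings):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """L = max(0, d(a, p) - d(a, n) + margin): zero once the negative is
    at least `margin` farther from the anchor than the positive is."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

a, p, n = [0.0, 0.0], [0.1, 0.0], [3.0, 0.0]   # toy embeddings
print(triplet_loss(a, p, n))  # 0.0 -- the negative is already far enough away
```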
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/SPECTER")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 684100
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
### Curation Rationale
[More Information Needed](https://github.com/allenai/specter)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/allenai/specter)
#### Who are the source language producers?
[More Information Needed](https://github.com/allenai/specter)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/allenai/specter)
#### Who are the annotators?
[More Information Needed](https://github.com/allenai/specter)
### Personal and Sensitive Information
[More Information Needed](https://github.com/allenai/specter)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/allenai/specter)
### Discussion of Biases
[More Information Needed](https://github.com/allenai/specter)
### Other Known Limitations
[More Information Needed](https://github.com/allenai/specter)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/allenai/specter)
### Licensing Information
[More Information Needed](https://github.com/allenai/specter)
### Citation Information
### Contributions
|
MariaIsabel/FR_NFR_Spanish_requirements_classification | 2022-07-22T07:19:16.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"region:us"
] | MariaIsabel | null | null | null | 0 | 10 | ---
annotations_creators:
- other
language:
- es
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Spanish requirements labeled in functional and non-functional classes.
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Published version of the dataset used for the paper 'Towards an automatic requirements classification in a new Spanish dataset'.
### Languages
Spanish
## Dataset Structure
### Data Fields
Project: Project's Identifier from which the requirements were obtained.
Requirement: Description of the software requirement.
Final label: Label of the requirement: F (functional requirement) or NF (non-functional requirement).
## Dataset Creation
### Initial Data Collection and Normalization
This dataset was created from a collection of functional and non-functional requirements extracted from 13 final degree and 2 master’s projects carried out at the University of A Coruna. It consists of 300 functional and 89 non-functional requirements.
## Additional Information
### Citation Information
https://doi.org/10.5281/zenodo.6556541
|
OATML-Markslab/ProteinGym | 2022-07-29T00:12:02.000Z | [
"arxiv:2205.13760",
"region:us"
] | OATML-Markslab | null | null | null | 6 | 10 | ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays; 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
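A minimal sketch of applying such a substitution string to a reference sequence. `apply_substitutions` is illustrative, not part of the benchmark code; it assumes 1-indexed positions and single-letter amino-acid codes, as in the A1P:D2N example above:

```python
def apply_substitutions(reference, mutant):
    """Apply a ProteinGym-style substitution string (e.g. "A1P:D2N",
    1-indexed positions) to a reference sequence."""
    seq = list(reference)
    for sub in mutant.split(":"):
        wt, pos, mut = sub[0], int(sub[1:-1]), sub[-1]
        # Sanity-check that the wild-type residue matches the reference.
        assert seq[pos - 1] == wt, f"expected {wt} at position {pos}"
        seq[pos - 1] = mut
    return "".join(seq)

print(apply_substitutions("ADGK", "A1P:D2N"))  # PNGK
```

Note this only applies to the substitution benchmark; in the indel benchmark, `mutant` already holds the full mutated sequence.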
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
## Reference
If you use ProteinGym in your work, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
- Pre-print: https://arxiv.org/abs/2205.13760
- Code: https://github.com/OATML-Markslab/Tranception |
google/cvss | 2022-08-27T23:19:14.000Z | [
"license:cc-by-4.0",
"arxiv:2201.03713",
"region:us"
] | google | CVSS is a massively multilingual-to-English speech-to-speech translation corpus,
covering sentence-level parallel speech-to-speech translation pairs from 21
languages into English. | @inproceedings{jia2022cvss,
title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
pages={6691--6703},
year={2022}
} | null | 8 | 10 | ---
license: cc-by-4.0
---
# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus
*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus.
CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:
- *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.
- *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.
Together with the source speeches originated from Common Voice, they make two multilingual speech-to-speech translation datasets each with about 1,900 hours of speech.
In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation.
Please check out [our paper](https://arxiv.org/abs/2201.03713) for the detailed description of this corpus, as well as the baseline models we trained on both datasets.
# Load the data
The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from [Common Voice v4.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0) separately, and join them by the file names.
```py
from datasets import load_dataset
# Load only ar-en and ja-en language pairs. Omitting the `languages` argument
# would load all the language pairs.
cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja'])
# Print the structure of the dataset.
print(cvss_c)
```
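A minimal sketch of the file-name join with Common Voice described above; the field names (`file`, `text`) here are illustrative assumptions, not the exact schemas of either dataset:

```python
# Hypothetical join of CVSS target rows with Common Voice source rows by
# shared file name; field names are assumptions for illustration only.
def join_by_file(cvss_rows, cv_rows):
    cv_by_file = {row["file"]: row for row in cv_rows}
    return [
        {"file": row["file"], "source": cv_by_file[row["file"]], "target": row}
        for row in cvss_rows
        if row["file"] in cv_by_file
    ]

cvss_demo = [{"file": "clip_001.mp3", "text": "good morning"}]
cv_demo = [{"file": "clip_001.mp3", "text": "buenos dias"}]
pairs = join_by_file(cvss_demo, cv_demo)
print(pairs[0]["source"]["text"])  # buenos dias
```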
# License
CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
## Citation
Please cite this paper when referencing the CVSS corpus:
```
@inproceedings{jia2022cvss,
title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
pages={6691--6703},
year={2022}
}
```
|
copenlu/citeworth | 2022-08-17T13:48:22.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|s2orc",
"language:en",
"license:cc-by-nc-4.0",
"citation detection",
"citation",
"science",
"scholarly... | copenlu | null | null | null | 2 | 10 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: citeworth
pretty_name: CiteWorth
size_categories:
- 1M<n<10M
source_datasets:
- extended|s2orc
tags:
- citation detection
- citation
- science
- scholarly documents
- bio
- medicine
- computer science
- citeworthiness
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for CiteWorth
## Dataset Description
- **Repo** https://github.com/copenlu/cite-worth
- **Paper** https://aclanthology.org/2021.findings-acl.157.pdf
### Dataset Summary
Scientific document understanding is challenging as the data is highly domain specific and diverse. However, datasets for tasks with scientific text require expensive manual annotation and tend to be small and limited to only one or a few fields. At the same time, scientific documents contain many potential training signals, such as citations, which can be used to build large labelled datasets. Given this, we present an in-depth study of cite-worthiness detection in English, where a sentence is labelled for whether or not it cites an external source. To accomplish this, we introduce CiteWorth, a large, contextualized, rigorously cleaned labelled dataset for cite-worthiness detection built from a massive corpus of extracted plain-text scientific documents. We show that CiteWorth is high-quality, challenging, and suitable for studying problems such as domain adaptation. Our best performing cite-worthiness detection model is a paragraph-level contextualized sentence labelling model based on Longformer, exhibiting a 5 F1 point improvement over SciBERT which considers only individual sentences. Finally, we demonstrate that language model fine-tuning with cite-worthiness as a secondary task leads to improved performance on downstream scientific document understanding tasks.
## Dataset Structure
The data is structured as follows
- `paper_id`: The S2ORC paper ID where the paragraph comes from
- `section_idx`: An index into the section array in the original S2ORC data
- `file_index`: The volume in the S2ORC dataset that the paper belongs to
- `file_offset`: Byte offset to the start of the paper json in the S2ORC paper PDF file
- `mag_field_of_study`: The field of study to which a paper belongs (an array, but each paper belongs to a single field)
- `original_text`: The original text of the paragraph
- `section_title`: Title of the section to which the paragraph belongs
- `samples`: An array containing dicts of the cleaned sentences for the paragraph, in order. The fields for each dict are as follows
- `text`: The cleaned text for the sentence
- `label`: Label for the sentence, either `check-worthy` for cite-worthy sentences or `non-check-worthy` for non-cite-worthy sentences
- `original_text`: The original sentence text
- `ref_ids`: List of the reference IDs in the S2ORC dataset for papers cited in this sentence
- `citation_text`: List of all citation text in this sentence
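For illustration, a paragraph record with this structure can be flattened into sentence-level (text, label) pairs; the toy record below only mirrors the fields described above:

```python
# Toy CiteWorth-style record; real records come from the dataset itself.
record = {
    "paper_id": "12345",
    "samples": [
        {"text": "Prior work studied X.", "label": "check-worthy"},
        {"text": "We describe our setup.", "label": "non-check-worthy"},
    ],
}

# Map cite-worthy sentences to 1 and non-cite-worthy sentences to 0.
pairs = [(s["text"], int(s["label"] == "check-worthy")) for s in record["samples"]]
print(pairs)  # [('Prior work studied X.', 1), ('We describe our setup.', 0)]
```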
## Dataset Creation
The data is derived from the [S2ORC dataset](https://github.com/allenai/s2orc), specifically the 20200705v1 release of the data. It is licensed under the [CC By-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/) license. For details on the dataset creation process, see section 3 of our [paper](https://aclanthology.org/2021.findings-acl.157.pdf).
## Citing
Please use the following citation when referencing this work or using the data:
```
@inproceedings{wright2021citeworth,
title={{CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding}},
author={Dustin Wright and Isabelle Augenstein},
booktitle = {Findings of ACL-IJCNLP},
publisher = {Association for Computational Linguistics},
year = 2021
}
``` |
djaym7/wiki_dialog | 2022-08-20T02:36:29.000Z | [
"region:us"
] | djaym7 | WikiDialog is a large dataset of synthetically generated information-seeking
conversations. Each conversation in the dataset contains two speakers grounded
in a passage from English Wikipedia: one speaker’s utterances consist of exact
sentences from the passage; the other speaker is generated by a large language
model. | @inproceedings{dai2022dialoginpainting,
title={Dialog Inpainting: Turning Documents to Dialogs},
author={Dai, Zhuyun and Chaganty, Arun Tejasvi and Zhao, Vincent and Amini, Aida and Green, Mike and Rashid, Qazi and Guu, Kelvin},
booktitle={International Conference on Machine Learning (ICML)},
year={2022},
organization={PMLR}
} | null | 1 | 10 | # I've just ported the dataset from TFDS to Hugging Face. All credit goes to the original authors; the README is copied from https://github.com/google-research/dialog-inpainting/blob/main/README.md
Load in Hugging Face using:
`dataset = datasets.load_dataset('djaym7/wiki_dialog', 'OQ', beam_runner='DirectRunner')`
# Dialog Inpainting: Turning Documents into Dialogs
## Abstract
Many important questions (e.g. "How to eat healthier?") require conversation to establish context and explore in depth.
However, conversational question answering (ConvQA) systems have long been stymied by scarce training data that is expensive to collect.
To address this problem, we propose a new technique for synthetically generating diverse and high-quality dialog data: *dialog inpainting*.
Our approach takes the text of any document and transforms it into a two-person dialog between the writer and an imagined reader:
we treat sentences from the article as utterances spoken by the writer, and then use a dialog inpainter to predict what the imagined reader asked or said in between each of the writer's utterances.
By applying this approach to passages from Wikipedia and the web, we produce `WikiDialog` and `WebDialog`, two datasets totalling 19 million diverse information-seeking dialogs---1,000x larger than the largest existing ConvQA dataset.
Furthermore, human raters judge the *answer adequacy* and *conversationality* of `WikiDialog` to be as good or better than existing manually-collected datasets.
Using our inpainted data to pre-train ConvQA retrieval systems, we significantly advance state-of-the-art across three benchmarks (`QReCC`, `OR-QuAC`, `TREC CaST`) yielding up to 40\% relative gains on standard evaluation metrics.
## Disclaimer
This is not an officially supported Google product.
# `WikiDialog-OQ`
We are making `WikiDialog-OQ`, a dataset containing 11M information-seeking conversations from passages in English Wikipedia, publicly available.
Each conversation was generated with the dialog inpainting method detailed in the paper, using the `Inpaint-OQ` inpainter model, a T5-XXL model fine-tuned on `OR-QuAC` and `QReCC` with a dialog reconstruction loss. For a detailed summary of the dataset, please refer to the [data card](WikiDialog-OQ_Data_Card.pdf).
The passages in the dataset come from the `OR-QuAC` retrieval corpus and share passage ids.
You can download the `OR-QuAC` dataset and find more details about it [here](https://github.com/prdwb/orconvqa-release).
## Download the raw JSON format data.
The dataset can be downloaded in (gzipped) JSON format from Google Cloud using the following commands:
```bash
# Download validation data (72Mb)
wget https://storage.googleapis.com/gresearch/dialog-inpainting/WikiDialog_OQ/data_validation.jsonl.gz
# Download training data (100 shards, about 72Mb each)
wget $(seq -f "https://storage.googleapis.com/gresearch/dialog-inpainting/WikiDialog_OQ/data_train.jsonl-%05g-of-00099.gz" 0 99)
```
Each line contains a single conversation serialized as a JSON object, for example:
```json
{
"pid": "894686@1",
"title": "Mother Mary Alphonsa",
"passage": "Two years after Nathaniel's death in 1864, Rose was enrolled at a boarding school run by Diocletian Lewis in nearby Lexington, Massachusetts; she disliked the experience. After Nathaniel's death, the family moved to Germany and then to England. Sophia and Una died there in 1871 and 1877, respectively. Rose married author George Parsons Lathrop in 1871. Prior to the marriage, Lathrop had shown romantic interest in Rose's sister Una. Their brother...",
"sentences": [
"Two years after Nathaniel's death in 1864, Rose was enrolled at a boarding school run by Diocletian Lewis in nearby Lexington, Massachusetts; she disliked the experience.",
"After Nathaniel's death, the family moved to Germany and then to England.",
"Sophia and Una died there in 1871 and 1877, respectively.",
"Rose married author George Parsons Lathrop in 1871.",
"Prior to the marriage, Lathrop had shown romantic interest in Rose's sister Una.",
"..."],
"utterances": [
"Hi, I'm your automated assistant. I can answer your questions about Mother Mary Alphonsa.",
"What was Mother Mary Alphonsa's first education?",
"Two years after Nathaniel's death in 1864, Rose was enrolled at a boarding school run by Diocletian Lewis in nearby Lexington, Massachusetts; she disliked the experience.",
"Did she stay in the USA?",
"After Nathaniel's death, the family moved to Germany and then to England.",
"Why did they move?",
"Sophia and Una died there in 1871 and 1877, respectively.",
"..."],
"author_num": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
}
```
The fields are:
* `pid (string)`: a unique identifier of the passage that corresponds to the passage ids in the public OR-QuAC dataset.
* `title (string)`: Title of the source Wikipedia page for `passage`
* `passage (string)`: A passage from English Wikipedia
* `sentences (list of strings)`: A list of all the sentences that were segmented from `passage`.
* `utterances (list of strings)`: A synthetic dialog generated from `passage` by our Dialog Inpainter model. The list contains alternating utterances from each speaker (`[utterance_1, utterance_2, …, utterance_n]`). In this dataset, the first utterance is a "prompt" that was provided to the model, and every alternating utterance is a sentence from the passage.
* `author_num (list of ints)`: a list of integers indicating the author number in `text`. `[utterance_1_author, utterance_2_author, …, utterance_n_author]`. Author numbers are either 0 or 1.
Note that the dialog in `utterances` only uses the first 6 sentences of the passage; the remaining sentences are provided in the `sentences` field and can be used to extend the dialog.
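As a sketch of one way to consume this format, each inpainted reader utterance (author 1) can be paired with the writer's following passage sentence (author 0) to build (question, answer) examples; the conversation below is a shortened toy version of the JSON above:

```python
import json

# Toy line mirroring the schema above; in practice, read lines from the
# downloaded data_train.jsonl-*.gz shards with gzip.open(...).
line = json.dumps({
    "utterances": [
        "Hi, I'm your automated assistant.",
        "What was her first education?",
        "She was enrolled at a boarding school.",
        "Did she stay in the USA?",
        "The family moved to Germany and then to England.",
    ],
    "author_num": [0, 1, 0, 1, 0],
})

convo = json.loads(line)
qa_pairs = [
    (u, convo["utterances"][i + 1])
    for i, (u, a) in enumerate(zip(convo["utterances"], convo["author_num"]))
    if a == 1 and i + 1 < len(convo["utterances"])
]
print(qa_pairs[0])  # first (question, answer) pair
```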
## Download the processed dataset via [TFDS](https://www.tensorflow.org/datasets/catalog/wiki_dialog).
First, install the [`tfds-nightly`](https://www.tensorflow.org/datasets/overview#installation) package and other dependencies.
```bash
pip install -q tfds-nightly tensorflow apache_beam
```
After installation, load the `WikiDialog-OQ` dataset using the following snippet:
```python
>>> import tensorflow_datasets as tfds
>>> dataset, info = tfds.load('wiki_dialog/OQ', with_info=True)
>>> info
tfds.core.DatasetInfo(
name='wiki_dialog',
full_name='wiki_dialog/OQ/1.0.0',
description="""
WikiDialog is a large dataset of synthetically generated information-seeking
conversations. Each conversation in the dataset contains two speakers grounded
in a passage from English Wikipedia: one speaker’s utterances consist of exact
sentences from the passage; the other speaker is generated by a large language
model.
""",
config_description="""
WikiDialog generated from the dialog inpainter finetuned on OR-QuAC and QReCC. `OQ` stands for OR-QuAC and QReCC.
""",
homepage='https://www.tensorflow.org/datasets/catalog/wiki_dialog',
data_path='/placer/prod/home/tensorflow-datasets-cns-storage-owner/datasets/wiki_dialog/OQ/1.0.0',
file_format=tfrecord,
download_size=7.04 GiB,
dataset_size=36.58 GiB,
features=FeaturesDict({
'author_num': Sequence(tf.int32),
'passage': Text(shape=(), dtype=tf.string),
'pid': Text(shape=(), dtype=tf.string),
'sentences': Sequence(Text(shape=(), dtype=tf.string)),
'title': Text(shape=(), dtype=tf.string),
'utterances': Sequence(Text(shape=(), dtype=tf.string)),
}),
supervised_keys=None,
disable_shuffling=False,
splits={
'train': <SplitInfo num_examples=11264129, num_shards=512>,
'validation': <SplitInfo num_examples=113822, num_shards=4>,
},
citation="""""",
)
```
## Citing WikiDialog
```
@inproceedings{dai2022dialoginpainting,
title={Dialog Inpainting: Turning Documents to Dialogs},
author={Dai, Zhuyun and Chaganty, Arun Tejasvi and Zhao, Vincent and Amini, Aida and Green, Mike and Rashid, Qazi and Guu, Kelvin},
booktitle={International Conference on Machine Learning (ICML)},
year={2022},
organization={PMLR}
}
``` |
sil-ai/audio-keyword-spotting | 2023-07-24T18:08:02.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"source_datasets:MLCommons/ml_spoken_words",
"language:eng",
"language:en",
"language:spa",
"language:es",
... | sil-ai | null | @InProceedings{huggingface:audio-keyword-spotting,
title = {audio-keyword-spotting},
author={Joshua Nemecek
},
year={2022}
} | null | 0 | 10 | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- eng
- en
- spa
- es
- ind
- id
license: cc-by-4.0
multilinguality:
- multilingual
source_datasets:
- extended|common_voice
- MLCommons/ml_spoken_words
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: Audio Keyword Spotting
tags:
- other-keyword-spotting
---
# Dataset Card for Audio Keyword Spotting
## Table of Contents
- [Table of Contents](#table-of-contents)
## Dataset Description
- **Homepage:** https://sil.ai.org
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), [trabina GitHub](https://github.com/wswu/trabina)

## Dataset Summary
The initial version of this dataset is a subset of [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), which is derived from Common Voice, designed for easier loading. Specifically, the subset consists of `ml_spoken_words` files filtered by the names and placenames transliterated in Bible translations, as found in [trabina](https://github.com/wswu/trabina). For our initial experiment, we have focused only on English, Spanish, and Indonesian, three languages whose name spellings are frequently used in other translations. We anticipate growing this dataset in the future to include additional keywords and other languages as the experiment progresses.
### Data Fields
* file (string): relative audio path inside the archive
* is_valid: whether a sample is valid
* language: language of an instance.
* speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid
* gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: word spoken in a current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files might take a significant amount of time. Thus, it is important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
### Data Splits
The data for each language is split into train / validation / test parts.
## Supported Tasks
Keyword spotting and spoken term search
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree to not attempt to determine the identity of speakers.
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
|
ShapeNet/shapenetcore-glb | 2023-09-20T15:04:40.000Z | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | ShapeNet | null | null | null | 0 | 10 | ---
language:
- en
pretty_name: ShapeNetCore
tags:
- 3D shapes
license: other
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >-
To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field).
After requesting access to this ShapeNet repo, you will be considered for access approval.
After access approval, you (the "Researcher") receive permission to use the ShapeNet database (the "Database") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions:
Researcher shall use the Database only for non-commercial research and educational purposes.
Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database.
Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
The law of the State of New Jersey shall apply to all disputes under this agreement.
For access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affiliated with.
Please actually fill out the fields (DO NOT put the word "Advisor" for PI/Advisor and the word "School" for "Affiliation", please specify the name of your advisor and the name of your school).
extra_gated_fields:
Name: text
PI/Advisor: text
Affiliation: text
Purpose: text
Country: text
I agree to use this dataset for non-commercial use ONLY: checkbox
---
This repository contains ShapeNetCore (v2) in [GLB](https://en.wikipedia.org/wiki/GlTF#GLB) format, a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetCore v2 in the title of your email.
|
khalidalt/SANAD | 2022-09-03T19:36:00.000Z | [
"license:cc-by-4.0",
"region:us"
] | khalidalt | null | null | null | 0 | 10 | ---
license: cc-by-4.0
---
# Dataset Card for SANAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:https://data.mendeley.com/datasets/57zpx667y9/2**
### Dataset Summary
SANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona. All datasets have seven categories [Culture, Finance, Medical, Politics, Religion, Sports and Tech], except AlArabiya, which doesn’t have [Religion]. SANAD contains a total of 190k+ articles.
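For illustration, a minimal label mapping for SANAD-style single-label classification based on the category list above (the exact encoding used by any particular loader is an assumption here):

```python
# Category list taken from the summary above; AlArabiya lacks "Religion".
CATEGORIES = ["Culture", "Finance", "Medical", "Politics", "Religion", "Sports", "Tech"]
label2id = {c: i for i, c in enumerate(CATEGORIES)}
alarabiya_categories = [c for c in CATEGORIES if c != "Religion"]
print(label2id["Tech"], len(alarabiya_categories))  # 6 6
```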
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
license: cc-by-4.0
### Citation Information
```
@article{einea2019sanad,
title={Sanad: Single-label arabic news articles dataset for automatic text categorization},
author={Einea, Omar and Elnagar, Ashraf and Al Debsi, Ridhwan},
journal={Data in brief},
volume={25},
pages={104076},
year={2019},
publisher={Elsevier}
}
```
### Contributions
|
j0hngou/ccmatrix_de-en | 2022-09-26T16:35:03.000Z | [
"language:en",
"language:de",
"region:us"
] | j0hngou | null | null | null | 0 | 10 | ---
language:
- en
- de
---
A sampled version of the [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) dataset for the German-English pair, containing 1M train entries. |
kejian/codesearchnet-python-raw-457k | 2022-09-20T01:45:26.000Z | [
"region:us"
] | kejian | null | null | null | 2 | 10 | Entry not found |
kevinjesse/ManyRefactors4C | 2022-09-25T12:59:34.000Z | [
"license:cc-by-2.0",
"region:us"
] | kevinjesse | null | null | null | 0 | 10 | ---
license: cc-by-2.0
---
|
Kunling/layoutlm_resume_data | 2022-09-29T05:18:32.000Z | [
"license:bsd",
"region:us"
] | Kunling | null | null | null | 1 | 10 | ---
license: bsd
---
|
khaclinh/pp4av | 2022-10-26T04:19:10.000Z | [
"task_categories:object-detection",
"task_ids:face-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-nd-4.0",
"license-plate-detection",
"region:u... | khaclinh | PP4AV is the first public dataset with faces and license plates annotated with driving scenarios.
PP4AV provides 3,447 annotated driving images for both faces and license plates.
For normal camera data, the dataset sampled images from existing videos in which cameras were mounted in moving vehicles driving around European cities.
The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime.
This dataset uses the fisheye images from the WoodScape dataset, selecting 244 images from the front, rear, left, and right cameras for fisheye camera data.
The PP4AV dataset can be used as a benchmark suite (evaluation dataset) for data anonymization models in autonomous driving. | @article{PP4AV2022,
title = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
author = {Linh Trinh, Phuong Pham, Hoang Trinh, Nguyen Bach, Dung Nguyen, Giang Nguyen, Huy Nguyen},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year = {2023}
} | null | 2 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- object-detection
task_ids:
- face-detection
pretty_name: PP4AV
tags:
- license-plate-detection
---
# Dataset Card for PP4AV
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Dataset folder](#folder)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Baseline Model](#baseline-model)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/khaclinh/pp4av
- **Repository:** https://github.com/khaclinh/pp4av
- **Baseline model:** https://huggingface.co/spaces/khaclinh/self-driving-anonymization
- **Paper:** [PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving]
- **Point of Contact:** linhtk.dhbk@gmail.com
### Dataset Summary
PP4AV is the first public dataset with faces and license plates annotated with driving scenarios. PP4AV provides 3,447 annotated driving images for both faces and license plates. For normal camera data, the dataset sampled images from existing videos in which cameras were mounted in moving vehicles driving around European cities. The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime. This dataset uses the fisheye images from the WoodScape dataset, selecting 244 images from the front, rear, left, and right cameras for fisheye camera data. The PP4AV dataset can be used as a benchmark suite (evaluation dataset) for data anonymization models in autonomous driving.
### Languages
English
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The objective of PP4AV is to build a benchmark dataset that can be used to evaluate face and license plate detection models for autonomous driving. For normal camera data, we sampled images from existing videos in which cameras were mounted in moving vehicles driving around European cities. We focused on sampling data in urban areas rather than highways in order to provide sufficient samples of license plates and pedestrians. The images in PP4AV were sampled from **6** European cities at various times of day, including nighttime. The source data from the 6 European cities is described as follows:
- `Paris`: This subset contains **1450** images of a car driving down a Parisian street during the day. The video frame rate is 30 frames per second. The video is longer than one hour; we cut a shorter video for sampling and annotation. The original video can be found at the following URL:
URL: [paris_youtube_video](https://www.youtube.com/watch?v=nqWtGWymV6c)
- `Netherland day time`: This subset consists of **388** daytime images of The Hague and Amsterdam. The images in this subset are sampled from the original video below:
URL: [netherland_youtube_video](https://www.youtube.com/watch?v=Xuo4uCZxNrE)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour.
- `Netherland night time`: This subset consists of **824** nighttime images of The Hague and Amsterdam, sampled from the following original video:
URL: [netherland_youtube_video](https://www.youtube.com/watch?v=eAy9eHsynhM)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour.
- `Switzerland`: This subset consists of **372** images of Switzerland, sampled from the following video:
URL: [switzerland_youtube_video](https://www.youtube.com/watch?v=0iw5IP94m0Q)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than one hour.
- `Zurich`: This subset consists of **50** images of Zurich city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
- `Stuttgart`: This subset consists of **69** images of Stuttgart city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
- `Strasbourg`: This subset consists of **50** images of Strasbourg city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
We use the fisheye images from the WoodScape dataset to select **244** images from the front, rear, left, and right cameras for fisheye camera data.
The source of fisheye data for sampling is located at WoodScape's [Fisheye images](https://woodscape.valeo.com/download).
In total, **3,447** images were selected and annotated in PP4AV.
### Annotations
#### Annotation process
Annotators annotated face and license plate objects in the images. For face objects, bounding boxes cover all detectable human faces from the forehead to the chin and out to the ears. Faces were labelled across diverse sizes and skin tones, including faces partially obscured by a transparent material such as a car windshield. For license plate objects, bounding boxes cover all recognizable license plates with high variability, such as different sizes, countries, vehicle types (motorcycle, automobile, bus, truck), and occlusions by other vehicles. License plates were annotated for vehicles involved in moving traffic. To ensure annotation quality, a two-step process was used. In the first phase, two teams of annotators independently annotated identical image sets. Once their annotation output was complete, a merging method based on the IoU scores between the two bounding boxes of the two annotations was applied. Pairs of annotations with IoU scores above a threshold were merged and saved as a single annotation; annotated pairs with IoU scores below the threshold were considered conflicting. In the second phase, two teams of reviewers inspected the conflicting pairs of annotations for revision, before a second merging method similar to the first was applied. The results of these two phases were combined to form the final annotation. All work was conducted with the CVAT tool: https://github.com/openvinotoolkit/cvat.
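The two-phase merging described above hinges on an IoU comparison between paired bounding boxes. Below is a minimal sketch of that check; the 0.5 threshold, the averaging of matched boxes, and the greedy pairing are illustrative assumptions, not the dataset authors' exact procedure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def merge_annotations(boxes_team1, boxes_team2, threshold=0.5):
    """Greedily pair boxes from two annotation passes: average matches, flag conflicts."""
    merged, conflicts = [], []
    for a in boxes_team1:
        best = max(boxes_team2, key=lambda b: iou(a, b), default=None)
        if best is not None and iou(a, best) >= threshold:
            # Merge the two annotations by averaging each coordinate.
            merged.append(tuple((p + q) / 2 for p, q in zip(a, best)))
        else:
            # No sufficiently overlapping partner: send to reviewers.
            conflicts.append(a)
    return merged, conflicts

merged, conflicts = merge_annotations([(10, 10, 50, 50)], [(12, 11, 52, 49)])
```

Boxes that end up in `conflicts` would go to the second-phase review described above.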
#### Who are the annotators?
Vantix Data Science team
### Dataset Folder
The `data` folder contains the following files:
- `images.zip`: contains all preprocessed images of the PP4AV dataset. This `zip` file includes the following folders:
  `fisheye`: folder containing 244 fisheye images in `.png` format
  `zurich`: folder containing 50 images in `.png` format
  `strasbourg`: folder containing 50 images in `.png` format
  `stuttgart`: folder containing 69 images in `.png` format
  `switzerland`: folder containing 372 images in `.png` format
  `netherlands_day`: folder containing 388 images in `.png` format
  `netherlands_night`: folder containing 824 images in `.png` format
  `paris`: folder containing 1450 images in `.png` format
- `annotations.zip`: contains annotation data corresponding to the data in `images.zip`. This file includes the following folders:
  `fisheye`: folder containing 244 `.txt` annotation files in `yolo v1.1` format, corresponding to the 244 fisheye images
  `zurich`: folder containing 50 `.txt` annotation files in `yolo v1.1` format, corresponding to the 50 image files of the `zurich` subset
  `strasbourg`: folder containing 50 `.txt` annotation files in `yolo v1.1` format, corresponding to the 50 image files of the `strasbourg` subset
  `stuttgart`: folder containing 69 `.txt` annotation files in `yolo v1.1` format, corresponding to the 69 image files of the `stuttgart` subset
  `switzerland`: folder containing 372 `.txt` annotation files in `yolo v1.1` format, corresponding to the 372 image files of the `switzerland` subset
  `netherlands_day`: folder containing 388 `.txt` annotation files in `yolo v1.1` format, corresponding to the 388 image files of the `netherlands_day` subset
  `netherlands_night`: folder containing 824 `.txt` annotation files in `yolo v1.1` format, corresponding to the 824 image files of the `netherlands_night` subset
  `paris`: folder containing 1450 `.txt` annotation files in `yolo v1.1` format, corresponding to the 1450 image files of the `paris` subset
- `soiling_annotations.zip`: contains raw annotation data without filtering. Its folder structure mirrors that of `annotations.zip`.
### Personal and Sensitive Information
[More Information Needed]
## Dataset Structure
### Data Instances
A data point comprises an image and its face and license plate annotations.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1920x1080 at 0x19FA12186D8>, 'objects': {
'bbox': [
[0 0.230078 0.317081 0.239062 0.331367],
[1 0.5017185 0.0306425 0.5185935 0.0410975],
[1 0.695078 0.0710145 0.7109375 0.0863355],
[1 0.4089065 0.31646 0.414375 0.32764],
[0 0.1843745 0.403416 0.201093 0.414182],
[0 0.7132 0.3393474 0.717922 0.3514285]
]
}
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `objects`: a dictionary of face and license plate bounding boxes present on the image
- `bbox`: the bounding box of each face and license plate (in the [yolo](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#yolo) format). Each row in the annotation `.txt` file for an image consists of data in the format `<object-class> <x_center> <y_center> <width> <height>`:
- `object-class`: an integer class id, either 0 or 1, where 0 indicates a face object and 1 indicates a license plate object
- `x_center`: normalized x-axis coordinate of the center of the bounding box.
`x_center = <absolute_x_center> / <image_width>`
- `y_center`: normalized y-axis coordinate of the center of the bounding box.
`y_center = <absolute_y_center> / <image_height>`
- `width`: normalized width of the bounding box.
`width = <absolute_width> / <image_width>`
- `height`: normalized height of the bounding box.
`height = <absolute_height> / <image_height>`
- Example lines in a YOLO v1.1 format `.txt` annotation file:
```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```
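An annotation line like the ones above can be parsed and mapped back to pixel coordinates with a few lines of Python. A minimal sketch follows; the 1920×1080 resolution is only an illustrative assumption, so use each image's real dimensions in practice:

```python
def parse_yolo_line(line, img_w, img_h):
    """Convert one '<class> <xc> <yc> <w> <h>' line to (class, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    # Denormalize: YOLO stores center/size as fractions of the image dimensions.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Convert center/size to corner coordinates.
    return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

box = parse_yolo_line("1 0.716797 0.395833 0.216406 0.147222", 1920, 1080)
```

Here `box[0] == 1` marks a license plate, and the remaining values are the pixel-space corners of its bounding box.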
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Baseline Model
Pretrained weights and a demo of the baseline model are available in the [self-driving-anonymization huggingface space](https://huggingface.co/spaces/khaclinh/self-driving-anonymization)
### Dataset Curators
Linh Trinh
### Licensing Information
[Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
@inproceedings{PP4AV2022,
  title     = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
  author    = {Linh Trinh and Phuong Pham and Hoang Trinh and Nguyen Bach and Dung Nguyen and Giang Nguyen and Huy Nguyen},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2023}
}
```
### Contributions
Thanks to [@khaclinh](https://github.com/khaclinh) for adding this dataset.
|
brendenc/celeb-identities | 2022-10-09T02:33:12.000Z | [
"region:us"
] | brendenc | null | null | null | 0 | 10 | This is a small dataset containing celebrity faces. This dataset was created for educational purposes and is far too small for any sort of model training. However, these images can be used for demo examples or other educational purposes. |
julien-c/titanic-survival | 2022-10-10T19:20:30.000Z | [
"task_categories:tabular-classification",
"license:cc",
"tabular-classification",
"region:us"
] | julien-c | null | null | null | 1 | 10 | ---
license: cc
tags:
- tabular-classification
task_categories:
- tabular-classification
---
## Titanic Survival
from https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html |
ghoumrassi/clothes_sample | 2022-10-15T18:07:22.000Z | [
"region:us"
] | ghoumrassi | null | null | null | 2 | 10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 20078406.0
num_examples: 990
download_size: 0
dataset_size: 20078406.0
---
# Dataset Card for "clothes_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anab/copa-sse | 2022-10-26T01:53:17.000Z | [
"task_categories:text2text-generation",
"task_categories:multiple-choice",
"task_ids:explanation-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"commonsense reasoning",
"... | anab | null | null | null | 3 | 10 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
pretty_name: Semi-structured Explanations for Commonsense Reasoning
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- commonsense reasoning
- explanation
- graph-based reasoning
task_categories:
- text2text-generation
- multiple-choice
task_ids:
- explanation-generation
---
# Dataset Card for COPA-SSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/a-brassard/copa-sse
- **Paper:** [COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning](https://arxiv.org/abs/2201.06777)
- **Point of Contact:** [Ana Brassard](mailto:ana.brassard@riken.jp)
### Dataset Summary

COPA-SSE contains crowdsourced explanations for the [Balanced COPA](https://balanced-copa.github.io/) dataset, a variant of the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) benchmark. The explanations are formatted as a set of triple-like common sense statements with [ConceptNet](https://conceptnet.io/) relations but freely written concepts.
### Supported Tasks and Leaderboards
Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA).
### Languages
English
## Dataset Structure
### Data Instances
The validation and test sets each contain Balanced COPA samples with added explanations in `.jsonl` format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.
### Data Fields
Each entry contains:
- the original question (matching format and ids)
- `human-explanations`: a list of explanations each containing:
- `expl-id`: the explanation id
- `text`: the explanation in plain text (full sentences)
- `worker-id`: anonymized worker id (the author of the explanation)
- `worker-avg`: the average score the author got for their explanations
- `all-ratings`: all collected ratings for the explanation
- `filtered-ratings`: ratings excluding those that failed the control
- `triples`: the triple-form explanation (a list of ConceptNet-like triples)
Example entry:
```
id: 1,
asks-for: cause,
most-plausible-alternative: 1,
p: "My body cast a shadow over the grass.",
a1: "The sun was rising.",
a2: "The grass was cut.",
human-explanations: [
{expl-id: f4d9b407-681b-4340-9be1-ac044f1c2230,
text: "Sunrise causes casted shadows.",
worker-id: 3a71407b-9431-49f9-b3ca-1641f7c05f3b,
worker-avg: 3.5832864694635025,
all-ratings: [1, 3, 3, 4, 3],
filtered-ratings: [3, 3, 4, 3],
filtered-avg-rating: 3.25,
triples: [["sunrise", "Causes", "casted shadows"]]
}, ...]
```
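Since each question carries several crowdsourced explanations with quality ratings, a common first step is selecting the best-rated one. The sketch below works on an abbreviated, partly fabricated record in the shape shown above (the second explanation is invented for illustration and is not in the dataset):

```python
record = {
    "id": 1,
    "human-explanations": [
        {"text": "Sunrise causes casted shadows.",
         "filtered-avg-rating": 3.25,
         "triples": [["sunrise", "Causes", "casted shadows"]]},
        # Hypothetical second explanation, added only to make the selection non-trivial.
        {"text": "Shadows require light.",
         "filtered-avg-rating": 2.5,
         "triples": [["shadow", "HasPrerequisite", "light"]]},
    ],
}

def best_explanation(rec):
    """Return the explanation with the highest filtered average rating."""
    return max(rec["human-explanations"], key=lambda e: e["filtered-avg-rating"])

best = best_explanation(record)
```

The selected entry exposes both the plain-text explanation (`best["text"]`) and its triple form (`best["triples"]`) for text-based or graph-based models respectively.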
### Data Splits
Follows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.
## Dataset Creation
### Curation Rationale
The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.
### Source Data
#### Initial Data Collection and Normalization
The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.
#### Who are the source language producers?
The original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.
### Annotations
#### Annotation process
Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.
#### Who are the annotators?
The workers were restricted to persons located in the U.S. or G.B., with a HIT approval of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
Models trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.
### Discussion of Biases
COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.
### Other Known Limitations
The data was originally intended to be explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations.
## Additional Information
### Dataset Curators
This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are both members of the Riken AIP Natural Language Understanding Team and the Tohoku NLP Lab under Tohoku University.
### Licensing Information
COPA-SSE is released under the [MIT License](https://mit-license.org/).
### Citation Information
```
@InProceedings{copa-sse:LREC2022,
author = {Brassard, Ana and Heinzerling, Benjamin and Kavumba, Pride and Inui, Kentaro},
title = {COPA-SSE: Semi-structured Explanations for Commonsense Reasoning},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {3994--4000},
url = {https://aclanthology.org/2022.lrec-1.425}
}
```
### Contributions
Thanks to [@a-brassard](https://github.com/a-brassard) for adding this dataset. |
lmqg/qa_harvesting_from_wikipedia | 2022-11-05T03:19:40.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:1M<",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"region:us"
] | lmqg | QA pairs generated in https://aclanthology.org/P18-1177/ | @inproceedings{du-cardie-2018-harvesting,
title = "Harvesting Paragraph-level Question-Answer Pairs from {W}ikipedia",
author = "Du, Xinya and
Cardie, Claire",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1177",
doi = "10.18653/v1/P18-1177",
pages = "1907--1917",
abstract = "We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. As compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-the-art. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top ranking Wikipedia articles and create a corpus of over one million question-answer pairs. We provide qualitative analysis for the this large-scale generated corpus from Wikipedia.",
} | null | 0 | 10 | ---
license: cc-by-4.0
pretty_name: Harvesting QA pairs from Wikipedia.
language: en
multilinguality: monolingual
size_categories: 1M<
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://aclanthology.org/P18-1177/](https://aclanthology.org/P18-1177/)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the QA dataset collected by [Harvesting Paragraph-level Question-Answer Pairs from Wikipedia](https://aclanthology.org/P18-1177) (Du & Cardie, ACL 2018).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
|train |validation|test |
|--------:|---------:|-------:|
|1,204,925| 30,293| 24,473|
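A sketch of how a single record might be consumed is shown below. The SQuAD-style shape of the `answers` field (`text` plus `answer_start` lists) is an assumption based on the card's field descriptions, and the sample record is fabricated, so verify against the actual data before relying on it:

```python
# Hypothetical record matching the documented fields: id, title, context, question, answers.
sample = {
    "id": "0",
    "title": "Example",
    "context": "The quick brown fox jumps over the lazy dog.",
    "question": "What jumps over the lazy dog?",
    "answers": {"text": ["The quick brown fox"], "answer_start": [0]},
}

def extract_answer_span(example):
    """Slice the first gold answer out of the context using its start offset."""
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    return example["context"][start:start + len(text)]

span = extract_answer_span(sample)
```

Checking that the sliced span equals the stored answer text is a cheap sanity check when preprocessing extractive-QA data.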
## Citation Information
```
@inproceedings{du-cardie-2018-harvesting,
title = "Harvesting Paragraph-level Question-Answer Pairs from {W}ikipedia",
author = "Du, Xinya and
Cardie, Claire",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1177",
doi = "10.18653/v1/P18-1177",
pages = "1907--1917",
abstract = "We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. As compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-the-art. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top ranking Wikipedia articles and create a corpus of over one million question-answer pairs. We provide qualitative analysis for the this large-scale generated corpus from Wikipedia.",
}
``` |
alexandrainst/da-wit | 2022-11-18T15:48:44.000Z | [
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning",
"size_categories:100K<n<1M",
"source_datasets:wikimedia/wit_base",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | alexandrainst | null | null | null | 2 | 10 | ---
pretty_name: Danish WIT
language:
- da
license:
- cc-by-sa-4.0
size_categories:
- 100K<n<1M
source_datasets:
- wikimedia/wit_base
task_categories:
- image-to-text
- zero-shot-image-classification
- feature-extraction
task_ids:
- image-captioning
---
# Dataset Card for Danish WIT
## Dataset Description
- **Repository:** <https://gist.github.com/saattrupdan/bb6c9c52d9f4b35258db2b2456d31224>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in [July
2021](https://dl.acm.org/doi/abs/10.1145/3404835.3463257), a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in [September
2021](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/),
a modified version of WIT from which they removed images with empty
"reference descriptions", images where a person's face covers more
than 10% of the image surface, and inappropriate images that are candidates for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in
accordance with WIT-Base's [identical
license](https://huggingface.co/datasets/wikimedia/wit_base#licensing-information).
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
An example from the `train` split looks as follows.
```
{
"image": [PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x409 at 0x7FE4384E2190],
"image_url": "https://upload.wikimedia.org/wikipedia/commons/4/45/Bispen_-_inside.jpg",
"embedding": [2.8568285, 2.9562542, 0.33794892, 8.753725, ...],
"metadata_url": "http://commons.wikimedia.org/wiki/File:Bispen_-_inside.jpg",
"original_height": 3161,
"original_width": 2316,
"mime_type": "image/jpeg",
"caption_attribution_description": "Kulturhuset Bispen set indefra. Biblioteket er til venstre",
"page_url": "https://da.wikipedia.org/wiki/Bispen",
"attribution_passes_lang_id": True,
"caption_alt_text_description": None,
"caption_reference_description": "Bispen set indefra fra 1. sal, hvor ....",
"caption_title_and_reference_description": "Bispen [SEP] Bispen set indefra ...",
"context_page_description": "Bispen er navnet på det offentlige kulturhus i ...",
"context_section_description": "Bispen er navnet på det offentlige kulturhus i ...",
"hierarchical_section_title": "Bispen",
"is_main_image": True,
"page_changed_recently": True,
"page_title": "Bispen",
"section_title": None
}
```
### Data Fields
The data fields are the same among all splits.
- `image`: an `Image` feature.
- `image_url`: a `str` feature.
- `embedding`: a `list` feature.
- `metadata_url`: a `str` feature.
- `original_height`: an `int` or `NaN` feature.
- `original_width`: an `int` or `NaN` feature.
- `mime_type`: a `str` or `None` feature.
- `caption_attribution_description`: a `str` or `None` feature.
- `page_url`: a `str` feature.
- `attribution_passes_lang_id`: a `bool` or `None` feature.
- `caption_alt_text_description`: a `str` or `None` feature.
- `caption_reference_description`: a `str` or `None` feature.
- `caption_title_and_reference_description`: a `str` or `None` feature.
- `context_page_description`: a `str` or `None` feature.
- `context_section_description`: a `str` or `None` feature.
- `hierarchical_section_title`: a `str` feature.
- `is_main_image`: a `bool` or `None` feature.
- `page_changed_recently`: a `bool` or `None` feature.
- `page_title`: a `str` feature.
- `section_title`: a `str` or `None` feature.
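The precomputed `embedding` field makes the text-image search task above straightforward via vector similarity. A minimal cosine-similarity sketch with toy three-dimensional vectors follows (real WIT embeddings are much longer, and the candidate names are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy query embedding and two hypothetical image embeddings.
query = [1.0, 0.0, 2.0]
candidates = {"img_a": [1.0, 0.0, 2.0], "img_b": [0.0, 3.0, 0.0]}

# Rank candidates by similarity to the query; the best match wins.
best = max(candidates, key=lambda k: cosine_similarity(query, candidates[k]))
```

In a real retrieval setup the query vector would come from a text or image encoder aligned with the stored embeddings.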
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
| split | samples |
|---------|--------:|
| train | 167,460 |
| val | 256 |
| test | 1,024 |
## Dataset Creation
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
[WIT-Base](https://huggingface.co/datasets/wikimedia/wit_base) dataset, which in turn
comes from Google's [WIT](https://huggingface.co/datasets/google/wit) dataset.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
|
Elite35P-Server/EliteVoiceProject | 2023-01-14T19:28:16.000Z | [
"annotations_creators:crowdsourced",
"language_creators:さくらみこ",
"language_creators:hololive production",
"multilinguality:monolingual",
"language:ja",
"license:other",
"region:us"
] | Elite35P-Server | null | @InProceedings{elitevoiceproject:dataset,
title = {Elite Voice Project},
author={Elite35P Server.},
year={2022}
} | null | 4 | 10 | ---
annotations_creators:
- crowdsourced
language_creators:
- さくらみこ
- hololive production
language:
- ja
multilinguality:
- monolingual
license: other
---
# Elite Voice Project
This is an unofficial project whose goal is to turn the voice of Sakura Miko, a VTuber affiliated with hololive, into a dataset that can be used for speech recognition and related tasks.
---
# About the LICENSE
## Audio data in the dataset
All data is used in compliance with the [hololive production Derivative Works Guidelines](https://hololive.hololivepro.com/guidelines/).
The copyright of this data is held by COVER Corp. and related parties; the repository owner and contributors hold no rights to it.
---
# Contributing to this project
Contributions to this project are warmly welcomed. Please read the instructions below before opening a pull request.
## Before you start
Be sure to read the [hololive production Derivative Works Guidelines](https://hololive.hololivepro.com/guidelines/).
---
## Adding audio data
In general, contributions take the form of adding the audio data you want to include to the appropriate directory under `audio_raw`.
If you add audio data using git or similar tools, git-lfs is required; please install git-lfs beforehand.
The structure of the `audio_raw` directory is as follows.
```
audio_raw
├─twitch
│ ├─test
│ │ └─<ID>
│ │ ├─1.mp3
│ │ ├─2.mp3
│ │ ├─3.mp3
│ │ ├─.
│ │ └─.
│ └─train
│ └─<ID>
│ ├─1.mp3
│ ├─2.mp3
│ ├─3.mp3
│ ├─.
│ └─.
├─twitter
│ ├─test
│ │ └─<ID>
│ │ ├─1.mp3
│ │ ├─2.mp3
│ │ ├─3.mp3
│ │ ├─.
│ │ └─.
│ └─train
│ └─<ID>
│ ├─1.mp3
│ ├─2.mp3
│ ├─3.mp3
│ ├─.
│ └─.
└─youtube
├─test
│ └─<ID>
│ ├─1.mp3
│ ├─2.mp3
│ ├─3.mp3
│ ├─.
│ └─.
└─train
└─<ID>
├─1.mp3
├─2.mp3
├─3.mp3
├─.
└─.
```
- The `youtube`, `twitter`, and `twitch` directories are named after the platform the data was clipped from.
- The `train` and `test` directories exist because training models such as [OpenAI Whisper](https://openai.com/blog/whisper/) requires two kinds of data: train and test.
  - Data clipped from the same stream may go into both `train` and `test`, but never put identical data in both; doing so prevents accurate training.
- `<ID>` is the ID of the stream (or other source) the audio was clipped from.
  - For YouTube, `X9zw0QF12Kc` from `https://www.youtube.com/watch?v=X9zw0QF12Kc` becomes the directory name.
  - For Twitter, `1lPKqmyQPOAKb` from `https://twitter.com/i/spaces/1lPKqmyQPOAKb` becomes the directory name.
  - For Twitch, `824387510` from `https://www.twitch.tv/videos/824387510` becomes the directory name.
- Place sequentially numbered mp3 audio files inside the `<ID>` directory.
  - Audio clips must be 30 seconds or shorter.
  - Avoid audio that contains background music, sound effects, or noise.
  - Avoid audio that is too short. (Such clips already in the dataset are scheduled for removal.)
  - Clips as close to 30 seconds as possible are appreciated.
  - Audio with coherent context is preferred.
  - Avoid English audio.
---
## Adding transcript data
Basically, you add the transcript text for the audio data you want to contribute to the appropriate directory inside the `transcript_raw` directory.
The structure of the `transcript_raw` directory is as follows.
```
transcript_raw
├─twitch
│ ├─test
│ │ └─<ID>.csv
│ │
│ └─train
│ └─<ID>.csv
│
├─twitter
│ ├─test
│ │ └─<ID>.csv
│ │
│ └─train
│ └─<ID>.csv
│
└─youtube
├─test
│ └─<ID>.csv
│
└─train
└─<ID>.csv
```
- The `youtube`, `twitter`, and `twitch` directories are named after the platform the data was clipped from.
- `<ID>` is the ID of the stream (or other source) the audio was clipped from.
  - For YouTube, `X9zw0QF12Kc` from `https://www.youtube.com/watch?v=X9zw0QF12Kc` becomes the directory name.
  - For Twitter, `1lPKqmyQPOAKb` from `https://twitter.com/i/spaces/1lPKqmyQPOAKb` becomes the directory name.
  - For Twitch, `824387510` from `https://www.twitch.tv/videos/824387510` becomes the directory name.
- About `<ID>.csv`
  - You must add transcripts that correspond to the audio data added under `audio_raw`.
  - Enter punctuation such as `!` and `?` accurately.
  - Use half-width alphanumeric characters and symbols (e.g. `!`, `?`, `1`).
  - Avoid kanji numerals.
  - The first line of each csv file must start with `path,sentence`.
  - We recommend first transcribing with a tool such as Whisper and then correcting the output.
### Example CSV file
```csv
path,sentence
1.mp3,雷が落ちた時のみこ
2.mp3,コメント止まった?
3.mp3,見えてるー?いやコメント止まった。壊れた。
4.mp3,インターネット繋がってない!
5.mp3,雷鳴ったよまた
``` |
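The transcript rules above can also be checked mechanically before opening a pull request. The sketch below is an unofficial helper (not part of the project) that validates the `path,sentence` header and the sequential `N.mp3` paths:

```python
import csv
import io

def validate_transcript(csv_text: str) -> list[str]:
    """Return a list of problems found in a transcript CSV; an empty list means valid."""
    problems = []
    rows = list(csv.reader(io.StringIO(csv_text)))
    # The first line must be exactly the required header.
    if not rows or rows[0] != ["path", "sentence"]:
        problems.append("first line must be exactly 'path,sentence'")
        return problems
    for i, row in enumerate(rows[1:], start=1):
        if len(row) != 2:
            problems.append(f"row {i}: expected 2 columns, got {len(row)}")
            continue
        path, sentence = row
        # Audio files must be numbered sequentially: 1.mp3, 2.mp3, ...
        if path != f"{i}.mp3":
            problems.append(f"row {i}: path should be '{i}.mp3', got '{path}'")
        if not sentence.strip():
            problems.append(f"row {i}: empty sentence")
    return problems

sample = "path,sentence\n1.mp3,雷が落ちた時のみこ\n2.mp3,コメント止まった?\n"
print(validate_transcript(sample))  # []
```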
zpn/delaney | 2022-11-30T17:09:36.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"license:mit",
"bio",
"bio-chem",
"molnet",
"molecule-net",
"biophysics",
"arxiv:1703.00564",
"region:us"
] | zpn | null | null | null | 1 | 10 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: delaney
size_categories:
- n<1K
source_datasets: []
tags:
- bio
- bio-chem
- molnet
- molecule-net
- biophysics
task_categories:
- other
task_ids: []
---
# Dataset Card for delaney
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://moleculenet.org/**
- **Repository: https://github.com/deepchem/deepchem/tree/master**
- **Paper: https://arxiv.org/abs/1703.00564**
### Dataset Summary
`delaney` (aka `ESOL`) is a dataset included in [MoleculeNet](https://moleculenet.org/). It contains water solubility data (log solubility in mols per litre) for common organic small molecules.
## Dataset Structure
### Data Fields
Each split contains
* `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule
* `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule
* `target`: log solubility in mols per litre
### Data Splits
The dataset is split 80/10/10 into train/valid/test using a scaffold split.
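A scaffold split keeps structurally similar molecules in the same split so the test set probes generalization. The sketch below illustrates only the group-wise allocation idea (the scaffold keys are assumed precomputed; in practice they would come from the SMILES via a chemistry toolkit, and this is not the exact procedure used to build this dataset):

```python
def scaffold_split(scaffolds, frac_train=0.8, frac_valid=0.1):
    """Group indices by scaffold key, then fill train/valid/test with whole
    groups (largest first) so that no scaffold is shared across splits."""
    groups = {}
    for idx, scaf in enumerate(scaffolds):
        groups.setdefault(scaf, []).append(idx)
    ordered = sorted(groups.values(), key=len, reverse=True)
    n = len(scaffolds)
    train, valid, test = [], [], []
    for group in ordered:
        if len(train) + len(group) <= frac_train * n:
            train.extend(group)
        elif len(valid) + len(group) <= frac_valid * n:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test

# Toy example: 10 molecules over 4 scaffolds.
scaffolds = ["A"] * 5 + ["B"] * 3 + ["C"] + ["D"]
train, valid, test = scaffold_split(scaffolds)
print(len(train), len(valid), len(test))  # 8 1 1
```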
### Source Data
#### Initial Data Collection and Normalization
Data was originally generated by the Pande Group at Stanford.
### Licensing Information
This dataset was originally released under an MIT license
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1703.00564,
doi = {10.48550/ARXIV.1703.00564},
url = {https://arxiv.org/abs/1703.00564},
author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay},
keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences},
title = {MoleculeNet: A Benchmark for Molecular Machine Learning},
publisher = {arXiv},
year = {2017},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
|
argilla/twitter-coronavirus | 2022-12-06T16:20:31.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-analysis",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | argilla | null | null | null | 0 | 10 | ---
language:
- en
license:
- unknown
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-analysis
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: location
dtype: string
- name: screen_name
dtype: int64
- name: split
dtype: string
- name: user_name
dtype: int64
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 25394534
num_examples: 44955
download_size: 15712627
dataset_size: 25394534
---
# Dataset Card for "twitter-coronavirus"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
This dataset supports text classification. The tweets were pulled from Twitter and then manually tagged.
The names and usernames have been replaced with codes to avoid any privacy concerns.
Columns:
1) Location
2) Tweet At
3) Original Tweet
4) Label
- Extremely Negative
- Negative
- Neutral
- Positive
- Extremely Positive
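The five fine-grained labels above can be collapsed into a coarser 3-way sentiment scheme, a common preprocessing step. The mapping below is our suggestion, not part of the original dataset:

```python
# Suggested collapse of the 5-way labels into 3-way sentiment (hypothetical mapping).
COARSE = {
    "Extremely Negative": "negative",
    "Negative": "negative",
    "Neutral": "neutral",
    "Positive": "positive",
    "Extremely Positive": "positive",
}

def coarsen(label: str) -> str:
    """Map a fine-grained label to its coarse sentiment class."""
    return COARSE[label]

print(coarsen("Extremely Negative"))  # negative
```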
### Languages
english
### Citation Information
https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. |
ksaml/Stanford_dogs | 2022-12-11T17:55:02.000Z | [
"license:other",
"region:us"
] | ksaml | null | null | null | 0 | 10 | ---
license: other
---
## Context
The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset was built using images and annotations from ImageNet for the task of fine-grained image categorization, a challenging problem as certain dog breeds have nearly identical features or differ only in colour and age. <b>Only the images are used here, so this dataset does not contain any labels.</b>
## Content
Number of images: 20,580
## Acknowledgements
The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the dataset on the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [pdf] [poster] [BibTex]
Secondary:
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009. [pdf] [BibTex] |
breadlicker45/wizards-of-Waverly-place-scripts | 2023-09-09T20:34:30.000Z | [
"license:other",
"region:us"
] | breadlicker45 | null | null | null | 0 | 10 | ---
license: other
---
|
EdBianchi/SmokeFire | 2022-12-29T14:45:31.000Z | [
"region:us"
] | EdBianchi | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Fire
'1': Normal
'2': Smoke
splits:
- name: train
num_bytes: 166216842.46
num_examples: 6060
- name: test
num_bytes: 89193578.0
num_examples: 759
- name: validation
num_bytes: 75838884.0
num_examples: 756
download_size: 890673915
dataset_size: 331249304.46000004
---
# Dataset Card for "SmokeFire"
Wildfires, or forest fires, are unpredictable, catastrophic, and destructive events that affect rural areas, impacting both vegetation and wildlife.
This dataset can be used to train networks able to detect smoke and/or fire in forest environments.
## Data Sources & Description
- **This dataset consists of samples from two datasets hosted on Kaggle:**
- [Forest Fire](https://www.kaggle.com/datasets/kutaykutlu/forest-fire?select=train_fire)
- [Forest Fire Images](https://www.kaggle.com/datasets/mohnishsaiprasad/forest-fire-images)
- **The combined dataset consists of:**
- 2525 **Fire** samples
- 2525 **Smoke** samples
- 2525 **Normal** samples
- **The dataset is split into:**
- Train Set -> 6060 samples
- Validation Set -> 756 samples
- Test Set -> 759 samples
|
jordiclive/wikipedia-summary-dataset | 2023-02-05T16:15:04.000Z | [
"region:us"
] | jordiclive | null | null | null | 4 | 10 |
## Dataset Description
- **Repository:** https://github.com/tscheepers/Wikipedia-Summary-Dataset
### Dataset Summary
This is a dataset that can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September of 2017.
The dataset is different from the regular Wikipedia dump and different from the datasets that can be created by gensim because ours contains the extracted summaries and not the entire unprocessed page body. This could be useful if one wants to use the smaller, more concise, and more definitional summaries in their research. Or if one just wants to use a smaller but still diverse dataset for efficient training with resource constraints.
A summary or introduction of an article is everything starting from the page title up to the content outline.
### Citation Information
```
@mastersthesis{scheepers2017compositionality,
author = {Scheepers, Thijs},
title = {Improving the Compositionality of Word Embeddings},
school = {Universiteit van Amsterdam},
year = {2017},
month = {11},
address = {Science Park 904, Amsterdam, Netherlands}
}
``` |
nlphuji/dollar_street_test | 2023-01-17T21:05:24.000Z | [
"region:us"
] | nlphuji | null | null | null | 0 | 10 | # Dollar Street (test set)
Original paper: [The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World](https://openreview.net/forum?id=qnfYsave0U4)
Homepage: https://www.kaggle.com/datasets/mlcommons/the-dollar-street-dataset
Bibtex:
```
@inproceedings{
rojas2022the,
title={The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World},
author={William A Gaviria Rojas and Sudnya Diamos and Keertan Ranjan Kini and David Kanter and Vijay Janapa Reddi and Cody Coleman},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=qnfYsave0U4}
}
``` |
Cohere/miracl-ar-corpus-22-12 | 2023-02-06T12:00:08.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 10 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity.
Compare the query embeddings against the corpus embeddings either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ar-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
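The hit@3 metric described above is straightforward to compute from ranked document ids; a minimal sketch (the function names are ours):

```python
def hit_at_k(ranked_ids, relevant_ids, k=3):
    """1.0 if at least one relevant document appears in the top-k results, else 0.0."""
    return 1.0 if any(doc in relevant_ids for doc in ranked_ids[:k]) else 0.0

def mean_hit_at_k(all_ranked, all_relevant, k=3):
    """Average hit@k over all queries."""
    scores = [hit_at_k(r, rel, k) for r, rel in zip(all_ranked, all_relevant)]
    return sum(scores) / len(scores)

# Two queries: the first has a relevant doc in the top 3, the second does not.
ranked = [["d1", "d7", "d3", "d9"], ["d2", "d4", "d6", "d8"]]
relevant = [{"d3"}, {"d9"}]
print(mean_hit_at_k(ranked, relevant))  # 0.5
```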
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
fabraz/writingPromptAug | 2023-05-26T14:13:34.000Z | [
"task_categories:text-generation",
"language:en",
"license:mit",
"writingPrompt",
"arxiv:1805.04833",
"region:us"
] | fabraz | null | null | null | 0 | 10 | ---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- writingPrompt
dataset_info:
- config_name: 0.0.1
features:
- name: splitLineIndex
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_examples: 13232
- name: valid
num_examples: 5168
- name: test
num_examples: 5700
---
# Dataset Card for Writing Prompt Augmentation Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** Writing Prompt Augmentation Dataset was built to expand samples of [FAIR Writing Prompt Dataset](https://www.kaggle.com/datasets/ratthachat/writing-prompts)
- **Leaderboard:**
- **Point of Contact:** [Fabricio Braz](mailto:fabraz@unb.br)
### Dataset Summary
Writing Prompt Augmentation Dataset was built to expand the samples of the [FAIR Writing Prompt dataset](https://www.kaggle.com/datasets/ratthachat/writing-prompts) for use in [Open Assistant](https://github.com/LAION-AI/Open-Assistant).
### Languages
English
## Dataset Structure
### Data Instances
```javascript
{"splitLineIndex":36888,
"text":"User: write me a story about: Most responses on here have a twist , and all of them are fictional . Show us a piece of your actual life ; let the reader experience you ., where the middle of the story is about: The boss sits on my wall and asks me where a urgent report is up. I stare at the screen, confused. -> Rosey: Sure, here\'s a story about: Most responses on here have a twist , and all of them are fictional . Show us a piece of your actual life ; let the reader experience you ., where the middle of the story is about: The boss sits on my wall and asks me where a urgent report is up. I stare at the screen, confused.:\\nMy bedroom was dark , but the chatter of distant birds outside of the heavily curtained window suggested a new day was dawning . I stayed in bed , one with the shadows , and stared at the indistinct blackness that filled the ceiling . A trickle of sickly grey light would soon spring from the edges of curtain - the heavy fabric could only do so much to dam the brightness of day . \\n \\n But , for a few more minutes , it was still dark . \\n \\n And I was staring at it - staring at the darkness . Examining it . \\n \\n What was I looking for ? Why , answers of course . Why does anyone stare into the darkness of a bedroom ceiling ? I was seeking answers . \\n \\n Why do it ? I asked myself . Why go to work ? You \'re good at your job when you can be bothered to do it , but how often does that happen ? How often do you really put the effort in ? \\n \\n Can you even remember enjoying it ? \\n \\n Can you remember when you were happy ? \\n \\n I had been too deep in my hunt for answers to notice that the homogenous darkness had given way to a bluish grey world of shapes and objects . My feet swung out of bed and I sat up in the early morning coldness . \\n \\n When *was* I happy last ? \\n \\n I stood up and started my day . 
\\n \\n * * * \\n \\n The kitchen was filling with light , the muted greys and blues of morning had arrived first , but each minute that passed promised the arrival of the full colours of day . \\n \\n The spoon clinked in the bowl as I scooped up some cereal . I wore only what I had to bed : boxer shirts and a t-shirt . The winter cold does n\'t bother you when you \'ve stopped caring . \\n \\n *When* was I happy ? \\n \\n The question was echoing in my head . A great puzzle . A mystery of the ages . \\n \\n I gulped the last of my morning coffee and went to the bathroom . \\n \\n * * * \\n \\n The plug hole held no answers , no matter how long I stared . \\n \\n How long had I been staring ? \\n \\n I turned the shower off and stepped out into the sterile tiled whiteness . A lifetime of habits drew me to the basin and , without thought , I started to brush my teeth . My mind was still locked , frozen , on the question . \\n \\n When was I happy ? \\n \\n As I wondered , day continued it \'s steady march outside . \\n \\n The bathroom was clean and white , morning light filtered in through a frosted window . The birds were loud now , but I could hardly hear them over the whir of the steam sucking fan above me . \\n \\n Day had officially arrived . \\n \\n Perhaps I am asking myself the wrong question , I thought . \\n \\n The man in the mirror bared his teeth and scrubbed some more , white foam dripped in blobs about the basin . \\n \\n *What* makes me happy ? \\n \\n * * * \\n \\n I had slipped into my work clothes : business shirt , dress pants , leather shoes . My prisoners garb . As I pulled the items on they weighed me down , each a colossal burden . At least I did n\'t wear a tie any more . 
\\n \\n I had given up on ties , and the rest of my uniform wore the scars of neglect : the shirt was unironed , the pants were thin at the knees and the stitching had come loose at the bottoms , the shoes were beaten , scratched , the soles and tops barely held their bond . \\n \\n This is the business attire of a man who has stopped caring . \\n \\n No one at work seemed to mind . \\n \\n I walked to the front door of my house , shuffling without enthusiasm , without joy for the new day that lay on the other side . \\n \\n I grabbed the handle . \\n \\n What makes me happy ? \\n \\n * * * \\n \\n Another request , another complaint , and my list of work grew longer . It only ever grew longer these days . I had important calls to make , issues to resolve , reports to write - but all I did , for the most part , was stare . \\n \\n Stare at my screen . At my hands . At nothing . \\n \\n The questions I had been asking in the darkness and through-out my house during my morning preparations were not new . I had been thinking on them for a while . I did not know for how long . \\n \\n Weeks ? No . Months . \\n \\n Still no answers . \\n \\n What I do know is : I am *not* happy . \\n \\n The boss leaned on my cubicle wall and asked me where an urgent report , a report that had been urgent for weeks , was up to . The bullshit I served sated his questions and as he walked away I sighed and stared at my screen . \\n \\n To my surprise the report was there . I had been working on it absent-mindedly . Try as I might I still did my job , at least to a degree . \\n \\n Manager for a division of one . Writer of reports and promiser of game changing applications . Mr IT . \\n \\n Well ... at one time I had been Mr IT . Once , when I had been passionate , had had a fire in my belly that churned the engine of my rising star . A career in IT . I had wanted this . \\n \\n Had n\'t I ? \\n \\n Then , why are n\'t I happy ? \\n \\n Because , you did n\'t want this . You never did . 
You stepped out of high school and fell into it . You \'re good with computers - at least , you were - but they never made you happy . You liked the challenge , sure , but you did it because you had to pay the bills and you had to leave your parents house at some point . \\n \\n Then it was a matter of you being lazy and gutless . Work is a hard habit to break , especially when people keep throwing money at you . You \'d just go in , day after day . Week after week . Month after ... \\n \\n School was almost a decade away and you have n\'t done half of what you wanted . Remember writing ? You were going to write , remember ? You \'ve done some shorts over the years , but you wanted more . You wanted to type those two words . After months and months , you \'d type those two words and you \'d have accomplished sonething . The End . And your book would be done - who cares if it got published . Who cares if no one but you ever saw it . \\n \\n You \'d have written something . You \'d have accomplished something . \\n \\n You \'d be ... \\n \\n And there it is . The answer . \\n \\n Ten years of wasted time - ten years of excuses and meeting other people \'s expectations . Ten years of syaing you \'ll get around to it . \\n \\n Ten years of regret . \\n \\n The report was done . So was I . \\n \\n How do I do this ? Do I walk in and hand in the report and a resignation . No . I ca n\'t do that . These people have been good to me . I need to finish up some of the jobs . Need to get them ready for my abscence . \\n \\n Or am I making excuses ? \\n \\n My screen and my work came into focus . I knew what I needed to do , could feel , almost by instinct , what job \'s were my biggest priorities . A spark lit in my gut and passion trickled through my veins . \\n \\n I was n\'t turning back into Mr IT - could in fact , never be that man again . \\n \\n But I knew what made me happy . Knew how to get there ... \\n \\n ... and could feel it there , just on my horizon ."}
```
### Data Fields
* `splitLineIndex`: the line index of the sample in the source data file.
* `text`: the actual prompt/story text.
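Since each sample packs the prompt and the story into a single `text` field, separating them requires splitting on the response marker. A sketch, assuming the `-> Rosey:` delimiter is consistent across samples:

```python
def split_sample(text: str):
    """Split a packed sample into (user_prompt, assistant_story).
    Assumes the '-> Rosey:' marker separates the two turns."""
    user_part, _, assistant_part = text.partition("-> Rosey:")
    prompt = user_part.removeprefix("User:").strip()
    story = assistant_part.strip()
    return prompt, story

text = "User: write me a story about: a lighthouse -> Rosey: Sure, here's a story..."
prompt, story = split_sample(text)
print(prompt)  # write me a story about: a lighthouse
print(story)   # Sure, here's a story...
```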
### Data Splits
|split|samples|
|--|--
|train| 13232|
|valid|5168|
|test| 5700|
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
As mentioned, this dataset is an extension of the FAIR writing prompt dataset. The steps employed to create it are documented in the Jupyter notebook included in the repository files.
#### Who are the source language producers?
FAIR
### Personal and Sensitive Information
The data comes with NSFW samples. Be aware!
## Additional Information
### Licensing Information
Writing Prompt Augmentation Dataset is licensed under MIT.
### Citation Information
Use to generate consistent stories by Hierarchical Neural Story Generation (Fan et al., 2018) https://arxiv.org/abs/1805.04833
### Contributions
Thanks to Huu Nguyen (gh:ontocord)! |
ArielACE/Lora-Dataset | 2023-02-05T18:19:44.000Z | [
"region:us"
] | ArielACE | null | null | null | 3 | 10 | Entry not found |
neulab/mconala | 2023-02-10T19:01:31.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"size_categories:n<1K",
"language:es",
"language:ja",
"language:ru",
"license:cc-by-sa-4.0",
"code generation",
"arxiv:2203.08388",
"region:us"
] | neulab | MCoNaLa is a Multilingual Code/Natural Language Challenge dataset with
896 NL-Code pairs in three languages: Spanish, Japanese, and Russian. | @article{wang2022mconala,
title={MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages},
author={Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F. Xu, Graham Neubig},
journal={arXiv preprint arXiv:2203.08388},
year={2022}
} | null | 2 | 10 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
- translation
language:
- es
- ja
- ru
tags:
- code generation
pretty_name: mconala
size_categories:
- n<1K
---
# Dataset Card for MCoNaLa
## Dataset Description
- **Homepage:** https://github.com/zorazrw/multilingual-conala
- **Repository:** https://github.com/zorazrw/multilingual-conala
- **Paper:** https://arxiv.org/pdf/2203.08388.pdf
- **Leaderboard:** https://explainaboard.inspiredco.ai/leaderboards?show_mine=false&sort_dir=desc&sort_field=created_at&dataset=mconala
### Dataset Summary
MCoNaLa is a Multilingual Code/Natural Language Challenge dataset with 896 NL-Code pairs in three languages: Spanish, Japanese, and Russian.
### Languages
Spanish, Japanese, Russian; Python
## Dataset Structure
### How to Use
```python
from datasets import load_dataset
# Spanish subset
load_dataset("neulab/mconala", "es")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 341
})
})
# Japanese subset
load_dataset("neulab/mconala", "ja")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 210
})
})
# Russian subset
load_dataset("neulab/mconala", "ru")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 345
})
})
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|question_id|int|StackOverflow post id of the sample|
|intent|string|Title of the Stackoverflow post as the initial NL intent|
|rewritten_intent|string|nl intent rewritten by human annotators|
|snippet|string|Python code solution to the NL intent|
### Data Splits
The dataset contains 341, 210, and 345 samples in Spanish, Japanese, and Russian, respectively.
### Citation Information
```
@article{wang2022mconala,
title={MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages},
author={Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F. Xu, Graham Neubig},
journal={arXiv preprint arXiv:2203.08388},
year={2022}
}
``` |
Isamu136/big-animal-dataset-with-embedding | 2023-02-12T22:42:07.000Z | [
"license:mit",
"region:us"
] | Isamu136 | null | null | null | 1 | 10 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: l14_embeddings
sequence: float32
- name: moco_vitb_imagenet_embeddings
sequence: float32
- name: moco_vitb_imagenet_embeddings_without_last_layer
sequence: float32
splits:
- name: train
num_bytes: 2125655956.375
num_examples: 62149
download_size: 2238679414
dataset_size: 2125655956.375
---
|
jonathan-roberts1/RS_C11 | 2023-03-31T17:07:50.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': dense forest
'1': grassland
'2': harbor
'3': high buildings
'4': low buildings
'5': overpass
'6': railway
'7': residential area
'8': roads
'9': sparse forest
'10': storage tanks
splits:
- name: train
num_bytes: 969136595.28
num_examples: 1232
download_size: 916398984
dataset_size: 969136595.28
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "RS_C11"
## Dataset Description
- **Paper** [Feature significance-based multibag-of-visual-words model for remote sensing image scene classification](https://www.spiedigitallibrary.org/journals/journal-of-applied-remote-sensing/volume-10/issue-3/035004/Feature-significance-based-multibag-of-visual-words-model-for-remote/10.1117/1.JRS.10.035004.pdf)
### Licensing Information
Free usage without license.
## Citation Information
[Feature significance-based multibag-of-visual-words model for remote sensing image scene classification](https://www.spiedigitallibrary.org/journals/journal-of-applied-remote-sensing/volume-10/issue-3/035004/Feature-significance-based-multibag-of-visual-words-model-for-remote/10.1117/1.JRS.10.035004.pdf)
```
@article{zhao2016feature,
title = {Feature significance-based multibag-of-visual-words model for remote sensing image scene classification},
author = {Zhao, Lijun and Tang, Ping and Huo, Lianzhi},
year = 2016,
journal = {Journal of Applied Remote Sensing},
publisher = {Society of Photo-Optical Instrumentation Engineers},
volume = 10,
number = 3,
pages = {035004--035004}
}
``` |
GEM/xmediasum | 2023-02-15T14:01:56.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:zh",
"language:de",
"license:cc-by-nc-sa-4.0",
"region:us"
] | GEM | \
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English (dialogues) -> Chinese (summaries) and 40K English (dialogues) -> German (summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (an English monolingual dialogue summarization dataset) into both Chinese and German. | \
@inproceedings{wang-etal-2022-clidsum,
title = "{C}lid{S}um: A Benchmark Dataset for Cross-Lingual Dialogue Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Lu, Ziyao and
Zheng, Duo and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.526",
pages = "7716--7729",
abstract = "We present ClidSum, a benchmark dataset towards building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART via further pre-training, where the multiple objectives help the pre-trained model capture the structural characteristics as well as key content in dialogues and the transformation from source to the target language. Experimental results show the superiority of mDialBART, as an end-to-end model, outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches faced with this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.",
} | null | 3 | 10 | ---
annotations_creators:
- expert-generated
language:
- en
- zh
- de
language_creators:
- crowdsourced
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: xmediasum
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- summarization
task_ids: []
---
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English (dialogues) -> Chinese (summaries) and 40K English (dialogues) -> German (summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (an English monolingual dialogue summarization dataset) into both Chinese and German.
- Paper: [ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization](https://aclanthology.org/2022.emnlp-main.526/) (EMNLP 2022)
- GitHub: https://github.com/krystalan/ClidSum
### Supported Task
- Cross-Lingual Summarization
- Cross-Lingual Dialogue Summarization
### Languages
- source language: English
- target languages: Chinese and German
## Dataset Structure
### Data Instances
One example is given below in JSON format:
```json
{
"dialogue": "MADELELEINE BRAND, host: OK, here's some good news on the jobs front for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs this summer. That's for adults, but for teenagers this summer's job market is shaping up to be the weakest in more than 50 years.\r\nALEX COHEN, host: So, how do you get your teenage kids not to spend the entire summer glued to the couch? You're about to get some tips from Michelle Singletary. She's Day to Day's personal finance contributor. Hi, Michelle!\r\nMICHELLE SINGLETARY: Hi!\r\nALEX COHEN, host: So why is the summer job market so hard for teens this year?\r\nMICHELLE SINGLETARY: Lot of things going on right now. We've got a tough economy. We've got a lot of college graduates going into the market. We have people who are losing their jobs and taking jobs that would traditionally go to teens, like in restaurants and retailers. And we have a lot of older people holding on to their jobs and not retiring because they can't afford to retire. And that puts teens at the end of the line when it comes to these types of jobs.\r\nALEX COHEN, host: So you've got a teenager at home, a little bit young for the working world just yet, but what would you say to a teenager who's out there hunting around for a job?\r\nMICHELLE SINGLETARY: If you absolutely need a job, keep looking. You know, obviously the types of jobs that teens tend to go for in retail, fast food, you know, they still need people. And oftentimes you know, listen, you may not get the job at the beginning of the summer, but hold on because in late summer, when some of those college students are going back and perhaps some of those people who lost their jobs are finding permanent positions with more pay, you might be able to still get that job. 
So don't give up, you may spend a month or month and a half without it, but go back to those retailers and those restaurants and those fast food places to see if they still need someone.\r\nALEX COHEN, host: And now I know parents like having the break from providing allowance. But, you know, is - are there reasons maybe not to push your teen towards taking a job?\r\nMICHELLE SINGLETARY: I think it absolutely is. In fact I think too many teens are working and they don't need to work. They're some who absolutely need, they're contributing to their household or they're putting money into their own college fund. But more often than not, what parents do is say you've got to get a job, and then the teens get the job and they spend all the money on clothes and you know videos and iPods and paying their cell phone bills because they don't need a cell phone anyway.\r\nALEX COHEN, host: So it's not going towards the college tuition at all.\r\nMICHELLE SINGLETARY: It is not. It's just disposable income that they're disposing of. And parents are not setting any limits and you know and then the kids get used to the fact that they're using all of their paycheck. That's another bad habit. Because they don't have to pay bills and all, all their income goes through you know this stuff.\r\nMICHELLE SINGLETARY: And when it comes time to get a real job, they're surprised they don't have enough money. And so you know what? You can wait to work. Instead, maybe they can spend the summer volunteering at a charitable organization or you know going back to school and boosting up their math skills or their English skills. We push the teens out into the market too soon, I think for some families.\r\nALEX COHEN, host: But now let's say your kid is working. What tips can parents provide in terms of holding on to that summer money?\r\nMICHELLE SINGLETARY: You know, before they get their job, they need to sit down with them and do a budget. 
So before they actually work and get that first paycheck I mean, you know, have them draw up a budge where the money is going. And you ought to have some requirements for some of their money. That's right, be a parent.\r\nMICHELLE SINGLETARY: So make them put some of it towards their college fund, if in fact they're headed for college. You know what? Make them put some away, I call it the tax fund, even though they may not have to pay taxes, but to pay for long-term things that they may want. You know, books once they get to college, or maybe they want to get a car, and they can actually pay cash for it, with some of these funds. Don't let them just go out and spend it on movies and stuff. You ought to set some guidelines - this is where you should put the money. And look at their budget.\r\nALEX COHEN, host: Day to Day's personal finance contributor Michelle Singletary. Thank you, Michelle!\r\nMICHELLE SINGLETARY: You're welcome.\r\nALEX COHEN, host: Stay with us. NPR's Day to Day continues.",
"summary": "The tight job market could be bad news for teens seeking summer work. If your teen does find a job, will he or she know how to manage those paychecks? Our personal finance contributor talks with Alex Cohen about ways to help teens find a job.",
"summary_de": "Der angespannte Arbeitsmarkt könnte für Jugendliche, die Sommerarbeit suchen, eine schlechte Nachricht sein. Wenn Ihr Teenager einen Job findet, wird er oder sie wissen, wie er mit diesen Gehaltsschecks umgeht? Unser Mitarbeiter für persönliche Finanzen spricht mit Alex Cohen darüber, wie Teenager bei der Jobsuche unterstützt werden können.",
"summary_zh": "紧张的就业市场对寻找暑期工作的青少年来说可能是个坏消息。如果你的孩子找到了一份工作,他/她懂得怎么管理这些薪水吗?我们的个人理财撰稿人与亚历克斯·科恩谈论如何帮助青少年找到工作。"
},
```
### Data Fields
- `dialogue`: an English dialogue
- `summary`: the original English summary of the corresponding dialogue (provided by MediaSum)
- `summary_de`: the human-translated German summary
- `summary_zh`: the human-translated Chinese summary
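A minimal sketch of turning one such record into cross-lingual training pairs; the strings below are shortened stand-ins, not real dataset content:

```python
import json

# Shortened stand-in record with the four fields listed above (not actual
# XMediaSum content).
raw = """{"dialogue": "HOST: Welcome back to the show. ...",
          "summary": "A short English summary.",
          "summary_de": "Eine kurze deutsche Zusammenfassung.",
          "summary_zh": "一段简短的中文摘要。"}"""
record = json.loads(raw)

# English dialogue -> Chinese summary forms one cross-lingual sample;
# swapping in summary_de gives the English -> German direction.
en_zh = (record["dialogue"], record["summary_zh"])
en_de = (record["dialogue"], record["summary_de"])
assert {"dialogue", "summary", "summary_de", "summary_zh"} <= set(record)
```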
### Data Splits
- training set: 20K samples
- validation set: 10K samples
- testing set: 10K samples
## Dataset Creation
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Considerations for Using the Data
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/krystalan/ClidSum)
### Licensing Information
License: CC BY-NC-SA 4.0
### Citation Information
```
@inproceedings{wang-etal-2022-clidsum,
title = "{C}lid{S}um: A Benchmark Dataset for Cross-Lingual Dialogue Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Lu, Ziyao and
Zheng, Duo and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.526",
pages = "7716--7729",
abstract = "We present ClidSum, a benchmark dataset towards building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART via further pre-training, where the multiple objectives help the pre-trained model capture the structural characteristics as well as key content in dialogues and the transformation from source to the target language. Experimental results show the superiority of mDialBART, as an end-to-end model, outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches faced with this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.",
}
```
### Contributions
Thanks to [@krystalan](https://github.com/krystalan) for adding this dataset. |
civility-lab/incivility-arizona-daily-star-comments | 2023-02-15T23:18:17.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"social media",
"incivilit... | civility-lab | null | null | null | 0 | 10 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Incivility in Arizona Daily Star Comments
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- social media
- incivility
- aspersion
- hyperbole
- lying
- namecalling
- noncooperation
- pejorative
- sarcasm
- vulgarity
task_categories:
- text-classification
task_ids:
- multi-label-classification
dataset_info:
features:
- name: text
dtype: string
- name: aspersion
dtype: int64
- name: hyperbole
dtype: int64
- name: lying
dtype: int64
- name: namecalling
dtype: int64
- name: noncooperation
dtype: int64
- name: offtopic
dtype: int64
- name: other_incivility
dtype: int64
- name: pejorative
dtype: int64
- name: sarcasm
dtype: int64
- name: vulgarity
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1568771
num_examples: 3910
- name: validation
num_bytes: 398667
num_examples: 976
- name: test
num_bytes: 486262
num_examples: 1228
download_size: 1400753
dataset_size: 2453700
---
# Dataset Card for incivility-arizona-daily-star-comments
This is a collection of more than 6000 comments on Arizona Daily Star news articles from 2011 that have been manually annotated for various forms of incivility including aspersion, namecalling, sarcasm, and vulgarity.
## Dataset Structure
Each instance in the dataset corresponds to a single comment from a single commenter.
An instance's `text` field contains the text of the comment with any quotes of other commenters removed.
The remaining fields in each instance provide binary labels for each type of incivility annotated:
`aspersion`, `hyperbole`, `lying`, `namecalling`, `noncooperation`, `offtopic`, `pejorative`, `sarcasm`, `vulgarity`, and `other_incivility`.
The dataset provides three standard splits: `train`, `validation`, and `test`.
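A minimal sketch of how one instance maps to a multi-label target vector; the comment text and label values below are invented, not drawn from the corpus:

```python
# Label columns, in the order listed in this card's schema.
LABELS = ["aspersion", "hyperbole", "lying", "namecalling", "noncooperation",
          "offtopic", "other_incivility", "pejorative", "sarcasm", "vulgarity"]

# Invented instance for illustration; not an actual comment from the dataset.
instance = {"text": "Only a fool would believe that plan.",
            "aspersion": 1, "hyperbole": 0, "lying": 0, "namecalling": 1,
            "noncooperation": 0, "offtopic": 0, "other_incivility": 0,
            "pejorative": 0, "sarcasm": 0, "vulgarity": 0}

# Binary target vector, e.g. for a sigmoid-output multi-label classifier.
target = [instance[name] for name in LABELS]
assert target == [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```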
## Dataset Creation
The original annotation effort is described in:
- Kevin Coe, Kate Kenski, Stephen A. Rains.
[Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments](https://doi.org/10.1111/jcom.12104).
Journal of Communication, Volume 64, Issue 4, August 2014, Pages 658–679.
That dataset was converted to a computer-friendly form as described in section 4.2.1 of:
- Farig Sadeque.
[User behavior in social media: engagement, incivility, and depression](https://repository.arizona.edu/handle/10150/633192).
PhD thesis. The University of Arizona. 2019.
The current upload is a 2023 conversion of that form to a huggingface Dataset.
## Considerations for Using the Data
The data is intended for the study of incivility.
It should not be used to train models to generate incivility.
The human coders and their trainers were mostly [Western, educated, industrialized, rich and democratic (WEIRD)](https://www.nature.com/articles/466029a), which may have shaped how they evaluated incivility.
## Citation
```bibtex
@article{10.1111/jcom.12104,
author = {Coe, Kevin and Kenski, Kate and Rains, Stephen A.},
title = {Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments},
journal = {Journal of Communication},
volume = {64},
number = {4},
pages = {658-679},
year = {2014},
month = {06},
issn = {0021-9916},
doi = {10.1111/jcom.12104},
url = {https://doi.org/10.1111/jcom.12104},
}
``` |
yoshitomo-matsubara/srsd-feynman_easy_dummy | 2023-10-07T17:48:43.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:mit",
"arxiv:2206.10540",
"doi:10.57967/hf/0760",
"region:us"
] | yoshitomo-matsubara | null | null | null | 0 | 10 | ---
pretty_name: SRSD-Feynman (Easy w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Easy set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Easy set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 30 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain
**1 dummy variable**: I.12.1, I.12.4, I.12.5, I.18.12, I.25.13, I.47.23
**2 dummy variables**: I.14.3, I.18.16, I.43.16, II.3.24, II.8.31, II.10.9, II.13.17, II.15.5, II.27.18, III.7.38, III.12.43
**3 dummy variables**: I.14.4, I.26.2, I.27.6, I.30.5, II.2.42, II.4.23, II.15.4, II.27.16, II.34.11, II.34.29b, II.38.3, II.38.14, III.15.27
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
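A minimal loading sketch for this layout (the rows below are made-up numbers, not real samples; the matching ground-truth equation would be unpickled separately as a sympy expression):

```python
import io
import numpy as np

# Stand-in for a whitespace-delimited split file: two input variables per row,
# with the target output in the last (rightmost) column. Values are made up.
split_txt = "1.0 2.0 3.0\n4.0 5.0 9.0\n7.0 8.0 15.0\n"
data = np.loadtxt(io.StringIO(split_txt))

X, y = data[:, :-1], data[:, -1]  # input variables vs. target function output
assert X.shape == (3, 2) and y.shape == (3,)
# The ground-truth equation ships as a pickled sympy object, e.g.:
#   with open("true_eq.pkl", "rb") as f: eq = pickle.load(f)
```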
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling ranges for each variable relative to the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that would confirm the physical phenomenon for each equation.
In cases where a specific experiment is difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, ranges were sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, were sampled uniformly.
In addition, variables that take a specific sign were sampled within that sign's range.
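One reading of the log-scale sampling described above is log-uniform sampling across roughly two orders of magnitude; the sketch below is our interpretation, not the authors' code:

```python
import math
import random

random.seed(0)

def sample_log_uniform(low, high, n):
    """Sample n values uniformly in log10-space between low and high."""
    return [10 ** random.uniform(math.log10(low), math.log10(high))
            for _ in range(n)]

# e.g. a variable whose range spans two orders of magnitude, 10^-2 to 10^0
values = sample_log_uniform(1e-2, 1e0, 1000)
assert all(1e-2 <= v <= 1e0 for v in values)
```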
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset, assuming typical physical experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery.
### Discussion of Biases
Our choice of target equations is based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which focuses on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should therefore be treated as integers.
Because such counts can exceed the capacity of a 32-bit integer, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
MIT License
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
yoshitomo-matsubara/srsd-feynman_medium_dummy | 2023-10-07T17:49:15.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:mit",
"arxiv:2206.10540",
"doi:10.57967/hf/0759",
"region:us"
] | yoshitomo-matsubara | null | null | null | 0 | 10 | ---
pretty_name: SRSD-Feynman (Medium w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Medium set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Medium set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 40 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain
**1 dummy variable**: I.10.7, I.12.2, I.13.12, I.16.6, I.32.5, I.43.31, II.11.3, II.34.2, II.34.29a, III.14.14, III.15.14, B8
**2 dummy variables**: I.11.19, I.12.11, I.13.4, I.15.10, I.18.4, I.24.6, I.34.8, I.38.12, I.39.11, I.43.43, I.48.2, II.6.11, II.21.32, II.34.2a, III.4.32, III.13.18, III.15.12, III.17.37
**3 dummy variables**: I.8.14, I.29.4, I.34.10, I.34.27, I.39.10, II.8.7, II.37.1, III.8.54, III.19.51, B18
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling ranges for each variable relative to the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that would confirm the physical phenomenon for each equation.
In cases where a specific experiment is difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, ranges were sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, were sampled uniformly.
In addition, variables that take a specific sign were sampled within that sign's range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset, assuming typical physical experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery.
### Discussion of Biases
Our choice of target equations is based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which focuses on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should therefore be treated as integers.
Because such counts can exceed the capacity of a 32-bit integer, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
MIT License
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
yoshitomo-matsubara/srsd-feynman_hard_dummy | 2023-10-07T17:49:44.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:mit",
"arxiv:2206.10540",
"doi:10.57967/hf/0758",
"region:us"
] | yoshitomo-matsubara | null | null | null | 0 | 10 | ---
pretty_name: SRSD-Feynman (Hard w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Hard set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Hard set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 50 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain
**1 dummy variable**: I.15.3x, I.30.3, II.6.15a, II.11.17, II.11.28, II.13.23, II.13.34, II.24.17, B1, B6, B12, B16, B17
**2 dummy variables**: I.6.20, I.6.20b, I.9.18, I.15.3t, I.29.16, I.34.14, I.39.22, I.44.4, II.11.20, II.11.27, II.35.18, III.9.52, III.10.19, III.21.20, B2, B3, B7, B9
**3 dummy variables**: I.6.20a, I.32.17, I.37.4, I.40.1, I.41.16, I.50.26, II.6.15b, II.35.21, II.36.38, III.4.33, B4, B5, B10, B11, B13, B14, B15, B19, B20
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
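As a rough sketch (the exact per-equation file layout is an assumption based on the field list above), a split file can be parsed with plain Python, and the ground-truth equation unpickled back into a sympy expression:

```python
import pickle


def parse_rows(lines):
    """Parse whitespace-delimited rows of [x_1 ... x_n, y] into (inputs, targets)."""
    xs, ys = [], []
    for line in lines:
        values = [float(v) for v in line.split()]
        xs.append(values[:-1])  # input variables
        ys.append(values[-1])   # target function output (rightmost column)
    return xs, ys


def load_split(path):
    """Load one split file (train/val/test) of an SRSD equation."""
    with open(path) as f:
        return parse_rows(f)


def load_true_equation(path):
    """Unpickle the ground-truth sympy expression for the same equation."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

The unpickled object is a sympy expression, so it can be compared symbolically or lambdified for numerical evaluation.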
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants.
Next, variable ranges were defined to correspond to each typical physics experiment to confirm the physical phenomenon for each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, the ranges are set to be sampled on log scales spanning about two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, are set to be sampled uniformly.
In addition, variables that take a specific sign were set to be sampled within that range.
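As an illustrative sketch (not the authors' actual sampling code), sampling on a log scale within a fixed range amounts to sampling uniformly in log space:

```python
import math
import random


def sample_log_uniform(low: float, high: float) -> float:
    """Draw a value between low and high (both > 0), uniform on a log scale,
    so each order of magnitude within [low, high] is sampled equally often."""
    return math.exp(random.uniform(math.log(low), math.log(high)))


def sample_uniform(low: float, high: float) -> float:
    """Linear-scale sampling, e.g., for angle-like variables."""
    return random.uniform(low, high)
```

For example, a count-like variable spanning 10^{23} to 10^{25} would be drawn with `sample_log_uniform(1e23, 1e25)`, while an angle would be drawn with `sample_uniform(0.0, math.pi)`.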
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset, assuming typical physical experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which focuses on the field of physics.
### Other Known Limitations
Some variables used in our datasets represent counts and should in principle be treated as integers.
Due to the limited capacity of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
MIT License
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
|
theblackcat102/alexa-qa | 2023-02-19T04:14:43.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"alexa",
"region:us"
] | theblackcat102 | null | null | null | 0 | 10 | ---
license: mit
task_categories:
- question-answering
language:
- en
pretty_name: Alexa Question Answering dataset
tags:
- alexa
size_categories:
- 10K<n<100K
---
# Alexa Answers from [alexaanswers.amazon.com](https://alexaanswers.amazon.com/)
The Alexa Answers community helps improve Alexa’s knowledge and answer questions asked by Alexa users. It contains some very quirky and hard questions, like:
Q: what percent of the population has blackhair
A: The most common hair color in the world is black and its found in wide array of background and ethnicities. About 75 to 85% of the global population has either black hair or the deepest brown shade.
Q: what was the world population during world war two
A: 2.3 billion
However, with unusual questions there are unusual answers.
Q: what is nascar poem
A: Roses are red; Violets are blue; For Blaney's new ride; Switch the 1 and the 2.
(There's no official NASCAR poem.)
# Dataset stats
The total dataset size is 136,039, split into train, test, and validation via a 7-2-1 ratio. The splits are the same as [alexa-qa-with-rank](https://huggingface.co/datasets/theblackcat102/alexa-qa-with-rank), so no training question in alexa-qa can be found in the validation or test splits of alexa-qa-with-rank.
Train : 95,227
Test : 27,208
Validation : 13,604
Do note that similar rephrasings of questions do exist between splits; I will leave that study to others.
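The 7-2-1 ratio can be checked directly against the split sizes above (a quick sanity-check sketch):

```python
# Split sizes as stated in the dataset stats section.
splits = {"train": 95_227, "test": 27_208, "validation": 13_604}

total = sum(splits.values())
ratios = {name: round(count / total, 2) for name, count in splits.items()}

print(total)   # 136039
print(ratios)  # {'train': 0.7, 'test': 0.2, 'validation': 0.1}
```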
# Last update
19/02/2023
|
byunggill/gpt-2-output | 2023-02-24T08:11:53.000Z | [
"region:us"
] | byunggill | null | null | null | 0 | 10 | Entry not found |
awacke1/ICD10-Clinical-Terminology | 2023-02-28T12:21:15.000Z | [
"license:mit",
"region:us"
] | awacke1 | null | null | null | 2 | 10 | ---
license: mit
---
|
wannaphong/iapp_wiki_qa_squad_oa | 2023-02-28T17:30:16.000Z | [
"language:th",
"license:mit",
"Open Assistant",
"region:us"
] | wannaphong | null | null | null | 1 | 10 | ---
license: mit
language:
- th
tags:
- Open Assistant
---
This dataset is a fork of [https://huggingface.co/datasets/iapp_wiki_qa_squad](https://huggingface.co/datasets/iapp_wiki_qa_squad) made for Open Assistant.
Pull request: [Add iapp_wiki_qa_squad to datasets #1903 ](https://github.com/LAION-AI/Open-Assistant/pull/1903) |
qfrodicio/gesture-prediction-21-classes | 2023-03-10T11:49:53.000Z | [
"region:us"
] | qfrodicio | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: sentence
dtype: string
- name: gestures
sequence: string
splits:
- name: train
num_bytes: 437051
num_examples: 1649
- name: test
num_bytes: 115160
num_examples: 423
- name: validation
num_bytes: 142541
num_examples: 528
download_size: 207086
dataset_size: 694752
---
# Dataset Card for "gesture-prediction-21-classes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jjmachan/NSFW-reddit | 2023-03-04T10:22:48.000Z | [
"region:us"
] | jjmachan | null | null | null | 2 | 10 | ---
dataset_info:
features:
- name: title
dtype: string
- name: subreddit
dtype: string
- name: post_id
dtype: string
- name: score
dtype: int64
- name: link_flair_text
dtype: string
- name: is_self
dtype: bool
- name: over_18
dtype: bool
- name: upvote_ratio
dtype: float64
- name: is_question
dtype: bool
- name: C1
dtype: string
- name: C2
dtype: string
- name: C3
dtype: string
- name: C4
dtype: string
- name: C5
dtype: string
splits:
- name: train
num_bytes: 3178233
num_examples: 23519
download_size: 1238046
dataset_size: 3178233
---
# Dataset Card for "NSFW-reddit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
theblackcat102/joke_explaination | 2023-03-09T02:35:40.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"joke",
"high quality",
"region:us"
] | theblackcat102 | null | null | null | 1 | 10 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- joke
- high quality
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** : https://explainthejoke.com/
### Dataset Summary
A corpus for testing whether your LLM can explain jokes well. This is a rather small dataset; a pointer to a larger one would be very welcome.
### Languages
English
## Dataset Structure
### Data Fields
* url : link to the explanation
* joke : the original joke
* explaination : the explanation of the joke
### Data Splits
Since it is so small, there are no splits, just like gsm8k. |
wantswanda/chinese | 2023-03-09T18:10:05.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | wantswanda | null | null | null | 0 | 10 | ---
task_categories:
- image-classification
language:
- en
pretty_name: chinese_characters
size_categories:
- 1K<n<10K
--- |
open-source-metrics/pip-external | 2023-10-03T09:12:37.000Z | [
"region:us"
] | open-source-metrics | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: day
dtype: string
- name: num_downloads
dtype: int64
splits:
- name: pytorch
num_bytes: 32626
num_examples: 1483
- name: tensorflow
num_bytes: 32626
num_examples: 1483
- name: langchain
num_bytes: 7172
num_examples: 326
download_size: 43083
dataset_size: 72424
configs:
- config_name: default
data_files:
- split: langchain
path: data/langchain-*
- split: pytorch
path: data/pytorch-*
- split: tensorflow
path: data/tensorflow-*
---
# Dataset Card for "pip-external"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
under-tree/labeled-multiple-choice | 2023-03-24T17:13:59.000Z | [
"region:us"
] | under-tree | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: formatted_question
dtype: string
- name: combinedfact
dtype: string
- name: answerKey
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 9098435
num_examples: 36503
download_size: 1292178
dataset_size: 9098435
---
# Dataset Card for "labeled-multiple-choice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hazyresearch/evaporate | 2023-04-20T02:27:07.000Z | [
"region:us"
] | hazyresearch | null | null | null | 4 | 10 | # Evaporate
Datasets for the paper "Evaporate: Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes".
|
camel-ai/code | 2023-05-23T21:13:16.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | camel-ai | null | null | null | 23 | 10 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Code
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The Code dataset is composed of 50K conversations between two gpt-3.5-turbo agents. This dataset simulates a programmer specializing in a particular language working with another person from a particular domain. We cover 20 programming languages and 50 domains, with a total of 50 tasks per combination of language and domain.
We provide two formats: the "chat" format (`code_chat.tar.gz`), which contains the data in a conversational instruction-following format, and the "instruction" format (`code_instructions.json`).
## Data Fields
**The data fields for instructions format (`code_instructions.json`) are as follows:**
* `id`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}; for example, 001_002_003 refers to assistant role 1, user role 2, and task 3 from our assistant role name, user role name, and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specifier, this task is more specific than the original task.
* `role_1_response`: user response text before the instruction.
* `role_1_message_id`: message ID in the full raw conversation.
* `instruction`: describes the task the assistant is supposed to perform.
* `input`: provides further context or information for the requested instruction.
* `output`: the answer to the instruction as generated by 'gpt-3.5-turbo'
* `termination_reason`: refers to the reason of termination of the chat.
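For instance, the `id` field can be decoded back into its three indices with a small helper (an illustrative sketch, not part of the official tooling):

```python
def parse_sample_id(sample_id: str) -> tuple[int, int, int]:
    """Split an id like '001_002_003' into (assistant_role, user_role, task) indices."""
    assistant_idx, user_idx, task_idx = (int(part) for part in sample_id.split("_"))
    return assistant_idx, user_idx, task_idx


print(parse_sample_id("001_002_003"))  # (1, 2, 3)
```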
**The data fields for chat format (`code_chat.tar.gz`) are as follows:**
* `input`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}; for example, 001_002_003 refers to assistant role 1, user role 2, and task 3 from our assistant role name, user role name, and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specifier, this task is more specific than the original task.
* `message_k`: refers to the k<sup>_th_</sup> message of the conversation.
* `role_type`: refers to whether the agent is an assistant or a user.
* `role_name`: refers to the assigned assistant/user role.
* `role`: refers to the role of the agent during the message for openai api. [usually not needed]
* `content`: refers to the content of the message.
* `termination_reason`: refers to the reason of termination of the chat.
* `num_messages`: refers to the total number of messages in the chat.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/code", repo_type="dataset", filename="code_chat.tar.gz",
local_dir="datasets/", local_dir_use_symlinks=False)
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by gpt-3.5-turbo and might contain incorrect information. The dataset is intended only for research purposes.
|
Korakoe/Vicuna-Uncleaned-Alpaca-Format | 2023-04-06T08:14:10.000Z | [
"task_categories:text-generation",
"language:en",
"instruct",
"alpaca",
"vicuna",
"region:us"
] | Korakoe | null | null | null | 2 | 10 | ---
task_categories:
- text-generation
language:
- en
tags:
- instruct
- alpaca
- vicuna
pretty_name: Vicuna Uncleaned - Alpaca Format
--- |
mstz/mammography | 2023-04-16T17:34:26.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"mammography",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_mammographic_mass_161,
author = {Elter,Matthias},
title = {{Mammographic Mass}},
year = {2007},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C53K6Z}}
} | null | 1 | 10 | ---
language:
- en
tags:
- mammography
- tabular_classification
- binary_classification
- UCI
pretty_name: Mammography
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- mammography
license: cc
---
# Mammography
The [Mammography dataset](https://archive.ics.uci.edu/ml/datasets/Mammography) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| mammography | Binary classification | Is the lesion benign? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/mammography")["train"]
``` |
mstz/mushroom | 2023-04-16T17:34:40.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"mushroom",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_mushroom_73,
title = {{Mushroom}},
year = {1987},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5959T}}
} | null | 0 | 10 | ---
language:
- en
tags:
- mushroom
- tabular_classification
- binary_classification
- UCI
pretty_name: Mushroom
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- mushroom
license: cc
---
# Mushroom
The [Mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------|
| mushroom | Binary classification | Is the mushroom poisonous?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/mushroom")["train"]
``` |
mstz/titanic | 2023-04-09T23:30:09.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"titanic",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 10 | ---
language:
- en
tags:
- titanic
- tabular_classification
- binary_classification
pretty_name: Titanic
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- survival
license: cc
---
# Titanic
The [Titanic dataset](https://www.kaggle.com/datasets/vinicius150987/titanic3) from [Kaggle](https://www.kaggle.com/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|----------------------------|
| survival | Binary classification | Has the passenger survived? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/titanic")["train"]
``` |
larryvrh/WikiMatrix-v1-Ja_Zh-filtered | 2023-04-08T05:16:37.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:ja",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] | larryvrh | null | null | null | 6 | 10 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: ja
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 149036235
num_examples: 690095
download_size: 115870646
dataset_size: 149036235
task_categories:
- translation
language:
- ja
- zh
size_categories:
- 100K<n<1M
---
Filtered and modified version of Japanese/Chinese language pair data from [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php).
Process steps:
1. Basic regex based filtering / length checking to remove abnormal pairs.
2. Semantic similarity filtering with a threshold value of 0.6, based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
3. Convert all Traditional Chinese sentences into Simplified Chinese with [zhconv](https://github.com/gumblex/zhconv).
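A minimal sketch of the pipeline's shape (the real pipeline used LaBSE sentence embeddings and zhconv; here the similarity function is pluggable and the basic regex/length heuristics are stand-in assumptions):

```python
import re


def passes_basic_checks(ja: str, zh: str, min_len: int = 2, max_ratio: float = 3.0) -> bool:
    """Step 1: crude regex/length heuristics to drop abnormal pairs (assumed)."""
    if len(ja) < min_len or len(zh) < min_len:
        return False
    if max(len(ja), len(zh)) > max_ratio * min(len(ja), len(zh)):
        return False  # wildly mismatched lengths
    if re.search(r"https?://", ja + zh):
        return False  # leftover links/markup
    return True


def filter_corpus(pairs, similarity, threshold: float = 0.6):
    """Step 2: keep pairs whose semantic similarity clears the threshold.
    In the real pipeline, `similarity` would be the cosine similarity of
    LaBSE embeddings of the two sentences."""
    return [(ja, zh) for ja, zh in pairs
            if passes_basic_checks(ja, zh) and similarity(ja, zh) >= threshold]
```

Step 3 (Traditional to Simplified Chinese) would then map each kept `zh` sentence through zhconv's conversion.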
------
经过过滤和修改的日语/中文语言对数据,来自[WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php)。
处理步骤:
1. 基本的基于正则表达式的过滤/长度检查,以删除异常对。
2. 基于[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)的语义相似性过滤,阈值为0.6。
3. 使用[zhconv](https://github.com/gumblex/zhconv)将所有繁体中文句子转换为简体中文。
------
以下はフィルタリングされ修正された日本語/中国語のペアデータです。データ元は[WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php)です。
処理手順:
1. 正規表現に基づくフィルタリング/長さのチェックを行い、異常なペアを削除します。
2. [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)に基づくセマンティック類似性フィルタリングを行い、閾値は0.6です。
3. [zhconv](https://github.com/gumblex/zhconv)を使って、すべての繁体字中国語の文を簡体字中国語に変換します。 |
hackathon-somos-nlp-2023/podcasts-ner-es | 2023-04-09T23:40:50.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:es",
"license:mit",
"region:us"
] | hackathon-somos-nlp-2023 | null | null | null | 9 | 10 | ---
dataset_info:
features:
- name: text
dtype: string
- name: annotation
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: start
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 43389.8358778626
num_examples: 209
- name: test
num_bytes: 11003.164122137405
num_examples: 53
download_size: 42448
dataset_size: 54393
task_categories:
- token-classification
language:
- es
size_categories:
- n<1K
license: mit
---
# Dataset Card for "podcasts-ner-es"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Team members](#team-members)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset comprises small text snippets extracted from the "Deforme Semanal" podcast,
accompanied by annotations that identify the presence of a predetermined set of entities.
The purpose of this dataset is to facilitate Named Entity Recognition (NER) tasks.
The dataset was created to aid in the identification of entities such as famous people, books, or films in podcasts.
The audio was first transcribed, then annotated with GPT-3 and curated with Argilla.
The dataset is in Spanish, covering mostly topics such as love, feminism, and art, which are the main themes of the podcast.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
The dataset is in Spanish and the language used is primarily informal.
It is important to note that the language may include aggressive or offensive content.
## Dataset Structure
### Data Instances
```
{
"text":"Tengo 39 años, pues, ya veré cuándo yo quiero dejar de comer ternera, está mal, porque hay sobre explotación y todo esto, muy mal."
"annotation": [ { "end": 13, "label": "DATES", "start": 6 } ]
"id": "53c4748e-dbd2-4cf5-946f-d134b0bf6155"
}
```
### Data Fields
`text`: Snippet of text of no more than 512 characters extracted from a podcast episode.
`id`: Unique identification number for each instance in the dataset.
`annotation`: list of dictionary-like entries with the following fields:
- `end`: end character of the entity occurrence in the text.
- `start`: start character of the entity occurrence in the text.
- `label`: label for the entity from the predefined set of entities. The label of the entities is one of:
'people', 'products', 'books', 'animals', 'organizations', 'topics', 'dates', 'places', 'artista', 'objects','songs', and 'films'.
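The character offsets can be used directly to recover each entity's surface text (a small sketch using the data instance shown earlier):

```python
def extract_entities(example):
    """Slice out each annotated span via its start/end character offsets."""
    return [(ann["label"], example["text"][ann["start"]:ann["end"]])
            for ann in example["annotation"]]


sample = {
    "text": ("Tengo 39 años, pues, ya veré cuándo yo quiero dejar de comer "
             "ternera, está mal, porque hay sobre explotación y todo esto, muy mal."),
    "annotation": [{"start": 6, "end": 13, "label": "DATES"}],
}

print(extract_entities(sample))  # [('DATES', '39 años')]
```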
### Data Splits
The dataset was shuffled and split using the `train_test_split` function from the Hugging Face datasets library.
The split was made with a train size of 0.8 and a seed of 42.
## Dataset Creation
### Curation Rationale
We created this dataset with the aim of making the information from our favorite podcasts more accessible, as retrieving information from audio formats can be challenging.
We chose to focus on the Named Entity Recognition (NER) task as it was relatively easy to annotate and validate.
### Source Data
#### Initial Data Collection and Normalization
We collected the data from a playlist on YouTube containing approximately 15 episodes of the "Deforme Semanal" podcast.
You can find the playlist at this [link](https://www.youtube.com/playlist?list=PLLbN7SMQhMVZoXhtQ00AyebQE_-ttDrs9).
We then transcribed the audio stream using OpenAI's Whisper (medium size) and split the resulting text files
into chunks of less than 512 characters.
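A rough sketch of that chunking step (the actual splitting heuristic was not published; this greedy word-level version is an assumption):

```python
def chunk_text(text: str, max_chars: int = 512) -> list[str]:
    """Greedily pack words into chunks of at most max_chars characters.
    A single word longer than max_chars becomes its own (oversized) chunk."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks
```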
### Annotations
#### Annotation process
To annotate the texts, we used OpenAI's API and GPT-3, with the following prompt:
```
Perform named entity recognition in Spanish. The classes are books, films, video games, songs, places, dates, topics, organizations, and people. The output should follow the format:
[{'class': 'people', 'text': 'name of the person'}, {'class': 'books', 'start': 'name of the book'}]
Sentence:
```
Finally, to ensure the quality of the dataset, we validated the annotations using Argilla by checking that the tokens were classified
correctly.
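Converting the model's output into the start/end offsets used by the dataset can be sketched as follows (`to_offsets` is a hypothetical helper; it assumes each predicted entity string occurs verbatim in the snippet):

```python
def to_offsets(text: str, predictions: list[dict]) -> list[dict]:
    """Map entries like {'class': 'dates', 'text': '39 años'} to the
    character-offset annotation format; unmatched strings are skipped."""
    annotations = []
    for pred in predictions:
        start = text.find(pred["text"])
        if start == -1:
            continue  # entity text not found verbatim in the snippet
        annotations.append({
            "start": start,
            "end": start + len(pred["text"]),
            "label": pred["class"].upper(),
        })
    return annotations
```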
## Considerations for Using the Data
### Discussion of Biases
The dataset was obtained from the "Deforme Semanal" podcast, which primarily focuses on art, feminism, and culture.
As a result, the data is directly related to the topics and individuals discussed in these contexts. Additionally,
the language used in the podcast is informal and can be aggressive or offensive at times, which may be reflected in the dataset.
Although we attempted to minimize these biases during the validation process, their effectiveness is likely limited.
### Other Known Limitations
One issue that we have encountered with the token/entity data is that there can be some ambiguity in how tokens and entities are distinguished.
In some cases, it may not be clear how to differentiate between two tokens or entities, which can impact the accuracy
and effectiveness of models trained on this data.
Furthermore, the dataset size is relatively small, which can pose a challenge when it comes to training machine learning models.
With a limited amount of data, it can be difficult to capture the full range of variations and patterns in the data,
and overfitting can become a concern. This is especially true when dealing with complex models that require a large
amount of data to train effectively.
## Team members
[David Mora](https://huggingface.co/DavidFM43)
[Sergio Perez](https://huggingface.co/sergiopperez)
[Albeto Fernandez](https://huggingface.co/AlbertoFH98)
|
larryvrh/WikiMedia-v20210402-Ja_Zh-filtered | 2023-04-09T19:30:00.000Z | [
"region:us"
] | larryvrh | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: ja
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 7517762
num_examples: 15989
download_size: 4720167
dataset_size: 7517762
---
# Dataset Card for "WikiMedia-v20210402-Ja_Zh-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crangana/railroad-fault-detection | 2023-04-12T10:07:13.000Z | [
"region:us"
] | crangana | null | null | null | 0 | 10 | Entry not found |
vicclab/HumanvGPT | 2023-04-18T21:42:12.000Z | [
"license:mit",
"region:us"
] | vicclab | null | null | null | 0 | 10 | ---
license: mit
---
DO NOT USE - PLACEHOLDER DATASET
LITERALLY JUST THE SAME FEW ROWS REPEATED DOZENS OF TIMES |
kiviki/SlovakSum | 2023-05-05T12:14:16.000Z | [
"license:openrail",
"region:us"
] | kiviki | null | null | null | 3 | 10 | ---
license: openrail
---
The SlovakSum dataset from the paper "SlovakSum: Slovak News Summarization Dataset". |
nRuaif/Vietnamese_x_Alpaca | 2023-04-21T05:32:45.000Z | [
"license:mit",
"region:us"
] | nRuaif | null | null | null | 3 | 10 | ---
license: mit
---
|
AlekseyKorshuk/oasst1-chatml | 2023-06-05T22:04:39.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 6948001
num_examples: 3670
download_size: 3661524
dataset_size: 6948001
---
# Dataset Card for "oasst1-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pki/SecurityGPT | 2023-08-25T13:10:29.000Z | [
"language:en",
"license:unknown",
"region:us"
] | pki | null | null | null | 4 | 10 | ---
license: unknown
language:
- en
pretty_name: SecurityGPT
---
A dataset for cybersecurity research Q&A fine-tuning.
The initial dataset incorporates results from the source below:
https://datasetsearch.research.google.com/search?src=0&query=cybersecurity&docid=L2cvMTFuX3hudnBtZw%3D%3D&filters=WyJbXCJsaWNlbnNlX2NsYXNzXCIsW1wiY29tbWVyY2lhbFwiXV0iXQ%3D%3D&property=bGljZW5zZV9jbGFzcw%3D%3D
Training will start once a sufficient amount of data has been gathered; as of today it will probably be based on Llama / Orca with an 8k-token context at 7B or 13B, to be decided later.
---
|
durhamvin/followup_questions_dataset_paired | 2023-05-18T13:31:20.000Z | [
"region:us"
] | durhamvin | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: question
dtype: string
- name: response_j
dtype: string
- name: response_k
dtype: string
splits:
- name: train
num_bytes: 89911000
num_examples: 51359
- name: validation
num_bytes: 5878662
num_examples: 3031
download_size: 25102863
dataset_size: 95789662
---
# Dataset Card for "followup_questions_dataset_paired"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aneeshas/imsdb-comedy-movie-scripts | 2023-05-07T21:27:39.000Z | [
"region:us"
] | aneeshas | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: Comedy
dtype: string
splits:
- name: train
num_bytes: 34816719
num_examples: 150
download_size: 15474490
dataset_size: 34816719
---
# Dataset Card for "imsdb-comedy-movie-scripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
a6kme/minds14-mirror | 2023-05-13T11:42:15.000Z | [
"task_categories:automatic-speech-recognition",
"task_ids:keyword-spotting",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
... | a6kme | MINDS-14 is training and evaluation resource for intent
detection task with spoken data. It covers 14
intents extracted from a commercial system
in the e-banking domain, associated with spoken examples in 14 diverse language varieties. | @article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Michal and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
} | null | 0 | 10 | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
language_bcp47:
- en
- en-GB
- en-US
- en-AU
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: 'MInDS-14'
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
- speech-processing
task_ids:
- speech-recognition
- keyword-spotting
---
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset
minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("PolyAI/all", "all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"]  # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class id
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent_class to fine-tune your model for audio classification
```
## Dataset Structure
We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.
### Data Instances
**fr-FR**
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config `fr-FR` looks as follows:
```
{
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"audio": {
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"array": array(
[0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
),
"sampling_rate": 8000,
},
"transcription": "je souhaite changer mon adresse",
"english_transcription": "I want to change my address",
"intent_class": 1,
"lang_id": 6,
}
```
### Data Fields
The data fields are the same among all splits.
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate and path to the audio file
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of intent
- **lang_id** (int): Id of language
### Data Splits
Every config only has the `"train"` split containing *ca.* 600 examples.
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
author = {Daniela Gerz and
Pei{-}Hao Su and
Razvan Kusztos and
Avishek Mondal and
Michal Lis and
Eshan Singhal and
Nikola Mrksic and
Tsung{-}Hsien Wen and
Ivan Vulic},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
journal = {CoRR},
volume = {abs/2104.08524},
year = {2021},
url = {https://arxiv.org/abs/2104.08524},
eprinttype = {arXiv},
eprint = {2104.08524},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset
|
omniquad/BioNLP11ID-ggp-IOB | 2023-05-16T11:52:23.000Z | [
"region:us"
] | omniquad | The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ | @article{Krallinger2015TheCC,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia},
journal={Journal of Cheminformatics},
year={2015},
volume={7},
pages={S2 - S2}
} | null | 0 | 10 | Entry not found |
aalksii/ml-arxiv-papers | 2023-05-19T11:47:18.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"language:en",
"arxiv",
"ML",
"region:us"
] | aalksii | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 130808836.19633989
num_examples: 105832
- name: test
num_bytes: 14535413.803660113
num_examples: 11760
download_size: 81252051
dataset_size: 145344250
language:
- en
pretty_name: ML ArXiv Papers
task_categories:
- summarization
- text2text-generation
tags:
- arxiv
- ML
---
# Dataset Card for "ml-arxiv-papers"
This is a dataset containing ML ArXiv papers. The dataset is a version of the original one from [CShorten](https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers), which is a part of the ArXiv papers dataset from [Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
Three processing steps are applied to the source data:
1. removal of unused columns;
2. train-test split;
3. removal of '\n' characters and trimming of spaces on both sides of the text.
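As a sketch of the text cleaning in step 3 — assuming '\n' removal means replacing each newline with a single space, which the card does not spell out:

```python
def clean_text(text: str) -> str:
    # Replace newlines with single spaces, then trim whitespace on both sides
    return text.replace("\n", " ").strip()

row = {"title": " A Title\nSplit Across Lines ", "abstract": "Some\nabstract. "}
cleaned = {key: clean_text(value) for key, value in row.items()}
```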
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-clinical_knowledge | 2023-08-23T04:29:12.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 4228
num_examples: 5
- name: test
num_bytes: 848200
num_examples: 265
download_size: 103156
dataset_size: 852428
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-clinical_knowledge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ProfessorBob/relation_extraction | 2023-07-26T21:21:06.000Z | [
"region:us"
] | ProfessorBob | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: triplets
sequence: string
- name: passage
dtype: string
- name: label
dtype: string
- name: label_id
dtype: int64
- name: synonyms
sequence: string
- name: __index_level_1__
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 179763173
num_examples: 123169
download_size: 66368928
dataset_size: 179763173
---
# Data points per relation:
| | Count |
|---------------------------------------------------|-------|
| instance of | 8461 |
| occupation | 3552 |
| place of birth | 1980 |
| family name | 1977 |
| given name | 1886 |
| country | 1731 |
| country of citizenship | 1677 |
| has part(s) | 1639 |
| educated at | 1457 |
| shares border with | 1329 |
| sex or gender | 1326 |
| award received | 1313 |
| genre | 1285 |
| contains the administrative territorial entity | 1094 |
| child | 1080 |
| located in the administrative territorial entity | 994 |
| participant in | 984 |
| position held | 966 |
| spouse | 956 |
| sibling | 905 |
| place of death | 886 |
| partially coincident with | 761 |
| father | 748 |
| member of | 738 |
| sport | 675 |
| notable work | 657 |
| field of work | 612 |
| mother | 569 |
| languages spoken, written or signed | 560 |
| country of origin | 545 |
| facet of | 528 |
| conflict | 528 |
| member of sports team | 522 |
| part of | 505 |
| present in work | 485 |
| has effect | 468 |
| place of burial | 420 |
| named after | 419 |
| location | 407 |
| significant event | 371 |
| characters | 367 |
| subclass of | 359 |
| manner of death | 359 |
| headquarters location | 352 |
| director | 348 |
| participant | 345 |
| employer | 338 |
| uses | 319 |
| religion or worldview | 315 |
| has use | 304 |
| noble title | 303 |
| language used | 302 |
| nominated for | 301 |
| has works in the collection | 298 |
| opposite of | 292 |
| family | 291 |
| different from | 287 |
| native language | 286 |
| capital | 283 |
| founded by | 273 |
| work location | 272 |
| residence | 269 |
| language of work or name | 263 |
| member of political party | 262 |
| platform | 257 |
| applies to jurisdiction | 255 |
| cause of death | 249 |
| owned by | 235 |
| military branch | 232 |
| student of | 226 |
| composer | 221 |
| cause | 219 |
| continent | 219 |
| screenwriter | 219 |
| performer | 215 |
| military rank | 214 |
| main subject | 210 |
| relative | 207 |
| creator | 193 |
| depicts | 191 |
| head of government | 190 |
| industry | 189 |
| producer | 187 |
| has quality | 182 |
| form of creative work | 181 |
| record label | 181 |
| operator | 177 |
| has contributing factor | 176 |
| replaces | 174 |
| student | 173 |
| developer | 173 |
| color | 172 |
| country for sport | 172 |
| said to be the same as | 166 |
| writing language | 165 |
| sports discipline competed in | 163 |
| based on | 162 |
| instrument | 161 |
| topic's main category | 159 |
| participating team | 157 |
| followed by | 157 |
| production company | 155 |
| ethnic group | 151 |
| office held by head of government | 144 |
| league | 143 |
| original language of film or TV show | 143 |
| has subsidiary | 143 |
| architect | 141 |
| victory | 141 |
| has part(s) of the class | 135 |
| located in/on physical feature | 132 |
| time period | 132 |
| part of the series | 131 |
| made from material | 128 |
| author | 125 |
| heritage designation | 120 |
| location of formation | 118 |
| allegiance | 117 |
| parent organization | 115 |
| narrative location | 114 |
| capital of | 112 |
| manufacturer | 111 |
| product or material produced | 110 |
| replaced by | 110 |
| position played on team / speciality | 109 |
| taxon rank | 108 |
| tracklist | 107 |
| consecrator | 106 |
| twinned administrative body | 105 |
| found in taxon | 104 |
| winner | 101 |
| connects with | 96 |
| parent taxon | 95 |
| original broadcaster | 95 |
| home venue | 94 |
| publisher | 91 |
| discoverer or inventor | 89 |
| has edition or translation | 88 |
| distribution format | 88 |
| legal form | 80 |
| operating system | 79 |
| architectural style | 78 |
| filming location | 77 |
| described by source | 76 |
| medical condition | 73 |
| subject has role | 71 |
| movement | 71 |
| lyrics by | 70 |
| organizer | 70 |
| competition class | 67 |
| chairperson | 67 |
| presenter | 65 |
| located in protected area | 64 |
| religious order | 64 |
| academic degree | 63 |
| media franchise | 63 |
| candidate | 62 |
| head coach | 61 |
| candidacy in election | 59 |
| transport network | 58 |
| has immediate cause | 58 |
| category of associated people | 57 |
| follows | 55 |
| affiliation | 52 |
| legislated by | 51 |
| copyright license | 49 |
| connecting line | 49 |
| contributor to the creative work or subject | 47 |
| connecting service | 47 |
| country of registry | 46 |
| start point | 42 |
| collection | 39 |
| exhibition history | 38 |
| located on street | 37 |
| season | 36 |
| indigenous to | 36 |
| place of publication | 36 |
| contains settlement | 35 |
| voice actor | 34 |
| distributed by | 34 |
| film editor | 33 |
| archives at | 32 |
| foundational text | 32 |
| owner of | 32 |
| sponsor | 31 |
| mountain range | 30 |
| place of detention | 29 |
| day of week | 28 |
| ancestral home | 27 |
| occupant | 27 |
| location of creation | 26 |
| game mode | 26 |
| state of use | 25 |
| adjacent station | 25 |
| writing system | 24 |
| crosses | 24 |
| honorific prefix | 21 |
| dedicated to | 20 |
| amended by | 20 |
| director of photography | 18 |
| copyright status | 18 |
| published in | 17 |
| is a list of | 17 |
| maintained by | 16 |
| commemorates | 14 |
| repealed by | 14 |
| sports season of league or competition | 12 |
| editor | 12 |
| voice type | 12 |
| category for people born here | 11 |
| associated electoral district | 11 |
| topic's main template | 10 |
| fabrication method | 10 |
| does not have cause | 10 |
| addressee | 10 |
| has facility | 10 |
| endemic to | 9 |
| cause of destruction | 9 |
| general classification of race participants | 8 |
| state of conservation | 8 |
| artist files at | 8 |
| terminus location | 8 |
| related category | 8 |
| terminus | 6 |
| referee | 5 |
| significant place | 4 |
| hotel rating | 3 |
| access restriction status | 2 |
| associated cadastral district | 1 |
| appears in the heritage monument list | 1 |
| category for ship name | 1 |
| study type | 1 |
| online access status | 1 |
| diel cycle | 1 |
| copyright status as a creator | 1 |
| taxon synonym | 1 | |
EulerianKnight/breast-histopathology-images-train-test-valid-split | 2023-05-22T17:45:55.000Z | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | EulerianKnight | null | null | null | 0 | 10 | ---
license: apache-2.0
task_categories:
- image-classification
size_categories:
- 100K<n<1M
---
# Breast Histopathology Image dataset
- This dataset is a rearrangement of the original dataset on Kaggle: https://www.kaggle.com/datasets/paultimothymooney/breast-histopathology-images
- Data Citation: https://www.ncbi.nlm.nih.gov/pubmed/27563488 , http://spie.org/Publications/Proceedings/Paper/10.1117/12.2043872
- The original dataset has the structure:
<pre>
|-- patient_id
|-- class(0 and 1)
</pre>
- The present dataset has the following structure:
<pre>
|-- train
|-- class(0 and 1)
|-- valid
|-- class(0 and 1)
|-- test
|-- class(0 and 1)
</pre> |
Brand24/mms | 2023-08-23T21:49:55.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:mixed",
"multilinguality:multi-lingual",
"size_categories:1M<n<10M",
"language:ar",
"language:bg",
"language:bs",
"language:cs",
"language:de",
"language:el",
"language:en",
"language:es",
"la... | Brand24 | This work presents the most extensive open massively multi-lingual corpus of datasets for training sentiment models.
The corpus consists of 79 manually selected from over 350 datasets reported in the scientific literature based on strict quality criteria and covers 25 languages.
Datasets can be queried using several linguistic and functional features.
In addition, we present a multi-faceted sentiment classification benchmark summarizing hundreds of experiments conducted on different base models, training objectives, dataset collections, and fine-tuning strategies. | @misc{augustyniak2023massively,
title={Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark},
author={Łukasz Augustyniak and Szymon Woźniak and Marcin Gruza and Piotr Gramacki and Krzysztof Rajda and Mikołaj Morzy and Tomasz Kajdanowicz},
year={2023},
eprint={2306.07902},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 2 | 10 | ---
annotations_creators:
- mixed
language:
- ar
- bg
- bs
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hi
- hr
- hu
- it
- ja
- lv
- pl
- pt
- ru
- sk
- sl
- sq
- sr
- sv
- th
- ur
- zh
license:
- other
multilinguality:
- multi-lingual
size_categories:
- 1M<n<10M
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Massive-Multilingual-Sentiment
---
# Massive Multilingual Sentiment Corpora (MMS)
## Corpora Summary
Despite impressive advancements in multilingual corpora collection and model training, developing large-scale deployments of multilingual models still presents a significant challenge. This is particularly true for language tasks that are culture-dependent. One such example is the area of multilingual sentiment analysis, where affective markers can be subtle and deeply ensconced in culture.
This work presents the most extensive open massively multilingual corpus of datasets for training sentiment models. The corpus consists of 79 datasets, manually selected from over 350 datasets reported in the scientific literature based on strict quality criteria, and covers 27 languages. Datasets can be queried using several linguistic and functional features. In addition, we present a multi-faceted sentiment classification benchmark summarizing hundreds of experiments conducted on different base models, training objectives, dataset collections, and fine-tuning strategies.
More about dataset here [https://brand24-ai.github.io/mms_benchmark](https://brand24-ai.github.io/mms_benchmark).
## General licenses information
This is a library of the open-sourced datasets that we gathered. We provide citations or links to the sources of these datasets. It is essential to mention that these datasets may have different licenses, and we encourage everybody to check the permissions of each dataset separately. This is critical because, for example, not all datasets are available for commercial purposes. This ensures that proper consent and permissions are obtained for the use and curation of the data, respecting the rights and privacy of the individuals whose data is included in the datasets. Please cite our library and the authors of each dataset you want to use.
## Usage
```python
import datasets
# whole dataset will be downloaded and cached
mms_dataset = datasets.load_dataset("Brand24/mms")
# filter only texts in Polish
pl = mms_dataset.filter(lambda row: row['language'] == 'pl')
```
## Corpora statistics
### Per language
| language | label_name | count |
|:-----------|:-------------|--------:|
| ar | negative | 138899 |
| ar | neutral | 192774 |
| ar | positive | 600402 |
| bg | negative | 13930 |
| bg | neutral | 28657 |
| bg | positive | 19563 |
| bs | negative | 11974 |
| bs | neutral | 11145 |
| bs | positive | 13064 |
| cs | negative | 39674 |
| cs | neutral | 59200 |
| cs | positive | 97413 |
| de | negative | 104667 |
| de | neutral | 100071 |
| de | positive | 111149 |
| el | negative | 230 |
| el | neutral | 38 |
| el | positive | 232 |
| en | negative | 304939 |
| en | neutral | 290823 |
| en | positive | 1734724 |
| es | negative | 108733 |
| es | neutral | 122493 |
| es | positive | 187486 |
| fa | negative | 1602 |
| fa | neutral | 5091 |
| fa | positive | 6832 |
| fr | negative | 84187 |
| fr | neutral | 43245 |
| fr | positive | 83199 |
| he | negative | 2279 |
| he | neutral | 243 |
| he | positive | 6097 |
| hi | negative | 4992 |
| hi | neutral | 6392 |
| hi | positive | 5615 |
| hr | negative | 19757 |
| hr | neutral | 19470 |
| hr | positive | 38367 |
| hu | negative | 8974 |
| hu | neutral | 17621 |
| hu | positive | 30087 |
| it | negative | 4043 |
| it | neutral | 4193 |
| it | positive | 3829 |
| ja | negative | 83982 |
| ja | neutral | 41979 |
| ja | positive | 83819 |
| lv | negative | 1378 |
| lv | neutral | 2618 |
| lv | positive | 1794 |
| pl | negative | 77422 |
| pl | neutral | 62074 |
| pl | positive | 97192 |
| pt | negative | 56827 |
| pt | neutral | 55165 |
| pt | positive | 45842 |
| ru | negative | 31770 |
| ru | neutral | 48106 |
| ru | positive | 31054 |
| sk | negative | 14431 |
| sk | neutral | 12842 |
| sk | positive | 29350 |
| sl | negative | 33694 |
| sl | neutral | 50553 |
| sl | positive | 29296 |
| sq | negative | 6889 |
| sq | neutral | 14757 |
| sq | positive | 22638 |
| sr | negative | 25089 |
| sr | neutral | 32283 |
| sr | positive | 18996 |
| sv | negative | 16266 |
| sv | neutral | 13342 |
| sv | positive | 11738 |
| th | negative | 9326 |
| th | neutral | 28616 |
| th | positive | 34377 |
| ur | negative | 5239 |
| ur | neutral | 8585 |
| ur | positive | 5836 |
| zh | negative | 117967 |
| zh | neutral | 69016 |
| zh | positive | 144719 |
## Dataset Structure
### Linguistic Typology
The field of language typology focuses on studying the similarities and differences among languages. These differences can be categorized into phonological (sounds), syntactic (structures), lexical (vocabulary), and theoretical aspects. Linguistic typology analyzes the current state of languages, contrasting with genealogical linguistics, which examines historical relationships between languages.
Genealogical linguistics studies language families and genera. A language family consists of languages that share a common ancestral language, while genera are branches within a language family. The Indo-European family, for example, includes genera such as Slavic, Romance, Germanic, and Indic. Over 7000 languages are categorized into approximately 150 language families, with Indo-European, Sino-Tibetan, Turkic, Afro-Asiatic, Nilo-Saharan, Niger-Congo, and Eskimo-Aleut being some of the largest families.
Within linguistic typology, languages are described using various linguistic features. Our work focuses on sentiment classification and selects ten relevant features:
- `text`: The feature text represents the actual text of the sentiment dataset. It is of type string and contains the text samples or sentences for sentiment analysis.
- `label`: The feature label corresponds to the sentiment labels of the text samples. It is of type ClassLabel and has three possible values: negative, neutral, and positive. These labels indicate the sentiment or emotional polarity associated with the text.
- `original_dataset`: The feature original_dataset refers to the name or identifier of the original dataset from which the text samples were extracted. It is of type string and provides information about the source dataset.
- `domain`: The feature domain represents the domain or topic of the sentiment dataset. It is of type string and provides context regarding the subject matter of the text samples.
- `language`: The feature language indicates the language of the text samples in the sentiment dataset. It is of type string and specifies the language in which the text is written.
- `Family`: The feature Family represents the language family to which a specific language belongs. It is of type string and provides information about the broader categorization of languages into language families.
- `Genus`: The feature Genus corresponds to the genus or branch within a language family. It is of type string and indicates the specific subgrouping of languages within a language family.
- `Definite article`: Half of the languages do not use the definite article, which signals uniqueness or definiteness of a concept.
- `Indefinite article`: Half of the languages do not use the indefinite article, with some languages using a separate article or the numeral "one."
- `Number of cases`: Languages vary greatly in the number of morphological cases used.
- `Order of subject, verb, and object`: Different languages have different word orderings, with variations like SOV, SVO, VSO, VOS, OVS, and OSV.
- `Negative morphemes`: Negative morphemes indicate clausal negation in declarative sentences.
- `Polar questions`: Questions with yes/no answers, which can be formed using question particles, interrogative morphology, or intonation.
- `Position of the negative morpheme`: The position of the negative morpheme can vary in relation to subjects and objects.
- `Prefixing vs. suffixing`: Languages differ in their use of prefixes and suffixes in inflectional morphology.
- `Coding of nominal plurals`: Plurals can be expressed through morphological changes or the use of plurality indicator morphemes.
- `Grammatical genders`: Languages vary in the number of grammatical genders used, or may not use the concept at all.
These language features are available as filtering options in our library. Users can download specific facets of the collection, such as datasets in Slavic languages with interrogative word order for polar questions or datasets from the Afro-Asiatic language family without morphological case-making.
### Usage
Code example for loading and filtering Slavic language in which polar questions are formed using the interrogative word order
```python
import datasets
mms_dataset = datasets.load_dataset("Brand24/mms")
slavic = mms_dataset.filter(lambda row: row["Genus"] == "Slavic" and row["Polar questions"] == "interrogative word order")
```
Filtering sentiment datasets from the Afro-Asiatic language family without morphological case-making
```python
afro_asiatic = mms_dataset.filter(lambda row: row["Family"] == "Afro-Asiatic" and row["Number of cases"] == "no morphological case-making")
```
## Dataset Creation
### Who are the source language producers?
The data comes from multiple papers and covers a large variety of languages. For the specific dataset information, please check out the companion paper.
### Annotations
Similarly, as with the data producers, you should check the papers that propose the specific datasets you are interested in.
#### Annotation process
We describe the annotations process of our internally created dataset in this corpus.
## Considerations for Using the Data
### Social Impact and Limitations
The corpus is intended to bring more sentiment-annotated data to a wide variety of languages; the aim of the corpus is to make large amounts of data available for lower-resource languages in order to facilitate the training of state-of-the-art ML models for sentiment analysis.
## Additional Information
### Dataset Curators
The corpus was put together by
- [@laugustyniak](https://www.linkedin.com/in/lukaszaugustyniak/)
- [@swozniak](https://www.linkedin.com/in/wscode/)
- [@mgruza](https://www.linkedin.com/in/marcin-gruza-276b2512b/)
- [@pgramacki](https://www.linkedin.com/in/piotrgramacki/)
- [@krajda](https://www.linkedin.com/in/krzysztof-rajda/)
- [@mmorzy](https://www.linkedin.com/in/mikolajmorzy/)
- [@tkajdanowicz](https://www.linkedin.com/in/kajdanowicz/)
### Licensing Information
These data are released under this licensing scheme.
We do not own any text from which these data and datasets have been extracted.
We license the actual packaging of these data under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
This work is published from Poland.
Should you consider that our data contains material that is owned by you and should, therefore not be reproduced here, please:
* Clearly identify yourself with detailed contact data such as an address, telephone number, or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material claimed to be infringing and the information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
### The main corpus citation
```bibtex
@misc{augustyniak2023massively,
title={Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark},
author={Łukasz Augustyniak and Szymon Woźniak and Marcin Gruza and Piotr Gramacki and Krzysztof Rajda and Mikołaj Morzy and Tomasz Kajdanowicz},
year={2023},
eprint={2306.07902},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### All datasets in corpus
[https://brand24-ai.github.io/mms_benchmark/citations.html](https://brand24-ai.github.io/mms_benchmark/citations.html)
## Acknowledgements
- BRAND24 - https://brand24.com
- CLARIN-PL-Biz - https://clarin.biz
|
Abzu/dolly_hhrlhf | 2023-06-04T19:33:11.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | Abzu | null | null | null | 3 | 10 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 22346337.075312525
num_examples: 35205
- name: test
num_bytes: 2483137.924687476
num_examples: 3912
download_size: 16025539
dataset_size: 24829475
license: cc-by-sa-3.0
task_categories:
- question-answering
- text2text-generation
language:
- en
---
# Dataset Card for "dolly_hhrlhf"
This is the dataset from MosaicML's `mosaicml/dolly_hhrlhf`, with some duplicates found in the original removed.
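The card notes only that some duplicate rows were removed. Below is a minimal sketch of one plausible prompt-level deduplication pass; the actual procedure used is undocumented, and the field names simply follow the `dataset_info` block above:

```python
def dedupe_by_prompt(rows):
    """Keep the first occurrence of each prompt; drop later repeats."""
    seen = set()
    out = []
    for row in rows:
        if row["prompt"] not in seen:
            seen.add(row["prompt"])
            out.append(row)
    return out

rows = [
    {"prompt": "What is 2+2?", "response": "4"},
    {"prompt": "What is 2+2?", "response": "Four."},  # duplicate prompt
    {"prompt": "Name a color.", "response": "Blue"},
]
deduped = dedupe_by_prompt(rows)
print(len(deduped))  # → 2
```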
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
projecte-aina/CaWikiTC | 2023-09-13T12:35:09.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:automatically-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-sa-3.0",
"region:us"
] | projecte-aina | null | null | null | 1 | 10 | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: cawikitc
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for CaWikiTC
## Dataset Description
- **Point of Contact:** [Irene Baucells de la Peña](irene.baucells@bsc.es)
### Dataset Summary
CaWikiTC (Catalan Wikipedia Text Classification) is a text classification dataset automatically created by scraping Catalan Wikipedia article summaries and their associated thematic category. It contains 21002 texts (19952 and 1050 in the train and dev partitions, respectively) classified under 67 exclusive categories.
To create the dataset, we selected all the Catalan Wikipedia article summaries from a previously fixed variety of subcategories, most of which are professional disciplines and social sciences-related fields. Texts originally associated with more than one category were discarded to avoid class overlaps.
This dataset was created as part of the experiments from [reference]. Its original purpose was to serve as a task transfer source to train an entailment model, which was then used to perform a different text classification task.
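The task-transfer setup described above can be sketched as zero-shot classification via entailment: each candidate label is rewritten as a hypothesis, and the label whose hypothesis is most entailed by the text wins. The hypothesis template and the toy scorer below are illustrative stand-ins, not the model or template from the referenced experiments:

```python
def classify_by_entailment(text, labels, entail_score, template="Aquest text tracta de {}."):
    """Return the label whose hypothesis gets the highest entailment score."""
    scores = {lab: entail_score(text, template.format(lab)) for lab in labels}
    return max(scores, key=scores.get)

def toy_scorer(premise, hypothesis):
    # Stand-in for a real NLI model: count hypothesis words found in the premise.
    words = hypothesis.lower().replace(".", "").split()
    return sum(w in premise.lower() for w in words)

pred = classify_by_entailment(
    "Tractat de filosofia sobre la lògica d'Aristòtil.",
    ["Filosofia", "Química"],
    toy_scorer,
)
print(pred)  # → Filosofia
```

In practice `toy_scorer` would be replaced by an NLI model's entailment probability, e.g. one fine-tuned on pairs derived from this corpus.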
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Two json files (train and development splits).
### Data Fields
Each example contains the following fields:
* text: Catalan Wikipedia article summary (string)
* label: thematic category (string)
#### Example:
<pre>
[
{
'text': "Novum Organum és el títol de l'obra més important de Francis Bacon, publicada el 1620. Rep el seu nom perquè pretén ser una superació del tractat sobre lògica d'Aristòtil, anomenat Organon. Es basa a trobar la causa de tot fenomen per inducció, observant quan passa i quan no i extrapolant aleshores les condicions que fan que es doni. Aquest raonament va influir decisivament en la formació del mètode científic, especialment en la fase d'elaboració d'hipòtesis. També indica que el prejudici és l'enemic de la ciència, perquè impideix generar noves idees. Els prejudicis més comuns s'expliquen amb la metàfora de l'ídol o allò que és falsament adorat. Existeixen ídols de la tribu (comuns a tots els éssers humans per la seva naturalesa), de la caverna (procedents de l'educació), del fòrum (causats per un ús incorrecte del llenguatge) i del teatre (basats en idees anteriors errònies, notablement en filosofia).",
'label': 'Filosofia',
},
...
]
</pre>
#### Labels
* 'Administració', 'Aeronàutica', 'Agricultura', 'Antropologia', 'Arqueologia', 'Arquitectura', 'Art', 'Astronomia', 'Astronàutica', 'Biblioteconomia', 'Biotecnologia', 'Catàstrofes', 'Circ', 'Ciència militar', 'Ciència-ficció', 'Ciències ambientals', 'Ciències de la salut', 'Ciències polítiques', 'Conflictes', 'Cronometria', 'Cultura popular', 'Dansa', 'Dret', 'Ecologia', 'Enginyeria', 'Epidèmies', 'Esoterisme', 'Estris', 'Festivals', 'Filologia', 'Filosofia', 'Fiscalitat', 'Física', 'Geografia', 'Geologia', 'Gestió', 'Heràldica', 'Història', 'Humor', 'Indumentària', 'Informàtica', 'Jaciments paleontològics', 'Jocs', 'Lingüística', 'Llengües', 'Llocs ficticis', 'Matemàtiques', 'Metodologia', 'Mitologia', 'Multimèdia', 'Museologia', 'Nàutica', 'Objectes astronòmics', 'Pedagogia', 'Periodisme', 'Protestes', 'Pseudociència', 'Psicologia', 'Química', 'Robòtica', 'Ràdio', 'Seguretat laboral', 'Sociologia', 'Telecomunicacions', 'Televisió', 'Teologia', 'Ètica'
### Data Splits
Train and development splits were created in a stratified fashion, following a 95% and 5% proportion, respectively. The sizes of each split are the following:
* train.json: 19952 examples
* dev.json: 1050 examples
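The stratified 95%/5% partitioning can be reproduced in plain Python by grouping examples per label and carving off 5% of each group for the dev split. This is only a sketch; the original split's ordering, seed, and rounding are not documented:

```python
import random
from collections import defaultdict

def stratified_split(examples, dev_frac=0.05, seed=0):
    """Split into (train, dev) while keeping per-label proportions roughly equal."""
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    rng = random.Random(seed)
    train, dev = [], []
    for group in by_label.values():
        rng.shuffle(group)
        n_dev = max(1, round(len(group) * dev_frac))
        dev.extend(group[:n_dev])
        train.extend(group[n_dev:])
    return train, dev

examples = [{"text": f"t{i}-{lab}", "label": lab} for lab in ("Art", "Dret") for i in range(100)]
train, dev = stratified_split(examples)
print(len(train), len(dev))  # → 190 10
```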
### Annotations
#### Annotation process
The crawled data contained the categories' annotations, which were then used to create this dataset with the mentioned criteria.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Irene Baucells (irene.baucells@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a>.
### Citation Information
|
aisyahhrazak/ms-news-harakahdaily | 2023-06-24T00:24:27.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | null | 0 | 10 | ---
language:
- ms
---
### Dataset Summary
- 45,505 news articles scraped from Harakah Daily, from 2017 to 21 May 2023
- Nearly all articles are in Malay, with a small portion in English
### Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...]}
``` |
raygx/NepCov19TweetsPlus | 2023-07-01T04:10:37.000Z | [
"region:us"
] | raygx | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: Sentiment
dtype: int64
- name: Sentences
dtype: string
splits:
- name: train
num_bytes: 14110875
num_examples: 41541
download_size: 5219950
dataset_size: 14110875
---
# Dataset Card for "NepCov19TweetsPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Shanav12/sports_ball_dataset | 2023-05-28T03:33:16.000Z | [
"region:us"
] | Shanav12 | null | null | null | 0 | 10 | Entry not found |
TigerResearch/tigerbot-law-plugin | 2023-06-01T03:11:47.000Z | [
"language:zh",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | null | 10 | 10 | ---
license: apache-2.0
language:
- zh
---
Raw "external brain" data used by the [Tigerbot](https://github.com/TigerResearch/TigerBot) model at rethink time: 11 major categories of law, 55,000+ provisions in total.
- 宪法 (Constitution)
- 刑法 (Criminal Law)
- 行政法 (Administrative Law)
- 司法解释 (Judicial Interpretations)
- 民法商法 (Civil and Commercial Law)
- 民法典 (Civil Code)
- 行政法规 (Administrative Regulations)
- 社会法 (Social Law)
- 部门规章 (Departmental Rules)
- 经济法 (Economic Law)
- 诉讼与非诉讼程序法 (Litigation and Non-Litigation Procedure Law)
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-law-plugin')
``` |
whu9/multi_xsciene_postprocess | 2023-06-02T23:24:12.000Z | [
"region:us"
] | whu9 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 165217234
num_examples: 30351
- name: test
num_bytes: 27275286
num_examples: 5090
- name: validation
num_bytes: 27471336
num_examples: 5061
download_size: 101726627
dataset_size: 219963856
---
# Dataset Card for "multi_xsciene_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aisyahhrazak/ms-news-utusanborneo | 2023-06-29T04:00:06.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | null | 0 | 10 | ---
language:
- ms
---
### Dataset Summary
- News articles scraped from Utusan Borneo on 27.5.2023
- All articles are in Malay
### Dataset Format
```
{"url": "...", "content": [...,...]}
``` |
Yulong-W/squadorirobustness | 2023-06-11T03:59:10.000Z | [
"region:us"
] | Yulong-W | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 0 | 10 | Entry not found |
dmayhem93/agieval-gaokao-biology | 2023-06-18T17:16:57.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 159178
num_examples: 210
download_size: 94276
dataset_size: 159178
license: mit
---
# Dataset Card for "agieval-gaokao-biology"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
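Given the `query`/`choices`/`gold` schema above (where `gold` holds the correct choice indices), accuracy over model predictions reduces to a membership check per item. This is only an illustrative sketch; the AGIEval repository ships its own evaluation code:

```python
def accuracy(predictions, golds):
    """predictions: one chosen choice index per item; golds: list of correct-index lists."""
    correct = sum(pred in gold for pred, gold in zip(predictions, golds))
    return correct / len(golds)

golds = [[0], [2], [1]]   # the 'gold' field of three examples
predictions = [0, 1, 1]   # model-chosen choice indices
print(round(accuracy(predictions, golds), 3))  # → 0.667
```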
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
dmayhem93/agieval-gaokao-chemistry | 2023-06-18T17:17:33.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 173207
num_examples: 207
download_size: 78411
dataset_size: 173207
license: mit
---
# Dataset Card for "agieval-gaokao-chemistry"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
dmayhem93/agieval-gaokao-chinese | 2023-06-18T17:18:09.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 833642
num_examples: 246
download_size: 371866
dataset_size: 833642
license: mit
---
# Dataset Card for "agieval-gaokao-chinese"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
dmayhem93/agieval-gaokao-english | 2023-06-18T17:19:13.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 688986
num_examples: 306
download_size: 200843
dataset_size: 688986
license: mit
---
# Dataset Card for "agieval-gaokao-english"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
dmayhem93/agieval-gaokao-geography | 2023-06-18T17:19:48.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 116612
num_examples: 199
download_size: 52868
dataset_size: 116612
license: mit
---
# Dataset Card for "agieval-gaokao-geography"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |