id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
BDas/ArabicNLPDataset | 2022-09-26T18:52:01.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
... | BDas | The dataset, prepared in Arabic, includes 10,000 test, 10,000 validation, and 80,000 training examples.
The data is composed of customer comments collected from e-commerce sites. | ----ArabicNLPDataset---- | 0 | 6 | 2022-08-26T21:33:24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ar
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'ArabicNLPDataset'
---
# Dataset Card for "ArabicNLPDataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/BihterDass/ArabicTextClassificationDataset
- **Repository:** https://github.com/BihterDass/ArabicTextClassificationDataset
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### arabic-dataset-v1
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Data Fields
The data fields are the same among all splits.
#### arabic-dataset-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
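Following the field description above, the integer labels map to class names as shown in this minimal sketch (the dictionary and variable names are ours, for illustration only):

```python
# Integer-to-name mapping for the `label` field, per the card above.
ID2LABEL = {0: "negative", 1: "natural", 2: "positive"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

example = {"text": "...", "label": 2}  # illustrative record shape
label_name = ID2LABEL[example["label"]]  # "positive"
```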
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 80000 | 10000 | 10000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset. | 3,647 | [
[
-0.046417236328125,
-0.0276031494140625,
-0.00968170166015625,
0.014129638671875,
-0.0219879150390625,
0.01512908935546875,
-0.018218994140625,
-0.033477783203125,
0.0215606689453125,
0.0302734375,
-0.0447998046875,
-0.07745361328125,
-0.051513671875,
0.0139... |
sagawa/pubchem-10m-canonicalized | 2022-09-04T02:18:37.000Z | [
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:apache-2.0",
"PubChem",
"chemical",
"SMILES",
"region:us"
] | sagawa | null | null | 0 | 6 | 2022-09-03T05:35:49 | ---
annotations_creators: []
language: []
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: canonicalized PubChem-10m
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- PubChem
- chemical
- SMILES
task_categories: []
task_ids: []
---
### dataset description
We downloaded the PubChem-10m dataset from [here](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip) and canonicalized it.
We used the following function to canonicalize the data, removing any SMILES strings that RDKit cannot parse.
```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical SMILES form.
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
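The subsequent random 9:1 split can be sketched with the standard library (the seed, function name, and toy data below are illustrative assumptions, not the authors' exact code):

```python
import random

def split_train_valid(smiles_list, valid_frac=0.1, seed=42):
    # Shuffle a copy of the list, then carve off the first 10% as validation.
    rng = random.Random(seed)
    shuffled = list(smiles_list)
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_frac)
    return shuffled[n_valid:], shuffled[:n_valid]

# Toy SMILES-like strings stand in for the canonicalized PubChem data.
data = [f"C{'C' * i}O" for i in range(100)]
train, valid = split_train_valid(data)
```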
We randomly split the preprocessed data into train and validation. The ratio is 9 : 1. | 809 | [
[
-0.0221710205078125,
-0.0250701904296875,
0.0399169921875,
0.0220489501953125,
-0.026336669921875,
0.01291656494140625,
-0.026885986328125,
0.0043182373046875,
0.0251922607421875,
0.04541015625,
-0.04718017578125,
-0.058258056640625,
-0.02520751953125,
0.042... |
agemagician/uniref30 | 2022-09-18T12:38:41.000Z | [
"region:us"
] | agemagician | null | null | 2 | 6 | 2022-09-18T12:35:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Gxg/MathQA | 2022-10-06T11:13:19.000Z | [
"region:us"
] | Gxg | null | null | 1 | 6 | 2022-10-06T05:25:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
YaYaB/magic-blip-captions | 2023-01-09T15:01:47.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:YaYaB/magic-creature-blip-captions",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | YaYaB | null | null | 2 | 6 | 2022-10-11T07:23:25 | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Subset of Magic card (Creature only) BLIP captions'
size_categories:
- n<1K
source_datasets:
- YaYaB/magic-creature-blip-captions
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Disclaimer
This was inspired from https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for A subset of Magic card BLIP captions
_Dataset used to train [Magic card text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP generated captions for Magic Card images collected from the web. Original images were obtained from [Scryfall](https://scryfall.com/) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Examples

> A woman holding a flower

> two knights fighting

> a card with a unicorn on it
## Citation
If you use this dataset, please cite it as:
```
@misc{yayab2022onepiece,
author = {YaYaB},
title = {Magic card creature split BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/YaYaB/magic-blip-captions/}}
}
``` | 1,704 | [
[
-0.01352691650390625,
-0.0113525390625,
-0.004364013671875,
0.033233642578125,
-0.042205810546875,
-0.006237030029296875,
-0.0013103485107421875,
-0.041259765625,
0.039306640625,
0.048095703125,
-0.055450439453125,
-0.0220947265625,
-0.033721923828125,
0.018... |
allenai/ms2_dense_mean | 2022-11-18T19:40:11.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | 0 | 6 | 2022-10-12T14:06:02 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by documents selected with a __dense__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==17`
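As an illustration of the fixed-k retrieval step (not the authors' PyTerrier/contriever pipeline), dot-product scoring with `k == 17` can be sketched with NumPy, assuming query and document embeddings have already been computed:

```python
import numpy as np

def retrieve_top_k(query_emb, doc_embs, k=17):
    # Score every corpus document by dot product with the query embedding,
    # then keep the k highest scorers, mirroring the "max" top-k strategy.
    scores = doc_embs @ query_emb
    top = np.argsort(-scores)[:k]
    return top, scores[top]

rng = np.random.default_rng(0)
doc_embs = rng.normal(size=(100, 8))  # toy stand-ins for title+abstract embeddings
query_emb = rng.normal(size=8)        # toy stand-in for a `background` embedding
top_idx, top_scores = retrieve_top_k(query_emb, doc_embs)
```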
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4764 | 0.2395 | 0.2271 | 0.2418 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4364 | 0.2125 | 0.2131 | 0.2074 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4481 | 0.2224 | 0.2254 | 0.2100 | | 1,749 | [
[
-0.019439697265625,
-0.022674560546875,
0.013885498046875,
0.00995635986328125,
-0.0149383544921875,
-0.01007080078125,
-0.017913818359375,
0.001560211181640625,
0.0269012451171875,
0.031646728515625,
-0.033905029296875,
-0.036224365234375,
-0.05645751953125,
... |
sled-umich/TRIP | 2022-10-14T19:17:29.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | sled-umich | We introduce Tiered Reasoning for Intuitive Physics (TRIP), a novel commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines’ reasoning process. | @misc{storks2021tiered,
title={Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding},
author={Shane Storks and Qiaozi Gao and Yichi Zhang and Joyce Chai},
year={2021},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021},
location={Punta Cana, Dominican Republic},
publisher={Association for Computational Linguistics},
} | 0 | 6 | 2022-10-12T18:23:13 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: 'TRIP: Tiered Reasoning for Intuitive Physics'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# [TRIP - Tiered Reasoning for Intuitive Physics](https://aclanthology.org/2021.findings-emnlp.422/)
Official dataset for [Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding](https://aclanthology.org/2021.findings-emnlp.422/). Shane Storks, Qiaozi Gao, Yichi Zhang, Joyce Chai. EMNLP Findings, 2021.
For our official model and experiment code, please check [GitHub](https://github.com/sled-group/Verifiable-Coherent-NLU).
## Overview

We introduce Tiered Reasoning for Intuitive Physics (TRIP), a novel commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines’ reasoning process.
It includes dense annotations for each story capturing multiple tiers of reasoning beyond the end task. From these annotations, we propose a tiered evaluation, where given a pair of highly similar stories (differing only by one sentence which makes one of the stories implausible), systems must jointly identify (1) the plausible story, (2) a pair of conflicting sentences in the implausible story, and (3) the underlying physical states in those sentences causing the conflict. The goal of TRIP is to enable a systematic evaluation of machine coherence toward the end task prediction of plausibility. In particular, we evaluate whether a high-level plausibility prediction can be verified based on lower-level understanding, for example, physical state changes that would support the prediction.
## Download
```python
from datasets import load_dataset
dataset = load_dataset("sled-umich/TRIP")
```
* [HuggingFace-Dataset](https://huggingface.co/datasets/sled-umich/TRIP)
* [GitHub](https://github.com/sled-group/Verifiable-Coherent-NLU)
## Cite
```bibtex
@misc{storks2021tiered,
title={Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding},
author={Shane Storks and Qiaozi Gao and Yichi Zhang and Joyce Chai},
year={2021},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021},
location={Punta Cana, Dominican Republic},
publisher={Association for Computational Linguistics},
}
```
| 2,528 | [
[
-0.01904296875,
-0.06561279296875,
0.0499267578125,
0.0184478759765625,
0.00537872314453125,
0.00922393798828125,
-0.0109710693359375,
-0.03594970703125,
0.00374603271484375,
0.02276611328125,
-0.039947509765625,
-0.031707763671875,
-0.0190887451171875,
0.00... |
dennlinger/wiki-paragraphs | 2022-10-13T22:12:37.000Z | [
"task_categories:text-classification",
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
... | dennlinger | null | null | 0 | 6 | 2022-10-13T15:15:55 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wiki-paragraphs
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- wikipedia
- self-similarity
task_categories:
- text-classification
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for `wiki-paragraphs`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/dennlinger/TopicalChange
- **Paper:** https://arxiv.org/abs/2012.03619
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller](aumiller@informatik.uni-heidelberg.de)
### Dataset Summary
The wiki-paragraphs dataset is constructed by automatically sampling two paragraphs from a Wikipedia article. If they come from the same section, they are considered a "semantic match"; otherwise they are treated as dissimilar. Dissimilar paragraphs can in theory also be sampled from other documents, but doing so did not show any improvement in the evaluation of the linked work.
The alignment is in no way meant as an accurate measure of similarity, but it allows large amounts of samples to be mined quickly.
### Supported Tasks and Leaderboards
The dataset can be used for "same-section classification", which is a binary classification task (either two sentences/paragraphs belong to the same section or not).
This can be combined with document-level coherency measures, where we can check how many misclassifications appear within a single document.
Please refer to [our paper](https://arxiv.org/abs/2012.03619) for more details.
### Languages
The data was extracted from English Wikipedia, therefore predominantly in English.
## Dataset Structure
### Data Instances
A single instance contains three attributes:
```
{
"sentence1": "<Sentence from the first paragraph>",
"sentence2": "<Sentence from the second paragraph>",
"label": 0/1 # 1 indicates two belong to the same section
}
```
### Data Fields
- sentence1: String containing the first paragraph
- sentence2: String containing the second paragraph
- label: Integer, either 0 or 1. Indicates whether two paragraphs belong to the same section (1) or come from different sections (0)
### Data Splits
We provide train, validation and test splits, which were split 80/10/10 from a randomly shuffled original data source.
In total, we provide 25,375,583 training pairs, as well as 3,163,685 validation and test instances each.
## Dataset Creation
### Curation Rationale
The original idea was applied to self-segmentation of Terms of Service documents. Given that these are of domain-specific nature, we wanted to provide a more generally applicable model trained on Wikipedia data.
It is meant as a cheap-to-acquire pre-training strategy for large-scale experimentation with semantic similarity for long texts (paragraph-level).
Based on our experiments, it is not necessarily sufficient by itself to replace traditional hand-labeled semantic similarity datasets.
### Source Data
#### Initial Data Collection and Normalization
The data was collected based on the articles considered in the Wiki-727k dataset by Koshorek et al. The dump of their dataset can be found through the [respective Github repository](https://github.com/koomri/text-segmentation). Note that we did *not* use the pre-processed data, but rather only information on the considered articles, which were re-acquired from Wikipedia at a more recent state.
This is due to the fact that paragraph information was not retained by the original Wiki-727k authors.
We did not verify the particular focus of considered pages.
#### Who are the source language producers?
We do not have any further information on the contributors; these are volunteers contributing to en.wikipedia.org.
### Annotations
#### Annotation process
No manual annotation was added to the dataset.
We automatically sampled two sections from within the same article; if these belong to the same section, they were assigned a label indicating the "similarity" (1), otherwise the label indicates that they are not belonging to the same section (0).
We sample three positive and three negative samples per section, per article.
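The sampling procedure described above can be sketched roughly as follows (the section structure and three-samples-per-section counts come from the card; the function and variable names are our own):

```python
import random

def sample_pairs(article, per_section=3, seed=0):
    # `article` maps section titles to the paragraphs of one Wikipedia page.
    rng = random.Random(seed)
    pairs = []
    sections = list(article.items())
    for title, paras in sections:
        others = [p for t, ps in sections if t != title for p in ps]
        # Positive pairs: two paragraphs from the same section (label 1).
        for _ in range(per_section):
            if len(paras) >= 2:
                s1, s2 = rng.sample(paras, 2)
                pairs.append({"sentence1": s1, "sentence2": s2, "label": 1})
        # Negative pairs: paragraphs from two different sections (label 0).
        for _ in range(per_section):
            if paras and others:
                pairs.append({"sentence1": rng.choice(paras),
                              "sentence2": rng.choice(others),
                              "label": 0})
    return pairs

article = {"Intro": ["i1", "i2"], "History": ["h1", "h2"]}
pairs = sample_pairs(article)
```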
#### Who are the annotators?
No annotators were involved in the process.
### Personal and Sensitive Information
We did not modify the original Wikipedia text in any way. Given that personal information, such as dates of birth (e.g., for a person of interest) may be on Wikipedia, this information is also considered in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of the dataset is to serve as a *pre-training addition* for semantic similarity learning.
Systems building on this dataset should consider additional, manually annotated data, before using a system in production.
### Discussion of Biases
To our knowledge, some works indicate that men are several times more likely than women to have a Wikipedia page created about them (especially in historical contexts). Therefore, a slight bias towards over-representation of men may be present in this dataset.
### Other Known Limitations
As previously stated, the automatically extracted semantic similarity is not perfect; it should be treated as such.
## Additional Information
### Dataset Curators
The dataset was originally developed as a practical project by Lucienne-Sophie Marmé under the supervision of Dennis Aumiller.
Contributions to the original sampling strategy were made by Satya Almasian and Michael Gertz.
### Licensing Information
Wikipedia data is available under the CC-BY-SA 3.0 license.
### Citation Information
```
@inproceedings{DBLP:conf/icail/AumillerAL021,
author = {Dennis Aumiller and
Satya Almasian and
Sebastian Lackner and
Michael Gertz},
editor = {Juliano Maranh{\~{a}}o and
Adam Zachary Wyner},
title = {Structural text segmentation of legal documents},
booktitle = {{ICAIL} '21: Eighteenth International Conference for Artificial Intelligence
and Law, S{\~{a}}o Paulo Brazil, June 21 - 25, 2021},
pages = {2--11},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3462757.3466085},
doi = {10.1145/3462757.3466085}
}
``` | 7,396 | [
[
-0.045501708984375,
-0.050537109375,
0.023101806640625,
0.0032215118408203125,
-0.0295257568359375,
-0.0196533203125,
-0.016448974609375,
-0.033111572265625,
0.0297393798828125,
0.027923583984375,
-0.056854248046875,
-0.06494140625,
-0.04132080078125,
0.0348... |
prajjwal1/discosense | 2023-07-21T11:21:26.000Z | [
"arxiv:2210.12478",
"region:us"
] | prajjwal1 | Discosense | \ | 0 | 6 | 2022-10-14T19:09:30 | Paper: https://arxiv.org/abs/2210.12478
---
license: apache-2.0
---
| 68 | [
[
-0.03662109375,
-0.0474853515625,
0.01335906982421875,
0.0758056640625,
-0.055999755859375,
-0.031829833984375,
0.0086212158203125,
-0.050506591796875,
-0.0101318359375,
0.058868408203125,
-0.0289154052734375,
-0.017791748046875,
-0.04473876953125,
0.0066795... |
julianmoraes/bayc-captions-manual | 2022-10-18T05:04:25.000Z | [
"region:us"
] | julianmoraes | null | null | 1 | 6 | 2022-10-18T05:04:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
laion/laion2b-multi-vit-h-14-embeddings | 2022-12-23T20:29:43.000Z | [
"region:us"
] | laion | null | null | 1 | 6 | 2022-10-25T22:02:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
arbml/DataSet_Arabic_Classification | 2022-10-31T19:27:02.000Z | [
"region:us"
] | arbml | null | null | 0 | 6 | 2022-10-31T19:25:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pszemraj/text2image-multi-prompt | 2023-10-06T23:00:38.000Z | [
"multilinguality:monolingual",
"source_datasets:bartman081523/stable-diffusion-discord-prompts",
"source_datasets:succinctly/midjourney-prompts",
"source_datasets:Gustavosta/Stable-Diffusion-Prompts",
"language:en",
"license:apache-2.0",
"text generation",
"region:us"
] | pszemraj | null | null | 8 | 6 | 2022-11-09T22:47:39 | ---
language:
- en
license: apache-2.0
multilinguality:
- monolingual
source_datasets:
- bartman081523/stable-diffusion-discord-prompts
- succinctly/midjourney-prompts
- Gustavosta/Stable-Diffusion-Prompts
pretty_name: multi text2image prompts a dataset collection
tags:
- text generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: original
data_files:
- split: train
path: original/train-*
- split: test
path: original/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: src_dataset
dtype: string
splits:
- name: train
num_bytes: 262736830
num_examples: 1677221
- name: test
num_bytes: 56294291
num_examples: 292876
download_size: 151054782
dataset_size: 319031121
- config_name: original
features:
- name: text
dtype: string
- name: src_dataset
dtype: string
splits:
- name: train
num_bytes: 741427383
num_examples: 3551734
- name: test
num_bytes: 83615440
num_examples: 399393
download_size: 402186258
dataset_size: 825042823
---
# text2image multi-prompt(s): a dataset collection
- collection of several text2image prompt datasets
- data was cleaned/normalized with the goal of removing model-specific syntax such as the `--ar` flag for Midjourney, and so on
- data was de-duplicated at a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)
## updates
- Oct 2023: the `default` config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.
- original version is still available under `config_name="original"`
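A pure-Python sketch of MinHash near-duplicate detection in the spirit of the update above (3-gram shingles, 128 permutations, 0.6 threshold; the salted hashing and all names here are illustrative, not the exact pipeline, which used xxh3 via minhash tooling):

```python
import hashlib

NUM_PERM = 128  # number of hash "permutations", as in the card

def shingles(text, n=3):
    # Character n-grams of the whitespace-normalized, lowercased prompt.
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def minhash_sig(text):
    # One salted hash per permutation; keep the minimum over all shingles.
    sig = []
    for perm in range(NUM_PERM):
        salt = perm.to_bytes(2, "big")
        sig.append(min(
            int.from_bytes(hashlib.blake2b(salt + s.encode(), digest_size=4).digest(), "big")
            for s in shingles(text)
        ))
    return sig

def est_jaccard(sig_a, sig_b):
    # The fraction of matching minima estimates Jaccard similarity;
    # pairs scoring above the 0.6 threshold would be dropped as duplicates.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM

a = minhash_sig("a castle on a hill, highly detailed, 4k")
b = minhash_sig("a castle on a hill, highly detailed, 8k")
c = minhash_sig("portrait of a cat astronaut")
# est_jaccard(a, b) is high (near-duplicates); est_jaccard(a, c) is low
```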
## contents
default:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 1677221
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 292876
})
})
```
For `original` config:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 3551734
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 399393
})
})
```
_NOTE: as the other two datasets did not have a `validation` split, the validation split of `succinctly/midjourney-prompts` was merged into `train`._ | 2,524 | [
[
-0.03399658203125,
-0.0423583984375,
0.03497314453125,
0.00406646728515625,
-0.037078857421875,
-0.01438140869140625,
-0.014404296875,
-0.01910400390625,
-0.004802703857421875,
0.04119873046875,
-0.058807373046875,
-0.042877197265625,
-0.053497314453125,
0.0... |
bigbio/bio_sim_verb | 2022-12-22T15:43:25.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | This repository contains the evaluation datasets for the paper Bio-SimVerb and Bio-SimLex: Wide-coverage Evaluation Sets of Word Similarity in Biomedicine by Billy Chiu, Sampo Pyysalo and Anna Korhonen. | @article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
} | 1 | 6 | 2022-11-13T22:06:20 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: Bio-SimVerb
homepage: https://github.com/cambridgeltl/bio-simverb
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for Bio-SimVerb
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
This repository contains the evaluation datasets for the paper Bio-SimVerb and Bio-SimLex: Wide-coverage Evaluation Sets of Word Similarity in Biomedicine by Billy Chiu, Sampo Pyysalo and Anna Korhonen.
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
| 1,107 | [
[
-0.0124969482421875,
-0.024993896484375,
0.033660888671875,
0.0154266357421875,
-0.037322998046875,
0.00960540771484375,
-0.00269317626953125,
-0.015899658203125,
0.0335693359375,
0.010345458984375,
-0.03997802734375,
-0.060943603515625,
-0.046173095703125,
... |
bigbio/medal | 2022-12-22T15:45:07.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The Repository for Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is
a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding
pre-training in the medical domain. | @inproceedings{,
title = {MeDAL\: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining},
author = {Wen, Zhi and Lu, Xing Han and Reddy, Siva},
booktitle = {Proceedings of the 3rd Clinical Natural Language Processing Workshop},
month = {Nov},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/2020.clinicalnlp-1.15},
pages = {130--135},
} | 0 | 6 | 2022-11-13T22:09:21 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NLM_LICENSE
pretty_name: MeDAL
homepage: https://github.com/BruceWen120/medal
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for MeDAL
## Dataset Description
- **Homepage:** https://github.com/BruceWen120/medal
- **Pubmed:** True
- **Public:** True
- **Tasks:** NED
The Repository for Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is
a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding
pre-training in the medical domain.
## Citation Information
```
@inproceedings{,
title = {MeDAL\: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining},
author = {Wen, Zhi and Lu, Xing Han and Reddy, Siva},
booktitle = {Proceedings of the 3rd Clinical Natural Language Processing Workshop},
month = {Nov},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/2020.clinicalnlp-1.15},
pages = {130--135},
}
```
| 1,242 | [
[
-0.0135345458984375,
-0.040496826171875,
0.029022216796875,
0.01047515869140625,
-0.048309326171875,
0.006847381591796875,
-0.001316070556640625,
-0.024444580078125,
0.0255889892578125,
0.0210113525390625,
-0.038604736328125,
-0.07440185546875,
-0.07275390625,
... |
bigbio/medhop | 2022-12-22T15:45:26.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigbio | With the same format as WikiHop, this dataset is based on research paper
abstracts from PubMed, and the queries are about interactions between
pairs of drugs. The correct answer has to be inferred by combining
information from a chain of reactions of drugs and proteins. | @article{welbl-etal-2018-constructing,
title = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
author = {Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian},
journal = {Transactions of the Association for Computational Linguistics},
volume = {6},
year = {2018},
address = {Cambridge, MA},
publisher = {MIT Press},
url = {https://aclanthology.org/Q18-1021},
doi = {10.1162/tacl_a_00021},
pages = {287--302},
abstract = {
Most Reading Comprehension methods limit themselves to queries which
can be answered using a single sentence, paragraph, or document.
Enabling models to combine disjoint pieces of textual evidence would
extend the scope of machine comprehension methods, but currently no
resources exist to train and test this capability. We propose a novel
task to encourage the development of models for text understanding
across multiple documents and to investigate the limits of existing
methods. In our task, a model learns to seek and combine evidence
-- effectively performing multihop, alias multi-step, inference.
We devise a methodology to produce datasets for this task, given a
collection of query-answer pairs and thematically linked documents.
Two datasets from different domains are induced, and we identify
potential pitfalls and devise circumvention strategies. We evaluate
two previously proposed competitive models and find that one can
integrate information across documents. However, both models
struggle to select relevant information; and providing documents
guaranteed to be relevant greatly improves their performance. While
the models outperform several strong baselines, their best accuracy
reaches 54.5 % on an annotated test set, compared to human
performance at 85.0 %, leaving ample room for improvement.
  }
} | 0 | 6 | 2022-11-13T22:09:32 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: MedHop
homepage: http://qangaroo.cs.ucl.ac.uk/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for MedHop
## Dataset Description
- **Homepage:** http://qangaroo.cs.ucl.ac.uk/
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
With the same format as WikiHop, this dataset is based on research paper
abstracts from PubMed, and the queries are about interactions between
pairs of drugs. The correct answer has to be inferred by combining
information from a chain of reactions of drugs and proteins.
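The multi-hop requirement can be illustrated with a toy example: each "document" contributes a single drug-protein fact, and a drug-drug interaction can only be inferred by combining facts from different documents (all facts below are fabricated for illustration):

```python
# Toy multi-hop inference in the spirit of MedHop: no single document
# states that two drugs interact; the answer follows from composing
# drug-protein facts across documents. All facts are fabricated.
DOCS = [
    ("aspirin", "COX-1"),    # doc 1: aspirin inhibits COX-1
    ("ibuprofen", "COX-1"),  # doc 2: ibuprofen binds COX-1
    ("warfarin", "VKORC1"),  # doc 3: warfarin targets VKORC1
]

def interacts(drug_a: str, drug_b: str) -> bool:
    """Infer an interaction when two drugs share a protein target
    mentioned anywhere in the document set."""
    targets_a = {protein for drug, protein in DOCS if drug == drug_a}
    targets_b = {protein for drug, protein in DOCS if drug == drug_b}
    return bool(targets_a & targets_b)

print(interacts("aspirin", "ibuprofen"))  # True: both hit COX-1
print(interacts("aspirin", "warfarin"))   # False: no shared target
```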
## Citation Information
```
@article{welbl-etal-2018-constructing,
    title = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
    author = {Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian},
    journal = {Transactions of the Association for Computational Linguistics},
    volume = {6},
    year = {2018},
    address = {Cambridge, MA},
    publisher = {MIT Press},
    url = {https://aclanthology.org/Q18-1021},
    doi = {10.1162/tacl_a_00021},
    pages = {287--302},
abstract = {
Most Reading Comprehension methods limit themselves to queries which
can be answered using a single sentence, paragraph, or document.
Enabling models to combine disjoint pieces of textual evidence would
extend the scope of machine comprehension methods, but currently no
resources exist to train and test this capability. We propose a novel
task to encourage the development of models for text understanding
across multiple documents and to investigate the limits of existing
methods. In our task, a model learns to seek and combine evidence
-- effectively performing multihop, alias multi-step, inference.
We devise a methodology to produce datasets for this task, given a
collection of query-answer pairs and thematically linked documents.
Two datasets from different domains are induced, and we identify
potential pitfalls and devise circumvention strategies. We evaluate
two previously proposed competitive models and find that one can
integrate information across documents. However, both models
struggle to select relevant information; and providing documents
guaranteed to be relevant greatly improves their performance. While
the models outperform several strong baselines, their best accuracy
reaches 54.5 % on an annotated test set, compared to human
performance at 85.0 %, leaving ample room for improvement.
    }
}
```
| 2,583 | [
[
-0.01348114013671875,
-0.04974365234375,
0.047698974609375,
0.0014543533325195312,
0.00020003318786621094,
0.00415802001953125,
0.00363922119140625,
-0.0447998046875,
0.00812530517578125,
0.0487060546875,
-0.057342529296875,
-0.02630615234375,
-0.046905517578125... |
bigbio/n2c2_2010 | 2022-12-22T15:45:51.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records comprises three tasks:
1) a concept extraction task focused on the extraction of medical concepts from patient reports;
2) an assertion classification task focused on assigning assertion types for medical problem concepts;
3) a relation classification task focused on assigning relation types that hold between medical problems,
tests, and treatments.
i2b2 and the VA provided an annotated reference standard corpus for the three tasks.
Using this reference standard, 22 systems were developed for concept extraction,
21 for assertion classification, and 16 for relation classification. | @article{DBLP:journals/jamia/UzunerSSD11,
author = {
Ozlem Uzuner and
Brett R. South and
Shuying Shen and
Scott L. DuVall
},
title = {2010 i2b2/VA challenge on concepts, assertions, and relations in clinical
text},
journal = {J. Am. Medical Informatics Assoc.},
volume = {18},
number = {5},
pages = {552--556},
year = {2011},
url = {https://doi.org/10.1136/amiajnl-2011-000203},
doi = {10.1136/amiajnl-2011-000203},
timestamp = {Mon, 11 May 2020 23:00:20 +0200},
biburl = {https://dblp.org/rec/journals/jamia/UzunerSSD11.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 6 | 2022-11-13T22:10:35 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2010 Concepts, Assertions, and Relations
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for n2c2 2010 Concepts, Assertions, and Relations
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,RE
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records comprises three tasks:
1) a concept extraction task focused on the extraction of medical concepts from patient reports;
2) an assertion classification task focused on assigning assertion types for medical problem concepts;
3) a relation classification task focused on assigning relation types that hold between medical problems,
tests, and treatments.
i2b2 and the VA provided an annotated reference standard corpus for the three tasks.
Using this reference standard, 22 systems were developed for concept extraction,
21 for assertion classification, and 16 for relation classification.
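As a toy illustration of the assertion task (task 2), a rule-based classifier can map surface cues in the surrounding sentence to an assertion type; the cue lists below are hypothetical and far simpler than the features used by the 21 submitted systems:

```python
# Toy assertion classification for a medical problem concept, in the
# spirit of the 2010 i2b2/VA assertion task. Cue lists are hypothetical,
# and only three of the challenge's assertion types are covered.
NEGATION_CUES = ("denies", "no evidence of", "without")
UNCERTAINTY_CUES = ("possible", "probable", "suspected")

def classify_assertion(sentence: str) -> str:
    """Assign an assertion type to the problem concept in `sentence`."""
    s = sentence.lower()
    if any(cue in s for cue in NEGATION_CUES):
        return "absent"
    if any(cue in s for cue in UNCERTAINTY_CUES):
        return "possible"
    return "present"

print(classify_assertion("Patient denies chest pain."))          # absent
print(classify_assertion("Possible pneumonia on chest x-ray."))  # possible
print(classify_assertion("He has hypertension."))                # present
```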
## Citation Information
```
@article{DBLP:journals/jamia/UzunerSSD11,
author = {
Ozlem Uzuner and
Brett R. South and
Shuying Shen and
Scott L. DuVall
},
title = {2010 i2b2/VA challenge on concepts, assertions, and relations in clinical
text},
journal = {J. Am. Medical Informatics Assoc.},
volume = {18},
number = {5},
pages = {552--556},
year = {2011},
url = {https://doi.org/10.1136/amiajnl-2011-000203},
doi = {10.1136/amiajnl-2011-000203},
timestamp = {Mon, 11 May 2020 23:00:20 +0200},
biburl = {https://dblp.org/rec/journals/jamia/UzunerSSD11.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| 2,376 | [
[
-0.0086517333984375,
-0.045867919921875,
0.0458984375,
0.026092529296875,
-0.00485992431640625,
0.0013971328735351562,
0.0055694580078125,
-0.04638671875,
0.007293701171875,
0.038787841796875,
-0.025238037109375,
-0.041778564453125,
-0.051177978515625,
0.013... |
bigbio/scicite | 2022-12-22T15:46:37.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | SciCite is a dataset of 11K manually annotated citation intents based on
citation context in the computer science and biomedical domains. | @inproceedings{cohan:naacl19,
author = {Arman Cohan and Waleed Ammar and Madeleine van Zuylen and Field Cady},
title = {Structural Scaffolds for Citation Intent Classification in Scientific Publications},
booktitle = {Conference of the North American Chapter of the Association for Computational Linguistics},
year = {2019},
url = {https://aclanthology.org/N19-1361/},
doi = {10.18653/v1/N19-1361},
} | 0 | 6 | 2022-11-13T22:12:03 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SciCite
homepage: https://allenai.org/data/scicite
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for SciCite
## Dataset Description
- **Homepage:** https://allenai.org/data/scicite
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
SciCite is a dataset of 11K manually annotated citation intents based on
citation context in the computer science and biomedical domains.
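SciCite's intent labels (background, method, result) can be illustrated with a keyword-based toy classifier. The cues below are hypothetical; the paper's model is a neural classifier with structural scaffolds, not a rule system:

```python
# Toy citation intent classifier over SciCite's three classes.
# The keyword cues are hypothetical illustrations.
def citation_intent(context: str) -> str:
    s = context.lower()
    if any(cue in s for cue in ("we use", "we follow", "we adopt")):
        return "method"
    if any(cue in s for cue in ("outperform", "compared to", "higher than")):
        return "result"
    return "background"  # default: citation provides background

print(citation_intent("We use the parser of [1] to preprocess the corpus."))  # method
print(citation_intent("Our model outperforms the system of [2]."))            # result
print(citation_intent("Citation intent has been studied extensively [3]."))   # background
```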
## Citation Information
```
@inproceedings{cohan:naacl19,
author = {Arman Cohan and Waleed Ammar and Madeleine van Zuylen and Field Cady},
title = {Structural Scaffolds for Citation Intent Classification in Scientific Publications},
booktitle = {Conference of the North American Chapter of the Association for Computational Linguistics},
year = {2019},
url = {https://aclanthology.org/N19-1361/},
doi = {10.18653/v1/N19-1361},
}
```
| 1,053 | [
[
0.01036834716796875,
-0.0267181396484375,
0.0253448486328125,
0.034637451171875,
-0.0242462158203125,
-0.00012552738189697266,
-0.01287841796875,
-0.0251312255859375,
0.0277862548828125,
0.0052642822265625,
-0.0172882080078125,
-0.0552978515625,
-0.0392150878906... |
dlwh/eu_wikipedias | 2022-11-17T08:13:51.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categ... | dlwh | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | 0 | 6 | 2022-11-16T18:03:07 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
---
# Dataset Card for Wikipedia
This repo is a wrapper around [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) that just concatenates data from the EU languages.
Please refer to it for a complete data card.
The EU languages we include are:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
As with `olm/wikipedia`, you will need to install a few dependencies:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
```python
from datasets import load_dataset
load_dataset("dlwh/eu_wikipedias", date="20221101")
```
Please refer to the original olm/wikipedia for a complete data card.
| 1,352 | [
[
-0.05206298828125,
-0.029266357421875,
0.012481689453125,
-0.00016438961029052734,
-0.0238800048828125,
-0.017822265625,
-0.025665283203125,
-0.03179931640625,
0.028533935546875,
0.040435791015625,
-0.049346923828125,
-0.05206298828125,
-0.0225677490234375,
... |
kasnerz/numericnlg | 2023-03-14T15:04:02.000Z | [
"region:us"
] | kasnerz | null | null | 0 | 6 | 2022-11-28T10:29:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
kasnerz/logic2text | 2023-03-14T15:08:47.000Z | [
"region:us"
] | kasnerz | null | null | 0 | 6 | 2022-11-28T12:09:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
graphs-datasets/CSL | 2023-02-07T16:37:07.000Z | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
] | graphs-datasets | null | null | 1 | 6 | 2022-12-08T10:03:06 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for CSL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)
### Dataset Summary
The CSL dataset is a synthetic dataset designed to test GNN expressivity.
### Supported Tasks and Leaderboards
`CSL` should be used for binary graph classification, predicting whether or not graphs are isomorphic.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/CSL")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 150 |
| average #nodes | 41.0 |
| average #edges | 164.0 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): the label(s) to predict for the graph
- `num_nodes` (int): number of nodes of the graph
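A row in this schema can be sanity-checked without PyTorch Geometric. The sketch below builds a toy row with made-up values and verifies that the fields agree with each other:

```python
# Toy graph row in the schema described above (values are made up),
# with basic consistency checks between fields.
row = {
    "node_feat": [[1], [1], [1]],          # 3 nodes, 1 feature each
    "edge_index": [[0, 1, 2], [1, 2, 0]],  # source row and target row
    "edge_attr": [[1], [1], [1]],          # 1 feature per edge
    "y": [0],                              # graph label
    "num_nodes": 3,
}

assert len(row["node_feat"]) == row["num_nodes"]           # one feature vector per node
assert len(row["edge_index"]) == 2                         # (source, target) rows
assert len(row["edge_index"][0]) == len(row["edge_attr"])  # one attribute per edge
print("row is consistent")
```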
### Data Splits
This dataset comes pre-split; the splits follow the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 2,885 | [
[
-0.022430419921875,
-0.0287628173828125,
0.005336761474609375,
0.0097808837890625,
-0.01300811767578125,
-0.00302886962890625,
-0.0140380859375,
-0.035430908203125,
0.019561767578125,
0.011383056640625,
-0.040283203125,
-0.054107666015625,
-0.031341552734375,
... |
Drozdik/tattoo_v3 | 2022-12-09T01:00:20.000Z | [
"region:us"
] | Drozdik | null | null | 3 | 6 | 2022-12-09T00:57:27 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 101626056.169
num_examples: 4239
download_size: 78738858
dataset_size: 101626056.169
---
# Dataset Card for "tattoo_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 397 | [
[
-0.0302276611328125,
-0.00554656982421875,
0.0224151611328125,
0.0120849609375,
-0.0384521484375,
-0.003387451171875,
0.038818359375,
-0.04241943359375,
0.06329345703125,
0.0518798828125,
-0.0491943359375,
-0.0675048828125,
-0.040496826171875,
-0.00609970092... |
tkarr/sprite_caption_dataset | 2022-12-15T02:27:33.000Z | [
"region:us"
] | tkarr | null | null | 0 | 6 | 2022-12-09T07:46:51 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 32010144.421997756
num_examples: 12830
- name: test
num_bytes: 1778895.7890011223
num_examples: 713
- name: valid
num_bytes: 1778895.7890011223
num_examples: 713
download_size: 26944262
dataset_size: 35567936.0
---
# Dataset Card for "sprite_caption_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 556 | [
[
-0.035858154296875,
-0.004756927490234375,
0.01030731201171875,
0.038055419921875,
-0.0275726318359375,
0.0183563232421875,
0.0134429931640625,
-0.0033245086669921875,
0.06732177734375,
0.04376220703125,
-0.060791015625,
-0.044097900390625,
-0.052337646484375,
... |
sasha/birdsnap | 2022-12-17T21:29:07.000Z | [
"region:us"
] | sasha | null | null | 0 | 6 | 2022-12-17T20:35:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 63184668691.7
num_examples: 39860
download_size: 69093722465
dataset_size: 63184668691.7
---
# Dataset Card for Birdsnap Dataset v1.1
Welcome to the Birdsnap dataset, consisting of 49,829 images of 500 species of North American birds, collected from Flickr, and corresponding species, bounding box, and part labels.
The dataset distribution also consists of the following files:
1. species.txt
This file lists the species in the dataset. The first line is a header. Each subsequent line represents a species. Lines are tab-delimited, and the fields
are:
- id: An integer id for the species. These ids run from 1 to 500 for the 500 species.
- common: The common English name of the species, for example "Blue Jay."
- scientific: The scientific (Latin) name of the species, for example "Cyanocitta cristata."
- dir: The name of a directory in which to store the images of this species. This is just the common name with spaces and other dangerous-in-file-path characters replaced or removed.
2. images.txt
This file lists the images in the dataset, with the corresponding bounding boxes, part locations, and species labels. Like species.txt, it is tab-delimited with the first line giving field names. The fields are:
- url: The URL from which the image was downloaded.
- md5: An MD5 sum of the image file contents.
- path: The local path of the image.
- species_id: The id of the species of the labeled bird in the image.
- bb_x1, bb_y1, bb_x2, bb_y2: The coordinates of the top-left (bb_x1, bb_y1) and bottom-right (bb_x2, bb_y2) corners of the bounding box of the labeled
bird.
- ${part}_x, ${part}_y: The coordinates of part ${part}. Parts are back, beak, belly, breast, crown, forehead, left_cheek, left_eye, left_leg, left_wing, nape, right_cheek, right_eye, right_leg, right_wing, tail, throat.
3. test_images.txt
This file lists the 2443 test images used in the species identification experiments in the paper. It has a header line, then the "path" (from images.txt) of each test image, one per line.
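Since species.txt and images.txt are tab-delimited with a header line, they can be parsed with Python's standard `csv` module; here is a sketch on a fabricated one-row sample of species.txt:

```python
import csv
import io

# Parse a Birdsnap-style tab-delimited metadata file. The sample below is
# fabricated; the real species.txt has 500 species rows.
sample = (
    "id\tcommon\tscientific\tdir\n"
    "1\tBlue Jay\tCyanocitta cristata\tBlue_Jay\n"
)

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
print(rows[0]["common"])      # Blue Jay
print(rows[0]["scientific"])  # Cyanocitta cristata
```

The same pattern applies to images.txt, which simply has more columns (bounding-box and part coordinates).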
### Citation
```
@inproceedings{berg2014birdsnap,
title={Birdsnap: Large-scale fine-grained visual categorization of birds},
author={Berg, Thomas and Liu, Jiongxin and Woo Lee, Seung and Alexander, Michelle L and Jacobs, David W and Belhumeur, Peter N},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={2011--2018},
year={2014}
}
```
| 2,585 | [
[
-0.042236328125,
-0.02392578125,
-0.01568603515625,
0.00429534912109375,
-0.037872314453125,
-0.01033782958984375,
0.00966644287109375,
-0.037017822265625,
0.024261474609375,
0.0250701904296875,
-0.0364990234375,
-0.04278564453125,
-0.034637451171875,
0.0344... |
hanamizuki-ai/genshin-voice-v3.3-mandarin | 2022-12-31T05:01:47.000Z | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"region:us"
] | hanamizuki-ai | null | null | 14 | 6 | 2022-12-30T18:13:13 | ---
language:
- zh
multilinguality:
- monolingual
pretty_name: Genshin Voice
source_datasets:
- original
task_categories:
- text-to-speech
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: language
dtype: string
- name: npcName
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 36412736429.25
num_examples: 75033
download_size: 18251937481
dataset_size: 36412736429.25
---
# Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.
#### Who are the source language producers?
The language producers are the employee of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/).
### Annotations
The dataset contains official annotations from the game, including in-game speaker names and transcripts.
## Additional Information
### Dataset Curators
The dataset was created by [w4123](https://github.com/w4123) initially in his [GitHub repository](https://github.com/w4123/GenshinVoice).
### Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved. | 1,518 | [
[
-0.00814056396484375,
-0.0129241943359375,
-0.006694793701171875,
0.0200347900390625,
-0.01242828369140625,
0.029541015625,
-0.022552490234375,
-0.022003173828125,
0.023162841796875,
0.05670166015625,
-0.07989501953125,
-0.06365966796875,
-0.007381439208984375,
... |
irds/mr-tydi_ko | 2023-01-05T03:36:12.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 6 | 2023-01-05T03:36:06 | ---
pretty_name: '`mr-tydi/ko`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `mr-tydi/ko`
The `mr-tydi/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ko).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,496,126
- `queries` (i.e., topics); count=2,019
- `qrels`: (relevance assessments); count=2,116
This dataset is used by: [`mr-tydi_ko_dev`](https://huggingface.co/datasets/irds/mr-tydi_ko_dev), [`mr-tydi_ko_test`](https://huggingface.co/datasets/irds/mr-tydi_ko_test), [`mr-tydi_ko_train`](https://huggingface.co/datasets/irds/mr-tydi_ko_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/mr-tydi_ko', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/mr-tydi_ko', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mr-tydi_ko', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Zhang2021MrTyDi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year={2020},
journal={Transactions of the Association for Computational Linguistics}
}
```
| 2,004 | [
[
-0.0252227783203125,
-0.0291900634765625,
0.01319122314453125,
0.0029506683349609375,
-0.01837158203125,
-0.005916595458984375,
-0.01476287841796875,
-0.00774383544921875,
0.03460693359375,
0.0241851806640625,
-0.037261962890625,
-0.0667724609375,
-0.02026367187... |
irds/nfcorpus | 2023-01-05T03:45:40.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 6 | 2023-01-05T03:45:34 | ---
pretty_name: '`nfcorpus`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `nfcorpus`
The `nfcorpus` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,371
This dataset is used by: [`nfcorpus_dev`](https://huggingface.co/datasets/irds/nfcorpus_dev), [`nfcorpus_dev_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_dev_nontopic), [`nfcorpus_dev_video`](https://huggingface.co/datasets/irds/nfcorpus_dev_video), [`nfcorpus_test`](https://huggingface.co/datasets/irds/nfcorpus_test), [`nfcorpus_test_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_test_nontopic), [`nfcorpus_test_video`](https://huggingface.co/datasets/irds/nfcorpus_test_video), [`nfcorpus_train`](https://huggingface.co/datasets/irds/nfcorpus_train), [`nfcorpus_train_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_train_nontopic), [`nfcorpus_train_video`](https://huggingface.co/datasets/irds/nfcorpus_train_video)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/nfcorpus', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'title': ..., 'abstract': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Boteva2016Nfcorpus,
title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval",
author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler",
booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})",
location = "Padova, Italy",
publisher = "Springer",
year = 2016
}
```
| 1,930 | [
[
-0.038909912109375,
-0.0174713134765625,
-0.009765625,
0.00617218017578125,
-0.0272369384765625,
0.0019197463989257812,
-0.0013895034790039062,
-0.023651123046875,
0.045654296875,
0.01457977294921875,
-0.051849365234375,
-0.04150390625,
-0.046966552734375,
0... |
irds/nfcorpus_test | 2023-01-05T03:46:24.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/nfcorpus",
"region:us"
] | irds | null | null | 0 | 6 | 2023-01-05T03:46:19 | ---
pretty_name: '`nfcorpus/test`'
viewer: false
source_datasets: ['irds/nfcorpus']
task_categories:
- text-retrieval
---
# Dataset Card for `nfcorpus/test`
The `nfcorpus/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=325
- `qrels`: (relevance assessments); count=15,820
- For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/nfcorpus_test', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'all': ...}
qrels = load_dataset('irds/nfcorpus_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Boteva2016Nfcorpus,
title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval",
author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler",
booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})",
location = "Padova, Italy",
publisher = "Springer",
year = 2016
}
```
| 1,485 | [
[
-0.01971435546875,
-0.016326904296875,
0.00884246826171875,
-0.0024280548095703125,
-0.021148681640625,
-0.01540374755859375,
0.00017523765563964844,
-0.01165771484375,
0.029541015625,
0.0240631103515625,
-0.037322998046875,
-0.053314208984375,
-0.02662658691406... |
irds/trec-arabic | 2023-01-05T03:51:15.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 6 | 2023-01-05T03:51:09 | ---
pretty_name: '`trec-arabic`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-arabic`
The `trec-arabic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=383,872
This dataset is used by: [`trec-arabic_ar2001`](https://huggingface.co/datasets/irds/trec-arabic_ar2001), [`trec-arabic_ar2002`](https://huggingface.co/datasets/irds/trec-arabic_ar2002)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-arabic', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
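Each `docs` record carries both `text` and `marked_up_doc`. As a rough sketch for quick inspection (the corpus uses TREC-style SGML, so the tag pattern below is an assumption and a real SGML parser may be preferable):

```python
import re

def strip_markup(marked_up_doc):
    # Replace anything that looks like an SGML/HTML tag with a space,
    # then collapse runs of whitespace. Crude but often good enough
    # for eyeballing a document.
    no_tags = re.sub(r"<[^>]+>", " ", marked_up_doc)
    return " ".join(no_tags.split())
```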
## Citation Information
```
@misc{Graff2001Arabic,
title={Arabic Newswire Part 1 LDC2001T55},
author={Graff, David, and Walker, Kevin},
year={2001},
url={https://catalog.ldc.upenn.edu/LDC2001T55},
publisher={Linguistic Data Consortium}
}
```
| 1,232 | [
[
-0.0318603515625,
-0.0251312255859375,
-0.006259918212890625,
0.003841400146484375,
-0.02789306640625,
0.020721435546875,
-0.01287841796875,
-0.0109100341796875,
0.01995849609375,
0.0250701904296875,
-0.0308837890625,
-0.093505859375,
-0.0543212890625,
0.034... |
irds/wapo_v3_trec-news-2020 | 2023-01-05T03:56:49.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 0 | 6 | 2023-01-05T03:56:43 | ---
pretty_name: '`wapo/v3/trec-news-2020`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wapo/v3/trec-news-2020`
The `wapo/v3/trec-news-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v3/trec-news-2020).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=17,764
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/wapo_v3_trec-news-2020', 'queries')
for record in queries:
record # {'query_id': ..., 'doc_id': ..., 'url': ...}
qrels = load_dataset('irds/wapo_v3_trec-news-2020', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
| 1,039 | [
[
-0.0260009765625,
-0.017974853515625,
0.013031005859375,
0.0125579833984375,
-0.0277862548828125,
-0.01045989990234375,
0.00653839111328125,
-0.0117340087890625,
0.01412200927734375,
0.04193115234375,
-0.04315185546875,
-0.06103515625,
-0.021270751953125,
0.... |
LLukas22/lfqa_preprocessed | 2023-01-10T14:21:56.000Z | [
"task_categories:question-answering",
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | LLukas22 | null | null | 0 | 6 | 2023-01-10T13:30:52 | ---
license: mit
task_categories:
- question-answering
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "lfqa_preprocessed"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb)
### Dataset Summary
This is a simplified version of [vblagoje's](https://huggingface.co/vblagoje) *[lfqa_support_docs](https://huggingface.co/datasets/vblagoje/lfqa_support_docs)* and *[lfqa](https://huggingface.co/datasets/vblagoje/lfqa)* datasets.
I generated it to provide a more straightforward way to train Seq2Seq models on context-based long-form question-answering tasks.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"question": "what's the difference between a forest and a wood?",
"answer": "They're used interchangeably a lot. You'll get different answers from different resources, but the ...",
"context": [
"Wood is divided, according to its botanical origin, into two kinds: softwoods, ...",
"Processing and products differs especially with regard to the distinction between softwood and hardwood ..."
]
}
```
### Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `context`: a list feature containing `string` features.
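Given these fields, a minimal sketch of flattening one record into a Seq2Seq (source, target) pair — the `question: ... context: ...` template is an illustrative assumption, not part of the dataset:

```python
def build_seq2seq_pair(record, max_contexts=2):
    """Flatten a question plus its retrieved contexts into one source string.

    The prompt template and the choice to keep only the first few contexts
    are illustrative; adapt them to your model's expected input format.
    """
    contexts = " ".join(record["context"][:max_contexts])
    source = f"question: {record['question']} context: {contexts}"
    target = record["answer"]
    return source, target

# Toy record shaped like the example above:
record = {
    "question": "what's the difference between a forest and a wood?",
    "answer": "They're used interchangeably a lot.",
    "context": [
        "Wood is divided, according to its botanical origin, into two kinds.",
        "Processing and products differ between softwood and hardwood.",
    ],
}
source, target = build_seq2seq_pair(record)
```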
### Data Splits
| name |train|validation|
|----------|----:|---------:|
| |226147| 3020|
## Additional Information
### Licensing Information
This dataset is distributed under the MIT licence. | 2,126 | [
[
-0.03240966796875,
-0.05230712890625,
0.02398681640625,
0.007602691650390625,
-0.01218414306640625,
-0.00652313232421875,
-0.0021915435791015625,
-0.024444580078125,
0.017852783203125,
0.04974365234375,
-0.0689697265625,
-0.05157470703125,
-0.0096893310546875,
... |
RamAnanth1/talkrl-podcast | 2023-01-12T20:46:26.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"region:us"
] | RamAnanth1 | null | null | 0 | 6 | 2023-01-10T23:09:01 | ---
dataset_info:
features:
- name: title
dtype: string
- name: summary
dtype: string
- name: link
dtype: string
- name: transcript
dtype: string
- name: segments
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4845076
num_examples: 39
download_size: 2633561
dataset_size: 4845076
task_categories:
- text-classification
- text-generation
- summarization
language:
- en
size_categories:
- n<1K
pretty_name: TalkRL Podcast
---
# Dataset Card for "talkrl-podcast"
This dataset is sourced from the [TalkRL Podcast website](https://www.talkrl.com/) and contains English transcripts of wonderful TalkRL podcast episodes. The transcripts were generated using OpenAI's base Whisper model. | 832 | [
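Given the `segments` schema above (`start`/`end` timestamps plus `text`), a small sketch for pulling out the transcript text that overlaps a time window — the overlap rule is an illustrative choice:

```python
def segment_text_between(segments, start_s, end_s):
    """Join the text of all segments that overlap [start_s, end_s].

    Field names ('start', 'end', 'text') follow the dataset_info schema;
    a segment counts as overlapping if any part of it falls in the window.
    """
    picked = [s["text"] for s in segments
              if s["end"] > start_s and s["start"] < end_s]
    return " ".join(picked)

# Toy segments (not taken from the dataset):
segments = [
    {"start": 0.0, "end": 4.2, "text": "Welcome to TalkRL."},
    {"start": 4.2, "end": 9.8, "text": "Today we discuss RL."},
]
```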
[
-0.00814056396484375,
-0.03143310546875,
-0.00969696044921875,
0.018707275390625,
-0.0198822021484375,
0.00388336181640625,
-0.0126800537109375,
-0.03509521484375,
0.031494140625,
0.026824951171875,
-0.07391357421875,
-0.048858642578125,
-0.017059326171875,
... |
Dahoas/instruct-human-assistant-prompt | 2023-01-12T22:31:39.000Z | [
"region:us"
] | Dahoas | null | null | 7 | 6 | 2023-01-12T22:31:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nateraw/meta-vsc-2022 | 2023-01-14T04:09:52.000Z | [
"region:us"
] | nateraw | null | null | 0 | 6 | 2023-01-14T04:05:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
metaeval/syntactic-augmentation-nli | 2023-06-13T07:28:15.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
] | metaeval | null | null | 0 | 6 | 2023-01-30T10:35:09 | ---
license: mit
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/Aatlantise/syntactic-augmentation-nli/tree/master/datasets
```
@inproceedings{min-etal-2020-syntactic,
title = "Syntactic Data Augmentation Increases Robustness to Inference Heuristics",
author = "Min, Junghyun and
McCoy, R. Thomas and
Das, Dipanjan and
Pitler, Emily and
Linzen, Tal",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.212",
doi = "10.18653/v1/2020.acl-main.212",
pages = "2339--2352",
}
``` | 813 | [
[
-0.0230865478515625,
-0.044342041015625,
0.0236968994140625,
0.0097808837890625,
-0.01012420654296875,
0.006011962890625,
-0.03619384765625,
-0.0474853515625,
0.0204620361328125,
0.009613037109375,
-0.06298828125,
-0.057037353515625,
-0.033355712890625,
0.03... |
Basvoju/SemEval2018Task7 | 2023-02-03T12:59:36.000Z | [
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"Relation Classification",
"Relation extraction",
"Scien... | Basvoju | This paper describes the first task on semantic relation extraction and classification in scientific paper
abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three
different subtasks. The subtasks were designed so as to compare and quantify the effect of different
pre-processing steps on the relation classification results. We expect the task to be relevant for a broad
range of researchers working on extracting specialized knowledge from domain corpora, for example but not
limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants,
with 158 submissions across different scenarios. | @inproceedings{gabor-etal-2018-semeval,
title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
author = {G{\'a}bor, Kata and
Buscaldi, Davide and
Schumann, Anne-Kathrin and
QasemiZadeh, Behrang and
Zargayouna, Ha{\"\i}fa and
Charnois, Thierry},
booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S18-1111",
doi = "10.18653/v1/S18-1111",
pages = "679--688",
abstract = "This paper describes the first task on semantic relation extraction and classification in
scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations
and includes three different subtasks. The subtasks were designed so as to compare and quantify the
effect of different pre-processing steps on the relation classification results. We expect the task to
be relevant for a broad range of researchers working on extracting specialized knowledge from domain
corpora, for example but not limited to scientific or bio-medical information extraction. The task
attracted a total of 32 participants, with 158 submissions across different scenarios.",
} | 0 | 6 | 2023-01-31T22:13:20 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: >-
Semeval2018Task7 is a dataset that describes the Semantic Relation Extraction
and Classification in Scientific Papers
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- Relation Classification
- Relation extraction
- Scientific papers
- Research papers
task_categories:
- text-classification
task_ids:
- entity-linking-classification
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: text-classification
task_id: entity_extraction
---
# Dataset Card for SemEval2018Task7
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://lipn.univ-paris13.fr/~gabor/semeval2018task7/](https://lipn.univ-paris13.fr/~gabor/semeval2018task7/)
- **Repository:** [https://github.com/gkata/SemEval2018Task7/tree/testing](https://github.com/gkata/SemEval2018Task7/tree/testing)
- **Paper:** [SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers](https://aclanthology.org/S18-1111/)
- **Leaderboard:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
- **Size of downloaded dataset files:** 1.93 MB
### Dataset Summary
SemEval2018Task7 is a dataset for semantic relation extraction and classification in scientific paper abstracts.
The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.
The three subtasks are:
- Subtask 1.1: Relation classification on clean data
- In the training data, semantic relations are manually annotated between entities.
- In the test data, only entity annotations and unlabeled relation instances are given.
  - Given a scientific publication, the task is to predict the semantic relation between the entities.
- Subtask 1.2: Relation classification on noisy data
- Entity occurrences are automatically annotated in both the training and the test data.
  - The task is to predict the semantic relation between the entities.
- Subtask 2: Metrics for the extraction and classification scenario
- Evaluation of relation extraction
- Evaluation of relation classification
The relation types are USAGE, RESULT, MODEL-FEATURE, PART_WHOLE, TOPIC, and COMPARE.
The following example shows a text snippet with the information provided in the test data:
Korean, a \<entity id=”H01-1041.10”>verb final language\</entity> with \<entity id=”H01-1041.11”>overt case markers\</entity> (...)
- A relation instance is identified by the unique identifier of the entities in the pair, e.g.(H01-1041.10, H01-1041.11)
- The information to be predicted is the relation class label: MODEL-FEATURE(H01-1041.10, H01-1041.11).
For details, see the paper https://aclanthology.org/S18-1111/.
### Supported Tasks and Leaderboards
- **Tasks:** Relation extraction and classification in scientific papers
- **Leaderboards:** [https://competitions.codalab.org/competitions/17422#learn_the_details-overview](https://competitions.codalab.org/competitions/17422#learn_the_details-overview)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### subtask_1.1
- **Size of downloaded dataset files:** 714 KB
An example of 'train' looks as follows:
```json
{
"id": "H01-1041",
"title": "'Interlingua-Based Broad-Coverage Korean-to-English Translation in CCLING'",
  "abstract": 'At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory) . The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame . The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers , relatively free word order , and frequent omissions of arguments ). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language . (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars . Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document.',
"entities": [{'id': 'H01-1041.1', 'char_start': 54, 'char_end': 97},
{'id': 'H01-1041.2', 'char_start': 99, 'char_end': 161},
{'id': 'H01-1041.3', 'char_start': 169, 'char_end': 211},
{'id': 'H01-1041.4', 'char_start': 229, 'char_end': 240},
{'id': 'H01-1041.5', 'char_start': 244, 'char_end': 288},
{'id': 'H01-1041.6', 'char_start': 304, 'char_end': 342},
{'id': 'H01-1041.7', 'char_start': 353, 'char_end': 366},
{'id': 'H01-1041.8', 'char_start': 431, 'char_end': 437},
{'id': 'H01-1041.9', 'char_start': 442, 'char_end': 447},
{'id': 'H01-1041.10', 'char_start': 452, 'char_end': 470},
{'id': 'H01-1041.11', 'char_start': 477, 'char_end': 494},
{'id': 'H01-1041.12', 'char_start': 509, 'char_end': 523},
{'id': 'H01-1041.13', 'char_start': 553, 'char_end': 561},
{'id': 'H01-1041.14', 'char_start': 584, 'char_end': 594},
{'id': 'H01-1041.15', 'char_start': 600, 'char_end': 624},
{'id': 'H01-1041.16', 'char_start': 639, 'char_end': 659},
{'id': 'H01-1041.17', 'char_start': 668, 'char_end': 682},
{'id': 'H01-1041.18', 'char_start': 692, 'char_end': 715},
{'id': 'H01-1041.19', 'char_start': 736, 'char_end': 742},
{'id': 'H01-1041.20', 'char_start': 748, 'char_end': 796},
{'id': 'H01-1041.21', 'char_start': 823, 'char_end': 847},
{'id': 'H01-1041.22', 'char_start': 918, 'char_end': 935},
{'id': 'H01-1041.23', 'char_start': 981, 'char_end': 997}],
 "relation": [{'label': 3, 'arg1': 'H01-1041.3', 'arg2': 'H01-1041.4', 'reverse': True},
              {'label': 0, 'arg1': 'H01-1041.8', 'arg2': 'H01-1041.9', 'reverse': False},
              {'label': 2, 'arg1': 'H01-1041.10', 'arg2': 'H01-1041.11', 'reverse': True},
              {'label': 0, 'arg1': 'H01-1041.14', 'arg2': 'H01-1041.15', 'reverse': True}]
}
```
#### Subtask_1.2
- **Size of downloaded dataset files:** 1.00 MB
An example of 'train' looks as follows:
```json
{'id': 'L08-1450',
'title': '\nA LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.\n',
'abstract': 'Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguousdata because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is adata model and an encoding scheme based on LAF/GrAF ( Ide and Romary, 2006 ; Ide and Suderman, 2007 ) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs ( Brants et al., 2002 ) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.\n',
'entities': [{'id': 'L08-1450.4', 'char_start': 0, 'char_end': 3},
{'id': 'L08-1450.5', 'char_start': 5, 'char_end': 10},
{'id': 'L08-1450.6', 'char_start': 25, 'char_end': 31},
{'id': 'L08-1450.7', 'char_start': 61, 'char_end': 64},
{'id': 'L08-1450.8', 'char_start': 66, 'char_end': 72},
{'id': 'L08-1450.9', 'char_start': 82, 'char_end': 85},
{'id': 'L08-1450.10', 'char_start': 92, 'char_end': 100},
{'id': 'L08-1450.11', 'char_start': 102, 'char_end': 110},
{'id': 'L08-1450.12', 'char_start': 128, 'char_end': 142},
{'id': 'L08-1450.13', 'char_start': 181, 'char_end': 194},
{'id': 'L08-1450.14', 'char_start': 208, 'char_end': 211},
{'id': 'L08-1450.15', 'char_start': 255, 'char_end': 264},
{'id': 'L08-1450.16', 'char_start': 282, 'char_end': 286},
{'id': 'L08-1450.17', 'char_start': 408, 'char_end': 420},
{'id': 'L08-1450.18', 'char_start': 425, 'char_end': 443},
{'id': 'L08-1450.19', 'char_start': 450, 'char_end': 453},
{'id': 'L08-1450.20', 'char_start': 455, 'char_end': 459},
{'id': 'L08-1450.21', 'char_start': 481, 'char_end': 484},
{'id': 'L08-1450.22', 'char_start': 486, 'char_end': 490},
{'id': 'L08-1450.23', 'char_start': 508, 'char_end': 513},
{'id': 'L08-1450.24', 'char_start': 515, 'char_end': 519},
{'id': 'L08-1450.25', 'char_start': 535, 'char_end': 537},
{'id': 'L08-1450.26', 'char_start': 559, 'char_end': 561},
{'id': 'L08-1450.27', 'char_start': 591, 'char_end': 598},
{'id': 'L08-1450.28', 'char_start': 611, 'char_end': 619},
{'id': 'L08-1450.29', 'char_start': 649, 'char_end': 663},
{'id': 'L08-1450.30', 'char_start': 687, 'char_end': 707},
{'id': 'L08-1450.31', 'char_start': 722, 'char_end': 726},
{'id': 'L08-1450.32', 'char_start': 801, 'char_end': 808},
{'id': 'L08-1450.33', 'char_start': 841, 'char_end': 845},
{'id': 'L08-1450.34', 'char_start': 847, 'char_end': 852},
{'id': 'L08-1450.35', 'char_start': 857, 'char_end': 864},
{'id': 'L08-1450.36', 'char_start': 866, 'char_end': 872},
{'id': 'L08-1450.37', 'char_start': 902, 'char_end': 910},
{'id': 'L08-1450.1', 'char_start': 12, 'char_end': 16},
{'id': 'L08-1450.2', 'char_start': 27, 'char_end': 32},
{'id': 'L08-1450.3', 'char_start': 72, 'char_end': 80}],
'relation': [{'label': 1,
'arg1': 'L08-1450.12',
'arg2': 'L08-1450.13',
'reverse': False},
{'label': 5, 'arg1': 'L08-1450.17', 'arg2': 'L08-1450.18', 'reverse': False},
{'label': 1, 'arg1': 'L08-1450.28', 'arg2': 'L08-1450.29', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.30', 'arg2': 'L08-1450.32', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.34', 'arg2': 'L08-1450.35', 'reverse': False},
{'label': 3, 'arg1': 'L08-1450.36', 'arg2': 'L08-1450.37', 'reverse': True}]}
```
### Data Fields
#### subtask_1_1
- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract from the scientific paper, a `string` feature.
- `entities`: the key-phrase entities annotated in this abstract, a `list` of:
  - `id`: the unique id of this entity, a `string` feature.
  - `char_start`: the 0-based start character offset of the entity in the abstract, an `int` feature.
  - `char_end`: the 0-based end character offset of the entity in the abstract, an `int` feature.
- `relation`: the relations annotated between key phrases in this abstract, a `list` of:
  - `label`: the relation type between the two key phrases, a classification label.
  - `arg1`: the entity id of the first key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation holds in the reverse direction of (`arg1`, `arg2`), otherwise `False`, a `bool` feature.
```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```
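As a sketch of how one relation instance can be resolved back to entity surface forms via the character offsets (the label mapping is repeated so the snippet is self-contained; `reverse` is reported as a flag rather than interpreted, and the toy offsets below are hand-made, not taken from the dataset):

```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3,
             "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
ID_TO_LABEL = {v: k for k, v in RELATIONS.items()}

def render_relation(abstract, entities, rel):
    # Look up each entity's surface form through its character offsets,
    # then format the relation as LABEL(arg1, arg2, reverse=...).
    spans = {e["id"]: abstract[e["char_start"]:e["char_end"]] for e in entities}
    label = ID_TO_LABEL[rel["label"]]
    return f"{label}({spans[rel['arg1']]}, {spans[rel['arg2']]}, reverse={rel['reverse']})"

# Toy example with hand-made offsets:
abstract = "Korean is a verb final language with overt case markers"
entities = [{"id": "X.1", "char_start": 12, "char_end": 31},
            {"id": "X.2", "char_start": 37, "char_end": 55}]
rel = {"label": 3, "arg1": "X.1", "arg2": "X.2", "reverse": True}
```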
#### subtask_1_2
- `id`: the instance id of this abstract, a `string` feature.
- `title`: the title of this abstract, a `string` feature.
- `abstract`: the abstract from the scientific paper, a `string` feature.
- `entities`: the key-phrase entities annotated in this abstract, a `list` of:
  - `id`: the unique id of this entity, a `string` feature.
  - `char_start`: the 0-based start character offset of the entity in the abstract, an `int` feature.
  - `char_end`: the 0-based end character offset of the entity in the abstract, an `int` feature.
- `relation`: the relations annotated between key phrases in this abstract, a `list` of:
  - `label`: the relation type between the two key phrases, a classification label.
  - `arg1`: the entity id of the first key phrase, a `string` feature.
  - `arg2`: the entity id of the related key phrase, a `string` feature.
  - `reverse`: `True` if the relation holds in the reverse direction of (`arg1`, `arg2`), otherwise `False`, a `bool` feature.
```python
RELATIONS = {"": 0, "USAGE": 1, "RESULT": 2, "MODEL-FEATURE": 3, "PART_WHOLE": 4, "TOPIC": 5, "COMPARE": 6}
```
### Data Splits
| | | Train| Test |
|-------------|-----------|------|------|
| subtask_1_1 | text | 2807 | 3326 |
| | relations | 1228 | 1248 |
| subtask_1_2 | text | 1196 | 1193 |
| | relations | 335 | 355 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{gabor-etal-2018-semeval,
title = "{S}em{E}val-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers",
author = {G{\'a}bor, Kata and
Buscaldi, Davide and
Schumann, Anne-Kathrin and
QasemiZadeh, Behrang and
Zargayouna, Ha{\"\i}fa and
Charnois, Thierry},
booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S18-1111",
doi = "10.18653/v1/S18-1111",
pages = "679--688",
abstract = "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
}
```
### Contributions
Thanks to [@basvoju](https://github.com/basvoju) for adding this dataset. | 17,783 | [
[
-0.032257080078125,
-0.039703369140625,
0.03436279296875,
0.01080322265625,
-0.0267181396484375,
-0.00994110107421875,
-0.0113372802734375,
-0.03759765625,
0.035400390625,
0.02996826171875,
-0.05303955078125,
-0.07318115234375,
-0.040863037109375,
0.02871704... |
mlfoundations/datacomp_pools | 2023-08-21T21:43:57.000Z | [
"license:cc-by-4.0",
"region:us"
] | mlfoundations | null | null | 12 | 6 | 2023-02-01T20:36:30 | ---
license: cc-by-4.0
---
## DataComp Pools
This repository contains metadata files for DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage.
| 989 | [
[
-0.04107666015625,
-0.0310821533203125,
0.0199127197265625,
0.0219268798828125,
-0.036163330078125,
-0.003940582275390625,
0.0091094970703125,
-0.039154052734375,
0.022186279296875,
0.042327880859375,
-0.06781005859375,
-0.04669189453125,
-0.03704833984375,
... |
Kaludi/data-csgo-weapon-classification | 2023-02-02T23:34:31.000Z | [
"task_categories:image-classification",
"region:us"
] | Kaludi | null | null | 0 | 6 | 2023-02-02T22:42:56 | ---
task_categories:
- image-classification
---
# Dataset for project: csgo-weapon-classification
## Dataset Description
This dataset for the project csgo-weapon-classification was collected with the help of a bulk Google image downloader.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1768x718 RGB PIL image>",
"target": 0
},
{
"image": "<716x375 RGBA PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['AK-47', 'AWP', 'Famas', 'Galil-AR', 'Glock', 'M4A1', 'M4A4', 'P-90', 'SG-553', 'UMP', 'USP'], id=None)"
}
```
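Given the `ClassLabel` above, a minimal sketch for decoding integer targets back to weapon names (the class list is copied from the field description; when the example is loaded with the `datasets` library, `ds.features["target"].int2str(...)` gives the same mapping):

```python
# Class list copied from the ClassLabel definition above.
CLASS_NAMES = ["AK-47", "AWP", "Famas", "Galil-AR", "Glock", "M4A1",
               "M4A4", "P-90", "SG-553", "UMP", "USP"]

def target_to_name(target):
    """Map an integer target back to its weapon class name."""
    return CLASS_NAMES[target]
```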
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1100 |
| valid | 275 |
| 1,060 | [
[
-0.028961181640625,
-0.009063720703125,
0.005985260009765625,
-0.00405120849609375,
-0.0275421142578125,
0.0411376953125,
-0.015533447265625,
-0.01329803466796875,
-0.0139617919921875,
0.0238037109375,
-0.0295257568359375,
-0.06689453125,
-0.05560302734375,
... |
chenghao/quora_questions | 2023-02-06T17:23:12.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | chenghao | null | null | 3 | 6 | 2023-02-06T17:16:39 | ---
license: other
dataset_info:
features:
- name: questions
dtype: string
splits:
- name: train
num_bytes: 51635953
num_examples: 808580
download_size: 31079310
dataset_size: 51635953
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: Quora Questions
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
---
# Dataset Card for "quora"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/c/quora-question-pairs](https://www.kaggle.com/c/quora-question-pairs)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.48 MB
- **Size of the generated dataset:** 55.46 MB
- **Total amount of disk used:** 110.94 MB
### Dataset Summary
The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each other (have the same meaning).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 55.48 MB
- **Size of the generated dataset:** 55.46 MB
- **Total amount of disk used:** 110.94 MB
### Data Fields
The data fields are the same among all splits.
### Data Splits
| name |train |
|-------|-----:|
|default|404290|
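Since the task is paraphrase detection, a naive token-overlap similarity can serve as an illustrative baseline for a pair of questions. This is only a sketch — real baselines use learned similarity, and the example questions are made up:

```python
def jaccard_similarity(q1: str, q2: str) -> float:
    """Token-level Jaccard overlap -- a naive paraphrase-similarity baseline."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

print(jaccard_similarity(
    "How do I learn Python?",
    "What is the best way to learn Python?",
))
```

A threshold on this score would give a crude paraphrase classifier; in practice, overlap-based scores miss paraphrases that share meaning but not vocabulary.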
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Quora Terms of Service](https://www.quora.com/about/tos); no commercial use.
### Citation Information
Unknown.
| 5,106 | [
[
-0.04351806640625,
-0.04742431640625,
0.01126861572265625,
0.0008511543273925781,
-0.02215576171875,
0.0030002593994140625,
-0.0146942138671875,
-0.022918701171875,
0.0506591796875,
0.036376953125,
-0.060760498046875,
-0.0611572265625,
-0.036468505859375,
0.... |
MoyAI/Funniest-answers | 2023-09-07T11:09:39.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-classification",
"size_categories:n<1K",
"language:ru",
"region:us"
] | MoyAI | null | null | 4 | 6 | 2023-02-08T12:15:42 | ---
task_categories:
- conversational
- text-generation
- text2text-generation
- text-classification
language:
- ru
pretty_name: Funny-responses
size_categories:
- n<1K
---
# Funniest responses dataset
This crowdsourced dataset is a collection of funny answers we've gathered over time. Collection started on February 8, 2023.
# Usage
## Data structure
Here's how the data looks.
```json
[
{"text": "How's your day?", "response": "Better than yours.", "type": "a"},
{"text": "I'm just going to start flying in some direction", "response": "Down.", "type": "n"},
...
```
The JSON is a list of dictionaries with the keys "text", "response", and "type". Here's a list of all the types:
- "n" Neutral, no offense.
- "a" Aggressive/offensive/toxic response.
- "w" Contains a response with not-always-appropriate or aggressive words but no swearing (like "frik", "crap", or "dumbass").
- "s" The sample contains swearing, in either the text or the response. (Examples omitted.)
- "p" Pessimistic; can include low self-esteem, suicide, etc. (like "Why is this thing not working??" with the response "Unemployed just like me...")
## Datasets library
Here is code that uses this dataset with `datasets` to prepare inputs for a text-toxicity classifier.
```py
from datasets import load_dataset
import numpy as np
from keras.preprocessing.text import Tokenizer

dset = load_dataset("MoyAI/Funniest-answers")["train"]
tokenizer = Tokenizer()
# Map each sample type to a rough toxicity score in [0, 1].
type_toxicity = {"n": 0, "p": 0.2, "w": 0.5, "a": 0.7, "s": 1}
tokenizer.fit_on_texts(dset["response"])
X = tokenizer.texts_to_sequences(dset["response"])
# Use a loop variable that doesn't shadow the built-in `type`.
y = np.array([type_toxicity[t] for t in dset["type"]])
```
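The tokenized sequences in `X` above are variable-length; before feeding a fixed-input model they would typically be padded to equal length. A stdlib stand-in sketching what e.g. Keras's `pad_sequences` does (this is an illustration, not the library call itself):

```python
def pad(sequences, maxlen=None, value=0):
    """Right-pad (and truncate) integer sequences to equal length.

    A stdlib stand-in for keras-style pad_sequences, for illustration only.
    """
    maxlen = maxlen or max((len(s) for s in sequences), default=0)
    return [list(s)[:maxlen] + [value] * (maxlen - len(s)) for s in sequences]

print(pad([[3, 1, 4], [1]]))  # [[3, 1, 4], [1, 0, 0]]
```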
[
-0.024444580078125,
-0.04290771484375,
0.031280517578125,
0.0189971923828125,
-0.033538818359375,
-0.01047515869140625,
0.002788543701171875,
0.0028629302978515625,
0.035400390625,
0.026702880859375,
-0.053802490234375,
-0.03509521484375,
-0.05242919921875,
... |
AlekseyKorshuk/chai-synthetic-pairwise | 2023-02-10T13:03:25.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 0 | 6 | 2023-02-10T13:01:05 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1690616961
num_examples: 41128
- name: test
num_bytes: 47839521
num_examples: 4570
download_size: 781208088
dataset_size: 1738456482
---
# Dataset Card for "chai-synthetic-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 510 | [
[
-0.03985595703125,
-0.0308837890625,
0.017547607421875,
0.0237579345703125,
-0.0128631591796875,
0.01036834716796875,
0.0158233642578125,
-0.0180206298828125,
0.0714111328125,
0.03057861328125,
-0.057861328125,
-0.040863037109375,
-0.0268707275390625,
-0.011... |
zpn/uniref50 | 2023-02-13T17:58:28.000Z | [
"region:us"
] | zpn | null | null | 1 | 6 | 2023-02-11T20:20:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
joelniklaus/legal-mc4 | 2023-08-06T22:54:20.000Z | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language... | joelniklaus | Legal-MC4: A Corpus Covering the Legal Part of MC4 for European Languages | 5 | 6 | 2023-02-14T17:48:58 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/mc4_legal)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel@niklaus.ai)
### Dataset Summary
This dataset contains large text resources (~106GB in total) from mc4 filtered for legal data that can be used for pretraining language models.
This dataset uses a different filtering method compared to [mc4_legal](https://huggingface.co/datasets/joelito/mc4_legal) and uses the smaller filtered [c4](https://huggingface.co/datasets/c4) dataset for the English split to speed up the filtering.
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset("joelito/mc4_legal", "de", split='train', streaming=True)
```
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz, and train and validation splits are available.
| Source | Size (MB) | Words | Documents | Words/Document |
|:---------|------------:|------------:|------------:|-----------------:|
| all | 448980 | 28599300521 | 9873288 | 2896 |
| bg | 57 | 2390349 | 379 | 6306 |
| cs | 31005 | 1840827375 | 677796 | 2715 |
| da | 162 | 10466716 | 3231 | 3239 |
| de | 105739 | 6184578784 | 3164461 | 1954 |
| el | 30 | 1155977 | 307 | 3765 |
| en | 13734 | 966539309 | 359283 | 2690 |
| es | 132053 | 9058939804 | 2281888 | 3969 |
| et | 2059 | 110198368 | 49987 | 2204 |
| fi | 1270 | 62799074 | 44875 | 1399 |
| fr | 30878 | 2117306229 | 598983 | 3534 |
| ga | 1 | 32772 | 8 | 4096 |
| hu | 4677 | 244911748 | 58857 | 4161 |
| it | 46957 | 3053920779 | 990823 | 3082 |
| lt | 156 | 9142223 | 1529 | 5979 |
| lv | 1 | 58702 | 16 | 3668 |
| mt | 65 | 3479869 | 731 | 4760 |
| nl | 326 | 21962633 | 6875 | 3194 |
| pl | 37950 | 2235839721 | 827641 | 2701 |
| pt | 20120 | 1338147828 | 382173 | 3501 |
| ro | 8816 | 551372510 | 136513 | 4038 |
| sk | 5850 | 349265172 | 130701 | 2672 |
| sl | 1742 | 107493024 | 32574 | 3299 |
| sv | 5332 | 328471555 | 123657 | 2656 |
### Data Fields
[More Information Needed]
### Data Splits
#### Data Size
```bash
$ xz --list data/*.xz
Strms Blocks Compressed Uncompressed Ratio Check Filename
1 1 2,080.7 KiB 33.4 MiB 0.061 CRC64 data/bg.train.0.jsonl.xz
1 1 22.8 KiB 315.9 KiB 0.072 CRC64 data/bg.validation.0.jsonl.xz
1 1 608.0 MiB 3,881.0 MiB 0.157 CRC64 data/cs.train.0.jsonl.xz
1 1 608.0 MiB 3,902.6 MiB 0.156 CRC64 data/cs.train.1.jsonl.xz
1 1 256.1 MiB 1,644.5 MiB 0.156 CRC64 data/cs.train.2.jsonl.xz
1 1 1,450.6 KiB 8,690.7 KiB 0.167 CRC64 data/cs.validation.0.jsonl.xz
1 1 7,578.6 KiB 38.3 MiB 0.193 CRC64 data/da.train.0.jsonl.xz
1 1 19.7 KiB 82.3 KiB 0.240 CRC64 data/da.validation.0.jsonl.xz
1 1 608.0 MiB 3,026.9 MiB 0.201 CRC64 data/de.train.0.jsonl.xz
1 1 608.0 MiB 3,038.7 MiB 0.200 CRC64 data/de.train.1.jsonl.xz
1 1 608.0 MiB 3,036.1 MiB 0.200 CRC64 data/de.train.2.jsonl.xz
1 1 608.0 MiB 3,040.3 MiB 0.200 CRC64 data/de.train.3.jsonl.xz
1 1 608.0 MiB 3,038.6 MiB 0.200 CRC64 data/de.train.4.jsonl.xz
1 1 608.0 MiB 3,044.2 MiB 0.200 CRC64 data/de.train.5.jsonl.xz
1 1 608.0 MiB 3,043.8 MiB 0.200 CRC64 data/de.train.6.jsonl.xz
1 1 608.0 MiB 3,038.2 MiB 0.200 CRC64 data/de.train.7.jsonl.xz
1 1 55.1 MiB 274.7 MiB 0.201 CRC64 data/de.train.8.jsonl.xz
1 1 5,033.5 KiB 24.5 MiB 0.201 CRC64 data/de.validation.0.jsonl.xz
1 1 1,280.9 KiB 17.0 MiB 0.073 CRC64 data/el.train.0.jsonl.xz
1 1 5,552 B 15.7 KiB 0.346 CRC64 data/el.validation.0.jsonl.xz
1 1 608.0 MiB 2,602.1 MiB 0.234 CRC64 data/en.train.0.jsonl.xz
1 1 90.0 MiB 386.5 MiB 0.233 CRC64 data/en.train.1.jsonl.xz
1 1 826.6 KiB 3,298.8 KiB 0.251 CRC64 data/en.validation.0.jsonl.xz
1 1 608.0 MiB 3,106.5 MiB 0.196 CRC64 data/es.train.0.jsonl.xz
1 1 608.0 MiB 3,118.1 MiB 0.195 CRC64 data/es.train.1.jsonl.xz
1 1 608.0 MiB 3,113.6 MiB 0.195 CRC64 data/es.train.2.jsonl.xz
1 1 608.0 MiB 3,122.5 MiB 0.195 CRC64 data/es.train.3.jsonl.xz
1 1 608.0 MiB 3,121.5 MiB 0.195 CRC64 data/es.train.4.jsonl.xz
1 1 608.0 MiB 3,122.9 MiB 0.195 CRC64 data/es.train.5.jsonl.xz
1 1 608.0 MiB 3,128.4 MiB 0.194 CRC64 data/es.train.6.jsonl.xz
1 1 608.0 MiB 3,129.5 MiB 0.194 CRC64 data/es.train.7.jsonl.xz
1 1 608.0 MiB 3,132.2 MiB 0.194 CRC64 data/es.train.8.jsonl.xz
1 1 528.5 MiB 2,722.5 MiB 0.194 CRC64 data/es.train.9.jsonl.xz
1 1 6,159.9 KiB 30.7 MiB 0.196 CRC64 data/es.validation.0.jsonl.xz
1 1 93.5 MiB 506.2 MiB 0.185 CRC64 data/et.train.0.jsonl.xz
1 1 136.2 KiB 571.3 KiB 0.238 CRC64 data/et.validation.0.jsonl.xz
1 1 60.6 MiB 312.6 MiB 0.194 CRC64 data/fi.train.0.jsonl.xz
1 1 63.2 KiB 262.4 KiB 0.241 CRC64 data/fi.validation.0.jsonl.xz
1 1 608.0 MiB 3,400.7 MiB 0.179 CRC64 data/fr.train.0.jsonl.xz
1 1 608.0 MiB 3,405.5 MiB 0.179 CRC64 data/fr.train.1.jsonl.xz
1 1 135.9 MiB 763.7 MiB 0.178 CRC64 data/fr.train.2.jsonl.xz
1 1 1,414.3 KiB 7,626.1 KiB 0.185 CRC64 data/fr.validation.0.jsonl.xz
1 1 31.2 KiB 146.4 KiB 0.213 CRC64 data/ga.train.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/ga.validation.0.jsonl.xz
1 1 211.5 MiB 1,407.3 MiB 0.150 CRC64 data/hu.train.0.jsonl.xz
1 1 212.9 KiB 1,287.6 KiB 0.165 CRC64 data/hu.validation.0.jsonl.xz
1 1 608.0 MiB 2,963.4 MiB 0.205 CRC64 data/it.train.0.jsonl.xz
1 1 608.0 MiB 2,970.0 MiB 0.205 CRC64 data/it.train.1.jsonl.xz
1 1 608.0 MiB 2,973.7 MiB 0.204 CRC64 data/it.train.2.jsonl.xz
1 1 315.2 MiB 1,541.6 MiB 0.204 CRC64 data/it.train.3.jsonl.xz
1 1 2,419.3 KiB 11.2 MiB 0.211 CRC64 data/it.validation.0.jsonl.xz
1 1 9,966.7 KiB 38.2 MiB 0.255 CRC64 data/lt.train.0.jsonl.xz
1 1 17.2 KiB 84.7 KiB 0.203 CRC64 data/lt.validation.0.jsonl.xz
1 1 66.4 KiB 326.7 KiB 0.203 CRC64 data/lv.train.0.jsonl.xz
1 0 32 B 0 B --- CRC64 data/lv.validation.0.jsonl.xz
1 1 2,851.6 KiB 16.7 MiB 0.167 CRC64 data/mt.train.0.jsonl.xz
1 1 2,092 B 5,079 B 0.412 CRC64 data/mt.validation.0.jsonl.xz
1 1 14.6 MiB 71.6 MiB 0.203 CRC64 data/nl.train.0.jsonl.xz
1 1 23.5 KiB 79.2 KiB 0.296 CRC64 data/nl.validation.0.jsonl.xz
1 1 608.0 MiB 3,635.5 MiB 0.167 CRC64 data/pl.train.0.jsonl.xz
1 1 608.0 MiB 3,646.0 MiB 0.167 CRC64 data/pl.train.1.jsonl.xz
1 1 401.9 MiB 2,409.0 MiB 0.167 CRC64 data/pl.train.2.jsonl.xz
1 1 1,870.5 KiB 10.5 MiB 0.173 CRC64 data/pl.validation.0.jsonl.xz
1 1 608.0 MiB 3,173.1 MiB 0.192 CRC64 data/pt.train.0.jsonl.xz
1 1 329.1 MiB 1,721.6 MiB 0.191 CRC64 data/pt.train.1.jsonl.xz
1 1 989.0 KiB 4,841.2 KiB 0.204 CRC64 data/pt.validation.0.jsonl.xz
1 1 365.2 MiB 2,237.9 MiB 0.163 CRC64 data/ro.train.0.jsonl.xz
1 1 419.2 KiB 2,320.4 KiB 0.181 CRC64 data/ro.validation.0.jsonl.xz
1 1 266.1 MiB 1,668.1 MiB 0.160 CRC64 data/sk.train.0.jsonl.xz
1 1 304.1 KiB 1,618.2 KiB 0.188 CRC64 data/sk.validation.0.jsonl.xz
1 1 81.6 MiB 416.1 MiB 0.196 CRC64 data/sl.train.0.jsonl.xz
1 1 101.0 KiB 416.6 KiB 0.242 CRC64 data/sl.validation.0.jsonl.xz
1 1 252.0 MiB 1,423.2 MiB 0.177 CRC64 data/sv.train.0.jsonl.xz
1 1 210.8 KiB 1,091.2 KiB 0.193 CRC64 data/sv.validation.0.jsonl.xz
-------------------------------------------------------------------------------
74 72 20.0 GiB 106.2 GiB 0.189 CRC64 74 files
```
## Dataset Creation
The dataset was created by filtering mc4 for legal data.
We used terms indicating legal citations to get the texts.
Note that this dataset can be quite noisy, and the quality is not known.
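As a rough illustration of the term-based filtering described above — the actual term lists used to build the corpus are not published here, so the terms below are assumed examples only:

```python
# Hypothetical German citation terms; NOT the real list used for the corpus.
LEGAL_TERMS_DE = ["Abs.", "BGB", "§"]

def looks_legal(text: str, terms=LEGAL_TERMS_DE, min_hits: int = 2) -> bool:
    """Keep a document if it contains at least `min_hits` citation terms."""
    return sum(term in text for term in terms) >= min_hits

print(looks_legal("Nach § 433 Abs. 1 BGB ist der Verkäufer verpflichtet ..."))  # True
print(looks_legal("hello world"))  # False
```

A crude filter like this explains the noisiness noted above: documents quoting a single statute pass, while legal prose without explicit citations is dropped.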
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| 12,335 | [
[
-0.040985107421875,
-0.01284027099609375,
0.01617431640625,
0.01080322265625,
-0.01464080810546875,
0.01506805419921875,
-0.0112152099609375,
-0.01145172119140625,
0.03448486328125,
0.043975830078125,
-0.03564453125,
-0.05621337890625,
-0.046783447265625,
0.... | |
yoshitomo-matsubara/srsd-feynman_easy_dummy | 2023-10-11T02:07:46.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0760",
"region:us"
] | yoshitomo-matsubara | null | null | 0 | 6 | 2023-02-16T06:56:39 | ---
pretty_name: SRSD-Feynman (Easy w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Easy set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Easy set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 30 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain
**1 dummy variable**: I.12.1, I.12.4, I.12.5, I.18.12, I.25.13, I.47.23
**2 dummy variables**: I.14.3, I.18.16, I.43.16, II.3.24, II.8.31, II.10.9, II.13.17, II.15.5, II.27.18, III.7.38, III.12.43
**3 dummy variables**: I.14.4, I.26.2, I.27.6, I.30.5, II.2.42, II.4.23, II.15.4, II.27.16, II.34.11, II.34.29b, II.38.3, II.38.14, III.15.27
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
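Given the whitespace-delimited layout described above, one row of a split file can be parsed into input variables and the target value. A minimal sketch — the sample line is made up:

```python
def parse_row(line: str):
    """Split one whitespace-delimited row into input variables and the target.

    The last (rightmost) column is the output of the target function,
    as described in "Data Instances" above.
    """
    values = [float(tok) for tok in line.split()]
    return values[:-1], values[-1]

inputs, target = parse_row("1.0e-2 3.5 2.0 7.25")
print(inputs, target)  # [0.01, 3.5, 2.0] 7.25
```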
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that confirms the physical phenomenon for each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, ranges are sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small changes in value are covered as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take a specific sign were sampled within that range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset, assuming typical physical experiments. The dataset will engage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery.
### Discussion of Biases
Our choice of target equations is based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should in principle be treated as integers.
However, because such values can exceed 32-bit integer capacity, we treated some of them as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
| 6,921 | [
[
-0.008697509765625,
-0.0367431640625,
0.033843994140625,
0.01739501953125,
-0.0093841552734375,
-0.0174102783203125,
0.0023956298828125,
-0.0171356201171875,
0.0238189697265625,
0.0266876220703125,
-0.05670166015625,
-0.03790283203125,
-0.042877197265625,
0.... |
yoshitomo-matsubara/srsd-feynman_medium_dummy | 2023-10-11T02:08:13.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0759",
"region:us"
] | yoshitomo-matsubara | null | null | 0 | 6 | 2023-02-16T07:01:48 | ---
pretty_name: SRSD-Feynman (Medium w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Medium set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Medium set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 40 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain
**1 dummy variable**: I.10.7, I.12.2, I.13.12, I.16.6, I.32.5, I.43.31, II.11.3, II.34.2, II.34.29a, III.14.14, III.15.14, B8
**2 dummy variables**: I.11.19, I.12.11, I.13.4, I.15.10, I.18.4, I.24.6, I.34.8, I.38.12, I.39.11, I.43.43, I.48.2, II.6.11, II.21.32, II.34.2a, III.4.32, III.13.18, III.15.12, III.17.37
**3 dummy variables**: I.8.14, I.29.4, I.34.10, I.34.27, I.39.10, II.8.7, II.37.1, III.8.54, III.19.51, B18
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
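Since symbolic regression models should not use the dummy variables, a discovered equation can be screened by checking which columns it references. A minimal sketch — the variable names are hypothetical, and in practice the used-variable set would come from e.g. a sympy expression's free symbols:

```python
def uses_dummies(used_variables, dummy_variables) -> bool:
    """True if a candidate equation references any dummy column."""
    return bool(set(used_variables) & set(dummy_variables))

# Hypothetical check: x0, x1 are real inputs, x2 is the dummy column.
print(uses_dummies({"x0", "x1"}, {"x2"}))        # False -- acceptable candidate
print(uses_dummies({"x0", "x1", "x2"}, {"x2"}))  # True -- relies on a dummy
```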
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that confirms the physical phenomenon for each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, ranges are sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small changes in value are covered as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take a specific sign were sampled within that range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset, assuming typical physical experiments. The dataset will engage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which focuses on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should, strictly speaking, be treated as integers.
However, because such values can exceed the capacity of a 32-bit integer, we treated some of them as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
yoshitomo-matsubara/srsd-feynman_hard_dummy | 2023-10-11T02:08:41.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0758",
"region:us"
] | yoshitomo-matsubara | null | null | 0 | 6 | 2023-02-16T07:05:02 | ---
pretty_name: SRSD-Feynman (Hard w/ Dummy Variables)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Hard set with Dummy Variables)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of symbolic regression for scientific discovery (SRSD).
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used to evaluate the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Hard set with dummy variables*** of our SRSD-Feynman datasets, which consists of the following 50 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard_dummy/resolve/main/problem_table.pdf)
Dummy variables were randomly generated, and symbolic regression models should not use the dummy variables as part of their predictions.
The following datasets contain
**1 dummy variable**: I.15.3x, I.30.3, II.6.15a, II.11.17, II.11.28, II.13.23, II.13.34, II.24.17, B1, B6, B12, B16, B17
**2 dummy variables**: I.6.20, I.6.20b, I.9.18, I.15.3t, I.29.16, I.34.14, I.39.22, I.44.4, II.11.20, II.11.27, II.35.18, III.9.52, III.10.19, III.21.20, B2, B3, B7, B9
**3 dummy variables**: I.6.20a, I.32.17, I.37.4, I.40.1, I.41.16, I.50.26, II.6.15b, II.35.21, II.36.38, III.4.33, B4, B5, B10, B11, B13, B14, B15, B19, B20
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
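One simple way to score whether a candidate equation respects the dummy-variable constraint is to intersect its free symbols with the known dummy-variable names (a sketch; the variable names below are hypothetical, and in practice the symbol set would come from e.g. a sympy expression produced by the model):

```python
def uses_dummy_variables(predicted_symbols, dummy_names):
    """Return the set of dummy variables a predicted equation relies on.

    A correct symbolic regression prediction should yield an empty set.
    """
    return set(predicted_symbols) & set(dummy_names)

# Hypothetical example: x2 is the dummy variable in this problem.
offenders = uses_dummy_variables({"x0", "x1", "x2"}, {"x2"})
print(offenders)  # {'x2'}
```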
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that would confirm the physical phenomenon for each equation.
In cases where a specific experiment was difficult to assume, ranges were set within which the corresponding physical phenomenon can be observed.
Generally, the ranges are sampled on a log scale spanning about two orders of magnitude (10^2) so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take a specific sign were sampled within that range.
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset, assuming typical physical experiments. The dataset will support research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which focuses on the field of physics.
### Other Known Limitations
Some variables in our datasets represent counts and should, strictly speaking, be treated as integers.
However, because such values can exceed the capacity of a 32-bit integer, we treated some of them as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
jed351/rthk_news | 2023-02-16T17:24:50.000Z | [
"language:zh",
"region:us"
] | jed351 | null | null | 3 | 6 | 2023-02-16T16:44:01 | ---
language:
- zh
---
### RTHK News Dataset
[RTHK](https://www.rthk.hk/) is a public broadcasting service under the Hong Kong Government, according to [Wikipedia](https://en.wikipedia.org/wiki/RTHK).
This dataset is currently obtained by exporting messages from their [Telegram channel](https://t.me/rthk_new_c),
which contains news since April 2018.
I will update this dataset with more data in the future.
Riksarkivet/mini_raw_diachronic_swe | 2023-03-13T11:39:53.000Z | [
"size_categories:1M<n<10M",
"language:sv",
"license:mit",
"historical",
"WIP",
"region:us"
] | Riksarkivet | null | null | 0 | 6 | 2023-02-21T14:21:36 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 796312222
num_examples: 4760470
download_size: 475243460
dataset_size:
license: mit
language:
- sv
tags:
- historical
- WIP
pretty_name: Kbuhist2
size_categories:
- 1M<n<10M
---
# Dataset Card for mini_raw_diachronic_swe
The Swedish Diachronic Corpus is a project funded by [Swe-Clarin](https://sweclarin.se/eng) and provides a corpus of texts covering the time period from Old Swedish onwards.
### Data Splits
**This will be further extended!**
* Number of instances in split: 4760470
## Acknowledgements
We gratefully acknowledge [SWE-clarin](https://sweclarin.se/) for the datasets.
## Citation Information
Eva Pettersson and Lars Borin (2022)
Swedish Diachronic Corpus
In Darja Fišer & Andreas Witt (eds.), CLARIN. The Infrastructure for Language Resources. Berlin: deGruyter. https://degruyter.com/document/doi/10.1515/9783110767377-022/html
dctanner/oa_recipes | 2023-02-24T13:42:50.000Z | [
"region:us"
] | dctanner | null | null | 4 | 6 | 2023-02-24T11:52:38 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 7600684
num_examples: 4747
download_size: 3325663
dataset_size: 7600684
---
# Dataset Card for Recipes dialogue
Derived from the Kaggle dataset [Recipes from Tasty](https://www.kaggle.com/datasets/zeeenb/recipes-from-tasty), this dataset turns recipe ingredients and instructions into chat dialogue using a preset list of user prompt templates.
Dataset license: CC0: Public Domain.
CaoHaiNam/summarization_wikilingua_en | 2023-03-01T08:55:32.000Z | [
"region:us"
] | CaoHaiNam | null | null | 1 | 6 | 2023-03-01T08:55:20 | ---
dataset_info:
features:
- name: summary
dtype: string
- name: document
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 326708878
num_examples: 141457
download_size: 184148284
dataset_size: 326708878
---
# Dataset Card for "summarization_wikilingua_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
qfrodicio/gesture-prediction-21-classes | 2023-03-10T11:49:53.000Z | [
"region:us"
] | qfrodicio | null | null | 0 | 6 | 2023-03-02T19:51:32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: sentence
dtype: string
- name: gestures
sequence: string
splits:
- name: train
num_bytes: 437051
num_examples: 1649
- name: test
num_bytes: 115160
num_examples: 423
- name: validation
num_bytes: 142541
num_examples: 528
download_size: 207086
dataset_size: 694752
---
# Dataset Card for "gesture-prediction-21-classes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cbasu/Med-EASi | 2023-03-08T18:24:31.000Z | [
"arxiv:2302.09155",
"region:us"
] | cbasu | null | null | 0 | 6 | 2023-03-08T00:27:53 | ---
{}
---
# Dataset Card for Med-EASi
## Dataset Description
- **Repository:** https://github.com/Chandrayee/CTRL-SIMP
- **Paper:** https://arxiv.org/pdf/2302.09155.pdf
- **Point of Contact:** Chandrayee Basu
### Dataset Summary
Med-EASi (Medical dataset for Elaborative and Abstractive Simplification) is a uniquely crowdsourced and finely annotated dataset for supervised simplification of short medical
texts. It contains 1,979 expert-simple text pairs in the medical domain, spanning a total of 4,478 UMLS concepts across all text pairs. The dataset is annotated with four textual transformations:
replacement, elaboration, insertion and deletion.
### Supported Tasks
The dataset can be used for direct generation of simplified medical text or generation of simplified text along with controllability over individual transformations. Please refer to the paper for more information.
### Languages
English
## Dataset Structure
- **train.csv: 1397 text pairs (5.19 MB)**
- **validation.csv: 197 text pairs (1.5 MB)**
- **test.csv: 300 text pairs (1.19 MB)**
We also provide several metrics per data point including Levenstein similarity, SentenceBERT embedding cosine similarity, compression ratio, Flesch Kincaid readability grade,
automated readability index for each of the expert and simple text, and UMLS concepts in each of them.
### Data Instances
```
Expert: Some patients have weight loss, rarely enough to become underweight. Anemia, glossitis, angular stomatitis, and aphthous ulcers are usually seen in these patients.
Simple: Some people are undernourished, have mild weight loss and anemia, or have mouth sores and an inflamed tongue.
Annotated: Some <elab>patients<by>people are undernourished,</elab> have <elab>weight loss<by>mild weight loss</elab><del>, rarely enough to become underweight.</del> <rep>Anemia, glossitis, angular stomatitis, and aphthous ulcers<by>and anemia, or have mouth sores and an inflamed tongue</rep><del>usually seen in these patients</del>.
```
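A regex-based sketch for extracting transformation spans from the annotated format shown above (the tag semantics are inferred from this example; the example covers `<elab>`/`<rep>` pairs and `<del>` spans, and this helper is illustrative, not part of the official tooling):

```python
import re

# <elab>expert<by>simple</elab> and <rep>expert<by>simple</rep> pair an
# expert-side span with its simplified replacement; <del>...</del> marks
# expert text dropped from the simple version.
PAIR_TAGS = re.compile(r"<(elab|rep)>(.*?)<by>(.*?)</\1>", re.DOTALL)
DEL_TAG = re.compile(r"<del>(.*?)</del>", re.DOTALL)

def extract_transformations(annotated):
    pairs = [(tag, expert.strip(), simple.strip())
             for tag, expert, simple in PAIR_TAGS.findall(annotated)]
    deletions = [d.strip() for d in DEL_TAG.findall(annotated)]
    return pairs, deletions

pairs, deletions = extract_transformations(
    "Some <elab>patients<by>people are undernourished,</elab> have "
    "<del>, rarely enough to become underweight.</del>"
)
print(pairs)      # [('elab', 'patients', 'people are undernourished,')]
print(deletions)  # [', rarely enough to become underweight.']
```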
### Data Fields
```
Expert
Simple
Annotation
sim (Levenstein Similarity)
sentence_sim (SentenceBERT embedding cosine similarity)
compression
expert_fk_grade
expert_ari
layman_fk_grade
layman_ari
umls_expert
umls_layman
expert_terms
layman_terms
idx (original data index before shuffling, redundant)
```
### Data Splits
75 % train, 10 % validation and 15 % test
## Dataset Creation
This dataset is created by annotating 1500 SIMPWIKI data points (Van den Bercken, Sips, and Lofi 2019) and all of MSD (Cao et al. 2020) data points. We used expert-layman-AI collaboration for annotation.
### Personal and Sensitive Information
There is no personal or sensitive information in this dataset.
## Considerations for Using the Data
### Discussion of Biases
The dataset contains biomedical and clinical short texts.
### Other Known Limitations
The expert and simple texts in the original datasets were extracted and aligned using automated methods that have their own limitations.
### Citation Information
```
@article{basu2023med,
title={Med-EASi: Finely Annotated Dataset and Models for Controllable Simplification of Medical Texts},
author={Basu, Chandrayee and Vasu, Rosni and Yasunaga, Michihiro and Yang, Qian},
journal={arXiv preprint arXiv:2302.09155},
year={2023}
}
```
venetis/symptom_text_to_disease_mk4 | 2023-03-10T04:30:59.000Z | [
"region:us"
] | venetis | null | null | 2 | 6 | 2023-03-10T04:30:54 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': emotional pain
'1': hair falling out
'2': heart hurts
'3': infected wound
'4': foot ache
'5': shoulder pain
'6': injury from sports
'7': skin issue
'8': stomach ache
'9': knee pain
'10': joint pain
'11': hard to breath
'12': head ache
'13': body feels weak
'14': feeling dizzy
'15': back pain
'16': open wound
'17': internal pain
'18': blurry vision
'19': acne
'20': muscle pain
'21': neck pain
'22': cough
'23': ear ache
'24': feeling cold
splits:
- name: train
num_bytes: 330494.3762197868
num_examples: 5328
- name: test
num_bytes: 41373.82675273983
num_examples: 667
- name: valid
num_bytes: 41311.79702747335
num_examples: 666
download_size: 144224
dataset_size: 413180.0
---
# Dataset Card for "symptom_text_to_disease_mk4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
s-nlp/ru_paradetox | 2023-09-07T13:15:00.000Z | [
"task_categories:text-generation",
"language:ru",
"license:openrail++",
"region:us"
] | s-nlp | null | null | 2 | 6 | 2023-03-10T15:52:55 | ---
license: openrail++
task_categories:
- text-generation
language:
- ru
---
# ParaDetox: Detoxification with Parallel Data (Russian)
This repository contains information about the Russian ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models for the detoxification of Russian texts.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
All these steps were done to ensure high quality of the data and make the process of collection automated. For more details please refer to the original paper.
## Detoxification model
**New SOTA** for detoxification task -- ruT5 (base) model trained on Russian ParaDetox dataset -- we released online in HuggingFace🤗 repository [here](https://huggingface.co/s-nlp/ruT5-base-detox).
You can also check out our [demo](https://detoxifier.nlp.zhores.net/junction/) and telegram [bot](https://t.me/rudetoxifierbot).
## Citation
```
@article{dementievarusse,
title={RUSSE-2022: Findings of the First Russian Detoxification Shared Task Based on Parallel Corpora},
author={Dementieva, Daryna and Logacheva, Varvara and Nikishina, Irina and Fenogenova, Alena and Dale, David and Krotova, Irina and Semenov, Nikita and Shavrina, Tatiana and Panchenko, Alexander}
}
```
## Contacts
If you find some issue, do not hesitate to add it to [Github Issues](https://github.com/s-nlp/russe_detox_2022).
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com)
huynguyendayrui/ecthr | 2023-03-14T11:38:42.000Z | [
"region:us"
] | huynguyendayrui | null | null | 0 | 6 | 2023-03-14T11:37:14 | ---
dataset_info:
features:
- name: text
sequence: string
- name: labels_task_a
sequence:
class_label:
names:
'0': '2'
'1': '3'
'2': '5'
'3': '6'
'4': '8'
'5': '9'
'6': '10'
'7': '11'
'8': '14'
'9': P1-1
- name: law
sequence: string
- name: labels_task_b
sequence: int64
splits:
- name: train
num_bytes: 267570945
num_examples: 9000
- name: test
num_bytes: 35381069
num_examples: 1000
- name: validation
num_bytes: 33956620
num_examples: 1000
download_size: 157641185
dataset_size: 336908634
---
# Dataset Card for "ecthr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nastyboget/stackmix_hkr | 2023-03-23T18:42:10.000Z | [
"task_categories:image-to-text",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | nastyboget | null | null | 0 | 6 | 2023-03-20T15:41:33 | ---
license: mit
task_categories:
- image-to-text
language:
- ru
size_categories:
- 100K<n<1M
---
Dataset generated from HKR train set using Stackmix
===================================================
Number of images: 300000
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
nastyboget/synthetic_hkr | 2023-03-23T18:43:05.000Z | [
"task_categories:image-to-text",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | nastyboget | null | null | 0 | 6 | 2023-03-21T09:53:26 | ---
license: mit
task_categories:
- image-to-text
language:
- ru
size_categories:
- 100K<n<1M
---
Dataset generated using handwritten fonts
=========================================
Number of images: 300000
Sources:
* [Handwriting generation code](https://github.com/NastyBoget/HandwritingGeneration)
The code was executed with `hkr` option (with fewer augmentations)
cahya/instructions-ar | 2023-03-22T15:42:43.000Z | [
"region:us"
] | cahya | null | null | 0 | 6 | 2023-03-22T15:40:47 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1335708.4343484773
num_examples: 1802
- name: test
num_bytes: 74864.90114827758
num_examples: 101
- name: validation
num_bytes: 74123.66450324513
num_examples: 100
download_size: 0
dataset_size: 1484697.0
---
# Dataset Card for "instructions-ar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
semeru/code-text-javascript | 2023-03-23T20:05:02.000Z | [
"license:mit",
"arxiv:1909.09436",
"region:us"
] | semeru | null | null | 4 | 6 | 2023-03-22T16:31:18 | ---
license: mit
Programminglanguage: "JavaScript"
version: "N/A"
Date: "Codesearchnet(Jun 2020 - paper release date)"
Contaminated: "Very Likely"
Size: "Standar Tokenizer (TreeSitter)"
---
### Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/javascript in Semeru
# CodeXGLUE -- Code-To-Text
## Task Definition
The task is to generate natural language comments for a code, and evaluted by [smoothed bleu-4](https://www.aclweb.org/anthology/C04-1072.pdf) score.
## Dataset
The dataset we use comes from [CodeSearchNet](https://arxiv.org/pdf/1909.09436.pdf), and we filter it as follows:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples where the number of tokens in the document is < 3 or > 256.
- Remove examples whose documents contain special tokens (e.g. <img ...> or https:...).
- Remove examples whose documents are not in English.
### Data Format
After preprocessing dataset, you can obtain three .jsonl files, i.e. train.jsonl, valid.jsonl, test.jsonl
For each file, each line in the uncompressed file represents one function. The fields of one row are described below.
- **repo:** the owner/repo
- **path:** the full path to the original file
- **func_name:** the function or method name
- **original_string:** the raw string before tokenization or parsing
- **language:** the programming language
- **code/function:** the part of the `original_string` that is code
- **code_tokens/function_tokens:** tokenized version of `code`
- **docstring:** the top-level comment or docstring, if it exists in the original string
- **docstring_tokens:** tokenized version of `docstring`
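Reading one of these .jsonl files is straightforward (a sketch; the demo writes a synthetic record mirroring the fields listed above rather than reading the actual dataset files):

```python
import json

def read_jsonl(path):
    """Yield one function record per line of a CodeXGLUE-style .jsonl file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Minimal demo record with a subset of the documented fields.
record = {"repo": "owner/repo", "path": "src/util.js",
          "func_name": "add", "language": "javascript",
          "code": "function add(a, b) { return a + b; }",
          "docstring": "Adds two numbers."}
with open("demo.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

rows = list(read_jsonl("demo.jsonl"))
print(rows[0]["func_name"])  # add
```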
### Data Statistic
| Programming Language | Training | Dev | Test |
| :------------------- | :------: | :----: | :----: |
| JavaScript | 58,025 | 3,885 | 3,291 |
## Reference
<pre><code>@article{husain2019codesearchnet,
title={Codesearchnet challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
}</code></pre>
slhenty/climate-fever-nli-stsb | 2023-03-24T21:08:44.000Z | [
"license:unknown",
"region:us"
] | slhenty | A modified CLIMATE-FEVER dataset that includes NLI-style features and STSb-features suitable for SentenceBERT training scripts. | @InProceedings{huggingface:dataset,
title = {climate-fever-nli-stsb},
author={Steve Henty, Omdena, "Cologne, Germany Chapter - Detecting Bias in Climate Reporting in English and German Language News Media"},
year={2023}
} | 1 | 6 | 2023-03-23T00:10:11 | ---
license: unknown
viewer: false
---
**==========================================**
**_IN PROGRESS - NOT READY FOR LOADING OR USE_**
**==========================================**
---
# Dataset Card for climate-fever-nli-stsb
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The CLIMATE-FEVER dataset modified to supply NLI-style (**cf-nli**) features or STSb-style (**cf-stsb**) features that SentenceBERT training scripts can use as drop-in replacements for AllNLI and/or STSb datasets.
There are two **cf-nli** datasets: one derived only from SUPPORTS and REFUTES evidence (**cf-nli**), and one that also derives data from NOT_ENOUGH_INFO evidence based on the annotator votes (**cf-nli-nei**).
The feature style is specified as a named configuration when loading the dataset: cf-nli, cf-nli-nei, or cf-stsb. See usage notes below for `load_dataset` examples.
### Usage
Load the **cf-nli** dataset
```python
# if datasets not already in your environment
!pip install datasets
from datasets import load_dataset
# all splits...
dd = load_dataset('climate-fever-nli-stsb', 'cf-nli')
# ... or specific split (only 'train' is available)
ds_train = load_dataset('climate-fever-nli-stsb', 'cf-nli', split='train')
## ds_train can now be injected into SentenceBERT training scripts at the point
## where individual sentence pairs are aggregated into
## {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}} dicts
## for further processing into training samples
```
Load the **cf-nli-nei** dataset
```python
# if datasets not already in your environment
!pip install datasets
from datasets import load_dataset
# all splits...
dd = load_dataset('climate-fever-nli-stsb', 'cf-nli-nei')
# ... or specific split (only 'train' is available)
ds_train = load_dataset('climate-fever-nli-stsb', 'cf-nli-nei', split='train')
## ds_train can now be injected into SentenceBERT training scripts at the point
## where individual sentence pairs are aggregated into
## {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}} dicts
## for further processing into training samples
```
Load the **cf-stsb** dataset
```python
# if datasets not already in your environment
!pip install datasets
from datasets import load_dataset
# all splits...
dd = load_dataset('climate-fever-nli-stsb', 'cf-stsb')
# ... or specific split ('train', 'dev', 'test' available)
ds_dev = load_dataset('climate-fever-nli-stsb', 'cf-stsb', split='dev')
## ds_dev (or test) can now be injected into SentenceBERT training scripts at the point
## where individual sentence pairs are aggregated into
## a list of dev (or test) samples
```
<!--
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
-->
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
SentenceBERT models are designed for 'Domain Adaptation' and/or 'Fine-tuning' using labeled data in the downstream task domain. As a bi-encoder, the primary objective function is real-valued similarity scoring. Typical training datasets use NLI-style features as input, and STSb-style features as model evaluation during training, and to measure post-hoc, _intrinsic_ STSb performance. Classification tasks typically use a classifier network that accepts SentenceBERT encodings as input, and is trained on class-labeled datasets.
So, to fine-tune a SentenceBERT model in a climate-change domain, a labeled climate change dataset would be ideal. Much like the authors of the CLIMATE-FEVER dataset, we know of no other _labeled_ datasets specific to climate change. And while CLIMATE-FEVER is suitably labeled for classification tasks, it is not ready for similarity tuning in the style of SentenceBERT.
This modified CLIMATE-FEVER dataset attempts to fill that gap by deriving NLI-style features typically used in pre-training and fine-tuning a SentenceBERT model. SentenceBERT also uses STSb-style features to evaluate model performance both during training and after training to gauge _intrinsic_ model performance on STSb.
### Source Data
#### Initial Data Collection and Normalization
see CLIMATE-FEVER
#### Who are the source language producers?
see CLIMATE-FEVER
<!--
### Annotations
-->
### Annotation process
#### **cf-nli**
For each Claim that has both SUPPORTS evidence and REFUTES evidence, create labeled pairs in the style of an NLI dataset:
| split | dataset | sentence1 | sentence2 | label |
|---|---|---|---|---|
| {'train', 'test'} | 'climate-fever' | claim | evidence | evidence_label SUPPORTS -> 'entailment', REFUTES -> 'contradiction' |
> Note that by definition, only claims classified as DISPUTED include both SUPPORTS and REFUTES evidence, so this dataset is limited to a small subset of CLIMATE-FEVER.
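The pairing rule can be sketched in plain Python. This is an illustrative sketch only, not the script actually used to build the dataset; the `claims`/`evidences` structure and the `evidence_label` values follow the CLIMATE-FEVER schema.

```python
# Sketch: build NLI-style pairs from CLIMATE-FEVER claims that carry both
# SUPPORTS and REFUTES evidence (i.e. DISPUTED claims). Illustrative only.
LABEL_MAP = {"SUPPORTS": "entailment", "REFUTES": "contradiction"}

def make_nli_pairs(claims):
    pairs = []
    for claim in claims:
        labels = {ev["evidence_label"] for ev in claim["evidences"]}
        if not {"SUPPORTS", "REFUTES"} <= labels:
            continue  # only claims with both evidence classes qualify
        for ev in claim["evidences"]:
            if ev["evidence_label"] in LABEL_MAP:
                pairs.append({"sentence1": claim["claim"],
                              "sentence2": ev["evidence"],
                              "label": LABEL_MAP[ev["evidence_label"]]})
    return pairs
```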
#### **cf-nli-nei**
This dataset uses the list of annotator 'votes' to recast NOT_ENOUGH_INFO (NEI) evidence as effective SUPPORTS or REFUTES evidence. By doing so, claims in the SUPPORTS, REFUTES, and NEI classes can all be used to generate additional sentence pairs.
| votes | effective evidence_label |
|---|---|
| SUPPORTS > REFUTES | _SUPPORTS_ |
| SUPPORTS < REFUTES | _REFUTES_ |
In addition to all the claims in **cf-nli**, any claim that has
* **_at least one_** SUPPORTS or REFUTES evidence, AND
* NEI evidence that can be cast to an effective _SUPPORTS_ or _REFUTES_

is included in the dataset.
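The vote-casting rule can be sketched as follows (a hypothetical helper, not the actual generation code; `votes` stands for the per-annotator label list in CLIMATE-FEVER):

```python
# Sketch: recast a NOT_ENOUGH_INFO evidence to an effective label using the
# annotator votes, as described above. A tie cannot be cast and stays NEI.
def cast_nei(votes):
    supports = votes.count("SUPPORTS")
    refutes = votes.count("REFUTES")
    if supports > refutes:
        return "SUPPORTS"
    if refutes > supports:
        return "REFUTES"
    return None  # tie: evidence remains NEI and is not used
```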
#### **cf-stsb**
For each Claim <-> Evidence pair, create labeled pairs in the style of the STSb dataset:
| split | dataset | score | sentence1 | sentence2 |
|---|---|---|---|---|
| {'train', 'dev', 'test'} | 'climate-fever' | cos_sim score | claim | evidence |
This dataset uses 'evidence_label', vote 'entropy', and the list of annotator 'votes' to derive a similarity score for each claim <-> evidence pairing. Similarity score conversion:
> `mean(entropy)` refers to the average entropy within the defined group of evidence
| evidence_label | votes | similarity score |
|---|---|---|
| SUPPORTS | SUPPORTS > 0, REFUTES == 0, NOT_ENOUGH_INFO (NEI) == 0 | 1 |
| | SUPPORTS > 0, REFUTES == 0 | mean(entropy) |
| | SUPPORTS > 0, REFUTES > 0 | 1 - mean(entropy) |
| NEI | SUPPORTS > REFUTES | (1 - mean(entropy)) / 2|
| | SUPPORTS == REFUTES | 0 |
| | SUPPORTS < REFUTES | -(1 - mean(entropy)) / 2 |
| REFUTES | SUPPORTS == 0, REFUTES > 0, NEI == 0 | -1 |
| | SUPPORTS == 0, REFUTES > 0 | -mean(entropy) |
| | SUPPORTS > 0, REFUTES > 0 | -(1 - mean(entropy)) |
The above derivation roughly maps the strength of evidence annotation (REFUTES..NEI..SUPPORTS) to cosine similarity (-1..0..1).
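The score table can be sketched as a single function. This is an illustrative reading of the table, not the actual build script; within each block the most specific condition is checked first.

```python
# Sketch of the similarity-score mapping above. `mean_entropy` is the average
# annotation entropy within the evidence group; supports/refutes/nei are
# annotator vote counts. Illustrative only -- not the exact cf-stsb script.
def similarity_score(evidence_label, supports, refutes, nei, mean_entropy):
    if evidence_label == "SUPPORTS":
        if supports > 0 and refutes == 0 and nei == 0:
            return 1.0
        if supports > 0 and refutes == 0:
            return mean_entropy
        return 1.0 - mean_entropy            # supports > 0, refutes > 0
    if evidence_label == "NOT_ENOUGH_INFO":
        if supports > refutes:
            return (1.0 - mean_entropy) / 2
        if supports == refutes:
            return 0.0
        return -(1.0 - mean_entropy) / 2
    # REFUTES, most specific condition first
    if supports == 0 and refutes > 0 and nei == 0:
        return -1.0
    if supports == 0 and refutes > 0:
        return -mean_entropy
    return -(1.0 - mean_entropy)             # supports > 0, refutes > 0
```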
<!--
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> | 7,359 | [
[
-0.01398468017578125,
-0.038543701171875,
0.01461029052734375,
0.0247650146484375,
-0.00433349609375,
-0.0167388916015625,
-0.023193359375,
-0.0187530517578125,
0.0360107421875,
0.0311737060546875,
-0.041412353515625,
-0.06402587890625,
-0.04937744140625,
0.... |
open-source-metrics/preprocessed_stars | 2023-08-23T19:55:22.000Z | [
"region:us"
] | open-source-metrics | null | null | 0 | 6 | 2023-03-24T22:41:01 | ---
dataset_info:
features:
- name: accelerate
dtype: int64
- name: datasets
dtype: int64
- name: diffusers
dtype: int64
- name: evaluate
dtype: int64
- name: gradio
dtype: int64
- name: hub_docs
dtype: int64
- name: huggingface_hub
dtype: int64
- name: optimum
dtype: int64
- name: peft
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: safetensors
dtype: int64
- name: tokenizers
dtype: int64
- name: transformers
dtype: int64
- name: langchain
dtype: int64
- name: pytorch
dtype: int64
- name: stable_diffusion_webui
dtype: int64
- name: tensorflow
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 16368
num_examples: 101
- name: wow
num_bytes: 16528
num_examples: 102
download_size: 32298
dataset_size: 32896
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: wow
path: data/wow-*
---
# Dataset Card for "preprocessed_stars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,177 | [
[
-0.047027587890625,
-0.0202178955078125,
0.0213165283203125,
0.0162811279296875,
-0.0187225341796875,
0.00844573974609375,
0.0031719207763671875,
-0.01763916015625,
0.065185546875,
0.05108642578125,
-0.06793212890625,
-0.057525634765625,
-0.04241943359375,
-... |
open-source-metrics/preprocessed_issues | 2023-08-23T20:11:07.000Z | [
"region:us"
] | open-source-metrics | null | null | 0 | 6 | 2023-03-24T22:49:09 | ---
dataset_info:
features:
- name: accelerate
dtype: int64
- name: datasets
dtype: int64
- name: diffusers
dtype: int64
- name: evaluate
dtype: int64
- name: gradio
dtype: int64
- name: hub_docs
dtype: int64
- name: huggingface_hub
dtype: int64
- name: optimum
dtype: int64
- name: peft
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: safetensors
dtype: int64
- name: tokenizers
dtype: int64
- name: transformers
dtype: int64
- name: langchain
dtype: int64
- name: pytorch
dtype: int64
- name: stable_diffusion_webui
dtype: int64
- name: tensorflow
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 16368
num_examples: 101
- name: wow
num_bytes: 16368
num_examples: 101
- name: eom
num_bytes: 16368
num_examples: 101
- name: eom_wow
num_bytes: 16368
num_examples: 101
download_size: 64567
dataset_size: 65472
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: wow
path: data/wow-*
- split: eom
path: data/eom-*
- split: eom_wow
path: data/eom_wow-*
---
# Dataset Card for "preprocessed_issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,376 | [
[
-0.04461669921875,
-0.030517578125,
0.0271453857421875,
0.032562255859375,
-0.0080108642578125,
0.006683349609375,
0.005176544189453125,
-0.0118255615234375,
0.064697265625,
0.040557861328125,
-0.060760498046875,
-0.055572509765625,
-0.0338134765625,
-0.0094... |
LEL-A/translated_german_alpaca | 2023-04-10T09:32:34.000Z | [
"region:us"
] | LEL-A | null | null | 1 | 6 | 2023-03-31T18:08:19 | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: _instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: original_id
dtype: int64
- name: translation_model
dtype: string
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 1004916509
num_examples: 51759
download_size: 690637366
dataset_size: 1004916509
---
# Dataset Card for "translated_german_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,281 | [
[
-0.0499267578125,
-0.036865234375,
0.017486572265625,
0.029632568359375,
-0.035736083984375,
-0.01904296875,
0.0111236572265625,
-0.0290985107421875,
0.0718994140625,
0.030548095703125,
-0.06219482421875,
-0.069091796875,
-0.06396484375,
-0.00914764404296875... |
emre/stanford-alpaca-cleaned-turkish-translated | 2023-04-08T21:28:43.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:tr",
"license:afl-3.0",
"region:us"
] | emre | null | null | 15 | 6 | 2023-04-02T22:25:08 | ---
license: afl-3.0
task_categories:
- text-generation
language:
- tr
size_categories:
- 10K<n<100K
---
09/04/2023 Update:
New instructions added from: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
Original Version: https://github.com/tatsu-lab/stanford_alpaca#data-release
AI-based translation of the Stanford Alpaca (cleaned) dataset from English to Turkish.
For academic use only; please cite before using.
Taşar, D. E. T. (2023). stanford-alpaca-cleaned-turkish-translated [Dataset]. In Stanford Alpaca TR (1.0.1.a). https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca-tr-tasar-2023,
author = {Taşar, Davut Emre},
title = {stanford-alpaca-cleaned-turkish-translated},
year = {2023},
publisher = {Huggingface},
journal = {Huggingface repository},
howpublished = {\url{https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated}},
}
``` | 989 | [
[
-0.03851318359375,
-0.04656982421875,
0.0357666015625,
0.034423828125,
-0.0537109375,
-0.0211029052734375,
-0.0229034423828125,
-0.046966552734375,
0.009521484375,
0.03912353515625,
-0.049652099609375,
-0.06280517578125,
-0.050994873046875,
0.023941040039062... |
nanakonoda/xnli_parallel | 2023-04-18T13:23:10.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|xnli",
"language:en",
"language:de",
"language:fr",
"mode classification",
"aligned",
"region:us"
] | nanakonoda | This dataset was taken from XNLI for a binary text classification task. It has been parallelized in English, German, and French. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# } | 0 | 6 | 2023-04-03T00:49:12 | ---
annotations_creators:
- expert-generated
language:
- en
- de
- fr
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: XNLI Parallel Corpus
size_categories:
- 100K<n<1M
source_datasets:
- extended|xnli
tags:
- mode classification
- aligned
task_categories:
- text-classification
task_ids: []
dataset_info:
- config_name: en
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': spoken
'1': written
splits:
- name: train
num_bytes: 92288
num_examples: 830
- name: test
num_bytes: 186853
num_examples: 1669
- config_name: de
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': spoken
'1': written
splits:
- name: train
num_bytes: 105681
num_examples: 830
- name: test
num_bytes: 214008
num_examples: 1669
- config_name: fr
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': spoken
'1': written
splits:
- name: train
      num_bytes: 109164
      num_examples: 830
- name: test
num_bytes: 221286
num_examples: 1669
download_size: 1864
dataset_size: 1840
---
# Dataset Card for XNLI Parallel Corpus
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
## Dataset Structure
### Data Instances
```
{
  'text': "And he said , Mama , I 'm home .",
  'label': 0
}
```
### Data Fields
- text: sentence
- label: binary label of text (0: spoken 1: written)
### Data Splits
- train: 830
- test: 1669
### Other Statistics
#### Vocabulary Size
- English
- train: 4363
- test: 7128
- German
- train: 5070
- test: 8601
- French
- train: 4881
- test: 7935
#### Average Sentence Length
- English
- train: 20.689156626506023
- test: 20.75254643499101
- German
- train: 20.367469879518072
- test: 20.639904134212102
- French
- train: 23.455421686746988
- test: 23.731575793888556
#### Label Split
- train:
- 0: 166
- 1: 664
- test:
- 0: 334
- 1: 1335
#### Out-of-vocabulary words in model
- English
- BERT (bert-base-uncased)
- train: 800
- test: 1638
- mBERT (bert-base-multilingual-uncased)
- train: 1347
- test: 2693
- German BERT (bert-base-german-dbmdz-uncased)
- train: 3228
- test: 5581
- flauBERT (flaubert-base-uncased)
- train: 4363
- test: 7128
- German
- BERT (bert-base-uncased)
- train: 4285
- test: 7387
- mBERT (bert-base-multilingual-uncased)
- train: 3126
- test: 5863
- German BERT (bert-base-german-dbmdz-uncased)
- train: 2033
- test: 3938
- flauBERT (flaubert-base-uncased)
- train: 5069
- test: 8600
- French
- BERT (bert-base-uncased)
- train: 3784
- test: 6289
- mBERT (bert-base-multilingual-uncased)
- train: 2847
- test: 5084
- German BERT (bert-base-german-dbmdz-uncased)
- train: 4212
- test: 6964
- flauBERT (flaubert-base-uncased)
- train: 4881
- test: 7935
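The vocabulary-size and average-sentence-length statistics above can be reproduced with a simple helper. This sketch assumes whitespace tokenization; the exact tokenizer used for the figures on this card is not stated.

```python
# Sketch: per-split statistics under a whitespace-tokenization assumption.
# The card does not document the tokenizer actually used, so treat the
# numbers this produces as approximate.
def split_stats(texts):
    vocab = set()
    total_tokens = 0
    for text in texts:
        tokens = text.split()
        vocab.update(tokens)          # vocabulary = set of distinct tokens
        total_tokens += len(tokens)
    return {"vocab_size": len(vocab),
            "avg_sentence_length": total_tokens / len(texts)}
```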
## Dataset Creation
### Curation Rationale
N/A
### Source Data
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A | 4,772 | [
[
-0.02764892578125,
-0.0364990234375,
0.00937652587890625,
0.0183563232421875,
-0.005191802978515625,
0.0029392242431640625,
-0.040252685546875,
-0.029571533203125,
0.049560546875,
0.01256561279296875,
-0.046112060546875,
-0.057647705078125,
-0.039794921875,
... |
hackathon-somos-nlp-2023/DiagTrast | 2023-04-09T22:38:37.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:mit",
"mental",
"medical",
"disorder",
"region:us"
] | hackathon-somos-nlp-2023 | null | null | 6 | 6 | 2023-04-04T10:53:27 | ---
dataset_info:
features:
- name: Sintoma
dtype: string
- name: Padecimiento
dtype: string
- name: Padecimiento_cat
dtype: int64
- name: Sintoma_limpia
dtype: string
splits:
- name: train
num_bytes: 524464
num_examples: 1333
download_size: 232511
dataset_size: 524464
task_categories:
- text-classification
language:
- es
size_categories:
- 1K<n<10K
license: mit
tags:
- mental
- medical
- disorder
pretty_name: DiagTrast
---
# Dataset Card for "DiagTrast"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Team members](#team-members)
## Dataset Description
### Dataset Summary
For the creation of this dataset, ChatGPT-4 was used to generate statements based on the characteristics of some of the mental disorders described in the "Manual Diagnóstico y Estadístico de Trastornos Mentales (DSM-5)". The mental disorders included are:
- Narcissistic personality disorder.
- Histrionic personality disorder.
- Borderline personality disorder.
- Antisocial personality disorder.
- Schizotypal personality disorder.
### Supported Tasks and Leaderboards
- text-classification: The dataset can be used to train a model for text classification, which consists of assigning a label or class to a given text. Some use cases are sentiment analysis, natural language inference, and assessing grammatical correctness. Success on this task is typically measured by accuracy.
### Languages
This dataset of statements is in Spanish only.
## Dataset Structure
### Data Instances
A typical instance in the dataset comprises a statement describing one or more symptoms of a disorder, the name of the disorder, a sequential numerical id representing the disorder, and the clean text of the initial statement (i.e. free of punctuation marks and connectors).
The following is a JSON-formatted example of a typical case in this dataset:
```
{
'Sintoma': "Su comportamiento es a menudo extraño y excéntrico, como llevar ropa que no coincide o actuar de una manera inapropiada en situaciones sociales.",
'Padecimiento': "Trastornos de la personalidad esquizotípica",
'Padecimiento_cat': 2,
  'Sintoma_limpia': "comportamiento menudo extraño excentrico llevar ropa coincide actuar manera inapropiada situaciones sociales"
}
```
### Data Fields
- `Sintoma`: a string, representing a paragraph that a professional would enter describing the symptoms identified in a patient.
- `Padecimiento`: a string that indicates the disorder according to DSM-5.
- `Padecimiento_cat`: an integer representing the `Padecimiento` field, this field can be used as a label in a text-classification model.
- `Sintoma_limpia`: a string, the cleaned text of the `Sintoma` field. For the text-classification task, it is advisable to use this field instead of the `Sintoma` field to reduce the noise that punctuation marks, articles, and connectors introduce into the models.
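A minimal sketch of the kind of cleaning that produces the clean-text field: the card only says the field is free of punctuation marks, articles, and connectors, so the stop-word list and accent handling below are assumptions, not the actual preprocessing used.

```python
import string
import unicodedata

# Hypothetical cleaning sketch; the actual stop-word list and accent handling
# used to build the dataset are not documented on this card.
STOPWORDS = {"su", "es", "a", "y", "o", "de", "la", "el", "que",
             "en", "una", "un", "no", "como"}

def clean_text(text):
    # strip accents, lowercase, drop punctuation, remove stop words
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(w for w in text.split() if w not in STOPWORDS)
```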
### Data Splits
The data were not split into training and test subsets, instead having a single set with the following distribution:
| Disorder | Records |
| - | - |
| Narcissistic personality disorder| 250 |
| Histrionic personality disorder | 250 |
| Borderline personality disorder | 358 |
| Antisocial personality disorder | 250 |
| Schizotypal personality disorder | 225 |
## Dataset Creation
### Curation Rationale
It was decided to create this dataset because there is an extensive manual called DSM-5 which details the characteristics that must be present in a patient to diagnose a mental disorder. Some disorders have characteristics in common as well as their differences, for this reason we sought to classify, according to the DSM-5, statements that contain symptoms and characteristics identified by health professionals.
### Source Data
The data was generated using ChatGPT: we first introduced the symptoms specified in the DSM-5 and asked it to create statements containing one or more characteristics without mentioning the name of the disorder. When the model generated the statements, a quick check was made to ensure they met the minimum expected quality, i.e., that they did not include the name of the disorder, that they were not too long or too short, and above all that they specifically contained the characteristics that were entered.
### Annotations
#### Annotation process
The generation of the data was carried out for each mental disorder, so that when we obtained the statements we also knew which label corresponded to it, so it was not necessary to make manual or automated annotations.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that through the creation of models using this or a similar dataset, we can help to reduce the diagnosis times of mental disorders and increase the number of patients that can be seen and treated. On the other hand, we must consider the importance of using these technologies properly because if these models are used indiscriminately by people who do not have sufficient knowledge or experience to detect unusual behaviors in people, these models could negatively influence people by making them believe that they have a disorder.
### Discussion of Biases
It should not be forgotten that these data have been artificially generated, so models trained on them may expect inputs different from those a real mental health professional would write. To mitigate this bias, the team closely verified the data generation process, which evolved as better prompts were identified; the statements were also filtered and fed back to the model until the desired quality was reached.
### Other Known Limitations
We have only generated data for 5 of the disorders described in the DSM-5.
## Team members
- [Alberto Martín Garrido](https://huggingface.co/Stremie)
- [Edgar Mencia](https://huggingface.co/edmenciab)
- [Miguel Ángel Solís Orozco](https://huggingface.co/homosapienssapiens)
- [Jose Carlos Vílchez Villegas](https://huggingface.co/JCarlos) | 6,851 | [
[
-0.043792724609375,
-0.05023193359375,
0.0322265625,
0.024627685546875,
-0.0177459716796875,
-0.00336456298828125,
-0.015777587890625,
-0.0272216796875,
0.037750244140625,
0.03094482421875,
-0.0640869140625,
-0.082763671875,
-0.053375244140625,
0.02877807617... |
AlekseyKorshuk/roleplay-io | 2023-04-05T21:44:58.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 9 | 6 | 2023-04-05T21:44:55 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 2495441
num_examples: 3146
download_size: 1543319
dataset_size: 2495441
---
# Dataset Card for "roleplay-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 399 | [
[
-0.029296875,
-0.0172576904296875,
0.0115509033203125,
0.0233612060546875,
-0.0022258758544921875,
-0.00885009765625,
0.028717041015625,
-0.020782470703125,
0.062225341796875,
0.041656494140625,
-0.0714111328125,
-0.0509033203125,
-0.033935546875,
-0.0283660... |
mstz/haberman | 2023-04-07T13:50:58.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"haberman",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"region:us"
] | mstz | null | @misc{misc_haberman's_survival_43,
author = {Haberman,S.},
title = {{Haberman's Survival}},
year = {1999},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5XK51}}
} | 0 | 6 | 2023-04-06T09:49:53 | ---
language:
- en
tags:
- haberman
- tabular_classification
- binary_classification
- multiclass_classification
pretty_name: Haberman
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- survival
license: cc
---
# Haberman
The [Haberman dataset](https://archive.ics.uci.edu/ml/datasets/Haberman) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Has the patient survived surgery?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| survival | Binary classification | Has the patient survived surgery? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/haberman", "survival")["train"]
``` | 832 | [
[
-0.00012636184692382812,
-0.031005859375,
0.01284027099609375,
0.0240020751953125,
-0.019805908203125,
-0.0205841064453125,
0.005706787109375,
0.01166534423828125,
0.0255889892578125,
0.044525146484375,
-0.0399169921875,
-0.051025390625,
-0.0458984375,
0.013... |
mstz/liver | 2023-04-16T17:33:33.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"ilpd",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_ilpd_(indian_liver_patient_dataset)_225,
author = {Ramana,Bendi & Venkateswarlu,N.},
title = {{ILPD (Indian Liver Patient Dataset)}},
year = {2012},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5D02C}}
} | 1 | 6 | 2023-04-06T13:53:51 | ---
language:
- en
tags:
- ilpd
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Liver
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- liver
license: cc
---
# ILPD
The [ILPD dataset](https://archive.ics.uci.edu/ml/datasets/ILPD) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------|
| liver | Binary classification | Does the patient have liver problems? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/liver")["train"]
``` | 778 | [
[
-0.024444580078125,
0.00812530517578125,
-0.001209259033203125,
0.0273284912109375,
-0.01287841796875,
-0.031524658203125,
0.01378631591796875,
-0.0173492431640625,
0.0195159912109375,
0.04443359375,
-0.01666259765625,
-0.03778076171875,
-0.043365478515625,
... |
mstz/monks | 2023-04-16T17:34:32.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"student performance",
"tabular_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_monk's_problems_70,
author = {Wnek,J.},
title = {{MONK's Problems}},
year = {1992},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5R30R}}
} | 0 | 6 | 2023-04-06T16:43:42 | ---
language:
- en
tags:
- student performance
- tabular_classification
- UCI
pretty_name: Monk
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- monks1
- monks2
- monks3
license: cc
---
# Monks
The [Monk dataset](https://archive-beta.ics.uci.edu/dataset/70/monk+s+problems) from UCI.
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| monks1 | Binary classification |
| monks2 | Binary classification |
| monks3 | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/monks", "monks1")["train"]
``` | 703 | [
[
-0.00951385498046875,
-0.01132965087890625,
0.01210784912109375,
0.044281005859375,
-0.01517486572265625,
-0.0089874267578125,
-0.023406982421875,
0.0008730888366699219,
0.01422119140625,
0.032196044921875,
-0.04010009765625,
-0.036163330078125,
-0.0503845214843... |
one-sec-cv12/chunk_0 | 2023-04-06T21:46:09.000Z | [
"region:us"
] | one-sec-cv12 | null | null | 0 | 6 | 2023-04-06T21:08:53 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 21981833424.125
num_examples: 228863
download_size: 18831760350
dataset_size: 21981833424.125
---
# Dataset Card for "chunk_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 407 | [
[
-0.037628173828125,
-0.032806396484375,
0.0225372314453125,
0.0235748291015625,
-0.033935546875,
-0.0006875991821289062,
0.0280303955078125,
-0.0181884765625,
0.0872802734375,
0.034454345703125,
-0.05279541015625,
-0.03729248046875,
-0.046661376953125,
-0.02... |
youngp5/detecting_glasses | 2023-04-10T02:42:56.000Z | [
"region:us"
] | youngp5 | null | null | 1 | 6 | 2023-04-10T02:39:37 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
gaussalgo/Canard_Wiki-augmented | 2023-04-12T13:35:37.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | gaussalgo | null | null | 0 | 6 | 2023-04-11T12:49:22 | ---
dataset_info:
features:
- name: History
sequence: string
- name: QuAC_dialog_id
dtype: string
- name: Question
dtype: string
- name: Question_no
dtype: int64
- name: Rewrite
dtype: string
- name: true_page_title
dtype: string
- name: true_contexts
dtype: string
- name: answer
dtype: string
- name: true_contexts_wiki
dtype: string
- name: extractive
dtype: bool
- name: retrieved_contexts
sequence: string
splits:
- name: train
num_bytes: 1353765609
num_examples: 31526
- name: test
num_bytes: 252071528
num_examples: 5571
download_size: 231554886
dataset_size: 1605837137
license: cc-by-sa-4.0
task_categories:
- question-answering
- conversational
- text2text-generation
language:
- en
pretty_name: Canard Wikipedia-augmented
size_categories:
- 10K<n<100K
---
# Dataset Card for Canard_Wiki-augmented
### Summary
This is a dataset of fact-retrieving conversations about Wikipedia articles, with all responses grounded in a specific segment of text in the referenced Wikipedia article.
It is an extended version of [Canard](https://sites.google.com/view/qanta/projects/canard)
and [QuAC](https://huggingface.co/datasets/quac) datasets,
augmented with the contexts of [English Wikipedia](https://huggingface.co/datasets/wikipedia).
### Supported Tasks
The dataset is intended to train a factually-consistent conversational model able to ground all its responses in the corresponding source(s).
However, the data can also be used to evaluate the information retrieval (IR) system for given queries, for contextual disambiguation of the queries from a conversation, etc.
## Dataset Structure
The dataset can be loaded by simply choosing a split (`train` or `test`) and calling:
```python
import datasets
canard_augm_test = datasets.load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
print(canard_augm_test[0]) # print the first sample
```
### Data Instances
The samples of Canard_Wiki-augmented have this format:
```python
{'History': ['Anna Politkovskaya', 'The murder remains unsolved, 2016'],
 'QuAC_dialog_id': 'C_0aaa843df0bd467b96e5a496fc0b033d_1',
 'Question': 'Did they have any clues?',
 'Question_no': 1,
 'answer': 'Her colleagues at Novaya gazeta protested that until the instigator or sponsor of the crime was identified, arrested and prosecuted the case was not closed.',
 'Rewrite': 'Did investigators have any clues in the unresolved murder of Anna Politkovskaya?',
 'true_page_title': 'Anna Politkovskaya',
 'true_contexts': 'In September 2016 Vladimir Markin, official spokesman for (...)',
 'true_contexts_wiki': 'Anna Stepanovna Politkovskaya was a US-born Russian journalist (...)',
 'extractive': True,
 'retrieved_contexts': ['Clues was an indie rock band from Montreal, Canada formed by Alden Penner (...)',
                        'High Stakes is a British game show series hosted by Jeremy Kyle, in which (...)']}
```
### Data Fields
* **History**: History of the conversation from Canard. The first two entries of the conversation are always synthetic.
* **QuAC_dialog_id**: Dialogue ID mapping the conversation to the original QuAC dataset (*dialogue_id* in QuAC).
* **Question**: Current question of the user from Canard.
* **Question_no**: Ordering of the user's question from the conversation, originally from Canard.
* **answer**: Correctly extracted answer to the given question from the relevant Wikipedia article (*true_contexts*). Note that some of the questions are open-ended, so the listed answer is not the only correct possibility.
* **Rewrite**: A rephrased version of *Question*, manually disambiguated from the context of *History* by the annotators of Canard.
* **true_page_title**: Title of the Wikipedia article containing *answer*. *wikipedia_page_title* from QuAC.
* **true_contexts**: An excerpt of the paragraph with an answer from the Wikipedia article titled *true_page_title*.
* **true_contexts_wiki**: The full contents of the Wikipedia article (*text* from the Wikipedia dataset) whose Wikipedia *title* matches *true_page_title*. Note that the Wikipedia dataset was retrieved on April 2, 2023.
* **extractive**: A flag indicating whether the *answer* in this sample can be found as an exact match in *true_contexts_wiki*.
* **retrieved_contexts**: "Distractor" contexts retrieved from the full Wikipedia dataset using the Okapi BM25 IR system on the **Rewrite** question.
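To give a feel for how such distractors are ranked, here is a minimal, self-contained sketch of Okapi BM25 scoring. The toy corpus, whitespace tokenization, and the `k1`/`b` values are illustrative assumptions only — this is not the pipeline actually used to build the dataset:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score every document in `corpus_tokens` against the query with Okapi BM25."""
    n_docs = len(corpus_tokens)
    avg_len = sum(len(doc) for doc in corpus_tokens) / n_docs
    # Document frequency of each term across the corpus.
    df = Counter(term for doc in corpus_tokens for term in set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avg_len)
            )
        scores.append(score)
    return scores

# Toy stand-ins for Wikipedia paragraphs (illustrative only).
corpus = [
    "anna politkovskaya was a russian journalist".split(),
    "clues was an indie rock band from montreal".split(),
    "high stakes is a british game show".split(),
]
query = "murder of anna politkovskaya".split()
scores = bm25_scores(query, corpus)
best = max(range(len(corpus)), key=scores.__getitem__)
```

In this toy setup the article sharing query terms ranks first, while the unrelated "Clues"/"High Stakes" paragraphs score zero — mirroring how lexically similar but topically wrong distractors can slip in.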
### Data Splits
* **train** split is aligned with the training splits of Canard and QuAC.
* **test** split matches the validation split of QuAC and the test split of Canard (where the conversation ids match).
## Licensing
This dataset is composed of [QuAC](https://huggingface.co/datasets/quac) (MIT),
[Canard](https://sites.google.com/view/qanta/projects/canard) (CC BY-SA 4.0)
and [Wikipedia](https://huggingface.co/datasets/wikipedia) (CC BY-SA 3.0).
Canard_Wiki-augmented is therefore licensed under CC BY-SA 4.0 as well, which also allows commercial use.
## Cite
If you use this dataset in your research, do not forget to cite the authors of the original datasets from which Canard_Wiki-augmented is derived:
[QuAC](https://huggingface.co/datasets/quac), [Canard](https://sites.google.com/view/qanta/projects/canard). | 5,259 | [
[
-0.038299560546875,
-0.054046630859375,
0.0306243896484375,
0.0025386810302734375,
-0.0203704833984375,
-0.0176849365234375,
-0.016021728515625,
-0.04022216796875,
0.042083740234375,
0.0310516357421875,
-0.056182861328125,
-0.04241943359375,
-0.032012939453125,
... |
metaeval/chaos-mnli-ambiguity | 2023-04-11T13:43:17.000Z | [
"region:us"
] | metaeval | null | null | 0 | 6 | 2023-04-11T13:37:19 | Chaos NLI MNLI portion with gini coefficient pre-computed (from 0 to 1)
A high Gini coefficient means the inference is unambiguous (annotators agree on one label).
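One common way to compute a Gini coefficient over an annotator label distribution is sketched below; the exact formulation used to pre-compute the column is an assumption here, but the intuition carries over (uniform votes → 0, concentrated votes → high):

```python
def gini(probs):
    """Gini coefficient of a label probability distribution.

    0 means perfectly uniform (maximally ambiguous); values approach
    (n - 1) / n as all mass concentrates on a single label.
    """
    n = len(probs)
    mean = sum(probs) / n
    abs_diffs = sum(abs(x - y) for x in probs for y in probs)
    return abs_diffs / (2 * n * n * mean)

ambiguous = gini([1 / 3, 1 / 3, 1 / 3])    # annotators split evenly
unambiguous = gini([0.9, 0.05, 0.05])      # annotators mostly agree
```

With three NLI labels (entailment/neutral/contradiction), an even split scores 0 and near-total agreement scores close to 2/3 before any rescaling to the stated 0–1 range.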
```
@inproceedings{xzhou2022distnli,
Author = {Xiang Zhou and Yixin Nie and Mohit Bansal},
Booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
Publisher = {Association for Computational Linguistics},
Title = {Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning},
Year = {2022}
}
``` | 466 | [
[
-0.033538818359375,
-0.041107177734375,
0.0030956268310546875,
0.042388916015625,
-0.00946807861328125,
-0.0067138671875,
0.0035762786865234375,
-0.0372314453125,
0.04150390625,
-0.0026397705078125,
-0.0635986328125,
-0.03155517578125,
-0.0301513671875,
0.01... |
hs-knowledge/hateval_enriched | 2023-06-16T14:35:58.000Z | [
"region:us"
] | hs-knowledge | null | null | 0 | 6 | 2023-04-11T22:36:17 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: HS
dtype:
class_label:
names:
'0': OK
'1': HATEFUL
- name: TR
dtype:
class_label:
names:
'0': GROUP
'1': INDIVIDUAL
- name: AG
dtype:
class_label:
names:
'0': NOT AGGRESSIVE
'1': AGGRESSIVE
- name: ner_output
struct:
- name: entities
list:
- name: end
dtype: int64
- name: kg_results
struct:
- name: '@context'
struct:
- name: '@vocab'
dtype: string
- name: EntitySearchResult
dtype: string
- name: detailedDescription
dtype: string
- name: goog
dtype: string
- name: kg
dtype: string
- name: resultScore
dtype: string
- name: '@type'
dtype: string
- name: itemListElement
list:
- name: '@type'
dtype: string
- name: result
struct:
- name: '@id'
dtype: string
- name: '@type'
sequence: string
- name: description
dtype: string
- name: detailedDescription
struct:
- name: articleBody
dtype: string
- name: license
dtype: string
- name: url
dtype: string
- name: image
struct:
- name: contentUrl
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: url
dtype: string
- name: resultScore
dtype: float64
- name: wikidata_id
dtype: string
- name: query_text
dtype: string
- name: start
dtype: int64
- name: text
dtype: string
- name: type
dtype: string
- name: labels
sequence: string
- name: sentence
dtype: string
- name: tokens
sequence: string
- name: entities
list:
- name: '@type'
dtype: string
- name: end
dtype: int64
- name: kg_result
struct:
- name: '@id'
dtype: string
- name: '@type'
sequence: string
- name: description
dtype: string
- name: detailedDescription
struct:
- name: articleBody
dtype: string
- name: license
dtype: string
- name: url
dtype: string
- name: image
struct:
- name: contentUrl
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: url
dtype: string
- name: resultScore
dtype: float64
- name: score
dtype: float64
- name: similarity
dtype: float64
- name: start
dtype: int64
- name: text
dtype: string
- name: type
dtype: string
- name: wikidata_id
dtype: string
splits:
- name: train
num_bytes: 68155606
num_examples: 9000
- name: test
num_bytes: 25403783
num_examples: 3000
- name: dev
num_bytes: 9040250
num_examples: 1000
download_size: 0
dataset_size: 102599639
---
# Dataset Card for "hateval_enriched"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,606 | [
[
-0.046630859375,
-0.030242919921875,
-0.0002608299255371094,
0.004150390625,
-0.0192413330078125,
0.004253387451171875,
0.01441192626953125,
-0.021392822265625,
0.0662841796875,
0.031829833984375,
-0.0550537109375,
-0.053253173828125,
-0.051483154296875,
-0.... |
BramVanroy/alpaca-cleaned-dutch-baize | 2023-06-18T13:05:51.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:nl",
"license:cc-by-nc-4.0",
"alpaca",
"chat",
"region:us"
] | BramVanroy | null | null | 1 | 6 | 2023-04-12T07:33:03 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
- text-generation
language:
- nl
tags:
- alpaca
- chat
pretty_name: Alpaca Cleaned Dutch - Baize
size_categories:
- 10K<n<100K
---
This is a transformed copy of [Alpaca Cleaned Dutch](https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch) to make it suitable for the format used in [Baize](https://github.com/project-baize/baize-chatbot). Please refer to that dataset for more information, which includes:
- licensing information;
- biases description;
- and citation requirements.
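For readers converting similar Alpaca-style data themselves, here is a minimal sketch of folding one instruction record into a single Baize dialogue string. The `[|Human|]`/`[|AI|]` turn markers follow the convention used in the Baize repository, and the Alpaca column names (`instruction`, `input`, `output`) are assumptions — verify both against the upstream projects:

```python
def alpaca_to_baize(instruction, inp, output):
    """Fold one Alpaca-style record into a single Baize dialogue string (sketch)."""
    # Alpaca's optional `input` field is appended to the instruction, if present.
    human_turn = instruction if not inp else f"{instruction}\n{inp}"
    return (
        "The conversation between human and AI assistant.\n"
        f"[|Human|] {human_turn}\n"
        f"[|AI|] {output}\n"
        "[|Human|] "  # trailing marker: Baize-style end-of-dialogue cue (assumption)
    )

sample = alpaca_to_baize("Vertaal naar het Engels.", "Goedemorgen!", "Good morning!")
```

Each original instruction/response pair thus becomes one flat training string rather than separate columns.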
### Citation Information
If you use this dataset, please cite:
Bram Vanroy. (2023). Alpaca Cleaned Dutch [Data set]. Hugging Face. https://doi.org/10.57967/HF/0530
```bibtex
@misc{https://doi.org/10.57967/hf/0530,
doi = {10.57967/HF/0530},
url = {https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch},
author = {{Bram Vanroy}},
title = {{A}lpaca {C}leaned {D}utch},
publisher = {Hugging Face},
year = {2023}
}
```
| 1,003 | [
[
-0.019134521484375,
-0.056732177734375,
0.004245758056640625,
0.05120849609375,
-0.0238189697265625,
-0.040008544921875,
-0.008453369140625,
-0.03936767578125,
0.0703125,
0.055633544921875,
-0.049346923828125,
-0.02789306640625,
-0.056365966796875,
0.0029678... |
mstz/splice | 2023-04-16T18:03:01.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"splice",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_molecular_biology_(splice-junction_gene_sequences)_69,
title = {{Molecular Biology (Splice-junction Gene Sequences)}},
year = {1992},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5M888}}
} | 0 | 6 | 2023-04-13T09:16:09 | ---
language:
- en
tags:
- splice
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Splice
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- splice
- splice_EI
- splice_IE
- splice_N
license: cc
---
# Splice
The [Splice dataset](https://archive-beta.ics.uci.edu/dataset/69/molecular+biology+splice+junction+gene+sequences) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| splice | Multiclass classification |
| splice_EI | Binary classification |
| splice_IE | Binary classification |
| splice_N | Binary classification | | 789 | [
[
-0.047210693359375,
-0.036041259765625,
0.0171356201171875,
0.01226806640625,
-0.0171356201171875,
0.006366729736328125,
-0.00019884109497070312,
0.00463104248046875,
0.03057861328125,
0.037872314453125,
-0.0335693359375,
-0.041748046875,
-0.051177978515625,
... |
mstz/landsat | 2023-04-16T17:33:23.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"landsat",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_statlog_(landsat_satellite)_146,
author = {Srinivasan,Ashwin},
title = {{Statlog (Landsat Satellite)}},
year = {1993},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C55887}}
} | 0 | 6 | 2023-04-14T10:42:16 | ---
language:
- en
tags:
- landsat
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Landsat
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- landsat
- landsat_binary
license: cc
---
# Landsat
The [Landsat dataset](https://archive-beta.ics.uci.edu/dataset/146/statlog+landsat+satellite) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| landsat | Multiclass classification.| |
| landsat_0 | Binary classification. | Is the image of class 0? |
| landsat_1 | Binary classification. | Is the image of class 1? |
| landsat_2 | Binary classification. | Is the image of class 2? |
| landsat_3 | Binary classification. | Is the image of class 3? |
| landsat_4 | Binary classification. | Is the image of class 4? |
| landsat_5 | Binary classification. | Is the image of class 5? | | 1,194 | [
[
-0.0577392578125,
0.005214691162109375,
0.0229949951171875,
0.00778961181640625,
-0.0303802490234375,
0.010345458984375,
0.007244110107421875,
-0.00949859619140625,
0.0160980224609375,
0.057830810546875,
-0.03173828125,
-0.0516357421875,
-0.0592041015625,
-0... |
mstz/waveform_noise_v1 | 2023-04-16T18:04:18.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<5K",
"language:en",
"license:cc",
"waveformnoiseV1",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_waveform_database_generator_(version_1)_107,
author = {Breiman,L. & Stone,C.J.},
title = {{Waveform Database Generator (Version 1)}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5CS3C}}
} | 0 | 6 | 2023-04-14T16:00:46 | ---
language:
- en
tags:
- waveformnoiseV1
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: WaveformNoiseV1
size_categories:
- 1K<n<5K
task_categories:
- tabular-classification
configs:
- waveformnoiseV1
- waveformnoiseV1_0
- waveformnoiseV1_1
- waveformnoiseV1_2
license: cc
---
# WaveformNoiseV1
The [WaveformNoiseV1 dataset](https://archive-beta.ics.uci.edu/dataset/107/waveform+database+generator+version+1) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| waveformnoiseV1 | Multiclass classification.| |
| waveformnoiseV1_0     | Binary classification.    | Is this an instance of class 0? |
| waveformnoiseV1_1     | Binary classification.    | Is this an instance of class 1? |
| waveformnoiseV1_2     | Binary classification.    | Is this an instance of class 2? | | 1,071 | [
[
-0.05499267578125,
-0.0194854736328125,
0.01486968994140625,
0.029083251953125,
-0.0236968994140625,
-0.00809478759765625,
-0.0007648468017578125,
-0.0225677490234375,
0.0184478759765625,
0.037689208984375,
-0.04949951171875,
-0.057098388671875,
-0.0451354980468... |
mstz/yeast | 2023-04-25T09:22:12.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"yeast",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_yeast_110,
author = {Nakai,Kenta},
title = {{Yeast}},
year = {1996},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5KG68}}
} | 0 | 6 | 2023-04-14T22:01:54 | ---
language:
- en
tags:
- yeast
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Yeast
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- yeast
- yeast_0
- yeast_1
- yeast_2
- yeast_3
- yeast_4
- yeast_5
- yeast_6
- yeast_7
- yeast_8
- yeast_9
license: cc
---
# Yeast
The [Yeast dataset](https://archive-beta.ics.uci.edu/dataset/110/yeast) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/yeast")["train"]
```
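Besides the default configuration loaded above, the card's YAML lists binary configurations `yeast_0` through `yeast_9`. How such binary targets relate to the multiclass labels can be sketched as follows (the 0–9 integer encoding of classes is an assumption based on the configuration names):

```python
def binarize(labels, positive_class):
    """Turn multiclass labels into 0/1 targets for one `yeast_k`-style configuration."""
    return [1 if label == positive_class else 0 for label in labels]

multiclass_labels = [0, 3, 9, 3, 1]
targets_for_yeast_3 = binarize(multiclass_labels, positive_class=3)
```

Each `yeast_k` configuration is thus a one-vs-rest view of the same underlying data.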
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| yeast | Multiclass classification.| |
| yeast_0 | Binary classification. | Is this an instance of class 0? |
| yeast_1 | Binary classification. | Is this an instance of class 1? |
| yeast_2 | Binary classification. | Is this an instance of class 2? |
| yeast_3 | Binary classification. | Is this an instance of class 3? |
| yeast_4 | Binary classification. | Is this an instance of class 4? |
| yeast_5 | Binary classification. | Is this an instance of class 5? |
| yeast_6 | Binary classification. | Is this an instance of class 6? |
| yeast_7 | Binary classification. | Is this an instance of class 7? |
| yeast_8 | Binary classification. | Is this an instance of class 8? |
| yeast_9 | Binary classification. | Is this an instance of class 9? | | 1,682 | [
[
-0.0227508544921875,
-0.01457977294921875,
0.025604248046875,
0.0313720703125,
0.011566162109375,
0.0031261444091796875,
-0.00420379638671875,
0.0009145736694335938,
0.021484375,
0.0345458984375,
-0.0445556640625,
-0.047210693359375,
-0.03179931640625,
0.022... |
mstz/golf | 2023-04-20T09:33:01.000Z | [
"task_categories:tabular-classification",
"language:en",
"golf",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | 0 | 6 | 2023-04-17T12:40:47 | ---
language:
- en
tags:
- golf
- tabular_classification
- binary_classification
pretty_name: Golf
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- golf
---
# Golf
The Golf dataset.
Is it a good day to play golf?
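The play-golf task is the classic toy weather problem, and a hand-written decision rule gives a feel for it. The feature names below (outlook, humidity, windy) come from the well-known textbook version of this dataset and are an assumption about this card's actual columns:

```python
def play_golf(outlook, humidity, windy):
    """A hand-written decision rule over the classic weather features (illustrative)."""
    if outlook == "overcast":
        return True                 # overcast days are always playable
    if outlook == "sunny":
        return humidity <= 75       # sunny is fine unless it is humid
    return not windy                # rainy days are playable only without wind

decision = play_golf("sunny", humidity=70, windy=False)
```

A tabular classifier trained on this dataset essentially recovers a rule of this shape.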
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| golf | Binary classification.|
| 500 | [
[
-0.0146026611328125,
-0.037689208984375,
-0.001392364501953125,
0.0036773681640625,
-0.037933349609375,
0.000946044921875,
0.0019512176513671875,
-0.01947021484375,
0.00936126708984375,
0.0303802490234375,
-0.041351318359375,
-0.06890869140625,
-0.05221557617187... |
cq01/mawps-asdiv-a_svamp | 2023-04-19T09:09:31.000Z | [
"region:us"
] | cq01 | null | null | 0 | 6 | 2023-04-17T14:24:52 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Numbers
sequence: float64
- name: Equation
sequence: string
- name: Answer
dtype: float64
- name: group_nums
sequence: int64
- name: Body
dtype: string
- name: Ques
dtype: string
- name: Type
dtype: string
- name: Variation Type
sequence: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 1570340
num_examples: 3138
- name: validation
num_bytes: 595992
num_examples: 1000
download_size: 634229
dataset_size: 2166332
---
# Dataset Card for "mawps-asdiv-a_svamp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 761 | [
[
-0.05859375,
0.0028705596923828125,
-0.003032684326171875,
0.02587890625,
-0.0298614501953125,
-0.006252288818359375,
0.0298309326171875,
0.002880096435546875,
0.053314208984375,
0.040557861328125,
-0.07147216796875,
-0.048736572265625,
-0.061187744140625,
-... |
mstz/letter | 2023-04-17T14:50:00.000Z | [
"task_categories:tabular-classification",
"language:en",
"letter",
"tabular_classification",
"multiclass_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_letter_recognition_59,
author = {Slate,David},
title = {{Letter Recognition}},
year = {1991},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5ZP40}}
} | 0 | 6 | 2023-04-17T14:47:13 | ---
language:
- en
tags:
- letter
- tabular_classification
- multiclass_classification
- binary_classification
- UCI
pretty_name: Letter
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- letter
---
# Letter
The [Letter dataset](https://archive-beta.ics.uci.edu/dataset/59/letter+recognition) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Letter recognition.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| letter | Multiclass classification.| |
| A | Binary classification. | Is this letter A? |
| B | Binary classification. | Is this letter B? |
| C | Binary classification. | Is this letter C? |
| ... | Binary classification. | ... |
| 1,048 | [
[
-0.0229949951171875,
-0.005886077880859375,
0.0218505859375,
0.002239227294921875,
-0.004001617431640625,
0.0182952880859375,
-0.004817962646484375,
-0.0232086181640625,
0.0250396728515625,
0.0293121337890625,
-0.048675537109375,
-0.0609130859375,
-0.06359863281... |
mstz/p53 | 2023-04-17T15:55:36.000Z | [
"task_categories:tabular-classification",
"language:en",
"p53",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | @misc{misc_p53_mutants_188,
author = {Lathrop,Richard},
title = {{p53 Mutants}},
year = {2010},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5T89H}}
} | 0 | 6 | 2023-04-17T15:24:41 | ---
language:
- en
tags:
- p53
- tabular_classification
- binary_classification
pretty_name: P53
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- p53
---
# P53
The [P53 dataset](https://archive-beta.ics.uci.edu/dataset/170/p53) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| p53 | Binary classification.| |
| 655 | [
[
-0.00937652587890625,
0.004337310791015625,
0.0102081298828125,
0.0300445556640625,
-0.016815185546875,
-0.002468109130859375,
0.018890380859375,
-0.0142059326171875,
0.005039215087890625,
0.0248870849609375,
-0.036376953125,
-0.05975341796875,
-0.05230712890625... |
mstz/soybean | 2023-04-18T08:09:13.000Z | [
"task_categories:tabular-classification",
"language:en",
"soybean",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_soybean_(large)_90,
author = {Michalski,R.S. & Chilausky,R.L.},
title = {{Soybean (Large)}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5JG6Z}}
} | 0 | 6 | 2023-04-18T08:01:59 | ---
language:
- en
tags:
- soybean
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Soybean
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- soybean
---
# Soybean
The [Soybean dataset](https://archive-beta.ics.uci.edu/dataset/90/soybean+large) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Classify the type of soybean.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-----------------|
| soybean | Multiclass classification.| Classify soybean type. |
| diaporthe_stem_canker | Binary classification | Is this an instance of class diaporthe_stem_canker? |
| charcoal_rot | Binary classification | Is this an instance of class charcoal_rot? |
| rhizoctonia_root_rot | Binary classification | Is this an instance of class rhizoctonia_root_rot? |
| phytophthora_rot | Binary classification | Is this an instance of class phytophthora_rot? |
| brown_stem_rot | Binary classification | Is this an instance of class brown_stem_rot? |
| powdery_mildew | Binary classification | Is this an instance of class powdery_mildew? |
| downy_mildew | Binary classification | Is this an instance of class downy_mildew? |
| brown_spot | Binary classification | Is this an instance of class brown_spot? |
| bacterial_blight | Binary classification | Is this an instance of class bacterial_blight? |
| bacterial_pustule | Binary classification | Is this an instance of class bacterial_pustule? |
| purple_seed_stain | Binary classification | Is this an instance of class purple_seed_stain? |
| anthracnose | Binary classification | Is this an instance of class anthracnose? |
| phyllosticta_leaf_spot | Binary classification | Is this an instance of class phyllosticta_leaf_spot? |
| alternarialeaf_spot | Binary classification | Is this an instance of class alternarialeaf_spot? |
| frog_eye_leaf_spot | Binary classification | Is this an instance of class frog_eye_leaf_spot? |
| diaporthe_pod_&_stem_blight | Binary classification | Is this an instance of class diaporthe_pod_&_stem_blight? |
| cyst_nematode | Binary classification | Is this an instance of class cyst_nematode? |
| 2_4_d_injury | Binary classification | Is this an instance of class 2_4_d_injury? |
| herbicide_injury | Binary classification | Is this an instance of class herbicide_injury? | | 2,454 | [
[
-0.0103759765625,
-0.022369384765625,
0.037689208984375,
0.01412200927734375,
0.00371551513671875,
0.0172271728515625,
0.007965087890625,
-0.0222930908203125,
0.029205322265625,
0.03082275390625,
-0.0173797607421875,
-0.07232666015625,
-0.05499267578125,
0.0... |
mstz/uscensus | 2023-04-18T09:01:20.000Z | [
"task_categories:tabular-classification",
"language:en",
"uscensus",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | 0 | 6 | 2023-04-18T08:50:06 | ---
language:
- en
tags:
- uscensus
- tabular_classification
- binary_classification
- UCI
pretty_name: Uscensus
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- uscensus
---
# Uscensus
The US Census dataset from the [UCI repository](https://archive-beta.ics.uci.edu/).
[
-0.02423095703125,
-0.00862884521484375,
0.0016613006591796875,
0.00832366943359375,
-0.035308837890625,
0.00939178466796875,
0.025909423828125,
0.022674560546875,
0.038299560546875,
0.067626953125,
-0.03363037109375,
-0.04388427734375,
-0.02044677734375,
-0... |
renumics/cifar10-enriched | 2023-06-06T07:42:35.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"size_categories:10K<n<100K",
"source_datasets:extended|cifar10",
"language:en",
"license:apache-2.0",
"image classification",
"cifar-10",
"cifar-10-enriched",
"embeddings",
"enhanced",
"spotlight",
"region:... | renumics | The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images
per class. There are 50000 training images and 10000 test images.
This version of CIFAR-10 is enriched with metadata such as embeddings, baseline results and label error scores. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | 1 | 6 | 2023-04-18T15:16:41 | ---
license: apache-2.0
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: cifar-10
pretty_name: CIFAR-10
size_categories:
- 10K<n<100K
source_datasets:
- extended|cifar10
tags:
- image classification
- cifar-10
- cifar-10-enriched
- embeddings
- enhanced
- spotlight
language:
- en
---
# Dataset Card for CIFAR-10-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar10-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [CS Toronto Homepage](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=cifar10-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/cifar10-enriched", split="train")
```
Start exploring with a simple view:
```python
from renumics import spotlight
df = dataset.to_pandas()
df_show = df.drop(columns=['img'])
spotlight.show(df_show, port=8000, dtype={"img_path": spotlight.Image})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### CIFAR-10 Dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.
Here is the list of classes in the CIFAR-10:
- airplane
- automobile
- bird
- cat
- deer
- dog
- frog
- horse
- ship
- truck
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-10).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x7FD19FABC1D0>,
'img_path': '/huggingface/datasets/downloads/extracted/7faec2e0fd4aa3236f838ed9b105fef08d1a6f2a6bdeee5c14051b64619286d5/0/0.png',
'label': 0,
'split': 'train'
}
```
### Data Fields
| Feature | Data Type |
|---------------------------------|-----------------------------------------------|
| img | Image(decode=True, id=None) |
| img_path | Value(dtype='string', id=None) |
| label | ClassLabel(names=[...], id=None) |
| split | Value(dtype='string', id=None) |
### Data Splits
| Dataset Split | Number of Images in Split | Samples per Class |
| ------------- |---------------------------| -------------------------|
| Train | 50000 | 5000 |
| Test | 10000 | 1000 |
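A quick way to sanity-check these splits after loading is to count labels. The sketch below operates on a plain label list so it stays self-contained; you would apply it to `dataset["label"]` after `datasets.load_dataset("renumics/cifar10-enriched", split="train")`:

```python
from collections import Counter

def check_split(labels, expected_total, expected_per_class, n_classes=10):
    """Verify a CIFAR-10 split is balanced: total size and per-class counts."""
    counts = Counter(labels)
    assert len(labels) == expected_total, f"expected {expected_total} samples"
    assert all(counts[c] == expected_per_class for c in range(n_classes))
    return counts

# Tiny synthetic "split": 2 samples for each of the 10 classes.
toy_labels = [c for c in range(10) for _ in range(2)]
counts = check_split(toy_labels, expected_total=20, expected_per_class=2)
```

On the real data, the expected arguments would be 50000/5000 for the train split and 10000/1000 for the test split, per the table above.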
## Dataset Creation
### Curation Rationale
The CIFAR-10 and CIFAR-100 are labeled subsets of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this dataset, please cite the following paper:
```
@article{krizhevsky2009learning,
added-at = {2021-01-21T03:01:11.000+0100},
author = {Krizhevsky, Alex},
biburl = {https://www.bibsonomy.org/bibtex/2fe5248afe57647d9c85c50a98a12145c/s364315},
interhash = {cc2d42f2b7ef6a4e76e47d1a50c8cd86},
intrahash = {fe5248afe57647d9c85c50a98a12145c},
keywords = {},
pages = {32--33},
timestamp = {2021-01-21T03:01:11.000+0100},
title = {Learning Multiple Layers of Features from Tiny Images},
url = {https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf},
year = 2009
}
```
### Contributions
Alex Krizhevsky, Vinod Nair, Geoffrey Hinton, and Renumics GmbH. | 6,977 | [
[
-0.053009033203125,
-0.034576416015625,
0.0055084228515625,
0.00772857666015625,
-0.00984954833984375,
-0.0034618377685546875,
-0.0296630859375,
-0.055145263671875,
0.0249481201171875,
0.0075836181640625,
-0.024261474609375,
-0.04766845703125,
-0.0465087890625,
... |
Lakera/autotrain-data-cancer-lakera | 2023-04-19T15:06:49.000Z | [
"task_categories:image-classification",
"region:us"
] | Lakera | null | null | 1 | 6 | 2023-04-19T14:59:00 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: cancer-lakera
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cancer-lakera.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x450 RGB PIL image>",
"feat_image_id": "ISIC_0024329",
"feat_lesion_id": "HAM_0002954",
"target": 0,
"feat_dx_type": "histo",
"feat_age": 75.0,
"feat_sex": "female",
"feat_localization": "lower extremity"
},
{
"image": "<600x450 RGB PIL image>",
"feat_image_id": "ISIC_0024372",
"feat_lesion_id": "HAM_0005389",
"target": 0,
"feat_dx_type": "histo",
"feat_age": 70.0,
"feat_sex": "male",
"feat_localization": "lower extremity"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"feat_image_id": "Value(dtype='string', id=None)",
"feat_lesion_id": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['actinic_keratoses', 'basal_cell_carcinoma', 'benign_keratosis-like_lesions'], id=None)",
"feat_dx_type": "Value(dtype='string', id=None)",
"feat_age": "Value(dtype='float64', id=None)",
"feat_sex": "Value(dtype='string', id=None)",
"feat_localization": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 150 |
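Since the `target` field is a `ClassLabel`, model predictions come back as integer indices. A minimal sketch of mapping them to readable names (the class list is copied verbatim from the field definition above; the helper function itself is a hypothetical illustration, not part of AutoTrain):

```python
# Class names as declared in the "target" ClassLabel field above.
CLASS_NAMES = [
    "actinic_keratoses",
    "basal_cell_carcinoma",
    "benign_keratosis-like_lesions",
]

def target_to_name(target: int) -> str:
    """Map an integer target index back to its class name."""
    return CLASS_NAMES[target]

# Both samples shown above have target 0:
print(target_to_name(0))  # actinic_keratoses
```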
joey234/mmlu-high_school_physics-neg | 2023-04-20T06:14:21.000Z | [
"region:us"
] | joey234 | null | null | 2 | 6 | 2023-04-20T06:14:17 | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 52597
num_examples: 151
download_size: 29012
dataset_size: 52597
---
# Dataset Card for "mmlu-high_school_physics-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cartesinus/iva_mt_wslot-exp | 2023-04-26T21:53:33.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:de",
"language:es",
"language:sv",
"license:cc-by-4.0",
"machine translation",
"nlu",
"natural-language-understanding",
"virtual assistant",
"region:us"
] | cartesinus | \ | null | 0 | 6 | 2023-04-26T21:49:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: locale
dtype: string
- name: origin
dtype: string
- name: partition
dtype: string
- name: translation_utt
dtype:
translation:
languages:
- en
- pl
- name: translation_xml
dtype:
translation:
languages:
- en
- pl
- name: src_bio
dtype: string
- name: tgt_bio
dtype: string
task_categories:
- translation
language:
- en
- pl
- de
- es
- sv
tags:
- machine translation
- nlu
- natural-language-understanding
- virtual assistant
pretty_name: Machine translation for NLU with slot transfer
size_categories:
- 10K<n<100K
license: cc-by-4.0
---
# Machine translation dataset for NLU (Virtual Assistant) with slot transfer between languages
## Dataset Summary
Disclaimer: This is for research purposes only. Please have a look at the license section below. Some of the datasets used to construct IVA_MT have an unknown license.
IVA_MT is a machine translation dataset that can be used to train, adapt, and evaluate MT models used in a Virtual Assistant NLU context (e.g., to translate the training corpus of an NLU system).
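The features pair an XML-annotated translation (`translation_xml`) with BIO slot labels (`src_bio`, `tgt_bio`). This card does not document the exact slot markup, but assuming slot values are wrapped in XML-style tags (e.g. `wake me up at <time>seven am</time>`), deriving BIO labels can be sketched as:

```python
import re

# Assumed markup: slot values wrapped in XML-style tags named after the slot.
TAG_RE = re.compile(r"<(?P<slot>\w+)>(?P<value>.*?)</(?P=slot)>")

def xml_to_bio(utterance: str):
    """Convert a slot-tagged utterance to (tokens, BIO labels)."""
    tokens, labels = [], []
    pos = 0
    for m in TAG_RE.finditer(utterance):
        # Tokens before the tag fall outside any slot.
        for tok in utterance[pos:m.start()].split():
            tokens.append(tok)
            labels.append("O")
        for i, tok in enumerate(m.group("value").split()):
            tokens.append(tok)
            labels.append(("B-" if i == 0 else "I-") + m.group("slot"))
        pos = m.end()
    for tok in utterance[pos:].split():
        tokens.append(tok)
        labels.append("O")
    return tokens, labels

print(xml_to_bio("wake me up at <time>seven am</time>"))
# (['wake', 'me', 'up', 'at', 'seven', 'am'], ['O', 'O', 'O', 'O', 'B-time', 'I-time'])
```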
## Dataset Composition
### en-pl
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 11514 | 2033 | 2974 |
| [Leyzer 0.2.0](https://github.com/cartesinus/leyzer/tree/0.2.0) | 3974 | 701 | 1380 |
| [OpenSubtitles from OPUS](https://opus.nlpl.eu/OpenSubtitles-v1.php) | 2329 | 411 | 500 |
| [KDE from OPUS](https://opus.nlpl.eu/KDE4.php) | 1154 | 241 | 241 |
| [CCMatrix from Opus](https://opus.nlpl.eu/CCMatrix.php) | 1096 | 232 | 237 |
| [Ubuntu from OPUS](https://opus.nlpl.eu/Ubuntu.php) | 281 | 60 | 59 |
| [Gnome from OPUS](https://opus.nlpl.eu/GNOME.php) | 14 | 3 | 3 |
| *total* | 20362 | 3681 | 5394 |
### en-de
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7536 | 1346 | 1955 |
### en-es
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8415 | 1526 | 2202 |
### en-sv
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7540 | 1360 | 1921 |
## Tools
Scripts used to generate this dataset can be found on [github](https://github.com/cartesinus/iva_mt).
## License
This is a composition of 7 datasets, and each license is as defined in the original release:
- MASSIVE: [CC-BY 4.0](https://huggingface.co/datasets/AmazonScience/massive/blob/main/LICENSE)
- Leyzer: [CC BY-NC 4.0](https://github.com/cartesinus/leyzer/blob/master/LICENSE)
- OpenSubtitles: unknown
- KDE: [GNU Public License](https://l10n.kde.org/about.php)
- CCMatrix: no license given, therefore assuming it is LASER project license [BSD](https://github.com/facebookresearch/LASER/blob/main/LICENSE)
- Ubuntu: [GNU Public License](https://help.launchpad.net/Legal)
- Gnome: unknown
vjain/tax_embeddings | 2023-04-27T02:51:28.000Z | [
"region:us"
] | vjain | null | null | 1 | 6 | 2023-04-26T23:34:42 | Entry not found
crumb/Clean-Instruct-440k | 2023-04-28T21:20:34.000Z | [
"task_categories:conversational",
"language:en",
"license:mit",
"region:us"
] | crumb | null | null | 7 | 6 | 2023-04-28T21:02:52 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 650842125.0
num_examples: 443612
download_size: 357775511
dataset_size: 650842125.0
license: mit
task_categories:
- conversational
language:
- en
---
# Dataset Card for "Clean-Instruct"
[yahma/alpaca-cleaned](https://hf.co/datasets/yahma/alpaca-cleaned) + [crumb/gpt4all-clean](https://hf.co/datasets/crumb/gpt4all-clean) + GPTeacher-Instruct-Dedup
It isn't perfect, but it contains 443k high-quality, semi-cleaned instructions free of "As an AI language model" boilerplate.
```python
from datasets import load_dataset
dataset = load_dataset("crumb/clean-instruct", split="train")
def promptify(example):
if example['input']!='':
return {"text": f"<instruction> {example['instruction']} <input> {example['input']} <output> {example['output']}"}
return {"text": f"<instruction> {example['instruction']} <output> {example['output']}"}
dataset = dataset.map(promptify, batched=False)
dataset = dataset.remove_columns(["instruction", "input", "output"])
```
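As a quick offline sanity check of the `promptify` logic, it can be exercised on a plain dict (the function is repeated so the snippet runs standalone; the sample row is made up, not taken from the dataset):

```python
def promptify(example):
    # Same logic as in the card: include <input> only when it is non-empty.
    if example["input"] != "":
        return {"text": f"<instruction> {example['instruction']} <input> {example['input']} <output> {example['output']}"}
    return {"text": f"<instruction> {example['instruction']} <output> {example['output']}"}

# Hypothetical sample row:
row = {"instruction": "Add the numbers.", "input": "2 and 3", "output": "5"}
print(promptify(row)["text"])
# <instruction> Add the numbers. <input> 2 and 3 <output> 5
```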
theblackcat102/oasst-red-team | 2023-05-07T09:15:21.000Z | [
"language:en",
"language:de",
"language:fr",
"language:ru",
"language:zh",
"language:ja",
"language:it",
"language:pt",
"language:th",
"language:nl",
"language:ro",
"language:pl",
"language:hu",
"language:hr",
"region:us"
] | theblackcat102 | null | null | 0 | 6 | 2023-05-06T09:59:47 | ---
language:
- en
- de
- fr
- ru
- zh
- ja
- it
- pt
- th
- nl
- ro
- pl
- hu
- hr
---
Work in progress
Red team datasets for training and testing reward models for Open Assistant.
neurae/dnd_style_intents | 2023-07-16T08:10:05.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"D&D",
"intent",
"classification",
"region:us"
] | neurae | null | null | 4 | 6 | 2023-05-07T11:09:12 | ---
dataset_info:
features:
- name: examples
dtype: string
- name: label_names
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 9654988
num_examples: 130570
- name: test
num_bytes: 1208016
num_examples: 16330
- name: eval
num_bytes: 1203046
num_examples: 16321
download_size: 5759885
dataset_size: 12066050
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
tags:
- D&D
- intent
- classification
pretty_name: D&D Style Intents
license: apache-2.0
---
# Dataset Card for "dnd_style_intents"
This dataset was designed for the intent classification module of a dialogue system for game developers.
There are about 163K examples across 17 intents in the dataset.
Each intent belongs to one of two groups: intents for interacting with game mechanics and intents for more accurate dialogue understanding.
Data was generated artificially and augmented with masking and a paraphrase model. All examples are in D&D style.
BelleGroup/train_3.5M_CN | 2023-08-16T06:13:59.000Z | [
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | BelleGroup | null | null | 72 | 6 | 2023-05-10T07:45:23 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 1M<n<10M
---
## Contents
Contains roughly 3.5 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Samples
```
{
"id":"66623028",
"conversations":[
{
"from":"human",
"value":"归纳介绍三种不同类型的电影题材"
},
{
"from":"assistant",
"value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。"
}
]
}
{
"id":"18205437",
"conversations":[
{
"from":"human",
"value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱"
},
{
"from":"assistant",
"value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。"
},
{
"from":"human",
"value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。"
},
{
"from":"assistant",
"value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。"
},
{
"from":"human",
"value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?"
},
{
"from":"assistant",
"value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。"
}
]
}
```
### Fields:
```
id: example ID
conversations: the dialogue content, including both single-turn and multi-turn conversations
```
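Since `conversations` mixes single- and multi-turn dialogues, a common preprocessing step is to flatten each conversation into (prompt, response) pairs. A minimal sketch, assuming turns strictly alternate starting with a human turn (an assumption consistent with the samples above, but not guaranteed by the card):

```python
def to_pairs(conversations):
    """Flatten alternating human/assistant turns into (prompt, response) pairs."""
    pairs = []
    for human, assistant in zip(conversations[::2], conversations[1::2]):
        # Assumes strict human -> assistant alternation.
        assert human["from"] == "human" and assistant["from"] == "assistant"
        pairs.append((human["value"], assistant["value"]))
    return pairs

convo = [
    {"from": "human", "value": "你好"},
    {"from": "assistant", "value": "你好,有什么可以帮您?"},
]
print(to_pairs(convo))
```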
### Addendum:
An instruction-category field covering 13 categories has been added for this dataset; for details, see
[train_3.5M_CN_With_Category](https://huggingface.co/datasets/BELLE-2/train_3.5M_CN_With_Category)
## Usage Restrictions
This dataset, and any derivatives generated from it, may be used for research purposes only; commercial use, or any other use that could harm society, is prohibited.
This dataset does not represent the position, interests, or views of any party, and is unrelated to any kind of claim by any group. This project accepts no liability for any damage or dispute arising from the use of this dataset.
Thaweewat/LaMini-instruction-th | 2023-05-13T17:15:17.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:th",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2304.14402",
"region:us"
] | Thaweewat | null | null | 0 | 6 | 2023-05-13T16:57:39 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
tags:
- instruction-finetuning
language:
- th
size_categories:
- 1M<n<10M
---
# Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on [MBZUAI/LaMini-instruction](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). The original dataset contains a total of 2.58 million pairs of instructions and responses, which were later used to fine-tune the LaMini-LM model series.
The instructions were generated with GPT-3.5-turbo, drawing on several existing prompt resources, including self-instruct (Wang et al., 2022), P3 (Sanh et al., 2022), FLAN (Longpre et al., 2023), and Alpaca (Taori et al., 2023).
For more information about the process of generating instruction dataset, please refer to [the accompanying paper](https://arxiv.org/abs/2304.14402).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
### Special Thanks:
- Mr. Harris Boonkerd (Data Annotator)
### Languages: Thai
### Version: 1.0
---
hongerzh/NFT | 2023-09-28T06:00:22.000Z | [
"region:us"
] | hongerzh | null | null | 0 | 6 | 2023-05-15T06:39:51 | Entry not found
deedax/UTK-Face-Revised | 2023-05-16T02:05:28.000Z | [
"region:us"
] | deedax | null | null | 0 | 6 | 2023-05-16T01:45:11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: age
dtype: int64
- name: gender
dtype: string
- name: race
dtype: string
- name: age_group
dtype: string
splits:
- name: train
num_bytes: 352669015.125
num_examples: 7623
- name: valid
num_bytes: 39348419.0
num_examples: 846
download_size: 391281119
dataset_size: 392017434.125
---
# Dataset Card for "UTK-Face-Revised"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
J-Mourad/MNAD.v1 | 2023-05-16T12:23:40.000Z | [
"region:us"
] | J-Mourad | null | null | 0 | 6 | 2023-05-16T11:43:58 | # About the MNAD Dataset
The MNAD corpus is a collection of over **1 million Moroccan news articles** written in the modern Arabic language. These news articles have been gathered from 11 prominent electronic news sources. The dataset is made available to the academic community for research purposes, such as data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), and other non-commercial activities.
## Dataset Fields
- Title: The title of the article
- Body: The body of the article
- Category: The category of the article
- Source: The Electronic Newspaper source of the article
## About Version 1 of the Dataset (MNAD.v1)
Version 1 of the dataset comprises **418,563** articles classified into 19 categories. The data was collected from well-known electronic news sources, namely Akhbarona.ma, Hespress.ma, Hibapress.com, and Le360.com. The articles were stored in four separate CSV files, each corresponding to the news website source. Each CSV file contains three fields: Title, Body, and Category of the news article.
The dataset is rich in Arabic vocabulary, with approximately 906,125 unique words. It has been utilized as a benchmark in the research paper:
```"A Moroccan News Articles Dataset (MNAD) For Arabic Text Categorization". In 2021 International Conference on Decision Aid Sciences and Application (DASA).```
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv1](https://www.kaggle.com/datasets/jmourad100/mnad-moroccan-news-articles-dataset)
- Huggingface Datasets: [MNADv1](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
## About Version 2 of the Dataset (MNAD.v2)
Version 2 of the MNAD dataset includes an additional **653,901** articles, bringing the total number of articles to over 1 million (**1,069,489**), classified into the same 19 categories as in version 1. The new documents were collected from seven additional prominent Moroccan news websites, namely al3omk.com, medi1news.com, alayam24.com, anfaspress.com, alyaoum24.com, barlamane.com, and SnrtNews.com.
The newly collected articles have been merged with the articles from the previous version into a single CSV file named ```MNADv2.csv```. This file includes an additional column called "Source" to indicate the source of each news article.
Furthermore, MNAD.v2 incorporates improved pre-processing techniques and data-cleaning methods. These enhancements involve removing duplicates, eliminating multiple spaces, discarding rows with NaN values, replacing new lines with "\n", excluding very long and very short articles, and removing non-Arabic articles. These additions and improvements aim to enhance the usability and value of the MNAD dataset for researchers and practitioners in the field of Arabic text analysis.
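The cleaning steps described above can be sketched roughly as follows. This is a simplified illustration under stated assumptions, not the authors' actual pipeline; in particular, the word-count thresholds and the Arabic-script heuristic are assumptions:

```python
import re

def clean_articles(articles, min_words=10, max_words=2000):
    """Rough sketch of the MNAD.v2 cleaning steps described above."""
    seen = set()
    cleaned = []
    for body in articles:
        if body is None:                            # drop NaN-like rows
            continue
        body = re.sub(r"\s*\n\s*", r"\\n", body)    # replace new lines with literal "\n"
        body = re.sub(r" {2,}", " ", body).strip()  # collapse multiple spaces
        n = len(body.split())
        if not (min_words <= n <= max_words):       # drop very short/long articles
            continue
        # Crude heuristic (assumption): keep articles that are mostly Arabic script.
        arabic = sum("\u0600" <= ch <= "\u06ff" for ch in body)
        if arabic < len(body) / 2:
            continue
        if body in seen:                            # drop duplicates
            continue
        seen.add(body)
        cleaned.append(body)
    return cleaned

print(len(clean_articles(["مرحبا " * 20, "مرحبا " * 20])))  # duplicates collapse to 1
```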
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
- Huggingface Datasets: [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
## Citation
If you use our data, please cite the following paper:
```bibtex
@inproceedings{MNAD2021,
author = {Mourad Jbene and
Smail Tigani and
Rachid Saadane and
Abdellah Chehri},
title = {A Moroccan News Articles Dataset ({MNAD}) For Arabic Text Categorization},
year = {2021},
publisher = {{IEEE}},
  booktitle = {2021 International Conference on Decision Aid Sciences and Application ({DASA})},
doi = {10.1109/dasa53625.2021.9682402},
url = {https://doi.org/10.1109/dasa53625.2021.9682402},
}
```