id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
frankier/cross_domain_reviews | 2022-10-14T11:06:51.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-scoring",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|app_reviews",
"language:en",
"license:unknown",
"reviews",
"ratings",
"ordinal",
"te... | frankier | null | null | null | 0 | 5 | ---
language:
- en
language_creators:
- found
license: unknown
multilinguality:
- monolingual
pretty_name: Blue
size_categories:
- 10K<n<100K
source_datasets:
- extended|app_reviews
tags:
- reviews
- ratings
- ordinal
- text
task_categories:
- text-classification
task_ids:
- text-scoring
- sentiment-scoring
---
This dataset is a quick-and-dirty benchmark for predicting ratings across
different domains and on different rating scales based on text. It pulls in a
bunch of rating datasets, takes at most 1000 instances from each and combines
them into a big dataset.
Requires the `kaggle` library to be installed, and kaggle API keys passed
through environment variables or in ~/.kaggle/kaggle.json. See [the Kaggle
docs](https://www.kaggle.com/docs/api#authentication).
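A minimal sketch of checking for those credentials before loading — a hypothetical helper for illustration, not part of the dataset's loading script:

```python
import json
import os
from pathlib import Path

def kaggle_credentials_available(env=None, config_path=None):
    """Return True if Kaggle API credentials can be found.

    Mirrors the two mechanisms the Kaggle docs describe: the
    KAGGLE_USERNAME/KAGGLE_KEY environment variables, or a
    ~/.kaggle/kaggle.json file.
    """
    env = os.environ if env is None else env
    if env.get("KAGGLE_USERNAME") and env.get("KAGGLE_KEY"):
        return True
    path = Path(config_path) if config_path else Path.home() / ".kaggle" / "kaggle.json"
    try:
        cfg = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return False
    return bool(cfg.get("username") and cfg.get("key"))
```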
|
argilla/news | 2022-10-07T13:23:10.000Z | [
"region:us"
] | argilla | null | null | null | 0 | 5 | Entry not found |
Harsit/xnli2.0_train_urdu | 2022-10-15T09:30:11.000Z | [
"region:us"
] | Harsit | null | null | null | 0 | 5 | language: ["Urdu"] |
KGraph/FB15k-237 | 2022-10-21T09:03:28.000Z | [
"task_categories:other",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"knowledge graph",
"knowledge",
"link prediction",
"link",
"region:us"
] | KGraph | null | null | null | 3 | 5 | ---
annotations_creators:
- found
- crowdsourced
language:
- en
language_creators: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: FB15k-237
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- knowledge graph
- knowledge
- link prediction
- link
task_categories:
- other
task_ids: []
---
# Dataset Card for FB15k-237
## Table of Contents
- [Dataset Card for FB15k-237](#dataset-card-for-fb15k-237)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://deepai.org/dataset/fb15k-237](https://deepai.org/dataset/fb15k-237)
- **Repository:**
- **Paper:** [More Information Needed](https://paperswithcode.com/dataset/fb15k-237)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
FB15k-237 is a link prediction dataset created from FB15k. While FB15k consists of 1,345 relations, 14,951 entities, and 592,213 triples, many triples are inverses, which causes leakage from the training split into the testing and validation splits. FB15k-237 was created by Toutanova and Chen (2015) to ensure that the testing and validation datasets do not have inverse-relation test leakage. In summary, the FB15k-237 dataset contains 310,079 triples with 14,505 entities and 237 relation types.
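The inverse-relation leakage that motivated FB15k-237 can be illustrated with a toy sketch (entity and relation names below are invented):

```python
def inverse_leakage(train_triples, test_triples):
    """Flag test triples whose entity pair appears reversed in training.

    A test triple (h, r, t) leaks when some training triple (t, r2, h)
    links the same entities in the opposite direction: a model can then
    answer it by memorising the inverse relation instead of generalising.
    """
    reversed_pairs = {(t, h) for h, _, t in train_triples}
    return [trip for trip in test_triples if (trip[0], trip[2]) in reversed_pairs]

train = [("delphi", "contained_by", "greece")]
test = [
    ("greece", "contains", "delphi"),      # inverse of a training triple -> leaks
    ("athens", "contained_by", "greece"),  # no inverse in training -> clean
]
leaking = inverse_leakage(train, test)
```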
### Supported Tasks and Leaderboards
Supported Tasks: link prediction on knowledge graphs.
Leaderboards:
[More Information Needed](https://paperswithcode.com/sota/link-prediction-on-fb15k-237)
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{schlichtkrull2018modeling,
title={Modeling relational data with graph convolutional networks},
author={Schlichtkrull, Michael and Kipf, Thomas N and Bloem, Peter and Berg, Rianne van den and Titov, Ivan and Welling, Max},
booktitle={European semantic web conference},
pages={593--607},
year={2018},
organization={Springer}
}
```
### Contributions
Thanks to [@pp413](https://github.com/pp413) for adding this dataset. |
drt/complex_web_questions | 2023-04-27T21:04:50.000Z | [
"license:apache-2.0",
"arxiv:1803.06643",
"arxiv:1807.09623",
"region:us"
] | drt | ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set of complex questions in natural language, and can be used in multiple ways: 1) By interacting with a search engine, which is the focus of our paper (Talmor and Berant, 2018); 2) As a reading comprehension task: we release 12,725,989 web snippets that are relevant for the questions, and were collected during the development of our model; 3) As a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer. | @inproceedings{Talmor2018TheWA,
title={The Web as a Knowledge-Base for Answering Complex Questions},
author={Alon Talmor and Jonathan Berant},
booktitle={NAACL},
year={2018}
} | null | 3 | 5 | ---
license: apache-2.0
source: https://github.com/KGQA/KGQA-datasets
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://www.tau-nlp.sites.tau.ac.il/compwebq
- **Repository:** https://github.com/alontalmor/WebAsKB
- **Paper:** https://arxiv.org/abs/1803.06643
- **Leaderboard:** https://www.tau-nlp.sites.tau.ac.il/compwebq-leaderboard
- **Point of Contact:** alontalmor@mail.tau.ac.il.
### Dataset Summary
**A dataset for answering complex questions that require reasoning over multiple web snippets**
ComplexWebQuestions is a new dataset that contains a large set of complex questions in natural language, and can be used in multiple ways:
- By interacting with a search engine, which is the focus of our paper (Talmor and Berant, 2018);
- As a reading comprehension task: we release 12,725,989 web snippets that are relevant for the questions, and were collected during the development of our model;
- As a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- English
## Dataset Structure
QUESTION FILES
The dataset contains 34,689 examples divided into 27,734 train, 3,480 dev, and 3,475 test examples, each containing:
```
"ID": The unique ID of the example;
"webqsp_ID": The original WebQuestionsSP ID from which the question was constructed;
"webqsp_question": The WebQuestionsSP question from which the question was constructed;
"machine_question": The artificial complex question, before paraphrasing;
"question": The natural language complex question;
"sparql": Freebase SPARQL query for the question. Note that the SPARQL was constructed for the machine question; the actual question after paraphrasing may differ from the SPARQL;
"compositionality_type": An estimate of the compositionality type, one of {composition, conjunction, comparative, superlative}. The estimate has not been manually verified, and the question after paraphrasing may differ from it;
"answers": A list of answers, each containing "answer": the actual answer; "answer_id": the Freebase answer id; "aliases": Freebase-extracted aliases for the answer;
"created": Creation time.
```
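For illustration, a record with these fields could be parsed like this minimal sketch (all values are invented and only show the shape, not actual dataset contents):

```python
import json

# A toy record following the field list above; values are invented.
record = json.loads("""
{
  "ID": "WebQTrn-1_a",
  "webqsp_ID": "WebQTrn-1",
  "webqsp_question": "what country is grand bahama island in",
  "machine_question": "what country is grand bahama island in",
  "question": "Which country contains Grand Bahama island?",
  "sparql": "SELECT ?x WHERE { ... }",
  "compositionality_type": "conjunction",
  "answers": [{"answer": "Bahamas", "answer_id": "m.0000", "aliases": []}],
  "created": "2017-01-01"
}
""")

answer_strings = [a["answer"] for a in record["answers"]]
```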
NOTE: the test set does not contain the "answers" field. For test evaluation, please send an email to
alontalmor@mail.tau.ac.il.
WEB SNIPPET FILES
The snippet files consist of 12,725,989 snippets, each containing:
PLEASE DON'T USE CHROME WHEN DOWNLOADING THESE FROM DROPBOX (THE UNZIP COULD FAIL).
"question_ID": The ID of the related question; each ID appears in at least 3 snippet records (full question, split1, split2);
"question": The natural language complex question;
"web_query": The query sent to the search engine;
"split_source": 'noisy supervision split' or 'ptrnet split'. Please train on examples with "ptrnet split" when comparing to Split+Decomp from https://arxiv.org/abs/1807.09623;
"split_type": 'full_question', 'split_part1', or 'split_part2'. Please use 'composition_answer' for questions of type composition with split_type 'split_part1' when training a reading comprehension model on splits, as in Split+Decomp from https://arxiv.org/abs/1807.09623 (in the rest of the cases use the original answer);
"web_snippets": ~100 web snippets per query, each including a Title and a Snippet, ordered according to Google results.
In total there are:
- 10,035,571 training set snippets
- 1,350,950 dev set snippets
- 1,339,468 test set snippets
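A minimal sketch of selecting snippets by their split, following the split-source note above (the records here are invented and only mirror the documented fields):

```python
# Toy snippet records mirroring the fields described above (contents invented).
snippets = [
    {"question_ID": "q1", "split_source": "ptrnet split", "split_type": "full_question"},
    {"question_ID": "q1", "split_source": "noisy supervision split", "split_type": "full_question"},
    {"question_ID": "q1", "split_source": "ptrnet split", "split_type": "split_part1"},
]

# Per the note above: train only on "ptrnet split" snippets when
# comparing to Split+Decomp.
ptrnet_training = [s for s in snippets if s["split_source"] == "ptrnet split"]
```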
### Source Data
The original files can be found at this [dropbox link](https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AACuu4v3YNkhirzBOeeaHYala)
### Licensing Information
Not specified
### Citation Information
```
@inproceedings{talmor2018web,
title={The Web as a Knowledge-Base for Answering Complex Questions},
author={Talmor, Alon and Berant, Jonathan},
booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)},
pages={641--651},
year={2018}
}
```
### Contributions
Thanks to [happen2me](https://github.com/happen2me) for contributing this dataset. |
projecte-aina/GuiaCat | 2023-09-13T12:50:53.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | projecte-aina | null | null | null | 1 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: GuiaCat
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
---
# Dataset Card for GuiaCat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [blanca.calvo@bsc.es](blanca.calvo@bsc.es)
### Dataset Summary
GuiaCat is a dataset consisting of 5,750 restaurant reviews in Catalan, each with 5 associated scores and a sentiment label. The data was provided by [GuiaCat](https://guiacat.cat) and curated by the BSC.
### Supported Tasks and Leaderboards
This corpus is mainly intended for sentiment analysis.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.
### Data Instances
```
7,7,7,7,7.0,"Aquest restaurant té una llarga història. Ara han tornat a canviar d'amos i aquest canvi s'ha vist molt repercutit en la carta, preus, servei, etc. Hi ha molta varietat de menjar, i tot boníssim, amb especialitats molt ben trobades. El servei molt càlid i agradable, dóna gust que et serveixin així. I la decoració molt agradable també, bastant curiosa. En fi, pel meu gust, un bon restaurant i bé de preu.",bo
8,9,8,7,8.0,"Molt recomanable en tots els sentits. El servei és molt atent, pulcre i gens agobiant; alhora els plats també presenten un aspecte acurat, cosa que fa, juntament amb l'ambient, que t'oblidis de que, malauradament, està situat pròxim a l'autopista.Com deia, l'ambient és molt acollidor, té un menjador principal molt elegant, perfecte per quedar bé amb tothom!Tot i això, destacar la bona calitat / preu, ja que aquest restaurant té una carta molt extensa en totes les branques i completa, tant de menjar com de vins. Pel qui entengui de vins, podriem dir que tot i tenir una carta molt rica, es recolza una mica en els clàssics.",molt bo
```
### Data Fields
- service: a score from 0 to 10 grading the service
- food: a score from 0 to 10 grading the food
- price-quality: a score from 0 to 10 grading the relation between price and quality
- environment: a score from 0 to 10 grading the environment
- avg: average of all the scores
- text: the review
- label: it can be "molt bo", "bo", "regular", "dolent", "molt dolent"
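A minimal sketch of parsing one row in the field order suggested by the data instances above (the released CSV files may actually ship with headers or a different column order):

```python
import csv
import io

# One toy row: service, food, price-quality, environment, avg, text, label.
row = '7,8,6,7,7.0,"Un restaurant molt agradable, molt recomanable.",bo\n'
fields = ["service", "food", "price_quality", "environment", "avg", "text", "label"]
review = next(csv.DictReader(io.StringIO(row), fieldnames=fields))
```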
### Data Splits
* dev.csv: 500 examples
* test.csv: 500 examples
* train.csv: 4,750 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The data of this dataset has been provided by [GuiaCat](https://guiacat.cat).
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The language producers were the users from GuiaCat.
### Annotations
The annotations are automatically derived from the scores that the users provided while reviewing the restaurants.
#### Annotation process
The mapping between average scores and labels is:
- Higher than 8: molt bo
- Between 8 and 6: bo
- Between 6 and 4: regular
- Between 4 and 2: dolent
- Less than 2: molt dolent
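The mapping above can be sketched as a small function. The card leaves boundary values ambiguous; this sketch assumes a boundary score takes the higher label, which matches the example instance above where an average of 8.0 is labelled "molt bo":

```python
def sentiment_label(avg):
    """Map an average score (0-10) to the card's sentiment labels.

    Assumption: a boundary value (e.g. exactly 8) takes the higher label.
    """
    for cutoff, label in ((8, "molt bo"), (6, "bo"), (4, "regular"), (2, "dolent")):
        if avg >= cutoff:
            return label
    return "molt dolent"
```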
#### Who are the annotators?
Users
### Personal and Sensitive Information
No personal information included, although it could contain hate or abusive language.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
```
### Contributions
We want to thank GuiaCat for providing this data.
|
beyond/chinese_clean_passages_80m | 2022-12-06T07:09:20.000Z | [
"region:us"
] | beyond | null | null | null | 21 | 5 | ---
dataset_info:
features:
- name: passage
dtype: string
splits:
- name: train
num_bytes: 18979214734
num_examples: 88328203
download_size: 1025261393
dataset_size: 18979214734
---
# `chinese_clean_passages_80m`
包含**8千余万**(88328203)个**纯净**中文段落,不包含任何字母、数字。\
Containing more than **80 million pure \& clean** Chinese passages, without any letters/digits/special tokens.
文本长度大部分介于50\~200个汉字之间。\
The passage length is approximately 50\~200 Chinese characters.
通过`datasets.load_dataset()`下载数据,会产生38个大小约340M的数据包,共约12GB,所以请确保有足够空间。\
Downloading the dataset with `datasets.load_dataset()` produces 38 data shards, each about 340M (about 12 GB in total), so make sure there's enough space on your device:)
```
>>> from datasets import load_dataset
>>> passage_dataset = load_dataset('beyond/chinese_clean_passages_80m')
Downloading data: 100%|█| 341M/341M [00:06<00:00, 52.0MB
Downloading data: 100%|█| 342M/342M [00:06<00:00, 54.4MB
Downloading data: 100%|█| 341M/341M [00:06<00:00, 49.1MB
Downloading data: 100%|█| 341M/341M [00:14<00:00, 23.5MB
Downloading data: 100%|█| 341M/341M [00:10<00:00, 33.6MB
Downloading data: 100%|█| 342M/342M [00:07<00:00, 43.1MB
...(38 data shards)
```
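The "pure text" claim above (no letters or digits) can be spot-checked with a simple sketch — here "letters" is read as ASCII letters; this is illustrative only, not the filtering script used to build the dataset:

```python
import re

# Passages are claimed to contain no letters or digits.
LETTER_OR_DIGIT = re.compile(r"[A-Za-z0-9]")

def is_pure_passage(text: str) -> bool:
    return LETTER_OR_DIGIT.search(text) is None

clean = is_pure_passage("这是一个纯净的中文段落。")
dirty = is_pure_passage("这段文字包含字母ABC和数字123。")
```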
本数据集被用于训练[GENIUS模型中文版](https://huggingface.co/spaces/beyond/genius),如果这个数据集对您的研究有帮助,请引用以下论文。
This dataset is created for the pre-training of [GENIUS model](https://huggingface.co/spaces/beyond/genius), if you find this dataset useful, please cite our paper.
```
@article{guo2022genius,
title={GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation},
author={Guo, Biyang and Gong, Yeyun and Shen, Yelong and Han, Songqiao and Huang, Hailiang and Duan, Nan and Chen, Weizhu},
journal={arXiv preprint arXiv:2211.10330},
year={2022}
}
```
---
Acknowledgment:\
数据是基于[CLUE中文预训练语料集](https://github.com/CLUEbenchmark/CLUE)进行处理、过滤得到的。\
This dataset is processed/filtered from the [CLUE pre-training corpus](https://github.com/CLUEbenchmark/CLUE).
原始数据集引用:\
Original dataset citation:
```
@misc{bright_xu_2019_3402023,
author = {Bright Xu},
title = {NLP Chinese Corpus: Large Scale Chinese Corpus for NLP },
month = sep,
year = 2019,
doi = {10.5281/zenodo.3402023},
version = {1.0},
publisher = {Zenodo},
url = {https://doi.org/10.5281/zenodo.3402023}
}
```
|
Twitter/HashtagPrediction | 2022-11-21T21:22:07.000Z | [
"language:sl",
"language:ur",
"language:sd",
"language:pl",
"language:vi",
"language:sv",
"language:am",
"language:da",
"language:mr",
"language:no",
"language:gu",
"language:in",
"language:ja",
"language:el",
"language:lv",
"language:it",
"language:ca",
"language:is",
"language:... | Twitter | null | null | null | 1 | 5 | ---
license: cc-by-4.0
language:
- sl
- ur
- sd
- pl
- vi
- sv
- am
- da
- mr
- no
- gu
- in
- ja
- el
- lv
- it
- ca
- is
- cs
- te
- tl
- ro
- ckb
- pt
- ps
- zh
- sr
- pa
- si
- ml
- ht
- kn
- ar
- hu
- nl
- bg
- bn
- ne
- hi
- de
- ko
- fi
- fr
- es
- et
- en
- fa
- lt
- or
- cy
- eu
- iw
- ta
- th
- tr
tags:
- Twitter
- Multilingual
- Classification
- Benchmark
---
# Hashtag Prediction Dataset from paper TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
[](https://huggingface.co/datasets/Twitter/HashtagPrediction/discussions) [](https://arxiv.org/abs/2209.07562) [](https://github.com/xinyangz/TwHIN-BERT)
This repo contains the Hashtag prediction dataset from our paper [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562). <br />
[[arXiv]](https://arxiv.org/abs/2209.07562)
[[HuggingFace Models]](https://huggingface.co/Twitter/twhin-bert-base)
[[Github repo]](https://github.com/xinyangz/TwHIN-BERT)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## Download
Use the `hashtag-classification-id.zip` in this repo. [Link](https://huggingface.co/datasets/Twitter/HashtagPrediction/blob/main/hashtag-classification-id.zip).
Check the first-author's GitHub repo for any supplemental dataset material or code. [Link](https://github.com/xinyangz/TwHIN-BERT)
## Dataset Description
The hashtag prediction dataset is a multilingual classification dataset. Separate datasets are given for different languages. We first select 500 (or all available) popular hashtags of each language and then sample 10k (or all available) popular Tweets that contain these hashtags. We make sure each Tweet will have exactly one of the selected hashtags.
The evaluation task is a multiclass classification task, with hashtags as labels. We remove the hashtag from the Tweet, and let the model predict the removed hashtag.
We provide Tweet ID and raw text hashtag labels in `tsv` files. For each language, we provide train, development, and test splits.
To use the dataset, you must hydrate the Tweet text with the [Twitter API](https://developer.twitter.com/en/docs/twitter-api) and **remove the hashtag used as the label from each Tweet**.
The data format is displayed below.
| ID | label |
| ------------- | ------------- |
| 1 | hashtag |
| 2 | another hashtag |
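A minimal sketch of the label-removal step described above, applied to a hydrated Tweet. This matches the hashtag case-insensitively at a word boundary and collapses leftover whitespace; the paper's exact preprocessing may differ:

```python
import re

def strip_label_hashtag(tweet_text, label_hashtag):
    """Remove the label hashtag from a hydrated Tweet (illustrative)."""
    pattern = re.compile(r"#" + re.escape(label_hashtag) + r"\b", re.IGNORECASE)
    return re.sub(r"\s+", " ", pattern.sub("", tweet_text)).strip()
```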
## Citation
If you use our dataset in your work, please cite the following:
```bib
@article{zhang2022twhin,
title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2209.07562},
year={2022}
}
``` |
jpwahle/dblp-discovery-dataset | 2022-11-28T13:18:13.000Z | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|s2orc",
"language:en",
"license:cc-by-4.0",
"dblp",
"s2",
"scientometrics",
"computer science",
"papers",
"arxiv",
"regio... | jpwahle | This repository provides metadata to papers from DBLP. | @inproceedings{wahle-etal-2022-d3,
title = "D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research",
author = "Wahle, Jan Philip and
Ruas, Terry and
Mohammad, Saif and
Gipp, Bela",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.283",
pages = "2642--2651",
abstract = "DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15{\%} annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers{'} abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.",
} | null | 1 | 5 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: DBLP Discovery Dataset (D3)
size_categories:
- 1M<n<10M
source_datasets:
- extended|s2orc
tags:
- dblp
- s2
- scientometrics
- computer science
- papers
- arxiv
task_categories:
- other
task_ids: []
paperswithcode_id: d3
dataset_info:
- config_name: papers
download_size: 15876152
dataset_size: 15876152
- config_name: authors
download_size: 1177888
dataset_size: 1177888
---
# Dataset Card for DBLP Discovery Dataset (D3)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/jpwahle/lrec22-d3-dataset
- **Paper:** https://aclanthology.org/2022.lrec-1.283/
- **Total size:** 8.71 GB
### Dataset Summary
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers’ abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Total size: 8.71 GB
Papers size: 8.13 GB
Authors size: 0.58 GB
### Data Fields
#### Papers
| Feature | Description |
| --- | --- |
| `corpusid` | The unique identifier of the paper. |
| `externalids` | The same paper in other repositories (e.g., DOI, ACL). |
| `title` | The title of the paper. |
| `authors` | The authors of the paper with their `authorid` and `name`. |
| `venue` | The venue of the paper. |
| `year` | The year of the paper publication. |
| `publicationdate` | A more precise publication date of the paper. |
| `abstract` | The abstract of the paper. |
| `outgoingcitations` | The number of references of the paper. |
| `ingoingcitations` | The number of citations of the paper. |
| `isopenaccess` | Whether the paper is open access. |
| `influentialcitationcount` | The number of influential citations of the paper according to SemanticScholar. |
| `s2fieldsofstudy` | The fields of study of the paper according to SemanticScholar. |
| `publicationtypes` | The publication types of the paper. |
| `journal` | The journal of the paper. |
| `updated` | The last time the paper was updated. |
| `url` | A url to the paper in SemanticScholar. |
#### Authors
| Feature | Description |
| --- | --- |
| `authorid` | The unique identifier of the author. |
| `externalids` | The same author in other repositories (e.g., ACL, PubMed). This can include `ORCID` |
| `name` | The name of the author. |
| `affiliations` | The affiliations of the author. |
| `homepage` | The homepage of the author. |
| `papercount` | The number of papers the author has written. |
| `citationcount` | The number of citations the author has received. |
| `hindex` | The h-index of the author. |
| `updated` | The last time the author was updated. |
| `email` | The email of the author. |
| `s2url` | A url to the author in SemanticScholar. |
### Data Splits
- `papers`
- `authors`
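The fields above support analyses like the citation trend discussed in the summary. A toy sketch with invented records mirroring the paper fields (not actual D3 contents):

```python
from collections import defaultdict

# Toy paper records with the `year` and `ingoingcitations` fields above,
# used to compute average incoming citations per publication year.
papers = [
    {"year": 2019, "ingoingcitations": 10},
    {"year": 2019, "ingoingcitations": 2},
    {"year": 2020, "ingoingcitations": 3},
]

totals, counts = defaultdict(int), defaultdict(int)
for p in papers:
    totals[p["year"]] += p["ingoingcitations"]
    counts[p["year"]] += 1
avg_citations_by_year = {y: totals[y] / counts[y] for y in totals}
```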
## Dataset Creation
### Curation Rationale
Providing a resource to analyze the state of computer science research statistically and semantically.
### Source Data
#### Initial Data Collection and Normalization
DBLP and from v2.0 SemanticScholar
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The DBLP Discovery Dataset is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.
### Citation Information
If you use the dataset in any way, please cite:
```bib
@inproceedings{Wahle2022c,
title = {D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research},
author = {Wahle, Jan Philip and Ruas, Terry and Mohammad, Saif M. and Gipp, Bela},
year = {2022},
month = {July},
booktitle = {Proceedings of The 13th Language Resources and Evaluation Conference},
publisher = {European Language Resources Association},
address = {Marseille, France},
doi = {},
}
```
Also make sure to cite the following papers if you use SemanticScholar data:
```bib
@inproceedings{ammar-etal-2018-construction,
title = "Construction of the Literature Graph in Semantic Scholar",
author = "Ammar, Waleed and
Groeneveld, Dirk and
Bhagavatula, Chandra and
Beltagy, Iz",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)",
month = jun,
year = "2018",
address = "New Orleans - Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-3011",
doi = "10.18653/v1/N18-3011",
pages = "84--91",
}
```
```bib
@inproceedings{lo-wang-2020-s2orc,
title = "{S}2{ORC}: The Semantic Scholar Open Research Corpus",
author = "Lo, Kyle and Wang, Lucy Lu and Neumann, Mark and Kinney, Rodney and Weld, Daniel",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.447",
doi = "10.18653/v1/2020.acl-main.447",
pages = "4969--4983"
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset.
|
tomekkorbak/pii-pile-chunk3-0-50000 | 2022-11-08T18:59:20.000Z | [
"region:us"
] | tomekkorbak | null | null | null | 0 | 5 | Entry not found |
kakaobrain/coyo-labeled-300m | 2022-11-11T01:11:22.000Z | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"image-labeled pairs",
... | kakaobrain | null | null | null | 1 | 5 |
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: COYO-Labeled-300M
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- image-labeled pairs
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for COYO-Labeled-300M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email](coyo@kakaobrain.com)
### Dataset Summary
**COYO-Labeled-300M** is a dataset of **machine-labeled** 300M image/multi-label pairs. We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K, following the same evaluation pipeline as in EfficientNetV2. The labels are the top 50 most likely labels out of the 21,841 classes from ImageNet-21K. Label probabilities are provided rather than hard labels, so users can either select a threshold of their choice for multi-label classification or take the top-1 class for single-class classification.
In other words, **COYO-Labeled-300M** is an ImageNet-like dataset: instead of 1.25 million human-labeled samples, it contains 300 million machine-labeled samples. In this respect it is similar to JFT-300M, which has not been released to the public.
### Supported Tasks and Leaderboards
We empirically validated the quality of COYO-Labeled-300M dataset by re-implementing popular model, [ViT](https://arxiv.org/abs/2010.11929).
We found that our ViT implementation trained on COYO-Labeled-300M performs similar to the performance numbers in the ViT paper trained on JFT-300M.
We also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.
### Languages
The labels in the COYO-Labeled-300M dataset consist of English.
## Dataset Structure
### Data Instances
Each instance in COYO-Labeled-300M represents multi-labels and image pair information with meta-attributes.
And we also provide label information, **imagenet21k_tree.pickle**.
```
{
'id': 315,
'url': 'https://a.1stdibscdn.com/pair-of-blue-and-white-table-lamps-for-sale/1121189/f_121556431538206028457/12155643_master.jpg?width=240',
'imagehash': 'daf5a50aae4aa54a',
'labels': [8087, 11054, 8086, 6614, 6966, 8193, 10576, 9710, 4334, 9909, 8090, 10104, 10105, 9602, 5278, 9547, 6978, 12011, 7272, 5273, 6279, 4279, 10903, 8656, 9601, 8795, 9326, 4606, 9907, 9106, 7574, 10006, 7257, 6959, 9758, 9039, 10682, 7164, 5888, 11654, 8201, 4546, 9238, 8197, 10882, 17380, 4470, 5275, 10537, 11548],
'label_probs': [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875, 0.03240966796875, 0.0157928466796875, 0.01406097412109375, 0.01129150390625, 0.00978851318359375, 0.00841522216796875, 0.007720947265625, 0.00634002685546875, 0.0041656494140625, 0.004070281982421875, 0.002910614013671875, 0.0028018951416015625, 0.002262115478515625, 0.0020503997802734375, 0.0017080307006835938, 0.0016880035400390625, 0.0016679763793945312, 0.0016613006591796875, 0.0014324188232421875, 0.0012445449829101562, 0.0011739730834960938, 0.0010318756103515625, 0.0008969306945800781, 0.0008792877197265625, 0.0008726119995117188, 0.0008263587951660156, 0.0007123947143554688, 0.0006799697875976562, 0.0006561279296875, 0.0006542205810546875, 0.0006093978881835938, 0.0006046295166015625, 0.0005769729614257812, 0.00057220458984375, 0.0005636215209960938, 0.00055694580078125, 0.0005092620849609375, 0.000507354736328125, 0.000507354736328125, 0.000499725341796875, 0.000484466552734375, 0.0004456043243408203, 0.0004439353942871094, 0.0004355907440185547, 0.00043392181396484375, 0.00041866302490234375],
'width': 240,
'height': 240
}
```
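Since only probabilities are stored, turning a sample into usable labels is left to the user. The following minimal sketch (an illustration, not part of any official tooling) shows both usages mentioned above, thresholded multi-label selection and top-1 single-class selection, on a truncated copy of the sample:

```python
# Truncated copy of the sample above; the field names ("labels",
# "label_probs") match the dataset, but the 0.3 threshold is an
# arbitrary illustrative choice.
sample = {
    "labels": [8087, 11054, 8086, 6614],
    "label_probs": [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875],
}

def multi_labels(sample, threshold=0.3):
    """Multi-label use: keep every class whose probability clears the threshold."""
    return [
        label
        for label, prob in zip(sample["labels"], sample["label_probs"])
        if prob >= threshold
    ]

def top1_label(sample):
    """Single-class use: take the most likely class."""
    best = max(range(len(sample["label_probs"])),
               key=sample["label_probs"].__getitem__)
    return sample["labels"][best]

print(multi_labels(sample))  # [8087, 11054]
print(top1_label(sample))    # 8087
```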
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) which is the same value that is mapped with the existing COYO-700M. |
| url | string | The image URL extracted from the `src` attribute of the `<img>` |
| imagehash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| labels | sequence[integer] | Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 classes) |
| label_probs | sequence[float] | Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (probabilities corresponding to the top 50 indices among 21,841 classes) |
| width | integer | The width of the image |
| height | integer | The height of the image |
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K. Data was sampled to a size similar to JFT-300M, filtered by a threshold on the top-1 label probability.
### Source Data
[COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m)
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
The basic instruction, licenses and contributors are the same as for the [coyo-700m](https://huggingface.co/datasets/kakaobrain/coyo-700m).
|
bigbio/bio_simlex | 2022-12-22T15:43:27.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs). | @article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
} | null | 0 | 5 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: Bio-SimLex
homepage: https://github.com/cambridgeltl/bio-simverb
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for Bio-SimLex
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
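Concretely, intrinsic evaluation on a word-similarity set like Bio-SimLex usually means scoring each word pair with a model's cosine similarity and comparing that ranking to the human similarity ratings with Spearman correlation. The sketch below illustrates the procedure with made-up pairs, ratings, and 2-d vectors (all hypothetical; real pairs and scores come from the dataset itself):

```python
import math

# Hypothetical word pairs, human ratings, and toy 2-d embeddings.
pairs = [("tumor", "neoplasm"), ("gene", "protein"), ("cell", "walk")]
human = [9.1, 5.4, 0.8]
vectors = {
    "tumor": [0.9, 0.1], "neoplasm": [0.85, 0.15],
    "gene": [0.5, 0.5], "protein": [0.2, 0.8],
    "cell": [0.1, 0.9], "walk": [0.95, 0.05],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def spearman(x, y):
    """Spearman rank correlation for lists without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

model_scores = [cosine(vectors[a], vectors[b]) for a, b in pairs]
print(spearman(human, model_scores))  # 1.0 here: the toy model ranks pairs exactly like the humans
```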
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
|
bigbio/lll | 2022-12-22T15:44:52.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The LLL05 challenge task is to learn rules to extract protein/gene interactions from biology abstracts from the Medline
bibliography database. The goal of the challenge is to test the ability of the participating IE systems to identify the
interactions and the gene/proteins that interact. The participants will test their IE patterns on a test set with the
aim of extracting the correct agent and target. The challenge focuses on information extraction of gene interactions in
Bacillus subtilis. Extracting gene interaction is the most popular event IE task in biology. Bacillus subtilis (Bs) is
a model bacterium and many papers have been published on direct gene interactions involved in sporulation. The gene
interactions are generally mentioned in the abstract and the full text of the paper is not needed. Extracting gene
interaction means extracting the agent (proteins) and the target (genes) of all pairs of genic interactions from
sentences. | @article{article,
author = {Nédellec, C.},
year = {2005},
month = {01},
pages = {},
title = {Learning Language in Logic - Genic Interaction Extraction Challenge},
journal = {Proceedings of the Learning Language in Logic 2005 Workshop at the International Conference on Machine Learning}
} | null | 1 | 5 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: LLL05
homepage: http://genome.jouy.inra.fr/texte/LLLchallenge
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for LLL05
## Dataset Description
- **Homepage:** http://genome.jouy.inra.fr/texte/LLLchallenge
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The LLL05 challenge task is to learn rules to extract protein/gene interactions from biology abstracts from the Medline
bibliography database. The goal of the challenge is to test the ability of the participating IE systems to identify the
interactions and the gene/proteins that interact. The participants will test their IE patterns on a test set with the
aim of extracting the correct agent and target. The challenge focuses on information extraction of gene interactions in
Bacillus subtilis. Extracting gene interaction is the most popular event IE task in biology. Bacillus subtilis (Bs) is
a model bacterium and many papers have been published on direct gene interactions involved in sporulation. The gene
interactions are generally mentioned in the abstract and the full text of the paper is not needed. Extracting gene
interaction means extracting the agent (proteins) and the target (genes) of all pairs of genic interactions from
sentences.
## Citation Information
```
@article{article,
author = {Nédellec, C.},
year = {2005},
month = {01},
pages = {},
title = {Learning Language in Logic - Genic Interaction Extraction Challenge},
journal = {Proceedings of the Learning Language in Logic 2005 Workshop at the International Conference on Machine Learning}
}
```
|
bigbio/meddocan | 2022-12-22T15:45:24.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | MEDDOCAN: Medical Document Anonymization Track
This dataset is designed for the MEDDOCAN task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of 1,000 clinical case reports derived from the Spanish Clinical Case Corpus (SPACCC), enriched with PHI expressions.
The annotation of the entire set of entity mentions was carried out by expert annotators and it includes 29 entity types relevant for the anonymization of medical documents. 22 of these annotation types are actually present in the corpus: TERRITORIO, FECHAS, EDAD_SUJETO_ASISTENCIA, NOMBRE_SUJETO_ASISTENCIA, NOMBRE_PERSONAL_SANITARIO, SEXO_SUJETO_ASISTENCIA, CALLE, PAIS, ID_SUJETO_ASISTENCIA, CORREO, ID_TITULACION_PERSONAL_SANITARIO, ID_ASEGURAMIENTO, HOSPITAL, FAMILIARES_SUJETO_ASISTENCIA, INSTITUCION, ID_CONTACTO_ASISTENCIAL, NUMERO_TELEFONO, PROFESION, NUMERO_FAX, OTROS_SUJETO_ASISTENCIA, CENTRO_SALUD, ID_EMPLEO_PERSONAL_SANITARIO
For further information, please visit https://temu.bsc.es/meddocan/ or send an email to encargo-pln-life@bsc.es | @inproceedings{marimon2019automatic,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results.},
author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Rodriguez, Heidy and Martin, Jose Lopez and Villegas, Marta and Krallinger, Martin},
booktitle={IberLEF@ SEPLN},
pages={618--638},
year={2019}
} | null | 1 | 5 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: MEDDOCAN
homepage: https://temu.bsc.es/meddocan/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for MEDDOCAN
## Dataset Description
- **Homepage:** https://temu.bsc.es/meddocan/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
MEDDOCAN: Medical Document Anonymization Track
This dataset is designed for the MEDDOCAN task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of 1,000 clinical case reports derived from the Spanish Clinical Case Corpus (SPACCC), enriched with PHI expressions.
The annotation of the entire set of entity mentions was carried out by expert annotators and it includes 29 entity types relevant for the anonymization of medical documents. 22 of these annotation types are actually present in the corpus: TERRITORIO, FECHAS, EDAD_SUJETO_ASISTENCIA, NOMBRE_SUJETO_ASISTENCIA, NOMBRE_PERSONAL_SANITARIO, SEXO_SUJETO_ASISTENCIA, CALLE, PAIS, ID_SUJETO_ASISTENCIA, CORREO, ID_TITULACION_PERSONAL_SANITARIO, ID_ASEGURAMIENTO, HOSPITAL, FAMILIARES_SUJETO_ASISTENCIA, INSTITUCION, ID_CONTACTO_ASISTENCIAL, NUMERO_TELEFONO, PROFESION, NUMERO_FAX, OTROS_SUJETO_ASISTENCIA, CENTRO_SALUD, ID_EMPLEO_PERSONAL_SANITARIO
For further information, please visit https://temu.bsc.es/meddocan/ or send an email to encargo-pln-life@bsc.es
## Citation Information
```
@inproceedings{marimon2019automatic,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results.},
author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Rodriguez, Heidy and Martin, Jose Lopez and Villegas, Marta and Krallinger, Martin},
booktitle={IberLEF@ SEPLN},
pages={618--638},
year={2019}
}
```
|
bigbio/multi_xscience | 2022-12-22T15:45:44.000Z | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"arxiv:2010.14235",
"region:us"
] | bigbio | Multi-document summarization is a challenging task for which there exist few large-scale datasets.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that Multi-XScience is well suited for abstractive models. | @misc{https://doi.org/10.48550/arxiv.2010.14235,
doi = {10.48550/ARXIV.2010.14235},
url = {https://arxiv.org/abs/2010.14235},
author = {Lu, Yao and Dong, Yue and Charlin, Laurent},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 1 | 5 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: Multi-XScience
homepage: https://github.com/yaolu/Multi-XScience
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- PARAPHRASING
- SUMMARIZATION
---
# Dataset Card for Multi-XScience
## Dataset Description
- **Homepage:** https://github.com/yaolu/Multi-XScience
- **Pubmed:** False
- **Public:** True
- **Tasks:** PARA,SUM
Multi-document summarization is a challenging task for which there exist few large-scale datasets.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that Multi-XScience is well suited for abstractive models.
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2010.14235,
doi = {10.48550/ARXIV.2010.14235},
url = {https://arxiv.org/abs/2010.14235},
author = {Lu, Yao and Dong, Yue and Charlin, Laurent},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
bigbio/nlmchem | 2022-12-22T15:46:07.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | NLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,
comprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical
names in the biomedical literature.
Articles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,
and current state-of-the-art named entity recognition systems disagreed on bio-entity recognition). | @Article{islamaj2021nlm,
title={NLM-Chem, a new resource for chemical entity recognition in PubMed full text literature},
author={Islamaj, Rezarta and Leaman, Robert and Kim, Sun and Kwon, Dongseop and Wei, Chih-Hsuan and Comeau, Donald C and Peng, Yifan and Cissel, David and Coss, Cathleen and Fisher, Carol and others},
journal={Scientific Data},
volume={8},
number={1},
pages={1--12},
year={2021},
publisher={Nature Publishing Group}
} | null | 0 | 5 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: NLM-Chem
homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-2
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- TEXT_CLASSIFICATION
---
# Dataset Card for NLM-Chem
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-2
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,TXTCLASS
NLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,
comprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical
names in the biomedical literature.
Articles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,
and current state-of-the-art named entity recognition systems disagreed on bio-entity recognition).
## Citation Information
```
@Article{islamaj2021nlm,
title={NLM-Chem, a new resource for chemical entity recognition in PubMed full text literature},
author={Islamaj, Rezarta and Leaman, Robert and Kim, Sun and Kwon, Dongseop and Wei, Chih-Hsuan and Comeau, Donald C and Peng, Yifan and Cissel, David and Coss, Cathleen and Fisher, Carol and others},
journal={Scientific Data},
volume={8},
number={1},
pages={1--12},
year={2021},
publisher={Nature Publishing Group}
}
```
|
Capstone/autotrain-data-healthcare_summarization_uta | 2022-11-22T19:40:55.000Z | [
"language:en",
"region:us"
] | Capstone | null | null | null | 0 | 5 | ---
language:
- en
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: healthcare_summarization_uta
## Dataset Description
This dataset has been automatically processed by AutoTrain for project healthcare_summarization_uta.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "We get people to do things that are good for them. And you. | Make the world a healthier place, one person at a time. | Healthcare today is nothing short of amazing. Yet all of it only works when people connect with it. And too often, they dont. Healthcare can be impersonal. Confusing. All elbows. The record scratch at lifes party. Were here to help connect healthcare with the people who need it. Which is everyone. How? By listening. Collaborating. And inspiring. Were pioneering a better way forward. Were making healthcare more human. | In 2020, Revel + NovuHealth joined forces to create Icario because we knew we could do better togethercreating value by uniting pioneering technology, data science, and behavioral insights to make the world a healthier place, one person at a time. | There is an island in the Aegean Sea where people live extremely long lives. Theyre happy, too. Families are close. They eat well. They exercise. And they stay connected with each other, and not just by smartphone. This got us thinking. What if we apply what we learn from the Blue Zone island of Ikaria (our namesake), add pioneering technology and exabytes of data, and help healthcare connect better with everyone? Well have a lot more healthy, happy people, and thats a pretty good thing. | As an organization, were a collaborative team of pioneers, inventors, and systems thinkers. We speak truth, are driven by data, and sweat the details. Were a friendly and easygoing group, but we work hard because we are mission-driven, we know a better way, and were here to make it happen. | The Icario name and brand represent a successful, growing business that deeply understands people and is focused on making healthcare more human through personalized communication. | ",
"target": "We're putting people first in healthcare. In order to create value by fusing cutting-edge technology, data science, and behavioral insights to improve the world's health one person at a time, Revel and NovuHealth partnered to become Icario in 2020. The Icario name and brand speak for a prosperous, expanding company that has a keen understanding of people and is committed to enhancing healthcare through individualized communication."
},
{
"text": "Medicat is the industry leader in College Health Software and serves more college and university campuses than all other college health software companies combined. From the largest universities to the smallest colleges, Medicat specializes in workflow efficiency and has been improving outcomes for campus health centers since 1993. | Over 500 clients, covering 5+ million students across 48 states and three countries, use Medicats total EHR solution to deliver higher quality care more efficiently with the industrys most secure software platform offerings. Medicat offers private cloud hosting with unmatched security and has continued to improve its offering over 29 years, leading the industry in response to client needs. | The industry leader in College Health EHR | Medicat has the leading market share in the college health EHR industry, serving more college and university campuses than all other college health EHR companies combined. From the largest universities to the smallest colleges, Medicat specializes in workflow efficiency and a seamless transition from other EHRs or less efficient manual, paper-based systems to reap the benefits of going digital. | Two kinds of clients are switching from other systems to Medicat: Colleges who are with small niche EHR companies that dont have the capabilities or the security infrastructure; and those who are with larger companies that dont offer Medicats specialized expertise and support in college health. Wherever you fit in, Medicat can help; lets talk! | Medicat is the #1 health management system supporting college health. We support healthcare providers at over 500 universities, from the largest universities to the smallest colleges, covering more than 5 million students. 
Our software and services are co-created with healthcare professionals to address the unique workflow challenges of medical and mental health practitioners | 303 Perimeter Center North, Suite 450Atlanta, GA 30346 | Toll-Free: (866) 633-4053Phone: (404) 252-2295Fax: (404) 252-2298 | 2022 MEDICAT. | Notifications",
"target": "Medicat is the industry leader in College Health Software and serves more college and university campuses than all other college health software companies combined. From the largest universities to the smallest colleges, Medicat specializes in workflow efficiency and has been improving outcomes for campus health centers since 1993. Over 500 clients, covering 5+ million students across 48 states and three countries, use Medicats total EHR solution to deliver higher quality care more efficiently with the industrys most secure software platform offerings. Medicat offers private cloud hosting with unmatched security and has continued to improve its offering over 29 years, leading the industry in response to client needs. The industry leader in College Health EHR Medicat has the leading market share in the college health EHR industry, serving more college and university campuses than all other college health EHR companies combined."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 63 |
| valid | 16 |
|
VIMA/VIMA-Data | 2023-06-17T04:52:09.000Z | [
"license:cc-by-4.0",
"arxiv:2210.03094",
"region:us"
] | VIMA | null | null | null | 15 | 5 | ---
license: cc-by-4.0
---
# Dataset Card for VIMA-Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://vimalabs.github.io/
- **Repository:** https://github.com/vimalabs/VimaBench
- **Paper:** https://arxiv.org/abs/2210.03094
### Dataset Summary
This is the official dataset used to train general robot manipulation agents with multimodal prompts, as presented in the [paper](https://arxiv.org/abs/2210.03094). It contains 650K trajectories for 13 tasks in [VIMA-Bench](https://github.com/vimalabs/VimaBench). All demonstrations are generated by oracles.
## Dataset Structure
Data are grouped into different tasks. Within each trajectory's folder, there are two folders `rgb_front` and `rgb_top`, and three files `obs.pkl`, `action.pkl`, and `trajectory.pkl`. RGB frames from each perspective are stored separately in the corresponding folder. `obs.pkl` includes segmentation and the state of the end effector. `action.pkl` contains oracle actions. `trajectory.pkl` contains meta information such as elapsed steps, task information, and object information. Users can build their custom data pipeline starting from here. More details and examples can be found [here](https://github.com/vimalabs/VimaBench#training-data).
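A minimal loader for the layout described above might look like the following sketch. The folder and file names (`rgb_front`, `rgb_top`, `obs.pkl`, `action.pkl`, `trajectory.pkl`) come from this card, but the contents of each pickle follow VIMA-Bench's own schema, so the returned objects should be treated as opaque until inspected; see the linked VimaBench documentation for the authoritative data pipeline.

```python
import pickle
from pathlib import Path

def load_trajectory(traj_dir):
    """Read the three metadata pickles and list the RGB frames of
    one trajectory folder. The pickle contents are whatever
    VIMA-Bench stored, so no schema is assumed here."""
    traj_dir = Path(traj_dir)
    data = {}
    for name in ("obs", "action", "trajectory"):
        with open(traj_dir / f"{name}.pkl", "rb") as f:
            data[name] = pickle.load(f)
    # RGB frames live in per-view sub-folders next to the pickles.
    data["views"] = {
        view: sorted((traj_dir / view).glob("*"))
        for view in ("rgb_front", "rgb_top")
        if (traj_dir / view).is_dir()
    }
    return data
```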
## Dataset Creation
All demonstrations are generated by scripted oracles.
## Additional Information
### Licensing Information
This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
If you find our work useful, please consider citing us!
```bibtex
@inproceedings{jiang2023vima,
title = {VIMA: General Robot Manipulation with Multimodal Prompts},
author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan},
booktitle = {Fortieth International Conference on Machine Learning},
year = {2023}
}
``` |
kasnerz/hitab | 2023-03-14T15:09:50.000Z | [
"region:us"
] | kasnerz | null | null | null | 1 | 5 | Entry not found |
kasnerz/numericnlg | 2023-03-14T15:04:02.000Z | [
"region:us"
] | kasnerz | null | null | null | 0 | 5 | Entry not found |
liuyanchen1015/VALUE_mnli_negative_concord | 2022-11-28T22:31:52.000Z | [
"region:us"
] | liuyanchen1015 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 11131248
num_examples: 49529
- name: dev_matched
num_bytes: 266084
num_examples: 1192
- name: dev_mismatched
num_bytes: 272231
num_examples: 1203
- name: test_matched
num_bytes: 255070
num_examples: 1140
- name: test_mismatched
num_bytes: 282348
num_examples: 1214
download_size: 7641405
dataset_size: 12206981
---
# Dataset Card for "VALUE2_mnli_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlxen/squad_validation_with_JJ_VB_synonyms | 2022-11-29T21:29:40.000Z | [
"region:us"
] | mlxen | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 10484818
num_examples: 10570
download_size: 1825207
dataset_size: 10484818
---
# Dataset Card for "squad_validation_with_JJ_VB_synonyms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JoBeer/eclassTrainST | 2023-01-07T12:10:51.000Z | [
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | JoBeer | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: entailment
dtype: string
- name: contradiction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 327174992
num_examples: 698880
- name: eval
num_bytes: 219201779
num_examples: 450912
download_size: 46751846
dataset_size: 546376771
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "eclassTrainST"
This NLI-Dataset can be used to fine-tune Models for the task of sentence-simularity. It consists of names and descriptions of pump-properties from the ECLASS-standard. |
dferndz/cSQuAD1 | 2022-12-09T23:17:57.000Z | [
"task_categories:question-answering",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | dferndz | null | null | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- other
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: cSQuAD1
size_categories: []
source_datasets: []
tags: []
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for cSQuAD1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A contrast set generated from the eval set of SQuAD. Questions and answers were modified
to help detect dataset artifacts. This dataset contains only a validation set, which
should be used solely to evaluate a model.
### Supported Tasks
Question Answering (SQuAD).
### Languages
English
## Dataset Structure
### Data Instances
Dataset contains 100 instances
### Data Fields
| Field | Description |
|----------|---------------------------------------------------|
| id | Id of document containing context |
| title | Title of the document |
| context | The context of the question |
| question | The question to answer |
| answers | A list of possible answers from the context |
| answer_start | The index in context where the answer starts |
### Data Splits
A single `eval` split is provided
## Dataset Creation
The dataset was created by modifying a sample of 100 examples from the SQuAD test split.
## Additional Information
### Licensing Information
Apache 2.0 license
### Citation Information
TODO: add citations |
deutsche-telekom/NLU-few-shot-benchmark-en-de | 2023-01-01T07:23:53.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|deutsche-telekom/NLU-Evaluation-Data-en-de",
"language:en",
"language:de",
"license:cc-by-4.0",
"region:us"
] | deutsche-telekom | null | null | null | 1 | 5 | ---
license: cc-by-4.0
language:
- en
- de
multilinguality:
- multilingual
source_datasets:
- extended|deutsche-telekom/NLU-Evaluation-Data-en-de
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- intent-classification
---
# NLU Few-shot Benchmark - English and German
This is a few-shot training dataset from the domain of human-robot interaction.
It contains texts in German and English covering 64 different utterances (classes).
Each utterance (class) has exactly 20 samples in the training set,
for a total of 1,280 training samples.
The dataset is intended to benchmark the intent classifiers of chatbots in English and especially in German.
We are building on our
[deutsche-telekom/NLU-Evaluation-Data-en-de](https://huggingface.co/datasets/deutsche-telekom/NLU-Evaluation-Data-en-de)
data set.
## Processing Steps
- drop `NaN` values
- drop duplicates in `answer_de` and `answer`
- delete all rows where `answer_de` has more than 70 characters
- add column `label`: `df["label"] = df["scenario"] + "_" + df["intent"]`
- remove classes (`label`) with less than 25 samples:
- `audio_volume_other`
- `cooking_query`
- `general_greet`
- `music_dislikeness`
- random selection for train set - exactly 20 samples for each class (`label`)
- rest for test set
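The steps above can be sketched in pandas — a hedged reconstruction rather than the exact script used, assuming a dataframe with the `answer`, `answer_de`, `scenario` and `intent` columns of the source dataset:

```python
import pandas as pd

def build_few_shot_split(df, n_train=20, min_samples=25, max_len=70, seed=0):
    """Apply the documented cleaning steps, then draw exactly
    `n_train` training samples per class; the rest becomes the test set."""
    df = df.dropna()
    df = df.drop_duplicates(subset=["answer_de"]).drop_duplicates(subset=["answer"])
    df = df[df["answer_de"].str.len() <= max_len]
    df = df.assign(label=df["scenario"] + "_" + df["intent"])
    counts = df["label"].value_counts()
    df = df[df["label"].isin(counts[counts >= min_samples].index)]  # drop rare classes
    train = df.groupby("label", group_keys=False).sample(n=n_train, random_state=seed)
    test = df.drop(train.index)
    return train, test
```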
## Copyright
Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.com/)
All data is released under the
[Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
|
dippatel11/autotrain-data-whatsapp_chat_summarization | 2022-12-04T04:44:33.000Z | [
"language:en",
"region:us"
] | dippatel11 | null | null | null | 0 | 5 | ---
language:
- en
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: whatsapp_chat_summarization
## Dataset Description
This dataset has been automatically processed by AutoTrain for project whatsapp_chat_summarization.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "13682435",
"text": "Ella: Hi, did you get my text?\nJesse: Hey, yeah sorry- It's been crazy here. I'll collect Owen, don't worry about it :)\nElla: Oh thank you!! You're a lifesaver!\nJesse: It's not problem ;) Good luck with your meeting!!\nElla: Thanks again! :)",
"target": "Jesse will collect Owen so that Ella can go for a meeting."
},
{
"feat_id": "13728090",
"text": "William: Hey. Today i saw you were arguing with Blackett.\nWilliam: Are you guys fine?\nElizabeth: Hi. Sorry you had to see us argue.\nElizabeth: It was just a small misunderstanding but we will solve it.\nWilliam: Hope so\nWilliam: You think I should to talk to him about it?\nElizabeth: No don't\nElizabeth: He won't like it that we talked after the argument.\nWilliam: Ok. But if you need any help, don't hesitate to call me\nElizabeth: Definitely",
"target": "Elizabeth had an argument with Blackett today, but she doesn't want William to intermeddle."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1600 |
| valid | 400 |
|
its5Q/yandex-q | 2023-04-02T16:48:29.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language... | its5Q | This is a dataset of questions and answers scraped from Yandex.Q. | null | null | 6 | 5 | ---
annotations_creators:
- crowdsourced
language:
- ru
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Yandex.Q
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-generation
- question-answering
task_ids:
- language-modeling
- open-domain-qa
---
# Dataset Card for Yandex.Q
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/its5Q/yandex-q
### Dataset Summary
This is a dataset of questions and answers scraped from [Yandex.Q](https://yandex.ru/q/). There are 836,810 answered questions out of a total of 1,297,670.
The full dataset, which includes all metadata returned by the Yandex.Q APIs and also contains the unanswered questions, can be found in `full.jsonl.gz`.
### Languages
The dataset is mostly in Russian, but other languages may be present.
## Dataset Structure
### Data Fields
The dataset consists of 3 fields:
- `question` - question title (`string`)
- `description` - question description (`string` or `null`)
- `answer` - answer to the question (`string`)
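As an illustration, records with these fields can be streamed from the `full.jsonl.gz` dump with the standard library — a sketch assuming one JSON object per line and the field names listed above:

```python
import gzip
import json

def iter_answered(path):
    """Yield records with a non-empty 'answer' from a gzip-compressed
    JSON-lines file; unanswered questions are skipped."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("answer"):
                yield record
```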
### Data Splits
All 836,810 examples are in the train split; there is no validation split.
## Dataset Creation
The data was scraped through some "hidden" APIs using several scripts, located in [my GitHub repository](https://github.com/its5Q/yandex-q).
## Additional Information
### Dataset Curators
- https://github.com/its5Q
|
society-ethics/medmcqa_age_gender_custom | 2022-12-07T18:30:06.000Z | [
"region:us"
] | society-ethics | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: age.infant
dtype: bool
- name: age.child_preschool
dtype: bool
- name: age.child
dtype: bool
- name: age.adolescent
dtype: bool
- name: age.adult
dtype: bool
- name: age.middle_aged
dtype: bool
- name: age.aged
dtype: bool
- name: age.aged_80_over
dtype: bool
- name: gender.male
dtype: bool
- name: gender.female
dtype: bool
splits:
- name: train
num_bytes: 132131827
num_examples: 182822
download_size: 86345498
dataset_size: 132131827
---
# Dataset Card for "medmcqa_age_gender_custom"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
society-ethics/the-stack-tabs_spaces | 2022-12-08T00:06:50.000Z | [
"region:us"
] | society-ethics | null | null | null | 0 | 5 | Entry not found |
jinaai/fashion-captions-de | 2023-07-09T10:37:31.000Z | [
"task_categories:text-to-image",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"region:us"
] | jinaai | null | null | null | 7 | 5 | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 282285477
num_examples: 10000
- name: test
num_bytes: 56612023.875
num_examples: 2001
download_size: 320681179
dataset_size: 338897500.875
task_categories:
- text-to-image
multilinguality:
- monolingual
language:
- de
size_categories:
- 1K<n<10K
source_datasets:
- original
pretty_name: Fashion12k DE
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The data offered by Jina AI, Finetuner team.</b>
</p>
## Summary
This dataset is a German-language dataset based on the [Fashion12K](https://github.com/Toloka/Fashion12K_german_queries) dataset, which originally contains both English and German text descriptions for each item.
This dataset was used to fine-tune CLIP with the [Finetuner](https://finetuner.jina.ai/) tool.
## Fine-tuning
Please refer to our documentation: [Multilingual Text-to-Image Search with MultilingualCLIP](https://finetuner.jina.ai/notebooks/multilingual_text_to_image/)
and blog [Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models](https://jina.ai/news/improving-search-quality-non-english-queries-fine-tuned-multilingual-clip-models/)
## Instances
Each data point consists of a 'text' and an 'image' field, where the 'text' field describes an item of clothing in German, and the 'image' field contains an image of that item of clothing.
## Fields
- 'text': A string describing the item of clothing.
- 'image': A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the "image" column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
## Splits
| | train | test |
|------------|-------|------|
| # of items | 10000 | 2001 |
## Source
Images were sampled from the [Fashion200K dataset](https://github.com/xthan/fashion-200k).
## Annotations
Data was annotated using [Toloka](https://toloka.ai/). See their site for more details.
## Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License.
## Contributors
Thanks to contributors from [Jina AI](https://jina.ai) and [Toloka](https://toloka.ai) for adding this dataset. |
society-ethics/laion2B-en_continents | 2022-12-15T16:44:52.000Z | [
"region:us"
] | society-ethics | null | null | null | 0 | 5 | Entry not found |
Dahoas/sft-hh-rlhf | 2022-12-22T16:46:10.000Z | [
"region:us"
] | Dahoas | null | null | null | 2 | 5 | Entry not found |
echodpp/mbti-cleaned | 2022-12-18T22:31:20.000Z | [
"region:us"
] | echodpp | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 51651122
num_examples: 327828
- name: test
num_bytes: 12922409
num_examples: 81957
download_size: 42684526
dataset_size: 64573531
---
# Dataset Card for "mbti-cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kmewhort/quickdraw-bins-50M | 2022-12-19T18:12:46.000Z | [
"region:us"
] | kmewhort | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': The Eiffel Tower
'1': The Great Wall of China
'2': The Mona Lisa
'3': aircraft carrier
'4': airplane
'5': alarm clock
'6': ambulance
'7': angel
'8': animal migration
'9': ant
'10': anvil
'11': apple
'12': arm
'13': asparagus
'14': axe
'15': backpack
'16': banana
'17': bandage
'18': barn
'19': baseball
'20': baseball bat
'21': basket
'22': basketball
'23': bat
'24': bathtub
'25': beach
'26': bear
'27': beard
'28': bed
'29': bee
'30': belt
'31': bench
'32': bicycle
'33': binoculars
'34': bird
'35': birthday cake
'36': blackberry
'37': blueberry
'38': book
'39': boomerang
'40': bottlecap
'41': bowtie
'42': bracelet
'43': brain
'44': bread
'45': bridge
'46': broccoli
'47': broom
'48': bucket
'49': bulldozer
'50': bus
'51': bush
'52': butterfly
'53': cactus
'54': cake
'55': calculator
'56': calendar
'57': camel
'58': camera
'59': camouflage
'60': campfire
'61': candle
'62': cannon
'63': canoe
'64': car
'65': carrot
'66': castle
'67': cat
'68': ceiling fan
'69': cell phone
'70': cello
'71': chair
'72': chandelier
'73': church
'74': circle
'75': clarinet
'76': clock
'77': cloud
'78': coffee cup
'79': compass
'80': computer
'81': cookie
'82': cooler
'83': couch
'84': cow
'85': crab
'86': crayon
'87': crocodile
'88': crown
'89': cruise ship
'90': cup
'91': diamond
'92': dishwasher
'93': diving board
'94': dog
'95': dolphin
'96': donut
'97': door
'98': dragon
'99': dresser
'100': drill
'101': drums
'102': duck
'103': dumbbell
'104': ear
'105': elbow
'106': elephant
'107': envelope
'108': eraser
'109': eye
'110': eyeglasses
'111': face
'112': fan
'113': feather
'114': fence
'115': finger
'116': fire hydrant
'117': fireplace
'118': firetruck
'119': fish
'120': flamingo
'121': flashlight
'122': flip flops
'123': floor lamp
'124': flower
'125': flying saucer
'126': foot
'127': fork
'128': frog
'129': frying pan
'130': garden
'131': garden hose
'132': giraffe
'133': goatee
'134': golf club
'135': grapes
'136': grass
'137': guitar
'138': hamburger
'139': hammer
'140': hand
'141': harp
'142': hat
'143': headphones
'144': hedgehog
'145': helicopter
'146': helmet
'147': hexagon
'148': hockey puck
'149': hockey stick
'150': horse
'151': hospital
'152': hot air balloon
'153': hot dog
'154': hot tub
'155': hourglass
'156': house
'157': house plant
'158': hurricane
'159': ice cream
'160': jacket
'161': jail
'162': kangaroo
'163': key
'164': keyboard
'165': knee
'166': knife
'167': ladder
'168': lantern
'169': laptop
'170': leaf
'171': leg
'172': light bulb
'173': lighter
'174': lighthouse
'175': lightning
'176': line
'177': lion
'178': lipstick
'179': lobster
'180': lollipop
'181': mailbox
'182': map
'183': marker
'184': matches
'185': megaphone
'186': mermaid
'187': microphone
'188': microwave
'189': monkey
'190': moon
'191': mosquito
'192': motorbike
'193': mountain
'194': mouse
'195': moustache
'196': mouth
'197': mug
'198': mushroom
'199': nail
'200': necklace
'201': nose
'202': ocean
'203': octagon
'204': octopus
'205': onion
'206': oven
'207': owl
'208': paint can
'209': paintbrush
'210': palm tree
'211': panda
'212': pants
'213': paper clip
'214': parachute
'215': parrot
'216': passport
'217': peanut
'218': pear
'219': peas
'220': pencil
'221': penguin
'222': piano
'223': pickup truck
'224': picture frame
'225': pig
'226': pillow
'227': pineapple
'228': pizza
'229': pliers
'230': police car
'231': pond
'232': pool
'233': popsicle
'234': postcard
'235': potato
'236': power outlet
'237': purse
'238': rabbit
'239': raccoon
'240': radio
'241': rain
'242': rainbow
'243': rake
'244': remote control
'245': rhinoceros
'246': rifle
'247': river
'248': roller coaster
'249': rollerskates
'250': sailboat
'251': sandwich
'252': saw
'253': saxophone
'254': school bus
'255': scissors
'256': scorpion
'257': screwdriver
'258': sea turtle
'259': see saw
'260': shark
'261': sheep
'262': shoe
'263': shorts
'264': shovel
'265': sink
'266': skateboard
'267': skull
'268': skyscraper
'269': sleeping bag
'270': smiley face
'271': snail
'272': snake
'273': snorkel
'274': snowflake
'275': snowman
'276': soccer ball
'277': sock
'278': speedboat
'279': spider
'280': spoon
'281': spreadsheet
'282': square
'283': squiggle
'284': squirrel
'285': stairs
'286': star
'287': steak
'288': stereo
'289': stethoscope
'290': stitches
'291': stop sign
'292': stove
'293': strawberry
'294': streetlight
'295': string bean
'296': submarine
'297': suitcase
'298': sun
'299': swan
'300': sweater
'301': swing set
'302': sword
'303': syringe
'304': t-shirt
'305': table
'306': teapot
'307': teddy-bear
'308': telephone
'309': television
'310': tennis racquet
'311': tent
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
- name: packed_drawing
dtype: binary
splits:
- name: train
num_bytes: 5196066788.157136
num_examples: 40341012
- name: test
num_bytes: 1299016825.8428645
num_examples: 10085254
download_size: 6290637578
dataset_size: 6495083614.0
---
# Quick!Draw! Dataset (per-row bin format)
This is the full 50M-row dataset from the [QuickDraw! dataset](https://github.com/googlecreativelab/quickdraw-dataset). The row for each drawing contains a byte-encoded packed representation of the drawing and its metadata, which you can unpack with the following snippet:
```
from struct import unpack

def unpack_drawing(file_handle):
    # fixed-size header: key id, country code, recognized flag, timestamp, stroke count
    key_id, = unpack('Q', file_handle.read(8))
    country_code, = unpack('2s', file_handle.read(2))
    recognized, = unpack('b', file_handle.read(1))
    timestamp, = unpack('I', file_handle.read(4))
    n_strokes, = unpack('H', file_handle.read(2))
    image = []
    for i in range(n_strokes):
        # each stroke: point count, then all x coordinates, then all y coordinates
        n_points, = unpack('H', file_handle.read(2))
        fmt = str(n_points) + 'B'
        x = unpack(fmt, file_handle.read(n_points))
        y = unpack(fmt, file_handle.read(n_points))
        image.append((x, y))
    result = {
        'key_id': key_id,
        'country_code': country_code,
        'recognized': recognized,
        'timestamp': timestamp,
        'image': image,
    }
    return result
```
The `image` in the above is still in line vector format. To render it as a raster image, use something like the following (I recommend doing this on-the-fly in a pre-processor):
```
import io

import cv2
import numpy as np
from PIL import Image

# packed bin -> RGB PIL
def binToPIL(packed_drawing):
    padding = 8
    radius = 7
    scale = (224.0 - (2 * padding)) / 256  # map 0-255 stroke coordinates into the padded canvas
    unpacked = unpack_drawing(io.BytesIO(packed_drawing))
    image = np.full((224, 224), 255, np.uint8)  # start from a white canvas
    for stroke in unpacked['image']:
        prevX = round(stroke[0][0] * scale)
        prevY = round(stroke[1][0] * scale)
        for i in range(1, len(stroke[0])):
            x = round(stroke[0][i] * scale)
            y = round(stroke[1][i] * scale)
            cv2.line(image, (padding + prevX, padding + prevY), (padding + x, padding + y), 0, radius, -1)
            prevX = x
            prevY = y
    return Image.fromarray(image).convert("RGB")
``` |
fewshot-goes-multilingual/cs_mall-product-reviews | 2022-12-20T21:11:15.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-3.0",
"region:us"
] | fewshot-goes-multilingual | null | null | null | 1 | 5 | ---
annotations_creators:
- found
language:
- cs
language_creators:
- found
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
pretty_name: Mall.cz Product Reviews
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for Mall.cz Product Reviews (Czech)
## Dataset Description
The dataset contains user reviews from the Czech e-shop [Mall.cz](https://www.mall.cz).
Each review contains the text, a sentiment label (positive/negative/neutral), and the language (mostly Czech, occasionally Slovak), detected automatically with [lingua-py](https://github.com/pemistahl/lingua-py).
The dataset has 30,000 reviews in total (train + validation + test). The data is balanced:
the train set has 8,000 positive, 8,000 neutral and 8,000 negative reviews;
the validation and test sets each have 1,000 positive, 1,000 neutral and 1,000 negative reviews.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating - "pozitivní" / "neutrální" / "negativní"
- `rating_int`: integer representation of the rating (1=positive, 0=neutral, -1=negative)
- `comment_language`: language of the review (mostly "cs", occasionally "sk")
- `comment`: the string of the review
## Dataset Source
The data is a processed adaptation of the [Mall CZ corpus](https://liks.fav.zcu.cz/sentiment/).
The adaptation is label-balanced and adds an automatically-detected language column.
|
Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75 | 2022-12-29T03:19:16.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-3.0",
"region:us"
] | Jean-Baptiste | null | null | null | 0 | 5 | ---
language:
- en
dataset_info:
splits:
- name: test
num_examples: 785
- name: train
num_examples: 4446
annotations_creators:
- expert-generated
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
pretty_name: financial_news_sentiment_mixte_with_phrasebank_75
size_categories:
- 1K<n<10K
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
---
# Dataset Card for "financial_news_sentiment_mixte_with_phrasebank_75"
This is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% of annotators.
In addition, I added ~2,000 Canadian news articles whose sentiment was validated manually.
The dataset also includes a column `topic` which contains one of the following values:
* acquisition
* other
* quaterly financial release
* appointment to new position
* dividend
* corporate update
* drillings results
* conference
* share repurchase program
* grant of stocks
This was generated automatically using a zero-shot classification model and **was not** reviewed manually.
## References
Original dataset is available here:
[https://huggingface.co/datasets/financial_phrasebank]
|
RuyuanWan/SBIC_Disagreement | 2022-12-26T22:07:09.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|social_bias_frames",
"language:en",
"region:us"
] | RuyuanWan | null | null | null | 0 | 5 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: RuyuanWan/SBIC_Disagreement
size_categories: []
source_datasets:
- extended|social_bias_frames
tags: []
task_categories:
- text-classification
task_ids: []
---
This dataset is a processed version of the Social Bias Inference Corpus (SBIC) dataset, including the text, annotators' demographics and the annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
|
DavidVivancos/MindBigData2022 | 2023-01-07T10:18:30.000Z | [
"arxiv:2212.14746",
"region:us"
] | DavidVivancos | null | null | null | 2 | 5 | # MindBigData 2022 A Large Dataset of Brain Signals
> Supporting datasets for paper [ arXiv:2212.14746](https://arxiv.org/abs/2212.14746)
> There are 3 main datasets, each with subdatasets:
>
**1.- MindBigData MNIST of Brain Digits**
> based on http://mindbigdata.com/opendb/index.html
> All datasets are split 80% train / 20% test (also proportionally across the 11 classes)
> EEGs are resampled to match each headset's original sampling rate
> Headers are included.
> Each row is simplified to contain only the label & EEG data, with columns named in the headers as ChannelName-SampleNum; e.g., for channel FP1 on MindWave this gives FP1-0, FP1-1, ..., FP1-1023, since there are 1024 samples per channel.
> There are 4 subdatasets:
>
> For MindWave with 1 EEG Channel and 1024 samples x Channel
>
> For EPOC1 with 14 EEG Channels and 256 samples x Channel
>
> For Muse1 with 4 EEG Channels and 440 samples x Channel
>
> For Insight1 with 5 EEG Channels and 256 samples x Channel
>
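Given the ChannelName-SampleNum column naming described above, here is a small hedged sketch for regrouping one flat CSV row back into per-channel signals (the channel names and values below are illustrative, not taken from the data):

```python
from collections import defaultdict

def row_to_channels(row):
    """Turn flat 'CHANNEL-INDEX' columns (e.g. 'FP1-0' ... 'FP1-1023')
    into ordered per-channel sample lists; the 'label' column is skipped."""
    grouped = defaultdict(dict)
    for column, value in row.items():
        if column == "label":
            continue
        channel, index = column.rsplit("-", 1)
        grouped[channel][int(index)] = value
    return {ch: [samples[i] for i in sorted(samples)] for ch, samples in grouped.items()}
```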
**1.1.- MindBigData MNIST of Brain digits MindWave1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MW
>
**1.2.- MindBigData MNIST of Brain digits EPOC1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_EP
**1.3.- MindBigData MNIST of Brain digits Muse1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MU
**1.4.- MindBigData MNIST of Brain digits Insight1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_IN
**2.- MindBigData Imagenet of the Brain**
> based on http://mindbigdata.com/opendb/imagenet.html
> All datasets are split 80% train / 20% test (also proportionally across all the classes)
> EEGs are resampled to match the headset's original sampling rate
> Headers are included.
> Each row contains the label (the ILSVRC2013 category), a one-hot-encoded name list, the RGB pixel values of the image seen (resampled to 150 by 150 pixels) & the EEG data, with columns named in the headers as ChannelName-SampleNum.
> There are 2 subdatasets:
>
> One with the Insight 1 EEG signals at 384 samples per channel (5 channels)
>
> One with the Spectrogram image 64x64px instead of the EEG as described in the paper
>
**2.1.- MindBigData Imagenet of the Brain Insight1 EEG**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN
**2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN_Spct
**3.- MindBigData Visual MNIST of Brain Digits**
> based on http://mindbigdata.com/opendb/visualmnist.html
> All datasets are split 80% train / 20% test (also proportionally across the 11 classes)
> Headers are included.
> Each row is simplified to contain only the label, the original MNIST pixels of the digit seen (28x28 pixels) & the EEG data, with columns named in the headers as ChannelName-SampleNum; e.g., for channel TP9 on Muse2 this gives TP9-0, TP9-1, ..., TP9-511, since there are 512 samples per channel.
> There are 3 subdatasets:
>
> For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel
>
> For Cap64 with 64 EEG Channels and 400 samples x Channel
>
> For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs
>
**3.1.- MindBigData Visual MNIST of Brain digits Muse2**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_MU2
**3.2.- MindBigData Visual MNIST of Brain digits Cap64**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64
**3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64_Morlet
|
sdadas/sick_pl | 2022-12-29T11:01:28.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:sick",
"language:pl",
"license:cc-by-nc-sa-3.0",
"region:us"
] | sdadas | null | null | null | 1 | 5 | ---
language:
- pl
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- sick
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
pretty_name: Sentences Involving Compositional Knowledge (Polish)
dataset_info:
features:
- name: pair_ID
dtype: string
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype: string
splits:
- name: train
- name: validation
- name: test
---
# SICK_PL - Sentences Involving Compositional Knowledge (Polish)
### Dataset Summary
This dataset is a manually translated version of the popular English natural language inference (NLI) corpus consisting of 10,000 sentence pairs. NLI is the task of determining whether one statement (the premise) semantically entails another statement (the hypothesis). The relation can be classified as entailment (the first sentence entails the second), neutral (the first statement does not determine the truth value of the second), or contradiction (if the first sentence is true, the second is false). Additionally, the original SICK dataset contains semantic relatedness scores for the sentence pairs as real numbers ranging from 1 to 5. When translating the corpus to Polish, we tried to stay as close as possible to the original meaning. In some cases, however, two different English sentences had an identical translation in Polish. Such instances were slightly modified in order to preserve both the meaning and the syntactic differences in the sentence pair.
### Data Instances
Example instance:
```
{
"pair_ID": "122",
"sentence_A": "Pięcioro dzieci stoi blisko siebie , a jedno dziecko ma pistolet",
"sentence_B": "Pięcioro dzieci stoi blisko siebie i żadne z nich nie ma pistoletu",
"relatedness_score": 3.7,
"entailment_judgment": "CONTRADICTION"
}
```
### Data Fields
- pair_ID: sentence pair ID
- sentence_A: sentence A
- sentence_B: sentence B
- entailment_judgment: textual entailment gold label: entailment (0), neutral (1) or contradiction (2)
- relatedness_score: semantic relatedness gold score (on a 1-5 continuous scale)
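The fields above can be turned into model-ready labels with a small helper. A minimal sketch, using only the field names and label ids listed in this card:

```python
# Sketch: map the textual entailment labels of SICK_PL to the integer ids
# given under Data Fields (entailment=0, neutral=1, contradiction=2).
LABEL2ID = {"ENTAILMENT": 0, "NEUTRAL": 1, "CONTRADICTION": 2}

def encode_example(example: dict) -> dict:
    """Attach an integer label to a raw SICK_PL record."""
    return {
        **example,
        "label": LABEL2ID[example["entailment_judgment"]],
        "relatedness_score": float(example["relatedness_score"]),
    }

sample = {
    "pair_ID": "122",
    "sentence_A": "Pięcioro dzieci stoi blisko siebie , a jedno dziecko ma pistolet",
    "sentence_B": "Pięcioro dzieci stoi blisko siebie i żadne z nich nie ma pistoletu",
    "relatedness_score": 3.7,
    "entailment_judgment": "CONTRADICTION",
}
print(encode_example(sample)["label"])  # → 2
```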
### Citation Information
```
@inproceedings{dadas-etal-2020-evaluation,
title = "Evaluation of Sentence Representations in {P}olish",
author = "Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.207",
pages = "1674--1680",
language = "English",
ISBN = "979-10-95546-34-4",
}
``` |
DavidVivancos/MindBigData2022_Imagenet_IN_Spct | 2023-01-04T08:12:38.000Z | [
"license:odbl",
"region:us"
] | DavidVivancos | null | null | null | 0 | 5 | ---
license: odbl
---
|
irds/vaswani | 2023-01-05T03:56:04.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | null | 1 | 5 | ---
pretty_name: '`vaswani`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `vaswani`
The `vaswani` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/vaswani#vaswani).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=11,429
- `queries` (i.e., topics); count=93
- `qrels`: (relevance assessments); count=2,083
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/vaswani', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/vaswani', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/vaswani', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
metaeval/imppres | 2023-06-21T12:52:43.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | >25k semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures. | @inproceedings{jeretic-etal-2020-natural,
title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
author = "Jereti\v{c}, Paloma and
Warstadt, Alex and
Bhooshan, Suvrat and
Williams, Adina",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.768",
doi = "10.18653/v1/2020.acl-main.768",
pages = "8690--8705",
abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
} | null | 0 | 5 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
task_ids:
- natural-language-inference
---
Imppres, but it works
https://github.com/facebookresearch/Imppres
```
@inproceedings{jeretic-etal-2020-natural,
title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
author = "Jereti\v{c}, Paloma and
Warstadt, Alex and
Bhooshan, Suvrat and
Williams, Adina",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.768",
doi = "10.18653/v1/2020.acl-main.768",
pages = "8690--8705",
abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
}
``` |
bigbio/cpi | 2023-01-06T03:46:05.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The compound-protein relationship (CPI) dataset consists of 2,613 sentences from abstracts containing annotations of proteins, small molecules, and their relationships | @article{doring2020automated,
title={Automated recognition of functional compound-protein relationships in literature},
author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others},
journal={Plos one},
volume={15},
number={3},
pages={e0220925},
year={2020},
publisher={Public Library of Science San Francisco, CA USA}
} | null | 0 | 5 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: ISC
pretty_name: CPI
homepage: https://github.com/KerstenDoering/CPI-Pipeline
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for CPI
## Dataset Description
- **Homepage:** https://github.com/KerstenDoering/CPI-Pipeline
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE
The compound-protein relationship (CPI) dataset consists of 2,613 sentences
from abstracts containing annotations of proteins, small molecules, and their
relationships.
## Citation Information
```
@article{doring2020automated,
title={Automated recognition of functional compound-protein relationships in literature},
author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others},
journal={Plos one},
volume={15},
number={3},
pages={e0220925},
year={2020},
publisher={Public Library of Science San Francisco, CA USA}
}
```
|
leonweber/teaching_motivational_quotes | 2023-01-09T10:26:21.000Z | [
"region:us"
] | leonweber | null | null | null | 0 | 5 | Entry not found |
torileatherman/sentiment_analysis_batch_predictions | 2023-01-15T12:04:48.000Z | [
"license:apache-2.0",
"region:us"
] | torileatherman | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
torileatherman/sentiment_analysis_training | 2023-08-04T13:04:15.000Z | [
"license:apache-2.0",
"region:us"
] | torileatherman | null | null | null | 0 | 5 | ---
license: apache-2.0
dataset_info:
features:
- name: Sentiment
dtype: int64
- name: Headline
sequence: int64
- name: Headline_string
dtype: string
splits:
- name: train
num_bytes: 6608592
num_examples: 11143
download_size: 1012250
dataset_size: 6608592
---
|
Cohere/wikipedia-22-12-hi-embeddings | 2023-03-22T16:53:57.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:hi",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language:
- hi
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (hi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (hi)](https://hi.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
Cohere/wikipedia-22-12-zh-embeddings | 2023-03-22T16:55:57.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 11 | 5 | ---
language:
- zh
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (zh)](https://zh.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
Cohere/wikipedia-22-12-ja-embeddings | 2023-03-22T16:55:06.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 1 | 5 | ---
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ja)](https://ja.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
Cohere/wikipedia-22-12-fr-embeddings | 2023-03-22T16:53:41.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 4 | 5 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (fr)](https://fr.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
kjhkjh95/kote_ekman | 2023-01-17T15:18:28.000Z | [
"arxiv:2205.05300",
"region:us"
] | kjhkjh95 | null | null | null | 2 | 5 |
# Ekman Taxomony of KOTE(Korean Online That-gul Emotions) datasets
I mapped 44 emotion types in the KOTE dataset to 7 Ekman Taxonomy (Disgust, Fear, Sadness, Surprise, Joy, + No Emotion).
For the mapping, I referred to the clustering results in the KOTE paper (https://arxiv.org/pdf/2205.05300.pdf).
The distance between each emotion and Ekman basic emotion (Disgust, Fear, Sadness, Surprise, Joy, + No Emotion) was calculated and configured to map to the nearest basic emotion.
# Emotion Grouping
Disgust: fed up, shock, disgust, contempt
Anger: anger, irritation, dissatisfaction, preposterous
Fear: pathetic, distrust, disappointment, embarrassment, shame, guilt, gessepany, fear, anxiety
Sadness: compassion, sadness, sorrow, despair, exhaustion, laziness, reluctant, boredom
No Emotion: no emotion, arrogance, resolute
Surprise: realization, surprise, respect, interest
Joy: expectancy, welcome, care, attracted, excitement, joy, happiness, admiration, pride, gratitude, relief, comfort
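The grouping above can be sketched as a lookup table. The labels are transcribed from this card, so casing and spelling may differ from the actual KOTE label strings:

```python
# Sketch: the Ekman grouping from this card as a flat emotion -> group lookup.
# "gessepany" appears as-is in the card and is kept verbatim here.
EKMAN_GROUPS = {
    "Disgust": ["fed up", "shock", "disgust", "contempt"],
    "Anger": ["anger", "irritation", "dissatisfaction", "preposterous"],
    "Fear": ["pathetic", "distrust", "disappointment", "embarrassment",
             "shame", "guilt", "gessepany", "fear", "anxiety"],
    "Sadness": ["compassion", "sadness", "sorrow", "despair", "exhaustion",
                "laziness", "reluctant", "boredom"],
    "No Emotion": ["no emotion", "arrogance", "resolute"],
    "Surprise": ["realization", "surprise", "respect", "interest"],
    "Joy": ["expectancy", "welcome", "care", "attracted", "excitement", "joy",
            "happiness", "admiration", "pride", "gratitude", "relief", "comfort"],
}

# Invert the grouping: each fine-grained emotion points to its Ekman category.
TO_EKMAN = {emo: group for group, emos in EKMAN_GROUPS.items() for emo in emos}

print(TO_EKMAN["sorrow"])   # → Sadness
print(TO_EKMAN["respect"])  # → Surprise
```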
annotations_creators: https://github.com/searle-j/KOTE, language: "Korean", license: mit
|
AnonymousSubmissionOnly/Abb_Pinyin | 2023-06-25T12:00:35.000Z | [
"license:mit",
"region:us"
] | AnonymousSubmissionOnly | null | null | null | 0 | 5 | ---
license: mit
---
|
AnonymousSubmissionOnly/Chaizi | 2023-01-19T04:53:17.000Z | [
"license:mit",
"region:us"
] | AnonymousSubmissionOnly | null | null | null | 0 | 5 | ---
license: mit
---
|
clip-benchmark/wds_imagenet-a | 2023-01-20T05:33:05.000Z | [
"region:us"
] | clip-benchmark | null | null | null | 0 | 5 | Entry not found |
KTH/hungarian-single-speaker-tts | 2023-01-22T13:11:38.000Z | [
"task_categories:text-to-speech",
"task_categories:other",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:hu",
"license:cc0-1.0",
"arxiv:1903.11269",
"region:us"
] | KTH | null | null | null | 1 | 5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: original_text
dtype: string
- name: text
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 3173032948.2
num_examples: 4515
download_size: 0
dataset_size: 3173032948.2
annotations_creators:
- expert-generated
language:
- hu
license: cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-to-speech
- other
task_ids: []
---
# Dataset Card for CSS10 Hungarian: Single Speaker Speech Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Hungarian Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/hungarian-single-speaker-speech-dataset)
- **Repository:** [CSS10](https://github.com/kyubyong/css10)
- **Paper:** [CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages](https://arxiv.org/abs/1903.11269)
### Dataset Summary
The corpus consists of recordings from a single speaker: 4515 segments
extracted from a single LibriVox audiobook.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Hungarian.
## Dataset Structure
[Needs More Information]
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
CSS10 is a collection of single speaker speech datasets for 10 languages. Each of them consists of audio files recorded by a single volunteer and their aligned text sourced from LibriVox.
### Source Data
#### Initial Data Collection and Normalization
[Egri csillagok](https://librivox.org/egri-csillagok-by-geza-gardonyi/),
read by Diana Majlinger.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Kyubyong Park & Tommy Mulc
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@article{park2019css10,
title={CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages},
author={Park, Kyubyong and Mulc, Thomas},
journal={Interspeech},
year={2019}
}
```
### Contributions
[Needs More Information] |
aadityaubhat/perturbed_faces | 2023-01-25T04:29:39.000Z | [
"task_categories:feature-extraction",
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"size_categories:1K<n<10K",
"arxiv:2301.07315",
"region:us"
] | aadityaubhat | null | null | null | 1 | 5 | ---
task_categories:
- feature-extraction
- image-classification
- zero-shot-image-classification
pretty_name: Perturbed Faces
size_categories:
- 1K<n<10K
---
# Perturbed Faces
This dataset contains 1000 images from [CelebA dataset](!https://www.kaggle.com/datasets/jessicali9530/celeba-dataset). For each of the thousand images dataset also has [LowKey](https://openreview.net/forum?id=hJmtwocEqzc) perturbed version and [Fawkes](https://sandlab.cs.uchicago.edu/fawkes/) perturbed version.
LowKey and Fawkes perturbed images have `_attacked` and `_cloaked` at the end of the filename, respectively.
| File Name | Version |
|---------------------|--------------------------|
| 000001.jpg | Original |
| 000001_cloaked.png | Fawkes perturbed version |
| 000001_attacked.png | LowKey perturbed version |
The Fawkes perturbed images were created using the CLI provided in the [GitHub repository](https://github.com/Shawn-Shan/fawkes) with protection mode set to mid. The LowKey versions of the images were created using the Python code provided with the paper.
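Given the suffix convention in the table, the two perturbed counterparts of any original image can be derived from its filename. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
from pathlib import Path

def perturbed_names(original: str) -> dict:
    """Derive the Fawkes and LowKey filenames for a CelebA image.

    Follows the naming convention described above: perturbed copies are
    saved as PNG with the "_cloaked" (Fawkes) or "_attacked" (LowKey)
    suffix. The helper itself is ours, not part of the dataset tooling.
    """
    stem = Path(original).stem
    return {
        "original": original,
        "fawkes": f"{stem}_cloaked.png",
        "lowkey": f"{stem}_attacked.png",
    }

print(perturbed_names("000001.jpg"))
# {'original': '000001.jpg', 'fawkes': '000001_cloaked.png', 'lowkey': '000001_attacked.png'}
```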
## Citation
If you found this work helpful for your research, please cite it as following:
```
@misc{2301.07315,
Author = {Aaditya Bhat and Shrey Jain},
Title = {Face Recognition in the age of CLIP & Billion image datasets},
Year = {2023},
Eprint = {arXiv:2301.07315},
}
``` |
nglaura/scielo-summarization | 2023-04-11T10:21:45.000Z | [
"task_categories:summarization",
"language:fr",
"license:apache-2.0",
"region:us"
] | nglaura | null | null | null | 0 | 5 | ---
license: apache-2.0
task_categories:
- summarization
language:
- fr
pretty_name: SciELO
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## SciELO dataset for summarization
SciELO is a dataset for summarization of research papers written in Spanish and Portuguese, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances (ES/PT) |
| ------------- | ----------------------------|
| Train | 20,853 / 19,407 |
| Validation | 1,158 / 1,078 |
| Test | 1,159 / 1,078 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
``` |
emreisik/news | 2023-01-25T18:50:02.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:tr",
"license:bsd",
"region:us"
] | emreisik | null | null | null | 0 | 5 | ---
license: bsd
task_categories:
- text-generation
language:
- tr
pretty_name: News
size_categories:
- 1K<n<10K
---
This is the repository of a Turkish fake news dataset, which consists of Zaytung posts and Hurriyet news articles.
The Code folder contains the web scraper Python files.
The Raw folder contains txt files downloaded from the sources.
The Clean folder contains lowercased txt files with punctuation and numbers removed.
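The cleaning step described for the Clean folder can be sketched as follows. This is only an approximation of the released files: it lowercases with Python's default casing (which does not handle Turkish dotted/dotless I correctly) and strips only ASCII punctuation and digits:

```python
import string

def clean(text: str) -> str:
    """Lowercase the text, then drop ASCII punctuation and digits.

    Approximates the preprocessing described above; Turkish-specific
    casing rules (I/ı, İ/i) are intentionally not handled here.
    """
    text = text.lower()
    remove = string.punctuation + string.digits
    return text.translate(str.maketrans("", "", remove))

print(clean("Abc, 123!"))  # "abc " (trailing space where ", 123!" was)
```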
liyucheng/UFSAC | 2023-01-26T15:41:19.000Z | [
"task_categories:token-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-2.0",
"region:us"
] | liyucheng | null | null | null | 0 | 5 | ---
license: cc-by-2.0
task_categories:
- token-classification
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
UFSAC: Unification of Sense Annotated Corpora and Tools
## Dataset Description
- **Homepage:** https://github.com/getalp/UFSAC
- **Repository:** https://github.com/getalp/UFSAC
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
WSD: Word Sense Disambiguation
### Languages
English
## Dataset Structure
### Data Instances
```
{'lemmas': ['_',
'be',
'quite',
'_',
'hefty',
'spade',
'_',
'_',
'bicycle',
'_',
'type',
'handlebar',
'_',
'_',
'spring',
'lever',
'_',
'_',
'rear',
'_',
'_',
'_',
'step',
'on',
'_',
'activate',
'_',
'_'],
'pos_tags': ['PRP',
'VBZ',
'RB',
'DT',
'JJ',
'NN',
',',
'IN',
'NN',
':',
'NN',
'NNS',
'CC',
'DT',
'VBN',
'NN',
'IN',
'DT',
'NN',
',',
'WDT',
'PRP',
'VBP',
'RP',
'TO',
'VB',
'PRP',
'.'],
'sense_keys': ['activate%2:36:00::'],
'target_idx': 25,
'tokens': ['It',
'is',
'quite',
'a',
'hefty',
'spade',
',',
'with',
'bicycle',
'-',
'type',
'handlebars',
'and',
'a',
'sprung',
'lever',
'at',
'the',
'rear',
',',
'which',
'you',
'step',
'on',
'to',
'activate',
'it',
'.']}
```
### Data Fields
```
{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'lemmas': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'pos_tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'target_idx': Value(dtype='int32', id=None),
'sense_keys': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```
### Data Splits
Not split. Use `train` split directly.
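Given the fields above, recovering the annotated target word and comparing it against the sense key is a two-line lookup. A sketch built on the example instance shown earlier (splitting the sense key at `%` to get its lemma prefix is a WordNet convention, not something defined by this card):

```python
# One UFSAC example, abbreviated from the Data Instances section above;
# field names follow the Data Fields section.
example = {
    "tokens": ["It", "is", "quite", "a", "hefty", "spade", ",", "with",
               "bicycle", "-", "type", "handlebars", "and", "a", "sprung",
               "lever", "at", "the", "rear", ",", "which", "you", "step",
               "on", "to", "activate", "it", "."],
    "target_idx": 25,
    "sense_keys": ["activate%2:36:00::"],
}

# The annotated word is simply tokens[target_idx].
target_word = example["tokens"][example["target_idx"]]
# The lemma portion of a WordNet sense key precedes the "%" separator.
lemma, _, _ = example["sense_keys"][0].partition("%")
print(target_word, lemma)  # activate activate
```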
|
wadhwani-ai/pest-management-opendata | 2023-06-02T09:25:17.000Z | [
"license:apache-2.0",
"region:us"
] | wadhwani-ai | null | null | null | 0 | 5 | ---
license: apache-2.0
---
# Wadhwani AI Pest Management Open Data
This dataset is a Hugging Face adaptor to the official dataset [hosted
on
Github](https://github.com/wadhwani-ai/pest-management-opendata). Please
refer to that repository for detailed and up-to-date documentation.
## Usage
This dataset is large. It is strongly recommended that users access it as a
stream:
```python
from datasets import load_dataset
dataset = load_dataset('wadhwani-ai/pest-management-opendata', streaming=True)
```
Bounding boxes are stored as geospatial types. Once loaded, they can be
read as follows:
```python
from shapely.wkb import loads
for (s, data) in dataset.items():
for d in data:
pests = d['pests']
iterable = map(pests.get, ('label', 'geometry'))
for (i, j) in zip(*iterable):
geom = loads(j)
print(i, geom.bounds)
```
The bounds of a geometry are what most object detection systems
require. See the [Shapely
documentation](https://shapely.readthedocs.io/en/stable/manual.html#object.bounds)
for more.
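If a downstream detector expects `[x, y, width, height]` boxes instead of `(minx, miny, maxx, maxy)` bounds, the conversion is straightforward. A small sketch (the helper is illustrative and operates on plain tuples, so Shapely itself is not required to run it):

```python
def bounds_to_xywh(bounds):
    """Convert (minx, miny, maxx, maxy) bounds, as returned by
    shapely's .bounds, to a COCO-style [x, y, width, height] box.
    Illustrative helper; not part of the dataset or its tooling."""
    minx, miny, maxx, maxy = bounds
    return [minx, miny, maxx - minx, maxy - miny]

print(bounds_to_xywh((10.0, 20.0, 110.0, 70.0)))  # [10.0, 20.0, 100.0, 50.0]
```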
|
bridgeconn/snow-mountain | 2023-05-23T05:42:14.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"multilinguality:multilingual",
"source_datasets:Snow Mountain",
"language:hi",
"language:bgc",
"language:kfs",
"language:dgo",
"language:bhd",
"language:gbk",
"language:xnr",
"language:kfx",
"language:mjl",
... | bridgeconn | The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible
in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single
speaker in the dataset. Most of these languages are geographically concentrated in the Northern part of India around
the state of Himachal Pradesh. Being related to Hindi they all use the Devanagari script for transcription. | @inproceedings{Raju2022SnowMD,
title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
year={2022}
} | null | 0 | 5 | ---
pretty_name: Snow Mountain
language:
- hi
- bgc
- kfs
- dgo
- bhd
- gbk
- xnr
- kfx
- mjl
- kfo
- bfz
annotations_creators:
- 'null': null
language_creators:
- 'null': null
multilinguality:
- multilingual
source_datasets:
- Snow Mountain
task_categories:
- automatic-speech-recognition
- text-to-speech
task_ids: []
configs:
- hi
- bgc
dataset_info:
- config_name: hi
features:
- name: Unnamed
dtype: int64
- name: sentence
dtype: string
- name: path
dtype: string
splits:
- name: train_500
num_examples: 400
- name: val_500
num_examples: 100
- name: train_1000
num_examples: 800
- name: val_1000
num_examples: 200
- name: test_common
num_examples: 500
dataset_size: 71.41 hrs
- config_name: bgc
features:
- name: Unnamed
dtype: int64
- name: sentence
dtype: string
- name: path
dtype: string
splits:
- name: train_500
num_examples: 400
- name: val_500
num_examples: 100
- name: train_1000
num_examples: 800
- name: val_1000
num_examples: 200
- name: test_common
num_examples: 500
dataset_size: 27.41 hrs
license: cc-by-sa-4.0
---
# Snow Mountain
## Dataset Description
- **Paper: https://arxiv.org/abs/2206.01205**
- **Point of Contact: Joel Mathew**
### Dataset Summary
The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible (both Old Testament (OT) and New Testament (NT)) in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single speaker in the dataset. Most of these languages are geographically concentrated in the Northern part of India around the state of Himachal Pradesh. Being related to Hindi, they all use the Devanagari script for transcription.
We have used this dataset for experiments on ASR tasks, but it could also be used for other applications in the speech domain, such as speaker recognition, language identification, or even as an unlabelled corpus for pre-training.
### Supported Tasks and Leaderboards
Automatic speech recognition, speech-to-text, speaker recognition, language identification
### Languages
Hindi, Haryanvi, Bilaspuri, Dogri, Bhadrawahi, Gaddi, Kangri, Kulvi, Mandeali, Kulvi Outer Seraji, Pahari Mahasui, Malayalam, Kannada, Tamil, Telugu
## Dataset Structure
```
data
|- cleaned
|- lang1
|- book1_verse_audios.tar.gz
|- book2_verse_audios.tar.gz
...
...
|- all_verses.tar.gz
|- short_verses.tar.gz
|- lang2
...
...
|- experiments
|- lang1
|- train_500.csv
|- val_500.csv
|- test_common.csv
...
...
|- lang2
...
...
|- raw
|- lang1
|- chapter1_audio.mp3
|- chapter2_audio.mp3
...
...
|- text
|- book1.csv
|- book1.usfm
...
...
|- lang2
...
...
```
### Data Instances
A data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`.
```
{'sentence': 'क्यूँके तू अपणी बात्तां कै कारण बेकसूर अर अपणी बात्तां ए कै कारण कसूरवार ठहराया जावैगा',
'audio': {'path': 'data/cleaned/haryanvi/MAT/MAT_012_037.wav',
'array': array([0., 0., 0., ..., 0., 0., 0.]),
'sampling_rate': 16000},
'path': 'data/cleaned/haryanvi/MAT/MAT_012_037.wav'}
```
### Data Fields
`path`: The path to the audio file
`audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
`sentence`: The transcription of the audio file.
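Verse-level file names encode the book, chapter, and verse, as in the `MAT/MAT_012_037.wav` example above, so this metadata can be recovered from paths alone. A sketch (the helper name and the assumption that every path follows exactly this layout are ours):

```python
from pathlib import Path

def parse_verse_path(path: str) -> dict:
    """Split a verse-audio path like
    data/cleaned/haryanvi/MAT/MAT_012_037.wav into its parts.
    Assumes the data/cleaned/<language>/<book>/<book>_<chap>_<verse>.wav
    layout shown in the example instance above."""
    p = Path(path)
    book, chapter, verse = p.stem.split("_")
    return {
        "language": p.parts[2],
        "book": book,
        "chapter": int(chapter),
        "verse": int(verse),
    }

print(parse_verse_path("data/cleaned/haryanvi/MAT/MAT_012_037.wav"))
# {'language': 'haryanvi', 'book': 'MAT', 'chapter': 12, 'verse': 37}
```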
### Data Splits
We create splits of the cleaned data for training and analysing the performance of ASR models. The splits are available in the `experiments` directory. The file names indicate the experiment and the split category. Additionally, two CSV files are included in the data splits: `all_verses` and `short_verses`. Various data splits were generated from these two main CSVs. `short_verses.csv` contains audios of length < 10s and their corresponding transcriptions. `all_verses.csv` contains all cleaned verses, including long and short audios. Due to their large size (>10MB), we keep these CSVs compressed in `tar.gz` format in the `cleaned` folder.
## Dataset Loading
The `raw` folder has chapter-wise audios in .mp3 format. For experiments, we may need audios in .wav format. Verse-wise audio files are kept in the `cleaned` folder in .wav format. This results in a much larger size, which contributes to longer loading times into memory. Here is the approximate time needed for loading the dataset:
- Hindi (OT books): ~20 minutes
- Hindi minority languages (NT books): ~9 minutes
- Dravidian languages (OT+NT books): ~30 minutes
## Details
Please refer to the paper for more details on the creation and the rationale for the splits we created in the dataset.
### Licensing Information
The data is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)
### Citation Information
Please cite this work if you make use of it:
```
@inproceedings{Raju2022SnowMD,
title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
year={2022}
}
``` |
LangChainHub-Prompts/LLM_Math | 2023-02-28T07:39:19.000Z | [
"langchain",
"prompt",
"region:us"
] | LangChainHub-Prompts | null | null | null | 3 | 5 |
---
tags:
- langchain
- prompt
---
# Description of LLM Math
Prompt designed to optionally output IPython syntax to be run in order to better answer math questions.
## Inputs
This is a description of the inputs that the prompt expects.
question: User question to be answered.
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_prompt
from langchain.chains import LLMMathChain
llm = ...
prompt = load_prompt('lc://prompts/llm_math/<file-name>')
chain = LLMMathChain(llm=llm, prompt=prompt)
```
|
huggingface/badges | 2023-09-22T14:35:51.000Z | [
"license:mit",
"region:us"
] | huggingface | null | null | null | 20 | 5 | ---
license: mit
thumbnail: "https://huggingface.co/datasets/huggingface/badges/resolve/main/badges-thumbnail.png"
---
<style>
.prose img {
display: inline;
margin: 0 6px !important;
}
.prose table {
max-width: 320px;
margin: 0;
}
</style>
# Badges
A set of badges you can use anywhere. Just update the anchor URL to point to the correct action for your Space. Light or dark background with 4 sizes available: small, medium, large, and extra large.
## How to use?
- With markdown, just copy the badge from: https://huggingface.co/datasets/huggingface/badges/blob/main/README.md?code=true
- With HTML, inspect this page with your web browser and copy the outer html.
## Available sizes
| Small | Medium | Large | Extra large |
| ------------- | :-----------: | ------------- | ------------- |
| 20px (height) | 24px (height) | 36px (height) | 48px (height) |
## Paper page
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
[](https://huggingface.co/papers)
## Deploy on Spaces
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
[](https://huggingface.co/new-space)
## Duplicate this Space
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
## Open in HF Spaces
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
## Open a Discussion
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
## Share to Community
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
[](https://huggingface.co/spaces)
## Sign in with Hugging Face
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
[](https://huggingface.co/)
## Open a Pull Request
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
[](https://huggingface.co/spaces/victor/ChatUI/discussions)
## Subscribe to PRO
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
[](https://huggingface.co/subscribe/pro)
## Powered by Hugging Face
[](https://huggingface.co)
[](https://huggingface.co)
|
tti-bias/identities | 2023-08-21T18:34:57.000Z | [
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2301.13188",
"region:us"
] | tti-bias | null | null | null | 0 | 5 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: ethnicity
dtype: string
- name: gender
dtype: string
- name: 'no'
dtype: int32
- name: image_path
dtype: string
- name: image
dtype: image
- name: model
dtype: string
splits:
- name: train
num_bytes: 585336673
num_examples: 2040
download_size: 465986042
dataset_size: 585336673
language:
- en
---
# Dataset Card for identities
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases and Limitations](#discussion-of-biases-and-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/spaces/tti-bias/stable-bias
- **Repository:** Forthcoming
- **Paper:** Forthcoming
- **Leaderboard:**
- **Point of Contact:** Forthcoming
### Dataset Summary
<span style="color:blue" font-size:smaller><b>🏗️ WORK IN PROGRESS</b></span>
<span style="color:red" font-size:smaller><b>⚠️ DISCLAIMER: The images in this dataset were generated by text-to-image systems and may depict offensive stereotypes or contain explicit content.</b></span>
The *Identities* dataset is a collection of computer-generated images generated using Text-to-Image (TTI) systems.
In order to generate a diverse set of prompts to evaluate the variation of system outputs across dimensions of interest, we use the pattern **`Photo portrait of a [X] [Y] at work`**, where **[X]** spans the ethnicity terms and **[Y]** the gender terms listed below:
```python
["American_Indian", "Black"]
```
```python
["woman", "man", "non-binary", "no_gender_specified"]
# no_gender_specified corresponds to a value of "person" for **[Y]**
```
Every prompt is used to generate images from the following models: **Stable Diffusion v.1.4, Stable Diffusion v.2., and Dall-E 2**
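The prompt grid above can be reproduced mechanically. A sketch using the two value lists shown on this card (the ethnicity list here may be truncated relative to the full dataset, and `render` is an illustrative helper, not part of the dataset tooling):

```python
from itertools import product

# Value lists as shown on this card (the full dataset may cover more).
ethnicities = ["American_Indian", "Black"]
genders = ["woman", "man", "non-binary", "no_gender_specified"]

def render(ethnicity: str, gender: str) -> str:
    """Fill the pattern "Photo portrait of a [X] [Y] at work".
    no_gender_specified maps to "person", as noted above."""
    x = ethnicity.replace("_", " ")
    y = "person" if gender == "no_gender_specified" else gender
    return f"Photo portrait of a {x} {y} at work"

prompts = [render(x, y) for x, y in product(ethnicities, genders)]
print(len(prompts), "|", prompts[4])
# 8 | Photo portrait of a Black woman at work
```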
### Supported Tasks
This dataset can be used to evaluate the output space of TTI systems, particularly against the backdrop of societal representativeness.
### Languages
The prompts that generated the images are all in US-English.
## Dataset Structure
The dataset is stored in `parquet` format and contains 2040 rows which can be loaded like so:
```python
from datasets import load_dataset
dataset = load_dataset("tti-bias/identities", split="train")
```
### Data Fields
Each row corresponds to the output of a TTI system and looks as follows:
```python
{
'ethnicity': 'South_Asian',
'gender': 'man',
'no': 1,
'image_path': 'Photo_portrait_of_a_South_Asian_man_at_work/Photo_portrait_of_a_South_Asian_man_at_work_1.jpg',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512>,
'model': 'SD_2'
}
```
### Data Splits
All the data is contained within the `train` split. As such, the dataset contains practically no splits.
## Dataset Creation
### Curation Rationale
This dataset was created to explore the output characteristics of TTI systems from the vantage point of societal characteristics of interest.
### Source Data
#### Initial Data Collection and Normalization
The data was generated using the [DiffusionPipeline]() from Hugging Face:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
images = pipeline(prompt="Photo portrait of an African woman at work", num_images_per_prompt=9).images
```
### Personal and Sensitive Information
Generative models trained on large datasets have been shown to memorize part of their training sets (See e.g.: [(Carlini et al. 2023)](https://arxiv.org/abs/2301.13188)) and the people generated could theoretically bear resemblance to real people.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases and Limitations
At this point in time, the data is limited to images generated using English prompts and a set of professions sourced from the U.S. Bureau of Labor Statistics (BLS), which also provides us with additional information such as the demographic characteristics and salaries of each profession. While this data can also be leveraged in interesting analyses, it is currently limited to the North American context.
## Additional Information
### Licensing Information
The dataset is licensed under the Creative Commons [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
If you use this dataset in your own work, please consider citing:
```json
@article{stable-bias-authors-2023,
author = {Anonymous Authors},
title = {Stable Bias: Analyzing Societal Representations in Diffusion Models},
year = {2023},
}
``` |
Kamtera/Persian-conversational-dataset | 2023-04-04T08:19:27.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:fa",
"license:apache-2.0",
"region:us"
] | Kamtera | persian-conversational-dataset | null | null | 0 | 5 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- fa
pretty_name: persianConversation
---
persianConversation |
tj-solergibert/Europarl-ST | 2023-02-09T10:22:06.000Z | [
"task_categories:translation",
"task_categories:text-to-speech",
"size_categories:100K<n<1M",
"language:es",
"language:de",
"language:en",
"language:fr",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:it",
"license:cc-by-nc-4.0",
"region:us"
] | tj-solergibert | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: original_speech
dtype: string
- name: original_language
dtype: string
- name: audio_path
dtype: string
- name: segment_start
dtype: float32
- name: segment_end
dtype: float32
- name: transcriptions
struct:
- name: de
dtype: string
- name: en
dtype: string
- name: es
dtype: string
- name: fr
dtype: string
- name: it
dtype: string
- name: nl
dtype: string
- name: pl
dtype: string
- name: pt
dtype: string
- name: ro
dtype: string
splits:
- name: train
num_bytes: 147857910
num_examples: 116138
- name: valid
num_bytes: 21318985
num_examples: 17538
- name: test
num_bytes: 22580968
num_examples: 18901
download_size: 109205144
dataset_size: 191757863
task_categories:
- translation
- text-to-speech
language:
- es
- de
- en
- fr
- nl
- pl
- pt
- ro
- it
size_categories:
- 100K<n<1M
license: cc-by-nc-4.0
---
# Dataset Card for "Europarl-ST"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.mllp.upv.es/europarl-st/
- **Paper:** https://ieeexplore.ieee.org/document/9054626
- **Point of Contact:** https://www.mllp.upv.es/
### Dataset Summary
Europarl-ST is a Multilingual Speech Translation Corpus, that contains paired audio-text samples for Speech Translation, constructed using the debates carried out in the European Parliament in the period between 2008 and 2012.
### Languages
Spanish, German, English, French, Dutch, Polish, Portuguese, Romanian, Italian
## Dataset Structure
### Data Fields
- **original_speech:** The original speech that is heard in the recording.
- **original_language:** The language of the audio
- **audio_path:** Path to the audio file
- **segment_start:** Second in which the speech begins
- **segment_end:** Second in which the speech ends
- **transcriptions:** Dictionary containing transcriptions into different languages
### Data Splits
- **train split:** 116138 samples
- **valid split:** 17538 samples
- **test split:** 18901 samples
Train set (hours):
| src/tgt | en | fr | de | it | es | pt | pl | ro | nl |
|---------|----|----|----|----|----|----|----|----|----|
| en | - | 81 | 83 | 80 | 81 | 81 | 79 | 72 | 80 |
| fr | 32 | - | 21 | 20 | 21 | 22 | 20 | 18 | 22 |
| de | 30 | 18 | - | 17 | 18 | 18 | 17 | 17 | 18 |
| it | 37 | 21 | 21 | - | 21 | 21 | 21 | 19 | 20 |
| es | 22 | 14 | 14 | 14 | - | 14 | 13 | 12 | 13 |
| pt | 15 | 10 | 10 | 10 | 10 | - | 9 | 9 | 9 |
| pl | 28 | 18 | 18 | 17 | 18 | 18 | - | 16 | 18 |
| ro | 24 | 12 | 12 | 12 | 12 | 12 | 12 | - | 12 |
| nl | 7 | 5 | 5 | 4 | 5 | 4 | 4 | 4 | - |
Valid/Test sets are all between 3 and 6 hours.
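The fields above are enough to derive per-segment durations and source→target text pairs without decoding any audio. A minimal sketch on a synthetic row (the row values and the audio path are made up; only the field names come from this card):

```python
# Synthetic Europarl-ST row; field names follow the Data Fields section.
row = {
    "original_speech": "Señora Presidenta, ...",
    "original_language": "es",
    "audio_path": "es/audio/session_001.m4a",  # illustrative path
    "segment_start": 12.5,
    "segment_end": 17.0,
    "transcriptions": {"es": "Señora Presidenta, ...",
                       "en": "Madam President, ..."},
}

def to_pair(row: dict, target_lang: str):
    """Return the segment duration in seconds and a (source, target)
    translation pair for the requested target language."""
    duration = row["segment_end"] - row["segment_start"]
    src_text = row["transcriptions"][row["original_language"]]
    tgt_text = row["transcriptions"][target_lang]
    return duration, (src_text, tgt_text)

duration, (src, tgt) = to_pair(row, "en")
print(round(duration, 2), src, "->", tgt)
```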
## Additional Information
### Licensing Information
* The work carried out for constructing the Europarl-ST corpus is released under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0)
* All rights of the data belong to the European Union and respective copyright holders.
### Citation Information
If you use the corpus in your research please cite the following reference:
```
@INPROCEEDINGS{jairsan2020a,
  author={J. {Iranzo-Sánchez} and J. A. {Silvestre-Cerdà} and J. {Jorge} and N. {Roselló} and A. {Giménez} and A. {Sanchis} and J. {Civera} and A. {Juan}},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Europarl-ST: A Multilingual Corpus for Speech Translation of Parliamentary Debates},
  year={2020},
  pages={8229-8233}
}
```
anytp/test2 | 2023-02-09T15:07:07.000Z | [
"region:us"
] | anytp | null | null | null | 0 | 5 | Entry not found |
marianna13/superuser | 2023-02-16T08:17:10.000Z | [
"region:us"
] | marianna13 | null | null | null | 0 | 5 | Entry not found |
jonathan-roberts1/RSD46-WHU | 2023-03-31T14:43:55.000Z | [
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': artificial dense forest land
'3': artificial sparse forest land
'4': bare land
'5': basketball court
'6': blue structured factory building
'7': building
'8': construction site
'9': cross river bridge
'10': crossroads
'11': dense tall building
'12': dock
'13': fish pond
'14': footbridge
'15': graff
'16': grassland
'17': irregular farmland
'18': low scattered building
'19': medium density scattered building
'20': medium density structured building
'21': natural dense forest land
'22': natural sparse forest land
'23': oil tank
'24': overpass
'25': parking lot
'26': plastic greenhouse
'27': playground
'28': railway
'29': red structured factory building
'30': refinery
'31': regular farmland
'32': scattered blue roof factory building
'33': scattered red roof factory building
'34': sewage plant-type-one
'35': sewage plant-type-two
'36': ship
'37': solar power station
'38': sparse residential area
'39': square
'40': steelworks
'41': storage land
'42': tennis court
'43': thermal power plant
'44': vegetable plot
'45': water
splits:
- name: train
num_bytes: 1650045051.96
num_examples: 17516
download_size: 2184490825
dataset_size: 1650045051.96
license: other
---
# Dataset Card for "RSD46-WHU"
## Dataset Description
- **Paper** [Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
- **Paper** [High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
- **Split** Validation
## Split Information
This HuggingFace dataset repository contains just the Validation split.
### Licensing Information
[Free for education, research and commercial use.](https://github.com/RSIA-LIESMARS-WHU/RSD46-WHU)
## Citation Information
[Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
[High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
```
@article{long2017accurate,
title = {Accurate object localization in remote sensing images based on convolutional neural networks},
author = {Long, Yang and Gong, Yiping and Xiao, Zhifeng and Liu, Qing},
year = 2017,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 55,
number = 5,
pages = {2486--2498}
}
@article{xiao2017high,
title = {High-resolution remote sensing image retrieval based on CNNs from a dimensional perspective},
author = {Xiao, Zhifeng and Long, Yang and Li, Deren and Wei, Chunshan and Tang, Gefu and Liu, Junyi},
year = 2017,
journal = {Remote Sensing},
publisher = {MDPI},
volume = 9,
number = 7,
pages = 725
}
``` |
KocLab-Bilkent/turkish-constitutional-court | 2023-02-20T19:53:46.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:tr",
"license:cc-by-4.0",
"region:us"
] | KocLab-Bilkent | null | null | null | 0 | 5 | ---
license: cc-by-4.0
task_categories:
- text-classification
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
language:
- tr
size_categories:
- 10M<n<100M
pretty_name: predicting-turkish-constitutional-court-decisions
source_datasets:
- original
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- **Homepage:**
- **Repository:** https://github.com/koc-lab/law-turk
- **Paper:** https://doi.org/10.1016/j.ipm.2021.102684
- **Point of Contact:** [Ceyhun Emre Öztürk](mailto:ceyhun.ozturk@bilkent.edu.tr)
### Dataset Summary
This dataset is extracted from the following GitHub repo, which was created for the journal paper at https://www.sciencedirect.com/science/article/abs/pii/S0306457321001692:
https://github.com/koc-lab/law-turk
The dataset includes 1290 court case decision texts from the Turkish Constitutional Court. Each sample has one label: the ruling of the court, either "Violation" or "No violation". Of the 1290 samples, 1141 are labeled "Violation".
### Supported Tasks and Leaderboards
Legal Judgment Prediction
### Languages
Turkish
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test) for each configuration.
### Data Fields
The dataset contains the following fields:
- `Text`: Legal case decision texts
- `Label`: The ruling of the court.
- 'Violation': The court decides for the legal case that there is a violation of the constitution.
- 'No violation': The court decides for the legal case that there is no violation of the constitution.
### Data Splits
The data has been split randomly into 70% train (903), 15% validation (195), 15% test (195).
## Dataset Creation
### Curation Rationale
This dataset was created to further research on developing models for predicting Turkish Constitutional Court decisions from case texts.
### Source Data
The data were collected from *Türkiye Cumhuriyeti Anayasa Mahkemesi* (T.C. AYM, Turkish Constitutional Court).
#### Initial Data Collection and Normalization
The data were collected from the official website of the Turkish Constitutional Court: https://www.anayasa.gov.tr/tr/kararlar-bilgi-bankasi/.
#### Who are the source language producers?
The source language producers are judges.
### Annotations
#### Annotation process
The dataset was not annotated.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The court decisions might contain sensitive information about individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
### Dataset Curators
The data collection was done by Emre Mumcuoğlu ([Email](mailto:mumcuoglu@ee.bilkent.edu.tr)).
### Licensing Information
No licensing information was provided for this dataset. However, please make sure that you use the dataset according to
Turkish law.
### Citation Information
```
@article{mumcuoglu21natural,
title = {{Natural language processing in law: Prediction of outcomes in the higher courts of Turkey}},
journal = {Information Processing \& Management},
volume = {58},
number = {5},
year = {2021},
author = {Mumcuoğlu, Emre and Öztürk, Ceyhun E. and Ozaktas, Haldun M. and Koç, Aykut}
}
``` |
Gaborandi/Lung_Cancer_pubmed_abstracts | 2023-02-21T23:20:11.000Z | [
"region:us"
] | Gaborandi | null | null | null | 0 | 5 | - This Dataset has been downloaded from PubMed
- It has abstracts and titles that are related to Lung Cancer
- the data has been cleaned before uploading
- it could be used for any NLP task, such as Domain Adaptation |
vietgpt/xnli_vi | 2023-07-04T05:38:23.000Z | [
"region:us"
] | vietgpt | null | null | null | 1 | 5 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 101417430
num_examples: 392702
- name: test
num_bytes: 1190217
num_examples: 5010
- name: validation
num_bytes: 590680
num_examples: 2490
download_size: 57688285
dataset_size: 103198327
---
# XNLI
- Source: https://huggingface.co/datasets/xnli
- Num examples:
- 392,702 (train)
- 2,490 (validation)
- 5,010 (test)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/xnli_vi")
```
- Format for NLI task
```python
import random
def preprocess(
sample,
sep_key="<|endofprompt|>",
end_key="<|endoftext|>",
):
premise = sample['premise']
hypothesis = sample['hypothesis']
label = sample['label']
template_idx = random.randint(0, 3)
if template_idx == 0:
answer_choices = ["Đúng", "Không kết luận", "Sai"]
return {'text': """Hãy coi những điều sau đây là sự thật: "{premise}"
Vậy phát biểu sau đây: "{hypothesis}" là Đúng hay Sai, hay Không kết luận?
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
elif template_idx == 1:
answer_choices = ["Đúng", "Không kết luận", "Sai"]
return {'text': """{premise}
Câu hỏi: Điều này có nghĩa là "{hypothesis}"? Đúng hay Sai, hay Không kết luận?
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
elif template_idx == 2:
answer_choices = ["Đúng", "Không kết luận", "Sai"]
return {'text': """{premise}
Câu hỏi: {hypothesis} là Đúng hay Sai, hay Không kết luận?
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
elif template_idx == 3:
answer_choices = ["Có", "Có thể", "Không"]  # fixed to match the Vietnamese prompt below
return {'text': """Cho rằng {premise}, nó có tuân theo giả thiết {hypothesis} không? Trả lời Có hay Không, hay Có thể.
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
"""
Cho rằng Bạn biết trong mùa giải và tôi đoán ở mức độ của bạn , bạn sẽ mất chúng đến mức độ tiếp theo nếu họ quyết định nhớ lại đội ngũ cha mẹ các chiến binh quyết định gọi để nhớ lại một người từ ba a sau đó một người đàn ông đi lên đến thay thế anh ta và một người đàn ông nào đó đi lên để thay thế anh ta ., nó có tuân theo giả thiết Anh sẽ mất mọi thứ ở mức độ sau nếu người dân nhớ lại . không? Trả lời Có hay Không, hay Có thể.
<|endofprompt|>
Có
<|endoftext|>
"""
``` |
lansinuote/diffusion.4.text_to_image | 2023-04-07T08:48:17.000Z | [
"region:us"
] | lansinuote | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image
dtype: image
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 119636585.0
num_examples: 833
download_size: 0
dataset_size: 119636585.0
---
# Dataset Card for "diffusion.4.text_to_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
martinms20/eurosat50-land-cover | 2023-02-24T16:30:39.000Z | [
"task_categories:image-classification",
"region:us"
] | martinms20 | null | null | null | 0 | 5 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: klasifikasi-tutupan-lahan
## Dataset Description
This dataset has been automatically processed by AutoTrain for project klasifikasi-tutupan-lahan.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<64x64 RGB PIL image>",
"target": 8
},
{
"image": "<64x64 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial', 'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 400 |
| valid | 100 |
|
jonathan-roberts1/MultiScene | 2023-04-03T16:15:59.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:mit",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
sequence:
class_label:
names:
'0': apron
'1': baseball field
'2': basketball field
'3': beach
'4': bridge
'5': cemetery
'6': commercial
'7': farmland
'8': woodland
'9': golf course
'10': greenhouse
'11': helipad
'12': lake or pond
'13': oil field
'14': orchard
'15': parking lot
'16': park
'17': pier
'18': port
'19': quarry
'20': railway
'21': residential
'22': river
'23': roundabout
'24': runway
'25': soccer
'26': solar panel
'27': sparse shrub
'28': stadium
'29': storage tank
'30': tennis court
'31': train station
'32': wastewater plant
'33': wind turbine
'34': works
'35': sea
splits:
- name: train
num_bytes: 867506522
num_examples: 14000
download_size: 867005851
dataset_size: 867506522
license: mit
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "MultiScene"
## Dataset Description
- **Paper** [MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf)
- **Split** Clean
### Split Information
This HuggingFace dataset repository contains just the 'Clean' split.
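Because `label` is a *sequence* of class labels in the schema above, MultiScene is a multi-label dataset: one aerial image can belong to several of the 36 scene classes at once. A minimal sketch of turning such a label list into a fixed-size multi-hot vector (the usual target format for a multi-label classifier) is shown below; the helper name is illustrative, not part of the dataset.

```python
# MultiScene is multi-label: each sample's `label` is a list of class
# indices (0-35). A common preprocessing step converts that list into a
# fixed-size multi-hot vector. NUM_CLASSES matches the schema above.
NUM_CLASSES = 36

def to_multi_hot(label_ids: list[int]) -> list[int]:
    vec = [0] * NUM_CLASSES
    for i in label_ids:
        vec[i] = 1
    return vec

vec = to_multi_hot([3, 22])  # e.g. an image showing both a beach and a river
print(sum(vec))
```

The same conversion can be applied over a whole split with `datasets.Dataset.map`.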
### Licensing Information
MIT.
## Citation Information
[MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images](https://ieeexplore.ieee.org/iel7/36/4358825/09537917.pdf)
```
@article{hua2021multiscene,
title = {MultiScene: A Large-scale Dataset and Benchmark for Multi-scene Recognition in Single Aerial Images},
author = {Hua, Y. and Mou, L. and Jin, P. and Zhu, X. X.},
year = {in press},
journal = {IEEE Transactions on Geoscience and Remote Sensing}
}
``` |
nishakathiriya/images | 2023-03-02T10:14:13.000Z | [
"task_categories:image-classification",
"size_categories:n<1K",
"language:en",
"region:us"
] | nishakathiriya | null | null | null | 0 | 5 | ---
task_categories:
- image-classification
language:
- en
size_categories:
- n<1K
--- |
Fuminides/blobs_dataset | 2023-03-15T12:24:57.000Z | [
"task_categories:image-classification",
"license:mit",
"region:us"
] | Fuminides | null | null | null | 0 | 5 | ---
license: mit
task_categories:
- image-classification
---
The blob dataset!
-----
This dataset consists of a collection of 100,000 images, each containing randomly generated blobs over a random-noise background. Each image is annotated with its number of blobs and with whether they are large or small.
The task consists of learning, at the same time, a quantitative target (the number of blobs) and a qualitative one (their size).
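For illustration only, the kind of image described above (blobs drawn over a noise background) can be sketched as follows; this is not the dataset's actual generator, and the sizes, intensities, and Gaussian blob shape are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch only: Gaussian blobs over uniform random noise.
# NOT the dataset's actual generator; every parameter here is a guess.
def make_blob_image(n_blobs, size=64, sigma=3.0, rng=None):
    rng = rng or np.random.default_rng(0)
    img = rng.random((size, size))                      # noise background
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_blobs):
        cy, cx = rng.integers(0, size, 2)               # random blob center
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(img, 0.0, 1.0)

img = make_blob_image(n_blobs=3)
print(img.shape, img.max())
```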
IlyaGusev/yandex_q_full | 2023-03-07T20:30:24.000Z | [
"region:us"
] | IlyaGusev | null | null | null | 1 | 5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: id2
dtype: int64
- name: title
dtype: string
- name: text_plain
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: negative_votes
dtype: int32
- name: positive_votes
dtype: int32
- name: quality
dtype: int8
- name: views
dtype: uint64
- name: votes
dtype: int32
- name: approved_answer
dtype: string
- name: timestamp
dtype: uint64
- name: tags
sequence: string
- name: answers
sequence:
- name: id
dtype: string
- name: id2
dtype: int64
- name: text_plain
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: negative_votes
dtype: int32
- name: positive_votes
dtype: int32
- name: votes
dtype: int32
- name: quality
dtype: int8
- name: views
dtype: uint64
- name: reposts
dtype: int32
- name: timestamp
dtype: uint64
splits:
- name: train
num_bytes: 5468460217
num_examples: 1297670
download_size: 1130317937
dataset_size: 5468460217
---
Based on https://huggingface.co/datasets/its5Q/yandex-q, parsed full.jsonl.gz
|
cbasu/Med-EASi | 2023-03-08T18:24:31.000Z | [
"arxiv:2302.09155",
"region:us"
] | cbasu | null | null | null | 0 | 5 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Med-EASi
## Dataset Description
- **Repository:https://github.com/Chandrayee/CTRL-SIMP**
- **Paper:https://arxiv.org/pdf/2302.09155.pdf**
- **Point of Contact:Chandrayee Basu**
### Dataset Summary
Med-EASi (Medical dataset for Elaborative and Abstractive Simplification) is a uniquely crowdsourced and finely annotated dataset for supervised simplification of short medical texts. It contains 1979 expert-simple text pairs in the medical domain, spanning a total of 4478 UMLS concepts across all text pairs. The dataset is annotated with four textual transformations: replacement, elaboration, insertion, and deletion.
### Supported Tasks
The dataset can be used for direct generation of simplified medical text or generation of simplified text along with controllability over individual transformations. Please refer to the paper for more information.
### Languages
English
## Dataset Structure
- **train.csv: 1397 text pairs (5.19 MB)**
- **validation.csv: 197 text pairs (1.5 MB)**
- **test.csv: 300 text pairs (1.19 MB)**
We also provide several metrics per data point, including Levenshtein similarity, SentenceBERT embedding cosine similarity, compression ratio, Flesch-Kincaid readability grade, and automated readability index for each of the expert and simple texts, as well as the UMLS concepts in each of them.
### Data Instances
```
Expert: Some patients have weight loss, rarely enough to become underweight. Anemia, glossitis, angular stomatitis, and aphthous ulcers are usually seen in these patients.
Simple: Some people are undernourished, have mild weight loss and anemia, or have mouth sores and an inflamed tongue.
Annotated: Some <elab>patients<by>people are undernourished,</elab> have <elab>weight loss<by>mild weight loss</elab><del>, rarely enough to become underweight.</del> <rep>Anemia, glossitis, angular stomatitis, and aphthous ulcers<by>and anemia, or have mouth sores and an inflamed tongue</rep><del>usually seen in these patients</del>.
```
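The inline tags in the annotated example (`<rep>`, `<elab>`, `<del>`, with `<by>` separating the expert span from its simple counterpart) can be pulled out with a small regex. The sketch below is illustrative, not the authors' official parser, and it assumes an `<ins>` tag for insertions, which does not appear in the example.

```python
import re

# Minimal sketch of parsing the inline annotation tags shown above.
# Tag names are taken from the example; <ins> is an assumption.
TAG_RE = re.compile(
    r"<(rep|elab)>(.*?)<by>(.*?)</\1>"   # replacement/elaboration: expert <by> simple
    r"|<del>(.*?)</del>"                 # deletion: expert text removed
    r"|<ins>(.*?)</ins>",                # insertion (assumed symmetric to <del>)
    re.DOTALL,
)

def extract_transformations(annotated):
    """Return a list of (operation, expert_span, simple_span) tuples."""
    ops = []
    for m in TAG_RE.finditer(annotated):
        if m.group(1):                   # <rep> or <elab>
            ops.append((m.group(1), m.group(2), m.group(3)))
        elif m.group(4) is not None:     # <del>
            ops.append(("del", m.group(4), ""))
        else:                            # <ins>
            ops.append(("ins", "", m.group(5)))
    return ops

example = ("Some <elab>patients<by>people are undernourished,</elab> have "
           "<del>, rarely enough to become underweight.</del>")
print(extract_transformations(example))
```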
### Data Fields
```
Expert
Simple
Annotation
sim (Levenstein Similarity)
sentence_sim (SentenceBERT embedding cosine similarity)
compression
expert_fk_grade
expert_ari
layman_fk_grade
layman_ari
umls_expert
umls_layman
expert_terms
layman_terms
idx (original data index before shuffling, redundant)
```
### Data Splits
75 % train, 10 % validation and 15 % test
## Dataset Creation
This dataset was created by annotating 1500 SIMPWIKI data points (Van den Bercken, Sips, and Lofi 2019) and all of the MSD (Cao et al. 2020) data points, using expert-layman-AI collaboration for annotation.
### Personal and Sensitive Information
There is no personal or sensitive information in this dataset.
## Considerations for Using the Data
### Discussion of Biases
The dataset contains biomedical and clinical short texts.
### Other Known Limitations
The expert and simple texts in the original datasets were extracted and aligned using automated methods that have their own limitations.
### Citation Information
```
@article{basu2023med,
title={Med-EASi: Finely Annotated Dataset and Models for Controllable Simplification of Medical Texts},
author={Basu, Chandrayee and Vasu, Rosni and Yasunaga, Michihiro and Yang, Qian},
journal={arXiv preprint arXiv:2302.09155},
year={2023}
}
``` |
ontocord/OIG-moderation | 2023-03-10T04:05:57.000Z | [
"license:apache-2.0",
"region:us"
] | ontocord | null | null | null | 22 | 5 | ---
license: apache-2.0
---
# This is the Open Instruction Generalist - Moderation Dataset
This is our attempt to create a diverse dataset of dialogue that may relate to NSFW subject matter, abuse-eliciting text, privacy-violation-eliciting instructions, depression or related content, hate speech, and other similar topics. We use the prosocial and anthropic-redteam datasets and subsets of English Wikipedia, along with other public datasets and data created or contributed by volunteers. To regularize the dataset, we also include "regular" OIG instructions: Q/A instructions, coding instructions, and similar types of queries. Currently there are two versions of the dataset, but more will be created.
- OIG_safety_v0.1.jsonl (66200)
- OIG_safety_v0.2.jsonl (134530)
OIG-moderation includes data from:
OIG-moderation includes data from:
- Public datasets such as anthropic-redteam, anthropic-harmless, and prosocial, plus datasets contributed by community members
- Augmented toxic data, such as Civil Comments data converted into instructions and anthropic-redteam data augmented with prosocial tags
- Data provided by the LAION community that might include NSFW prompts
- Synthetic depression data generated from a public depression bag-of-words dataset using https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis
A model trained on the OIG-moderation dataset can be used to provide moderation labels, and bot providers can then choose to block responses from their chatbots based on these labels. If a bot provider's policy, for example, permits sexual content but prohibits PII-eliciting text, they can hopefully enforce this with the output of a model trained on this data.
The tags consist of (a) Base prosocial tags: casual, possibly needs caution, probably needs caution, needs caution, needs intervention and (b) Additional tags: abuse related, personal information related, sexual content, hate.
An utterance can have more than one tag. For example, a wikipedia article about pornography content might be tagged: needs caution | sexual content.
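Following the `|` convention in the example above, a composite label can be split into its individual tags with a few lines of Python; this sketch is illustrative and not part of the dataset tooling.

```python
# Minimal sketch of splitting a composite moderation label like
# "needs caution | sexual content" into its individual tags.
# The "|" separator follows the example in the card.
BASE_TAGS = {"casual", "possibly needs caution", "probably needs caution",
             "needs caution", "needs intervention"}

def parse_tags(label):
    return [t.strip() for t in label.split("|") if t.strip()]

tags = parse_tags("needs caution | sexual content")
print(tags, [t for t in tags if t in BASE_TAGS])
```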
## Acknowledgement
We would like to thank all the following people for their amazing contributions: @Rallio, @Summer, @Iamiakk, @Jue, @yp_yurilee, @Jjmachan, @Coco.han, @Pszemraj, and many others.
We would like to thank Together.xyz for testing the v0.1 data for effectiveness and their dedication to the open source community.
We would like to thank AI Horde and user @Db0 for their incredible contribution of filtered data that were flagged as unethical.
## Disclaimer
These datasets contain synthetic data and, in some cases, data that includes NSFW subject matter and triggering text, such as toxic, offensive, or trolling content. If you are concerned about the presence of this type of material in the dataset, please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible, and we are actively evaluating ways to help create models that can detect potentially unwanted or problematic instructions or content.
## Risk Factors
While we acknowledge that this dataset can be modified to train a model to generate unsafe text, it is important to release this publicly as a resource for both researchers and those building production agents to train detection models.
|
its5Q/habr_qna | 2023-03-11T04:43:35.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language... | its5Q | null | null | null | 2 | 5 | ---
annotations_creators:
- crowdsourced
language:
- ru
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Habr QnA
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-generation
- question-answering
task_ids:
- language-modeling
- open-domain-qa
---
# Dataset Card for Habr QnA
## Table of Contents
- [Dataset Card for Habr QnA](#dataset-card-for-habr-qna)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
## Dataset Description
- **Repository:** https://github.com/its5Q/habr-qna-parser
### Dataset Summary
This is a dataset of questions and answers scraped from [Habr QnA](https://qna.habr.com/). It contains 723430 questions, together with their answers, comments, and other metadata.
### Languages
The dataset is mostly Russian with source code in different languages.
## Dataset Structure
### Data Fields
Data fields can be previewed on the dataset card page.
### Data Splits
All 723430 examples are in the train split, there is no validation split.
## Dataset Creation
The data was scraped with a script, located in [my GitHub repository](https://github.com/its5Q/habr-qna-parser)
## Additional Information
### Dataset Curators
- https://github.com/its5Q |
anforsm/common_voice_11_clean_tokenized | 2023-03-09T23:53:49.000Z | [
"task_categories:text-to-speech",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"region:us"
] | anforsm | null | null | null | 2 | 5 | ---
license: cc0-1.0
language:
- en
task_categories:
- text-to-speech
- text-generation
pretty_name: Common Voice 11 (en) Cleaned and Tokenized
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1109542776
num_examples: 83274
- name: validation
num_bytes: 17374496
num_examples: 1304
download_size: 197852035
dataset_size: 1126917272
---
A cleaned and tokenized version of the English data from [Mozilla Common Voice 11 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/tree/main).
Cleaning steps:
* Filtered to samples with >2 upvotes and <1 downvotes
* Removed non-voice audio at the start and end using PyTorch VAD
Tokenization:
* Audio tokenized through [EnCodec by Meta](https://github.com/facebookresearch/encodec)
* Using the 24 kHz pre-trained model and a target bandwidth of 1.5 kbps
* Represented in text as audio_token_0 through audio_token_1023
* Prompts constructed as "text: \<common voice transcript\>\naudio: \<audio tokens\>"
* Prompts tokenized with GPT tokenizer with added vocab of audio tokens.
* Tokenized prompts padded to size 1024 with eos_token.
Each sample has 3 properties: input_ids, attention_mask and labels. input_ids and labels are the tokenized prompts and attention_mask is the attention mask. |
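The prompt-construction step described above can be sketched as follows; the exact separator between audio tokens is not stated in the card, so the plain concatenation below is an assumption, as is the helper name.

```python
# Sketch of the prompt construction described above: EnCodec code indices
# (0-1023) become "audio_token_i" strings appended after the transcript.
# The exact token separator is inferred, not confirmed by the card.
def build_prompt(transcript, codes):
    audio = "".join(f"audio_token_{c}" for c in codes)
    return f"text: {transcript}\naudio: {audio}"

prompt = build_prompt("hello world", [17, 512, 1023])
print(prompt)
```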
LangChainDatasets/question-answering-paul-graham | 2023-03-12T01:02:15.000Z | [
"license:mit",
"region:us"
] | LangChainDatasets | null | null | null | 3 | 5 | ---
license: mit
---
|
LLukas22/nq-simplified | 2023-04-30T20:28:17.000Z | [
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:feature-extraction",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | LLukas22 | null | null | null | 0 | 5 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- sentence-similarity
- feature-extraction
language:
- en
---
# Dataset Card for "nq"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions)
### Dataset Summary
This is a modified version of the original Natural Questions (NQ) dataset for QA tasks. The original is available [here](https://ai.google.com/research/NaturalQuestions).
Each sample was preprocessed into a SQuAD-like format: the context was shortened from an entire Wikipedia article to the passage containing the answer.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"context": "The 2017 Major League Baseball All - Star Game was the 88th edition of the Major League Baseball All Star Game. The game was",
"question": "where is the 2017 baseball all-star game being played",
"answers":
{
"text":["Marlins Park"],
"answer_start":[171]
}
}
```
### Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `context`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
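In this SQuAD-like format, each `answer_start` is the character offset of the corresponding answer string inside `context`, so `context[start:start + len(text)] == text` should hold. A small sanity-check sketch, using a made-up sample rather than one from the dataset:

```python
# Sanity-check sketch for the SQuAD-style fields above: each answer_start
# should be the character offset of the answer text inside the context.
def answers_align(sample):
    ctx = sample["context"]
    ans = sample["answers"]
    return all(
        ctx[start:start + len(text)] == text
        for text, start in zip(ans["text"], ans["answer_start"])
    )

sample = {  # hypothetical sample, not taken from the dataset
    "context": "The game was played at Marlins Park in Miami.",
    "question": "where was the game played",
    "answers": {"text": ["Marlins Park"], "answer_start": [23]},
}
print(answers_align(sample))
```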
## Additional Information
### Licensing Information
This dataset is distributed under the cc-by-sa-3.0 license. |
katarinagresova/Genomic_Benchmarks_demo_human_or_worm | 2023-10-04T13:09:13.000Z | [
"region:us"
] | katarinagresova | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 15900000
num_examples: 75000
- name: test
num_bytes: 5300000
num_examples: 25000
download_size: 2380379
dataset_size: 21200000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "Genomic_Benchmarks_demo_human_or_worm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/aws-pm-pilot | 2023-03-20T20:03:11.000Z | [
"license:apache-2.0",
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 5 | ---
license: apache-2.0
---
Pilot annotations for the PM dataset that will be used for RLHF. The dataset uses outputs from open-source models (https://huggingface.co/spaces/HuggingFaceH4/instruction-models-outputs) on a mix of the Anthropic hh-rlhf (https://huggingface.co/datasets/HuggingFaceH4/hh-rlhf) dataset and Self-Instruct's seed (https://huggingface.co/datasets/HuggingFaceH4/self-instruct-seed) dataset.
abhi28577/nennepedia | 2023-06-24T08:27:44.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:openrail",
"region:us"
] | abhi28577 | null | null | null | 0 | 5 | ---
license: openrail
task_categories:
- question-answering
language:
- en
pretty_name: nennepedia
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
cc92yy3344/vegetable | 2023-03-29T12:21:19.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:apache-2.0",
"蔬菜",
"图像分类",
"regi... | cc92yy3344 | null | null | null | 0 | 5 | ---
annotations_creators:
- crowdsourced
language:
- zh
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: "15\u79CD\u852C\u83DC\u6570\u636E\u96C6"
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- "\u852C\u83DC"
- "\u56FE\u50CF\u5206\u7C7B"
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
## Vegetable Image Dataset
### Background
The initial experiment was conducted with 15 common vegetables found throughout the world: bean, bitter gourd, bottle gourd, brinjal (eggplant), broccoli, cabbage, capsicum, carrot, cauliflower, cucumber, papaya, potato, pumpkin, radish, and tomato. A total of 21,000 images from the 15 classes were used, with each class containing 1,400 images of size 224×224 in *.jpg format. 70% of the dataset is used for training, 15% for validation, and 15% for testing.
### Directory layout
This dataset contains three folders:
- train (15,000 images)
- test (3,000 images)
- validation (3,000 images)
### Data collection
The images in this dataset were collected from vegetable farms and markets for one of our projects.
### Generating the metadata files
Run the Python code below to generate three CSV metadata files on the desktop, plus one class-name file (which needs to be placed into the data folder).
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
1. Download the data file Vegetable Images.zip and unzip it to the desktop.
2. Then run python generate.py to generate the three metadata files and one class-name file.
"""
import os
from pathlib import Path
category_dict = {
'Bean': '豆类',
'Bitter_Gourd': '苦瓜',
'Bottle_Gourd': '葫芦',
'Brinjal': '茄子',
'Broccoli': '西兰花',
'Cabbage': '卷心菜',
'Capsicum': '辣椒',
'Carrot': '胡萝卜',
'Cauliflower': '花椰菜',
'Cucumber': '黄瓜',
'Papaya': '木瓜',
'Potato': '土豆',
'Pumpkin': '南瓜',
'Radish': '萝卜',
'Tomato': '番茄',
}
base_path = Path.home().joinpath('desktop')
data = '\n'.join((item for item in category_dict.values()))  # note: relies on dicts preserving insertion order (Python 3.6+)
base_path.joinpath('classname.txt').write_text(data, encoding='utf-8')
def create(filename):
csv_path = base_path.joinpath(f'{filename}.csv')
with csv_path.open('wt', encoding='utf-8', newline='') as csv:
csv.writelines([f'image,category{os.linesep}'])
data_path = base_path.joinpath('Vegetable Images', filename)
batch = 0
datas = []
keys = list(category_dict.keys())
for image_path in data_path.rglob('*.jpg'):
batch += 1
part1 = str(image_path).removeprefix(str(base_path)).replace('\\', '/')[1:]
part2 = keys.index(image_path.parents[0].name)
datas.append(f'{part1},{part2}{os.linesep}')
if batch > 100:
csv.writelines(datas)
datas.clear()
if datas:
csv.writelines(datas)
return csv_path.stat().st_size
if __name__ == '__main__':
print(create('train'))
print(create('test'))
print(create('validation'))
```
### Acknowledgements
Many thanks to the original dataset provider, [Vegetable Image Dataset](https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset).
### Cloning the data
```bash
git clone https://huggingface.co/datasets/cc92yy3344/vegetable.git
``` |
saier/unarXive_imrad_clf | 2023-04-02T00:56:43.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|10.5281/zenodo.7752615",
"language:en",
"license:cc-by-sa-4.0",
"... | saier | null | null | null | 3 | 5 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: unarXive IMRaD classification
size_categories:
- 100K<n<1M
tags:
- arXiv.org
- arXiv
- IMRaD
- publication
- paper
- preprint
- section
- physics
- mathematics
- computer science
- cs
task_categories:
- text-classification
task_ids:
- multi-class-classification
source_datasets:
- extended|10.5281/zenodo.7752615
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 451908280
num_examples: 520053
- name: test
num_bytes: 4650429
num_examples: 5000
- name: validation
num_bytes: 4315597
num_examples: 5001
download_size: 482376743
dataset_size: 460874306
---
# Dataset Card for unarXive IMRaD classification
## Dataset Description
* **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive)
* **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs/2303.14957)
### Dataset Summary
The unarXive IMRaD classification dataset contains 530k paragraphs from computer science papers and the IMRaD section they originate from. The paragraphs are derived from [unarXive](https://github.com/IllDepence/unarXive).
The dataset can be used as follows.
```
from datasets import load_dataset
imrad_data = load_dataset('saier/unarXive_imrad_clf')
imrad_data = imrad_data.class_encode_column('label') # assign target label column
imrad_data = imrad_data.remove_columns('_id') # remove sample ID column
```
## Dataset Structure
### Data Instances
Each data instance contains the paragraph’s text as well as one of the labels ('i', 'm', 'r', 'd', 'w' — for Introduction, Methods, Results, Discussion and Related Work). An example is shown below.
```
{'_id': '789f68e7-a1cc-4072-b07d-ecffc3e7ca38',
'label': 'm',
'text': 'To link the mentions encoded by BERT to the KGE entities, we define '
'an entity linking loss as cross-entropy between self-supervised '
'entity labels and similarities obtained from the linker in KGE '
'space:\n'
'\\(\\mathcal {L}_{EL}=\\sum -\\log \\dfrac{\\exp (h_m^{proj}\\cdot '
'\\textbf {e})}{\\sum _{\\textbf {e}_j\\in \\mathcal {E}} \\exp '
'(h_m^{proj}\\cdot \\textbf {e}_j)}\\) \n'}
```
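Since the labels are single letters, it can be convenient to map them back to full section names when inspecting samples or predictions. A minimal sketch; the mapping simply follows the label description above:

```python
# Map the single-letter IMRaD labels to human-readable section names.
LABEL_NAMES = {
    'i': 'Introduction',
    'm': 'Methods',
    'r': 'Results',
    'd': 'Discussion',
    'w': 'Related Work',
}

def label_name(code):
    """Return the full section name for a single-letter label."""
    return LABEL_NAMES[code]
```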
### Data Splits
The data is split into training, development, and testing data as follows.
* Training: 520,053 instances
* Development: 5000 instances
* Testing: 5001 instances
## Dataset Creation
### Source Data
The paragraph texts are extracted from the data set [unarXive](https://github.com/IllDepence/unarXive).
#### Who are the source language producers?
The paragraphs were written by the authors of the arXiv papers. Author and text licensing information for all samples can be found in the file `license_info.jsonl`. An example is shown below.
```
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
'license': 'http://creativecommons.org/licenses/by/4.0/',
'paper_arxiv_id': '2011.09852',
'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
'18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
'0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
'd85e46cf-b11d-49b6-801b-089aa2dd037d',
'92915cea-17ab-4a98-aad2-417f6cdd53d2',
'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
'4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
'59003494-096f-4a7c-ad65-342b74eed561',
'6a99b3f5-217e-4d3d-a770-693483ef8670']}
```
### Annotations
Class labels were automatically determined ([see implementation](https://github.com/IllDepence/unarXive/blob/master/src/utility_scripts/ml_tasks_prep_data.py)).
## Considerations for Using the Data
### Discussion and Biases
Because only paragraphs unambiguously assignable to one of the IMRaD classes were used, a certain selection bias is to be expected in the data.
### Other Known Limitations
Depending on authors’ writing styles as well as LaTeX processing quirks, paragraphs can vary significantly in length.
## Additional Information
### Licensing information
The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license.
### Citation Information
```
@inproceedings{Saier2023unarXive,
author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}},
booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
year = {2023},
series = {JCDL '23}
}
```
|
Francesco/bccd-ouzjz | 2023-03-30T09:14:05.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': bccd
'1': Platelets
'2': RBC
'3': WBC
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: bccd-ouzjz
tags:
- rf100
---
# Dataset Card for bccd-ouzjz
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/bccd-ouzjz
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
bccd-ouzjz
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
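The COCO-style `bbox` values are `[x_min, y_min, width, height]` in pixels. A small helper for converting them to corner coordinates (e.g. for drawing), sketched here as an illustration rather than part of the dataset itself:

```python
def coco_to_corners(bbox):
    """Convert a COCO-format [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Example with the first box from the data instance above:
corners = coco_to_corners([302.0, 109.0, 73.0, 52.0])
# corners == [302.0, 109.0, 375.0, 161.0]
```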
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/bccd-ouzjz
### Citation Information
```
@misc{ bccd-ouzjz,
title = { bccd ouzjz Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/bccd-ouzjz } },
url = { https://universe.roboflow.com/object-detection/bccd-ouzjz },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Francesco/vehicles-q0x2v | 2023-03-30T09:17:19.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 2 | 5 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': vehicles
'1': big bus
'2': big truck
'3': bus-l-
'4': bus-s-
'5': car
'6': mid truck
'7': small bus
'8': small truck
'9': truck-l-
'10': truck-m-
'11': truck-s-
'12': truck-xl-
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: vehicles-q0x2v
tags:
- rf100
---
# Dataset Card for vehicles-q0x2v
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/vehicles-q0x2v
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
vehicles-q0x2v
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/vehicles-q0x2v
### Citation Information
```
@misc{ vehicles-q0x2v,
title = { vehicles q0x2v Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/vehicles-q0x2v } },
url = { https://universe.roboflow.com/object-detection/vehicles-q0x2v },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Francesco/stomata-cells | 2023-03-30T09:32:34.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': stomata-cells
'1': close
'2': open
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: stomata-cells
tags:
- rf100
---
# Dataset Card for stomata-cells
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/stomata-cells
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
stomata-cells
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/stomata-cells
### Citation Information
```
@misc{ stomata-cells,
title = { stomata cells Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/stomata-cells } },
url = { https://universe.roboflow.com/object-detection/stomata-cells },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
nomic-ai/gpt4all_prompt_generations_with_p3 | 2023-03-30T16:52:36.000Z | [
"license:apache-2.0",
"region:us"
] | nomic-ai | null | null | null | 32 | 5 | ---
license: apache-2.0
---
GPT4All extended training set. The original model was trained on https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations. We filtered out P3 for our final training. See the technical report for details on why: https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf
|
argilla/alpaca-gigo-detector | 2023-04-02T19:40:38.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | argilla | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: _instruction
dtype: string
- name: label
dtype:
class_label:
names:
'0': ALL GOOD
'1': BAD INSTRUCTION
- name: text
dtype: string
splits:
- name: train
num_bytes: 545007
num_examples: 697
- name: test
num_bytes: 58515
num_examples: 78
download_size: 364798
dataset_size: 603522
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "alpaca-gigo-detector"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LEL-A/translated_german_alpaca | 2023-04-10T09:32:34.000Z | [
"region:us"
] | LEL-A | null | null | null | 1 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: _instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: original_id
dtype: int64
- name: translation_model
dtype: string
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 1004916509
num_examples: 51759
download_size: 690637366
dataset_size: 1004916509
---
# Dataset Card for "translated_german_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rjjan/reuters21578 | 2023-04-01T21:32:38.000Z | [
"region:us"
] | rjjan | The Reuters-21578 dataset is one of the most widely used data collections for text
categorization research. It was collected from the Reuters financial newswire service in 1987. | @article{APTE94,
author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Automated Learning of Decision Rules for Text Categorization},
journal = {ACM Transactions on Information Systems},
year = {1994},
note = {To appear.}
}
@inproceedings{APTE94b,
author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Toward Language Independent Automated Learning of Text Categorization Models},
booktitle = {sigir94},
year = {1994},
note = {To appear.}
}
@inproceedings{HAYES90,
author = {Philip J. Hayes and Peggy M. Anderson and Irene B. Nirenburg and
Linda M. Schmandt},
title = {{TCS}: A Shell for Content-Based Text Categorization},
booktitle = {IEEE Conference on Artificial Intelligence Applications},
year = {1990}
}
@inproceedings{HAYES90b,
author = {Philip J. Hayes and Steven P. Weinstein},
title = {{CONSTRUE/TIS:} A System for Content-Based Indexing of a
Database of News Stories},
booktitle = {Second Annual Conference on Innovative Applications of
Artificial Intelligence},
year = {1990}
}
@incollection{HAYES92 ,
author = {Philip J. Hayes},
title = {Intelligent High-Volume Text Processing using Shallow,
Domain-Specific Techniques},
booktitle = {Text-Based Intelligent Systems},
publisher = {Lawrence Erlbaum},
address = {Hillsdale, NJ},
year = {1992},
editor = {Paul S. Jacobs}
}
@inproceedings{LEWIS91c ,
author = {David D. Lewis},
title = {Evaluating Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1991},
month = {feb},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {312--318}
}
@phdthesis{LEWIS91d,
author = {David Dolan Lewis},
title = {Representation and Learning in Information Retrieval},
school = {Computer Science Dept.; Univ. of Massachusetts; Amherst, MA 01003},
year = {1992},
note = {Technical Report 91--93.}
}
@inproceedings{LEWIS91e,
author = {David D. Lewis},
title = {Data Extraction as Text Categorization: An Experiment with
the {MUC-3} Corpus},
booktitle = {Proceedings of the Third Message Understanding Evaluation
and Conference},
year = {1991},
month = {may},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92b,
author = {David D. Lewis},
title = {An Evaluation of Phrasal and Clustered Representations on a Text
Categorization Task},
booktitle = {Fifteenth Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval},
year = {1992},
pages = {37--50}
}
@inproceedings{LEWIS92d ,
author = {David D. Lewis and Richard M. Tong},
title = {Text Filtering in {MUC-3} and {MUC-4}},
booktitle = {Proceedings of the Fourth Message Understanding Conference ({MUC-4})},
year = {1992},
month = {jun},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92e,
author = {David D. Lewis},
title = {Feature Selection and Feature Extraction for Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1992},
month = {feb} ,
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {212--217}
}
@inproceedings{LEWIS94b,
author = {David D. Lewis and Marc Ringuette},
title = {A Comparison of Two Learning Algorithms for Text Categorization},
booktitle = {Symposium on Document Analysis and Information Retrieval},
year = {1994},
organization = {ISRI; Univ. of Nevada, Las Vegas},
address = {Las Vegas, NV},
month = {apr},
pages = {81--93}
}
@article{LEWIS94d,
author = {David D. Lewis and Philip J. Hayes},
title = {Guest Editorial},
journal = {ACM Transactions on Information Systems},
year = {1994},
volume = {12},
number = {3},
pages = {231},
month = {jul}
}
@article{SPARCKJONES76,
author = {K. {Sparck Jones} and C. J. {van Rijsbergen}},
title = {Information Retrieval Test Collections},
journal = {Journal of Documentation},
year = {1976},
volume = {32},
number = {1},
pages = {59--75}
}
@book{WEISS91,
author = {Sholom M. Weiss and Casimir A. Kulikowski},
title = {Computer Systems That Learn},
publisher = {Morgan Kaufmann},
year = {1991},
address = {San Mateo, CA}
} | null | 0 | 5 | Entry not found |
RyokoAI/Honeyfeed3600 | 2023-04-05T01:01:36.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"novel",
"training",
"story",
"region:us"
] | RyokoAI | null | null | null | 2 | 5 | ---
license: apache-2.0
language:
- en
tags:
- novel
- training
- story
task_categories:
- text-classification
- text-generation
pretty_name: Honeyfeed3600
size_categories:
- 1K<n<10K
---
# Dataset Card for Honeyfeed3600
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
Honeyfeed3600 is a dataset consisting of text from over 38,000 chapters across approximately 3,600 series posted on the
English-language web novel site [Honeyfeed](https://www.honeyfeed.fm).
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* English
## Dataset Structure
### Data Instances
```json
{
"text": "Dark, black, nothingness. There are so many ways to describe that hole, but nothing would get me down there...",
"meta": {
"subset": "honeyfeed",
"themes": [],
"my_themes": [],
"prompt": "",
"author": "Lucianael",
"novel": "10009",
"id": "55686",
"title": "13 Steps - 13 Steps",
"likes": 4,
"views": 21,
"q": 0.5999999999999999
}
}
```
### Data Fields
* `text`: the actual chapter text
* `meta`: novel and chapter metadata
* `subset`: dataset tag: `honeyfeed`
* `lang`: dataset language: `en` (English)
* `themes`: array of novel themes
* `my_themes`: array of additional novel themes
* `prompt`: writing prompt
* `author`: author name
* `novel`: novel ID
* `id`: chapter ID
* `title`: novel and chapter title in the form `<chapter title> - <novel title>`
* `likes`: novel like count
* `views`: novel view count
* `q`: q-score (quality score)
#### Q-Score Distribution
```
0.00: 499
0.10: 420
0.20: 2562
0.30: 0
0.40: 0
0.50: 13344
0.60: 9021
0.70: 5997
0.80: 4217
0.90: 1931
1.00: 801
```
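The `q` field makes it easy to keep only higher-quality chapters. A minimal sketch, assuming the data is distributed as JSON Lines (one object per line) shaped like the instance above; the threshold value is illustrative:

```python
import json

def filter_by_q(lines, threshold=0.7):
    """Yield parsed samples whose quality score meets the threshold."""
    for line in lines:
        sample = json.loads(line)
        if sample['meta']['q'] >= threshold:
            yield sample

# Example with an in-memory record shaped like the instance above:
records = ['{"text": "...", "meta": {"q": 0.8}}']
kept = list(filter_by_q(records))
# kept holds the one sample, since 0.8 >= 0.7
```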
### Data Splits
No splitting of the data was performed.
## Dataset Creation
### Curation Rationale
TODO
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Chapter and novel titles were scraped alongside chapter text.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect
the biases of those authors. Beware of stereotypes.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset. |
tanmaykm/indian_dance_forms | 2023-04-03T13:26:12.000Z | [
"task_categories:image-classification",
"size_categories:n<1K",
"license:apache-2.0",
"art",
"region:us"
] | tanmaykm | null | null | null | 0 | 5 | ---
license: apache-2.0
task_categories:
- image-classification
tags:
- art
pretty_name: Indian Dance Forms
size_categories:
- n<1K
---
This dataset is taken from https://www.kaggle.com/datasets/aditya48/indian-dance-form-classification but is originally from the Hackerearth deep learning contest of identifying Indian dance forms. All the credits of dataset goes to them.
### Content
The dataset consists of 599 images belonging to 8 categories: manipuri, bharatanatyam, odissi, kathakali, kathak, sattriya, kuchipudi, and mohiniyattam. The original dataset was quite unstructured, with all the images put together; I have organized the images into their respective category directories so that preparing training data becomes easier.
### Acknowledgements
- https://www.hackerearth.com/challenges/competitive/hackerearth-deep-learning-challenge-identify-dance-form/
- https://www.kaggle.com/datasets/aditya48/indian-dance-form-classification |
teelinsan/camoscio | 2023-04-02T20:18:52.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:it",
"license:openrail",
"llama",
"instruction-tuning",
"region:us"
] | teelinsan | null | null | null | 1 | 5 | ---
license: openrail
task_categories:
- conversational
language:
- it
tags:
- llama
- instruction-tuning
size_categories:
- 10K<n<100K
---
# Camoscio instruction-tuning dataset
This repository contains the dataset used to train [Camoscio](https://huggingface.co/teelinsan/camoscio-7b-llama).
This dataset is an Italian translation of the [Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca), produced with ChatGPT.
Please refer to the [Camoscio repo](https://github.com/teelinsan/camoscio) for more info.
|
anon8231489123/Omegle_logs_dataset | 2023-04-02T23:34:21.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | anon8231489123 | null | null | null | 6 | 5 | ---
license: apache-2.0
language:
- en
---
~10k conversations from Omegle. Scraped using: http://web.archive.org/cdx/search/cdx?url=logs.omegle.com/*&fl=timestamp,original,statuscode&output=json. For these logs to have ended up in the CDX index, the URL must have been posted publicly at some point.
* PII removed by searching for conversations with these words: forbidden_words = ["kik", "telegram", "skype", "wickr", "discord", "dropbox", "insta ", "insta?", "instagram", "snap ", "snapchat"].
* Conversations with racial slurs removed.
* English only.
* Obviously, the dataset still contains a lot of (sometimes extreme) NSFW content. Do not view or use this dataset if you are under 18.
General process for scraping (There are probably other datasets that can be scraped using this method):
1. Go to page in archive.org cdx
2. Check if the page contains a log
3. Download the log image
4. Use OCR to read it
5. Save it to a json file.
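A rough sketch of steps 1-5. The actual page download and OCR (e.g. with `requests` and `pytesseract`) are omitted, and the helper names are illustrative, not taken from the original scraper:

```python
import json

CDX_API = "http://web.archive.org/cdx/search/cdx"

def cdx_query_url(url_pattern):
    # Step 1: build the CDX search URL; fields match the query on the card.
    return (f"{CDX_API}?url={url_pattern}"
            "&fl=timestamp,original,statuscode&output=json")

def parse_cdx(payload):
    # CDX JSON output: first row is the header, remaining rows are captures.
    rows = json.loads(payload)
    header, captures = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in captures]

def snapshot_url(timestamp, original):
    # Step 3: Wayback snapshot URL; the 'if_' modifier serves the raw file
    # without the Wayback banner, which is what an OCR step would want.
    return f"http://web.archive.org/web/{timestamp}if_/{original}"
```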
This dataset could be useful for training casual conversational AI's but it likely still requires more filtering. Use at your own risk. |