id
stringlengths
2
115
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]
downloads
int64
0
8.87M
likes
int64
0
3.84k
paperswithcode_id
stringlengths
2
45
tags
list
lastModified
timestamp[us, tz=UTC]
createdAt
stringlengths
24
24
key
stringclasses
1 value
created
timestamp[us]
card
stringlengths
1
1.01M
embedding
list
library_name
stringclasses
21 values
pipeline_tag
stringclasses
27 values
mask_token
null
card_data
null
widget_data
null
model_index
null
config
null
transformers_info
null
spaces
null
safetensors
null
transformersInfo
null
modelId
stringlengths
5
111
embeddings
list
huggan/few-shot-panda
huggan
2022-04-12T14:06:07Z
19
0
null
[ "arxiv:2101.04775", "region:us" ]
2022-04-12T14:06:07Z
2022-04-01T11:37:01.000Z
2022-04-01T11:37:01
# Citation ``` @article{DBLP:journals/corr/abs-2101-04775, author = {Bingchen Liu and Yizhe Zhu and Kunpeng Song and Ahmed Elgammal}, title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot Image Synthesis}, journal = {CoRR}, volume = {abs/2101.04775}, year = {2021}, url = {https://arxiv.org/abs/2101.04775}, eprinttype = {arXiv}, eprint = {2101.04775}, timestamp = {Fri, 22 Jan 2021 15:16:00 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ -0.5524431467056274, -0.8028348684310913, 0.018525348976254463, 0.33572760224342346, -0.09379876405000687, -0.17921073734760284, -0.08067686855792999, -0.28826087713241577, 0.07932980358600616, -0.041977155953645706, -0.35484322905540466, -0.3427697718143463, -0.3939037621021271, 0.0571840...
null
null
null
null
null
null
null
null
null
null
null
null
null
huggan/few-shot-anime-face
huggan
2022-04-12T14:08:09Z
19
0
null
[ "arxiv:2101.04775", "region:us" ]
2022-04-12T14:08:09Z
2022-04-01T11:42:03.000Z
2022-04-01T11:42:03
# Citation ``` @article{DBLP:journals/corr/abs-2101-04775, author = {Bingchen Liu and Yizhe Zhu and Kunpeng Song and Ahmed Elgammal}, title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot Image Synthesis}, journal = {CoRR}, volume = {abs/2101.04775}, year = {2021}, url = {https://arxiv.org/abs/2101.04775}, eprinttype = {arXiv}, eprint = {2101.04775}, timestamp = {Fri, 22 Jan 2021 15:16:00 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ -0.5524431467056274, -0.8028348684310913, 0.018525348976254463, 0.33572760224342346, -0.09379876405000687, -0.17921073734760284, -0.08067686855792999, -0.28826087713241577, 0.07932980358600616, -0.041977155953645706, -0.35484322905540466, -0.3427697718143463, -0.3939037621021271, 0.0571840...
null
null
null
null
null
null
null
null
null
null
null
null
null
huggingartists/olga-buzova
huggingartists
2022-10-25T10:03:54Z
19
0
null
[ "language:en", "huggingartists", "lyrics", "region:us" ]
2022-10-25T10:03:54Z
2022-04-04T11:18:31.000Z
2022-04-04T11:18:31
--- language: - en tags: - huggingartists - lyrics models: - huggingartists/olga-buzova --- # Dataset Card for "huggingartists/olga-buzova" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.164278 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: 
url(&#39;https://images.genius.com/efacbc8bb2d22ab78e494539bba61b3e.1000x1000x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/olga-buzova"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">Ольга Бузова (Olga Buzova)</div> <a href="https://genius.com/artists/olga-buzova"> <div style="text-align: center; font-size: 14px;">@olga-buzova</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/olga-buzova). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/olga-buzova") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |66| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/olga-buzova") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk}, year={2022} } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
[ -0.5842279195785522, -0.525180995464325, 0.10655505210161209, 0.2774873375892639, -0.2707178592681885, 0.009139029309153557, -0.34180307388305664, -0.4672152101993561, 0.8015654683113098, 0.33992648124694824, -0.8994832634925842, -0.884740948677063, -0.5254163146018982, 0.14888431131839752...
null
null
null
null
null
null
null
null
null
null
null
null
null
huggan/sim2real_gta5_to_cityscapes
huggan
2022-04-05T20:21:11Z
19
0
null
[ "region:us" ]
2022-04-05T20:21:11Z
2022-04-05T16:36:44.000Z
2022-04-05T16:36:44
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
StanBienaives/french-open-fiscal-texts
StanBienaives
2022-10-25T10:03:56Z
19
0
null
[ "task_categories:summarization", "task_categories:feature-extraction", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "license:cc0-1.0", "region:us" ]
2022-10-25T10:03:56Z
2022-04-06T11:42:06.000Z
2022-04-06T11:42:06
--- annotations_creators: - no-annotation language_creators: - other language: - fr-FR license: - cc0-1.0 multilinguality: - monolingual pretty_name: french-open-fiscal-texts size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization - feature-extraction task_ids: [] --- # Dataset Card for french-open-fiscal-texts ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://echanges.dila.gouv.fr/OPENDATA/JADE/ - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court "Conseil d'Etat". 
### Supported Tasks and Leaderboards [Needs More Information] ### Languages fr-FR ## Dataset Structure ### Data Instances ```json { "file": "CETATEXT000007584427.xml", "title": "Cour administrative d'appel de Marseille, 3ème chambre - formation à 3, du 21 octobre 2004, 00MA01080, inédit au recueil Lebon", "summary": "", "content": "Vu la requête, enregistrée le 22 mai 2000, présentée pour M. Roger X, par Me Luherne, élisant domicile ...), et les mémoires complémentaires en date des 28 octobre 2002, 22 mars 2004 et 16 septembre 2004 ; M. X demande à la Cour :\n\n\n \n 11/ d'annuler le jugement n° 951520 en date du 16 mars 2000 par lequel le Tribunal administratif de Montpellier a rejeté sa requête tendant à la réduction des cotisations supplémentaires à l'impôt sur le revenu et des pénalités dont elles ont été assorties, auxquelles il a été assujetti au titre des années 1990, 1991 et 1992 ;\n\n\n \n 22/ de prononcer la réduction desdites cotisations ;\n\n\n \n 3°/ de condamner de l'Etat à lui verser une somme de 32.278 francs soit 4.920,75 euros" } ``` ### Data Fields `file`: identifier on the JADE OPENDATA file `title`: Name of the law case `summary`: Summary provided by JADE (may be missing) `content`: Text content of the case law ### Data Splits train test ## Dataset Creation ### Curation Rationale This dataset is an attempt to gather multiple tax-related French legal texts. The first intent is to build a model to summarize law cases. ### Source Data #### Initial Data Collection and Normalization Collected from https://echanges.dila.gouv.fr/OPENDATA/ - Filtering xml files containing "Code général des impôts" (tax related) - Extracting content, summary, identifier, title #### Who are the source language producers? DILA ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators?
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
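The collection steps described in the card above (filter XML files mentioning the tax code, extract title/summary/content) can be sketched with the standard library. The tag names `TITRE`, `SOMMAIRE`, and `CONTENU` are hypothetical placeholders — the card does not document the JADE XML schema:

```python
import xml.etree.ElementTree as ET


def extract_case(xml_string: str) -> dict:
    # Tag names below are hypothetical: the JADE schema is not given in the card.
    root = ET.fromstring(xml_string)

    def text_of(tag: str) -> str:
        el = root.find(f".//{tag}")
        return (el.text or "").strip() if el is not None else ""

    return {
        "title": text_of("TITRE"),
        "summary": text_of("SOMMAIRE"),  # may be empty, as the card notes
        "content": text_of("CONTENU"),
    }


def is_tax_related(xml_string: str) -> bool:
    # Keep only files mentioning the French general tax code.
    return "Code général des impôts" in xml_string
```

This is a sketch of the filtering logic only; the real pipeline would also walk the OPENDATA archive and record the file identifier.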
[ -0.31385868787765503, -0.4943047761917114, 0.2767051160335541, 0.1704167127609253, -0.383076012134552, -0.18388152122497559, -0.4396180510520935, -0.0964573547244072, 0.4115923345088959, 0.9410891532897949, -0.5106325149536133, -0.9467443227767944, -0.5421289205551147, 0.02963765151798725,...
null
null
null
null
null
null
null
null
null
null
null
null
null
lm233/humor_train
lm233
2022-04-08T18:13:45Z
19
1
null
[ "region:us" ]
2022-04-08T18:13:45Z
2022-04-08T18:10:37.000Z
2022-04-08T18:10:37
annotations_creators: [] language_creators: [] languages: [] licenses: [] multilinguality: [] pretty_name: humor_train size_categories: [] source_datasets: [] task_categories: [] task_ids: []
[ -0.34765681624412537, -0.12280556559562683, 0.10958874225616455, 0.6954052448272705, -0.5493748784065247, 0.31480473279953003, -0.3173598647117615, -0.1481558084487915, 0.7121341228485107, 0.688444972038269, -0.4745329022407532, -0.6323171854019165, -0.8348629474639893, 0.5207300782203674,...
null
null
null
null
null
null
null
null
null
null
null
null
null
enimai/MuST-C-de
enimai
2022-04-11T08:25:26Z
19
0
null
[ "license:afl-3.0", "region:us" ]
2022-04-11T08:25:26Z
2022-04-11T08:23:21.000Z
2022-04-11T08:23:21
--- license: afl-3.0 ---
[ -0.1285335123538971, -0.1861683875322342, 0.6529128551483154, 0.49436232447624207, -0.19319400191307068, 0.23607441782951355, 0.36072009801864624, 0.05056373029947281, 0.5793656706809998, 0.7400146722793579, -0.650810182094574, -0.23784008622169495, -0.7102247476577759, -0.0478255338966846...
null
null
null
null
null
null
null
null
null
null
null
null
null
AntoineLB/Frozen-lake-dataset
AntoineLB
2022-04-21T12:16:39Z
19
0
null
[ "region:us" ]
2022-04-21T12:16:39Z
2022-04-11T12:55:48.000Z
2022-04-11T12:55:48
# Dataset Card for [FrozenLake-v1]
[ -0.4780793786048889, -0.046031396836042404, -0.22062666714191437, 0.011778175830841064, -0.9831834435462952, 0.15529169142246246, 0.6120204329490662, 0.32096734642982483, 0.3722088038921356, 0.7123598456382751, -0.8575986623764038, -0.8552755117416382, -0.3853550851345062, -0.2931001484394...
null
null
null
null
null
null
null
null
null
null
null
null
null
taln-ls2n/taln-archives
taln-ls2n
2022-09-23T07:58:07Z
19
3
null
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:multilingual", "size_categories:1K<n<10K", "language:fr", "language:en", "license:cc-by-4.0", "region:us" ]
2022-09-23T07:58:07Z
2022-04-19T13:45:33.000Z
2022-04-19T13:45:33
--- annotations_creators: - unknown language_creators: - unknown language: - fr - en license: - cc-by-4.0 multilinguality: - multilingual task_categories: - text-mining - text-generation task_ids: - keyphrase-generation - keyphrase-extraction size_categories: - 1K<n<10K pretty_name: TALN-Archives --- # TALN-Archives Benchmark Dataset for Keyphrase Generation ## About TALN-Archives is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 1207 abstracts of scientific papers in French collected from the [TALN Archives](http://talnarchives.atala.org/). Keyphrases were annotated by authors in an uncontrolled setting (that is, not limited to thesaurus entries). English translations of title/abstract/keyphrases are also available for a subset of the documents (456 fully- and 719 partially-translated documents), allowing one to experiment with cross-lingual / multilingual keyphrase generation. Details about the dataset can be found in the original paper [(Boudin, 2013)][boudin-2013]. Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract. Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (the Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text. Details about the process can be found in `prmu.py`.
## Content and statistics The dataset contains the following test split: | Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen | | :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: | | Test | 1207 | 138.3 | 4.12 | 53.83 | 12.32 | 21.69 | 12.16 | The following data fields are available : - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. - **translation**: translations of title, abstract and keyphrases in English if available. ## References - (Boudin, 2013) Florian Boudin. 2013. [TALN Archives : a digital archive of French research articles in Natural Language Processing (TALN Archives : une archive numérique francophone des articles de recherche en Traitement Automatique de la Langue) [in French]][boudin-2013]. In Proceedings of TALN 2013 (Volume 2: Short Papers), pages 507–514, Les Sables d’Olonne, France. ATALA. - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021]. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. [boudin-2013]: https://aclanthology.org/F13-2001/ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
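The PRMU scheme described above can be sketched as follows. This toy version uses whitespace tokenization and no stemming, whereas the dataset itself relies on `spacy` tokenization and the Snowball stemmer (see `prmu.py` for the actual process):

```python
def prmu_category(keyphrase: str, text: str) -> str:
    """Toy Present/Reordered/Mixed/Unseen classifier (no stemming)."""
    kp = keyphrase.lower().split()
    toks = text.lower().split()
    # Present: the keyphrase occurs contiguously, in order, in the text.
    for i in range(len(toks) - len(kp) + 1):
        if toks[i:i + len(kp)] == kp:
            return "Present"
    seen = sum(w in toks for w in kp)
    if seen == len(kp):
        return "Reordered"  # all words appear, but not contiguously in order
    if seen > 0:
        return "Mixed"      # some words appear in the text, some do not
    return "Unseen"         # no word of the keyphrase appears
```

For example, against the text "a graph based ranking model", the keyphrase "graph based" is Present, "based graph" is Reordered, "graph neural" is Mixed, and "deep learning" is Unseen.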
[ -0.2648768424987793, -0.522236168384552, 0.3308601975440979, 0.24147672951221466, -0.46357056498527527, 0.1400325894355774, -0.22523102164268494, -0.12549251317977905, 0.20237652957439423, 0.3322465419769287, -0.5131711363792419, -0.8786335587501526, -0.6483403444290161, 0.7709382176399231...
null
null
null
null
null
null
null
null
null
null
null
null
null
mwong/climatetext-evidence-claim-pair-related-evaluation
mwong
2022-10-25T10:08:53Z
19
1
null
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|climate_text", "language:en", "license:cc-by-sa-3.0", "license:gpl-3.0", "...
2022-10-25T10:08:53Z
2022-04-21T10:16:15.000Z
2022-04-21T10:16:15
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-sa-3.0 - gpl-3.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|climate_text task_categories: - text-classification task_ids: - fact-checking --- ### Dataset Summary This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate. The evaluation objective is a text classification task: given a climate-related evidence and claim pair, predict whether the pair is related.
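A minimal sketch of the pair-classification setup the card describes, using a hypothetical lexical-overlap baseline (the function name and threshold are assumptions, not part of the dataset):

```python
def related_baseline(evidence: str, claim: str, threshold: int = 2) -> int:
    # Hypothetical baseline: predict "related" (1) when the evidence and
    # the claim share at least `threshold` surface words.
    ev = set(evidence.lower().split())
    cl = set(claim.lower().split())
    return int(len(ev & cl) >= threshold)
```

A trained classifier evaluated on this dataset would replace this baseline; the sketch only illustrates the input/output shape of the task.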
[ -0.14341400563716888, -0.48722925782203674, 0.35009831190109253, 0.14013230800628662, -0.26145139336586, -0.1236497163772583, -0.16305267810821533, -0.3674857020378113, 0.05380818992853165, 0.8949097394943237, -0.5720857381820679, -0.6110986471176147, -0.7051115036010742, 0.113102778792381...
null
null
null
null
null
null
null
null
null
null
null
null
null
h4iku/coconut_javascript2010_preprocessed
h4iku
2022-04-21T20:34:35Z
19
0
null
[ "region:us" ]
2022-04-21T20:34:35Z
2022-04-21T20:05:05.000Z
2022-04-21T20:05:05
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
deancgarcia/Diversity
deancgarcia
2022-12-08T00:16:35Z
19
0
null
[ "region:us" ]
2022-12-08T00:16:35Z
2022-04-22T16:55:24.000Z
2022-04-22T16:55:24
[Needs More Information] # Dataset Card for dei_article_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Diversity, Equity and Inclusion related article title, content, URL, sentiment and basis. Basis is a term I use to describe the underlying topic related to diversity. I have four at the moment: 1 = Gender, 2 = Race, 3 = Disability, and 4 = Other. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields ID Title Content Basis URL Sentiment ### Data Splits train validate ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers?
[Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
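The basis coding described in the card's summary can be captured with a small lookup. A sketch — the codes come from the card, while the `Unknown` fallback is an assumption:

```python
# Basis codes as documented in the dataset summary above.
BASIS = {1: "Gender", 2: "Race", 3: "Disability", 4: "Other"}


def decode_basis(code: int) -> str:
    # Fall back for codes the card does not document (an assumption).
    return BASIS.get(code, "Unknown")
```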
[ -0.8614344596862793, -0.4259754419326782, 0.15150775015354156, 0.29598408937454224, -0.4285370111465454, 0.2588255703449249, -0.24083390831947327, -0.23605039715766907, 0.6688933372497559, 0.5489088892936707, -0.8651399612426758, -1.0632846355438232, -0.7409915328025818, 0.2194677293300628...
null
null
null
null
null
null
null
null
null
null
null
null
null
pietrolesci/gpt3_nli
pietrolesci
2022-04-25T10:17:45Z
19
2
null
[ "region:us" ]
2022-04-25T10:17:45Z
2022-04-25T09:49:23.000Z
2022-04-25T09:49:23
## Overview Original dataset available [here](https://github.com/krandiash/gpt3-nli). Debiased dataset generated with GPT-3. ## Dataset curation All string columns are stripped. Labels are encoded with the following mapping ``` {"entailment": 0, "neutral": 1, "contradiction": 2} ``` ## Code to create the dataset ```python import pandas as pd from datasets import Dataset, ClassLabel, Value, Features import json # load data with open("data/dataset.jsonl", "r") as fl: df = pd.DataFrame([json.loads(line) for line in fl]) df.columns = df.columns.str.strip() # fix dtypes df["guid"] = df["guid"].astype(int) for col in df.select_dtypes(object): df[col] = df[col].str.strip() # encode labels df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) # cast to dataset features = Features( { "text_a": Value(dtype="string"), "text_b": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), "guid": Value(dtype="int64"), } ) ds = Dataset.from_pandas(df, features=features) ds.push_to_hub("pietrolesci/gpt3_nli", token="<token>") ```
[ -0.3134404718875885, -0.7848435640335083, 0.44208571314811707, 0.1748346984386444, -0.2815070152282715, -0.059707120060920715, -0.17855416238307953, -0.12878136336803436, 0.1081581562757492, 0.6127507090568542, -0.23287001252174377, -0.8897314667701721, -0.6923072934150696, 0.3465908169746...
null
null
null
null
null
null
null
null
null
null
null
null
null
Filippo/osdg_cd
Filippo
2023-10-08T09:57:13Z
19
1
null
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "region:us" ]
2023-10-08T09:57:13Z
2022-04-30T21:54:04.000Z
2022-04-30T21:54:04
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K task_categories: - text-classification task_ids: - natural-language-inference pretty_name: OSDG Community Dataset (OSDG-CD) dataset_info: config_name: main_config features: - name: doi dtype: string - name: text_id dtype: string - name: text dtype: string - name: sdg dtype: uint16 - name: label dtype: class_label: names: '0': SDG 1 '1': SDG 2 '2': SDG 3 '3': SDG 4 '4': SDG 5 '5': SDG 6 '6': SDG 7 '7': SDG 8 '8': SDG 9 '9': SDG 10 '10': SDG 11 '11': SDG 12 '12': SDG 13 '13': SDG 14 '14': SDG 15 '15': SDG 16 - name: labels_negative dtype: uint16 - name: labels_positive dtype: uint16 - name: agreement dtype: float32 splits: - name: train num_bytes: 30151244 num_examples: 42355 download_size: 29770590 dataset_size: 30151244 --- # Dataset Card for OSDG-CD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [OSDG-CD 
homepage](https://zenodo.org/record/8397907) ### Dataset Summary The OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, which were validated by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries, with respect to the Sustainable Development Goals (SDGs). > NOTES > > * There are currently no examples for SDGs 16 and 17. See [this GitHub issue](https://github.com/osdg-ai/osdg-data/issues/3). > * As of July 2023, there are also examples for SDG 16. ### Supported Tasks and Leaderboards TBD ### Languages The language of the dataset is English. ## Dataset Structure ### Data Instances For each instance, there is a string for the text, a string for the SDG, and an integer for the label. ``` {'text': 'Each section states the economic principle, reviews international good practice and discusses the situation in Brazil.', 'label': 5} ``` The average token counts for the premises and hypotheses are given below: | Feature | Mean Token Count | | ---------- | ---------------- | | Premise | 14.1 | | Hypothesis | 8.3 | ### Data Fields - `doi`: Digital Object Identifier of the original document - `text_id`: unique text identifier - `text`: text excerpt from the document - `sdg`: the SDG the text is validated against - `label`: an integer from `0` to `15` which corresponds to the `sdg` field - `labels_negative`: the number of volunteers who rejected the suggested SDG label - `labels_positive`: the number of volunteers who accepted the suggested SDG label - `agreement`: agreement score based on the formula ### Data Splits The OSDG-CD dataset has one split: _train_. | Dataset Split | Number of Instances in Split | | ------------- |----------------------------- | | Train | 32,327 | ## Dataset Creation ### Curation Rationale The [OSDG Community Dataset (OSDG-CD)](https://zenodo.org/record/8397907) was developed as a benchmark for ...
with the goal of producing a dataset large enough to train models using neural methodologies. ### Source Data #### Initial Data Collection and Normalization TBD #### Who are the source language producers? TBD ### Annotations #### Annotation process TBD #### Who are the annotators? TBD ### Personal and Sensitive Information The dataset does not contain any personal information about the authors or the crowdworkers. ## Considerations for Using the Data ### Social Impact of Dataset TBD ## Additional Information TBD ### Dataset Curators TBD ### Licensing Information The OSDG Community Dataset (OSDG-CD) is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @dataset{osdg_2023_8397907, author = {OSDG and UNDP IICPSD SDG AI Lab and PPMI}, title = {OSDG Community Dataset (OSDG-CD)}, month = oct, year = 2023, note = {{This CSV file uses UTF-8 character encoding. For easy access on MS Excel, open the file using Data → From Text/CSV. Please split CSV data into different columns by using a TAB delimiter.}}, publisher = {Zenodo}, version = {2023.10}, doi = {10.5281/zenodo.8397907}, url = {https://doi.org/10.5281/zenodo.8397907} } ``` ### Contributions TBD
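The integer `label` field corresponds one-to-one to the SDG class names declared in the YAML header above (`label` 0 is "SDG 1", through 15 for "SDG 16"). A minimal sketch of that mapping, assuming only the class-name list from the header (when loaded with the `datasets` library, the same mapping is also exposed via the `ClassLabel` feature's `int2str`):

```python
# Rebuild the label-to-SDG-name mapping from the class_label names in the
# card's YAML header (indices 0..15 map to "SDG 1".."SDG 16").
SDG_NAMES = [f"SDG {i}" for i in range(1, 17)]

def label_to_sdg(label: int) -> str:
    """Return the SDG name for an integer `label` field value."""
    return SDG_NAMES[label]

# The data instance shown above carries label 5:
print(label_to_sdg(5))  # SDG 6
```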
[ -0.5488648414611816, -0.588140070438385, 0.3246345818042755, 0.08854301273822784, -0.43901145458221436, -0.03868985176086426, -0.31920987367630005, -0.40883326530456543, 0.4024019241333008, 0.5330994725227356, -0.804212749004364, -1.0384302139282227, -0.7265607118606567, 0.2770462334156036...
null
null
null
null
null
null
null
null
null
null
null
null
null
LHF/escorpius-mr
LHF
2023-05-11T22:29:21Z
19
2
null
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "multilinguality:multilingual", "size_categories:100B<n<1T", "source_datasets:original", "language:af", "language:ar", "language:bn", "language:ca", "language:cs",...
2023-05-11T22:29:21Z
2022-05-03T18:49:47.000Z
2022-05-03T18:49:47
--- license: cc-by-nc-nd-4.0 language: - af - ar - bn - ca - cs - da - de - el - eu - fa - fi - fr - gl - hi - hr - it - ja - ko - mt - nl - no - oc - pa - pl - pt - ro - sl - sr - sv - tr - uk - ur multilinguality: - multilingual size_categories: - 100B<n<1T source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling --- # esCorpius Multilingual Raw In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages other than English, as they are either too small, or present a low quality derived from sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under the CC BY-NC-ND 4.0 license. # Usage ``` dataset = load_dataset('LHF/escorpius-mr', split='train', streaming=True) ``` # Intended use This corpus is the *raw version* of the esCorpius-m corpus. This corpus can be used for benchmarking deduplication tools.
## Other corpora - esCorpius multilingual corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius-m - esCorpius original *Spanish-only* corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius ## Citation Link to paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147 Cite this work: ``` @inproceedings{gutierrezfandino22_iberspeech, author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas}, title={{esCorpius: A Massive Spanish Crawling Corpus}}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, year=2022, booktitle={Proc. IberSPEECH 2022}, pages={126--130}, doi={10.21437/IberSPEECH.2022-26} } ``` ## Disclaimer We did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus.
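Since the card positions this raw corpus as a benchmark for deduplication tools, a minimal exact-duplicate baseline can be sketched as follows (the exact column name of the streamed records is not stated here, so the function takes plain strings; check the features after loading):

```python
import hashlib

def duplicate_flags(texts):
    """Flag each text that exactly repeats an earlier one (after stripping)."""
    seen, flags = set(), []
    for text in texts:
        digest = hashlib.sha256(text.strip().encode("utf-8")).hexdigest()
        flags.append(digest in seen)
        seen.add(digest)
    return flags

docs = ["Hello world", "Hello world ", "Another paragraph"]
print(duplicate_flags(docs))  # [False, True, False]
```

Real deduplication pipelines of the kind described in the paper go further (near-duplicate and paragraph-level matching); this only establishes the exact-match floor.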
[ -0.4621693193912506, -0.6162374019622803, 0.1381826400756836, 0.4041445851325989, -0.22818322479724884, 0.5403138399124146, -0.1402767151594162, -0.572929322719574, 0.6785258650779724, 0.4726186990737915, -0.45919081568717957, -0.5643812417984009, -0.6031078100204468, 0.5167700052261353, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
SetFit/amazon_massive_intent_ko-KR
SetFit
2022-05-06T09:09:47Z
19
0
null
[ "region:us" ]
2022-05-06T09:09:47Z
2022-05-06T09:09:44.000Z
2022-05-06T09:09:44
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
SetFit/amazon_massive_intent_my-MM
SetFit
2022-05-06T09:10:15Z
19
0
null
[ "region:us" ]
2022-05-06T09:10:15Z
2022-05-06T09:10:12.000Z
2022-05-06T09:10:12
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
strombergnlp/twitter_pos
strombergnlp
2022-10-25T21:43:15Z
19
2
ritter-pos
[ "task_categories:token-classification", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-10-25T21:43:15Z
2022-05-06T19:09:49.000Z
2022-05-06T19:09:49
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - part-of-speech paperswithcode_id: ritter-pos pretty_name: Twitter Part-of-speech --- # Dataset Card for "twitter-pos" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html) - **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter) - **Paper:** [https://aclanthology.org/R13-1026/](https://aclanthology.org/R13-1026/) - **Point of Contact:** [Leon Derczynski](https://github.com/leondz) - **Size of downloaded dataset files:** 51.96 MiB - **Size of the generated dataset:** 251.22 KiB - **Total amount of disk used:** 52.05 MB ### Dataset Summary Part-of-speech tagging is a basic NLP task.
However, Twitter text is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style. This dataset contains two datasets for English PoS tagging for tweets: * Ritter, with train/dev/test * Foster, with dev/test Splits defined in the Derczynski paper, but the data is from Ritter and Foster. * Ritter: [https://aclanthology.org/D11-1141.pdf](https://aclanthology.org/D11-1141.pdf), * Foster: [https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191](https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191) ### Supported Tasks and Leaderboards * [Part of speech tagging on Ritter](https://paperswithcode.com/sota/part-of-speech-tagging-on-ritter) ### Languages English, non-region-specific. `bcp47:en` ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` {'id': '0', 'tokens': ['Antick', 'Musings', 'post', ':', 'Book-A-Day', '2010', '#', '243', '(', '10/4', ')', '--', 'Gray', 'Horses', 'by', 'Hope', 'Larson', 'http://bit.ly/as8fvc'], 'pos_tags': [23, 23, 22, 9, 23, 12, 22, 12, 5, 12, 6, 9, 23, 23, 16, 23, 23, 51]} ``` ### Data Fields The data fields are the same among all splits. #### twitter-pos - `id`: a `string` feature. - `tokens`: a `list` of `string` features. - `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices: ```python ``` ### Data Splits | name |tokens|sentences| |---------|----:|---------:| |ritter train|10652|551| |ritter dev |2242|118| |ritter test |2291|118| |foster dev |2998|270| |foster test |2841|250| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information ### Citation Information ``` @inproceedings{ritter2011named, title={Named entity recognition in tweets: an experimental study}, author={Ritter, Alan and Clark, Sam and Etzioni, Oren and others}, booktitle={Proceedings of the 2011 conference on empirical methods in natural language processing}, pages={1524--1534}, year={2011} } @inproceedings{foster2011hardtoparse, title={\# hardtoparse: POS Tagging and Parsing the Twitterverse}, author={Foster, Jennifer and Cetinoglu, Ozlem and Wagner, Joachim and Le Roux, Joseph and Hogan, Stephen and Nivre, Joakim and Hogan, Deirdre and Van Genabith, Josef}, booktitle={Workshops at 
the Twenty-Fifth AAAI Conference on Artificial Intelligence}, year={2011} } @inproceedings{derczynski2013twitter, title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data}, author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina}, booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013}, pages={198--206}, year={2013} } ``` ### Contributions Author uploaded ([@leondz](https://github.com/leondz))
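The parallel `tokens` and `pos_tags` lists line up index by index; a small sketch pairing them, using a prefix of the 'train' instance shown in Data Instances (tag indices are kept numeric, since the full tagset table is not reproduced in this card):

```python
# Pair each token with its part-of-speech tag index.
instance = {
    "tokens": ["Antick", "Musings", "post", ":"],
    "pos_tags": [23, 23, 22, 9],
}

pairs = list(zip(instance["tokens"], instance["pos_tags"]))
print(pairs)  # [('Antick', 23), ('Musings', 23), ('post', 22), (':', 9)]
```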
[ -0.4126706123352051, -0.5383011698722839, 0.1496376246213913, 0.2686498165130615, -0.2672540545463562, 0.18925274908542633, -0.41433951258659363, -0.41519057750701904, 0.6702490448951721, 0.21301543712615967, -0.6868792772293091, -0.960845410823822, -0.6179501414299011, 0.06002918258309364...
null
null
null
null
null
null
null
null
null
null
null
null
null
Chr0my/freesound.org
Chr0my
2023-04-09T14:31:11Z
19
9
null
[ "size_categories:100K<n<1M", "language:en", "music", "region:us" ]
2023-04-09T14:31:11Z
2022-05-15T17:31:35.000Z
2022-05-15T17:31:35
--- language: - en tags: - music size_categories: - 100K<n<1M --- This dataset has been scraped from https://freesound.org, containing 554,849 audio clips. License: cc-by-sa-3.0, https://creativecommons.org/licenses/by-sa/3.0/
[ -0.5032792091369629, -0.24306902289390564, 0.48953843116760254, 0.4639659821987152, -0.3864404857158661, -0.20854856073856354, 0.0415848046541214, -0.28540539741516113, 0.4454440772533417, 0.6067507266998291, -0.8543437719345093, -0.5952737331390381, -0.400054007768631, 0.01548705995082855...
null
null
null
null
null
null
null
null
null
null
null
null
null
hongdijk/kluetest
hongdijk
2022-06-30T08:42:34Z
19
0
null
[ "license:other", "region:us" ]
2022-06-30T08:42:34Z
2022-05-20T10:38:20.000Z
2022-05-20T10:38:20
--- license: other ---
[ -0.1285335123538971, -0.1861683875322342, 0.6529128551483154, 0.49436232447624207, -0.19319400191307068, 0.23607441782951355, 0.36072009801864624, 0.05056373029947281, 0.5793656706809998, 0.7400146722793579, -0.650810182094574, -0.23784008622169495, -0.7102247476577759, -0.0478255338966846...
null
null
null
null
null
null
null
null
null
null
null
null
null
arize-ai/movie_reviews_with_context_drift
arize-ai
2022-07-01T17:26:12Z
19
0
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
2022-07-01T17:26:12Z
2022-05-20T23:25:49.000Z
2022-05-20T23:25:49
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual pretty_name: sentiment-classification-reviews-with-drift size_categories: - 10K<n<100K source_datasets: - extended|imdb task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in English. ## Dataset Structure ### Data Instances #### default An example of `training` looks as follows: ```json { 'prediction_ts': 1650092416.0, 'age': 44, 'gender': 'female', 'context': 'movies', 'text': "An interesting premise, and Billy Drago is always good as a dangerous nut-bag (side note: I'd love to see Drago, Stephen McHattie and Lance Hendrikson in a flick together; talk about raging cheekbones!). The soundtrack wasn't terrible, either.<br /><br />But the acting--even that of such professionals as Drago and Debbie Rochon--was terrible, the directing worse (perhaps contributory to the former), the dialog chimp-like, and the camera work, barely tolerable. Still, it was the SETS that got a big 10 on my oy-vey scale. I don't know where this was filmed, but were I to hazard a guess, it would be either an open-air museum, or one of those re-enactment villages, where everything is just a bit too well-kept to do more than suggest the real Old West. Okay, so it was shot on a college kid's budget. That said, I could have forgiven one or two of the aforementioned faults. But taken all together, and being generous, I could not see giving it more than three stars.", 'label': 0 } ``` ### Data Fields #### default The data fields are the same among all splits. - `prediction_ts`: a `float` feature. - `age`: an `int` feature. - `gender`: a `string` feature. - `context`: a `string` feature. - `text`: a `string` feature. - `label`: a `ClassLabel` feature, with possible values including negative(0) and positive(1).
### Data Splits | name |training|validation|production | |----------|-------:|---------:|----------:| | default | 9916 | 2479 | 40079 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
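The made-up `prediction_ts` field is a Unix timestamp stored as a float; a minimal sketch of decoding the value carried by the example instance above:

```python
from datetime import datetime, timezone

# `prediction_ts` from the example training instance above.
prediction_ts = 1650092416.0

ts = datetime.fromtimestamp(prediction_ts, tz=timezone.utc)
print(ts.isoformat())  # 2022-04-16T07:00:16+00:00
```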
[ -0.6306349039077759, -0.4772126078605652, 0.23833712935447693, -0.14187325537204742, -0.3259514272212982, 0.03965630382299423, -0.2803017199039459, -0.24887913465499878, 0.5413499474525452, 0.24213020503520966, -0.992721438407898, -0.5816421508789062, -0.5947282910346985, -0.00023615259851...
null
null
null
null
null
null
null
null
null
null
null
null
null
emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__xlm-roberta-base
emilylearning
2022-05-26T01:16:58Z
19
0
null
[ "region:us" ]
2022-05-26T01:16:58Z
2022-05-25T08:47:26.000Z
2022-05-25T08:47:26
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
thomasg26/sv_corpora_parliament_processed
thomasg26
2022-05-25T10:00:15Z
19
0
null
[ "region:us" ]
2022-05-25T10:00:15Z
2022-05-25T09:50:55.000Z
2022-05-25T09:50:55
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
cradle-bio/meltome_cluster_split
cradle-bio
2022-05-25T13:21:09Z
19
1
null
[ "region:us" ]
2022-05-25T13:21:09Z
2022-05-25T13:20:52.000Z
2022-05-25T13:20:52
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
evaluate/media
evaluate
2022-11-22T18:06:21Z
19
0
null
[ "region:us" ]
2022-11-22T18:06:21Z
2022-05-25T14:35:22.000Z
2022-05-25T14:35:22
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
pysentimiento/spanish-targeted-sentiment-headlines
pysentimiento
2022-06-17T21:28:01Z
19
1
null
[ "region:us" ]
2022-06-17T21:28:01Z
2022-06-10T21:21:22.000Z
2022-06-10T21:21:22
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
tomekkorbak/codeparrot-clean-train-v2-pep8
tomekkorbak
2022-06-17T18:23:03Z
19
0
null
[ "region:us" ]
2022-06-17T18:23:03Z
2022-06-17T18:22:33.000Z
2022-06-17T18:22:33
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
spacemanidol/ESCI-product-dataset-corpus
spacemanidol
2022-08-04T14:05:57Z
19
1
null
[ "region:us" ]
2022-08-04T14:05:57Z
2022-06-17T20:39:15.000Z
2022-06-17T20:39:15
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
julien-c/kaggle-rounakbanik-pokemon
julien-c
2022-12-08T09:50:43Z
19
2
null
[ "license:cc0-1.0", "pokemon", "region:us" ]
2022-12-08T09:50:43Z
2022-06-17T21:18:00.000Z
2022-06-17T21:18:00
--- license: cc0-1.0 tags: - pokemon --- ![](https://huggingface.co/datasets/julien-c/kaggle-rounakbanik-pokemon/resolve/main/pokemon-mystery-dungeon-9qdfdeoy8xdw8cb6.jpg) ## Source: https://www.kaggle.com/datasets/rounakbanik/pokemon Also published to https://datasette.fly.dev/pokemon/pokemon using `datasette` Columns: - name: The English name of the Pokemon - japanese_name: The Original Japanese name of the Pokemon - pokedex_number: The entry number of the Pokemon in the National Pokedex - percentage_male: The percentage of the species that are male. Blank if the Pokemon is genderless. - type1: The Primary Type of the Pokemon - type2: The Secondary Type of the Pokemon - classification: The Classification of the Pokemon as described by the Sun and Moon Pokedex - height_m: Height of the Pokemon in metres - weight_kg: The Weight of the Pokemon in kilograms - capture_rate: Capture Rate of the Pokemon - baseeggsteps: The number of steps required to hatch an egg of the Pokemon - abilities: A stringified list of abilities that the Pokemon is capable of having - experience_growth: The Experience Growth of the Pokemon - base_happiness: Base Happiness of the Pokemon - against_?: Eighteen features that denote the amount of damage taken against an attack of a particular type - hp: The Base HP of the Pokemon - attack: The Base Attack of the Pokemon - defense: The Base Defense of the Pokemon - sp_attack: The Base Special Attack of the Pokemon - sp_defense: The Base Special Defense of the Pokemon - speed: The Base Speed of the Pokemon - generation: The numbered generation which the Pokemon was first introduced - is_legendary: Denotes if the Pokemon is legendary.
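A minimal, self-contained sketch of working with the columns listed above (a tiny inline CSV sample stands in for the real Kaggle file; the row values are illustrative, not quoted from the dataset):

```python
import csv
import io

# Two illustrative rows using a subset of the columns described above.
sample = """name,pokedex_number,type1,is_legendary
Bulbasaur,1,grass,0
Mewtwo,150,psychic,1
"""

rows = list(csv.DictReader(io.StringIO(sample)))
legendary = [r["name"] for r in rows if r["is_legendary"] == "1"]
print(legendary)  # ['Mewtwo']
```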
[ -0.3378840982913971, -0.10855557769536972, 0.17418313026428223, 0.05626410245895386, -0.048661574721336365, 0.28541433811187744, -0.03884372115135193, -0.3698950409889221, 1.0418410301208496, 0.43545112013816833, -0.8878641724586487, -0.7536487579345703, -0.5104360580444336, 0.374921351671...
null
null
null
null
null
null
null
null
null
null
null
null
null
alpalasaul/test
alpalasaul
2022-06-18T02:08:52Z
19
0
null
[ "region:us" ]
2022-06-18T02:08:52Z
2022-06-18T01:57:16.000Z
2022-06-18T01:57:16
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
alpalasaul/span_full_corpus_clean
alpalasaul
2022-06-18T02:19:43Z
19
0
null
[ "region:us" ]
2022-06-18T02:19:43Z
2022-06-18T02:19:30.000Z
2022-06-18T02:19:30
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
imvladikon/hebrew_news
imvladikon
2022-07-09T19:53:05Z
19
1
null
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:he", "license:other", "region:us" ]
2022-07-09T19:53:05Z
2022-06-19T16:19:53.000Z
2022-06-19T16:19:53
--- annotations_creators: - no-annotation language_creators: - other language: - he license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization task_ids: - news-articles-summarization --- ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ``` id - article id articleBody - article main content description - short version of the article, description of the article headline - headline of the article title - title of the article ```
[ -0.6232843399047852, -0.4770576059818268, 0.24052774906158447, 0.34589797258377075, -0.1975361406803131, 0.31436729431152344, -0.1999082714319229, -0.22804564237594604, 0.6278252601623535, 0.6739808320999146, -1.0589795112609863, -1.260535478591919, -0.7371204495429993, 0.23171451687812805...
null
null
null
null
null
null
null
null
null
null
null
null
null
fusing/dog_captions
fusing
2022-06-22T14:28:13Z
19
0
null
[ "region:us" ]
2022-06-22T14:28:13Z
2022-06-21T18:28:05.000Z
2022-06-21T18:28:05
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Human_Face_Image_Data_with_Multiple_Angles_Light_Conditions_and_Expressions
Nexdata
2023-08-31T02:47:14Z
19
1
null
[ "region:us" ]
2023-08-31T02:47:14Z
2022-06-27T08:09:31.000Z
2022-06-27T08:09:31
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Human_Face_Image_Data_with_Multiple_Angles_Light_Conditions_and_Expressions ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/4?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 110 People – Human Face Image Data with Multiple Angles, Light Conditions, and Expressions. The subjects are all young people. For each subject, 2,100 images were collected. The 2,100 images cover 14 camera angles × 5 light conditions × 30 expressions. The data can be used for face recognition, 3D face reconstruction, etc.
For more details, please refer to the link: https://www.nexdata.ai/datasets/4?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
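The per-subject image count stated in the summary decomposes as camera angles times light conditions times expressions; a one-line arithmetic check:

```python
# 14 camera angles x 5 light conditions x 30 expressions per subject.
angles, lights, expressions = 14, 5, 30
per_subject = angles * lights * expressions
print(per_subject)  # 2100
```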
[ -0.7387109398841858, -0.6092322468757629, 0.2078755646944046, 0.41335421800613403, -0.04411636292934418, -0.055958520621061325, -0.0018549138912931085, -0.595026969909668, 0.5981706976890564, 0.6921139359474182, -0.8711429238319397, -0.9403320550918579, -0.5216334462165833, 0.1969989240169...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Multi-pose_and_Multi-expression_Face_Data
Nexdata
2023-08-31T02:43:18Z
19
0
null
[ "region:us" ]
2023-08-31T02:43:18Z
2022-06-27T08:11:16.000Z
2022-06-27T08:11:16
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Multi-pose_and_Multi-expression_Face_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/9?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 1,507 People 102,476 Images Multi-pose and Multi-expression Face Data. The data includes 1,507 Chinese people (762 males, 745 females). For each subject, 62 multi-pose face images and 6 multi-expression face images were collected. The data diversity includes image data from all ages with multiple angles, multiple poses and multiple light conditions. This data can be used for tasks such as face recognition and facial expression recognition.
For more details, please refer to the link: https://www.nexdata.ai/datasets/9?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.6205889582633972, -0.6428126692771912, 0.05981150642037392, 0.46114158630371094, -0.06830248236656189, -0.06673990935087204, -0.032863397151231766, -0.5934418439865112, 0.6472415328025818, 0.5338234305381775, -0.9002859592437744, -0.8802255988121033, -0.6231258511543274, 0.1194919273257...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Driver_Behavior_Collection_Data
Nexdata
2023-08-31T02:39:56Z
19
1
null
[ "region:us" ]
2023-08-31T02:39:56Z
2022-06-27T08:12:41.000Z
2022-06-27T08:12:41
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Driver_Behavior_Collection_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/963?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 1,003 People - Driver Behavior Collection Data. The data includes multiple ages and multiple time periods. The driver behaviors include dangerous behavior, fatigue behavior and visual movement behavior. In terms of device, binocular cameras of RGB and infrared channels were applied. This data can be used for tasks such as driver behavior analysis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/963?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.6716100573539734, -0.749836802482605, 0.3322926461696625, 0.3544267416000366, -0.009810823947191238, -0.15565043687820435, -0.11060257256031036, -0.5847907662391663, 0.5731465816497803, 0.5512005686759949, -1.019184947013855, -0.7931568026542664, -0.4961192011833191, 0.06126938760280609...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Multi-race_Driver_Behavior_Collection_Data
Nexdata
2023-08-31T02:42:51Z
19
0
null
[ "region:us" ]
2023-08-31T02:42:51Z
2022-06-27T08:17:17.000Z
2022-06-27T08:17:17
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Multi-race_Driver_Behavior_Collection_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1075?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 304 People Multi-race - Driver Behavior Collection Data. The data includes multiple ages, multiple time periods and multiple races (Caucasian, Black, Indian). The driver behaviors include dangerous behavior, fatigue behavior and visual movement behavior. In terms of device, binocular cameras of RGB and infrared channels were applied. This data can be used for tasks such as driver behavior analysis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1075?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision, object-detection: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.7168707251548767, -0.692514181137085, 0.3031635582447052, 0.4108560383319855, -0.05463213846087456, 0.07247719913721085, -0.21105632185935974, -0.6273093223571777, 0.5854145884513855, 0.37184467911720276, -0.9271062016487122, -0.7669822573661804, -0.5111427307128906, 0.09885279834270477...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Face_Recognition_Data_with_Gauze_Mask
Nexdata
2023-08-31T02:46:17Z
19
0
null
[ "region:us" ]
2023-08-31T02:46:17Z
2022-06-27T08:18:32.000Z
2022-06-27T08:18:32
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Face_Recognition_Data_with_Gauze_Mask ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1084?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 5,030 People - Face Recognition Data with Gauze Mask, for each subject, 7 images were collected. The dataset diversity includes multiple mask types, multiple ages, multiple light conditions and scenes. This data can be applied to computer vision tasks such as occluded face detection and recognition. For more details, please refer to the link: https://www.nexdata.ai/datasets/1084?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection.
### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.6804701089859009, -0.6094836592674255, 0.21296745538711548, 0.1797172874212265, -0.1642904281616211, 0.12678122520446777, -0.02425374835729599, -0.5559049248695374, 0.7954890727996826, 0.7009615898132324, -0.7200976610183716, -0.8911634087562561, -0.7495498061180115, 0.11900373548269272...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Occluded_and_Multi-pose_Face_Recognition_Data
Nexdata
2023-08-31T02:42:06Z
19
0
null
[ "region:us" ]
2023-08-31T02:42:06Z
2022-06-27T08:20:04.000Z
2022-06-27T08:20:04
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Occluded_and_Multi-pose_Face_Recognition_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1073?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 1,930 People with Occlusion and Multi-pose Face Recognition Data, for each subject, 200 images were collected. The 200 images include 4 kinds of light conditions * 10 kinds of occlusion cases (including non-occluded case) * 5 kinds of face pose. This data can be applied to computer vision tasks such as occluded face detection and recognition.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1073?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.697555422782898, -0.6027403473854065, 0.12824468314647675, 0.1414637416601181, -0.07240721583366394, -0.021598655730485916, -0.03340747207403183, -0.5454860925674438, 0.6996536254882812, 0.6836540699005127, -0.7691879272460938, -0.8108958601951599, -0.6420416831970215, 0.097756437957286...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Handwriting_OCR_Data_of_Japanese_and_Korean
Nexdata
2023-08-31T02:44:01Z
19
1
null
[ "region:us" ]
2023-08-31T02:44:01Z
2022-06-27T08:21:39.000Z
2022-06-27T08:21:39
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Handwriting_OCR_Data_of_Japanese_and_Korean ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/127?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 100 People - Handwriting OCR Data of Japanese and Korean. This dataset was collected from 100 subjects including 50 Japanese, 49 Koreans and 1 Afghan. For different subjects, the corpus are different. The data diversity includes multiple cellphone models and different corpus. This dataset can be used for tasks such as handwriting OCR of Japanese and Korean.
For more details, please refer to the link: https://www.nexdata.ai/datasets/127?source=Huggingface ### Supported Tasks and Leaderboards image-to-text, computer-vision: The dataset can be used to train a model for image-to-text. ### Languages Japanese, Korean ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.39481320977211, -0.6974644660949707, 0.4089285731315613, 0.031152268871665, -0.1680344194173813, 0.0843263491988182, -0.24168945848941803, -0.6613118052482605, 0.6405661106109619, 0.8451483845710754, -0.7447962760925293, -1.1841565370559692, -0.8197458386421204, 0.31354305148124695, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
Nexdata/Natural_Scenes_OCR_Data_of_12_Languages
Nexdata
2023-08-31T02:15:54Z
19
0
null
[ "region:us" ]
2023-08-31T02:15:54Z
2022-06-27T08:23:57.000Z
2022-06-27T08:23:57
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for Nexdata/Natural_Scenes_OCR_Data_of_12_Languages ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1064?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 105,941 Images Natural Scenes OCR Data of 12 Languages. The data covers 12 languages (6 Asian languages, 6 European languages), multiple natural scenes, multiple photographic angles. For annotation, line-level quadrilateral bounding boxes and transcriptions of the texts were annotated in the data. The data can be used for tasks such as multi-language OCR.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1064?source=Huggingface ### Supported Tasks and Leaderboards image-to-text, computer-vision: The dataset can be used to train a model for image-to-text. ### Languages Japanese, Korean, Indonesian, Malay, Vietnamese, Thai, French, German, Italian, Portuguese, Russian and Spanish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
[ -0.5508370995521545, -0.6861667633056641, 0.22165687382221222, 0.08789705485105515, -0.23153941333293915, -0.035651903599500656, -0.3131570518016815, -0.757050633430481, 0.5183610320091248, 0.7968606948852539, -0.6044655442237854, -1.0661317110061646, -0.683853805065155, 0.5698704123497009...
null
null
null
null
null
null
null
null
null
null
null
null
null
MicPie/unpredictable_baseball-fantasysports-yahoo-com
MicPie
2022-08-04T19:37:41Z
19
0
null
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-cl...
2022-08-04T19:37:41Z
2022-07-03T08:46:09.000Z
2022-07-03T08:46:09
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-baseball-fantasysports-yahoo-com size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-baseball-fantasysports-yahoo-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - 
**Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields
- 'task': task identifier
- 'input': column elements of a specific row in the table.
- 'options': for multiple choice classification, it provides the options to choose from.
- 'output': target column element of the same row as input.
- 'pageTitle': the title of the page containing the table.
- 'outputColName': output column name
- 'url': url to the website containing the table
- 'wdcFile': WDC Web Table Corpus file

### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
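The few-shot structure described under "Data Instances" above can be made concrete with a short sketch. The prompt template below is an illustrative assumption (the publication's exact formatting may differ), and the example dicts are hand-written in the documented schema rather than real dataset rows:

```python
def format_example(example, include_output=True):
    """Render one example dict ('input'/'options'/'output') as a prompt segment."""
    prompt = f"Input: {example['input']}\n"
    if example.get("options"):  # only present for multiple-choice tasks
        prompt += "Options: " + ", ".join(example["options"]) + "\n"
    prompt += "Output:"
    if include_output:
        prompt += f" {example['output']}"
    return prompt

def build_few_shot_prompt(support_examples, query_example):
    """Concatenate support examples of one task, then the query without its answer."""
    parts = [format_example(ex) for ex in support_examples]
    parts.append(format_example(query_example, include_output=False))
    return "\n\n".join(parts)

# Hand-written toy rows in the documented schema (not actual dataset content).
support = [{"task": "demo", "input": "Team: Reds | Year: 1990", "options": [], "output": "World Series"}]
query = {"task": "demo", "input": "Team: Twins | Year: 1991", "options": [], "output": "World Series"}
print(build_few_shot_prompt(support, query))
```

A model fine-tuned on such concatenations is then asked to generate the text following the final "Output:".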
[ -0.5598666071891785, -0.5912361741065979, 0.424005389213562, 0.33776795864105225, 0.06857472658157349, 0.17471188306808472, -0.125833660364151, -0.6219690442085266, 0.5435531735420227, 0.2830978333950043, -1.063981294631958, -0.6501271724700928, -0.6197425127029419, 0.22493040561676025, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
erickdp/newdataset
erickdp
2022-07-06T18:13:19Z
19
0
null
[ "region:us" ]
2022-07-06T18:13:19Z
2022-07-06T18:03:38.000Z
2022-07-06T18:03:38
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
embedding-data/flickr30k_captions_quintets
embedding-data
2022-08-02T01:59:48Z
19
1
embedding-data/flickr30k-captions
[ "language:en", "license:mit", "region:us" ]
2022-08-02T01:59:48Z
2022-07-07T23:09:35.000Z
2022-07-07T23:09:35
--- license: mit language: - en paperswithcode_id: embedding-data/flickr30k-captions pretty_name: flickr30k-captions --- # Dataset Card for "flickr30k-captions" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Usage Example](#usage-example) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://shannon.cs.illinois.edu/DenotationGraph/](https://shannon.cs.illinois.edu/DenotationGraph/) - **Repository:** [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) - **Paper:** [https://transacl.org/ojs/index.php/tacl/article/view/229/33](https://transacl.org/ojs/index.php/tacl/article/view/229/33) - **Point of Contact:** [Peter Young](pyoung2@illinois.edu), [Alice Lai](aylai2@illinois.edu), [Micah Hodosh](mhodosh2@illinois.edu), [Julia Hockenmaier](juliahmr@illinois.edu) ### Dataset Summary We propose to use the visual denotations of linguistic expressions (i.e. 
the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions. Disclaimer: The team releasing Flickr30k did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains quintets of similar sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value":

```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```

This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. 
### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/flickr30k-captions")
```

The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 31783
    })
})
```

Review an example `i` with:

```python
dataset["train"][i]["set"]
```

### Curation Rationale [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) #### Who are the source language producers? [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Annotations #### Annotation process [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) #### Who are the annotators? [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Personal and Sensitive Information [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Discussion of Biases [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Other Known Limitations [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ## Additional Information ### Dataset Curators [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Licensing Information [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Citation Information [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/) ### Contributions Thanks to [Peter Young](pyoung2@illinois.edu), [Alice Lai](aylai2@illinois.edu), [Micah Hodosh](mhodosh2@illinois.edu), [Julia Hockenmaier](juliahmr@illinois.edu) for adding 
this dataset.
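Since each record's five captions describe the same image, positive sentence pairs for contrastive training (e.g. with Sentence Transformers) can be derived by pairing captions within a set. A minimal sketch with made-up captions (an illustration, not part of the original card):

```python
from itertools import combinations

def quintet_to_pairs(example):
    """Expand one {'set': [s1, ..., s5]} record into all C(5, 2) = 10 positive pairs."""
    return [list(pair) for pair in combinations(example["set"], 2)]

# Made-up captions for illustration only.
quintet = {"set": [
    "A man rides a bicycle down the street.",
    "A cyclist travels along a city road.",
    "Someone is biking past shops.",
    "A person on a bike in an urban area.",
    "A man cycling through town.",
]}
pairs = quintet_to_pairs(quintet)
print(len(pairs))  # 10
```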
[ -0.5918236374855042, -0.40099599957466125, 0.15713101625442505, 0.022712381556630135, -0.14544667303562164, 0.01665293611586094, -0.05288828909397125, -0.2574631869792938, 0.30384573340415955, 0.4603160619735718, -0.750166654586792, -0.7208023071289062, -0.6195902228355408, 0.3158650100231...
null
null
null
null
null
null
null
null
null
null
null
null
null
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905464
autoevaluate
2022-07-15T08:27:05Z
19
0
null
[ "autotrain", "evaluation", "region:us" ]
2022-07-15T08:27:05Z
2022-07-14T12:47:36.000Z
2022-07-14T12:47:36
--- type: predictions tags: - autotrain - evaluation datasets: - kmfoda/booksum eval_info: task: summarization model: pszemraj/bigbird-pegasus-large-K-booksum metrics: ['bleu', 'perplexity'] dataset_name: kmfoda/booksum dataset_config: kmfoda--booksum dataset_split: test col_mapping: text: chapter target: summary_text --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/bigbird-pegasus-large-K-booksum * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
[ -0.49054089188575745, -0.05182591825723648, 0.210295170545578, 0.2502972185611725, -0.18319839239120483, -0.20628120005130768, 0.016820011660456657, -0.35133352875709534, 0.15353992581367493, 0.4608500897884369, -0.9973030686378479, -0.31082943081855774, -0.7420911192893982, -0.05403302982...
null
null
null
null
null
null
null
null
null
null
null
null
null
bhadresh-savani/photo-to-cartoon
bhadresh-savani
2022-07-22T11:48:40Z
19
3
null
[ "region:us" ]
2022-07-22T11:48:40Z
2022-07-22T11:45:54.000Z
2022-07-22T11:45:54
Entry not found
[ -0.32276472449302673, -0.22568407654762268, 0.8622258901596069, 0.4346148371696472, -0.5282984972000122, 0.7012965679168701, 0.7915717363357544, 0.07618629932403564, 0.7746022939682007, 0.2563222646713257, -0.785281777381897, -0.22573848068714142, -0.9104482531547546, 0.5715669393539429, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
schnell/kuci_juman
schnell
2022-07-23T14:43:47Z
19
0
null
[ "region:us" ]
2022-07-23T14:43:47Z
2022-07-23T14:43:02.000Z
2022-07-23T14:43:02
Entry not found
[ -0.32276472449302673, -0.22568407654762268, 0.8622258901596069, 0.4346148371696472, -0.5282984972000122, 0.7012965679168701, 0.7915717363357544, 0.07618629932403564, 0.7746022939682007, 0.2563222646713257, -0.785281777381897, -0.22573848068714142, -0.9104482531547546, 0.5715669393539429, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
zluvolyote/DREAM_SAMPLE_600K
zluvolyote
2022-07-23T22:59:37Z
19
0
null
[ "region:us" ]
2022-07-23T22:59:37Z
2022-07-23T17:33:40.000Z
2022-07-23T17:33:40
Entry not found
[ -0.32276472449302673, -0.22568407654762268, 0.8622258901596069, 0.4346148371696472, -0.5282984972000122, 0.7012965679168701, 0.7915717363357544, 0.07618629932403564, 0.7746022939682007, 0.2563222646713257, -0.785281777381897, -0.22573848068714142, -0.9104482531547546, 0.5715669393539429, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
Gpaiva/NERDE_sentences
Gpaiva
2022-07-24T00:22:44Z
19
0
null
[ "license:cc-by-4.0", "region:us" ]
2022-07-24T00:22:44Z
2022-07-24T00:12:59.000Z
2022-07-24T00:12:59
--- license: cc-by-4.0 ---
[ -0.1285335123538971, -0.1861683875322342, 0.6529128551483154, 0.49436232447624207, -0.19319400191307068, 0.23607441782951355, 0.36072009801864624, 0.05056373029947281, 0.5793656706809998, 0.7400146722793579, -0.650810182094574, -0.23784008622169495, -0.7102247476577759, -0.0478255338966846...
null
null
null
null
null
null
null
null
null
null
null
null
null
ttxy/online_shopping_10_cats
ttxy
2022-07-24T07:54:05Z
19
0
null
[ "region:us" ]
2022-07-24T07:54:05Z
2022-07-24T07:45:51.000Z
2022-07-24T07:45:51
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695556
autoevaluate
2022-07-24T08:23:49Z
19
0
null
[ "autotrain", "evaluation", "region:us" ]
2022-07-24T08:23:49Z
2022-07-24T08:20:54.000Z
2022-07-24T08:20:54
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: deepset/minilm-uncased-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/minilm-uncased-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model.
[ -0.4394873082637787, -0.3899570107460022, 0.31101763248443604, 0.043844182044267654, -0.022944046184420586, 0.16881759464740753, 0.13888850808143616, -0.40020236372947693, -0.026260925456881523, 0.3972335457801819, -1.3563989400863647, -0.019681649282574654, -0.455274760723114, -0.06161094...
null
null
null
null
null
null
null
null
null
null
null
null
null
biglam/contentious_contexts
biglam
2022-08-01T17:02:11Z
19
2
null
[ "task_categories:text-classification", "task_ids:sentiment-scoring", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets...
2022-08-01T17:02:11Z
2022-07-26T22:07:48.000Z
2022-07-26T22:07:48
--- annotations_creators: - expert-generated - crowdsourced language: - nl language_creators: - machine-generated license: - cc-by-2.0 multilinguality: - monolingual pretty_name: Contentious Contexts Corpus size_categories: - 1K<n<10K source_datasets: - original tags: - newspapers - historic - dutch - problematic - ConConCor task_categories: - text-classification task_ids: - sentiment-scoring - multi-label-classification --- # Dataset Card for Contentious Contexts Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [ConConCor](https://github.com/cultural-ai/ConConCor) - **Repository:** [ConConCor](https://github.com/cultural-ai/ConConCor) - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** [Jacco van Ossenbruggen](https://github.com/jrvosse) **Note** One can also find a Datasheet produced by the creators of this dataset as a [PDF document](https://github.com/cultural-ai/ConConCor/blob/main/Dataset/DataSheet.pdf) ### Dataset Summary This dataset contains extracts from historical Dutch newspapers 
containing keywords of potentially contentious words (according to present-day sensibilities). The dataset contains multiple annotations per instance, giving the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time. ### Supported Tasks and Leaderboards - `text-classification`: This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time. ### Languages The text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is `nl` ## Dataset Structure ### Data Instances

```
{
  'extract_id': 'H97',
  'text': 'en waardoor het eerste doel wordt voorbijgestreefd om voor den 5D5c5Y 5d-5@5j5g5d5e5Z5V5V5c een speciale eigen werkingssfeer te scheppen.Intusschen is het',
  'target': '5D 5c5Y5d-5@5j5g5d5e5Z5V5V5c',
  'annotator_responses_english': [
    {'id': 'unknown_2a', 'response': 'Not contentious'},
    {'id': 'unknown_2b', 'response': 'Contentious according to current standards'},
    {'id': 'unknown_2c', 'response': "I don't know"},
    {'id': 'unknown_2d', 'response': 'Contentious according to current standards'},
    {'id': 'unknown_2e', 'response': 'Not contentious'},
    {'id': 'unknown_2f', 'response': "I don't know"},
    {'id': 'unknown_2g', 'response': 'Not contentious'}],
  'annotator_responses_dutch': [
    {'id': 'unknown_2a', 'response': 'Niet omstreden'},
    {'id': 'unknown_2b', 'response': 'Omstreden naar huidige maatstaven'},
    {'id': 'unknown_2c', 'response': 'Weet ik niet'},
    {'id': 'unknown_2d', 'response': 'Omstreden naar huidige maatstaven'},
    {'id': 'unknown_2e', 'response': 'Niet omstreden'},
    {'id': 'unknown_2f', 'response': 'Weet ik niet'},
    {'id': 'unknown_2g', 'response': 'Niet omstreden'}],
  'annotator_suggestions': [
    {'id': 'unknown_2a', 'suggestion': ''},
    {'id': 'unknown_2b', 'suggestion': 'ander ras nodig'},
    {'id': 'unknown_2c', 'suggestion': 'personen van ander ras'},
    {'id': 'unknown_2d', 'suggestion': ''},
    {'id': 'unknown_2e', 'suggestion': ''},
    {'id': 'unknown_2f', 'suggestion': ''},
    {'id': 'unknown_2g', 'suggestion': 'ras'}]
}
```

### Data Fields

|extract_id|text|target|annotator_responses_english|annotator_responses_dutch|annotator_suggestions|
|---|---|---|---|---|---|
|Unique identifier|Text|Target phrase or word|Response (translated to English)|Response in Dutch|Suggestions, if present|

### Data Splits Train: 2720 ## Dataset Creation ### Curation Rationale > Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term "contentious" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution website where people can report "offensive language". ### Source Data #### Initial Data Collection and Normalization > The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as "article", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. 
The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly. #### Who are the source language producers? [N/A] ### Annotations #### Annotation process > The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample > - 'Omstreden'(Contentious) > - 'Niet omstreden'(Not contentious) > - 'Weet ik niet'(I don't know) > - 'Onleesbare OCR'(Illegible OCR)</br> 2 open fields > - 'Andere omstreden termen in de context'(Other contentious terms in the context) > - 'Notities'(Notes)</br> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. 
Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage: > - The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase; > - The context window of 5 sentences per sample was found optimal; > - The number of samples per annotator was increased to 50; > - The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective; > - The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards); > - The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments; > - Another open question was added at the end of the annotation asking how much time it took to complete the annotation. #### Who are the annotators? Volunteers and Expert annotators ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ## Accessing the annotations Each example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script. 
An example of how one could generate an "OCR quality rating" based on the number of times an annotator labelled an example with `Illegible OCR`:

```python
from collections import Counter

def calculate_ocr_score(example):
    annotator_responses = [response['response'] for response in example['annotator_responses_english']]
    counts = Counter(annotator_responses)
    bad_ocr_ratings = counts.get("Illegible OCR")
    if bad_ocr_ratings is None:
        bad_ocr_ratings = 0
    return round(1 - bad_ocr_ratings / len(annotator_responses), 3)

dataset = dataset.map(lambda example: {"ocr_score": calculate_ocr_score(example)})
```

To take the majority vote (or return a tie) based on whether an example is labelled contentious or not:

```python
def most_common_vote(example):
    annotator_responses = [response['response'] for response in example['annotator_responses_english']]
    counts = Counter(annotator_responses)
    contentious_count = counts.get("Contentious according to current standards")
    if not contentious_count:
        contentious_count = 0
    not_contentious_count = counts.get("Not contentious")
    if not not_contentious_count:
        not_contentious_count = 0
    if contentious_count > not_contentious_count:
        return "contentious"
    if contentious_count < not_contentious_count:
        return "not_contentious"
    if contentious_count == not_contentious_count:
        return "tied"
```

### Social Impact of Dataset This dataset can be used to see how words change in meaning over time. ### Discussion of Biases > Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations. Since this project was explicitly created to help assess bias, it should be used primarily in the context of assessing bias and methods for detecting bias. 
### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Cultural AI](https://github.com/cultural-ai) ### Licensing Information CC-BY ### Citation Information

```
@misc{ContentiousContextsCorpus2021,
  author = {Cultural AI},
  title = {Contentious Contexts Corpus},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cultural-ai/ConConCor}},
}
```
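The majority-vote idea from the "Accessing the annotations" section above can be exercised end to end on a toy record in the documented schema. This is a condensed, self-contained sketch; the record below is made up for illustration and is not from the dataset:

```python
from collections import Counter

def majority_label(example):
    """Majority vote over the English responses; a tie between the two labels -> 'tied'."""
    votes = Counter(r["response"] for r in example["annotator_responses_english"])
    contentious = votes.get("Contentious according to current standards", 0)
    not_contentious = votes.get("Not contentious", 0)
    if contentious > not_contentious:
        return "contentious"
    if contentious < not_contentious:
        return "not_contentious"
    return "tied"

# Made-up record in the schema shown under "Data Instances".
toy = {"annotator_responses_english": [
    {"id": "a", "response": "Not contentious"},
    {"id": "b", "response": "Contentious according to current standards"},
    {"id": "c", "response": "Not contentious"},
]}
print(majority_label(toy))  # not_contentious
```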
SLPL/syntran-fa
SLPL
2022-11-03T06:34:17Z
19
7
null
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_categories:text-generation", "multilinguality:monolingual", "size_categories:30k<n<50k", "language:fa", "license:mit", "conditional-text-generation", "conversational-question-answering", "region:us" ]
2022-11-03T06:34:17Z
2022-08-10T04:13:13.000Z
2022-08-10T04:13:13
---
language:
- fa
license: mit
multilinguality:
- monolingual
size_categories:
- 30k<n<50k
task_categories:
- question-answering
- text2text-generation
- text-generation
task_ids: []
pretty_name: SynTranFa
tags:
- conditional-text-generation
- conversational-question-answering
---

# SynTran-fa

A syntactically transformed version of Farsi QA datasets that turns questions and short answers into fluent responses. You can use this dataset with the code below:

```python
import datasets

data = datasets.load_dataset('SLPL/syntran-fa', split="train")
```

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL)
- **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)

### Dataset Summary

Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been some efforts to increase the size of QA datasets in Farsi. SynTran-fa is a question-answering dataset that accumulates the short answers of former Farsi QA datasets and proposes a complete fluent answer for each (question, short_answer) pair. The dataset contains nearly 50,000 question-answer entries.
The datasets used as our sources are listed in the [Source Data section](#source-data). The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), where a "parser + syntactic rules" module is used to generate different fluent answers from a pair of a question and a short answer. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser to parse each question and generate a response from it together with the short answer (sentences without verbs, up to ~4 words). One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence per answer) or by training a seq2seq model that does what our rule-based system does (by defining a new text-to-text task).
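As a toy illustration of the rule-based idea only (the real pipeline uses a stanza dependency parse plus hand-written syntactic rules), a question matching a known interrogative pattern can be turned into a fluent answer by substituting the short answer into the question frame. The single pattern below is invented for illustration:

```python
# Toy sketch of the rule-based idea; not the actual stanza-based rules.
def make_fluent_answer(question: str, short_answer: str) -> str:
    # Rule for "<X> چه نام دارد؟" ("What is <X> called?"):
    # rewrite as "<X> <answer> نام دارد." ("<X> is called <answer>.")
    pattern = "چه نام دارد؟"
    if question.endswith(pattern):
        subject = question[: -len(pattern)].strip()
        return f"{subject} {short_answer} نام دارد."
    # Fall back to the short answer when no rule matches.
    return short_answer

q = "باشگاه هاکی ساوتهمپتون چه نام دارد؟"
a = "باشگاه هاکی ساوتهمپتون"
print(make_fluent_answer(q, a))
```

A real implementation would dispatch on the dependency parse of the question rather than on surface patterns.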
### Languages

+ Persian (fa)

## Dataset Structure

Each row of the dataset looks like the example below:

```json
{
  'id': 0,
  'question': 'باشگاه هاکی ساوتهمپتون چه نام دارد؟',
  'short_answer': 'باشگاه هاکی ساوتهمپتون',
  'fluent_answer': 'باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.',
  'bert_loss': 1.110097069682014
}
```

+ `id`: the entry id in the dataset
+ `question`: the question
+ `short_answer`: the short answer corresponding to the `question` (the primary answer)
+ `fluent_answer`: the fluent (long) answer generated from both the `question` and the `short_answer` (the secondary answer)
+ `bert_loss`: the loss that [ParsBERT](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) yields when given the `fluent_answer` as input. As it increases, the sentence is more likely to be disfluent.

Note: the dataset is sorted by `bert_loss` in increasing order, so earlier sentences are more likely to be fluent.

### Data Splits

Currently, the dataset provides only the `train` split; a `test` split will be added soon.

## Dataset Creation

### Source Data

The source datasets that we used are as follows:

+ [PersianQA](https://github.com/sajjjadayobi/PersianQA)
+ [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)

#### Initial Data Collection and Normalization

We extracted all short-answer entries (sentences without verbs, up to ~4 words) from all open-source QA datasets in Farsi and used rules featuring the question's parse tree to generate long (fluent) answers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

The dataset is entirely a subset of known open-source datasets, so all information in it is already available on the internet. Nevertheless, we do not take responsibility for any of its content.
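Since rows are sorted by `bert_loss`, a fluency threshold can be applied with a simple filter. A minimal sketch with invented rows and an arbitrary threshold:

```python
# Invented sample rows in the dataset's schema; the threshold is arbitrary.
rows = [
    {"id": 0, "fluent_answer": "...", "bert_loss": 1.11},
    {"id": 1, "fluent_answer": "...", "bert_loss": 4.87},
]
THRESHOLD = 3.0
fluent_rows = [r for r in rows if r["bert_loss"] < THRESHOLD]
print([r["id"] for r in fluent_rows])  # -> [0]
```

With a real `datasets.Dataset` the equivalent would be `dataset.filter(lambda ex: ex["bert_loss"] < THRESHOLD)`.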
## Additional Information

### Dataset Curators

The dataset was gathered entirely during the Asr Gooyesh Pardaz company's summer internship, under the supervision of Soroush Gooran and Prof. Hossein Sameti and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.

### Licensing Information

MIT

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
chrisjay/test-mnist-5
chrisjay
2022-08-11T20:27:39Z
19
0
null
[ "region:us" ]
2022-08-11T20:27:39Z
2022-08-11T20:26:24.000Z
2022-08-11T20:26:24
Entry not found
philschmid/processed_bert_dataset
philschmid
2022-08-14T08:43:17Z
19
1
null
[ "region:us" ]
2022-08-14T08:43:17Z
2022-08-14T08:29:53.000Z
2022-08-14T08:29:53
Entry not found
hugginglearners/amazon-reviews-sentiment-analysis
hugginglearners
2022-08-18T04:28:40Z
19
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-08-18T04:28:40Z
2022-08-18T04:28:36.000Z
2022-08-18T04:28:36
---
license:
- cc-by-nc-sa-4.0
kaggle_id: tarkkaanko/amazon
---

# Dataset Card for Amazon reviews for sentiment analysis

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://kaggle.com/datasets/tarkkaanko/amazon
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

One of the most important problems in e-commerce is correctly calculating the ratings given to products after sale. Solving it means greater customer satisfaction for the e-commerce site, product prominence for sellers, and a seamless shopping experience for buyers. Another problem is correctly ordering the comments given to products: the prominence of misleading comments causes both financial losses and the loss of customers. By solving these two basic problems, the e-commerce site and sellers increase their sales, while customers complete their purchasing journey without any issues.
This dataset is about ranking product ratings and reviews on Amazon. Please review [this notebook](https://www.kaggle.com/code/tarkkaanko/rating-product-sorting-reviews-in-amazon) to observe how the dataset was produced. The dataset contains Amazon product data, including product categories and various metadata.

----

### What is expected of you?

The product with the most comments in the electronics category comes with user ratings and comments. We expect you to perform sentiment analysis on these with your own methods.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was shared by [@tarkkaanko](https://kaggle.com/tarkkaanko)

### Licensing Information

The license for this dataset is cc-by-nc-sa-4.0

### Citation Information

```bibtex
[More Information Needed]
```

### Contributions

[More Information Needed]
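As a hypothetical starting point for the sentiment-analysis task described above, one can derive coarse sentiment labels from star ratings before training a classifier. The field name `overall` is an assumption about the review schema, and the cut-offs are arbitrary:

```python
# Illustrative only: map star ratings to coarse sentiment labels.
# "overall" (the star rating) is an assumed field name; cut-offs are arbitrary.
def rating_to_sentiment(overall: float) -> str:
    if overall >= 4:
        return "positive"
    if overall <= 2:
        return "negative"
    return "neutral"

print([rating_to_sentiment(r) for r in (5, 3, 1)])  # -> ['positive', 'neutral', 'negative']
```

Labels produced this way are weak supervision at best; review text should still be inspected for sarcasm and mixed opinions.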
SLPL/naab-raw
SLPL
2022-11-03T06:34:28Z
19
6
null
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "multilinguality:monolingual", "language:fa", "license:mit", "arxiv:2208.13486", "region:us" ]
2022-11-03T06:34:28Z
2022-08-18T14:15:15.000Z
2022-08-18T14:15:15
---
language:
- fa
license:
- mit
multilinguality:
- monolingual
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab-raw (raw version of the naab corpus)
---

# naab-raw (raw version of the naab corpus)

_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_

## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Changelog](#changelog)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Contribution Guideline](#contribution-guideline)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)

### Dataset Summary

This is the raw (uncleaned) version of the [naab](https://huggingface.co/datasets/SLPL/naab) corpus. You can also customize our [preprocessing script](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess) to make your own cleaned corpus. This repository is a hub for all Farsi corpora.
Feel free to add your corpus following the [contribution guidelines](#contribution-guideline).

You can download the dataset with the command below:

```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw")
```

If you want to download a specific part of the corpus, set the config name to that corpus name:

```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw", "CC-fa")
```

### Supported Tasks and Leaderboards

This corpus can be used for training any language model trained with Masked Language Modeling (MLM) or any other self-supervised objective.

- `language-modeling`
- `masked-language-modeling`

### Changelog

It is crucial to keep a log of changes for projects that change periodically. Please refer to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md) for more details.

## Dataset Structure

Each row of the dataset looks like the example below:

```json
{
  'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
```

+ `text`: the textual paragraph.

### Data Splits

This corpus contains only one split (the `train` split).

## Dataset Creation

### Curation Rationale

Here are some details about each part of this corpus.

#### CC-fa

The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata, and text extractions. We use its Farsi part here.

#### W2C

W2C stands for Web to Corpus, and it contains several corpora. We include its Farsi part in this corpus.

### Contribution Guideline

In order to add your dataset, follow the steps below and make a pull request to be merged into _naab-raw_:

1. Add your dataset to `_CORPUS_URLS` in `naab-raw.py`, like:
   ```python
   ...
   "DATASET_NAME": "LINK_TO_A_PUBLIC_DOWNLOADABLE_FILE.txt"
   ...
   ```
2. Add a log of your changes to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md).
3.
Add a short description of your dataset to the [Curation Rationale](#curation-rationale) under a subsection with its name.

### Personal and Sensitive Information

Since this corpus is essentially a compilation of several former corpora, we take no responsibility for personal information included in it. If you detect any such violations, please let us know and we will do our best to remove them from the corpus as soon as possible. We tried our best to provide anonymity while keeping the crucial information. We shuffled parts of the corpus so that information passed through possible conversations would not be harmful.

## Additional Information

### Dataset Curators

+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)

### Licensing Information

MIT

### Citation Information

```
@article{sabouri2022naab,
  title={naab: A ready-to-use plug-and-play corpus for Farsi},
  author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
  journal={arXiv preprint arXiv:2208.13486},
  year={2022}
}
```

DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)

### Contributions

Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.

### Keywords

+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی
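Since this is the uncleaned corpus, some filtering is usually applied before pretraining. The actual pipeline lives in the linked preprocessing scripts; the rules below are invented here purely to sketch the kind of pass one might write:

```python
import re

# Illustrative-only cleaning pass: collapse whitespace and drop very short
# paragraphs. The real preprocessing lives in the Sharif-SLPL/t5-fa scripts.
def clean_paragraph(text: str, min_words: int = 3):
    text = re.sub(r"\s+", " ", text).strip()
    return text if len(text.split()) >= min_words else None

print(clean_paragraph("این   یک    تست است."))  # whitespace collapsed
print(clean_paragraph("کوتاه"))                 # -> None (too short to keep)
```

A real pass would also handle Unicode normalization of Farsi characters and deduplication across sources.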
carlosejimenez/cc12m_openai-clip-vit-patch32_image_retrieval_top15_start1000000_end3500000_SHORT500K
carlosejimenez
2022-09-15T18:06:52Z
19
0
null
[ "region:us" ]
2022-09-15T18:06:52Z
2022-09-15T18:04:27.000Z
2022-09-15T18:04:27
Entry not found
codesue/kelly
codesue
2022-12-18T22:06:55Z
19
0
null
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:sv", "license:cc-by-4.0", "lexicon", "swedish", "CEFR", "region:us" ]
2022-12-18T22:06:55Z
2022-09-16T02:18:16.000Z
2022-09-16T02:18:16
---
annotations_creators:
- expert-generated
language:
- sv
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: kelly
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- lexicon
- swedish
- CEFR
task_categories:
- text-classification
task_ids:
- text-scoring
---

# Dataset Card for Kelly

Keywords for Language Learning for Young and adults alike

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://spraakbanken.gu.se/en/resources/kelly
- **Paper:** https://link.springer.com/article/10.1007/s10579-013-9251-2

### Dataset Summary

The Swedish Kelly list is a freely available frequency-based vocabulary list that comprises the general-purpose language of modern Swedish. The list was generated from a large web-acquired corpus (SweWaC) of 114 million words dating from the 2010s. It is adapted to the needs of language learners and contains the 8,425 most frequent lemmas, which cover 80% of SweWaC.

### Languages

Swedish (sv-SE)

## Dataset Structure

### Data Instances

Here is a sample of the data:

```python
{
  'id': 190,
  'raw_frequency': 117835.0,
  'relative_frequency': 1033.61,
  'cefr_level': 'A1',
  'source': 'SweWaC',
  'marker': 'en',
  'lemma': 'dag',
  'pos': 'noun-en',
  'examples': 'e.g. god dag'
}
```

This can be understood as:

> The common noun "dag" ("day") has a rank of 190 in the list. It was used 117,835 times in SweWaC, meaning it occurred 1033.61 times per million words.
This word is among the most important vocabulary words for Swedish language learners and should be learned at the A1 CEFR level. An example usage of this word is the phrase "god dag" ("good day").

### Data Fields

- `id`: The row number for the data entry, starting at 1. Generally corresponds to the rank of the word.
- `raw_frequency`: The raw frequency of the word.
- `relative_frequency`: The relative frequency of the word, measured in number of occurrences per million words.
- `cefr_level`: The CEFR level (A1, A2, B1, B2, C1, C2) of the word.
- `source`: Whether the word came from SweWaC, translation lists (T2), or was manually added (manual).
- `marker`: The grammatical marker of the word, if any, such as an article or infinitive marker.
- `lemma`: The lemma of the word, sometimes provided with its spelling or stylistic variants.
- `pos`: The word's part of speech.
- `examples`: Usage examples and comments. Only available for some of the words.

Manual entries were prepended to the list, giving them a higher rank than they might otherwise have had. For example, the manual entry "Göteborg" ("Gothenburg") has a rank of 20, while the first non-manual entry "och" ("and") has a rank of 87. However, a conjunction and common stopword is far more likely to occur than the name of a city.

### Data Splits

There is a single split, `train`.

## Dataset Creation

Please refer to the article [Corpus-based approaches for the creation of a frequency based vocabulary list in the EU project KELLY – issues on reliability, validity and coverage](https://gup.ub.gu.se/publication/148533?lang=en) for information about how the original dataset was created and considerations for using the data.

**The following changes have been made to the original dataset**:

- Changed header names.
- Normalized the large web-acquired corpus name to "SweWaC" in the `source` field.
- Set the relative frequency of manual entries to null rather than 1000000.
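The relation between `raw_frequency` and `relative_frequency` can be checked directly. Using the rounded corpus size of 114 million words, the computed value only approximates the listed 1033.61 (the exact SweWaC token count is slightly different):

```python
# Sanity check of the frequency fields for the sample "dag" row.
# 114 million is the rounded SweWaC size, so the result is approximate.
CORPUS_SIZE = 114_000_000
raw_frequency = 117_835
relative_frequency = raw_frequency / CORPUS_SIZE * 1_000_000
print(round(relative_frequency, 2))  # -> 1033.64 (listed value: 1033.61)
```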
## Additional Information ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0) ### Citation Information Please cite the authors if you use this dataset in your work: ```bibtex @article{Kilgarriff2013, doi = {10.1007/s10579-013-9251-2}, url = {https://doi.org/10.1007/s10579-013-9251-2}, year = {2013}, month = sep, publisher = {Springer Science and Business Media {LLC}}, volume = {48}, number = {1}, pages = {121--163}, author = {Adam Kilgarriff and Frieda Charalabopoulou and Maria Gavrilidou and Janne Bondi Johannessen and Saussan Khalil and Sofie Johansson Kokkinakis and Robert Lew and Serge Sharoff and Ravikiran Vadlapudi and Elena Volodina}, title = {Corpus-based vocabulary lists for language learners for nine languages}, journal = {Language Resources and Evaluation} } ``` ### Contributions Thanks to [@spraakbanken](https://github.com/spraakbanken) for creating this dataset and to [@codesue](https://github.com/codesue) for adding it.
gokulraja17/rice-thermal
gokulraja17
2022-09-30T19:39:46Z
19
0
null
[ "region:us" ]
2022-09-30T19:39:46Z
2022-09-30T19:39:37.000Z
2022-09-30T19:39:37
Entry not found
YaYaB/onepiece-blip-captions
YaYaB
2022-10-05T10:08:34Z
19
6
null
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:YaYaB/onepiece-blip-captions", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
2022-10-05T10:08:34Z
2022-10-05T08:53:42.000Z
2022-10-05T08:53:42
--- license: cc-by-nc-sa-4.0 annotations_creators: - machine-generated language: - en language_creators: - other multilinguality: - monolingual pretty_name: 'One Piece BLIP captions' size_categories: - n<1K source_datasets: - YaYaB/onepiece-blip-captions tags: [] task_categories: - text-to-image task_ids: [] --- # Disclaimer This was inspired from https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions # Dataset Card for One Piece BLIP captions _Dataset used to train [One Piece text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_ BLIP generated captions for One piece images collected from the web. Original images were obtained from [Anime Characters](https://www.animecharactersdatabase.com) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP). For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided. ## Examples ![pk1.jpg](https://ami.animecharactersdatabase.com/uploads/chars/11076-782139445.jpg) > a man in a straw hat ![pk10.jpg](https://www.animecharactersdatabase.com/uploads/chars/5457-1977266515.png) > a man in a green coat holding two swords ![pk100.jpg](https://ami.animecharactersdatabase.com/uploads/chars/12602-925960129.jpg) > a man with red hair and a black coat ## Citation If you use this dataset, please cite it as: ``` @misc{yayab2022onepiece, author = {YaYaB}, title = {One Piece BLIP captions}, year={2022}, howpublished= {\url{https://huggingface.co/datasets/YaYaB/onepiece-blip-captions/}} } ```
arbml/MedicalCorpus
arbml
2022-11-03T13:53:51Z
19
0
null
[ "region:us" ]
2022-11-03T13:53:51Z
2022-10-05T13:29:35.000Z
2022-10-05T13:29:35
Entry not found
olm/olm-CC-MAIN-2022-33-sampling-ratio-0.20
olm
2022-11-04T17:14:03Z
19
1
null
[ "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "language:en", "pretraining", "language modelling", "common crawl", "web", "region:us" ]
2022-11-04T17:14:03Z
2022-10-06T06:53:07.000Z
2022-10-06T06:53:07
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM August 2022 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---

# Dataset Card for OLM August 2022 Common Crawl

Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 20% of the August 2022 Common Crawl snapshot.

Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`.
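One simple way to remove such outliers is to keep only timestamps inside a plausible window; the bounds below (the web's early days up to just after the August 2022 crawl) are a judgment call, not part of the dataset:

```python
from datetime import datetime, timezone

# Illustrative outlier filter for `last_modified_timestamp`. Servers can
# return arbitrary Last-Modified headers, so clamp to a plausible window.
LOWER = datetime(1994, 1, 1, tzinfo=timezone.utc)
UPPER = datetime(2022, 9, 1, tzinfo=timezone.utc)

def plausible(ts: datetime) -> bool:
    return LOWER <= ts <= UPPER

print(plausible(datetime(2021, 5, 17, tzinfo=timezone.utc)))  # -> True
print(plausible(datetime(2038, 1, 19, tzinfo=timezone.utc)))  # -> False
```

With the Hugging Face `datasets` library this predicate could be applied via `dataset.filter`.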
tomekkorbak/detoxify-pile-chunk3-4900000-4950000
tomekkorbak
2022-10-06T19:50:30Z
19
0
null
[ "region:us" ]
2022-10-06T19:50:30Z
2022-10-06T19:50:22.000Z
2022-10-06T19:50:22
Entry not found
rogerdehe/xfund
rogerdehe
2022-10-12T12:42:35Z
19
0
null
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "language:de", "language:es", "language:fr", "language:it", "language:ja", "license:other", "layoutlmv3", "xfund", "funsd", "region:us" ]
2022-10-12T12:42:35Z
2022-10-09T08:22:00.000Z
2022-10-09T08:22:00
---
annotations_creators:
- found
language_creators:
- found
task_categories:
- text-classification
tags:
- layoutlmv3
- xfund
- funsd
language:
- de
- es
- fr
- it
- ja
license:
- other
multilinguality:
- multilingual
---

# XFUND dataset

See more details at [the XFUND repository](https://github.com/doc-analysis/XFUND).

### Citation Information

```bibtex
@inproceedings{xu-etal-2022-xfund,
  title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
  author = "Xu, Yiheng and Lv, Tengchao and Cui, Lei and Wang, Guoxin and Lu, Yijuan and Florencio, Dinei and Zhang, Cha and Wei, Furu",
  booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
  month = may,
  year = "2022",
  address = "Dublin, Ireland",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2022.findings-acl.253",
  doi = "10.18653/v1/2022.findings-acl.253",
  pages = "3214--3224",
  abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
```
[ -0.2833155691623688, -0.5728512406349182, 0.4233150780200958, 0.3361724019050598, -0.1769886314868927, 0.11256971210241318, -0.30810150504112244, -0.54996657371521, 0.10237280279397964, 0.3326739966869354, -0.5742571353912354, -0.591865599155426, -0.5992199182510376, -0.11117865145206451, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
allenai/cochrane_dense_max
allenai
2022-11-18T19:41:49Z
19
1
multi-document-summarization
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "lang...
2022-11-18T19:41:49Z
2022-10-12T13:42:35.000Z
2022-10-12T13:42:35
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-MS^2 - extended|other-Cochrane task_categories: - summarization - text2text-generation paperswithcode_id: multi-document-summarization pretty_name: MSLR Shared Task --- This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __dense__ retriever. The retrieval pipeline used: - __query__: The `target` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7790 | 0.4487 | 0.1959 | 0.6268 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7856 | 0.4424 | 0.1995 | 0.6433 | Retrieval results on the `test` set: N/A. Test set is blind so we do not have any queries.
[ -0.09263371676206589, -0.1730441451072693, 0.287810355424881, 0.266133189201355, -0.19174295663833618, -0.22298112511634827, -0.1490577906370163, -0.005112430080771446, 0.39566749334335327, 0.5295099020004272, -0.5098040103912354, -0.6656433939933777, -0.8418945670127869, 0.232322216033935...
null
null
null
null
null
null
null
null
null
null
null
null
null
umair894/rvl_cdip_100_examples_per_class
umair894
2022-10-13T09:52:58Z
19
0
null
[ "region:us" ]
2022-10-13T09:52:58Z
2022-10-13T09:52:52.000Z
2022-10-13T09:52:52
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
autoevaluate/autoeval-eval-phpthinh__examplei-all-929d48-1748861028
autoevaluate
2022-10-13T16:36:34Z
19
0
null
[ "autotrain", "evaluation", "region:us" ]
2022-10-13T16:36:34Z
2022-10-13T15:48:19.000Z
2022-10-13T15:48:19
--- type: predictions tags: - autotrain - evaluation datasets: - phpthinh/examplei eval_info: task: text_zero_shot_classification model: bigscience/bloom-560m metrics: ['f1'] dataset_name: phpthinh/examplei dataset_config: all dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-560m * Dataset: phpthinh/examplei * Config: all * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model.
[ -0.25855931639671326, -0.3518945574760437, 0.5161019563674927, 0.13795509934425354, -0.12268412113189697, -0.13853058218955994, -0.03883346542716026, -0.4257979989051819, 0.05250687897205353, 0.34062114357948303, -0.9517399072647095, -0.32040518522262573, -0.6791965365409851, 0.00843813363...
null
null
null
null
null
null
null
null
null
null
null
null
null
suresh-subramanian/crowdsourced-movieposter-demo
suresh-subramanian
2022-10-23T02:09:05Z
19
0
null
[ "region:us" ]
2022-10-23T02:09:05Z
2022-10-14T19:18:43.000Z
2022-10-14T19:18:43
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
arbml/BRAD
arbml
2022-10-14T19:38:36Z
19
0
null
[ "region:us" ]
2022-10-14T19:38:36Z
2022-10-14T19:38:23.000Z
2022-10-14T19:38:23
--- dataset_info: features: - name: review_id dtype: string - name: book_id dtype: string - name: user_id dtype: string - name: review dtype: string - name: label dtype: class_label: names: 0: 1 1: 2 2: 3 3: 4 4: 5 splits: - name: train num_bytes: 407433642 num_examples: 510598 download_size: 211213150 dataset_size: 407433642 --- # Dataset Card for "BRAD" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
[ -0.605553925037384, -0.40223556756973267, 0.2237139195203781, 0.2995087802410126, -0.14406155049800873, 0.16990679502487183, 0.19978870451450348, -0.1874369978904724, 0.8275765180587769, 0.4978109300136566, -0.9173403978347778, -0.6985421180725098, -0.6511964201927185, -0.1383587121963501,...
null
null
null
null
null
null
null
null
null
null
null
null
null
tiagoblima/nilc-school-books
tiagoblima
2022-11-13T01:03:20Z
19
0
null
[ "license:mit", "region:us" ]
2022-11-13T01:03:20Z
2022-10-14T21:09:32.000Z
2022-10-14T21:09:32
--- license: mit dataset_info: features: - name: text_id dtype: int64 - name: text dtype: string - name: level dtype: string splits: - name: test num_bytes: 1276559.048483246 num_examples: 8321 - name: train num_bytes: 4595060.28364021 num_examples: 29952 - name: validation num_bytes: 510715.6678765444 num_examples: 3329 download_size: 3645953 dataset_size: 6382335.0 --- ## Textual Complexity Corpus for School Grades of the Brazilian Educational System The corpus includes excerpts from: textbooks, whose full list is presented below; news from the "Para Seu Filho Ler" (PSFL) section of the Zero Hora newspaper, which presents some news from the same Zero Hora corpus but written for children aged 8 to 11; SAEB exams; Portuguese-language digital books from Wikilivros; and Enem exams from 2015, 2016 and 2017. All of the material, in Portuguese, was made available to evaluate the textual complexity (readability) task. Complete list of the textbooks and their original sources This corpus is part of the resources of my PhD in the Natural Language Processing area, carried out at the Núcleo Interinstitucional de Linguística Computacional at USP São Carlos. This work was supervised by Prof. Sandra Maria Aluísio. http://nilc.icmc.usp.br @inproceedings{mgazzola19, title={Predição da Complexidade Textual de Recursos Educacionais Abertos em Português}, author={Murilo Gazzola, Sidney Evaldo Leal, Sandra Maria Aluisio}, booktitle={Proceedings of the Brazilian Symposium in Information and Human Language Technology}, year={2019} }
[ -0.2807428240776062, -1.0531059503555298, 0.20569083094596863, 0.4471614360809326, -0.3674120604991913, 0.5428787469863892, -0.10020001232624054, -0.7535787224769592, 0.40146204829216003, 0.40685662627220154, -0.2890244126319885, -0.7756848335266113, -0.6527050733566284, 0.5977851748466492...
null
null
null
null
null
null
null
null
null
null
null
null
null
Harsit/xnli2.0_train_english
Harsit
2022-10-15T09:16:57Z
19
0
null
[ "region:us" ]
2022-10-15T09:16:57Z
2022-10-15T09:16:27.000Z
2022-10-15T09:16:27
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
Harsit/xnli2.0_train_french
Harsit
2023-10-03T07:37:59Z
19
0
null
[ "language:fr", "region:us" ]
2023-10-03T07:37:59Z
2022-10-15T09:17:22.000Z
2022-10-15T09:17:22
--- language: - fr ---
[ -0.12853392958641052, -0.18616779148578644, 0.6529127955436707, 0.49436280131340027, -0.19319361448287964, 0.23607419431209564, 0.36072003841400146, 0.050563063472509384, 0.579365611076355, 0.7400140762329102, -0.6508104205131531, -0.23783954977989197, -0.7102249264717102, -0.0478260256350...
null
null
null
null
null
null
null
null
null
null
null
null
null
Harsit/xnli2.0_train_spanish
Harsit
2022-10-15T09:21:42Z
19
1
null
[ "region:us" ]
2022-10-15T09:21:42Z
2022-10-15T09:21:11.000Z
2022-10-15T09:21:11
Entry not found
[ -0.3227647542953491, -0.22568407654762268, 0.8622258901596069, 0.4346148371696472, -0.5282984972000122, 0.7012965083122253, 0.7915717959403992, 0.07618629932403564, 0.7746022343635559, 0.2563222348690033, -0.785281777381897, -0.22573848068714142, -0.9104482531547546, 0.5715669393539429, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
CG80499/Inverse-scaling-test
CG80499
2022-10-16T11:33:06Z
19
0
null
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "license:bigscience-openrail-m", "region:us" ]
2022-10-16T11:33:06Z
2022-10-15T11:47:40.000Z
2022-10-15T11:47:40
--- license: bigscience-openrail-m task_categories: - multiple-choice - question-answering - zero-shot-classification train-eval-index: - config: inverse-scaling-test task: text-generation task_id: text_zero_shot_classification splits: eval_split: train col_mapping: prompt: text classes: classes answer_index: target ---
[ -0.12853392958641052, -0.18616779148578644, 0.6529127955436707, 0.49436280131340027, -0.19319361448287964, 0.23607419431209564, 0.36072003841400146, 0.050563063472509384, 0.579365611076355, 0.7400140762329102, -0.6508104205131531, -0.23783954977989197, -0.7102249264717102, -0.0478260256350...
null
null
null
null
null
null
null
null
null
null
null
null
null
nielsr/balloon
nielsr
2022-10-15T13:02:05Z
19
1
null
[ "region:us" ]
2022-10-15T13:02:05Z
2022-10-15T12:59:06.000Z
2022-10-15T12:59:06
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 30808803.0 num_examples: 61 - name: validation num_bytes: 8076058.0 num_examples: 13 download_size: 38814125 dataset_size: 38884861.0 --- # Dataset Card for "balloon" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
[ -0.7152652144432068, -0.2579745948314667, -0.25280898809432983, 0.26552289724349976, -0.15409815311431885, 0.007976789958775043, 0.15069344639778137, -0.16207104921340942, 0.8499738574028015, 0.4403226971626282, -0.6538819670677185, -0.6044318079948425, -0.5587226748466492, -0.230944409966...
null
null
null
null
null
null
null
null
null
null
null
null
null
Alfitauwu/Pruebitaaaxd
Alfitauwu
2022-10-15T23:48:35Z
19
0
null
[ "license:openrail", "region:us" ]
2022-10-15T23:48:35Z
2022-10-15T23:48:14.000Z
2022-10-15T23:48:14
--- license: openrail ---
[ -0.12853367626667023, -0.18616794049739838, 0.6529126763343811, 0.4943627417087555, -0.19319313764572144, 0.23607443273067474, 0.36071979999542236, 0.05056338757276535, 0.5793654322624207, 0.7400138974189758, -0.6508103013038635, -0.23783987760543823, -0.710224986076355, -0.047825977206230...
null
null
null
null
null
null
null
null
null
null
null
null
null
vattvar/polusa_capstone
vattvar
2022-10-17T17:36:59Z
19
0
null
[ "region:us" ]
2022-10-17T17:36:59Z
2022-10-17T17:22:29.000Z
2022-10-17T17:22:29
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
arbml/Sanadset
arbml
2022-10-25T17:17:38Z
19
0
null
[ "region:us" ]
2022-10-25T17:17:38Z
2022-10-25T17:15:07.000Z
2022-10-25T17:15:07
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
arbml/Arabic_News
arbml
2022-10-26T00:19:15Z
19
0
null
[ "region:us" ]
2022-10-26T00:19:15Z
2022-10-26T00:14:49.000Z
2022-10-26T00:14:49
Entry not found
[ -0.3227649927139282, -0.225684255361557, 0.862226128578186, 0.43461498618125916, -0.5282987952232361, 0.7012963891029358, 0.7915717363357544, 0.07618629932403564, 0.7746025919914246, 0.2563219666481018, -0.7852816581726074, -0.2257382869720459, -0.9104480743408203, 0.5715669393539429, -0...
null
null
null
null
null
null
null
null
null
null
null
null
null
NeelNanda/counterfact-tracing
NeelNanda
2022-11-05T15:19:43Z
19
6
null
[ "arxiv:2211.00593", "region:us" ]
2022-11-05T15:19:43Z
2022-11-05T15:09:51.000Z
2022-11-05T15:09:51
--- dataset_info: features: - name: relation dtype: string - name: relation_prefix dtype: string - name: relation_suffix dtype: string - name: prompt dtype: string - name: relation_id dtype: string - name: target_false_id dtype: string - name: target_true_id dtype: string - name: target_true dtype: string - name: target_false dtype: string - name: subject dtype: string splits: - name: train num_bytes: 3400668 num_examples: 21919 download_size: 1109314 dataset_size: 3400668 --- # Dataset Card for "counterfact-tracing" This is adapted from the counterfact dataset from the excellent [ROME paper](https://rome.baulab.info/) from David Bau and Kevin Meng. This is a dataset of 21919 factual relations, formatted as `data["prompt"]==f"{data['relation_prefix']}{data['subject']}{data['relation_suffix']}"`. Each has two responses `data["target_true"]` and `data["target_false"]` which is intended to go immediately after the prompt. The dataset was originally designed for memory editing in models. I made this for a research project doing mechanistic interpretability of how models recall factual knowledge, building on their causal tracing technique, and so stripped their data down to the information relevant to causal tracing. I also prepended spaces where relevant so that the subject and targets can be properly tokenized as is (spaces are always prepended to targets, and are prepended to subjects unless the subject is at the start of a sentence). Each fact has both a true and false target. I recommend measuring the logit *difference* between the true and false target (at least, if it's a single token target!), so as to control for eg the parts of the model which identify that it's supposed to be giving a fact of this type at all. (Idea inspired by the excellent [Interpretability In the Wild](https://arxiv.org/abs/2211.00593) paper).
[ -0.3202056586742401, -0.9593599438667297, 0.7378925681114197, 0.006827862001955509, -0.33828186988830566, -0.4503774642944336, 0.13797380030155182, -0.25178059935569763, 0.24066662788391113, 0.40482333302497864, -0.8237425088882446, -0.31973493099212646, -0.23264406621456146, -0.2081227600...
null
null
null
null
null
null
null
null
null
null
null
null
null
Conrad747/lg-ner
Conrad747
2023-03-30T13:44:30Z
19
0
null
[ "region:us" ]
2023-03-30T13:44:30Z
2022-11-09T08:19:08.000Z
2022-11-09T08:19:08
Entry not found
[ -0.32276472449302673, -0.22568407654762268, 0.8622258901596069, 0.4346148371696472, -0.5282984972000122, 0.7012965679168701, 0.7915717363357544, 0.07618629932403564, 0.7746022939682007, 0.2563222646713257, -0.785281777381897, -0.22573848068714142, -0.9104482531547546, 0.5715669393539429, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
reducedcarpet/cora
reducedcarpet
2022-11-12T19:30:34Z
19
0
null
[ "region:us" ]
2022-11-12T19:30:34Z
2022-11-12T19:29:36.000Z
2022-11-12T19:29:36
Entry not found
[ -0.32276472449302673, -0.22568407654762268, 0.8622258901596069, 0.4346148371696472, -0.5282984972000122, 0.7012965679168701, 0.7915717363357544, 0.07618629932403564, 0.7746022939682007, 0.2563222646713257, -0.785281777381897, -0.22573848068714142, -0.9104482531547546, 0.5715669393539429, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
nlhappy/CLUE-NER
nlhappy
2022-11-19T15:43:09Z
19
1
null
[ "license:mit", "region:us" ]
2022-11-19T15:43:09Z
2022-11-19T15:36:25.000Z
2022-11-19T15:36:25
--- license: mit ---
[ -0.12853392958641052, -0.18616779148578644, 0.6529127955436707, 0.49436280131340027, -0.19319361448287964, 0.23607419431209564, 0.36072003841400146, 0.050563063472509384, 0.579365611076355, 0.7400140762329102, -0.6508104205131531, -0.23783954977989197, -0.7102249264717102, -0.0478260256350...
null
null
null
null
null
null
null
null
null
null
null
null
null
kerpr/cc_openwebtext
kerpr
2022-12-01T03:06:09Z
19
0
null
[ "region:us" ]
2022-12-01T03:06:09Z
2022-11-21T03:58:20.000Z
2022-11-21T03:58:20
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
DTU54DL/common3k-train
DTU54DL
2022-11-21T06:29:04Z
19
0
acronym-identification
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-11-21T06:29:04Z
2022-11-21T06:18:09.000Z
2022-11-21T06:18:09
--- annotations_creators: - expert-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual paperswithcode_id: acronym-identification pretty_name: Acronym Identification Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - token-classification-other-acronym-identification train-eval-index: - col_mapping: labels: tags tokens: tokens config: default splits: eval_split: test task: token-classification task_id: entity_extraction --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
[ -0.47841677069664, -0.5084842443466187, 0.14602938294410706, 0.278889000415802, -0.21702472865581512, 0.24832050502300262, -0.3366999328136444, -0.3758932054042816, 0.6720380783081055, 0.6457639932632446, -0.9167346358299255, -1.2200127840042114, -0.7551794052124023, 0.07273735105991364, ...
null
null
null
null
null
null
null
null
null
null
null
null
null
atokforps/gf1-preprocessed
atokforps
2022-11-21T20:26:34Z
19
0
null
[ "region:us" ]
2022-11-21T20:26:34Z
2022-11-21T17:15:24.000Z
2022-11-21T17:15:24
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622264862060547, 0.43461528420448303, -0.52829909324646, 0.7012971639633179, 0.7915720343589783, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104477167129517, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
datablations/c4-filter-small
datablations
2023-01-17T18:52:58Z
19
0
null
[ "region:us" ]
2023-01-17T18:52:58Z
2022-11-24T11:43:28.000Z
2022-11-24T11:43:28
--- dataset_info: features: - name: text dtype: string - name: timestamp dtype: string - name: url dtype: string - name: meta struct: - name: perplexity_score dtype: float64 - name: text_length dtype: int64 - name: domain dtype: 'null' - name: perplexity dtype: float64 - name: dup_ratio dtype: float64 - name: pairs sequence: sequence: int64 - name: repetitions sequence: binary - name: cluster sequence: int64 splits: - name: train num_bytes: 236459743 num_examples: 100000 download_size: 140935431 dataset_size: 236459743 --- # Dataset Card for "small-c4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
[ -0.7538449764251709, -0.11057340353727341, 0.4205944836139679, 0.04191306233406067, -0.3134489953517914, -0.020516976714134216, 0.10780032724142075, -0.3489367663860321, 0.8589019775390625, 0.33778324723243713, -0.8866158127784729, -0.6863272786140442, -0.4547539949417114, -0.0622104890644...
null
null
null
null
null
null
null
null
null
null
null
null
null
nlphuji/vasr
nlphuji
2022-12-30T19:39:46Z
19
0
vasr
[ "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "commonsense-reasoning", "visual-reasoning", "arxiv:2212.04542", "region:us" ]
2022-12-30T19:39:46Z
2022-11-24T21:05:27.000Z
2022-11-24T21:05:27
--- annotations_creators: - crowdsourced language: - en language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual paperswithcode_id: vasr pretty_name: VASR size_categories: - 1K<n<10K source_datasets: - original tags: - commonsense-reasoning - visual-reasoning task_ids: [] extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files." --- # Dataset Card for VASR - [Dataset Description](#dataset-description) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [How to Submit Predictions?](#how-to-submit-predictions?) - [Colab notebook code for VASR evaluation with ViT](#colab-notebook-code-for-vasr-evaluation-with-clip) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description VASR is a challenging dataset for evaluating computer vision commonsense reasoning abilities. Given a triplet of images, the task is to select an image candidate B' that completes the analogy (A to A' is like B to what?). Unlike previous work on visual analogy that focused on simple image transformations, we tackle complex analogies requiring understanding of scenes. Our experiments demonstrate that state-of-the-art models struggle with carefully chosen distractors (~53%, compared to 90% human accuracy). - **Homepage:** https://vasr-dataset.github.io/ - **Colab** https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI - **Repository:** https://github.com/vasr-dataset/vasr/tree/main/experiments - **Paper:** https://arxiv.org/abs/2212.04542 - **Leaderboard:** https://vasr-dataset.github.io/ - **Point of Contact:** yonatan.bitton@mail.huji.ac.il ## Supported Tasks and Leaderboards https://vasr.github.io/leaderboard. https://paperswithcode.com/dataset/vasr. ## How to Submit Predictions? To submit predictions, please send a prediction CSV file to vasr.benchmark@gmail.com / yonatan.bitton@mail.huji.ac.il. The prediction file should include a "B'" column with the predicted candidate name that best solves the analogy, and an index from 1 to 4 indicating the location of the predicted candidate in the given candidate list. An example prediction file is available [HERE](https://drive.google.com/file/d/1NvBNdvlWmEOYjIVi2xdmQ_tUm-TXo42u/view?usp=share_link). A submission is allowed once a week, and you will receive a response within a week. ## Colab notebook code for VASR evaluation with ViT https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI ### Languages English. ## Dataset Structure ### Data Fields A: datasets.Image() - the first input image, **A**:A'. A': datasets.Image() - the second input image, different from A in a single key, A:**A'**. B: datasets.Image() - the third input image, has the same different item as A, **B**:B'. B': datasets.Image() - the fourth image, which is the analogy solution. Different from B in a single key (the same different one as in A:A'), B:**B'**. Hidden in the test set. candidates_images: [datasets.Image()] - a list of candidate image solutions to the analogy. label: datasets.Value("int64") - the index of the ground-truth solution. Hidden in the test set. candidates: [datasets.Value("string")] - a list of candidate string solutions to the analogy. ### Data Splits There are three splits, TRAIN, VALIDATION, and TEST. Since there are four candidates and one solution, random chance is 25%. ## Dataset Creation We leverage situation recognition annotations and the CLIP model to generate a large set of 500k candidate analogies. There are two types of labels: - Silver labels, obtained from the automatic generation. - Gold labels, obtained from human annotations over the silver annotations. In the huggingface version we provide only the gold labeled dataset. Please refer to the project website download page if you want to download the silver labels version. ### Annotations #### Annotation process We paid Amazon Mechanical Turk Workers to solve analogies, five annotators for each analogy. Workers were asked to select the image that best solves the analogy. The resulting dataset is composed of the 3,820 instances agreed upon with a majority vote of at least 3 annotators, which was obtained in 93% of the cases. ## Considerations for Using the Data All associations were obtained with human annotators. All used images are from the imSitu dataset (http://imsitu.org/) Using this data is allowed for academic research alone. ### Licensing Information CC-By 4.0 ### Citation Information NA
[ -0.6007764339447021, -0.5105012655258179, 0.3542179465293884, -0.04046610742807388, -0.315104216337204, -0.3017033636569977, -0.08565497398376465, -0.4663778245449066, 0.16930276155471802, 0.2935684621334076, -0.4146791398525238, -0.3106100857257843, -0.37398967146873474, 0.073739424347877...
null
null
null
null
null
null
null
null
null
null
null
null
null
Klarks/naruto
Klarks
2022-11-29T07:32:15Z
19
0
null
[ "license:afl-3.0", "region:us" ]
2022-11-29T07:32:15Z
2022-11-29T07:31:04.000Z
2022-11-29T07:31:04
--- license: afl-3.0 ---
[ -0.1285335123538971, -0.1861683875322342, 0.6529128551483154, 0.49436232447624207, -0.19319400191307068, 0.23607441782951355, 0.36072009801864624, 0.05056373029947281, 0.5793656706809998, 0.7400146722793579, -0.650810182094574, -0.23784008622169495, -0.7102247476577759, -0.0478255338966846...
null
null
null
null
null
null
null
null
null
null
null
null
null
optimum/documentation-images
optimum
2023-10-05T14:52:23Z
19
2
null
[ "region:us" ]
2023-10-05T14:52:23Z
2022-11-29T12:47:16.000Z
2022-11-29T12:47:16
This dataset contains images used in the documentation of HuggingFace's Optimum library.
[ -0.8142774701118469, -0.35879677534103394, 0.188999205827713, 0.24012236297130585, -0.16699084639549255, -0.2639918625354767, 0.038712672889232635, -0.2886550724506378, 0.6354681253433228, 0.7690992951393127, -0.7943148016929626, -0.5629271268844604, -0.37430107593536377, 0.129904791712760...
null
null
null
null
null
null
null
null
null
null
null
null
null
MCG-NJU/MultiSports
MCG-NJU
2022-12-13T07:47:16Z
19
11
null
[ "task_categories:image-classification", "task_categories:object-detection", "task_categories:other", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "li...
2022-12-13T07:47:16Z
2022-12-06T08:32:53.000Z
2022-12-06T08:32:53
--- annotations_creators: - crowdsourced language: - en language_creators: - expert-generated license: - cc-by-nc-4.0 multilinguality: - monolingual pretty_name: MultiSports size_categories: [] source_datasets: - original tags: - video - action detection - spatial-temporal action localization task_categories: - image-classification - object-detection - other task_ids: - multi-class-image-classification extra_gated_heading: "Acknowledge license to accept the repository" extra_gated_prompt: "This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License" extra_gated_fields: I agree to use this dataset for non-commerical use ONLY: checkbox --- # Dataset Card for MultiSports ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://deeperaction.github.io/datasets/multisports.html - **Repository:** https://github.com/MCG-NJU/MultiSports - **Paper:** https://arxiv.org/abs/2105.07404 - **Leaderboard:** 
https://paperswithcode.com/dataset/multisports - **Point of Contact:** mailto: runyu_he@smail.nju.edu.cn ### Dataset Summary Spatio-temporal action localization is an important and challenging problem in video understanding. Previous action detection benchmarks are limited in aspects of small numbers of instances in a trimmed video or low-level atomic actions. MultiSports is a multi-person dataset of spatio-temporal localized sports actions. Please refer to [this paper](https://arxiv.org/abs/2105.07404) for more details. Please refer to [this repository](https://github.com/MCG-NJU/MultiSports) for evaluation. ### Supported Tasks and Leaderboards - `Spatial-temporal action localization` Details about evaluation can be found in the [GitHub Repository](https://github.com/mcG-NJU/MultiSports). Previous challenge results can be found in [this page](https://deeperaction.github.io/results/index.html) and [this CodaLab challenge](https://codalab.lisn.upsaclay.fr/competitions/3736). ### Languages The class labels in the dataset are in English. ## Dataset Structure ### Data Instances Demo is available on [dataset homepage](https://deeperaction.github.io/datasets/multisports.html). The dataset contains ```rawframes.tar``` and ```multisports_GT.pkl```. The GT pkl file is a dictionary with the following structure: ``` { 'labels': ['label1', 'label2', ...], 'train_videos': [['train_vid_1', 'train_vid_2', ...]], 'test_videos': [['test_vid_1', 'test_vid_2', ...]], 'nframes': { 'vid_1': nframes_1, 'vid_2': nframes_2, ... }, 'resolution': { 'vid_1': resolution_1, 'vid_2': resolution_2, ... }, 'gttubes': { 'vid_1': { 'label_1': [tube_1, tube_2, ...], 'label_2': [tube_1, tube_2, ...], ... } ... } } ``` Here a ```tube``` is a ```numpy.ndarray``` with ```nframes``` rows and 5 columns ```<frame number> <x1> <y1> <x2> <y2>```. ### Data Fields Raw frames are organized according to their sport category. The pickle file of GT contains the following fields. 
- labels: list of labels - train_videos: a list with one split element containing the list of training videos - test_videos: a list with one split element containing the list of validation videos - nframes: dictionary that gives the number of frames for each video - resolution: dictionary that outputs a tuple ```(h,w)``` of the resolution for each video - gttubes: dictionary that contains the gt tubes for each video. Gt tubes are stored as a dictionary that maps each label index to a list of tubes. A ```tube``` is a ```numpy.ndarray``` with ```nframes``` rows and 5 columns ```<frame number> <x1> <y1> <x2> <y2>```. Please note that the label index starts from 0 and the frame index starts from 1. For the label index ```i```, the label name is ```labels[i]```. <details> <summary> Click here to see the full list of MultiSports class labels mapping: </summary> |id|Class| |--|-----| | 0 | aerobic push up | | 1 | aerobic explosive push up | | 2 | aerobic explosive support | | 3 | aerobic leg circle | | 4 | aerobic helicopter | | 5 | aerobic support | | 6 | aerobic v support | | 7 | aerobic horizontal support | | 8 | aerobic straight jump | | 9 | aerobic illusion | | 10 | aerobic bent leg(s) jump | | 11 | aerobic pike jump | | 12 | aerobic straddle jump | | 13 | aerobic split jump | | 14 | aerobic scissors leap | | 15 | aerobic kick jump | | 16 | aerobic off axis jump | | 17 | aerobic butterfly jump | | 18 | aerobic split | | 19 | aerobic turn | | 20 | aerobic balance turn | | 21 | volleyball serve | | 22 | volleyball block | | 23 | volleyball first pass | | 24 | volleyball defend | | 25 | volleyball protect | | 26 | volleyball second pass | | 27 | volleyball adjust | | 28 | volleyball save | | 29 | volleyball second attack | | 30 | volleyball spike | | 31 | volleyball dink | | 32 | volleyball no offensive attack | | 33 | football shoot | | 34 | football long pass | | 35 | football short pass | | 36 | football through pass | | 37 | football cross | | 38 | football dribble | 
| 39 | football trap | | 40 | football throw | | 41 | football diving | | 42 | football tackle | | 43 | football steal | | 44 | football clearance | | 45 | football block | | 46 | football press | | 47 | football aerial duels | | 48 | basketball pass | | 49 | basketball drive | | 50 | basketball dribble | | 51 | basketball 3-point shot | | 52 | basketball 2-point shot | | 53 | basketball free throw | | 54 | basketball block | | 55 | basketball offensive rebound | | 56 | basketball defensive rebound | | 57 | basketball pass steal | | 58 | basketball dribble steal | | 59 | basketball interfere shot | | 60 | basketball pick-and-roll defensive | | 61 | basketball sag | | 62 | basketball screen | | 63 | basketball pass-inbound | | 64 | basketball save | | 65 | basketball jump ball | </details> ### Data Splits | |train |validation| test | |-------------|------:|---------:|------:| |# of tubes |28514 |10116 | - | *GT for test split is not provided. Please wait for the new competition to start. Information will be updated in [dataset homepage](https://deeperaction.github.io/datasets/multisports.html).* ## Dataset Creation ### Curation Rationale Spatio-temporal action detection is an important and challenging problem in video understanding. Previous action detection benchmarks are limited in aspects of small numbers of instances in a trimmed video or low-level atomic actions. ### Source Data #### Initial Data Collection and Normalization > After choosing the four sports, we search for their competition videos by querying the name of sports like volleyball and the name of competition levels like Olympics and World Cup on YouTube, and then download videos from top search results. For each video, we only select high-resolution, e.g. 720P or 1080P, competition records and then manually cut them into clips of minutes, with less shot changes in each clip and to be more suitable for action detection. #### Who are the source language producers? 
The annotators of action categories and temporal boundaries are professional athletes of the corresponding sports. Please refer to [the paper](https://arxiv.org/abs/2105.07404) for more information. ### Annotations #### Annotation process 1. (FIRST STAGE) A team of professional athletes generates records of the action label, the starting and ending frame, and the person box in the starting frame, which ensures the efficiency, accuracy and consistency of our annotation results. 2. At least one annotator with domain knowledge double-checks the annotations, corrects wrong or inaccurate ones and adds missing annotations. 3. (SECOND STAGE) With the help of the FCOT tracking algorithm, a team of crowd-sourced annotators adjusts the bounding boxes of the tracking results at each frame for each record. 4. Each instance is double-checked by playing it at 5 fps, and inaccurate bounding boxes are manually corrected. #### Who are the annotators? For the first stage, annotators are professional athletes. For the second stage, annotators are common volunteers. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Authors of [this paper](https://arxiv.org/abs/2105.07404) - Yixuan Li - Lei Chen - Runyu He - Zhenzhi Wang - Gangshan Wu - Limin Wang ### Licensing Information <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>. 
### Citation Information If you find this dataset useful, please cite as ``` @InProceedings{Li_2021_ICCV, author = {Li, Yixuan and Chen, Lei and He, Runyu and Wang, Zhenzhi and Wu, Gangshan and Wang, Limin}, title = {MultiSports: A Multi-Person Video Dataset of Spatio-Temporally Localized Sports Actions}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {13536-13545} } ``` ### Contributions Thanks to [@Judie1999](https://github.com/Judie1999) for adding this dataset.
[ -0.588257908821106, -0.44085484743118286, 0.14356088638305664, 0.07309548556804657, -0.2203136533498764, 0.3329370319843292, -0.06320780515670776, -0.15288153290748596, 0.47369739413261414, -0.07316844910383224, -0.7837209105491638, -0.7863140106201172, -0.827812135219574, 0.03895973041653...
null
null
null
null
null
null
null
null
null
null
null
null
null
sasha/australian_sea_slugs
sasha
2022-12-16T17:37:05Z
19
0
null
[ "region:us" ]
2022-12-16T17:37:05Z
2022-12-16T17:34:52.000Z
2022-12-16T17:34:52
--- dataset_info: features: - name: url dtype: string - name: image dtype: image - name: label dtype: string splits: - name: train num_bytes: 86677304.65602817 num_examples: 2107 download_size: 87406259 dataset_size: 86677304.65602817 --- # Dataset Card for "australian_sea_slugs" This is a filtered version of the [Nudibranchs of the Sunshine Coast Australia](https://www.gbif.org/dataset/ee412fa2-edc9-4c6b-91f3-ff2a02c245e0) dataset. ## Citation ``` Atlas of Living Australia (2019). Nudibranchs of the Sunshine Coast Australia. Occurrence dataset https://doi.org/10.15468/gtoiks accessed via GBIF.org on 2022-12-16. ```
[ -0.6261134743690491, -0.5104938745498657, 0.4395079016685486, -0.011615000665187836, -0.7286273837089539, -0.17477840185165405, 0.37050387263298035, -0.2218717485666275, 0.9733060002326965, 0.909881591796875, -1.102851390838623, -0.7945593595504761, -0.10271267592906952, 0.5963651537895203...
null
null
null
null
null
null
null
null
null
null
null
null
null
abirmunna/BSLWord40
abirmunna
2023-01-03T08:30:39Z
19
0
null
[ "license:creativeml-openrail-m", "doi:10.57967/hf/0245", "region:us" ]
2023-01-03T08:30:39Z
2023-01-03T07:16:05.000Z
2023-01-03T07:16:05
--- license: creativeml-openrail-m dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': Aaj '1': Basha '2': Biyog '3': Bondhu '4': Darano '5': Darao '6': Desh '7': Ekhane '8': Gun '9': Kichuta '10': Kothay '11': Onurodh '12': Shahajjo '13': She '14': Shomoi '15': Shundor '16': Sir '17': Tara '18': Tumi '19': bagh '20': bouddho '21': chamra '22': girja '23': hockey '24': jail '25': keram '26': piano '27': puru '28': shomajkollan '29': shotto splits: - name: train num_bytes: 2192638367.4 num_examples: 1200 download_size: 2042629430 dataset_size: 2192638367.4 ---
[ -0.1285335123538971, -0.1861683875322342, 0.6529128551483154, 0.49436232447624207, -0.19319400191307068, 0.23607441782951355, 0.36072009801864624, 0.05056373029947281, 0.5793656706809998, 0.7400146722793579, -0.650810182094574, -0.23784008622169495, -0.7102247476577759, -0.0478255338966846...
null
null
null
null
null
null
null
null
null
null
null
null
null
Hack90/virus_dna_dataset
Hack90
2023-08-26T13:07:54Z
19
2
null
[ "region:us" ]
2023-08-26T13:07:54Z
2023-01-08T02:21:44.000Z
2023-01-08T02:21:44
--- dataset_info: features: - name: id dtype: string - name: sequence dtype: string - name: name dtype: string - name: description dtype: string - name: features dtype: int64 - name: seq_length dtype: int64 splits: - name: train num_bytes: 6621468623 num_examples: 2602437 download_size: 2319826398 dataset_size: 6621468623 configs: - config_name: default data_files: - split: train path: data/train-* --- [Needs More Information] # Dataset Card for virus_dna_dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary A collection of full virus genome DNA; the dataset was built from NCBI data. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages DNA ## Dataset Structure ### Data Instances { 'Description' : 'NC_030848.1 Haloarcula californiae icosahedral...', 'dna_sequence' : 'TCATCTC TCTCTCT 
CTCTCTT GTTCCCG CGCCCGC CCGCCC...', 'sequence_length':'35787', 'organism_id':' AB063393.2'} ### Data Fields { 'Description' : 'this contains the description of the DNA sequence contained in the NCBI dataset', 'dna_sequence' : 'this contains the DNA sequence grouped by 7 nucleotides', 'sequence_length':'this contains the length of the DNA sequence'} ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale The goal of this dataset was to make it easier to train an LLM on virus DNA. ### Source Data #### Initial Data Collection and Normalization DNA sequences were grouped by 7 nucleotides to make them easier to tokenize. Only full genomes were selected. #### Who are the source language producers? Viruses :) ### Annotations #### Annotation process NCBI #### Who are the annotators? NCBI ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset Make it easier to train LLMs on virus DNA ### Discussion of Biases Only virus data that has been sequenced and uploaded to NCBI is contained here. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Hassan Ahmed ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
[ -0.327811598777771, -0.6638795733451843, -0.025677930563688278, -0.11456387490034103, -0.4172753691673279, 0.09618829935789108, -0.15559321641921997, -0.05108598247170448, 0.6094881296157837, 0.31854087114334106, -0.6733259558677673, -0.9599134922027588, -0.6183326840400696, 0.476450771093...
null
null
null
null
null
null
null
null
null
null
null
null
null
HeNLP/HeDC4
HeNLP
2023-04-24T06:04:29Z
19
3
null
[ "task_categories:fill-mask", "size_categories:1B<n<10B", "language:he", "arxiv:2304.11077", "region:us" ]
2023-04-24T06:04:29Z
2023-01-10T10:28:22.000Z
2023-01-10T10:28:22
--- task_categories: - fill-mask language: - he size_categories: - 1B<n<10B --- ### Dataset Summary A Hebrew Deduplicated and Cleaned Common Crawl Corpus. A thoroughly cleaned and approximately deduplicated dataset for unsupervised learning. ### Citing If you use HeDC4 in your research, please cite [HeRo: RoBERTa and Longformer Hebrew Language Models](http://arxiv.org/abs/2304.11077). ``` @article{shalumov2023hero, title={HeRo: RoBERTa and Longformer Hebrew Language Models}, author={Vitaly Shalumov and Harel Haskey}, year={2023}, journal={arXiv:2304.11077}, } ```
[ -0.12934233248233795, -0.38391807675361633, 0.1889287829399109, -0.07143616676330566, -0.21383072435855865, 0.051816899329423904, -0.27719712257385254, 0.0430142879486084, 0.12424305826425552, 0.6442586779594421, -0.27981942892074585, -1.2002158164978027, -0.5575751066207886, 0.00723810726...
null
null
null
null
null
null
null
null
null
null
null
null
null
LLukas22/lfqa_preprocessed
LLukas22
2023-01-10T14:21:56Z
19
0
null
[ "task_categories:question-answering", "task_categories:sentence-similarity", "size_categories:100K<n<1M", "language:en", "license:mit", "region:us" ]
2023-01-10T14:21:56Z
2023-01-10T13:30:52.000Z
2023-01-10T13:30:52
--- license: mit task_categories: - question-answering - sentence-similarity language: - en size_categories: - 100K<n<1M --- # Dataset Card for "lfqa_preprocessed" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb) ### Dataset Summary This is a simplified version of [vblagoje's](https://huggingface.co/vblagoje) *[lfqa_support_docs](https://huggingface.co/datasets/vblagoje/lfqa_support_docs)* and *[lfqa](https://huggingface.co/datasets/vblagoje/lfqa)* datasets. It was generated by me to have a more straightforward way to train Seq2Seq models on context-based long-form question-answering tasks. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ```json { "question": "what's the difference between a forest and a wood?", "answer": "They're used interchangeably a lot. You'll get different answers from different resources, but the ...", "context": [ "Wood is divided, according to its botanical origin, into two kinds: softwoods, ...", "Processing and products differs especially with regard to the distinction between softwood and hardwood ..." ] } ``` ### Data Fields The data fields are the same among all splits. - `question`: a `string` feature. - `answer`: a `string` feature. - `context`: a list feature containing `string` features. 
### Data Splits | name |train|validation| |----------|----:|---------:| | |226147| 3020| ## Additional Information ### Licensing Information This dataset is distributed under the MIT licence.
[ -0.44024133682250977, -0.7107387781143188, 0.3266705274581909, 0.10377085953950882, -0.16550113260746002, -0.08889064192771912, -0.02987000159919262, -0.3321220278739929, 0.24249188601970673, 0.6760793328285217, -0.9381334781646729, -0.7008883357048035, -0.13187289237976074, 0.117830805480...
null
null
null
null
null
null
null
null
null
null
null
null
null
matchbench/srprs-dbp-wd-15k-v1
matchbench
2023-01-18T11:32:12Z
19
0
null
[ "region:us" ]
2023-01-18T11:32:12Z
2023-01-12T13:06:33.000Z
2023-01-12T13:06:33
Entry not found
[ -0.3227645754814148, -0.22568479180335999, 0.8622263669967651, 0.43461522459983826, -0.52829909324646, 0.7012971639633179, 0.7915719747543335, 0.07618614286184311, 0.774603009223938, 0.2563217282295227, -0.7852813005447388, -0.22573819756507874, -0.9104475975036621, 0.5715674161911011, -...
null
null
null
null
null
null
null
null
null
null
null
null
null
NeelNanda/openwebtext-tokenized-9b
NeelNanda
2023-01-19T07:23:02Z
19
0
null
[ "region:us" ]
2023-01-19T07:23:02Z
2023-01-19T03:18:45.000Z
2023-01-19T03:18:45
--- dataset_info: features: - name: tokens sequence: uint16 splits: - name: train num_bytes: 18125188776 num_examples: 8832938 download_size: 17426592454 dataset_size: 18125188776 --- # Dataset Card for "openwebtext-tokenized-9b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
[ -0.5583269596099854, -0.293139785528183, 0.0018715113401412964, 0.35185983777046204, -0.41940224170684814, 0.14992673695087433, 0.0020061880350112915, -0.19135282933712006, 0.9206456542015076, 0.311480849981308, -0.6066049337387085, -0.7852484583854675, -0.6124548316001892, -0.129563584923...
null
null
null
null
null
null
null
null
null
null
null
null
null
tomekkorbak/pile-pii-scrubadub
tomekkorbak
2023-02-07T15:26:41Z
19
2
null
[ "task_categories:text-classification", "task_categories:other", "task_ids:acceptability-classification", "task_ids:text-scoring", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|the_pile", "la...
2023-02-07T15:26:41Z
2023-01-25T18:00:01.000Z
2023-01-25T18:00:01
--- annotations_creators: - machine-generated language: - en language_creators: - found license: - mit multilinguality: - monolingual pretty_name: pile-pii-scrubadub size_categories: - 1M<n<10M source_datasets: - extended|the_pile tags: - pii - personal - identifiable - information - pretraining-with-human-feedback task_categories: - text-classification - other task_ids: - acceptability-classification - text-scoring --- # Dataset Card for pile-pii-scrubadub ## Dataset Description - **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives** - **Paper: Arxiv link to be added** ### Dataset Summary This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the personally identifiable information (PII) in each sentence. Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text. ## Dataset Structure ### Data Instances 1949977 ### Data Fields - texts (sequence): a list of the sentences in the document (segmented using [SpaCy](https://spacy.io/)) - meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated - scores (sequence): a score for each sentence in the `texts` column indicating the percent of words that are detected as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) - avg_score (float64): the average of the scores listed in the `scores` column - num_sents (int64): the number of sentences (and scores) in that document ### Data Splits Training set only ## Dataset Creation ### Curation Rationale This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. 
The PII is labeled so that generative language models can be trained to avoid generating PII. ### Source Data #### Initial Data Collection and Normalization This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile). #### Who are the source language producers? Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset. ### Annotations #### Annotation process For each sentence, [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) was used to detect: - email addresses - addresses and postal codes - phone numbers - credit card numbers - US social security numbers - vehicle plate numbers - dates of birth - URLs - login credentials #### Who are the annotators? [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) ### Personal and Sensitive Information This dataset contains all PII that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile), with all detected PII annotated. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information. This dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII. We do not recommend deploying models trained on this data. ### Discussion of Biases This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027 ### Other Known Limitations The PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate. 
## Additional Information ### Dataset Curators [The Pile](https://huggingface.co/datasets/the_pile) ### Licensing Information From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE) ### Citation Information Paper information to be added ### Contributions [The Pile](https://huggingface.co/datasets/the_pile)
[ -0.4115360975265503, -0.6705450415611267, 0.12943674623966217, 0.2604316771030426, -0.34723472595214844, -0.09645020216703415, -0.13480904698371887, -0.3241775929927826, 0.4771029055118561, 0.5405864715576172, -0.5727667212486267, -0.7070831060409546, -0.7629037499427795, 0.198888286948204...
null
null
null
null
null
null
null
null
null
null
null
null
null