author
stringlengths
2
29
cardData
null
citation
stringlengths
0
9.58k
description
stringlengths
0
5.93k
disabled
bool
1 class
downloads
float64
1
1M
gated
bool
2 classes
id
stringlengths
2
108
lastModified
stringlengths
24
24
paperswithcode_id
stringlengths
2
45
private
bool
2 classes
sha
stringlengths
40
40
siblings
list
tags
list
readme_url
stringlengths
57
163
readme
stringlengths
0
977k
dyhsup
null
null
null
false
1
false
dyhsup/ChemProt_CPR
2022-08-31T12:09:31.000Z
null
false
bd9f47f758affab100c81931d6afba84bab9ae06
[]
[ "license:other" ]
https://huggingface.co/datasets/dyhsup/ChemProt_CPR/resolve/main/README.md
--- license: other --- Warning: This dataset does not follow the standard Hugging Face format; download and process the files according to your own needs. It contains only intra-sentence relationships. "Gold" holds the positive examples from the original corpus; "Positive" holds all intra-sentence relationships.
BigBang
null
null
null
false
1
false
BigBang/rosetta_new
2022-08-24T16:24:00.000Z
null
false
6e9893e2a78b8fa852f3268583592f8c4e37362a
[]
[ "license:cc-by-sa-4.0" ]
https://huggingface.co/datasets/BigBang/rosetta_new/resolve/main/README.md
--- license: cc-by-sa-4.0 ---
jonathanli
null
@inproceedings{chalkidis-etal-2019-large, title = "Large-Scale Multi-Label Text Classification on {EU} Legislation", author = "Chalkidis, Ilias and Fergadiotis, Emmanouil and Malakasiotis, Prodromos and Androutsopoulos, Ion", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1636", doi = "10.18653/v1/P19-1636", pages = "6314--6322" }
EURLEX57K contains 57k legislative documents in English from the EUR-Lex portal, annotated with EUROVOC concepts.
false
3
false
jonathanli/eurlex
2022-10-24T15:26:49.000Z
eurlex57k
false
6f7dc71b8fd4e8aed7b04752b563c5edf84694c7
[]
[ "annotations_creators:found", "language_creators:found", "language:en", "license:cc-by-sa-4.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-label-classification", "tags:legal-topic-classification" ]
https://huggingface.co/datasets/jonathanli/eurlex/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - multi-label-classification paperswithcode_id: eurlex57k pretty_name: the EUR-Lex dataset tags: - legal-topic-classification --- # Dataset Card for the EUR-Lex dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/ - **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/ - **Paper:** https://www.aclweb.org/anthology/P19-1636/ - **Leaderboard:** N/A ### Dataset Summary EURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.
EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains four major zones: - the header, which includes the title and name of the legal body enforcing the legal act; - the recitals, which are legal background references; and - the main body, usually organized in articles. **Labeling / Annotation** All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. ### Supported Tasks and Leaderboards The dataset supports: **Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts. **Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. ### Languages All documents are written in English.
## Dataset Structure ### Data Instances ```json { "celex_id": "31979D0509", "title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain", "text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan 
must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,", "eurovoc_concepts": ["192", "2356", "2560", "862", "863"] } ``` ### Data Fields The following data fields are provided for documents (`train`, `dev`, `test`): `celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\ `title`: (**str**) The title of the document.\ `text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\ `eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels). If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl ```python import json # Parse each JSON line into a dict; a list is used because dicts are # unhashable and cannot be collected into a set. with open('./eurovoc_concepts.jsonl') as jsonl_file: eurovoc_concepts = [json.loads(line) for line in jsonl_file] ``` ### Data Splits | Split | No of Documents | Avg. words | Avg. labels | | ------------------- | ------------------------------------ | --- | --- | | Train | 45,000 | 729 | 5 | | Development | 6,000 | 714 | 5 | | Test | 6,000 | 725 | 5 | ## Dataset Creation ### Curation Rationale The dataset was curated by Chalkidis et al. (2019).\ The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en). ### Source Data #### Initial Data Collection and Normalization The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format. The documents were downloaded from the EUR-Lex portal in HTML format.
The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql). #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process * The original documents are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents split into sections. * The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en). #### Who are the annotators? Publications Office of EU (https://publications.europa.eu/en) ### Personal and Sensitive Information The dataset does not include personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Chalkidis et al. (2019) ### Licensing Information © European Union, 1998-2021 The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes. The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \ Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html ### Citation Information *Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.* *Large-Scale Multi-Label Text Classification on EU Legislation.* *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019* ``` @inproceedings{chalkidis-etal-2019-large, title = "Large-Scale Multi-Label Text Classification on {EU} Legislation", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1636", doi = "10.18653/v1/P19-1636", pages = "6314--6322" } ``` ### Contributions Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
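The card above shows instances with `celex_id`, `title`, `text`, and `eurovoc_concepts` fields. As a minimal sketch of working with that structure (the two inline records are invented stand-ins, not real dataset rows), a batch of JSONL records could be indexed by EUROVOC label like this:

```python
import json

# Made-up EURLEX57K-style records; field names follow the card's
# Data Fields section, the contents are placeholders.
sample_jsonl = """\
{"celex_id": "31979D0509", "title": "Council Decision ...", "text": "...", "eurovoc_concepts": ["192", "2356"]}
{"celex_id": "31980R0001", "title": "Council Regulation ...", "text": "...", "eurovoc_concepts": ["192"]}
"""

def index_by_concept(lines):
    """Map each EUROVOC concept ID to the CELEX IDs annotated with it."""
    index = {}
    for line in lines:
        doc = json.loads(line)
        for concept in doc["eurovoc_concepts"]:
            index.setdefault(concept, []).append(doc["celex_id"])
    return index

index = index_by_concept(sample_jsonl.splitlines())
```

Such an inverted index is one way to build the frequent/few-shot/zero-shot label splits described above, by counting how many training documents each concept maps to.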
mbarnig
null
null
null
false
1
false
mbarnig/Tatoeba-en-lb
2022-08-24T15:38:33.000Z
null
false
2cf7ca5314557b4a3203da2f987ff122e87aebbb
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/mbarnig/Tatoeba-en-lb/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 ---
teven
null
null
null
false
21
false
teven/code_docstring_corpus
2022-08-24T20:01:58.000Z
null
false
816621ee6b2c082e5e1062a5bad126feb81b9449
[]
[]
https://huggingface.co/datasets/teven/code_docstring_corpus/resolve/main/README.md
HF version of Edinburgh-NLP's [Code docstrings corpus](https://github.com/EdinburghNLP/code-docstring-corpus)
teven
null
null
null
false
21
false
teven/code_contests
2022-08-24T20:01:04.000Z
null
false
1d750cb1af1c154e447d6baa330110933105a600
[]
[]
https://huggingface.co/datasets/teven/code_contests/resolve/main/README.md
HF-datasets version of DeepMind's [code_contests](https://github.com/deepmind/code_contests) dataset, notably used for AlphaCode. 1 row per solution; no test data or incorrect solutions included (only name/source/description/solution/language/difficulty).
gondolas
null
null
null
false
1
false
gondolas/test
2022-08-24T18:00:02.000Z
null
false
eef7d6d11d0e1bfe8cfab8e3030cb1ad35b45b49
[]
[ "license:unknown" ]
https://huggingface.co/datasets/gondolas/test/resolve/main/README.md
--- license: unknown ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450106
2022-08-24T20:36:33.000Z
null
false
059b500407cd10d3d0254d9c143d353f89ed7271
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-54745b0c-1311450106/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: FardinSaboori/bert-finetuned-squad metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: FardinSaboori/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
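The `col_mapping` block in the card above maps dataset columns onto the fields the evaluator expects, with dotted names (e.g. `answers.text`) denoting nested access. A hedged sketch of that idea follows; this is my own illustration of how such a mapping could be applied to one SQuAD-style record, not AutoTrain's actual implementation:

```python
# col_mapping as declared in the card's eval_info front matter.
col_mapping = {
    "context": "context",
    "question": "question",
    "answers-text": "answers.text",
    "answers-answer_start": "answers.answer_start",
}

def resolve(record, dotted):
    """Follow a dotted path like 'answers.text' through nested dicts."""
    value = record
    for key in dotted.split("."):
        value = value[key]
    return value

def apply_mapping(record, mapping):
    """Produce a record keyed by the evaluator-side column names."""
    return {target: resolve(record, source) for target, source in mapping.items()}

# A single SQuAD-style row (placeholder content, real SQuAD field layout).
squad_row = {
    "context": "Normandy is a region in France.",
    "question": "Where is Normandy?",
    "answers": {"text": ["France"], "answer_start": [25]},
}
mapped = apply_mapping(squad_row, col_mapping)
```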
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450107
2022-08-24T20:37:00.000Z
null
false
00f6010354dc41b964436402e91548d954663e01
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-54745b0c-1311450107/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: 21iridescent/distilbert-base-uncased-finetuned-squad metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: 21iridescent/distilbert-base-uncased-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450108
2022-08-24T20:37:49.000Z
null
false
d2e7a920820db43013d54b67ef1fc315cb5f55cb
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-squad-54745b0c-1311450108/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: Aiyshwariya/bert-finetuned-squad metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
britneymuller
null
null
null
false
1
false
britneymuller/cnbc_newsfeed
2022-08-24T23:04:39.000Z
null
false
df82aa55f008dabd0c2c2d4d58bf8ebb38ce1928
[]
[ "license:other" ]
https://huggingface.co/datasets/britneymuller/cnbc_newsfeed/resolve/main/README.md
--- license: other ---
ZhangYuanhan
null
null
null
false
1
false
ZhangYuanhan/OmniBenchmark
2022-08-25T02:10:18.000Z
null
false
4611cf4a48fe1e181ffc5e64a6b25c8a1a6b4c83
[]
[ "license:cc-by-nc-nd-4.0" ]
https://huggingface.co/datasets/ZhangYuanhan/OmniBenchmark/resolve/main/README.md
--- license: cc-by-nc-nd-4.0 ---
sberbank-ai
null
null
null
false
1
false
sberbank-ai/Peter
2022-10-25T11:09:06.000Z
null
false
f7396bc0d39f208076d0d8af13b4644dc3bdd7f8
[]
[ "arxiv:2103.09354", "language:ru", "license:mit", "source_datasets:original", "task_categories:image-segmentation", "task_categories:object-detection", "tags:optical-character-recognition", "tags:text-detection", "tags:ocr" ]
https://huggingface.co/datasets/sberbank-ai/Peter/resolve/main/README.md
--- language: - ru license: - mit source_datasets: - original task_categories: - image-segmentation - object-detection task_ids: [] tags: - optical-character-recognition - text-detection - ocr --- # Digital Peter The Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages. The paper is available at http://arxiv.org/abs/2103.09354 ## Description Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of the Russian Academy of Sciences, the Federal Archival Agency of Russia and the Russian State Archive of Ancient Acts. A detailed description of the task (with an immersion into the problem) can be found in [detailed_description_of_the_task_en.pdf](https://github.com/sberbank-ai/digital_peter_aij2020/blob/master/desc/detailed_description_of_the_task_en.pdf) The dataset consists of 662 full-page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words. ## Annotation format The annotation is in COCO format. The `annotation.json` should have the following dictionaries: - `annotation["categories"]` - a list of dicts with category info (category names and indexes). - `annotation["images"]` - a list of dictionaries with a description of the images; each dictionary must contain the fields: - `file_name` - the name of the image file. - `id` - the image id. - `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields: - `image_id` - the index of the image on which the polygon is located. - `category_id` - the polygon’s category index.
- `attributes` - a dict with some additional annotation information. In the `translation` subdict you can find the text translation for the line. - `segmentation` - the coordinates of the polygon: a list of numbers which are x and y coordinate pairs. ## Competition We held a competition based on the Digital Peter dataset. Here is the github [link](https://github.com/sberbank-ai/digital_peter_aij2020). Here is the competition [page](https://ods.ai/tracks/aij2020) (registration required).
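The `annotation.json` layout described above (categories, images, annotations with `segmentation` and `attributes.translation`) can be walked with ordinary dict access. A minimal sketch follows; the two-line annotation dict is an invented example that only mirrors the field names from the card:

```python
# Made-up Digital-Peter-style COCO annotation with two text-line polygons.
annotation = {
    "categories": [{"id": 1, "name": "text_line"}],
    "images": [{"id": 10, "file_name": "page_001.jpg"}],
    "annotations": [
        {"image_id": 10, "category_id": 1,
         "segmentation": [[12, 34, 56, 34, 56, 78, 12, 78]],
         "attributes": {"translation": "первая строка"}},
        {"image_id": 10, "category_id": 1,
         "segmentation": [[12, 90, 56, 90, 56, 130, 12, 130]],
         "attributes": {"translation": "вторая строка"}},
    ],
}

def lines_per_image(annotation):
    """Map each image file name to the translations of its annotated lines."""
    names = {img["id"]: img["file_name"] for img in annotation["images"]}
    grouped = {}
    for ann in annotation["annotations"]:
        grouped.setdefault(names[ann["image_id"]], []).append(
            ann["attributes"]["translation"])
    return grouped

grouped = lines_per_image(annotation)
```

In practice the same loop would read `annotation.json` with `json.load` and pair each polygon's `segmentation` coordinates with the page image for detection/OCR training.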
gishnum
null
null
null
false
7
false
gishnum/worldpopulation_neo4j_graph_dump
2022-08-25T11:22:14.000Z
null
false
e5af44c540cda2e9007ad35b7f8e994225da7786
[]
[ "license:gpl" ]
https://huggingface.co/datasets/gishnum/worldpopulation_neo4j_graph_dump/resolve/main/README.md
--- license: gpl ---
OxAISH-AL-LLM
null
""" _DESCRIPTION =
Jigsaw Toxic Comment Challenge dataset. This dataset was the basis of a Kaggle competition run by Jigsaw
false
17
false
OxAISH-AL-LLM/wiki_toxic
2022-09-19T15:53:19.000Z
null
false
872656a156f32e4058307e50e234a44a727a9503
[]
[ "annotations_creators:crowdsourced", "language:en", "language_creators:found", "license:cc0-1.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other", "tags:wikipedia", "tags:toxicity", "tags:toxic comments", "task_categories:text-classification", "task...
https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic/resolve/main/README.md
--- annotations_creators: - crowdsourced language: - en language_creators: - found license: - cc0-1.0 multilinguality: - monolingual pretty_name: Toxic Wikipedia Comments size_categories: - 100K<n<1M source_datasets: - extended|other tags: - wikipedia - toxicity - toxic comments task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Wiki Toxic ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Wiki Toxic dataset is a modified, cleaned version of the dataset used in the [Kaggle Toxic Comment Classification challenge](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/overview) from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, `toxic` and `non-toxic`. 
The Kaggle dataset was cleaned using the included `clean.py` file. ### Supported Tasks and Leaderboards - Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly. ### Languages The sole language used in the dataset is English. ## Dataset Structure ### Data Instances For each data point, there is an id, the comment_text itself, and a label (0 for non-toxic, 1 for toxic). ``` {'id': 'a123a58f610cffbc', 'comment_text': '"This article SUCKS. It may be poorly written, poorly formatted, or full of pointless crap that no one cares about, and probably all of the above. If it can be rewritten into something less horrible, please, for the love of God, do so, before the vacuum caused by its utter lack of quality drags the rest of Wikipedia down into a bottomless pit of mediocrity."', 'label': 1} ``` ### Data Fields - `id`: A unique identifier string for each comment - `comment_text`: A string containing the text of the comment - `label`: An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic ### Data Splits The Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below: | Dataset Split | Number of data points in split | | ----------- | ----------- | | Train | 127,656 | | Validation | 31,915 | | Test | 63,978 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
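The Wiki Toxic card above defines each data point as an `id`, a `comment_text`, and a binary `label` (0 non-toxic, 1 toxic). As a minimal sketch of consuming that schema (the rows below are invented placeholders, not real dataset content), the two classes can be partitioned like this:

```python
# Placeholder rows following the card's field names (id, comment_text, label).
rows = [
    {"id": "a1", "comment_text": "Thanks for the fix!", "label": 0},
    {"id": "b2", "comment_text": "This article is garbage.", "label": 1},
    {"id": "c3", "comment_text": "Looks good to me.", "label": 0},
]

def split_by_toxicity(rows):
    """Partition rows into non-toxic (label 0) and toxic (label 1) lists."""
    non_toxic = [r for r in rows if r["label"] == 0]
    toxic = [r for r in rows if r["label"] == 1]
    return non_toxic, toxic

non_toxic, toxic = split_by_toxicity(rows)
```

The same partition run over the train split gives the class balance, which is worth checking before training a toxicity classifier on this data.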
wushan
null
null
null
false
1
false
wushan/vehicle_qa
2022-08-25T13:14:33.000Z
null
false
41688aa331d9ff438cd9a940495de12d6dd0bc8e
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/wushan/vehicle_qa/resolve/main/README.md
--- license: apache-2.0 ---
jokerak
null
null
null
false
1
false
jokerak/camvid
2022-08-25T13:34:19.000Z
null
false
2dcf46e0fe13816745e79fab84347e5d71fe74cc
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/jokerak/camvid/resolve/main/README.md
--- license: apache-2.0 ---
kakaobrain
null
null
null
false
51
false
kakaobrain/coyo-700m
2022-08-30T19:07:52.000Z
null
false
54ee2d8c64d3d80a5e10ef6952a4466551834fc1
[]
[ "arxiv:2102.05918", "arxiv:2204.06125", "arxiv:2010.11929", "annotations_creators:no-annotation", "language:en", "language_creators:other", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "tags:image-text pairs", "task_categories:tex...
https://huggingface.co/datasets/kakaobrain/coyo-700m/resolve/main/README.md
--- annotations_creators: - no-annotation language: - en language_creators: - other license: - cc-by-4.0 multilinguality: - monolingual pretty_name: COYO-700M size_categories: - 100M<n<1B source_datasets: - original tags: - image-text pairs task_categories: - text-to-image - image-to-text - zero-shot-classification task_ids: - image-captioning --- # Dataset Card for COYO-700M ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd) - **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset) - **Paper:** - **Leaderboard:** - **Point of Contact:** [COYO email](coyo@kakaobrain.com) ### Dataset Summary **COYO-700M** is a large-scale dataset that contains **747M image-text pairs** as well as many other **meta-attributes** to increase the usability to train various models. 
Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later. ### Supported Tasks and Leaderboards We empirically validated the quality of COYO dataset by re-implementing popular models such as [ALIGN](https://arxiv.org/abs/2102.05918), [unCLIP](https://arxiv.org/abs/2204.06125), and [ViT](https://arxiv.org/abs/2010.11929). We trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers. Our pre-trained models and training codes will be released soon along with the technical paper. ### Languages The texts in the COYO-700M dataset consist of English. ## Dataset Structure ### Data Instances Each instance in COYO-700M represents single image-text pair information with meta-attributes: ``` { 'id': 841814333321, 'url': 'https://blog.dogsof.com/wp-content/uploads/2021/03/Image-from-iOS-5-e1614711641382.jpg', 'text': 'A Pomsky dog sitting and smiling in field of orange flowers', 'width': 1000, 'height': 988, 'image_phash': 'c9b6a7d8469c1959', 'text_length': 59, 'word_count': 11, 'num_tokens_bert': 13, 'num_tokens_gpt': 12, 'num_faces': 0, 'clip_similarity_vitb32': 0.4296875, 'clip_similarity_vitl14': 0.35205078125, 'nsfw_score_opennsfw2': 0.00031447410583496094, 'nsfw_score_gantman': 0.03298913687467575, 'watermark_score': 0.1014641746878624, 'aesthetic_score_laion_v2': 5.435476303100586 } ``` ### Data Fields | name | type | description | 
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) | | url | string | The image URL extracted from the `src` attribute of the `<img>` tag | | text | string | The text extracted from the `alt` attribute of the `<img>` tag | | width | integer | The width of the image | | height | integer | The height of the image | | image_phash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image | | text_length | integer | The length of the text | | word_count | integer | The number of words separated by spaces. | | num_tokens_bert | integer | The number of tokens using [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) | | num_tokens_gpt | integer | The number of tokens using [GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) | | num_faces | integer | The number of faces in the image detected by [SCRFD](https://insightface.ai/scrfd) | | clip_similarity_vitb32 | float | The cosine similarity between text and image(ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) | | clip_similarity_vitl14 | float | The cosine similarity between text and image(ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) | | nsfw_score_opennsfw2 | float | The NSFW score of the image by [OpenNSFW2](https://github.com/bhky/opennsfw2) | | nsfw_score_gantman | float | The NSFW score of the image by [GantMan/NSFW](https://github.com/GantMan/nsfw_model) | | watermark_score | float | The watermark probability of the image by our internal model | | aesthetic_score_laion_v2 | float 
| The aesthetic score of the image by [LAION-Aesthetics-Predictor-V2](https://github.com/christophschuhmann/improved-aesthetic-predictor) | ### Data Splits Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s). ## Dataset Creation ### Curation Rationale Similar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, we attempted to eliminate uninformative images or texts at minimal cost and to improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model. ### Source Data #### Initial Data Collection and Normalization We collected about 10 billion pairs of alt-text and image sources in HTML documents in [CommonCrawl](https://commoncrawl.org/) from Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through image- and/or text-level filtering at minimal cost. **Image Level** * Included all image formats that the [Pillow library](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) can decode. (JPEG, WEBP, PNG, BMP, ...) * Removed images smaller than 5KB. * Removed images with an aspect ratio greater than 3.0. * Removed images with min(width, height) < 200. * Removed images with a score of [OpenNSFW2](https://github.com/bhky/opennsfw2) or [GantMan/NSFW](https://github.com/GantMan/nsfw_model) higher than 0.5. * Removed all images whose [pHash](http://www.phash.org/) value duplicates an image in external public datasets. * ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M **Text Level** * Collected only English text using [cld3](https://github.com/google/cld3).
* Replaced consecutive whitespace characters with a single whitespace and removed whitespace before and after the sentence. (e.g. `"\n \n Load image into Gallery viewer, valentine&amp;#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&amp;#39;s day roses"`) * Removed texts with a length of 5 or less. * Removed texts that do not have a noun form. * Removed texts with fewer than 3 words or more than 256 words, and texts over 1000 characters in length. * Removed texts appearing more than 10 times. (e.g. `“thumbnail for”, “image for”, “picture of”`) * Removed texts containing NSFW words collected from [profanity_filter](https://github.com/rominf/profanity-filter/blob/master/profanity_filter/data/en_profane_words.txt), [better_profanity](https://github.com/snguyenthanh/better_profanity/blob/master/better_profanity/profanity_wordlist.txt), and [google_twunter_lol](https://gist.github.com/ryanlewis/a37739d710ccdb4b406d). **Image-Text Level** * Removed duplicated samples based on (image_phash, text). (Different text may exist for the same image URL.) #### Who are the source language producers? [Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M. ### Annotations #### Annotation process The dataset was built in a fully automated process that did not require human annotation. #### Who are the annotators? No human annotation. ### Personal and Sensitive Information #### Disclaimer & Content Warning The COYO dataset is recommended to be used for research purposes. Kakao Brain tried to construct a "Safe" dataset when building COYO. (See the [Data Filtering](#source-data) section.) Kakao Brain is constantly making efforts to create more "Safe" datasets. However, despite these efforts, this large-scale dataset could not be hand-screened by humans to eliminate risk, due to its very large size (over 700M).
Keep in mind that the unscreened nature of the dataset means the collected images may include content that is strongly discomforting and disturbing to humans. The COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user. Therefore, it is strongly recommended that this dataset be used only for research. Kakao Brain does not recommend using this dataset as-is to create commercial products without special processing to remove inappropriate data. ## Considerations for Using the Data ### Social Impact of Dataset It will be described in a paper to be released soon. ### Discussion of Biases It will be described in a paper to be released soon. ### Other Known Limitations It will be described in a paper to be released soon. ## Additional Information ### Dataset Curators The COYO dataset was released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to hearing from anyone who wishes to cooperate with us. [coyo@kakaobrain.com](mailto:coyo@kakaobrain.com) ### Licensing Information #### License The COYO dataset of Kakao Brain is licensed under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/). The full license can be found in the [LICENSE.cc-by-4.0 file](./coyo-700m/blob/main/LICENSE.cc-by-4.0). The dataset includes “Image URL” and “Text” collected from various sites by analyzing Common Crawl data, an open data web crawling project. The collected data (images and text) is subject to the license to which each item of content belongs. #### Obligation to use While open source may be free to use, that does not mean it is free of obligation. To determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide.
If you violate the license, you may be subject to legal action, such as prohibition of use or a claim for damages, depending on the use. ### Citation Information If you use this dataset in any project or research, please cite our code: ``` @misc{kakaobrain2022coyo-700m, title = {COYO-700M: Image-Text Pair Dataset}, author = {Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim}, year = {2022}, howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}}, } ``` ### Contributions - Minwoo Byeon ([@mwbyeon](https://github.com/mwbyeon)) - Beomhee Park ([@beomheepark](https://github.com/beomheepark)) - Haecheon Kim ([@HaecheonKim](https://github.com/HaecheonKim)) - Sungjun Lee ([@justhungryman](https://github.com/justHungryMan)) - Woonhyuk Baek ([@wbaek](https://github.com/wbaek)) - Saehoon Kim ([@saehoonkim](https://github.com/saehoonkim)) - and the Kakao Brain Large-Scale AI Studio
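The text-level filtering rules in the COYO-700M card above can be sketched in Python. This is a rough sketch, not the actual pipeline: the `keep_text` helper and its per-call frequency dictionary are assumptions, and the card's "noun form" and NSFW-word checks are omitted; the thresholds follow the card.

```python
import re

def keep_text(text: str, seen_counts: dict) -> bool:
    """Sketch of COYO's text-level filters (thresholds from the dataset card).

    The "noun form" and NSFW-word filters from the card are omitted here;
    a real pipeline would add a POS tagger and profanity word lists.
    """
    # Collapse runs of whitespace and strip leading/trailing whitespace.
    text = re.sub(r"\s+", " ", text).strip()
    if len(text) <= 5:                      # length of 5 or less
        return False
    words = text.split()
    if len(words) < 3 or len(words) > 256:  # fewer than 3 or more than 256 words
        return False
    if len(text) > 1000:                    # overly long text
        return False
    # The card's frequency filter is corpus-level; a dict of counts stands in here.
    if seen_counts.get(text, 0) >= 10:      # boilerplate appearing too often
        return False
    return True

counts = {"thumbnail for": 10}
print(keep_text("\n \n Load image into Gallery viewer, red roses\n \n", counts))  # True
print(keep_text("thumbnail for", counts))  # False (too few words, and too frequent)
```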
teticio
null
null
null
false
77
false
teticio/audio-diffusion-256
2022-11-09T10:49:48.000Z
null
false
60eceef746f537c1efe46ffd2d5485d631a9c9d8
[]
[ "size_categories:10K<n<100K", "tags:audio", "tags:spectrograms", "task_categories:image-to-image" ]
https://huggingface.co/datasets/teticio/audio-diffusion-256/resolve/main/README.md
--- annotations_creators: [] language: [] language_creators: [] license: [] multilinguality: [] pretty_name: Mel spectrograms of music size_categories: - 10K<n<100K source_datasets: [] tags: - audio - spectrograms task_categories: - image-to-image task_ids: [] --- Over 20,000 256x256 mel spectrograms of 5-second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found at https://github.com/teticio/audio-diffusion, along with scripts to train and run inference using Denoising Diffusion Probabilistic Models. ``` x_res = 256 y_res = 256 sample_rate = 22050 n_fft = 2048 hop_length = 512 ```
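As a rough sketch of what the spectrogram parameters in this card imply (back-of-the-envelope arithmetic, not code from the repository): each 256-frame image covers `x_res * hop_length` audio samples, which at 22050 Hz works out to roughly the slice length described above.

```python
# Bookkeeping for the mel-spectrogram parameters listed in the card.
x_res = 256          # time frames (image width)
y_res = 256          # mel bins (image height)
sample_rate = 22050  # Hz
n_fft = 2048         # STFT window size
hop_length = 512     # samples between successive frames

samples_per_image = x_res * hop_length        # audio samples covered by one image
duration_s = samples_per_image / sample_rate  # seconds of audio per image

print(samples_per_image)     # 131072
print(round(duration_s, 2))  # 5.94
```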
allenai
null
null
null
false
2
false
allenai/multixscience_sparse_mean
2022-11-03T21:37:02.000Z
multi-xscience
false
c7f32a0dee3d5baaeb76b4ea9a665294e0b097eb
[]
[ "annotations_creators:found", "language_creators:found", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:summarization", "task_ids:summarization-other-paper-abstract-generation" ]
https://huggingface.co/datasets/allenai/multixscience_sparse_mean/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: - summarization-other-paper-abstract-generation paperswithcode_id: multi-xscience pretty_name: Multi-XScience --- This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except that the input source documents of its `test` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set to the mean number of documents seen across examples in this dataset, in this case `k==4` Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.548 | 0.2272 | 0.1611 | 0.2704 |
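The `"mean"` top-k strategy described in this card (and the `"max"` strategy used by its sibling dataset) is easy to restate in code. This is a sketch: the `choose_k` function name, the rounding rule, and the toy counts are assumptions, not taken from the actual retrieval pipeline.

```python
from statistics import mean

def choose_k(docs_per_example, strategy):
    """Pick k for top-k retrieval from per-example source-document counts."""
    if strategy == "mean":
        # Mean count, rounded to an integer (the card reports k==4 here).
        return round(mean(docs_per_example))
    if strategy == "max":
        # Maximum count (the *_sparse_max variant reports k==20).
        return max(docs_per_example)
    raise ValueError(f"unknown strategy: {strategy}")

# Toy counts standing in for the real Multi-XScience statistics.
counts = [2, 3, 4, 4, 5, 6]
print(choose_k(counts, "mean"))  # 4
print(choose_k(counts, "max"))   # 6
```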
allenai
null
null
null
false
2
false
allenai/multixscience_sparse_max
2022-11-03T21:36:09.000Z
multi-xscience
false
7f3fadb0ae53ea8691def662411b4c453dc7172e
[]
[ "annotations_creators:found", "language_creators:found", "language:en", "license:unknown", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:summarization", "task_ids:summarization-other-paper-abstract-generation" ]
https://huggingface.co/datasets/allenai/multixscience_sparse_max/resolve/main/README.md
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: - summarization-other-paper-abstract-generation paperswithcode_id: multi-xscience pretty_name: Multi-XScience --- This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except that the input source documents of its `test` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set to the maximum number of documents seen across examples in this dataset, in this case `k==20` Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.548 | 0.2272 | 0.055 | 0.4039 |
angelolab
null
@InProceedings{huggingface:dataset, title = {Ark Analysis Example Dataset}, author={Angelo Lab}, year={2022} }
This dataset contains 11 fields of view (FOVs), each with 22 channels.
false
4,375
false
angelolab/ark_example
2022-11-11T02:52:32.000Z
null
false
9b38f7ef9596b183cffa9ddcea70136668c3a459
[]
[ "annotations_creators:no-annotation", "license:apache-2.0", "size_categories:n<1K", "source_datasets:original", "tags:MIBI", "tags:Multiplexed-Imaging", "task_categories:image-segmentation", "task_ids:instance-segmentation" ]
https://huggingface.co/datasets/angelolab/ark_example/resolve/main/README.md
--- annotations_creators: - no-annotation language: [] language_creators: [] license: - apache-2.0 multilinguality: [] pretty_name: An example dataset for analyzing multiplexed imaging data. size_categories: - n<1K source_datasets: - original tags: - MIBI - Multiplexed-Imaging task_categories: - image-segmentation task_ids: - instance-segmentation --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More 
Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@angelolab](https://github.com/angelolab) for adding this dataset.
iejMac
null
null
null
false
5
false
iejMac/CLIP-WebVid
2022-10-04T09:10:24.000Z
null
false
11ef172f3c13e60eaf30fcf319e3919c760785fb
[]
[]
https://huggingface.co/datasets/iejMac/CLIP-WebVid/resolve/main/README.md
--- license: mit ---
merkalo-ziri
null
null
null
false
1
false
merkalo-ziri/qa_shreded
2022-08-26T01:27:18.000Z
null
false
f9ad319b1eb78b0af0b1c8f5dc951c3092d6ee9c
[]
[ "annotations_creators:found", "language:rus", "language_creators:found", "license:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "task_categories:question-answering" ]
https://huggingface.co/datasets/merkalo-ziri/qa_shreded/resolve/main/README.md
--- annotations_creators: - found language: - rus language_creators: - found license: - other multilinguality: - monolingual pretty_name: qa_main size_categories: - 1K<n<10K source_datasets: - original tags: [] task_categories: - question-answering task_ids: [] --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
justahandsomeboy
null
null
null
false
1
false
justahandsomeboy/recipedia_1
2022-08-26T04:22:13.000Z
null
false
1ac837cf3234412532906d405756f6918233ca1e
[]
[ "license:mit" ]
https://huggingface.co/datasets/justahandsomeboy/recipedia_1/resolve/main/README.md
--- license: mit ---
Zaid
null
@inproceedings{tiedemann-2020-tatoeba, title = "The {T}atoeba {T}ranslation {C}hallenge {--} {R}ealistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", }
The Tatoeba Translation Challenge is a multilingual dataset of machine translation benchmarks derived from user-contributed translations collected by [Tatoeba.org](https://tatoeba.org/) and provided as a parallel corpus from [OPUS](https://opus.nlpl.eu/). This dataset includes test and development data sorted by language pair. It includes test sets for hundreds of language pairs and is continuously updated. Please check the version number tag to refer to the release that you are using.
false
11
false
Zaid/tatoeba_mt
2022-08-26T04:55:12.000Z
null
false
488d2a94c56bd52eb4f69cecdd868204886e418e
[]
[ "license:other" ]
https://huggingface.co/datasets/Zaid/tatoeba_mt/resolve/main/README.md
--- license: other ---
Bingsu
null
null
null
false
8
false
Bingsu/Gameplay_Images
2022-08-26T05:31:58.000Z
null
false
227e4266899d746172ebd46f90e26af2d370f799
[]
[ "language:en", "license:cc-by-4.0", "multilinguality:monolingual", "size_categories:1K<n<10K", "task_categories:image-classification" ]
https://huggingface.co/datasets/Bingsu/Gameplay_Images/resolve/main/README.md
--- language: - en license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Gameplay Images size_categories: - 1K<n<10K task_categories: - image-classification --- # Gameplay Images ## Dataset Description - **Homepage:** [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images) - **Download Size** 2.50 GiB - **Generated Size** 1.68 GiB - **Total Size** 4.19 GiB A dataset from [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images). This is a dataset of 10 very famous video games in the world. These include - Among Us - Apex Legends - Fortnite - Forza Horizon - Free Fire - Genshin Impact - God of War - Minecraft - Roblox - Terraria There are 1000 images per class and all are sized `640 x 360`. They are in the `.png` format. This Dataset was made by saving frames every few seconds from famous gameplay videos on Youtube. ※ This dataset was uploaded in January 2022. Game content updated after that will not be included. ### License CC-BY-4.0 ## Dataset Structure ### Data Instance ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/Gameplay_Images") DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 10000 }) }) ``` ```python >>> dataset["train"].features {'image': Image(decode=True, id=None), 'label': ClassLabel(num_classes=10, names=['Among Us', 'Apex Legends', 'Fortnite', 'Forza Horizon', 'Free Fire', 'Genshin Impact', 'God of War', 'Minecraft', 'Roblox', 'Terraria'], id=None)} ``` ### Data Size download: 2.50 GiB<br> generated: 1.68 GiB<br> total: 4.19 GiB ### Data Fields - image: `Image` - A `PIL.Image.Image object` containing the image. size=640x360 - Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. 
`dataset[0]["image"]` should always be preferred over `dataset["image"][0]`. - label: an int classification label. Class Label Mappings: ```json { "Among Us": 0, "Apex Legends": 1, "Fortnite": 2, "Forza Horizon": 3, "Free Fire": 4, "Genshin Impact": 5, "God of War": 6, "Minecraft": 7, "Roblox": 8, "Terraria": 9 } ``` ```python >>> dataset["train"][0] {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x360>, 'label': 0} ``` ### Data Splits | | train | | ---------- | -------- | | # of data | 10000 | ### Note #### train_test_split ```python >>> ds_new = dataset["train"].train_test_split(0.2, seed=42, stratify_by_column="label") >>> ds_new DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 8000 }) test: Dataset({ features: ['image', 'label'], num_rows: 2000 }) }) ```
autoevaluate
null
null
null
false
5
false
autoevaluate/autoeval-eval-project-samsum-61336320-1319050351
2022-08-26T07:18:03.000Z
null
false
863991fde636390a0678f092906ca0bbabdd8566
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:samsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-samsum-61336320-1319050351/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - samsum eval_info: task: summarization model: facebook/bart-large-xsum metrics: [] dataset_name: samsum dataset_config: samsum dataset_split: test col_mapping: text: dialogue target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@hgoyal194](https://huggingface.co/hgoyal194) for evaluating this model.
Lo
null
null
null
false
1
false
Lo/clip-bert-data
2022-08-29T07:51:51.000Z
null
false
1aa5ac59eca5b4a5922cd999d83188ee40237277
[]
[ "arxiv:2109.11321", "language:en", "license:cc-by-4.0", "multilinguality:monolingual" ]
https://huggingface.co/datasets/Lo/clip-bert-data/resolve/main/README.md
--- language: - en license: - cc-by-4.0 multilinguality: - monolingual --- # CLIP-BERT training data This data was used to train the CLIP-BERT model first described in [this paper](https://arxiv.org/abs/2109.11321). The dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions. The image features have been extracted using the CLIP model [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) available on Huggingface.
Lo
null
null
null
false
1
false
Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia
2022-08-29T08:26:22.000Z
null
false
bc8abd0b59c26ab913464fb535e080c27dce15ff
[]
[ "language:en", "license:cc-by-sa-3.0", "multilinguality:monolingual" ]
https://huggingface.co/datasets/Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia/resolve/main/README.md
--- language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual --- The Wikipedia train data used to train BERT-base baselines and adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the "20200501.en" revision of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) on Huggingface.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-emotion-2d469b4f-13675887
2022-08-26T09:18:42.000Z
null
false
9006ce5811a9c44f8435dd489af9d18205f98a1d
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-2d469b4f-13675887/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: autoevaluate/multi-class-classification metrics: [] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-emotion-ed9fef1a-13685888
2022-08-26T09:38:16.000Z
null
false
161773d6bbc56e44575c2c3fe2eb367531843818
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-ed9fef1a-13685888/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: autoevaluate/multi-class-classification metrics: [] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
2
false
autoevaluate/autoeval-staging-eval-project-emotion-a7ced70d-13715889
2022-08-26T09:52:29.000Z
null
false
9d1adbcfd839d250e57ba00f5626c2a9bc2ba7b6
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-a7ced70d-13715889/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: autoevaluate/multi-class-classification metrics: [] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
NareshIT
null
null
null
false
1
false
NareshIT/javatraining
2022-08-26T09:56:08.000Z
null
false
a5841b873d4be24808b58c1273fde15f374aed41
[]
[ "license:other" ]
https://huggingface.co/datasets/NareshIT/javatraining/resolve/main/README.md
--- license: other ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-emotion-1d3a2bc7-13735890
2022-08-26T10:08:48.000Z
null
false
41b13853d318d8f2aac4db268055ab7c99d27d9f
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-1d3a2bc7-13735890/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: autoevaluate/multi-class-classification metrics: [] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150403
2022-08-26T11:42:03.000Z
null
false
6ab186192e317f65fb9f28127827c3b6a5001f30
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:LeoCordoba/CC-NEWS-ES-titles" ]
https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150403/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - LeoCordoba/CC-NEWS-ES-titles eval_info: task: summarization model: josmunpen/mt5-small-spanish-summarization metrics: [] dataset_name: LeoCordoba/CC-NEWS-ES-titles dataset_config: default dataset_split: test col_mapping: text: text target: output_text --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: josmunpen/mt5-small-spanish-summarization * Dataset: LeoCordoba/CC-NEWS-ES-titles * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@LeoCordoba](https://huggingface.co/LeoCordoba) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150404
2022-08-26T11:42:07.000Z
null
false
f8135894035cb2881d24390353fbf528fe3dc906
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:LeoCordoba/CC-NEWS-ES-titles" ]
https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150404/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - LeoCordoba/CC-NEWS-ES-titles eval_info: task: summarization model: LeoCordoba/mt5-small-cc-news-es-titles metrics: [] dataset_name: LeoCordoba/CC-NEWS-ES-titles dataset_config: default dataset_split: test col_mapping: text: text target: output_text --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: LeoCordoba/mt5-small-cc-news-es-titles * Dataset: LeoCordoba/CC-NEWS-ES-titles * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@LeoCordoba](https://huggingface.co/LeoCordoba) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-b541c518-13705892
2022-08-26T13:03:38.000Z
null
false
6d228ace568d2c1de21d663452f1c25938774286
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-b541c518-13705892/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: autoevaluate/extractive-question-answering metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-30a8951e-13725893
2022-08-26T13:03:44.000Z
null
false
5261fdbd27f9caf2abd70fdb48963c829ef7c00e
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-30a8951e-13725893/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: autoevaluate/extractive-question-answering metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-08ca88d1-13695891
2022-08-26T13:04:02.000Z
null
false
9cc1c7b8d9200c633fb1fdb3870ee18a43bcbc26
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-08ca88d1-13695891/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: autoevaluate/extractive-question-answering metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-conll2003-90a08c43-13745894
2022-08-26T13:03:03.000Z
null
false
300aa70d0b8680b78f26487f34738c3ad25d20de
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:conll2003" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-conll2003-90a08c43-13745894/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - conll2003 eval_info: task: entity_extraction model: autoevaluate/entity-extraction metrics: [] dataset_name: conll2003 dataset_config: conll2003 dataset_split: test col_mapping: tokens: tokens tags: ner_tags --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: autoevaluate/entity-extraction * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-884b60f3-13755895
2022-08-26T13:15:55.000Z
null
false
e897197576f659a384e06cdf1586482fa76efc87
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-884b60f3-13755895/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: autoevaluate/extractive-question-answering metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
Shagun5
null
null
null
false
1
false
Shagun5/sandhi
2022-08-26T14:13:49.000Z
null
false
36a121215a184bceb6e183ddeb169beef7e8eab3
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/Shagun5/sandhi/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775897
2022-08-26T14:55:53.000Z
null
false
2e4b287dda99722789449ed901e31a6b153d7739
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:sasha/dog-food" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775897/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - sasha/dog-food eval_info: task: image_binary_classification model: abhishek/autotrain-dog-vs-food metrics: ['matthews_correlation'] dataset_name: sasha/dog-food dataset_config: sasha--dog-food dataset_split: train col_mapping: image: image target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Image Classification * Model: abhishek/autotrain-dog-vs-food * Dataset: sasha/dog-food * Config: sasha--dog-food * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775898
2022-08-26T14:55:52.000Z
null
false
5cdc512c0c73bde43a077497e24fc006f149b377
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:sasha/dog-food" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775898/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - sasha/dog-food eval_info: task: image_binary_classification model: sasha/dog-food-swin-tiny-patch4-window7-224 metrics: ['matthews_correlation'] dataset_name: sasha/dog-food dataset_config: sasha--dog-food dataset_split: train col_mapping: image: image target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Image Classification * Model: sasha/dog-food-swin-tiny-patch4-window7-224 * Dataset: sasha/dog-food * Config: sasha--dog-food * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775899
2022-08-26T14:55:56.000Z
null
false
f3ce6b224624d2dbb8fc7ba79ddddc4eb102c89e
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:sasha/dog-food" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775899/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - sasha/dog-food eval_info: task: image_binary_classification model: sasha/dog-food-convnext-tiny-224 metrics: ['matthews_correlation'] dataset_name: sasha/dog-food dataset_config: sasha--dog-food dataset_split: train col_mapping: image: image target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Image Classification * Model: sasha/dog-food-convnext-tiny-224 * Dataset: sasha/dog-food * Config: sasha--dog-food * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775900
2022-08-26T14:56:07.000Z
null
false
5348159e41b3268f6acbd0fb8f548e2fcaa81dca
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:sasha/dog-food" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775900/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - sasha/dog-food eval_info: task: image_binary_classification model: sasha/dog-food-vit-base-patch16-224-in21k metrics: ['matthews_correlation'] dataset_name: sasha/dog-food dataset_config: sasha--dog-food dataset_split: train col_mapping: image: image target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Image Classification * Model: sasha/dog-food-vit-base-patch16-224-in21k * Dataset: sasha/dog-food * Config: sasha--dog-food * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785901
2022-08-26T14:55:39.000Z
null
false
113d1a02c1000ed7d2fc83ea05b793aedf45ed04
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785901/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion metrics: ['matthews_correlation'] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785902
2022-08-26T14:55:44.000Z
null
false
7b656d3d66a90c5f20d5c39934ffdc4a7fca1b66
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785902/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: Ahmed007/distilbert-base-uncased-finetuned-emotion metrics: ['matthews_correlation'] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Ahmed007/distilbert-base-uncased-finetuned-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-emotion-04ae905d-13795904
2022-08-26T15:05:37.000Z
null
false
f806a9562420f08f3ac7be388014a057449722f5
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-04ae905d-13795904/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: tbasic5/distilbert-base-uncased-finetuned-emotion metrics: [] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: tbasic5/distilbert-base-uncased-finetuned-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
asaxena1990
null
null
null
false
1
false
asaxena1990/dummyset2
2022-08-26T15:12:01.000Z
null
false
d9e7c98518e605a1caf45c3391939d2416aa0616
[]
[ "license:cc-by-nc-sa-4.0" ]
https://huggingface.co/datasets/asaxena1990/dummyset2/resolve/main/README.md
--- license: cc-by-nc-sa-4.0 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-bddd30a5-13805905
2022-08-26T15:27:25.000Z
null
false
aacf079fc5d248f979e4a1c7dedf1fcdc07a2b69
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-bddd30a5-13805905/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: 123tarunanand/roberta-base-finetuned metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: 123tarunanand/roberta-base-finetuned * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-glue-fa8727be-13825907
2022-08-26T16:43:30.000Z
null
false
86cb54e837d8bd67b8432be7b4a7a4e73f64535f
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-fa8727be-13825907/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: autoevaluate/glue-mrpc metrics: [] dataset_name: glue dataset_config: mrpc dataset_split: test col_mapping: text1: sentence1 text2: sentence2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-mrpc * Dataset: glue * Config: mrpc * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-c8bb9099-11
2022-08-26T19:54:42.000Z
null
false
5eb65ec3e766cf83f00e4bd20d7f214dfee652da
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:autoevaluate/zero-shot-classification-sample" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-c8bb9099-11/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - autoevaluate/zero-shot-classification-sample eval_info: task: zero_shot_classification model: autoevaluate/zero-shot-classification metrics: [] dataset_name: autoevaluate/zero-shot-classification-sample dataset_config: autoevaluate--zero-shot-classification-sample dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
BDas
null
----ArabicNLPDataset----
The dataset, prepared in Arabic, includes 10,000 test, 10,000 validation and 80,000 training examples. The data is composed of customer comments from e-commerce sites.
false
61
false
BDas/ArabicNLPDataset
2022-09-26T18:52:01.000Z
null
false
322604b436887a56f8cbcdd4ed3ecf2e60a2a488
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:ar", "license:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label...
https://huggingface.co/datasets/BDas/ArabicNLPDataset/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ar license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: 'ArabicNLPDataset' --- # Dataset Card for "ArabicNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/BihterDass/ArabicTextClassificationDataset] - **Repository:** [https://github.com/BihterDass/ArabicTextClassificationDataset] - **Size of downloaded dataset files:** 23.5 MB - **Size of the generated dataset:** 23.5 MB ### Dataset Summary The dataset was compiled from user comments on e-commerce sites. It consists of 10,000 validation, 10,000 test and 80,000 training examples. Data were classified into 3 classes (positive (pos), negative (neg) and natural (nor)). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### arabic-dataset-v1 - **Size of downloaded dataset files:** 23.5 MB - **Size of the generated dataset:** 23.5 MB ### Data Fields The data fields are the same among all splits. #### arabic-dataset-v-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
allenai
null
null
null
false
1
false
allenai/ms2_sparse_max
2022-11-04T00:48:15.000Z
multi-document-summarization
false
8077caffc0d89430c15479f250bdb7774e3bac7a
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "task_categories:summarization", "task_...
https://huggingface.co/datasets/allenai/ms2_sparse_max/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-MS^2 - extended|other-Cochrane task_categories: - summarization - text2text-generation task_ids: - summarization-other-query-based-summarization - summarization-other-query-based-multi-document-summarization - summarization-other-scientific-documents-summarization paperswithcode_id: multi-document-summarization pretty_name: MSLR Shared Task --- This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25` Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.378 | 0.1827 | 0.1559 | 0.2188 |
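The sparse-retrieval dataset cards above describe a BM25 pipeline run through PyTerrier. As a rough illustration of what BM25 scoring does under the hood (this is a minimal pure-Python sketch with made-up toy documents, not the actual PyTerrier implementation used to build these datasets):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document against the query with classic BM25.

    docs: list of token lists; query_terms: list of tokens.
    Returns (doc_index, score) pairs sorted best-first.
    """
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for i, doc in enumerate(docs):
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * num / den
        scores.append((i, score))
    return sorted(scores, key=lambda x: x[1], reverse=True)

# hypothetical toy corpus; in the datasets above the corpus is the
# union of all title+abstract documents across splits
docs = [
    "bm25 ranks documents by term frequency".split(),
    "retrieval with sparse methods like bm25".split(),
    "dense retrieval uses neural encoders".split(),
]
ranking = bm25_scores("sparse bm25 retrieval".split(), docs)
# → the second document, matching all three query terms, ranks first
```

The actual pipelines retain the top `k` documents of such a ranking per example, with `k` chosen by the card's stated top-k strategy.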
allenai
null
null
null
false
5
false
allenai/multinews_sparse_max
2022-11-12T00:15:32.000Z
multi-news
false
3c14c1694fea0f8466712a252a62f4caaf9e061d
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:summarization", "task_ids:news-articles-summarization" ]
https://huggingface.co/datasets/allenai/multinews_sparse_max/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual pretty_name: Multi-News size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: - news-articles-summarization paperswithcode_id: multi-news train-eval-index: - config: default task: summarization task_id: summarization splits: train_split: train eval_split: test col_mapping: document: text summary: target metrics: - type: rouge name: Rouge --- This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `summary` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10` Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.8775 | 0.7480 | 0.2187 | 0.8250 |
allenai
null
null
null
false
1
false
allenai/ms2_sparse_mean
2022-11-04T00:27:35.000Z
multi-document-summarization
false
42620d2817bbfdda6b54c02e91e06280aed1736e
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "task_categories:summarization", "task_...
https://huggingface.co/datasets/allenai/ms2_sparse_mean/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-MS^2 - extended|other-Cochrane task_categories: - summarization - text2text-generation task_ids: - summarization-other-query-based-summarization - summarization-other-query-based-multi-document-summarization - summarization-other-scientific-documents-summarization paperswithcode_id: multi-document-summarization pretty_name: MSLR Shared Task --- This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==17` Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3780 | 0.1827 | 0.1815 | 0.1792 |
allenai
null
null
null
false
2
false
allenai/ms2_sparse_oracle
2022-11-04T00:47:22.000Z
multi-document-summarization
false
4d49eb87fcfcb496794b1c23c05252a744335654
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "task_categories:summarization", "task_...
https://huggingface.co/datasets/allenai/ms2_sparse_oracle/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-MS^2 - extended|other-Cochrane task_categories: - summarization - text2text-generation task_ids: - summarization-other-query-based-summarization - summarization-other-query-based-multi-document-summarization - summarization-other-scientific-documents-summarization paperswithcode_id: multi-document-summarization pretty_name: MSLR Shared Task --- This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3780 | 0.1827 | 0.1827 | 0.1827 |
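The cards in this family differ only in their top-k strategy: `"max"`, `"mean"`, or `"oracle"`. A small sketch of how `k` is derived under each strategy, using hypothetical per-example document counts:

```python
def top_k(strategy, docs_per_example):
    """Number of documents to retrieve per example under each strategy.

    "max" and "mean" yield a single dataset-wide k; "oracle" yields a
    per-example k equal to the original number of input documents.
    """
    if strategy == "max":
        return max(docs_per_example)
    if strategy == "mean":
        return round(sum(docs_per_example) / len(docs_per_example))
    if strategy == "oracle":
        return list(docs_per_example)
    raise ValueError(f"unknown strategy: {strategy}")

counts = [2, 3, 10, 1]        # hypothetical input-document counts
top_k("max", counts)          # 10
top_k("mean", counts)         # 4
top_k("oracle", counts)       # [2, 3, 10, 1]
```

Under `"oracle"`, Precision@k, Recall@k and Rprec coincide (as in the table above), since each example retrieves exactly as many documents as it originally had.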
allenai
null
null
null
false
3
false
allenai/multinews_sparse_mean
2022-11-12T00:15:19.000Z
multi-news
false
62858daa311434d8f3531bd4e587ba9f86a9bfba
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "task_categories:summarization", "task_ids:news-articles-summarization" ]
https://huggingface.co/datasets/allenai/multinews_sparse_mean/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual pretty_name: Multi-News size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: - news-articles-summarization paperswithcode_id: multi-news train-eval-index: - config: default task: summarization task_id: summarization splits: train_split: train eval_split: test col_mapping: document: text summary: target metrics: - type: rouge name: Rouge --- This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used: - __query__: The `summary` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==3` Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.8775 | 0.7480 | 0.6370 | 0.7443 |
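The metrics reported in the cards above (Precision@k, Recall@k, Rprec) follow their standard definitions; a sketch of those definitions (not the exact evaluation code used to produce the tables):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved docs that are relevant."""
    hits = sum(1 for d in retrieved[:k] if d in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant docs found in the top-k retrieved."""
    hits = sum(1 for d in retrieved[:k] if d in relevant)
    return hits / len(relevant)

def r_precision(retrieved, relevant):
    """Precision at R, where R is the number of relevant docs."""
    return precision_at_k(retrieved, relevant, len(relevant))
```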
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-18ef74e8-21
2022-08-27T00:14:03.000Z
null
false
21dbd148b6f8581ce774fbe1a84d225aa0dd5a06
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:autoevaluate/zero-shot-classification-sample" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-18ef74e8-21/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - autoevaluate/zero-shot-classification-sample eval_info: task: text_zero_shot_classification model: autoevaluate/zero-shot-classification metrics: [] dataset_name: autoevaluate/zero-shot-classification-sample dataset_config: autoevaluate--zero-shot-classification-sample dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
BDas
null
----EnglishNLPDataset----
The dataset, prepared in English, includes 10,000 test, 10,000 validation, and 80,000 train examples. The data is composed of customer comments and created from e-commerce sites.
false
6
false
BDas/EnglishNLPDataset
2022-08-27T11:13:01.000Z
null
false
a3692ff6d4f7958e6eea80025ac7ae9f4472cfe0
[]
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "language:en", "license:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label...
https://huggingface.co/datasets/BDas/EnglishNLPDataset/resolve/main/README.md
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - multi-class-classification - multi-label-classification pretty_name: 'EnglishNLPDataset' --- # Dataset Card for "EnglishNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/BihterDass/EnglishTextClassificationDataset] - **Repository:** [https://github.com/BihterDass/EnglishTextClassificationDataset] - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Dataset Summary The dataset was compiled from user comments from e-commerce sites. It consists of 80,000 train, 10,000 validation, and 10,000 test examples. 
Data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### english-dataset-v1 - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Data Fields The data fields are the same among all splits. #### english-dataset-v-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad-d38f255e-13865909
2022-08-27T13:15:49.000Z
null
false
2a76ba3097a5386ab779d20e6a9f86c14de143e0
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-d38f255e-13865909/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: deepset/xlm-roberta-base-squad2 metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/xlm-roberta-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sakamoto](https://huggingface.co/sakamoto) for evaluating this model.
sazibc
null
null
null
false
1
false
sazibc/flowers
2022-08-27T20:19:25.000Z
null
false
cee0bbe45cd41cfbf181459fa786cedc4f075542
[]
[ "license:mit" ]
https://huggingface.co/datasets/sazibc/flowers/resolve/main/README.md
--- license: mit ---
priyank-m
null
null
null
false
103
false
priyank-m/SROIE_2019_text_recognition
2022-08-27T21:38:24.000Z
null
false
04f6537e418eeb88863d617eb27817cc496522d7
[]
[ "language:en", "multilinguality:monolingual", "size_categories:10K<n<100K", "tags:text-recognition", "tags:recognition", "task_categories:image-to-text", "task_ids:image-captioning" ]
https://huggingface.co/datasets/priyank-m/SROIE_2019_text_recognition/resolve/main/README.md
--- annotations_creators: [] language: - en language_creators: [] license: [] multilinguality: - monolingual pretty_name: SROIE_2019_text_recognition size_categories: - 10K<n<100K source_datasets: [] tags: - text-recognition - recognition task_categories: - image-to-text task_ids: - image-captioning --- This dataset was prepared using the Scanned Receipts OCR and Information Extraction (SROIE) dataset. The SROIE dataset contains 973 scanned receipts in English. Cropping the bounding boxes from each of the receipts to generate this text-recognition dataset resulted in 33,626 images for the train set and 18,704 images for the test set. The text annotations for all the images inside a split are stored in a metadata.jsonl file. Usage: from datasets import load_dataset data = load_dataset("priyank-m/SROIE_2019_text_recognition") Source of the raw SROIE dataset: https://www.kaggle.com/datasets/urbikn/sroie-datasetv2
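Since the card above notes that each split's annotations live in a metadata.jsonl file (one JSON object per line), they can also be read directly with the standard library. The `file_name`/`text` field names follow the usual Hugging Face imagefolder convention and are an assumption here:

```python
import json
from pathlib import Path

def load_annotations(split_dir):
    """Read (image file name, transcription) pairs from a split's metadata.jsonl.
    Field names 'file_name' and 'text' are assumed, per the imagefolder convention."""
    pairs = []
    with open(Path(split_dir) / "metadata.jsonl", encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            pairs.append((rec["file_name"], rec["text"]))
    return pairs
```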
jamescalam
null
@InProceedings{huggingface:dataset, title = {Unsplash Lite Dataset 1.2.0 Photos}, author={Unsplash}, year={2022} }
This is a dataset that streams photos data from the Unsplash 25K servers.
false
13
false
jamescalam/unsplash-25k-photos
2022-09-13T13:02:46.000Z
null
false
ae9e759dd31d60479354cc06e4f4291c0c27bbca
[]
[ "annotations_creators:found", "language:en", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "tags:images", "tags:unsplash", "tags:photos", "task_categories:image-to-image", "task_categories:image-classification", "task_categories:image-to-text", "task_c...
https://huggingface.co/datasets/jamescalam/unsplash-25k-photos/resolve/main/README.md
--- annotations_creators: - found language: - en language_creators: - found license: [] multilinguality: - monolingual pretty_name: Unsplash Lite 25K Photos size_categories: - 10K<n<100K source_datasets: [] tags: - images - unsplash - photos task_categories: - image-to-image - image-classification - image-to-text - text-to-image - zero-shot-image-classification task_ids: [] --- # Unsplash Lite Dataset Photos This dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file `photos.tsv000`. The dataset builder script streams this data directly from the Unsplash 25K dataset source. For full details, please see the [Unsplash Dataset GitHub repo](https://github.com/unsplash/datasets), or read the preview (copied from the repo) below. --- # The Unsplash Dataset ![](https://unsplash.com/blog/content/images/2020/08/dataheader.jpg) The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning. The Unsplash Dataset is offered in two datasets: - the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches - the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/). We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. 
You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets. For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data). ## Download ### Lite Dataset The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). [⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw] ### Full Dataset The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43 GB raw). ## Documentation See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md). ## Usage You can follow these examples to load the dataset in these common formats: - [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql) - [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python) - [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example) ## Share your work We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data. We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [data@unsplash.com](mailto:data@unsplash.com). 
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data). ---- The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers). ![](https://unsplash.com/blog/content/images/2020/08/footer-alt.jpg)
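The `photos.tsv000` file the card above streams from is plain tab-separated text with a header row, so it can be parsed with the standard `csv` module. A minimal sketch; the column names in the example are illustrative, not the full Unsplash schema:

```python
import csv
import io

def iter_photo_rows(tsv_text):
    """Yield one dict per photo record from TSV content such as photos.tsv000
    (tab-delimited, first row holds the column names)."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    yield from reader
```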
teticio
null
null
null
false
17
false
teticio/audio-diffusion-breaks-256
2022-11-09T10:50:38.000Z
null
false
82e568dfe8ee3e016c18290dbbbddd010479eb87
[]
[ "size_categories:10K<n<100K", "tags:audio", "tags:spectrograms", "task_categories:image-to-image" ]
https://huggingface.co/datasets/teticio/audio-diffusion-breaks-256/resolve/main/README.md
--- annotations_creators: [] language: [] language_creators: [] license: [] multilinguality: [] pretty_name: Mel spectrograms of sampled music size_categories: - 10K<n<100K source_datasets: [] tags: - audio - spectrograms task_categories: - image-to-image task_ids: [] --- 30,000 256x256 mel spectrograms of 5 second samples that have been used in music, sourced from [WhoSampled](https://whosampled.com) and [YouTube](https://youtube.com). The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models. ``` x_res = 256 y_res = 256 sample_rate = 22050 n_fft = 2048 hop_length = 512 ```
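A quick sanity check on the parameters listed above, assuming one spectrogram column per hop and ignoring framing edge effects: a 256-column spectrogram covers 256 × 512 samples, i.e. just under six seconds at 22050 Hz, so the "5 second" figure is approximate:

```python
# Geometry implied by the spectrogram parameters in the card above.
x_res = 256          # spectrogram width in time frames
hop_length = 512     # samples between successive frames
sample_rate = 22050  # Hz

samples_per_slice = x_res * hop_length           # 131072 samples
duration_s = samples_per_slice / sample_rate     # ~5.94 s per 256x256 slice
print(f"{samples_per_slice} samples -> {duration_s:.2f} s per spectrogram")
```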
anonymousdeepcc
null
null
null
false
1
false
anonymousdeepcc/DeepCC
2022-09-05T01:15:43.000Z
null
false
214dbf214e508c61aeaf431ef753e60ab5e263aa
[]
[]
https://huggingface.co/datasets/anonymousdeepcc/DeepCC/resolve/main/README.md
In this repository, we have all the datasets and source code used to develop DeepCC. Below, we describe each file contained in the repository: 1) The raw dataset used for DeepCC, Py150, named DeepCCDatasetPy150.zip 2) The dataset extracted from the Py150 dataset, inside processed_dataset.zip 3) The dataset extraction code, inside processed_dataset.zip 4) The source code to build the model pipeline, train the model, and evaluate the model, inside DeepCC.ipynb.
QuoQA-NLP
null
null
null
false
1
false
QuoQA-NLP/KoCC12M
2022-08-28T06:44:47.000Z
null
false
5cadc7b30860162ea82aa2729102c02485d152b3
[]
[]
https://huggingface.co/datasets/QuoQA-NLP/KoCC12M/resolve/main/README.md
CC12M from flax-community/conceptual-captions-12, translated from English to Korean.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-squad_v2-c78baf7d-13885910
2022-08-28T10:52:35.000Z
null
false
71d5c298b9dc85f34b468eb393301fa436405bbb
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-c78baf7d-13885910/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: nlpconnect/deberta-v3-xsmall-squad2 metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nlpconnect/deberta-v3-xsmall-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ankur310974](https://huggingface.co/ankur310974) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-squad-4690f1f9-13895911
2022-08-28T10:52:24.000Z
null
false
7ad42c0cbd4e102579d6323231e05a87c739318b
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad-4690f1f9-13895911/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad eval_info: task: extractive_question_answering model: nlpconnect/deberta-v3-xsmall-squad2 metrics: [] dataset_name: squad dataset_config: plain_text dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nlpconnect/deberta-v3-xsmall-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model.
kokhayas
null
null
null
false
1
false
kokhayas/english-debate-motions-utds
2022-08-30T03:18:43.000Z
null
false
2afbf37414683a8ad881fe0dc8913b1f246b9aa7
[]
[]
https://huggingface.co/datasets/kokhayas/english-debate-motions-utds/resolve/main/README.md
English Debate Motions gathered by the University of Tokyo Debate Society. @misc{english-debate-motions-utds, title={english-debate-motions-utds}, author={members of the University of Tokyo Debate Society}, year={2022} }
unpredictable
null
null
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
false
1
false
unpredictable/unpredictable_full
2022-08-28T18:42:31.000Z
null
false
72aa912bbf09c96c6cf38bb76bec24e8d8a82367
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:te...
https://huggingface.co/datasets/unpredictable/unpredictable_full/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-full size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-full" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset 
consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. 
### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. 
### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
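The card above notes that each task file's examples "can be concatenated as a few-shot task". A minimal sketch of that concatenation; the `Input:`/`Output:` template is illustrative, not the paper's exact prompt format:

```python
def build_fewshot_prompt(examples, query_input):
    """Concatenate (input, output) demonstrations, then append the query input
    with an empty output slot for the model to complete."""
    parts = []
    for ex in examples:
        parts.append(f"Input: {ex['input']}\nOutput: {ex['output']}")
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)
```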
unpredictable
null
null
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
false
1
false
unpredictable/unpredictable_5k
2022-08-28T18:13:41.000Z
null
false
ec38db9a85ca5dca7ef9211bbb73cc27e1a47208
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:te...
https://huggingface.co/datasets/unpredictable/unpredictable_5k/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-5k size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists 
of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. 
### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. 
### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
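The tables-to-tasks idea described in the card above (choose one table column as the target; the remaining cells of each row form the input) can be sketched with a minimal Python snippet. The table contents and the `table_to_task` helper here are invented for illustration and are not the paper's actual conversion pipeline:

```python
# Hypothetical sketch of converting a web table into few-shot examples:
# one column is chosen as the output, and the remaining cells of each
# row are serialized as the input. The table below is invented.
def table_to_task(header, rows, output_col):
    """Convert a table into few-shot examples with one target column."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        input_cells = [f"[{h}] {c}" for h, c in zip(header, row) if h != output_col]
        examples.append({"input": " ".join(input_cells), "output": row[out_idx]})
    return examples

header = ["Player", "Team", "Position"]
rows = [["Ann", "Hawks", "Guard"], ["Bo", "Owls", "Center"]]
tasks = table_to_task(header, rows, "Position")
print(tasks)
```

Each resulting dictionary matches the 'input'/'output' shape of the examples described in the Data Instances section.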
unpredictable
null
null
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
false
1
false
unpredictable/unpredictable_support-google-com
2022-08-28T18:25:26.000Z
null
false
76db35834d995d0bd5d14d1352277461fe3f225f
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:te...
https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-support-google-com size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The
UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. 
### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. 
### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
unpredictable
null
null
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card.
false
1
false
unpredictable/unpredictable_unique
2022-08-28T18:26:18.000Z
null
false
7b0b1a6c2c61cc1f9304725ceb54c826be65816f
[]
[ "annotations_creators:no-annotation", "language_creators:found", "language:en", "license:apache-2.0", "multilinguality:monolingual", "size_categories:100K<n<1M", "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:te...
https://huggingface.co/datasets/unpredictable/unpredictable_unique/resolve/main/README.md
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-unique size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset
consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. 
### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. 
### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
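The 'task'/'input'/'options'/'output' record layout described in the UnpredicTable cards above, and the concatenation of examples into a few-shot prompt, can be sketched as follows. The JSON Lines records here are invented for illustration and are not drawn from the actual dataset:

```python
import json

# Hypothetical JSON Lines records in the UnpredicTable format described
# above; the field values are invented for illustration.
jsonl = """\
{"task": "color_of_fruit", "input": "fruit: banana", "options": ["yellow", "red"], "output": "yellow"}
{"task": "color_of_fruit", "input": "fruit: cherry", "options": ["yellow", "red"], "output": "red"}
"""

def build_few_shot_prompt(examples, query_input):
    """Concatenate labeled examples into a few-shot prompt ending with the query."""
    parts = [f"{ex['input']}\nAnswer: {ex['output']}" for ex in examples]
    parts.append(f"{query_input}\nAnswer:")
    return "\n\n".join(parts)

examples = [json.loads(line) for line in jsonl.splitlines()]
prompt = build_few_shot_prompt(examples[:1], examples[1]["input"])
print(prompt)
```

For multiple-choice classification, the 'options' list would additionally be rendered into the prompt so the model can choose among the candidate classes.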
williamlee
null
null
null
false
1
false
williamlee/test2
2022-08-29T02:00:50.000Z
null
false
103c2fe8cb50ef4f095da366e90254008bae0bb8
[]
[ "license:apache-2.0" ]
https://huggingface.co/datasets/williamlee/test2/resolve/main/README.md
--- license: apache-2.0 ---
Lo
null
null
null
false
1
false
Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia-finetune
2022-08-29T08:27:33.000Z
null
false
5f17b065b8739c725a84d3a6965ed7f040cdae04
[]
[ "language:en", "license:cc-by-sa-3.0", "multilinguality:monolingual" ]
https://huggingface.co/datasets/Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia-finetune/resolve/main/README.md
--- language: - en license: - cc-by-sa-3.0 multilinguality: - monolingual --- The Wikipedia finetune data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the "20200501.en" revision of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) on Hugging Face.
Lo
null
null
null
false
1
false
Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT
2022-08-29T08:30:05.000Z
null
false
d6fe56688ae0435f11bcc1860fe7de01e0d3ffe4
[]
[ "language:en", "license:mit", "multilinguality:monolingual" ]
https://huggingface.co/datasets/Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT/resolve/main/README.md
--- language: - en license: - mit multilinguality: - monolingual --- The LXMERT text train data used to train BERT-base baselines and adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the data made available by the [LXMERT repo](https://github.com/airsplay/lxmert).
Lo
null
null
null
false
1
false
Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT-finetune
2022-08-29T08:31:45.000Z
null
false
ea1623c9c1f7b042aff76cbcf1ca5c0a3ef8e114
[]
[ "language:en", "license:mit", "multilinguality:monolingual" ]
https://huggingface.co/datasets/Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT-finetune/resolve/main/README.md
--- language: - en license: - mit multilinguality: - monolingual --- The LXMERT text finetune data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the data made available by the [LXMERT repo](https://github.com/airsplay/lxmert).
ashwinperti
null
null
null
false
1
false
ashwinperti/yelpnew
2022-08-29T08:38:42.000Z
null
false
3d2ddf11220d67832edb32043e9abdbfb8d035af
[]
[ "license:eupl-1.1" ]
https://huggingface.co/datasets/ashwinperti/yelpnew/resolve/main/README.md
--- license: eupl-1.1 ---
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-glue-f7900ebf-13965913
2022-08-29T09:37:29.000Z
null
false
683b752aaead07750f544d18639ee871f912a697
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-f7900ebf-13965913/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: binary_classification model: autoevaluate/binary-classification metrics: [] dataset_name: glue dataset_config: sst2 dataset_split: validation col_mapping: text: sentence target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-glue-e9a4b61a-13985914
2022-08-29T10:05:51.000Z
null
false
513ed4cfbc29df4be9c167bef472b3a4aeae7dca
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-e9a4b61a-13985914/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: autoevaluate/glue-mrpc metrics: [] dataset_name: glue dataset_config: mrpc dataset_split: validation col_mapping: text1: sentence1 text2: sentence2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-glue-4805e982-13995915
2022-08-29T10:07:21.000Z
null
false
6b3840bc7bb94a480e42c79200caf31a3b598fd1
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-4805e982-13995915/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: autoevaluate/glue-qqp metrics: [] dataset_name: glue dataset_config: qqp dataset_split: validation col_mapping: text1: question1 text2: question2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
1
false
autoevaluate/autoeval-staging-eval-project-autoevaluate__squad-sample-11b52eb1-14005916
2022-08-29T10:25:07.000Z
null
false
9cee6f8497cb95ce974e7e7e511c347c5a572d8f
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:autoevaluate/squad-sample" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-autoevaluate__squad-sample-11b52eb1-14005916/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - autoevaluate/squad-sample eval_info: task: extractive_question_answering model: autoevaluate/extractive-question-answering metrics: [] dataset_name: autoevaluate/squad-sample dataset_config: autoevaluate--squad-sample dataset_split: test col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: autoevaluate/squad-sample * Config: autoevaluate--squad-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-f16e6c43-14015917
2022-08-29T12:07:18.000Z
null
false
ec3c96f7624cc7b419297c51779b9800826a818c
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-f16e6c43-14015917/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: mrm8488/deberta-v3-small-finetuned-mrpc metrics: [] dataset_name: glue dataset_config: mrpc dataset_split: validation col_mapping: text1: sentence1 text2: sentence2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-small-finetuned-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-emotion-af6a16fe-14025918
2022-08-29T12:07:19.000Z
null
false
64dea239da2de88405fb3120dc26f511eaff7891
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:emotion" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-emotion-af6a16fe-14025918/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - emotion eval_info: task: multi_class_classification model: anindabitm/sagemaker-distilbert-emotion metrics: [] dataset_name: emotion dataset_config: default dataset_split: test col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: anindabitm/sagemaker-distilbert-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-samsum-afdf25d0-14035919
2022-08-29T12:27:41.000Z
null
false
3160df47c1c1eef5087fa86fb551b61adfe2f552
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:samsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-samsum-afdf25d0-14035919/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - samsum eval_info: task: summarization model: ARTeLab/it5-summarization-fanpage metrics: [] dataset_name: samsum dataset_config: samsum dataset_split: test col_mapping: text: dialogue target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-fanpage * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-samsum-afdf25d0-14035921
2022-08-29T12:29:49.000Z
null
false
a4895d7e5d6f96414fce19ef999a68f0adc509e9
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:samsum" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-samsum-afdf25d0-14035921/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - samsum eval_info: task: summarization model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch metrics: [] dataset_name: samsum dataset_config: samsum dataset_split: test col_mapping: text: dialogue target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-squad_v2-82949658-14045922
2022-08-29T12:29:21.000Z
null
false
60630ce757b999088709d5d6816592c9b7fdbd89
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-82949658-14045922/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: Adrian/distilbert-base-uncased-finetuned-squad metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Adrian/distilbert-base-uncased-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-squad_v2-82949658-14045923
2022-08-29T12:30:22.000Z
null
false
f94df08f28998f2e61b9017f89692664e0530679
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:squad_v2" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-squad_v2-82949658-14045923/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - squad_v2 eval_info: task: extractive_question_answering model: Aiyshwariya/bert-finetuned-squad metrics: [] dataset_name: squad_v2 dataset_config: squad_v2 dataset_split: validation col_mapping: context: context question: question answers-text: answers.text answers-answer_start: answers.answer_start --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-wmt16-a5e2262a-14055924
2022-08-29T12:28:47.000Z
null
false
c82c3e92c8ce1011435ff34246d830634d4f3ab3
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:wmt16" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-wmt16-a5e2262a-14055924/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - wmt16 eval_info: task: translation model: Lvxue/finetuned-mt5-small-10epoch metrics: [] dataset_name: wmt16 dataset_config: de-en dataset_split: test col_mapping: source: translation.en target: translation.de --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Translation * Model: Lvxue/finetuned-mt5-small-10epoch * Dataset: wmt16 * Config: de-en * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-c88eb4d4-14065928
2022-08-29T12:27:58.000Z
null
false
d04e8305a7b1fe40ced830c06b1b435aa0252f6a
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-c88eb4d4-14065928/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: binary_classification model: mrm8488/deberta-v3-small-finetuned-cola metrics: [] dataset_name: glue dataset_config: cola dataset_split: validation col_mapping: text: sentence target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: mrm8488/deberta-v3-small-finetuned-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-ca80bfc9-14105932
2022-08-29T12:31:56.000Z
null
false
a45bcb2ef853109b882d5f6c7cb99c3bd54bb223
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-ca80bfc9-14105932/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: mrm8488/deberta-v3-large-finetuned-mnli metrics: [] dataset_name: glue dataset_config: mnli dataset_split: validation_matched col_mapping: text1: premise text2: hypothesis target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-large-finetuned-mnli * Dataset: glue * Config: mnli * Split: validation_matched To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-91d4fe29-14115933
2022-08-29T12:28:56.000Z
null
false
81d0f6caa3ab9c6300a0bab43cfb0fdc10d53b05
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-91d4fe29-14115933/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: mrm8488/deberta-v3-small-finetuned-qnli metrics: [] dataset_name: glue dataset_config: qnli dataset_split: validation col_mapping: text1: question text2: sentence target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-small-finetuned-qnli * Dataset: glue * Config: qnli * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-f56b6c46-14085930
2022-08-29T12:28:55.000Z
null
false
ca26aa07d44b0cf23ae600e6fcf1690a0c2992c5
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-f56b6c46-14085930/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: Intel/roberta-base-mrpc metrics: [] dataset_name: glue dataset_config: mrpc dataset_split: train col_mapping: text1: sentence1 text2: sentence2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: Intel/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-f1585abe-14095931
2022-08-29T12:31:25.000Z
null
false
3034f92e343d8e9629ba792ece2bfbfb067a5181
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-f1585abe-14095931/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: JeremiahZ/roberta-base-qqp metrics: [] dataset_name: glue dataset_config: qqp dataset_split: validation col_mapping: text1: question1 text2: question2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-f6cacc01-14075929
2022-08-29T12:28:52.000Z
null
false
a4c35f2ecd42cb2bfca9ea1cda04793fae25b6b9
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-f6cacc01-14075929/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: binary_classification model: mrm8488/deberta-v3-small-finetuned-sst2 metrics: [] dataset_name: glue dataset_config: sst2 dataset_split: validation col_mapping: text: sentence target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: mrm8488/deberta-v3-small-finetuned-sst2 * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-67467c9c-14145936
2022-08-29T12:29:38.000Z
null
false
2536141082d13670fa08230b1c7f2cd4c8ad43f1
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-67467c9c-14145936/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: Alireza1044/mobilebert_rte metrics: [] dataset_name: glue dataset_config: rte dataset_split: validation col_mapping: text1: sentence1 text2: sentence2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: Alireza1044/mobilebert_rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate
null
null
null
false
null
false
autoevaluate/autoeval-staging-eval-project-glue-67467c9c-14145935
2022-08-29T12:30:00.000Z
null
false
b090c700a076dcf043522e5ddce467f6add05a67
[]
[ "type:predictions", "tags:autotrain", "tags:evaluation", "datasets:glue" ]
https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-glue-67467c9c-14145935/resolve/main/README.md
--- type: predictions tags: - autotrain - evaluation datasets: - glue eval_info: task: natural_language_inference model: JeremiahZ/roberta-base-rte metrics: [] dataset_name: glue dataset_config: rte dataset_split: validation col_mapping: text1: sentence1 text2: sentence2 target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.