Columns (value types and observed ranges):
- id: string (2–115 chars)
- lastModified: string (24 chars)
- tags: list
- author: string (2–42 chars)
- description: string (0–68.7k chars)
- citation: string (0–10.7k chars)
- cardData: null
- likes: int64 (0–3.55k)
- downloads: int64 (0–10.1M)
- card: string (0–1.01M chars)
HumanCompatibleAI/ppo-Pendulum-v1
2023-10-04T16:52:12.000Z
[ "region:us" ]
HumanCompatibleAI
null
null
null
0
107
--- dataset_info: features: - name: obs sequence: sequence: float32 - name: acts sequence: sequence: float32 - name: infos sequence: string - name: terminal dtype: bool - name: rews sequence: float32 splits: - name: train num_bytes: 2575710 num_examples: 200 download_size: 940375 dataset_size: 2575710 --- # Dataset Card for "ppo-Pendulum-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
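The `dataset_info` block above describes each record as one rollout stored as parallel sequences (`obs`, `acts`, `infos`, `rews`) plus a `terminal` flag. A minimal sketch of deriving per-episode statistics from one such record (field names from the card; the values below are invented):

```python
# One record shaped like the card's features; the numbers are made up toy data.
trajectory = {
    "obs": [[0.1, 0.2, 0.0], [0.15, 0.25, -0.1]],  # sequence of float32 vectors
    "acts": [[0.5], [-0.3]],                        # sequence of float32 vectors
    "infos": ["{}", "{}"],                          # sequence of strings
    "terminal": True,                               # bool
    "rews": [-1.2, -0.8],                           # sequence of float32
}

# Episode length and undiscounted return follow directly from the sequences.
episode_length = len(trajectory["acts"])
undiscounted_return = sum(trajectory["rews"])
print(episode_length, undiscounted_return)  # -> 2 -2.0
```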
allegro/klej-polemo2-out
2022-08-30T06:57:07.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
allegro
null
null
null
0
106
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: 'PolEmo2.0-OUT' size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # klej-polemo2-out ## Description PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% from the medicine and hotel domains. We use the PolEmo2.0 dataset to form two tasks. Both use the same training dataset, i.e., reviews from medicine and hotel domains, but are evaluated on a different test set. **Out-of-Domain** is the second task, and we test the model on out-of-domain reviews, i.e., from product and university domains. Since the original test sets for those domains are scarce (50 reviews each), we decided to use the original out-of-domain training set of 900 reviews for testing purposes and create a new split of development and test sets. As a result, the task consists of 1000 reviews, comparable in size to the in-domain test dataset of 1400 reviews. ## Tasks (input, output, and metrics) The task is to predict the correct label of the review. **Input** ('*text*' column): sentence **Output** ('*target*' column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous) **Domain**: Online reviews **Measurements**: Accuracy **Example**: Input: `Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . 
Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .` Input (translated by DeepL): `The doctor recommended me an alternative treatment to the current one , so I do not yet give the highest rating ( we will see how effective it turns out to be ) . To the doctor I have no reservations : very professional and cultured . The only minus is about the office , which is not modern , which may discourage patients .` Output: `amb` (ambiguous) ## Data splits | Subset | Cardinality | |:-----------|--------------:| | train | 5783 | | test | 722 | | validation | 723 | ## Class distribution | Class | Sentiment | train | validation | test | |:------|:----------|------:|-----------:|------:| | minus | negative | 0.379 | 0.334 | 0.368 | | plus | positive | 0.271 | 0.332 | 0.302 | | amb | ambiguous | 0.182 | 0.332 | 0.328 | | zero | neutral | 0.168 | 0.002 | 0.002 | ## Citation ``` @inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Mi{\l}kowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. 
We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).", } ``` ## License ``` Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-out) [Source](https://clarin-pl.eu/dspace/handle/11321/710) [Paper](https://aclanthology.org/K19-1092/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("allegro/klej-polemo2-out") pprint(dataset['train'][0]) # {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie ' # 'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od ' # 'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże ' # 'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy ' # 'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . ' # 'Każda wizyta u specjalisty jest u niego konsultowana i uważam , ' # 'że jest lepszy od każdego z nich . Mamy do Niego prawie ' # 'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze ' # 'jeszcze napisać . 
Niestety , ma bardzo dużo pacjentów , jest ' # 'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) ' # 'i dostęp do niego jest trudny , ale zawsze możliwy .', # 'target': '__label__meta_plus_m'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("allegro/klej-polemo2-out") dataset = dataset.class_encode_column("target") references = dataset["test"]["target"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average="macro") pprint(acc_score) pprint(f1_score) # {'accuracy': 0.2894736842105263} # {'f1': 0.2484406098784191} ```
echarlaix/vqa
2022-02-01T10:45:13.000Z
[ "license:apache-2.0", "region:us" ]
echarlaix
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
@inproceedings{antol2015vqa, title={Vqa: Visual question answering}, author={Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C Lawrence and Parikh, Devi}, booktitle={Proceedings of the IEEE international conference on computer vision}, pages={2425--2433}, year={2015} }
null
1
106
--- license: apache-2.0 ---
jamescalam/reddit-python
2022-04-25T12:41:35.000Z
[ "region:us" ]
jamescalam
null
null
null
2
106
# Python Subreddit Dataset containing data scraped from the [Python subreddit](https://www.reddit.com/r/python).
AhmedSSabir/Japanese-wiki-dump-sentence-dataset
2023-07-11T12:22:09.000Z
[ "task_categories:sentence-similarity", "task_categories:text-classification", "task_categories:text-generation", "size_categories:1M<n<10M", "language:ja", "region:us" ]
AhmedSSabir
null
null
null
1
106
--- task_categories: - sentence-similarity - text-classification - text-generation language: - ja size_categories: - 1M<n<10M --- # Dataset 5M (5,121,625) clean Japanese full sentences with context. This dataset can be used for learning unsupervised semantic similarity, etc.
CIRAL/ciral-corpus
2023-06-27T19:01:03.000Z
[ "language:ha", "language:so", "language:sw", "language:yo", "license:apache-2.0", "region:us" ]
CIRAL
null
null
null
0
106
--- language: - ha - so - sw - yo multilinguality: - multilingual task_categories: - text-retrieval license: apache-2.0 viewer: true --- # Dataset Summary CIRAL is a collection for cross-lingual information retrieval research across four (4) African languages. The collection comprises English queries and query-passage relevance judgements manually annotated by native speakers. This dataset stores passages which have been culled from news websites for CIRAL. ## Dataset Structure This dataset is configured by language. An example of a passage data entry is ```json { "docid": "DOCID#0#0", "title": "This is the title of a sample passage", "text": "This is the content of a sample passage", "url": "https://this-is-a-sample-url.com" } ``` ## Load Dataset An example to load the dataset ```python from datasets import load_dataset language = "hausa" dataset = load_dataset("ciral/ciral-corpus", language) ``` ## Citation ...
PNLPhub/snappfood-sentiment-analysis
2023-09-03T07:22:13.000Z
[ "region:us" ]
PNLPhub
null
null
null
0
106
--- dataset_info: features: - name: comment dtype: string - name: label dtype: string - name: label_id dtype: float64 splits: - name: train num_bytes: 9448245 num_examples: 52110 - name: validation num_bytes: 1499484 num_examples: 8337 - name: test num_bytes: 1627356 num_examples: 9033 download_size: 11880991 dataset_size: 12575085 ---
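The `dataset_info` above gives the split sizes directly, so the train/validation/test ratio can be checked with a line of arithmetic (sizes taken from the card):

```python
# Split sizes copied from the card's dataset_info block.
splits = {"train": 52110, "validation": 8337, "test": 9033}

total = sum(splits.values())
train_fraction = splits["train"] / total  # share of examples used for training
print(total, round(train_fraction, 2))    # -> 69480 0.75
```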
PNLPhub/Persian-News
2023-06-20T11:05:30.000Z
[ "license:apache-2.0", "region:us" ]
PNLPhub
A dataset of various news articles scraped from different online news agencies’ websites. The total number of articles is 16,438, spread over eight different classes.
@article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} }
null
0
106
--- license: apache-2.0 ---
germank/hh-rlhf_with_features_flan_t5_large
2023-07-24T14:19:59.000Z
[ "region:us" ]
germank
null
null
null
0
106
--- dataset_info: features: - name: chosen dtype: string - name: rejected dtype: string - name: helpfulness_chosen dtype: int64 - name: helpfulness_rejected dtype: int64 - name: specificity_chosen dtype: int64 - name: specificity_rejected dtype: int64 - name: intent_chosen dtype: int64 - name: intent_rejected dtype: int64 - name: factuality_chosen dtype: int64 - name: factuality_rejected dtype: int64 - name: easy-to-understand_chosen dtype: int64 - name: easy-to-understand_rejected dtype: int64 - name: relevance_chosen dtype: int64 - name: relevance_rejected dtype: int64 - name: readability_chosen dtype: int64 - name: readability_rejected dtype: int64 - name: enough-detail_chosen dtype: int64 - name: enough-detail_rejected dtype: int64 - name: biased:_chosen dtype: int64 - name: biased:_rejected dtype: int64 - name: fail-to-consider-individual-preferences_chosen dtype: int64 - name: fail-to-consider-individual-preferences_rejected dtype: int64 - name: repetetive_chosen dtype: int64 - name: repetetive_rejected dtype: int64 - name: fail-to-consider-context_chosen dtype: int64 - name: fail-to-consider-context_rejected dtype: int64 - name: too-long_chosen dtype: int64 - name: too-long_rejected dtype: int64 - name: human dtype: string - name: assistant_chosen dtype: string - name: assistant_rejected dtype: string - name: log_score_chosen dtype: float64 - name: log_score_rejected dtype: float64 - name: labels dtype: string splits: - name: train num_bytes: 14434424 num_examples: 9574 - name: test num_bytes: 14378349 num_examples: 9574 download_size: 15748504 dataset_size: 28812773 --- # Dataset Card for "hh-rlhf_with_features_flan_t5_large" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
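Each example above carries paired `*_chosen` / `*_rejected` feature columns (helpfulness, relevance, etc.). A minimal sketch of one natural use of them: measuring how often the chosen response scores at least as high as the rejected one per attribute. The rows below are invented toy data, not taken from the dataset:

```python
# Toy rows mimicking the paired feature columns described in dataset_info.
rows = [
    {"helpfulness_chosen": 4, "helpfulness_rejected": 2,
     "relevance_chosen": 3, "relevance_rejected": 3},
    {"helpfulness_chosen": 1, "helpfulness_rejected": 3,
     "relevance_chosen": 5, "relevance_rejected": 2},
]

def agreement(rows, attribute):
    """Fraction of pairs where the chosen response scores >= the rejected one."""
    wins = sum(r[f"{attribute}_chosen"] >= r[f"{attribute}_rejected"] for r in rows)
    return wins / len(rows)

print(agreement(rows, "helpfulness"))  # -> 0.5
print(agreement(rows, "relevance"))    # -> 1.0
```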
merkol/ffhq-256
2023-08-28T11:26:44.000Z
[ "region:us" ]
merkol
null
null
null
0
106
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 7358464050.0 num_examples: 70000 download_size: 7407340570 dataset_size: 7358464050.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ffhq-256" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
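A quick sanity check on the `dataset_info` numbers above: dividing `num_bytes` by `num_examples` gives the average serialized size per image (roughly 103 KiB per 256x256 face, which is plausible for compressed images):

```python
# Numbers copied from the card's dataset_info block.
num_bytes = 7358464050
num_examples = 70000

avg_bytes = num_bytes / num_examples        # average stored size per image
print(round(avg_bytes / 1024))              # -> 103 (KiB per image)
```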
kineticseas/sql-test
2023-09-26T20:52:14.000Z
[ "region:us" ]
kineticseas
null
null
null
0
106
Entry not found
allegro/klej-polemo2-in
2022-08-30T06:57:28.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
allegro
null
null
null
0
105
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: 'PolEmo2.0-IN' size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # klej-polemo2-in ## Description PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% from the medicine and hotel domains. We use the PolEmo2.0 dataset to form two tasks. Both use the same training dataset, i.e., reviews from medicine and hotel domains, but are evaluated on a different test set. **In-Domain** is the first task, and we use accuracy to evaluate model performance within the in-domain context, i.e., on a test set of reviews from the medicine and hotel domains. ## Tasks (input, output, and metrics) The task is to predict the correct label of the review. **Input** ('*text*' column): sentence **Output** ('*target*' column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous) **Domain**: Online reviews **Measurements**: Accuracy **Example**: Input: `Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .` Input (translated by DeepL): `The doctor recommended me an alternative treatment to the current one , so I do not yet give the highest rating ( we will see how effective it turns out to be ) . To the doctor I have no reservations : very professional and cultured . 
The only minus is about the office , which is not modern , which may discourage patients .` Output: `amb` (ambiguous) ## Data splits | Subset | Cardinality | |:-----------|--------------:| | train | 5783 | | test | 722 | | validation | 723 | ## Class distribution | Class | Sentiment | train | validation | test | |:------|:----------|------:|-----------:|------:| | minus | negative | 0.379 | 0.375 | 0.416 | | plus | positive | 0.271 | 0.289 | 0.273 | | amb | ambiguous | 0.182 | 0.160 | 0.150 | | zero | neutral | 0.168 | 0.176 | 0.162 | ## Citation ``` @inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Mi{\l}kowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. 
We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).", } ``` ## License ``` Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-in) [Source](https://clarin-pl.eu/dspace/handle/11321/710) [Paper](https://aclanthology.org/K19-1092/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("allegro/klej-polemo2-in") pprint(dataset['train'][0]) # {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie ' # 'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od ' # 'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże ' # 'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy ' # 'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . ' # 'Każda wizyta u specjalisty jest u niego konsultowana i uważam , ' # 'że jest lepszy od każdego z nich . Mamy do Niego prawie ' # 'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze ' # 'jeszcze napisać . 
Niestety , ma bardzo dużo pacjentów , jest ' # 'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) ' # 'i dostęp do niego jest trudny , ale zawsze możliwy .', # 'target': '__label__meta_plus_m'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("allegro/klej-polemo2-in") dataset = dataset.class_encode_column("target") references = dataset["test"]["target"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average="macro") pprint(acc_score) pprint(f1_score) # {'accuracy': 0.25069252077562326} # {'f1': 0.23760962219870274} ```
naver-clova-ix/cord-v1
2022-07-14T14:08:12.000Z
[ "license:cc-by-4.0", "region:us" ]
naver-clova-ix
null
null
null
0
105
--- license: cc-by-4.0 ---
UCL-DARK/ludwig
2022-08-11T15:51:56.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-4.0", "implicature", "pragmatics", "language", "llm", "conversation", "dialogue", "region:us" ]
UCL-DARK
TODO
TBC
null
6
105
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: ludwig size_categories: - n<1K source_datasets: - original tags: - implicature - pragmatics - language - llm - conversation - dialogue task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling --- # Dataset Card for LUDWIG ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository: https://github.com/ucl-dark/ludwig** - **Paper: TODO** - **Leaderboard: TODO** - **Point of Contact: Laura Ruis** ### Dataset Summary LUDWIG (**L**anguage **U**nderstanding **W**ith **I**mplied meanin**G**) is a dataset containing English conversational implicatures. Implicature is the act of meaning or implying one thing by saying something else. 
There are different types of implicatures, from simple ones like "Some guests came to the party" (implying not all guests came) to more complicated implicatures that depend on context like "A: Are you going to the party this Friday? B: There's a global pandemic.", implying no. Implicatures serve a wide range of goals in communication: efficiency, style, navigating social interactions, and more. We cannot fully understand utterances without understanding their implications. The implicatures in this dataset are conversational because they come in utterance-response tuples. Each tuple has an implicature associated with it, which is the implied meaning of the response. For example: Utterance: Are you going to the party this Friday? Response: There's a global pandemic. Implicature: No. This dataset can be used to evaluate language models on their pragmatic language understanding. ### Supported Tasks and Leaderboards - ```text-generation```: The dataset can be used to evaluate a model's ability to generate the correct next token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means" the correct completion would be "no". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes"). - ```fill-mask```: The dataset can be used to evaluate a model's ability to fill in the correct token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]" the correct mask-fill would be "no". 
Success in this task can be determined by the ability to fill in the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes"). ### Languages English ## Dataset Structure ### Data Instances Find below an example of a 1-shot example instance (1-shot because there's 1 prompt example). ``` { "id": 1, "utterance": "Are you going to the party this Friday?", "response": "There's a global pandemic.", "implicature": "No.", "incoherent_implicature": "Yes.", "prompts": [ { "utterance": "Was that hot?", "response": "The sun was scorching.", "implicature": "Yes.", "incoherent_implicature": "No." } ] } ``` ### Data Fields ``` { "id": int, # unique identifier of data points "utterance": str, # the utterance in this example "response": str, # the response in this example "implicature": str, # the implied meaning of the response, e.g. 'yes' "incoherent_implicature": str, # the wrong implied meaning, e.g. 'no' "prompts": [ # optional: prompt examples from the validation set { "utterance": str, "response": str, "implicature": str, "incoherent_implicature": str, } ] } ``` ### Data Splits **Validation**: 118 instances that can be used for finetuning or few-shot learning **Test**: 600 instances that can be used for evaluating models. NB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added by @LauraRuis. ## Dataset Creation ### Curation Rationale Pragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field. We want computational models of language to understand all the speaker's implications. ### Source Data #### Initial Data Collection and Normalization "Conversational implicatures in English dialogue: Annotated dataset", Elizabeth Jasmi George and Radhika Mamidi 2020. [Link to paper](https://doi.org/10.1016/j.procs.2020.04.251) #### Who are the source language producers? 
These written representations of the utterances are collected manually by scraping and transcribing from relevant sources from August, 2019 to August, 2020. The sources of dialogues in the data include TOEFL listening comprehension short conversations, movie dialogues from IMSDb and websites explaining idioms, similes, metaphors and hyperboles. The implicatures are annotated manually. ### Annotations #### Annotation process Manually annotated by dataset collectors. #### Who are the annotators? Authors of the original paper. ### Personal and Sensitive Information All the data is public and not sensitive. ## Considerations for Using the Data ### Social Impact of Dataset Any application that requires communicating with humans requires pragmatic language understanding. ### Discussion of Biases Implicatures can be biased to specific cultures. For example, whether the Pope is Catholic (a commonly used response implicature to indicate "yes") might not be common knowledge for everyone. Implicatures are also language-specific: the way people use pragmatic language depends on the language. This dataset only focuses on the English language. ### Other Known Limitations None yet. ## Additional Information ### Dataset Curators Elizabeth Jasmi George and Radhika Mamidi ### Licensing Information [license](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{George:Mamidi:2020, author = {George, Elizabeth Jasmi and Mamidi, Radhika}, doi = {10.1016/j.procs.2020.04.251}, journal = {Procedia Computer Science}, keywords = {}, note = {https://doi.org/10.1016/j.procs.2020.04.251}, number = {}, pages = {2316-2323}, title = {Conversational implicatures in English dialogue: Annotated dataset}, url = {https://app.dimensions.ai/details/publication/pub.1128198497}, volume = {171}, year = {2020} } ``` ### Contributions Thanks to [@LauraRuis](https://github.com/LauraRuis) for adding this dataset.
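The binary evaluation described in the card (compare p("no") against p("yes") for a templated prompt) can be sketched without any model by treating the scorer as a pluggable function. The template follows the card's example; `toy_score` is an invented stand-in for a real language-model log-probability function:

```python
def build_prompt(utterance, response):
    """Wrap an utterance/response pair in the template from the card."""
    return (f"Esther asked '{utterance}' and Juan responded "
            f"'{response}', which means")

def predict(score, utterance, response):
    """Pick the completion ('yes' or 'no') the scorer assigns higher likelihood."""
    prompt = build_prompt(utterance, response)
    return max(["yes", "no"], key=lambda c: score(prompt, c))

# Invented stand-in scorer: pretend the model assigns log p("no") > log p("yes").
def toy_score(prompt, completion):
    return {"no": -1.0, "yes": -3.5}[completion]

print(predict(toy_score, "Are you going to the party this Friday?",
              "There's a global pandemic."))  # -> no
```

With a real model, `toy_score` would be replaced by the summed token log-probabilities of the completion given the prompt.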
EMBO/SourceData
2023-10-09T12:05:46.000Z
[ "task_categories:token-classification", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "biology", "medical", "NER", "NEL", "doi:10.57967/hf/0495", "region:us" ]
EMBO
This dataset is based on the SourceData database and is intended to facilitate training for NLP tasks in the cell and molecular biology domain.
@Unpublished{ huggingface: dataset, title = {SourceData NLP}, authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO}, year={2023} }
null
1
105
--- license: cc-by-4.0 task_categories: - token-classification language: - en tags: - biology - medical - NER - NEL size_categories: - 10K<n<100K pretty_name: SODA-NLP --- # SourceData Dataset > The largest annotated biomedical corpus for machine learning and AI in the publishing context. SourceData is the largest annotated biomedical dataset for NER and NEL. It is unique in its focus on the core of scientific evidence: figure captions. It is also unique in its real-world configuration, since it does not present isolated sentences out of more general context. It offers fully annotated figure captions that can be further enriched in context using full text, abstracts, or titles. The goal is to extract the nature of the experiments described in them. SourceData is also unique in labelling the causal relationship between biological entities present in experiments, assigning experimental roles to each biomedical entity present in the corpus. SourceData consistently annotates nine different biological entities (genes, proteins, cells, tissues, subcellular components, species, small molecules, and diseases). It is the first dataset annotating experimental assays and the roles played in them by the biological entities. Each entity is linked to its corresponding ontology, allowing for entity disambiguation and NEL. ## Cite our work ```latex @misc {embo_2023, author = { Abreu-Vicente, J. \& Lemberger, T. 
}, title = { The SourceData dataset}, year = 2023, url = { https://huggingface.co/datasets/EMBO/SourceData }, doi = { 10.57967/hf/0495 }, publisher = { Hugging Face } } @article {Liechti2017, author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas}, title = {SourceData - a semantic platform for curating and searching figures}, year = {2017}, volume = {14}, number = {11}, doi = {10.1038/nmeth.4471}, URL = {https://doi.org/10.1038/nmeth.4471}, eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf}, journal = {Nature Methods} } ``` ## Dataset usage The dataset uses semantic versioning. Specifying a version when loading will return that version. Below is the code needed to load the latest available version of the dataset. Check below at `Changelog` to see the changes in the different versions. ```python from datasets import load_dataset # Load NER ds = load_dataset("EMBO/SourceData", "NER", version="2.0.3") # Load PANELIZATION ds = load_dataset("EMBO/SourceData", "PANELIZATION", version="2.0.3") # Load GENEPROD ROLES ds = load_dataset("EMBO/SourceData", "ROLES_GP", version="2.0.3") # Load SMALL MOLECULE ROLES ds = load_dataset("EMBO/SourceData", "ROLES_SM", version="2.0.3") # Load MULTI ROLES ds = load_dataset("EMBO/SourceData", "ROLES_MULTI", version="2.0.3") ``` ## Dataset Description - **Homepage:** https://sourcedata.embo.org - **Repository:** https://github.com/source-data/soda-data - **Paper:** - **Leaderboard:** - **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org Note that we offer the `XML` serialized dataset. This includes all the data needed to perform NEL in SourceData. For reproducibility, for each major version of the dataset we provide `split_vx.y.z.json` files to generate the train, validation, test splits. 
### Supported Tasks and Leaderboards Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)). `PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (B-PANEL_START) of these segments and allows training for recognition of the boundary between consecutive panel legends. `NER`: biological and chemical entities are labeled. Specifically the following entities are tagged: - `SMALL_MOLECULE`: small molecules - `GENEPROD`: gene products (genes and proteins) - `SUBCELLULAR`: subcellular components - `CELL_LINE`: cell lines - `CELL_TYPE`: cell types - `TISSUE`: tissues and organs - `ORGANISM`: species - `DISEASE`: diseases (see limitations) - `EXP_ASSAY`: experimental assays `ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are: - `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations. - `MEASURED_VAR`: entities that are associated with the variables measured and the object of the measurements. In the case of experimental roles, the data is generated separately for `GENEPROD` and `SMALL_MOL`, and there is also `ROLES_MULTI`, which takes both at the same time. ### Languages The text in the dataset is English. ## Dataset Structure ### Data Instances ### Data Fields - `words`: `list` of `strings` text tokenized into words. - `panel_id`: ID of the panel to which the example belongs in the SourceData database. 
- `label_ids`: - `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible values in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL_LINE", "B-CELL_LINE", "I-CELL_TYPE", "B-CELL_TYPE", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]` - `roles`: `list` of `strings` for the IOB2 tags for experimental roles; values in `["O", "I-CONTROLLED_VAR", "B-CONTROLLED_VAR", "I-MEASURED_VAR", "B-MEASURED_VAR"]` - `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]` - `multi roles`: There are two different label sets. `labels` is like in `roles`. `is_category` tags `GENEPROD` and `SMALL_MOLECULE`. ### Data Splits * NER and ROLES ``` DatasetDict({ train: Dataset({ features: ['words', 'labels', 'tag_mask', 'text'], num_rows: 55250 }) test: Dataset({ features: ['words', 'labels', 'tag_mask', 'text'], num_rows: 6844 }) validation: Dataset({ features: ['words', 'labels', 'tag_mask', 'text'], num_rows: 7951 }) }) ``` * PANELIZATION ``` DatasetDict({ train: Dataset({ features: ['words', 'labels', 'tag_mask'], num_rows: 14655 }) test: Dataset({ features: ['words', 'labels', 'tag_mask'], num_rows: 1871 }) validation: Dataset({ features: ['words', 'labels', 'tag_mask'], num_rows: 2088 }) }) ``` ## Dataset Creation ### Curation Rationale The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train models for text segmentation, named entity recognition, and semantic role labeling. ### Source Data #### Initial Data Collection and Normalization Figure legends were annotated according to the SourceData framework described in Liechti et al. 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles, and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021. #### Who are the source language producers? The examples are extracted from the figure legends of scientific papers in cell and molecular biology. ### Annotations #### Annotation process The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org). #### Who are the annotators? Curators of the SourceData project. ### Personal and Sensitive Information None known. ## Considerations for Using the Data ### Social Impact of Dataset Not applicable. ### Discussion of Biases The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org). The annotation of diseases has been added to the dataset only recently. Although they appear, their number is very low and they are not consistently tagged throughout the entire dataset. We recommend using the disease annotations by filtering the examples that contain them. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Thomas Lemberger, EMBO. Jorge Abreu Vicente, EMBO ### Licensing Information CC BY 4.0 ### Citation Information We are currently working on a paper to present the dataset. It is expected to be ready by spring 2023. In the meantime, the following paper should be cited.
```latex @article {Liechti2017, author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas}, title = {SourceData - a semantic platform for curating and searching figures}, year = {2017}, volume = {14}, number = {11}, doi = {10.1038/nmeth.4471}, URL = {https://doi.org/10.1038/nmeth.4471}, eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf}, journal = {Nature Methods} } ``` ### Contributions Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset. ## Changelog * **v2.0.3** - Data curated until 20.09.2023. Correction of 2,000+ unnormalized cell entities that have now been divided into cell line and cell type. Especially relevant for NER, less important for NEL. * **v2.0.2** - Data curated until 20.09.2023. This version also includes the patch for multi-word generic terms. * **v1.0.2** - Modification of the generic patch in v1.0.1 to include generic terms of more than one word. * **v1.0.1** - Added a first patch of generic terms. Terms such as cells, fluorescence, or animals were originally tagged, but in this version they are removed. * **v1.0.0** - First publicly available version of the dataset. Data curated until March 2023.
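The NER and ROLES configurations above all emit IOB2 tag sequences. As a rough illustration of how such tags can be consumed downstream (a hedged sketch, not part of the official SourceData tooling; the example tokens are invented), a span extractor could look like:

```python
def iob2_to_spans(words, tags):
    """Convert parallel word/IOB2-tag lists into (start, end, type, text) spans.

    `end` is exclusive; tags are "O", "B-TYPE", or "I-TYPE".
    """
    spans = []
    start, ent_type = None, None

    def close(i):
        nonlocal start, ent_type
        if start is not None:
            spans.append((start, i, ent_type, " ".join(words[start:i])))
        start, ent_type = None, None

    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            close(i)
            start, ent_type = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == ent_type:
            continue  # token extends the currently open span
        else:
            close(i)  # "O" or an inconsistent I- tag ends any open span
    close(len(tags))
    return spans


# Invented toy caption, not a real SourceData example.
words = ["p53", "levels", "were", "measured", "in", "HeLa", "cells"]
tags = ["B-GENEPROD", "O", "O", "B-EXP_ASSAY", "O", "B-CELL_LINE", "I-CELL_LINE"]
print(iob2_to_spans(words, tags))
```

The same helper works unchanged for the `roles` and `panel_start` tag sets, since they follow the same IOB2 convention.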
ClementRomac/cleaned_deduplicated_oscar
2023-05-23T20:03:41.000Z
[ "region:us" ]
ClementRomac
null
null
null
0
105
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 978937483730 num_examples: 232133013 - name: test num_bytes: 59798696914 num_examples: 12329126 download_size: 37220219718 dataset_size: 1038736180644 --- # Dataset Card for "cleaned_deduplicated_oscar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
diffusers-parti-prompts/kandinsky-2-2
2023-07-18T05:32:32.000Z
[ "region:us" ]
diffusers-parti-prompts
null
null
null
0
105
--- dataset_info: features: - name: Prompt dtype: string - name: Category dtype: string - name: Challenge dtype: string - name: Note dtype: string - name: images dtype: image - name: model_name dtype: string - name: seed dtype: int64 splits: - name: train num_bytes: 163668480.032 num_examples: 1632 download_size: 163766653 dataset_size: 163668480.032 --- # Dataset Card for "kandinsky-2-2" The dataset was generated using the code below: ```python import PIL import torch from datasets import Dataset, Features from datasets import Image as ImageFeature from datasets import Value, load_dataset from diffusers import DiffusionPipeline def main(): print("Loading dataset...") parti_prompts = load_dataset("nateraw/parti-prompts", split="train") print("Loading pipeline...") pipe_prior = DiffusionPipeline.from_pretrained( "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 ) pipe_prior.to("cuda") pipe_prior.set_progress_bar_config(disable=True) t2i_pipe = DiffusionPipeline.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 ) t2i_pipe.to("cuda") t2i_pipe.set_progress_bar_config(disable=True) seed = 0 generator = torch.Generator("cuda").manual_seed(seed) ckpt_id = ( "kandinsky-community/" + "kandinsky-2-2-prior" + "_" + "kandinsky-2-2-decoder" ) print("Running inference...") main_dict = {} for i in range(len(parti_prompts)): sample = parti_prompts[i] prompt = sample["Prompt"] image_embeds, negative_image_embeds = pipe_prior( prompt, generator=generator, num_inference_steps=100, guidance_scale=7.5, ).to_tuple() image = t2i_pipe( image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, generator=generator, num_inference_steps=100, guidance_scale=7.5, ).images[0] image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS) img_path = f"kandinsky_22_{i}.png" image.save(img_path) main_dict.update( { prompt: { "img_path": img_path, "Category": sample["Category"], "Challenge": sample["Challenge"], "Note": 
sample["Note"], "model_name": ckpt_id, "seed": seed, } } ) def generation_fn(): for prompt in main_dict: prompt_entry = main_dict[prompt] yield { "Prompt": prompt, "Category": prompt_entry["Category"], "Challenge": prompt_entry["Challenge"], "Note": prompt_entry["Note"], "images": {"path": prompt_entry["img_path"]}, "model_name": prompt_entry["model_name"], "seed": prompt_entry["seed"], } print("Preparing HF dataset...") ds = Dataset.from_generator( generation_fn, features=Features( Prompt=Value("string"), Category=Value("string"), Challenge=Value("string"), Note=Value("string"), images=ImageFeature(), model_name=Value("string"), seed=Value("int64"), ), ) ds_id = "diffusers-parti-prompts/kandinsky-2-2" ds.push_to_hub(ds_id) if __name__ == "__main__": main() ```
JasiekKaczmarczyk/giant-midi-sustain-quantized
2023-09-15T10:33:37.000Z
[ "region:us" ]
JasiekKaczmarczyk
null
null
null
0
105
--- dataset_info: features: - name: midi_filename dtype: string - name: pitch sequence: int16 length: 128 - name: dstart sequence: float32 length: 128 - name: duration sequence: float32 length: 128 - name: velocity sequence: int16 length: 128 - name: dstart_bin sequence: int8 length: 128 - name: duration_bin sequence: int8 length: 128 - name: velocity_bin sequence: int8 length: 128 splits: - name: train num_bytes: 473899450 num_examples: 238919 - name: validation num_bytes: 58421208 num_examples: 29453 - name: test num_bytes: 56581945 num_examples: 28531 download_size: 0 dataset_size: 588902603 --- # Dataset Card for "giant-midi-sustain-quantized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
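The card exposes both continuous fields (`dstart`, `duration`, `velocity`) and quantized `*_bin` counterparts, but does not document the bin edges. As a hedged sketch only, with hypothetical placeholder edges rather than the ones actually used to build the dataset, such quantization into integer bins could be reproduced as:

```python
import bisect

# Hypothetical bin edges for illustration; the real edges behind `dstart_bin`,
# `duration_bin` and `velocity_bin` are not documented in this card.
DSTART_EDGES = [0.05, 0.1, 0.25, 0.5, 1.0]

def quantize(values, edges):
    """Map each continuous value to an integer bin index in 0 .. len(edges)."""
    return [bisect.bisect_right(edges, v) for v in values]

print(quantize([0.0, 0.07, 0.3, 2.0], DSTART_EDGES))  # -> [0, 1, 3, 5]
```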
SatwikKambham/uc_merced_land_use
2023-09-04T19:12:48.000Z
[ "task_categories:image-classification", "license:cc0-1.0", "region:us" ]
SatwikKambham
This is a 21 class land use image dataset meant for research purposes. There are 100 images for each of the following classes: - agricultural - airplane - baseballdiamond - beach - buildings - chaparral - denseresidential - forest - freeway - golfcourse - harbor - intersection - mediumresidential - mobilehomepark - overpass - parkinglot - river - runway - sparseresidential - storagetanks - tenniscourt Each image measures 256x256 pixels. The images were manually extracted from large images from the USGS National Map Urban Area Imagery collection for various urban areas around the country. The pixel resolution of this public domain imagery is 1 foot. For more information about the original UC Merced Land Use dataset, please visit the official dataset page: http://weegee.vision.ucmerced.edu/datasets/landuse.html Please refer to the original dataset source for any additional details, citations, or specific usage guidelines provided by the dataset creators.
@inproceedings{yang2010bagofvisualwords, author = {Yi Yang and Shawn Newsam}, title = {Bag-Of-Visual-Words and Spatial Extensions for Land-Use Classification}, booktitle = {ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM GIS)}, year = {2010} }
null
0
105
--- license: cc0-1.0 dataset_info: config_name: ucmerced_landuse features: - name: img dtype: image - name: label dtype: class_label: names: '0': agricultural '1': airplane '2': baseballdiamond '3': beach '4': buildings '5': chaparral '6': denseresidential '7': forest '8': freeway '9': golfcourse '10': harbor '11': intersection '12': mediumresidential '13': mobilehomepark '14': overpass '15': parkinglot '16': river '17': runway '18': sparseresidential '19': storagetanks '20': tenniscourt splits: - name: train num_bytes: 406563 num_examples: 2100 download_size: 332468434 dataset_size: 406563 task_categories: - image-classification pretty_name: UC Merced Land Use --- This is a 21 class land use image dataset meant for research purposes. There are 100 images for each of the following classes: - agricultural - airplane - baseballdiamond - beach - buildings - chaparral - denseresidential - forest - freeway - golfcourse - harbor - intersection - mediumresidential - mobilehomepark - overpass - parkinglot - river - runway - sparseresidential - storagetanks - tenniscourt Each image measures 256x256 pixels. The images were manually extracted from large images from the USGS National Map Urban Area Imagery collection for various urban areas around the country. The pixel resolution of this public domain imagery is 1 foot. ### Original Dataset Source For more information about the original UC Merced Land Use dataset, please visit the official dataset page: [UC Merced Land Use Dataset](http://weegee.vision.ucmerced.edu/datasets/landuse.html) Please refer to the original dataset source for any additional details, citations, or specific usage guidelines provided by the dataset creators.
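Since the `label` feature is an integer class id, a small helper mapping ids back to the 21 class names listed above can be handy. This is only a sketch; when loading through the `datasets` library, the `ClassLabel` feature exposes the same mapping via `int2str`.

```python
# The 21 class names exactly as ordered in the dataset card.
CLASS_NAMES = [
    "agricultural", "airplane", "baseballdiamond", "beach", "buildings",
    "chaparral", "denseresidential", "forest", "freeway", "golfcourse",
    "harbor", "intersection", "mediumresidential", "mobilehomepark",
    "overpass", "parkinglot", "river", "runway", "sparseresidential",
    "storagetanks", "tenniscourt",
]

def label_to_name(label_id: int) -> str:
    """Translate an integer value from the `label` field into its class name."""
    return CLASS_NAMES[label_id]

print(label_to_name(7))  # -> forest
```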
ContextualAI/hellaswag
2023-10-06T23:57:13.000Z
[ "region:us" ]
ContextualAI
null
null
null
0
105
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold_generation dtype: string splits: - name: dev num_bytes: 9610103 num_examples: 10042 - name: test num_bytes: 7885767 num_examples: 10003 download_size: 10451785 dataset_size: 17495870 --- # Dataset Card for "hellaswag" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
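Based only on the schema above (`query`, `choices`, `gold_generation`), a minimal evaluation loop (a hedged sketch, not an official harness) would score a model by whether its selected choice matches the gold string:

```python
def accuracy(rows, pick):
    """Fraction of rows where pick(query, choices) returns the gold string.

    `pick` stands in for any model; only the card's schema is assumed.
    """
    correct = sum(
        1 for r in rows if pick(r["query"], r["choices"]) == r["gold_generation"]
    )
    return correct / len(rows)


# Invented toy rows following the schema, not real HellaSwag data.
rows = [
    {"query": "She opened the jar and", "choices": ["ate a spoonful", "flew away"],
     "gold_generation": "ate a spoonful"},
    {"query": "He started the car and", "choices": ["drove off", "melted"],
     "gold_generation": "drove off"},
]
print(accuracy(rows, lambda query, choices: choices[0]))  # -> 1.0 on these rows
```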
sagnikrayc/mctest
2022-10-25T00:16:37.000Z
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "explanations-in-question-answering", "region:us" ]
sagnikrayc
MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension.
@inproceedings{richardson-etal-2013-mctest, title = "{MCT}est: A Challenge Dataset for the Open-Domain Machine Comprehension of Text", author = "Richardson, Matthew and Burges, Christopher J.C. and Renshaw, Erin", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1020", pages = "193--203", }
null
2
104
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - other multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: [] task_categories: - question-answering task_ids: - multiple-choice-qa paperswithcode_id: mctest language_bcp47: - en-US tags: - explanations-in-question-answering --- # Dataset Card Creation Guide ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** N/A - **Repository:** [GitHub](https://github.com/mcobzarenco/mctest/) - **Paper:** [MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text](https://www.aclweb.org/anthology/D13-1020.pdf) - **Leaderboard:** N/A - **Point of Contact:** - ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## 
Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Microsoft Research License Agreement. ### Citation Information [More Information Needed] ### Contributions
MultiCoNER/multiconer_v2
2023-07-06T18:37:15.000Z
[ "task_categories:token-classification", "size_categories:100K<n<1M", "language:bn", "language:zh", "language:de", "language:en", "language:es", "language:fa", "language:fr", "language:hi", "language:it", "language:pt", "language:sv", "language:uk", "license:cc-by-4.0", "multiconer", "ner", "multilingual", "named entity recognition", "fine-grained ner", "region:us" ]
MultiCoNER
Complex named entities (NE), like the titles of creative works, are not simple nouns and pose challenges for NER systems (Ashwini and Choi, 2014). They can take the form of any linguistic constituent, like an imperative clause (“Dial M for Murder”), and do not look like traditional NEs (Persons, Locations, etc.). This syntactic ambiguity makes it challenging to recognize them based on context. We organized the MultiCoNER task (Malmasi et al., 2022) at SemEval-2022 to address these challenges in 11 languages, receiving a very positive community response with 34 system papers. Results confirmed the challenges of processing complex and long-tail NEs: even the largest pre-trained Transformers did not achieve top performance without external knowledge. The top systems infused transformers with knowledge bases and gazetteers. However, such solutions are brittle against out-of-knowledge-base entities and noisy scenarios like the presence of spelling mistakes and typos. We propose MultiCoNER II, which represents novel challenges through new tasks that emphasize the shortcomings of the current top models. MultiCoNER II features complex NER in these languages: 1. English 2. Spanish 3. Hindi 4. Bangla 5. Chinese 6. Swedish 7. Farsi 8. French 9. Italian 10. Portuguese 11. Ukrainian 12. German For more details see https://multiconer.github.io/ ## References * Sandeep Ashwini and Jinho D. Choi. 2014. Targetable named entity recognition in social media. CoRR, abs/1408.0782. * Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, Oleg Rokhlenko. 2022. SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition (MultiCoNER).
@inproceedings{multiconer2-report, title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}}, author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin}, booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)}, year={2023}, publisher={Association for Computational Linguistics}, } @article{multiconer2-data, title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}}, author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin}, year={2023}, }
null
7
104
--- license: cc-by-4.0 task_categories: - token-classification language: - bn - zh - de - en - es - fa - fr - hi - it - pt - sv - uk tags: - multiconer - ner - multilingual - named entity recognition - fine-grained ner size_categories: - 100K<n<1M --- # Dataset Card for Multilingual Complex Named Entity Recognition (MultiCoNER) ## Dataset Description - **Homepage:** https://multiconer.github.io - **Repository:** - **Paper:** - **Leaderboard:** https://multiconer.github.io/results, https://codalab.lisn.upsaclay.fr/competitions/10025 - **Point of Contact:** https://multiconer.github.io/organizers ### Dataset Summary The tagset of MultiCoNER is a fine-grained tagset. The fine to coarse level mapping of the tags are as follows: * Location (LOC) : Facility, OtherLOC, HumanSettlement, Station * Creative Work (CW) : VisualWork, MusicalWork, WrittenWork, ArtWork, Software * Group (GRP) : MusicalGRP, PublicCORP, PrivateCORP, AerospaceManufacturer, SportsGRP, CarManufacturer, ORG * Person (PER) : Scientist, Artist, Athlete, Politician, Cleric, SportsManager, OtherPER * Product (PROD) : Clothing, Vehicle, Food, Drink, OtherPROD * Medical (MED) : Medication/Vaccine, MedicalProcedure, AnatomicalStructure, Symptom, Disease ### Supported Tasks and Leaderboards The final leaderboard of the shared task is available <a href="https://multiconer.github.io/results" target="_blank">here</a>. ### Languages Supported languages are Bangla, Chinese, English, Spanish, Farsi, French, German, Hindi, Italian, Portuguese, Swedish, Ukrainian. ## Dataset Structure The dataset follows CoNLL format. ### Data Instances Here are some examples in different languages: * Bangla: [লিটল মিক্স | MusicalGrp] এ যোগদানের আগে তিনি [পিৎজা হাট | ORG] এ ওয়েট্রেস হিসাবে কাজ করেছিলেন। * Chinese: 它的纤维穿过 [锁骨 | AnatomicalStructure] 并沿颈部侧面倾斜向上和内侧. * English: [wes anderson | Artist]'s film [the grand budapest hotel | VisualWork] opened the festival . 
* Farsi: است] ناگویا |HumanSettlement] مرکزاین استان شهر * French: l [amiral de coligny | Politician] réussit à s y glisser . * German: in [frühgeborenes | Disease] führt dies zu [irds | Symptom] . * Hindi: १७९६ में उन्हें [शाही स्वीडिश विज्ञान अकादमी | Facility] का सदस्य चुना गया। * Italian: è conservato nel [rijksmuseum | Facility] di [amsterdam | HumanSettlement] . * Portuguese: também é utilizado para se fazer [licor | Drink] e [vinhos | Drink]. * Spanish: fue superado por el [aon center | Facility] de [los ángeles | HumanSettlement] . * Swedish: [tom hamilton | Artist] amerikansk musiker basist i [aerosmith | MusicalGRP] . * Ukrainian: назва альбому походить з роману « [кінець дитинства | WrittenWork] » англійського письменника [артура кларка | Artist] . ### Data Fields The data has two fields. One is the token and another is the label. Here is an example from the English data. ``` # id f5458a3a-cd23-4df4-8384-4e23fe33a66b domain=en doris _ _ B-Artist day _ _ I-Artist included _ _ O in _ _ O the _ _ O album _ _ O billy _ _ B-MusicalWork rose _ _ I-MusicalWork 's _ _ I-MusicalWork jumbo _ _ I-MusicalWork ``` ### Data Splits Train, Dev, and Test splits are provided ## Dataset Creation TBD ## Loading the Dataset ```python from datasets import load_dataset english_data = load_dataset('MultiCoNER/multiconer_v2', 'English (EN)') ``` ### Licensing Information CC BY 4.0 ### Citation Information ``` @inproceedings{multiconer2-report, title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}}, author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin}, booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)}, year={2023}, publisher={Association for Computational Linguistics}, } @article{multiconer2-data, title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}}, author={Fetahu, Besnik and Chen, Zhiyu and 
Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin}, year={2023}, } ```
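The CoNLL-style snippet shown in the card ('token _ _ TAG' lines preceded by a '# id ...' header) can be parsed with a few lines of Python. This is a hedged sketch based only on the sample above, not the official reader:

```python
def parse_conll(block: str):
    """Parse the 'token _ _ TAG' format into parallel (tokens, tags) lists.

    Lines starting with '# id' carry example metadata and are skipped here.
    """
    tokens, tags = [], []
    for line in block.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("# id"):
            continue
        parts = line.split()
        tokens.append(parts[0])
        tags.append(parts[-1])
    return tokens, tags


sample = """\
# id f5458a3a-cd23-4df4-8384-4e23fe33a66b domain=en
doris _ _ B-Artist
day _ _ I-Artist
included _ _ O
"""
print(parse_conll(sample))
```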
Multimodal-Fatima/OK-VQA_train
2023-03-23T22:30:06.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
null
1
104
--- dataset_info: features: - name: image dtype: image - name: question_type dtype: string - name: confidence dtype: int32 - name: answers sequence: string - name: answers_original list: - name: answer dtype: string - name: raw_answer dtype: string - name: answer_confidence dtype: string - name: answer_id dtype: int64 - name: id_image dtype: int64 - name: answer_type dtype: string - name: question_id dtype: int64 - name: question dtype: string - name: id dtype: int64 - name: clip_tags_ViT_L_14 sequence: string - name: clip_tags_LAION_ViT_H_14_2B sequence: string - name: blip_caption_beam_5 dtype: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14 sequence: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B sequence: string - name: DETA_detections_deta_swin_large_o365_coco_classes list: - name: attribute dtype: string - name: box sequence: float32 - name: label dtype: string - name: location dtype: string - name: ratio dtype: float32 - name: size dtype: string - name: tag dtype: string - name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random list: - name: attribute dtype: string - name: box sequence: float64 - name: captions_module sequence: string - name: captions_module_filter sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string splits: - name: train num_bytes: 1686555802.0 num_examples: 9009 download_size: 1572400067 dataset_size: 1686555802.0 --- # Dataset Card for "OK-VQA_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
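Each question in this card carries a list of annotator `answers`. VQA-style benchmarks are commonly scored with a "soft" accuracy that gives full credit to a prediction matching at least 3 annotators; the sketch below assumes that convention (the official OK-VQA evaluation additionally normalizes answers, which is omitted here):

```python
def vqa_soft_accuracy(prediction: str, answers: list) -> float:
    """Soft accuracy: full credit when >= 3 annotators gave the prediction."""
    matches = sum(1 for a in answers if a == prediction)
    return min(matches / 3.0, 1.0)


# Invented toy annotations, not a real OK-VQA row.
answers = ["bicycle", "bicycle", "bicycle", "bike", "bicycle"]
print(vqa_soft_accuracy("bicycle", answers))  # 4 matches -> capped at 1.0
print(vqa_soft_accuracy("bike", answers))     # 1 match -> ~0.333
```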
albertvillanova/medmnist-v2
2023-05-30T05:40:52.000Z
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "task_ids:multi-label-image-classification", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "medical", "arxiv:2110.14795", "region:us" ]
albertvillanova
MedMNIST v2 is a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D.
@article{medmnistv2, title={MedMNIST v2-A large-scale lightweight benchmark for 2D and 3D biomedical image classification}, author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing}, journal={Scientific Data}, volume={10}, number={1}, pages={41}, year={2023}, publisher={Nature Publishing Group UK London} } @inproceedings{medmnistv1, title={MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis}, author={Yang, Jiancheng and Shi, Rui and Ni, Bingbing}, booktitle={IEEE 18th International Symposium on Biomedical Imaging (ISBI)}, pages={191--195}, year={2021} }
null
3
104
--- language: en license: cc-by-4.0 multilinguality: - monolingual pretty_name: MedMNIST v2 size_categories: - 100K<n<1M source_datasets: - original task_categories: - image-classification task_ids: - multi-class-image-classification - multi-label-image-classification paperswithcode_id: medmnist-v2 tags: - medical --- # Dataset Card for MedMNIST v2 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://medmnist.com/ - **Repository:** https://github.com/MedMNIST/MedMNIST - **Paper:** [MedMNIST v2 -- A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://arxiv.org/abs/2110.14795) - **Leaderboard:** - **Point of Contact:** [Bingbing Ni](mailto:nibingbing@sjtu.edu.cn) ### Dataset Summary We introduce MedMNIST v2, a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. 
All images are pre-processed into 28 x 28 (2D) or 28 x 28 x 28 (3D) with the corresponding classification labels, so that no background knowledge is required for users. Covering primary data modalities in biomedical images, MedMNIST v2 is designed to perform classification on lightweight 2D and 3D images with various data scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression and multi-label). The resulting dataset, consisting of 708,069 2D images and 9,998 3D images in total, could support numerous research / educational purposes in biomedical image analysis, computer vision and machine learning. We benchmark several baseline methods on MedMNIST v2, including 2D / 3D neural networks and open-source / commercial AutoML tools. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English (`en`). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) (CC BY 4.0). Each subset keeps the same license as that of the source dataset. 
Please also cite the corresponding paper of source data if you use any subset of MedMNIST. ### Citation Information If you find this project useful, please cite both v1 and v2 papers: ``` @article{medmnistv2, title={MedMNIST v2-A large-scale lightweight benchmark for 2D and 3D biomedical image classification}, author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing}, journal={Scientific Data}, volume={10}, number={1}, pages={41}, year={2023}, publisher={Nature Publishing Group UK London} } @inproceedings{medmnistv1, title={MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis}, author={Yang, Jiancheng and Shi, Rui and Ni, Bingbing}, booktitle={IEEE 18th International Symposium on Biomedical Imaging (ISBI)}, pages={191--195}, year={2021} } ``` Please also cite the corresponding paper(s) of source data if you use any subset of MedMNIST as per the description on the [project website](https://medmnist.com/). ### Contributions Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
dmayhem93/agieval-sat-en
2023-06-18T17:30:59.000Z
[ "license:mit", "arxiv:2304.06364", "region:us" ]
dmayhem93
null
null
null
2
104
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 1019350 num_examples: 206 download_size: 265465 dataset_size: 1019350 license: mit --- # Dataset Card for "agieval-sat-en" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. MIT License Copyright (c) Microsoft Corporation. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE @misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} }
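In this card `gold` is a sequence of correct choice indices rather than a string, so a scoring sketch (hedged; not the official AGIEval harness) checks membership of the predicted index:

```python
def is_correct(pred_index, gold):
    """`gold` is a list of correct choice indices (usually one element here)."""
    return pred_index in gold


# Invented toy rows following the schema, not real AGIEval SAT items.
rows = [
    {"query": "2 + 2 = ?", "choices": ["3", "4", "5"], "gold": [1]},
    {"query": "Capital of France?", "choices": ["Paris", "Rome"], "gold": [0]},
]
preds = [1, 1]
acc = sum(is_correct(p, r["gold"]) for p, r in zip(preds, rows)) / len(rows)
print(acc)  # first prediction correct, second wrong -> 0.5
```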
THUDM/webglm-qa
2023-07-12T17:14:35.000Z
[ "task_categories:text-generation", "task_categories:question-answering", "multilinguality:monolingual", "size_categories:100M<n<200M", "language:en", "arxiv:2306.07906", "region:us" ]
THUDM
null
null
null
14
104
--- annotations_creators: [] language: - en multilinguality: - monolingual source_datasets: [] task_categories: - text-generation - question-answering pretty_name: WebGLM-QA size_categories: - 10K<n<100K --- # WebGLM-QA ## Dataset Description [WebGLM-QA](https://github.com/THUDM/WebGLM) is the dataset used to train the WebGLM generator module. It consists of 43,579 high-quality data samples for the train split, 1,000 for the validation split, and 400 for the test split. Refer to [our paper](https://arxiv.org/abs/2306.07906) for the data construction details. ## Dataset Structure To load the dataset, you can try the following code. ```python from datasets import load_dataset data = load_dataset("THUDM/webglm-qa") DatasetDict({ train: Dataset({ features: ['question', 'answer', 'references'], num_rows: 43579 }) test: Dataset({ features: ['question', 'answer', 'references'], num_rows: 400 }) validation: Dataset({ features: ['question', 'answer', 'references'], num_rows: 1000 }) }) ``` ```python next(iter(data["test"])) {'question': 'Just got my (Canadian) mortgage renewal notice telling me I have to chose Subsequent Payment Terms.', 'answer': "When renewing a mortgage in Canada, your lender must notify you in advance of the renewal date with your options for renewal terms[1][2]. Your mortgage will typically automatically renew or become in default if you don't take action[3]. Depending on your lender, you may be able to renew your mortgage as early as 6 months prior to your current mortgage term expiring[2][3][5]. RBC Royal Bank mortgage customers can choose Subsequent Payment Terms and be protected from an increase in interest rates for the interest type and term they selected[4].", 'references': ['When faced with a mortgage renewal, this simply means that your current contracted mortgage term is approaching its expiration date. 
You see, the majority of mortgages in Toronto and in general mortgages in Ontario are contracted for a finite period of time that is referred to as the “mortgage term”. This period tends to range from as little as a few months to as long as 10 years in Canada.', 'You can either proactively reach out to your lender several months prior to your renewal date to find out, but if you don’t, your lender must notify you in advance of the renewal date what your options are. If your mortgage does happen to be with a federally regulated bank, then they are obligated to send you an official renewal statement with no less than 21 days remaining on your current mortgage term. Also, if your lender chooses not to renew your mortgage then they must notify you in advance and provide you with enough time to refinance your mortgage elsewhere or to pay it off.', 'When it comes to mortgage renewals, if you do not take action your mortgage will in many cases either renew automatically or become in default. When your mortgage term approaches the end, your mortgage lender will typically offer you renewal terms that you may choose to accept, negotiate, or decline. Provided you continue to make your monthly mortgage payments on time, lenders will rarely not extend to you an offer to renew your mortgage, although this can happen without cause depending on your mortgage commitment and contract.', "When you renew your RBC Royal Bank mortgage at maturity, you are protected from an increase in interest rates, for the interest type and term you selected, in the 30-day period prior to your regularly scheduled renewal date. 
And, if the interest rate changes before your actual mortgage renewal date, you'll automatically receive the lower rate for the term and type you chose.", 'When renewing your mortgage in Canada, some lenders may allow you to renew your mortgage as early as 6 months prior to your current mortgage term expiring.']} ``` ## Data Fields * ``question``: a question raised by a user or individual related to a certain topic. * ``answer``: the generated response to the question. * ``references``: a list of quotes or snippets from sources used to generate the answer given. ## Data Splits We split the dataset into train, validation, and test. ## Citation Information Refer to https://github.com/THUDM/WebGLM.
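As the example above shows, each `answer` embeds bracketed citation markers ([1], [2], ...) that index, 1-based, into the `references` list. A minimal sketch of resolving those markers back to their reference texts; the sample record below is made up in the same format, and the helper name is ours:

```python
import re

# Made-up sample in the WebGLM-QA format; real answers cite sources the same way.
example = {
    "question": "When can I renew my mortgage in Canada?",
    "answer": "Lenders must notify you before the renewal date[1]. "
              "Some lenders allow renewal up to 6 months early[2].",
    "references": [
        "Your lender must notify you in advance of the renewal date.",
        "Some lenders may allow you to renew as early as 6 months prior.",
    ],
}

def cited_references(sample):
    """Map bracketed markers like [1] (1-based) to reference texts, in order of first use."""
    seen = []
    for match in re.finditer(r"\[(\d+)\]", sample["answer"]):
        idx = int(match.group(1))
        if idx not in seen:
            seen.append(idx)
    return [sample["references"][i - 1] for i in seen]

print(cited_references(example))
```

The same function applies unchanged to records loaded from the actual splits, since they share the `question`/`answer`/`references` schema.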
Locutusque/InstructMix
2023-08-02T23:35:14.000Z
[ "task_categories:text-generation", "task_categories:conversational", "task_categories:question-answering", "language:en", "region:us" ]
Locutusque
null
null
null
3
104
--- dataset: name: InstructiveMix tagline: A Combined Dataset of Diverse Instructional Content description: > InstructiveMix is a comprehensive dataset that brings together various instructional content from different domains. It combines instructions for tasks, code, poems, essays, medical texts, and more. With a diverse range of instructional data, this dataset is suitable for a wide range of natural language processing (NLP) tasks and research. license: CC-BY-SA-4.0 dataset_creation: '2023-08-02T00:00:00.000Z' dataset_version: 1.0.0 authors: - name: Locutusque email: locutusque.airshipcraft@gmail.com task_categories: - text-generation - conversational - question-answering language: - en --- **Dataset Summary:** InstructMix is a comprehensive combined dataset that offers diverse instructional content for a range of tasks. It includes data from various sources, such as code instructions, poems, essays, medical texts, and more. This dataset is designed to support natural language processing (NLP) research, model training, and evaluation across different domains. **Dataset Contents:** The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion. 
Here is a list of the datasets used: - Locutusque/ColumnedChatCombined - TokenBender/code_instructions_120k_alpaca_style - Open-Orca/OpenOrca - vicgalle/alpaca-gpt4 - ChristophSchuhmann/essays-with-instructions - checkai/instruction-poems - pubmed_qa - BI55/MedText - nampdn-ai/tiny-codes Each entry contains the following two columns: - Input (string) - Output (string) These should hopefully be self-explanatory. **Dataset Composition:** - Number of samples: [7283349] - Languages: English - License: CC-BY-SA-4.0 **Use Cases:** The InstructiveMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications. **Dataset Creation:** The InstructiveMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources. **Acknowledgements:** I would like to acknowledge the original creators of the datasets used to construct InstructiveMix. Their contributions have enabled the creation of this valuable resource for the NLP community. **Contact:** For any questions or inquiries related to the InstructiveMix dataset, please contact me at [locutusque.airshipcraft@gmail.com]. ---
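A small illustration of consuming the Input/Output columns described above. The instruction/response template in the sketch is an arbitrary Alpaca-style choice for illustration, not a format the dataset itself prescribes, and the record values are invented:

```python
# Hypothetical record with the two columns described above (values invented).
record = {
    "Input": "Write a haiku about autumn.",
    "Output": "Leaves drift on cool wind.",
}

def to_training_text(rec):
    # An Alpaca-style template; purely illustrative, the dataset does not mandate one.
    return f"### Instruction:\n{rec['Input']}\n\n### Response:\n{rec['Output']}"

print(to_training_text(record))
```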
alzoubi36/title_generation
2023-10-01T12:43:11.000Z
[ "region:us" ]
alzoubi36
null
null
null
0
104
--- dataset_info: features: - name: text dtype: string - name: summary dtype: string - name: id dtype: int64 splits: - name: validation num_bytes: 1753243 num_examples: 2000 - name: test num_bytes: 1682435 num_examples: 2000 - name: train num_bytes: 17556737 num_examples: 20000 download_size: 10393931 dataset_size: 20992415 --- # Dataset Card for "title_generation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
codeparrot/codeparrot-clean-valid
2022-10-10T15:28:51.000Z
[ "region:us" ]
codeparrot
null
null
null
5
103
# CodeParrot 🦜 Dataset Cleaned (valid) Validation split of [CodeParrot 🦜 Dataset Cleaned](https://huggingface.co/datasets/lvwerra/codeparrot-clean). ## Dataset structure ```python DatasetDict({ train: Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 61373 }) }) ```
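The feature columns shown above (`autogenerated`, `alpha_frac`, `line_max`, ...) lend themselves to simple quality filtering of code files. A hypothetical sketch over invented records; the thresholds are illustrative, not the ones used to build this corpus:

```python
# Invented records mimicking the feature columns listed above.
files = [
    {"repo_name": "a/b", "path": "x.py", "autogenerated": False,
     "alpha_frac": 0.61, "line_max": 88},
    {"repo_name": "c/d", "path": "gen_pb2.py", "autogenerated": True,
     "alpha_frac": 0.30, "line_max": 2400},
]

def keep(f, min_alpha=0.25, max_line_len=1000):
    # Drop auto-generated files and files with very long lines;
    # both are common cleaning heuristics for code corpora.
    return (not f["autogenerated"]
            and f["alpha_frac"] >= min_alpha
            and f["line_max"] <= max_line_len)

kept = [f["path"] for f in files if keep(f)]
print(kept)
```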
persiannlp/parsinlu_query_paraphrasing
2022-10-22T15:13:22.000Z
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|quora|google", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:2012.06154", "region:us" ]
persiannlp
A Persian query paraphrasing task (paraphrase or not, given two questions). The questions are partly mined using Google auto-complete, and partly translated from the Quora paraphrasing dataset.
@article{huggingface:dataset, title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian}, authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others}, year = {2020}, journal = {arXiv e-prints}, eprint = {2012.06154}, }
null
0
103
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - fa license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|quora|google task_categories: - query-paraphrasing task_ids: - query-paraphrasing --- # Dataset Card for PersiNLU (Query Paraphrasing) ## Table of Contents - [Dataset Card for PersiNLU (Query Paraphrasing)](#dataset-card-for-persi_nlu_query_paraphrasing) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/persiannlp/parsinlu/) - **Repository:** [Github](https://github.com/persiannlp/parsinlu/) - **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154) - 
**Leaderboard:** - **Point of Contact:** d.khashabi@gmail.com ### Dataset Summary A Persian query paraphrasing task (deciding whether two questions are paraphrases of each other). The questions are partially generated from Google auto-complete, and partially translated from the Quora paraphrasing dataset. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text dataset is in Persian (`fa`). ## Dataset Structure ### Data Instances Here is an example from the dataset: ```json { "q1": "اعمال حج تمتع از چه روزی شروع میشود؟", "q2": "ویار از چه روزی شروع میشود؟", "label": "0", "category": "natural" } ``` ### Data Fields - `q1`: the first question. - `q2`: the second question. - `category`: whether the questions are mined from Quora (`qqp`) or they're extracted from Google auto-complete (`natural`). - `label`: `1` if the questions are paraphrases; `0` otherwise. ### Data Splits The train/dev/test splits contain 1830/898/1916 samples. ## Dataset Creation ### Curation Rationale For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154). ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 License ### Citation Information ```bibtex @article{huggingface:dataset, title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian}, authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others}, year = {2020}, journal = {arXiv e-prints}, eprint = {2012.06154}, } ``` ### Contributions Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
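A tiny sketch of interpreting the fields described above, using the sample instance from the card; the helper functions are illustrative, not part of the dataset:

```python
# Sample instance copied from the card; `label` and `category` are strings.
sample = {
    "q1": "اعمال حج تمتع از چه روزی شروع میشود؟",
    "q2": "ویار از چه روزی شروع میشود؟",
    "label": "0",
    "category": "natural",
}

def is_paraphrase(instance):
    # "1" marks a paraphrase pair, "0" a non-paraphrase pair.
    return instance["label"] == "1"

def question_source(instance):
    # "natural": mined via Google auto-complete; "qqp": translated from Quora.
    return {"natural": "google auto-complete", "qqp": "quora"}[instance["category"]]

print(is_paraphrase(sample), question_source(sample))
```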
dmayhem93/toolformer-v0-postprocessed
2023-02-28T19:50:45.000Z
[ "region:us" ]
dmayhem93
null
null
null
5
103
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 79229133 num_examples: 2245 download_size: 33861921 dataset_size: 79229133 --- # Dataset Card for "toolformer-v0-postprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pvduy/sharegpt_alpaca_oa_vicuna_format
2023-04-29T18:37:21.000Z
[ "region:us" ]
pvduy
null
null
null
6
103
--- dataset_info: features: - name: prompt dtype: string - name: label dtype: string splits: - name: train num_bytes: 494337138 num_examples: 324160 - name: test num_bytes: 5944776 num_examples: 1499 download_size: 263071058 dataset_size: 500281914 --- # Dataset Card for "sharegpt_alpaca_oa_vicuna_format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
clarin-knext/arguana-pl
2023-06-07T08:18:37.000Z
[ "language:pl", "arxiv:2305.19840", "region:us" ]
clarin-knext
null
null
null
0
103
--- language: - pl --- Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**. Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf Contact: konrad.wojtasik@pwr.edu.pl
dmayhem93/agieval-lsat-rc
2023-06-18T17:27:15.000Z
[ "license:mit", "arxiv:2304.06364", "arxiv:2104.06598", "region:us" ]
dmayhem93
null
null
null
0
103
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 1136305 num_examples: 269 download_size: 322710 dataset_size: 1136305 license: mit --- # Dataset Card for "agieval-lsat-rc" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. Raw dataset: https://github.com/zhongwanjun/AR-LSAT MIT License Copyright (c) 2022 Wanjun Zhong Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
@misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{zhong2021arlsat, title={AR-LSAT: Investigating Analytical Reasoning of Text}, author={Wanjun Zhong and Siyuan Wang and Duyu Tang and Zenan Xu and Daya Guo and Jiahai Wang and Jian Yin and Ming Zhou and Nan Duan}, year={2021}, eprint={2104.06598}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{wang2022lsat, title={From lsat: The progress and challenges of complex reasoning}, author={Wang, Siyuan and Liu, Zhongkun and Zhong, Wanjun and Zhou, Ming and Wei, Zhongyu and Chen, Zhumin and Duan, Nan}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, year={2022}, publisher={IEEE} }
dmayhem93/agieval-sat-en-without-passage
2023-06-18T17:31:43.000Z
[ "license:mit", "arxiv:2304.06364", "region:us" ]
dmayhem93
null
null
null
0
103
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 154762 num_examples: 206 download_size: 85136 dataset_size: 154762 license: mit --- # Dataset Card for "agieval-sat-en-without-passage" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. MIT License Copyright (c) Microsoft Corporation. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE @misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} }
ShoukanLabs/OpenNiji-0_32237
2023-08-16T02:40:59.000Z
[ "region:us" ]
ShoukanLabs
null
null
null
0
103
--- dataset_info: features: - name: image dtype: image - name: url dtype: string - name: prompt dtype: string - name: style dtype: string splits: - name: train num_bytes: 53930726836.349 num_examples: 32237 download_size: 51827864474 dataset_size: 53930726836.349 --- # Dataset Card for "OpenNiji-0_32237" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AWfaw/ai-hdlcoder-dataset
2023-07-27T10:46:56.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "size_categories:100K<n<1M", "language:code", "license:mit", "region:us" ]
AWfaw
null
null
null
0
103
--- annotations_creators: [] language: - code license: - mit pretty_name: github-code size_categories: - 100K<n<1M source_datasets: [] task_categories: - text-generation task_ids: - language-modeling --- # Dataset Card for AI-HDLCoder ## Dataset Description The AI-HDLCoder dataset consists of VHDL code files (`.vhd`, `.vhdl`) collected from GitHub, totaling 1.94 GB of data. The dataset was created from the public GitHub dataset on Google BigQuery at Anhalt University of Applied Sciences. ## Considerations for Using the Data The dataset is created for research purposes and consists of source code from a wide range of repositories. As such, the files can potentially include harmful or biased code as well as sensitive information like passwords or usernames. ### Languages ```python { "VHDL": [".vhdl",".vhd" ] } ``` ## Dataset Structure ### Data Instances ```python { "repo_name": "sebgod/linguist", "path": "samples/VHDL/foo.vhd", "copies": "91", "size": "217", "content": "-- VHDL example file\n\nlibrary ieee;\nuse ieee.std_logic_1164.all;\n\nentity inverter is\n\tport(a : in std_logic;\n\t b : out std_logic);\nend entity;\n\narchitecture rtl of inverter is\nbegin\n\tb \u003c\u003d not a;\nend architecture;\n", "license": "mit" } ``` ### Data Fields |Field|Type|Description| |---|---|---| |content|string|content of source file| |repo_name|string|name of the GitHub repository| |path|string|path of file in GitHub repository| |license|string|license of GitHub repository| |size|int|size of source file in bytes| ### Data Splits The dataset contains a train split only. ### Licensing Information ```python [ 'agpl-3.0', 'artistic-2.0', 'mpl-2.0', 'cc0-1.0', 'mit', 'gpl-2.0', 'gpl-3.0', 'lgpl-3.0', 'apache-2.0', 'bsd-3-clause' ] ``` ### v1.0 - Initial release of dataset - The query was executed on 21.07.2023, 00:02:38 UTC+2
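Given the extension mapping in the Languages section above, files can be classified as VHDL by suffix. A hypothetical sketch (the paths are invented):

```python
# Extension mapping taken from the card's Languages section.
LANGUAGE_EXTENSIONS = {"VHDL": [".vhdl", ".vhd"]}

def is_vhdl(path):
    # Case-insensitive suffix match against the known VHDL extensions.
    return any(path.lower().endswith(ext) for ext in LANGUAGE_EXTENSIONS["VHDL"])

# Invented paths for illustration.
paths = ["samples/VHDL/foo.vhd", "src/top.vhdl", "README.md"]
vhdl_files = [p for p in paths if is_vhdl(p)]
print(vhdl_files)
```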
AdiOO7/llama-2-finance
2023-07-24T20:46:36.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "finance", "region:us" ]
AdiOO7
null
null
null
13
103
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - finance size_categories: - 1K<n<10K ---
abacusai/WikiQA-Altered_Numeric_QA
2023-07-27T14:34:48.000Z
[ "license:apache-2.0", "region:us" ]
abacusai
null
null
null
5
103
--- license: apache-2.0 configs: - config_name: default data_files: - split: 2k path: data/2k-* - split: 4k path: data/4k-* - split: 8k path: data/8k-* - split: 16k path: data/16k-* dataset_info: features: - name: conversations list: - name: from dtype: string - name: tok_len dtype: int64 - name: value dtype: string splits: - name: 2k num_bytes: 2802096 num_examples: 456 - name: 4k num_bytes: 5492874 num_examples: 456 - name: 8k num_bytes: 10884816 num_examples: 456 - name: 16k num_bytes: 19884934 num_examples: 456 download_size: 8163043 dataset_size: 39064720 ---
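The splits above are named by approximate context length (2k, 4k, 8k, 16k). A hedged sketch of choosing the split that fits a model's context window; the token budgets below are our reading of the split names, not documented limits:

```python
# Assumed token budgets behind the split names (our reading, not documented limits).
SPLIT_TOKENS = {"2k": 2048, "4k": 4096, "8k": 8192, "16k": 16384}

def pick_split(context_window):
    """Return the largest split whose nominal length fits the model's context window."""
    fitting = [name for name, toks in SPLIT_TOKENS.items() if toks <= context_window]
    if not fitting:
        raise ValueError("context window smaller than the smallest split")
    return max(fitting, key=SPLIT_TOKENS.get)

print(pick_split(4096))
```

The chosen name can then be passed as the `split` argument when loading the dataset with `datasets.load_dataset`.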
Trelis/function_calling_extended
2023-09-11T07:42:28.000Z
[ "task_categories:question-answering", "task_categories:conversational", "task_categories:text-generation", "size_categories:n<1K", "language:en", "function call", "function calling", "function-calling", "region:us" ]
Trelis
null
null
null
14
103
--- task_categories: - question-answering - conversational - text-generation language: - en tags: - function call - function calling - function-calling size_categories: - n<1K extra_gated_prompt: "Access to this dataset requires the purchase of a license [here](https://buy.stripe.com/aEUaFq09vgynaMU7sN)" extra_gated_fields: Name: text Affiliation: text Email: text I agree to the terms of the license described on the dataset card: checkbox I agree to only train models up to 20B parameters in size: checkbox I have purchased a license: checkbox --- # Trelis Function Calling Dataset - Allows models to be fine-tuned for function-calling. - The dataset is human generated and does not make use of Llama 2 or OpenAI! - Contains 55 training and 15 test rows - Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat Access this dataset by purchasing a license [here](https://buy.stripe.com/aEUaFq09vgynaMU7sN). --Change-log-- 22Aug2023: Major updates to the main branch: - The 'systemPrompt' column is now replaced by 'functionList', which contains a raw list of function metadata without any guidance. - The previous dataset, with 'systemPrompt' - containing specific instructions - has been moved to the 'explicit' branch. - The 'implicit' branch is a copy of the 'explicit' branch, but with slightly less instruction provided to the LLM in the systemPrompt column. The reasons for these updates are: - For one-shot model prompting, it is helpful to provide as much description as possible to the LLM. - For fine-tuning, it is desirable to minimise the length of any added context to describe functions, especially if not necessary. Users can play around with the different levels of instruction provided. 
In summary: - 'main' - provides the lowest level of instruction on how to use the functions - 'implicit' - moderate instructions - 'explicit' - detailed instructions 18Aug2023: Added new 'implicit' branch with a shorter system prompt. Performs similarly to main branch, but uses fewer tokens for prompting. 15Aug2023: Added datasets to fine-tune models for awareness of available functions. ## Fine-Tuning Notes and Scripts The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details see the [Youtube Video Here](https://youtu.be/OQdp-OeG1as). ### QLoRa Training Notebook for Llama 2 (FREE) - Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing). ### QLoRa ADVANCED Training Notebook (PAID) This advanced script provides improved performance when training with small datasets: - Includes a prompt loss-mask for improved performance when structured responses are required. - Includes a stop token after responses - allowing the model to provide a short response (e.g. a function call) and then stop. - Request [access here](https://buy.stripe.com/5kA5l69K52Hxf3a006). €14.99 (or $16.49) per seat/user. Access will be provided within 24 hours of purchase. ## Licensing The Function Calling Extended dataset is commercially licensed. Users can purchase a license for €14.99 ($16.99) per seat/user from [here](https://buy.stripe.com/00g4h2cWh5TJ9IQ28c). Users will receive access within 24 hours of their purchase. Further terms: - Licenses are not transferable to other users/entities. - Licenses are limited to the training or fine-tuning of models with up to 20 billion parameters (whether all parameters are being trained or not). 
- Commercial licenses for larger models are available on request - email ronan [at] trelis [dot] com ### Attribution of data sources This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans. ## Dataset Structure The datasets (train and test) contain three prompt types: 1. The first portion provides function metadata in the systemPrompt but then has userPrompt and assistantResponse values that do not require function calling. This is to get the language model accustomed to having function metadata available, but not using it. Questions and answers for these prompts are generated by running addBlank.py and the questions and answers come from [truthful_qa](https://huggingface.co/datasets/truthful_qa) - see below for license details. 2. The second portion of the train and test datasets provide examples where a function call is necessary. 3. The third portion (new as of August 13th 2023) acclimatises the model to recognising what functions it has available from the system prompt, and sharing that with the user when appropriate. ## Branches Specify the branch using: ``` data = load_dataset( "Trelis/function_calling_extended", revision="implicit" # optionally specify a branch ) ``` The 'main' branch uses short system/function prompt, with no instruction on usage (see the other branches for prompts with stronger instruction): ``` { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. 
This function requires no parameters and returns no values.", "arguments": [] } ``` The 'explicit' branch provides detailed instructions to the language model on how to call functions: ``` You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant: { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] } To call a function, respond - immediately and only - with a JSON object of the following format: { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } } ``` The 'implicit' branch uses a shorter, less explicit branch that performs similarly and is therefore recommended as it reduces the length of the system prompt: ``` You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant: { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. 
This function requires no parameters and returns no values.", "arguments": [] } ``` Said differently, the 'implicit' branch omits the following portion of the prompt: ``` To call a function, respond - immediately and only - with a JSON object of the following format: { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } } ``` ## Training and Inference Syntax Here is sample prompt syntax for Llama. This will depend on the language model you use and also how to wish to fine-tune the model: ``` # Define the roles and markers B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" system_prompt = data['test'][index]['systemPrompt'] user_prompt = data['test'][index]['userPrompt'] correct_answer = data['test'][index]['assistantResponse'] # Format your prompt template prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n" ``` The `\n\n` after E_INST is important as it prevents E_INST from sometimes being tokenized with the ']' attached to the next characters. Using `\n\n` also provides the best chance for the model correctly telling whether to call a function or provide a usual response. Alternatively, you may prefer to stay away from the system prompt and create a separate wrapper for function descriptions (as an example for the data on 'main'): ``` # Define the roles and markers B_INST, E_INST = "[INST]", "[/INST]" B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n" functionList = data['test'][index]['functionList'] user_prompt = data['test'][index]['userPrompt'] correct_answer = data['test'][index]['assistantResponse'] # Format your prompt template prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST}\n\n" ``` ## File Structure (for prompt dataset generation) - `functions/`: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses. 
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files. - `addBlank.py`: This adds TruthfulQA questions and answers after system prompts that contain functions. - `hello.py`: adds prompts to accustom the model to the presence of functions in the system prompt. ### JSON File Structure Each function file should be a JSON file with the following structure: ```json { "functionMetaData": { "function": "function_name", "description": "function_description", "arguments": [ { "name": "argument_name", "type": "argument_type", "description": "argument_description" }, ... ] }, "samplePromptResponsePairs": [ { "prompt": "sample_prompt", "response": { "arguments": { "argument_name": "argument_value", ... } } }, ... ] } ``` The `functionMetaData` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function. ## Dataset Generation To generate the dataset, run the `generate_dataset.py` script. This script will iterate over each function file and generate a CSV row for each sample prompt-response pair. ## CSV File Structure The generated CSV file has the following columns: 'main' branches: - `functionList`: Descriptions of two functions (the current function and a randomly selected other function). - `userPrompt`: The user's prompt. - `assistantResponse`: The assistant's response. 'explicit' and 'implicit' branches: - `systemPrompt`: The system's prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and instructions on how to call a function ('explicit' branch only). - `userPrompt`: The user's prompt. - `assistantResponse`: The assistant's response. ## Testing JSON Structure A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure. 
To use the script, call it from the command line with the name of the function file as an argument: ``` python validate.py my_function.json ```
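Returning to the inference syntax above: at inference time the model either returns ordinary text or a JSON function call. As a minimal, hypothetical sketch (the helper name and fallback behaviour below are ours, not part of the dataset), the two cases can be separated like this:

```python
import json

def try_parse_function_call(response: str):
    """Return the parsed call if `response` is a JSON function call, else None."""
    response = response.strip()
    if not response.startswith("{"):
        # Ordinary assistant text, not a function call.
        return None
    try:
        call = json.loads(response)
    except json.JSONDecodeError:
        return None
    # A valid call carries both a "function" name and an "arguments" object.
    if isinstance(call, dict) and "function" in call and "arguments" in call:
        return call
    return None

call = try_parse_function_call(
    '{"function": "search_bing", "arguments": {"query": "weather in London"}}'
)
text = try_parse_function_call("Here is a list of the files in your directory.")
```

Here `call` is a dict you can dispatch on, while the plain-text reply yields `None`, signalling that the response should simply be forwarded to the user.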
thaiqa_squad
2022-11-03T16:15:52.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-thaiqa", "language:th", "license:cc-by-nc-sa-3.0", "region:us" ]
null
`thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
No clear citation guidelines from source: https://aiforthai.in.th/corpus.php SQuAD version: https://github.com/PyThaiNLP/thaiqa_squad
null
5
102
--- annotations_creators: - expert-generated language_creators: - found language: - th license: - cc-by-nc-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended|other-thaiqa task_categories: - question-answering task_ids: - extractive-qa - open-domain-qa paperswithcode_id: null pretty_name: thaiqa-squad dataset_info: features: - name: question_id dtype: int32 - name: article_id dtype: int32 - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: answer dtype: string - name: answer_begin_position dtype: int32 - name: answer_end_position dtype: int32 config_name: thaiqa_squad splits: - name: train num_bytes: 47905050 num_examples: 4000 - name: validation num_bytes: 744813 num_examples: 74 download_size: 10003354 dataset_size: 48649863 --- # Dataset Card for `thaiqa-squad` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://github.com/pythainlp/thaiqa_squad (original `thaiqa` at 
https://aiforthai.in.th/) - **Repository:** http://github.com/pythainlp/thaiqa_squad - **Paper:** - **Leaderboard:** - **Point of Contact:**http://github.com/pythainlp/ (original `thaiqa` at https://aiforthai.in.th/) ### Dataset Summary `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/). ### Supported Tasks and Leaderboards extractive question answering ### Languages Thai ## Dataset Structure ### Data Instances ``` {'answers': {'answer': ['ฮิกกิ้นส์'], 'answer_begin_position': [528], 'answer_end_position': [537]}, 'article_id': 115035, 'context': '<doc id="115035" url="https://th.wikipedia.org/wiki?curid=115035" title="เบนจี้">เบนจี้ เบนจี้ () เป็นชื่อตัวละครหมาพันทางแสนรู้ ที่ปรากฏอยู่ในภาพยนตร์หลายเรื่องที่เขียนบท และกำกับโดย โจ แคมป์ ในช่วงทศวรรษ 1970 ถึง 1980 ภาพยนตร์เรื่องแรกในชุด ใช้ชื่อเรื่องว่า เบนจี้ เช่นเดียวกับตัวละคร ถ่ายทำที่เมืองดัลลัส รัฐเทกซัส ฉายครั้งแรกในปี พ.ศ. 2517 ภาพยนตร์ได้รับการเสนอชื่อเข้าชิงรางวัลออสการ์ และได้รางวัลลูกโลกทองคำ สาขาเพลงประกอบยอดเยี่ยม จากเพลง Benji\'s Theme (I Feel Love) ร้องโดย ชาร์ลี ริช หมาที่แสดงเป็นเบนจี้ตัวแรก ชื่อว่า ฮิกกิ้นส์ (พ.ศ. 2502 - พ.ศ. 2518) มีอายุถึง 15 ปีแล้วในขณะแสดง หลังจากภาพยนตร์ออกฉายได้ไม่นาน มันก็ตายในปี พ.ศ. 2518เบนจี้ในภาพยนตร์เบนจี้ในภาพยนตร์. - พ.ศ. 2517, Benji (ภาพยนตร์) - พ.ศ. 2520, For the Love of Benji (ภาพยนตร์) - พ.ศ. 2521, Benji\'s Very Own Christmas Story (ภาพยนตร์โทรทัศน์) - พ.ศ. 2523, Oh Heavenly Dog (ภาพยนตร์) - พ.ศ. 2523, Benji at Work (ภาพยนตร์โทรทัศน์) - พ.ศ. 2524, Benji Takes a Dive at Marineland (ภาพยนตร์โทรทัศน์) - พ.ศ. 2526, Benji, Zax & the Alien Prince (ภาพยนตร์ซีรีส์) - พ.ศ. 2530, Benji the Hunted (ภาพยนตร์) - พ.ศ. 
2547, Benji: Off the Leash! (ภาพยนตร์) - พ.ศ. 2550, Benji: The Barkening (ภาพยนตร์)</doc>\n', 'question': 'สุนัขตัวแรกรับบทเป็นเบนจี้ในภาพยนตร์เรื่อง Benji ที่ออกฉายในปี พ.ศ. 2517 มีชื่อว่าอะไร', 'question_id': 1} {'answers': {'answer': ['ชาร์ลี ริช'], 'answer_begin_position': [482], 'answer_end_position': [492]}, 'article_id': 115035, 'context': '<doc id="115035" url="https://th.wikipedia.org/wiki?curid=115035" title="เบนจี้">เบนจี้ เบนจี้ () เป็นชื่อตัวละครหมาพันทางแสนรู้ ที่ปรากฏอยู่ในภาพยนตร์หลายเรื่องที่เขียนบท และกำกับโดย โจ แคมป์ ในช่วงทศวรรษ 1970 ถึง 1980 ภาพยนตร์เรื่องแรกในชุด ใช้ชื่อเรื่องว่า เบนจี้ เช่นเดียวกับตัวละคร ถ่ายทำที่เมืองดัลลัส รัฐเทกซัส ฉายครั้งแรกในปี พ.ศ. 2517 ภาพยนตร์ได้รับการเสนอชื่อเข้าชิงรางวัลออสการ์ และได้รางวัลลูกโลกทองคำ สาขาเพลงประกอบยอดเยี่ยม จากเพลง Benji\'s Theme (I Feel Love) ร้องโดย ชาร์ลี ริช หมาที่แสดงเป็นเบนจี้ตัวแรก ชื่อว่า ฮิกกิ้นส์ (พ.ศ. 2502 - พ.ศ. 2518) มีอายุถึง 15 ปีแล้วในขณะแสดง หลังจากภาพยนตร์ออกฉายได้ไม่นาน มันก็ตายในปี พ.ศ. 2518เบนจี้ในภาพยนตร์เบนจี้ในภาพยนตร์. - พ.ศ. 2517, Benji (ภาพยนตร์) - พ.ศ. 2520, For the Love of Benji (ภาพยนตร์) - พ.ศ. 2521, Benji\'s Very Own Christmas Story (ภาพยนตร์โทรทัศน์) - พ.ศ. 2523, Oh Heavenly Dog (ภาพยนตร์) - พ.ศ. 2523, Benji at Work (ภาพยนตร์โทรทัศน์) - พ.ศ. 2524, Benji Takes a Dive at Marineland (ภาพยนตร์โทรทัศน์) - พ.ศ. 2526, Benji, Zax & the Alien Prince (ภาพยนตร์ซีรีส์) - พ.ศ. 2530, Benji the Hunted (ภาพยนตร์) - พ.ศ. 2547, Benji: Off the Leash! (ภาพยนตร์) - พ.ศ. 2550, Benji: The Barkening (ภาพยนตร์)</doc>\n', 'question': "เพลง Benji's Theme ใช้ประกอบภาพยนตร์เรื่อง Benji ในปีพ.ศ. 
2517 ขับร้องโดยใคร", 'question_id': 2035} ``` ### Data Fields ``` { "question_id": question id "article_id": article id "context": article texts "question": question "answers": { "answer": answer text "answer_begin_position": answer beginning position "answer_end_position": answer exclusive upper bound position } } ``` ### Data Splits | | train | valid | |-------------------------|-------------|-------------| | # questions | 4000 | 74 | | # avg words in context | 1186.740750 | 1016.459459 | | # avg words in question | 14.325500 | 12.743243 | | # avg words in answer | 3.279750 | 4.608108 | ## Dataset Creation ### Curation Rationale [PyThaiNLP](https://github.com/PyThaiNLP/) created `thaiqa_squad` as a [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) version of [thaiqa](http://copycatch.in.th/thai-qa-task.html). [thaiqa](https://aiforthai.in.th/corpus.php) is part of [The 2nd Question answering program from Thai Wikipedia](http://copycatch.in.th/thai-qa-task.html) of [National Software Contest 2020](http://nsc.siit.tu.ac.th/GENA2/login.php). ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Wikipedia authors for contexts and [NECTEC](https://www.nectec.or.th/en/) for questions and answer annotations ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [NECTEC](https://www.nectec.or.th/en/) ### Personal and Sensitive Information All contents are from Wikipedia. No personal or sensitive information is expected to be included. ## Considerations for Using the Data ### Social Impact of Dataset - open-domain, extractive question answering in Thai ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. 
The contexts include `<doc>` tags at the start and the end of each article. ## Additional Information ### Dataset Curators [NECTEC](https://www.nectec.or.th/en/) for the original [thaiqa](https://aiforthai.in.th/corpus.php). SQuAD formatting by [PyThaiNLP](https://github.com/PyThaiNLP/). ### Licensing Information CC-BY-NC-SA 3.0 ### Citation Information No clear citation guidelines from source: https://aiforthai.in.th/corpus.php SQuAD version: https://github.com/PyThaiNLP/thaiqa_squad ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
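Because the contexts keep the `<doc>` wrapper, you may want to remove it before feeding the text to a QA model. A minimal sketch (the regex and helper are ours, not part of the dataset; note that the answer offsets are computed on the tagged text, so they shift once the tags are stripped and must be recomputed if you rely on them):

```python
import re

def strip_doc_tags(context: str) -> str:
    """Drop the enclosing <doc ...> and </doc> markers from a context string."""
    return re.sub(r"</?doc[^>]*>", "", context).strip()

tagged = (
    '<doc id="115035" url="https://th.wikipedia.org/wiki?curid=115035" '
    'title="Benji">Benji is a fictional dog.</doc>\n'
)
clean = strip_doc_tags(tagged)  # "Benji is a fictional dog."
```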
allegro/klej-psc
2022-10-26T09:01:54.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:5K", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-3.0", "paraphrase-classification", "region:us" ]
allegro
null
null
null
0
102
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 5K - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: Polish Summaries Corpus tags: - paraphrase-classification --- # klej-psc ## Description The Polish Summaries Corpus (PSC) is a dataset of summaries for 569 news articles. The human annotators created five extractive summaries for each article by choosing approximately 5% of the original text. A different annotator created each summary. A subset of 154 articles was also supplemented with five additional abstractive summaries each, i.e., summaries not created from fragments of the original article. In the Hugging Face version of this dataset, summaries of the same article are used as positive pairs, and the most similar summaries of different articles are sampled as negatives. ## Tasks (input, output, and metrics) The task is to predict whether the extract text and summary are similar. Based on PSC, we formulate a text-similarity task. We generate the positive pairs (i.e., referring to the same article) using only those news articles with both extractive and abstractive summaries. We match each extractive summary with the two least similar abstractive ones of the same article. To create negative pairs, we follow a similar procedure. We find the two most similar abstractive summaries for each extractive summary, but from different articles. **Input** (*'extract_text'*, *'summary_text'* columns): extract text and summary text sentences **Output** (*'label'* column): label: 1 indicates the summary is similar, 0 that it is not **Domain**: News articles **Measurements**: F1-Score **Example**: Input: `Mit o potopie jest prastary, sięga czasów, gdy topniał lodowiec. Na skutek tego wydarzenia w dziejach planety, poziom mórz i oceanów podniósł się o kilkadziesiąt metrów. 
Potop polodowcowy z całą, naukową pewnością, miał miejsce, ale najprawdopodobniej został przez ludzkość przegapiony. I oto pojawiła się w tej sprawie kolejna glosa. Jej autorami są amerykańscy geofizycy.` ; `Dwójka amerykańskich geofizyków przedstawiła swój scenariusz pochodzenia mitu o potopie. Przed 7500 laty do będącego jeszcze jeziorem Morza Czarnego wlały się wezbrane wskutek topnienia lodowców wody Morza Śródziemnego. Geofizycy twierdzą, że dzięki temu rozkwitło rolnictwo, bo ludzie musieli migrować i szerzyć rolniczy tryb życia. Środowiska naukowe twierdzą jednak, że potop był tylko jednym z czynników ekspansji rolnictwa.` Input (translated by DeepL): `The myth of the Flood is ancient, dating back to the time when the glacier melted. As a result of this event in the history of the planet, the level of the seas and oceans rose by several tens of meters. The post-glacial flood with all, scientific certainty, took place, but was most likely missed by mankind. And here is another gloss on the matter. Its authors are American geophysicists.` ; `Two American geophysicists presented their scenario of the origin of the Flood myth. 7500 years ago, the waters of the Mediterranean Sea flooded into the Black Sea, which was still a lake, due to the melting of glaciers. Geophysicists claim that this made agriculture flourish because people had to migrate and spread their agricultural lifestyle. 
However, the scientific community argues that the Flood was only one factor in the expansion of agriculture.` Output: `1` (summary is similar) ## Data splits | Subset | Cardinality | | ----------- | ----------: | | train | 4302 | | val | 0 | | test | 1078 | ## Class distribution | Class | train | validation | test | |:------------|--------:|-------------:|-------:| | not similar | 0.705 | - | 0.696 | | similar | 0.295 | - | 0.304 | ## Citation ``` @inproceedings{ogro:kop:14:lrec, title={The {P}olish {S}ummaries {C}orpus}, author={Ogrodniczuk, Maciej and Kope{\'c}, Mateusz}, booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014", year = "2014", } ``` ## License ``` Creative Commons Attribution ShareAlike 3.0 licence (CC-BY-SA 3.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/allegro/klej-psc) [Source](http://zil.ipipan.waw.pl/PolishSummariesCorpus) [Paper](https://aclanthology.org/L14-1145/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("allegro/klej-psc") pprint(dataset['train'][100]) #{'extract_text': 'Nowe prawo energetyczne jest zagrożeniem dla małych ' # 'producentów energii ze źródeł odnawialnych. Sytuacja się ' # 'pogarsza wdobie urynkowienia energii. zniosło preferencje ' # 'wprowadzone dla energetyki wodnej. UE zamierza podwoić ' # 'udział takich źródeł energetyki jak woda, wiatr, słońce do ' # '2010 r.W Polsce 1-1,5 proc. zużycia energii wytwarza się ze ' # 'źródeł odnawialnych. W krajach Unii udział ten wynosi ' # 'średnio 5,6 proc.', # 'label': 1, # 'summary_text': 'W Polsce w niewielkim stopniu wykorzystuje się elektrownie ' # 'wodne oraz inne sposoby tworzenia energii ze źródeł ' # 'odnawialnych. Podczas gdy w innych krajach europejskich jest ' # 'to średnio 5,6 % w Polsce jest to 1-1,5 %. Powodem jest ' # 'niska opłacalność posiadania tego typu elektrowni-zakład ' # 'energetyczny płaci ok. 17 gr. 
za 1kWh, podczas gdy ' # 'wybudowanie takiej elektrowni kosztuje ok. 100 tyś. zł.'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("allegro/klej-psc") dataset = dataset.class_encode_column("label") references = dataset["test"]["label"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average="macro") pprint(acc_score) pprint(f1_score) # {'accuracy': 0.18588469184890655} # {'f1': 0.17511412402843068} ```
parambharat/mile_dataset
2022-12-05T11:46:00.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ta", "license:cc-by-2.0", "Tamil ASR", "Speech Recognition", "arxiv:2207.13331", "arxiv:2207.13333", "region:us" ]
parambharat
IISc-MILE Tamil ASR Corpus contains transcribed speech corpus for training ASR systems for Tamil language. It contains ~150 hours of read speech data collected from 531 speakers in a noise-free recording environment with high quality USB microphones.
@misc{mile_1, doi = {10.48550/ARXIV.2207.13331}, url = {https://arxiv.org/abs/2207.13331}, author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A}, title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada}, publisher = {arXiv}, year = {2022}, } @misc{mile_2, doi = {10.48550/ARXIV.2207.13333}, url = {https://arxiv.org/abs/2207.13333}, author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A}, title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada}, publisher = {arXiv}, year = {2022}, }
null
1
102
--- annotations_creators: - expert-generated language: - ta language_creators: - expert-generated license: - cc-by-2.0 multilinguality: - monolingual pretty_name: IISc-MILE Tamil ASR Corpus size_categories: - 10K<n<100K source_datasets: - original tags: - Tamil ASR - Speech Recognition task_categories: - automatic-speech-recognition task_ids: [] --- # Dataset Card for the IISc-MILE Tamil ASR Corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.openslr.org/127/ - **Repository:** https://github.com/MILE-IISc - **Paper:** https://arxiv.org/abs/2207.13331 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Tamil transcribed speech corpus for ASR ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - Tamil ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More 
Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Attribution 2.0 Generic (CC BY 2.0) ### Citation Information @misc{mile_1, doi = {10.48550/ARXIV.2207.13331}, url = {https://arxiv.org/abs/2207.13331}, author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A}, title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada}, publisher = {arXiv}, year = {2022}, } @misc{mile_2, doi = {10.48550/ARXIV.2207.13333}, url = {https://arxiv.org/abs/2207.13333}, author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A}, title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada}, publisher = {arXiv}, year = {2022}, } ### Contributions Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
ScandEval/scandiqa-da-mini
2023-07-05T09:44:29.000Z
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:da", "license:cc-by-3.0", "region:us" ]
ScandEval
null
null
null
0
102
--- dataset_info: features: - name: id dtype: string - name: question dtype: string - name: answers struct: - name: answer_start sequence: int64 - name: text sequence: string - name: context dtype: string - name: answers_en struct: - name: answer_start sequence: int64 - name: text sequence: string - name: context_en dtype: string - name: title_en dtype: string splits: - name: train num_bytes: 3238964 num_examples: 1024 - name: val num_bytes: 1096223 num_examples: 256 - name: test num_bytes: 6668816 num_examples: 2048 download_size: 6456003 dataset_size: 11004003 license: cc-by-3.0 task_categories: - question-answering language: - da size_categories: - 1K<n<10K --- # Dataset Card for "scandiqa-da-mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
metaeval/defeasible-nli
2023-06-22T14:09:34.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "language:en", "license:apache-2.0", "region:us" ]
metaeval
null
null
null
0
102
--- license: apache-2.0 task_ids: - natural-language-inference task_categories: - text-classification language: - en --- https://github.com/rudinger/defeasible-nli ``` @inproceedings{rudinger2020thinking, title={Thinking like a skeptic: defeasible inference in natural language}, author={Rudinger, Rachel and Shwartz, Vered and Hwang, Jena D and Bhagavatula, Chandra and Forbes, Maxwell and Le Bras, Ronan and Smith, Noah A and Choi, Yejin}, booktitle={Findings of the Association for Computational Linguistics: EMNLP 2020}, pages={4661--4675}, year={2020} } ```
artem9k/ai-text-detection-pile
2023-02-27T03:37:54.000Z
[ "license:mit", "region:us" ]
artem9k
null
null
null
2
102
--- license: mit --- # Dataset Card for AI Text Detection Pile ## Dataset Description - **Point of Contact:** artem9k@gmail.com ### Dataset Summary This is a large-scale dataset intended for AI Text Detection tasks, geared toward long-form text and essays. It contains samples of both human text and AI-generated text from GPT2, GPT3, ChatGPT, GPTJ. Here is the (tentative) breakdown: #### Human Text | Dataset | Num Samples | Link | | ----------- | ----------- | ----------- | | Reddit WritingPromps | 570k | [Link](https://www.kaggle.com/datasets/ratthachat/writing-prompts) | | OpenAI Webtext | 260k | [Link](https://github.com/openai/gpt-2-output-dataset) | | HC3 (Human Responses) | 58k | [Link](https://huggingface.co/datasets/Hello-SimpleAI/HC3) | | ivypanda-essays | TODO | TODO | | **Total** | **990k** | **-** | #### AI-Generated Text | Model | Dataset | Num Samples | Link | | ----------- | ----------- | ----------- | ----------- | | GPT2 | OpenAI gpt2-output-dataset | 260k | [Link](https://github.com/openai/gpt-2-output-dataset) | | GPT3 | pairwise-davinci | 44k | TODO | | GPT3 | synthetic-instruct-davinci-pairwise | 30k | [Link](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses) | | GPTJ | synthetic-instruct-gptj-pairwise | 44k | [Link](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) | | ChatGPT | Scraped from twitter | 5k | **-** | | ChatGPT | HC3 (ChatGPT Responses) | 27k | [Link](https://huggingface.co/datasets/Hello-SimpleAI/HC3) | | ChatGPT | ChatGPT Prompts/emergentmind | 500 | [Link](https://huggingface.co/datasets/MohamedRashad/ChatGPT-prompts/tree/main) | | **Total** | **340k** | **-** | **-** | ### Supported Tasks and Leaderboards Text Classification, AI Text Detection. ### Languages English. ### Data Fields TEXT: The text of the sample. SOURCE: either "human" or "ai"
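For training a detector, the `SOURCE` field maps naturally onto a binary label. A minimal sketch (the helper is ours; the field names are assumed lowercase, `text` and `source`, so adjust the keys if your copy differs; with 🤗 `datasets` you would typically apply this via `.map(...)`):

```python
def encode_label(example: dict) -> dict:
    """Map the 'source' field ('human' or 'ai') to an integer 'label' (0 / 1)."""
    source = example["source"]
    if source not in ("human", "ai"):
        raise ValueError(f"unexpected source value: {source!r}")
    return {**example, "label": 0 if source == "human" else 1}

encoded = encode_label({"text": "An essay about glaciers...", "source": "ai"})
```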
maveriq/tobacco3482
2023-03-02T21:23:58.000Z
[ "region:us" ]
maveriq
null
null
null
1
102
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': ADVE '1': Email '2': Form '3': Letter '4': Memo '5': News '6': Note '7': Report '8': Resume '9': Scientific splits: - name: train num_bytes: 1409969631.808 num_examples: 3482 download_size: 1733093218 dataset_size: 1409969631.808 --- # Dataset Card for "tobacco3482" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
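The integer labels correspond to the ten document classes declared in the card metadata. A small sketch for decoding predictions back into class names (the list is copied from the `class_label` names above; with 🤗 `datasets` you could equivalently use `dataset.features["label"].int2str`):

```python
# Class names in index order, as declared in the card's class_label metadata.
TOBACCO3482_CLASSES = [
    "ADVE", "Email", "Form", "Letter", "Memo",
    "News", "Note", "Report", "Resume", "Scientific",
]

def id2label(index: int) -> str:
    """Translate a class index (0-9) into its Tobacco3482 class name."""
    return TOBACCO3482_CLASSES[index]

print(id2label(3))  # Letter
```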
MU-NLPC/Calc-aqua_rat
2023-05-25T15:44:56.000Z
[ "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "arxiv:1705.04146", "arxiv:2305.15017", "region:us" ]
MU-NLPC
null
null
null
2
102
--- dataset_info: features: - name: question dtype: string - name: options sequence: string - name: rationale dtype: string - name: correct dtype: string - name: chain dtype: string splits: - name: train num_bytes: 71619470 num_examples: 97467 - name: test num_bytes: 191142 num_examples: 254 - name: validation num_bytes: 191976 num_examples: 254 download_size: 40727556 dataset_size: 72002588 license: apache-2.0 task_categories: - question-answering language: - en pretty_name: AQUA-RAT with Calculator size_categories: - 10K<n<100K --- # Dataset Card for "Calc-aqua_rat" ### Summary This dataset is an instance of the [aqua_rat](https://huggingface.co/datasets/aqua_rat) dataset extended with in-context calculator calls, represented by `exec` calls to the `sympy` library. ### Supported Tasks The dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses. This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator. ### Construction Process The dataset was constructed automatically by evaluating all candidate calls to the `sympy` library that were extracted from the originally-annotated *rationale*s. Candidate selection pivots on the equals ('=') symbols in the chain: the left-hand side of each equation is evaluated and accepted as a correct gadget call if the result occurs closely on the right-hand side. Therefore, the extracted calculator calls may contain false negatives (where the calculator could have been used but was not), but no known false positives. 
A full description of the extraction process can be found in the [corresponding parse script](https://github.com/markcheeky/gadgets/blob/7799a7841940b15593d4667219424ee71c74327e/gadgets/aqua.py#L19). **If you find an issue in the dataset or in the fresh version of the parsing script, we'd be happy if you report it, or create a PR.** ## Dataset Structure The dataset can be loaded by simply choosing a split (`train`, `validation` or `test`) and calling: ```python import datasets dataset_val = datasets.load_dataset("MU-NLPC/Calc-aqua_rat", split="validation") print(dataset_val[0]) # see the output below ``` ### Data Instances The samples of Calc-aqua_rat have this format (newline-reformatted for better readability): ```python {'question': 'Three birds are flying at a fast rate of 900 kilometers per hour. What is their speed in miles per minute? [1km = 0.6 miles]', 'options': ['A)32400', 'B)6000', 'C)600', 'D)60000', 'E)10'], 'correct': 'A', 'rationale': 'To calculate the equivalent of miles in a kilometer\n 0.6 kilometers = 1 mile\n 900 kilometers = (0.6)*900 = 540 miles\n In 1 hour there are 60 minutes\n Speed in miles/minutes = 60 * 540 = 32400\n Correct answer - A', 'chain': 'To calculate the equivalent of miles in a kilometer\n 0.6 kilometers \n= 1 mile\n 900 kilometers \n= (0.6)*900\n= \n<gadget id="calculator">(0.6)*900</gadget>\n<output>540</output>\n540 miles\n In 1 hour there are 60 minutes\n Speed in miles/minutes\n= 60 * 540\n= \n<gadget id="calculator">60 * 540</gadget>\n<output>32_400</output>\n32400\n Correct answer - 32400\n. Final result is <result>32400</result>' } ``` The enclosing HTML tags (e.g. **`<gadget id="calculator">(0.6)*900</gadget>\n<output>540</output>`**) represent the inputs and outputs to the `sympy.parse_expr().evalf()` method (in our code [here](https://github.com/markcheeky/gadgets/blob/7799a7841940b15593d4667219424ee71c74327e/gadgets/gadget.py#L28)). 
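To inspect or replay the calculator calls embedded in a `chain`, the gadget tags can be pulled out with a regular expression. The sketch below is ours, not part of the dataset's tooling, and for a dependency-free demo it re-evaluates the expressions with Python's `ast` module rather than the `sympy` call used in the authors' code:

```python
import ast
import operator
import re

# Matches the expression inside <gadget id="calculator">...</gadget>.
GADGET_RE = re.compile(r'<gadget id="calculator">(.*?)</gadget>', re.DOTALL)

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

chain = 'Speed = <gadget id="calculator">(0.6)*900</gadget><output>540</output> miles'
for expr in GADGET_RE.findall(chain):
    print(expr, "=", safe_eval(expr))  # prints each expression with its value
```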
Note that the format of the dataset is consistent with [MU-NLPC/Calc-gsm8k](https://huggingface.co/datasets/MU-NLPC/Calc-gsm8k). ### Data Fields * **question**: A natural language definition of the problem to solve. * **options**: 5 possible options (A, B, C, D and E), among which one is correct. * **correct**: The correct option. * **rationale**: A natural language sequence of steps leading to a solution of the given problem. * **chain**: A natural language sequence of steps with inserted calculator calls and outputs of the sympy calculator. ### Data Splits The samples in data splits are consistent with the original [aqua_rat](https://huggingface.co/datasets/aqua_rat) dataset, containing: * **train** split of 97467 samples, * **validation** split of 254 samples, * **test** split of 254 samples. ## Licensing Apache-2.0, consistent with the original aqua-rat dataset. ## Cite If you use this dataset in research, please cite the original [aqua-rat paper](https://arxiv.org/pdf/1705.04146.pdf) and our report as follows: ```bibtex @article{kadlcik2023calcx, title={Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems}, author={Marek Kadlčík and Michal Štefánik}, year={2023}, eprint={2305.15017}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
diffusers-parti-prompts/karlo-v1
2023-05-17T16:49:02.000Z
[ "region:us" ]
diffusers-parti-prompts
null
null
null
0
102
--- dataset_info: features: - name: Prompt dtype: string - name: Category dtype: string - name: Challenge dtype: string - name: Note dtype: string - name: images dtype: image - name: model_name dtype: string - name: seed dtype: int64 splits: - name: train num_bytes: 161180147.0 num_examples: 1632 download_size: 161038543 dataset_size: 161180147.0 --- # Images of Parti Prompts for "karlo-v1" Code that was used to get the results: ```py from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16) pipe.to("cuda") prompt = "" # a parti prompt generator = torch.Generator("cuda").manual_seed(0) image = pipe(prompt, prior_num_inference_steps=50, decoder_num_inference_steps=100, generator=generator).images[0] ```
dmayhem93/agieval-aqua-rat
2023-06-18T17:14:34.000Z
[ "license:apache-2.0", "arxiv:2304.06364", "region:us" ]
dmayhem93
null
null
null
0
102
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 93696 num_examples: 254 download_size: 0 dataset_size: 93696 license: apache-2.0 --- # Dataset Card for "agieval-aqua-rat" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. Raw dataset: https://github.com/deepmind/AQuA Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. @misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{ling-etal-2017-program, title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems", author = "Ling, Wang and Yogatama, Dani and Dyer, Chris and Blunsom, Phil", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P17-1015", doi = "10.18653/v1/P17-1015", pages = "158--167", abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. 
However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.", }
dmayhem93/agieval-logiqa-en
2023-06-18T17:28:42.000Z
[ "license:cc-by-nc-sa-4.0", "arxiv:2304.06364", "region:us" ]
dmayhem93
null
null
null
0
102
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 852087 num_examples: 651 download_size: 420337 dataset_size: 852087 license: cc-by-nc-sa-4.0 --- # Dataset Card for "agieval-logiqa-en" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. Raw dataset: https://github.com/lgw863/LogiQA-dataset [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) @misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{Liu2020LogiQAAC, title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning}, author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang}, booktitle={International Joint Conference on Artificial Intelligence}, year={2020} }
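As a small illustration of the `query`/`choices`/`gold` schema above, here is a sketch of scoring model predictions against the `gold` indices. The records and predictions below are made up for illustration:

```python
# Made-up records in the query/choices/gold schema described above.
examples = [
    {"query": "Q1 ...", "choices": ["(A) ...", "(B) ...", "(C) ...", "(D) ..."], "gold": [2]},
    {"query": "Q2 ...", "choices": ["(A) ...", "(B) ...", "(C) ...", "(D) ..."], "gold": [0]},
]
predictions = [2, 1]  # model-chosen choice indices, one per example

# A prediction is correct when it matches one of the gold indices.
correct = sum(p in ex["gold"] for p, ex in zip(predictions, examples))
accuracy = correct / len(examples)
print(accuracy)  # 0.5
```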
FreedomIntelligence/evol-instruct-deutsch
2023-08-06T08:12:07.000Z
[ "region:us" ]
FreedomIntelligence
null
null
null
2
102
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT).
shengqin/web-attacks-long
2023-10-03T07:50:07.000Z
[ "region:us" ]
shengqin
null
null
null
0
102
Entry not found
mozilla-foundation/common_voice_7_0
2023-07-29T16:00:09.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "source_datasets:extended|common_voice", "license:cc0-1.0", "arxiv:1912.06670", "region:us" ]
mozilla-foundation
null
@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }
null
21
101
--- annotations_creators: - crowdsourced language_creators: - crowdsourced license: - cc0-1.0 multilinguality: - multilingual size_categories: ab: - 1K<n<10K ar: - 100K<n<1M as: - n<1K az: - n<1K ba: - 100K<n<1M bas: - 1K<n<10K be: - 100K<n<1M bg: - 1K<n<10K br: - 10K<n<100K ca: - 100K<n<1M cnh: - 1K<n<10K cs: - 10K<n<100K cv: - 10K<n<100K cy: - 100K<n<1M de: - 100K<n<1M dv: - 10K<n<100K el: - 10K<n<100K en: - 1M<n<10M eo: - 100K<n<1M es: - 100K<n<1M et: - 10K<n<100K eu: - 10K<n<100K fa: - 100K<n<1M fi: - 1K<n<10K fr: - 100K<n<1M fy-NL: - 10K<n<100K ga-IE: - 1K<n<10K gl: - 1K<n<10K gn: - 1K<n<10K ha: - 1K<n<10K hi: - 1K<n<10K hsb: - 1K<n<10K hu: - 10K<n<100K hy-AM: - 1K<n<10K ia: - 10K<n<100K id: - 10K<n<100K it: - 100K<n<1M ja: - 10K<n<100K ka: - 1K<n<10K kab: - 100K<n<1M kk: - 1K<n<10K kmr: - 10K<n<100K ky: - 10K<n<100K lg: - 10K<n<100K lt: - 10K<n<100K lv: - 1K<n<10K mn: - 10K<n<100K mt: - 10K<n<100K nl: - 10K<n<100K or: - 1K<n<10K pa-IN: - 1K<n<10K pl: - 100K<n<1M pt: - 10K<n<100K rm-sursilv: - 1K<n<10K rm-vallader: - 1K<n<10K ro: - 10K<n<100K ru: - 100K<n<1M rw: - 1M<n<10M sah: - 1K<n<10K sk: - 10K<n<100K sl: - 1K<n<10K sr: - n<1K sv-SE: - 10K<n<100K ta: - 100K<n<1M th: - 100K<n<1M tr: - 10K<n<100K tt: - 10K<n<100K ug: - 10K<n<100K uk: - 10K<n<100K ur: - 1K<n<10K uz: - n<1K vi: - 10K<n<100K vot: - n<1K zh-CN: - 10K<n<100K zh-HK: - 10K<n<100K zh-TW: - 10K<n<100K source_datasets: - extended|common_voice paperswithcode_id: common-voice pretty_name: Common Voice Corpus 7.0 language_bcp47: - ab - ar - as - az - ba - bas - be - bg - br - ca - cnh - cs - cv - cy - de - dv - el - en - eo - es - et - eu - fa - fi - fr - fy-NL - ga-IE - gl - gn - ha - hi - hsb - hu - hy-AM - ia - id - it - ja - ka - kab - kk - kmr - ky - lg - lt - lv - mn - mt - nl - or - pa-IN - pl - pt - rm-sursilv - rm-vallader - ro - ru - rw - sah - sk - sl - sr - sv-SE - ta - th - tr - tt - ug - uk - ur - uz - vi - vot - zh-CN - zh-HK - zh-TW extra_gated_prompt: By clicking on “Access repository” 
below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset. task_categories: - automatic-speech-recognition --- # Dataset Card for Common Voice Corpus 7.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://arxiv.org/abs/1912.06670 - **Leaderboard:** https://paperswithcode.com/dataset/common-voice - **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co) ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 13905 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 11192 validated hours in 76 languages, but more voices and languages are always added. 
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing. ### Supported Tasks and Leaderboards The results for models trained on the Common Voice datasets are available via the [🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) ### Languages ``` Abkhaz, Arabic, Armenian, Assamese, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Breton, Bulgarian, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`. 
```python { 'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5', 'path': 'et/clips/common_voice_et_18318995.mp3', 'audio': { 'path': 'et/clips/common_voice_et_18318995.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000 }, 'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': '', 'locale': 'et', 'segment': '' } ``` ### Data Fields `client_id` (`string`): An id for which client (voice) made the recording `path` (`string`): The path to the audio file `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. `sentence` (`string`): The sentence the user was prompted to speak `up_votes` (`int64`): How many upvotes the audio file has received from reviewers `down_votes` (`int64`): How many downvotes the audio file has received from reviewers `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`) `gender` (`string`): The gender of the speaker `accent` (`string`): Accent of the speaker `locale` (`string`): The locale of the speaker `segment` (`string`): Usually an empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. 
The validated data is data that has been validated with reviewers and received upvotes indicating that the data is of high quality. The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test, and train splits contain data that has been reviewed and deemed of high quality. ## Data Preprocessing Recommended by Hugging Face The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice. Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_. In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation. ```python from datasets import load_dataset ds = load_dataset("mozilla-foundation/common_voice_7_0", "en", use_auth_token=True) def prepare_dataset(batch): """Function to preprocess the dataset with the .map method""" transcription = batch["sentence"] if transcription.startswith('"') and transcription.endswith('"'): # we can remove trailing quotation marks as they do not affect the transcription transcription = transcription[1:-1] if transcription[-1] not in [".", "?", "!"]: # append a full-stop to sentences that do not end in punctuation transcription = transcription + "." 
batch["sentence"] = transcription return batch ds = ds.map(prepare_dataset, desc="preprocess dataset") ``` ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` @inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 } ```
domenicrosati/TruthfulQA
2022-07-01T15:41:54.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2109.07958", "region:us" ]
domenicrosati
null
null
null
4
101
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: TruthfulQA size_categories: - n<1K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa - open-domain-qa - closed-domain-qa --- # Dataset Card for TruthfulQA ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/sylinrl/TruthfulQA](https://github.com/sylinrl/TruthfulQA) - **Repository:** [https://github.com/sylinrl/TruthfulQA](https://github.com/sylinrl/TruthfulQA) - **Paper:** [https://arxiv.org/abs/2109.07958](https://arxiv.org/abs/2109.07958) ### Dataset Summary TruthfulQA: Measuring How Models Mimic Human Falsehoods We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. 
However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web. ### Supported Tasks and Leaderboards See: [Tasks](https://github.com/sylinrl/TruthfulQA#tasks) ### Languages English ## Dataset Structure ### Data Instances The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. ### Data Fields 1. **Type**: Adversarial vs. Non-Adversarial Questions 2. **Category**: Category of misleading question 3. **Question**: The question 4. **Best Answer**: The best correct answer 5. **Correct Answers**: A set of correct answers. Delimited by `;`. 6. **Incorrect Answers**: A set of incorrect answers. Delimited by `;`. 7. **Source**: A source that supports the correct answers. ### Data Splits Due to constraints of the Hugging Face Hub, the dataset is loaded into a single "train" split. ### Contributions Thanks to [@sylinrl](https://github.com/sylinrl) for adding this dataset.
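Since the `Correct Answers` and `Incorrect Answers` fields are `;`-delimited strings, a minimal parsing sketch may help; the row below is invented for illustration, not quoted from the dataset:

```python
# An invented row following the ';'-delimited answer fields described above.
row = {
    "Best Answer": "Nothing in particular happens",
    "Correct Answers": "Nothing in particular happens; You digest it",
    "Incorrect Answers": "It stays in your stomach for years; It makes you sick",
}

def split_answers(field):
    """Split a ';'-delimited answer field into a clean list of answers."""
    return [a.strip() for a in field.split(";")]

correct = split_answers(row["Correct Answers"])
incorrect = split_answers(row["Incorrect Answers"])
print(len(correct), len(incorrect))  # 2 2
```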
Rifky/ID-SQuAD
2023-04-08T04:55:02.000Z
[ "region:us" ]
Rifky
null
null
null
2
101
--- dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers struct: - name: answer_start sequence: int64 - name: text sequence: string splits: - name: test num_bytes: 12218827 num_examples: 11858 - name: train num_bytes: 121632833 num_examples: 130318 - name: validation num_bytes: 12218827 num_examples: 11858 download_size: 19391596 dataset_size: 146070487 --- # Dataset Card for "ID-SQuAD" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
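The `answers` struct above follows the SQuAD convention, where each `answer_start` is a character offset into `context`. A minimal sketch of recovering an answer span from one record; the record below is invented for illustration:

```python
# Hypothetical ID-SQuAD-style record, illustrating the schema above.
record = {
    "context": "Jakarta adalah ibu kota Indonesia.",
    "question": "Apa ibu kota Indonesia?",
    "answers": {"answer_start": [0], "text": ["Jakarta"]},
}

def extract_span(rec):
    """Slice the answer out of the context using its character offset."""
    start = rec["answers"]["answer_start"][0]
    text = rec["answers"]["text"][0]
    return rec["context"][start:start + len(text)]

# The extracted span should match the stored answer text.
print(extract_span(record))  # Jakarta
```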
hpprc/jsick
2023-04-11T06:18:09.000Z
[ "task_categories:sentence-similarity", "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:extended|sick", "language:ja", "language:en", "license:cc-by-sa-4.0", "semantic-textual-similarity", "sts", "region:us" ]
hpprc
Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset. JSICK is the Japanese NLI and STS dataset by manually translating the English dataset SICK (Marelli et al., 2014) into Japanese. We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference. (from official website)
@article{yanaka-mineshima-2022-compositional, title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity", author = "Yanaka, Hitomi and Mineshima, Koji", journal = "Transactions of the Association for Computational Linguistics", volume = "10", year = "2022", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/2022.tacl-1.73", doi = "10.1162/tacl_a_00518", pages = "1266--1284", }
null
3
101
--- annotations_creators: - expert-generated language: - ja - en language_creators: - expert-generated license: - cc-by-sa-4.0 multilinguality: - translation pretty_name: JSICK size_categories: - 10K<n<100K source_datasets: - extended|sick tags: - semantic-textual-similarity - sts task_categories: - sentence-similarity - text-classification task_ids: - natural-language-inference - semantic-similarity-scoring --- # Dataset Card for JSICK ## Table of Contents - [Dataset Card for JSICK](#dataset-card-for-jsick) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japanese-sentences-involving-compositional-knowledge-jsick-dataset) - [JSICK-stress Test set](#jsick-stress-test-set) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [base](#base) - [stress](#stress) - [Data Fields](#data-fields) - [base](#base-1) - [stress](#stress-1) - [Data Splits](#data-splits) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/verypluming/JSICK - **Repository:** https://github.com/verypluming/JSICK - **Paper:** https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual - **Paper:** https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja ### Dataset Summary From official [GitHub](https://github.com/verypluming/JSICK): #### Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset. JSICK is the Japanese NLI and STS dataset by manually translating the English dataset [SICK (Marelli et al., 2014)](https://aclanthology.org/L14-1314/) into Japanese. 
We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference. #### JSICK-stress Test set The JSICK-stress test set is a dataset to investigate whether models capture word order and case particles in Japanese. The JSICK-stress test set is provided by transforming syntactic structures of sentence pairs in JSICK, where we analyze whether models are attentive to word order and case particles to predict entailment labels and similarity scores. The JSICK test set contains 1666, 797, and 1006 sentence pairs (A, B) whose premise sentences A (the column `sentence_A_Ja_origin`) include the basic word order involving ga-o (nominative-accusative), ga-ni (nominative-dative), and ga-de (nominative-instrumental/locative) relations, respectively. We provide the JSICK-stress test set by transforming the syntactic structures of these pairs in the following three ways: - `scrum_ga_o`: a scrambled pair, where the word order of premise sentences A is scrambled into o-ga, ni-ga, and de-ga order, respectively. - `ex_ga_o`: a rephrased pair, where only the case particles (ga, o, ni, de) in premise A are swapped. - `del_ga_o`: a rephrased pair, where only the case particles (ga, o, ni) in premise A are deleted. ### Languages The language data in JSICK is in Japanese and English. 
## Dataset Structure ### Data Instances When loading a specific configuration, users have to append a version-dependent suffix: ```python import datasets as ds dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick") print(dataset) # DatasetDict({ # train: Dataset({ # features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'], # num_rows: 4500 # }) # test: Dataset({ # features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'], # num_rows: 4927 # }) # }) dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick", name="stress") print(dataset) # DatasetDict({ # test: Dataset({ # features: ['id', 'premise', 'hypothesis', 'label', 'score', 'sentence_A_Ja_origin', 'entailment_label_origin', 'relatedness_score_Ja_origin', 'rephrase_type', 'case_particles'], # num_rows: 900 # }) # }) ``` #### base An example looks as follows: ```json { 'id': 1, 'premise': '子供たちのグループが庭で遊んでいて、後ろの方には年を取った男性が立っている', 'hypothesis': '庭にいる男の子たちのグループが遊んでいて、男性が後ろの方に立っている', 'label': 1, // (neutral) 'score': 3.700000047683716, 'premise_en': 'A group of kids is playing in a yard and an old man is standing in the background', 'hypothesis_en': 'A group of boys in a yard is playing and a man is standing in the background', 'label_en': 1, // (neutral) 'score_en': 4.5, 'corr_entailment_labelAB_En': 'nan', 'corr_entailment_labelBA_En': 'nan', 'image_ID': '3155657768_b83a7831e5.jpg', 'original_caption': 'A group of children playing in a yard , a man in the background .', 'semtag_short': 'nan', 'semtag_long': 'nan', } ``` #### stress An example looks as follows: ```json { 'id': '5818_de_d', 'premise': '女性火の近くダンスをしている', 'hypothesis': 
'火の近くでダンスをしている女性は一人もいない', 'label': 2, // (contradiction) 'score': 4.0, 'sentence_A_Ja_origin': '女性が火の近くでダンスをしている', 'entailment_label_origin': 2, 'relatedness_score_Ja_origin': 3.700000047683716, 'rephrase_type': 'd', 'case_particles': 'de' } ``` ### Data Fields #### base A version adopting the column names of a typical NLI dataset. | Name | Description | | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | id | The ids (the same with original SICK). | | premise | The first sentence in Japanese. | | hypothesis | The second sentence in Japanese. | | label | The entailment label in Japanese. | | score | The relatedness score in the range [1-5] in Japanese. | | premise_en | The first sentence in English. | | hypothesis_en | The second sentence in English. | | label_en | The original entailment label in English. | | score_en | The original relatedness score in the range [1-5] in English. | | semtag_short | The linguistic phenomena tags in Japanese. | | semtag_long | The details of linguistic phenomena tags in Japanese. | | image_ID | The original image in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). | | original_caption | The original caption in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). | | corr_entailment_labelAB_En | The corrected entailment label from A to B in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). | | corr_entailment_labelBA_En | The corrected entailment label from B to A in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). | #### stress | Name | Description | | --------------------------- | ------------------------------------------------------------------------------------------------- | | id | Ids (the same with original SICK). | | premise | The first sentence in Japanese. 
| | hypothesis | The second sentence in Japanese. | | label | The entailment label in Japanese | | score | The relatedness score in the range [1-5] in Japanese. | | sentence_A_Ja_origin | The original premise sentences A from the JSICK test set. | | entailment_label_origin | The original entailment labels. | | relatedness_score_Ja_origin | The original relatedness scores. | | rephrase_type | The type of transformation applied to the syntactic structures of the sentence pairs. | | case_particles | The grammatical particles in Japanese that indicate the function or role of a noun in a sentence. | ### Data Splits | name | train | validation | test | | --------------- | ----: | ---------: | ----: | | base | 4,500 | | 4,927 | | original | 4,500 | | 4,927 | | stress | | | 900 | | stress-original | | | 900 | ### Annotations To annotate the JSICK dataset, they used the crowdsourcing platform "Lancers" to re-annotate entailment labels and similarity scores for JSICK. They had six native Japanese speakers as annotators, who were randomly selected from the platform. The annotators were asked to fully understand the guidelines and provide the same labels as gold labels for ten test questions. For entailment labels, they adopted annotations that were agreed upon by a majority vote as gold labels and checked whether the majority judgment vote was semantically valid for each example. For similarity scores, they used the average of the annotation results as gold scores. The raw annotations with the JSICK dataset are [publicly available](https://github.com/verypluming/JSICK/blob/main/jsick/jsick-all-annotations.tsv). The average annotation time was 1 minute per pair, and Krippendorff's alpha for the entailment labels was 0.65. 
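The aggregation described above (majority vote for entailment labels, mean for relatedness scores) can be sketched as follows; the six annotations below are made up for illustration:

```python
from collections import Counter
from statistics import mean

# Made-up annotations from six annotators for one sentence pair,
# illustrating the aggregation described above: majority vote for
# entailment labels, arithmetic mean for relatedness scores.
labels = [1, 1, 2, 1, 1, 0]          # e.g. 0=entailment, 1=neutral, 2=contradiction
scores = [3.0, 4.0, 3.5, 4.0, 3.5, 4.0]

gold_label = Counter(labels).most_common(1)[0][0]
gold_score = mean(scores)
print(gold_label, round(gold_score, 2))  # 1 3.67
```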
## Additional Information - [verypluming/JSICK](https://github.com/verypluming/JSICK) - [Compositional Evaluation on Japanese Textual Entailment and Similarity](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual) - [JSICK: 日本語構成的推論・類似度データセットの構築](https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_article/-char/ja) ### Licensing Information CC BY-SA 4.0 ### Citation Information ```bibtex @article{yanaka-mineshima-2022-compositional, title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity", author = "Yanaka, Hitomi and Mineshima, Koji", journal = "Transactions of the Association for Computational Linguistics", volume = "10", year = "2022", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/2022.tacl-1.73", doi = "10.1162/tacl_a_00518", pages = "1266--1284", } @article{谷中 瞳2021, title={JSICK: 日本語構成的推論・類似度データセットの構築}, author={谷中 瞳 and 峯島 宏次}, journal={人工知能学会全国大会論文集}, volume={JSAI2021}, number={ }, pages={4J3GS6f02-4J3GS6f02}, year={2021}, doi={10.11517/pjsai.JSAI2021.0_4J3GS6f02} } ``` ### Contributions Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset.
arielnlee/Superimposed-Masked-Dataset
2023-08-01T18:08:45.000Z
[ "task_categories:image-classification", "size_categories:10K<n<100K", "language:en", "license:other", "occlusion", "arxiv:2306.17848", "region:us" ]
arielnlee
SMD is an occluded ImageNet-1K validation set, created as an additional way to evaluate the impact of occlusion on model performance. The occluder objects used are not in the ImageNet-1K label space and have an unambiguous relationship to the objects that reside in the label space.
@misc{lee2023hardwiring, title={Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing}, author={Ariel N. Lee and Sarah Adel Bargal and Janavi Kasera and Stan Sclaroff and Kate Saenko and Nataniel Ruiz}, year={2023}, eprint={2306.17848}, archivePrefix={arXiv}, primaryClass={cs.CV} }
null
1
101
---
license: other
task_categories:
- image-classification
language:
- en
tags:
- occlusion
size_categories:
- 10K<n<100K
---

# Superimposed Masked Dataset (SMD)

SMD is an occluded version of the ImageNet-1K validation set, created to serve as an additional way to evaluate the impact of occlusion on model performance. Occluder objects were segmented using Meta's Segment Anything and are not in the ImageNet-1K label space. They were chosen to be unambiguous in relationship to objects that reside in the label space.

Additional details about the dataset, including code to generate your own version of SMD, actual occlusion percentage of each image in the dataset, as well as occluder object segmentation masks, will be released shortly.

![SMD_examples](./smd.jpeg)

The occluders shown above, from left to right, starting from the top row: <strong>Grogu (baby yoda), bacteria, bacteriophage, airpods, origami heart, drone, diamonds (stones, not setting) and coronavirus</strong>. Occluder object images were obtained through Unsplash.

SMD was created for testing model robustness to occlusion in [Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing](https://arielnlee.github.io/PatchMixing/).

## Citations

```bibtex
@misc{lee2023hardwiring,
    title={Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing},
    author={Ariel N. Lee and Sarah Adel Bargal and Janavi Kasera and Stan Sclaroff and Kate Saenko and Nataniel Ruiz},
    year={2023},
    eprint={2306.17848},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

```bibtex
@article{imagenet15russakovsky,
    author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
    title = {{ImageNet Large Scale Visual Recognition Challenge}},
    year = {2015},
    journal = {International Journal of Computer Vision (IJCV)},
    doi = {10.1007/s11263-015-0816-y},
    volume = {115},
    number = {3},
    pages = {211-252}
}
```
diffusers-parti-prompts/sdxl-1.0-refiner
2023-07-30T16:22:20.000Z
[ "region:us" ]
diffusers-parti-prompts
null
null
null
0
101
---
dataset_info:
  features:
  - name: Prompt
    dtype: string
  - name: Category
    dtype: string
  - name: Challenge
    dtype: string
  - name: Note
    dtype: string
  - name: images
    dtype: image
  - name: model_name
    dtype: string
  - name: seed
    dtype: int64
  splits:
  - name: train
    num_bytes: 189993385.856
    num_examples: 1632
  download_size: 189456016
  dataset_size: 189993385.856
---

# Dataset Card for "sdxl-1.0-refiner"

Dataset was generated using the code below:

```python
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import DDIMScheduler, DiffusionPipeline
import PIL.Image


def main():
    print("Loading dataset...")
    parti_prompts = load_dataset("nateraw/parti-prompts", split="train")

    print("Loading pipeline...")
    ckpt_id = "stabilityai/stable-diffusion-xl-base-1.0"
    refiner_ckpt_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
    pipe = DiffusionPipeline.from_pretrained(
        ckpt_id, torch_dtype=torch.float16, use_auth_token=True
    ).to("cuda")
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
    pipe.set_progress_bar_config(disable=True)

    refiner = DiffusionPipeline.from_pretrained(
        refiner_ckpt_id, torch_dtype=torch.float16, use_auth_token=True
    ).to("cuda")
    refiner.scheduler = DDIMScheduler.from_config(refiner.scheduler.config)
    refiner.set_progress_bar_config(disable=True)

    seed = 0
    generator = torch.Generator("cuda").manual_seed(seed)

    print("Running inference...")
    main_dict = {}
    for i in range(len(parti_prompts)):
        sample = parti_prompts[i]
        prompt = sample["Prompt"]

        # The base pipeline emits latents, which the refiner then denoises.
        latent = pipe(
            prompt,
            generator=generator,
            num_inference_steps=100,
            guidance_scale=7.5,
            output_type="latent",
        ).images[0]
        image_refined = refiner(
            prompt=prompt,
            image=latent[None, :],
            generator=generator,
            num_inference_steps=100,
            guidance_scale=7.5,
        ).images[0]
        image = image_refined.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)

        img_path = f"sd_xl_{i}.png"
        image.save(img_path)
        main_dict.update(
            {
                prompt: {
                    "img_path": img_path,
                    "Category": sample["Category"],
                    "Challenge": sample["Challenge"],
                    "Note": sample["Note"],
                    "model_name": ckpt_id,
                    "seed": seed,
                }
            }
        )

    def generation_fn():
        for prompt in main_dict:
            prompt_entry = main_dict[prompt]
            yield {
                "Prompt": prompt,
                "Category": prompt_entry["Category"],
                "Challenge": prompt_entry["Challenge"],
                "Note": prompt_entry["Note"],
                "images": {"path": prompt_entry["img_path"]},
                "model_name": prompt_entry["model_name"],
                "seed": prompt_entry["seed"],
            }

    print("Preparing HF dataset...")
    ds = Dataset.from_generator(
        generation_fn,
        features=Features(
            Prompt=Value("string"),
            Category=Value("string"),
            Challenge=Value("string"),
            Note=Value("string"),
            images=ImageFeature(),
            model_name=Value("string"),
            seed=Value("int64"),
        ),
    )

    ds_id = "diffusers-parti-prompts/sdxl-1.0-refiner"
    ds.push_to_hub(ds_id)


if __name__ == "__main__":
    main()
```
lamini/bird_spider_train_text_to_sql
2023-08-28T07:11:21.000Z
[ "region:us" ]
lamini
null
null
null
2
101
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 34428892
    num_examples: 16428
  - name: test
    num_bytes: 1090039
    num_examples: 1034
  download_size: 3799750
  dataset_size: 35518931
---

# Dataset Card for "bird_spider_train_text_to_sql"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
argilla/customer_assistant
2023-08-30T14:38:42.000Z
[ "size_categories:n<1K", "rlfh", "argilla", "human-feedback", "region:us" ]
argilla
null
null
null
0
101
---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---

# Dataset Card for customer_assistant

This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).

## Dataset Description

- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset contains:

* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla

To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.FeedbackDataset.from_huggingface("argilla/customer_assistant")
```

### Load with `datasets`

To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("argilla/customer_assistant")
```

### Supported Tasks and Leaderboards

This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).

There are no leaderboards associated with this dataset.

### Languages

[More Information Needed]

## Dataset Structure

### Data in Argilla

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.

The **fields** are the dataset records themselves; for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.

| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| user-message | User-message | TextField | True | False |
| context | Context | TextField | True | False |

The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| question-rating | Rate the relevance of the user question | RatingQuestion | False | N/A | [1, 2, 3, 4, 5] |
| context-rating | Rate the quality and relevancy of context for the assistant | RatingQuestion | False | N/A | [1, 2, 3, 4, 5] |
| response | Write a helpful, harmful, accurate response to the user question | TextQuestion | True | N/A | N/A |

**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value/s of the suggestion and its metadata, respectively. The possible values are the same as in the table above.

Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
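The suggestion-column naming convention described above is purely mechanical, so it can be expressed in a few lines of Python. The helper name `suggestion_columns` is ours, not part of the Argilla API; the question names are the ones from the table above:

```python
def suggestion_columns(question_name):
    """Column names that hold a question's suggestion value and its metadata."""
    return [f"{question_name}-suggestion", f"{question_name}-suggestion-metadata"]


for name in ["question-rating", "context-rating", "response"]:
    print(suggestion_columns(name))
# ['question-rating-suggestion', 'question-rating-suggestion-metadata']
# ['context-rating-suggestion', 'context-rating-suggestion-metadata']
# ['response-suggestion', 'response-suggestion-metadata']
```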
### Data Instances An example of a dataset instance in Argilla looks as follows: ```json { "fields": { "context": "This process ensures the client administrator has full control over their team\u0027s access and can manage their workspace efficiently.Plans The plans for the Argilla Cloud service depend on the volume of records processed, with several tiers available to suit varying needs.Each tier has a corresponding monthly and annual price, with a 10% discount applied to the annual pricing option.The tier selection and associated price will be determined by the client\u0027s selection in the Service Order Form section of the Terms of Service document.Plans are: Starter 1 Million records Base 3 Million records Medium 4 Million records Large 6 million records\n\nSupport Argilla Cloud offers comprehensive support services to address various issues that may arise during the use of our service.Support levels are categorized into four distinct tiers, based on the severity of the issue, and a separate category for feature requests.The support process, response times, and procedures differ for each category.(1) Critical Issues Critical issues are characterized by: Severe impact on the Service, potentially rendering it completely non-functional.Disruption of critical service operations or functions.Obstruction of entire customer workflows.In the case of a critical issue, Argilla will: Assign specialist(s) to correct the issue on an expedited basis.Provide ongoing communication on the status via email and/or phone, according to the customer\u0027s preference.Begin work towards identifying a temporary workaround or fix.(2) Major Issues Major issues involve: Limited functionality of the Service.Service instability with periodic interruptions.Material service interruptions in mission-critical functions.Time-sensitive questions impacting performance or deliverables to end-clients.Upon encountering a major issue, Argilla will: Assign a specialist to begin a resolution.Implement 
additional, escalated procedures as reasonably determined necessary by Argilla Support Services staff.(3) Minor Issues Minor issues include: Errors causing partial, non-critical functionality loss.The need for clarification on procedures or information in documentation.Errors in service that may impact performance deliverables.(4) Trivial Issues Trivial issues are characterized by: Errors in system development with little to no impact on performance.Feature Requests Feature requests involve: Requesting a product enhancement.For feature requests, Argilla will: Respond regarding the relevance and interest in incorporating the requested feature.In summary, Argilla Cloud\u0027s support services are designed to provide timely and efficient assistance for issues of varying severity, ensuring a smooth and reliable user experience.All plans include Monday to Friday during office hours (8am to 17pm CEST) with additional support upon request.The Support Channels and features of each tier are shown below:\n\nStarter: Slack Community.Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 48 hours.Severity 4 not specified.Base: Ticketing System, Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 24 hours.Severity 4 not specified.Medium: Ticketing System and dedicated Slack channel, Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 24 hours.Severity 4 one week\n\nLarge: Ticketing System and dedicated Slack channel, Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 24 hours.Severity 4 one week.Data backup and recovery plan Argilla Cloud is committed to ensuring the safety and availability of your data.Our system is designed to run six data backups per day as a standard procedure.These backups capture a snapshot of the 
system state at the time of the backup, enabling restoration to that point if necessary.Our Recovery Point Objective (RPO) is four hours.This means that in the event of a system failure, the maximum data loss would be up to the last four hours of data input.We achieve this by running regular backups throughout the day, reducing the time window of potential data loss.Our Recovery Time Objective (RTO) is one hour.This is the maximum acceptable length of time that your system could be down following a failure or disruption.It represents our commitment to ensuring that your services are restored as quickly as possible.In the event of a disruption, our team will first evaluate the issue to determine the best course of action.If data recovery is necessary, we will restore from the most recent backup.We will then work to identify and resolve the root cause of the disruption to prevent a recurrence.Finally, we conduct regular test restores to ensure that our backup system is working as intended.These tests verify the integrity of the backup data and the functionality of the restore process.\nThis documents an overview of the Argilla Cloud service - a comprehensive Software as a Service (SaaS) solution for data labeling and curation.The service is specifically designed to meet the needs of businesses seeking a reliable, secure, and user-friendly platform for data management.The key components of our service include advanced security measures, robust data backup and recovery protocols, flexible pricing options, and dedicated customer support.The onboarding process is efficient, enabling clients to start using the service within one business day.The scope of this proposal includes details on the aforementioned aspects, providing a clear understanding of the service offerings and associated processes.Argilla Cloud offers four plans:\n\nStarter: Ideal for teams initiating their journey in scaling data curation and labelling projects.Perfect for environments where production 
monitoring is not a requirement.Base: Tailored for teams seeking to amplify their data curation, labelling efforts, and model monitoring, with enhanced support from Argilla.Medium: Designed for teams expanding their language model pipelines, requiring robust ML lifecycle management fortified by Argilla\u0027s comprehensive support.Large: Geared towards teams heavily dependent on language model pipelines, human feedback, and applications, requiring complete ML lifecycle management with robust support.Scope of services Argilla Cloud, a fully managed SaaS, encompasses the following functionalities: Unrestricted Users, Datasets, and Workspaces: The service imposes no limits on the number of users, datasets, or workspaces, supporting scalability of operations.Role-Based Access Control: Administrators and annotators have differentiated access rights to ensure structured and secure data management.Custom Subdomain: Clients are provided with a distinct argilla.io subdomain for accessing the platform.Regular Updates and Upgrades: The service includes regular platform patches and upgrades as part of routine maintenance to uphold system integrity and security.Managed Service: Infrastructure maintenance, backend operations, and other technical aspects are managed by Argilla, eliminating the need for client-side management.Security The security framework of the Argilla Cloud service involves a multi-faceted approach: Data Encryption at Rest: Data stored within the system is encrypted, forming a crucial layer of security.This process automatically encrypts data prior to storage, guarding against unauthorized access.Network Security Measures: The infrastructure has been designed to prevent unauthorized intrusion and to ensure consistent service availability.Measures include firewall protections, intrusion detection systems, and scheduled vulnerability scans to detect and address potential threats.Role-Based Access Control: The system implements role-based access control, defining 
access levels based on user roles.This mechanism controls the extent of access to sensitive information, aligning it with the responsibilities of each role.Security Audits: Regular audits of security systems and protocols are conducted to detect potential vulnerabilities and verify adherence to security standards.Employee Training: All personnel receive regular security training, fostering an understanding of the latest threats and the importance of security best practices.Incident Response Protocol: In the case of a security incident, a pre-defined incident response plan is activated.This plan outlines the procedures for managing different types of security events, and aims to ensure swift mitigation of potential damage.In summary, the security measures in place include data encryption, network security protocols, role-based access control, regular audits, employee training, and a comprehensive incident response plan.These measures contribute to a secure environment for data management.Setup and onboarding The process for setup and onboarding for Argilla Cloud is designed to be efficient and straightforward.The procedure involves a sequence of steps to ensure a smooth transition and optimal use of the service.Step 1: Account Creation The setup process begins with the creation of the client owner account.We require the client to provide the following details: Full name of the administrator Preferred username Administrator\u0027s email address Once these details are received, we send an onboarding email to sign up.Step 2: Platform Orientation Once logged in, the administrator has full access to the Argilla Cloud platform.They can familiarize themselves with the platform interface and various features.If required, a guided tour or tutorial can be provided to walk the administrator through the platform.Step 3: User Management The administrator is then responsible for setting up additional user accounts.They can invite users via email, manage roles (admin, annotator, 
etc.), and assign access permissions to different workspaces and datasets.Step 4: Workspace and Dataset Configuration The administrator can create and manage multiple workspaces and datasets.They have the option to configure settings as per their team\u0027s requirements, including assigning datasets to specific workspaces and managing access permissions.Step 5: Training and Support Argilla provides open resources and support to aid in the onboarding process.This includes user manuals, tutorials, and access to our support team for any queries or issues that may arise during the setup and onboarding process.By following these steps, new users can be quickly onboarded and begin using the Argilla Cloud service with minimal downtime.", "user-message": "What is the ticketing system used by Argilla for customer support?" }, "metadata": {}, "responses": [ { "status": "submitted", "user_id": "73d1e0c3-85ba-48bc-9386-519cdd5fd789", "values": { "context-rating": { "value": 2 }, "question-rating": { "value": 5 }, "response": { "value": "Thanks for your interest in Argilla Cloud!\n\nThe ticketing system used by Argilla for customer support is provided by well-renowned SaaS service." } } } ], "suggestions": [ { "question_id": "d7b6f5e3-6d4a-47c8-ba50-55ff15f8fb51", "question_name": "response", "value": "The ticketing system used by Argilla for customer support is not specified in the given context information." 
} ] } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "context": "This documents an overview of the Argilla Cloud service - a comprehensive Software as a Service (SaaS) solution for data labeling and curation.The service is specifically designed to meet the needs of businesses seeking a reliable, secure, and user-friendly platform for data management.The key components of our service include advanced security measures, robust data backup and recovery protocols, flexible pricing options, and dedicated customer support.The onboarding process is efficient, enabling clients to start using the service within one business day.The scope of this proposal includes details on the aforementioned aspects, providing a clear understanding of the service offerings and associated processes.Argilla Cloud offers four plans:\n\nStarter: Ideal for teams initiating their journey in scaling data curation and labelling projects.Perfect for environments where production monitoring is not a requirement.Base: Tailored for teams seeking to amplify their data curation, labelling efforts, and model monitoring, with enhanced support from Argilla.Medium: Designed for teams expanding their language model pipelines, requiring robust ML lifecycle management fortified by Argilla\u0027s comprehensive support.Large: Geared towards teams heavily dependent on language model pipelines, human feedback, and applications, requiring complete ML lifecycle management with robust support.Scope of services Argilla Cloud, a fully managed SaaS, encompasses the following functionalities: Unrestricted Users, Datasets, and Workspaces: The service imposes no limits on the number of users, datasets, or workspaces, supporting scalability of operations.Role-Based Access Control: Administrators and annotators have differentiated access rights to ensure structured and secure data management.Custom Subdomain: Clients are provided with a distinct argilla.io subdomain for accessing the 
platform.Regular Updates and Upgrades: The service includes regular platform patches and upgrades as part of routine maintenance to uphold system integrity and security.Managed Service: Infrastructure maintenance, backend operations, and other technical aspects are managed by Argilla, eliminating the need for client-side management.Security The security framework of the Argilla Cloud service involves a multi-faceted approach: Data Encryption at Rest: Data stored within the system is encrypted, forming a crucial layer of security.This process automatically encrypts data prior to storage, guarding against unauthorized access.Network Security Measures: The infrastructure has been designed to prevent unauthorized intrusion and to ensure consistent service availability.Measures include firewall protections, intrusion detection systems, and scheduled vulnerability scans to detect and address potential threats.Role-Based Access Control: The system implements role-based access control, defining access levels based on user roles.This mechanism controls the extent of access to sensitive information, aligning it with the responsibilities of each role.Security Audits: Regular audits of security systems and protocols are conducted to detect potential vulnerabilities and verify adherence to security standards.Employee Training: All personnel receive regular security training, fostering an understanding of the latest threats and the importance of security best practices.Incident Response Protocol: In the case of a security incident, a pre-defined incident response plan is activated.This plan outlines the procedures for managing different types of security events, and aims to ensure swift mitigation of potential damage.In summary, the security measures in place include data encryption, network security protocols, role-based access control, regular audits, employee training, and a comprehensive incident response plan.These measures contribute to a secure environment for data 
management.Setup and onboarding The process for setup and onboarding for Argilla Cloud is designed to be efficient and straightforward.The procedure involves a sequence of steps to ensure a smooth transition and optimal use of the service.Step 1: Account Creation The setup process begins with the creation of the client owner account.We require the client to provide the following details: Full name of the administrator Preferred username Administrator\u0027s email address Once these details are received, we send an onboarding email to sign up.Step 2: Platform Orientation Once logged in, the administrator has full access to the Argilla Cloud platform.They can familiarize themselves with the platform interface and various features.If required, a guided tour or tutorial can be provided to walk the administrator through the platform.Step 3: User Management The administrator is then responsible for setting up additional user accounts.They can invite users via email, manage roles (admin, annotator, etc.), and assign access permissions to different workspaces and datasets.Step 4: Workspace and Dataset Configuration The administrator can create and manage multiple workspaces and datasets.They have the option to configure settings as per their team\u0027s requirements, including assigning datasets to specific workspaces and managing access permissions.Step 5: Training and Support Argilla provides open resources and support to aid in the onboarding process.This includes user manuals, tutorials, and access to our support team for any queries or issues that may arise during the setup and onboarding process.By following these steps, new users can be quickly onboarded and begin using the Argilla Cloud service with minimal downtime.\nThis process ensures the client administrator has full control over their team\u0027s access and can manage their workspace efficiently.Plans The plans for the Argilla Cloud service depend on the volume of records processed, with several tiers 
available to suit varying needs.Each tier has a corresponding monthly and annual price, with a 10% discount applied to the annual pricing option.The tier selection and associated price will be determined by the client\u0027s selection in the Service Order Form section of the Terms of Service document.Plans are: Starter 1 Million records Base 3 Million records Medium 4 Million records Large 6 million records\n\nSupport Argilla Cloud offers comprehensive support services to address various issues that may arise during the use of our service.Support levels are categorized into four distinct tiers, based on the severity of the issue, and a separate category for feature requests.The support process, response times, and procedures differ for each category.(1) Critical Issues Critical issues are characterized by: Severe impact on the Service, potentially rendering it completely non-functional.Disruption of critical service operations or functions.Obstruction of entire customer workflows.In the case of a critical issue, Argilla will: Assign specialist(s) to correct the issue on an expedited basis.Provide ongoing communication on the status via email and/or phone, according to the customer\u0027s preference.Begin work towards identifying a temporary workaround or fix.(2) Major Issues Major issues involve: Limited functionality of the Service.Service instability with periodic interruptions.Material service interruptions in mission-critical functions.Time-sensitive questions impacting performance or deliverables to end-clients.Upon encountering a major issue, Argilla will: Assign a specialist to begin a resolution.Implement additional, escalated procedures as reasonably determined necessary by Argilla Support Services staff.(3) Minor Issues Minor issues include: Errors causing partial, non-critical functionality loss.The need for clarification on procedures or information in documentation.Errors in service that may impact performance deliverables.(4) Trivial Issues Trivial 
issues are characterized by: Errors in system development with little to no impact on performance.Feature Requests Feature requests involve: Requesting a product enhancement.For feature requests, Argilla will: Respond regarding the relevance and interest in incorporating the requested feature.In summary, Argilla Cloud\u0027s support services are designed to provide timely and efficient assistance for issues of varying severity, ensuring a smooth and reliable user experience.All plans include Monday to Friday during office hours (8am to 17pm CEST) with additional support upon request.The Support Channels and features of each tier are shown below:\n\nStarter: Slack Community.Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 48 hours.Severity 4 not specified.Base: Ticketing System, Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 24 hours.Severity 4 not specified.Medium: Ticketing System and dedicated Slack channel, Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 24 hours.Severity 4 one week\n\nLarge: Ticketing System and dedicated Slack channel, Severity 1 - Response time \u003c 4 hours.Severity 2 - Response time \u003c 8 hours.Severity 3 - Response time \u003c 24 hours.Severity 4 one week.Data backup and recovery plan Argilla Cloud is committed to ensuring the safety and availability of your data.Our system is designed to run six data backups per day as a standard procedure.These backups capture a snapshot of the system state at the time of the backup, enabling restoration to that point if necessary.Our Recovery Point Objective (RPO) is four hours.This means that in the event of a system failure, the maximum data loss would be up to the last four hours of data input.We achieve this by running regular backups throughout the day, reducing the time window of 
potential data loss.Our Recovery Time Objective (RTO) is one hour.This is the maximum acceptable length of time that your system could be down following a failure or disruption.It represents our commitment to ensuring that your services are restored as quickly as possible.In the event of a disruption, our team will first evaluate the issue to determine the best course of action.If data recovery is necessary, we will restore from the most recent backup.We will then work to identify and resolve the root cause of the disruption to prevent a recurrence.Finally, we conduct regular test restores to ensure that our backup system is working as intended.These tests verify the integrity of the backup data and the functionality of the restore process.", "context-rating": [], "context-rating-suggestion": null, "context-rating-suggestion-metadata": { "agent": null, "score": null, "type": null }, "external_id": null, "metadata": "{}", "question-rating": [], "question-rating-suggestion": null, "question-rating-suggestion-metadata": { "agent": null, "score": null, "type": null }, "response": [], "response-suggestion": "The benefits of choosing Argilla Cloud service over other cloud services include advanced security measures, robust data backup and recovery protocols, flexible pricing options, dedicated customer support, and efficient onboarding process. Argilla Cloud offers a comprehensive security framework that includes data encryption at rest, network security measures, role-based access control, regular security audits, employee training, and a comprehensive incident response protocol. The service also ensures the safety and availability of data through regular data backups with a Recovery Point Objective (RPO) of four hours and a Recovery Time Objective (RTO) of one hour. Additionally, Argilla Cloud offers flexible pricing options based on the volume of records processed and provides dedicated customer support with different support tiers based on the severity of the issue. 
The onboarding process is designed to be efficient and straightforward, allowing new users to quickly start using the service with minimal downtime.", "response-suggestion-metadata": { "agent": null, "score": null, "type": null }, "user-message": "What are the benefits of choosing Argilla Cloud service over other cloud services?" } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. * **user-message** is of type `TextField`. * **context** is of type `TextField`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * (optional) **question-rating** is of type `RatingQuestion` with the following allowed values [1, 2, 3, 4, 5]. * (optional) **context-rating** is of type `RatingQuestion` with the following allowed values [1, 2, 3, 4, 5]. * **response** is of type `TextQuestion`. * **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. * (optional) **question-rating-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5]. * (optional) **context-rating-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5]. * (optional) **response-suggestion** is of type `text`. Additionally, we also have one more field which is optional and is the following: * **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
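The field and suggestion layout described above can be sketched as a plain Python dict. In the snippet below, the key names mirror the schema from the card, but the record values and the validation helper are invented for illustration and are not part of the Argilla API:

```python
# Illustrative sketch of one record following the schema above.
# Field/question names mirror the card; the values and the helper are ours.
ALLOWED_RATINGS = {1, 2, 3, 4, 5}

record = {
    "user-message": "What are the benefits of choosing Argilla Cloud?",
    "context": "Argilla Cloud offers advanced security measures and data backups.",
    "question-rating-suggestion": 5,   # optional, must be 1-5 when present
    "context-rating-suggestion": 4,    # optional, must be 1-5 when present
    "response-suggestion": "Benefits include security, backups, and support.",
    "external_id": None,               # optional link to an external resource
}

def is_valid(rec):
    # Rating suggestions are optional but, when present, must be in 1-5.
    for key in ("question-rating-suggestion", "context-rating-suggestion"):
        value = rec.get(key)
        if value is not None and value not in ALLOWED_RATINGS:
            return False
    # The response suggestion is free text; if present it should be non-empty.
    response = rec.get("response-suggestion")
    return response is None or bool(response.strip())
```

Here `is_valid(record)` passes, while an out-of-range rating such as `7` would fail the check.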
NegarMov/Distorted_Human_Images
2023-09-13T16:08:46.000Z
[ "region:us" ]
NegarMov
null
null
null
0
101
Entry not found
ryanc/music_align_lp_musiccaps_mtt
2023-09-11T03:06:19.000Z
[ "region:us" ]
ryanc
null
null
null
0
101
--- dataset_info: features: - name: caption dtype: string - name: audio dtype: audio splits: - name: train num_bytes: 24105496899.94 num_examples: 25860 download_size: 24003791630 dataset_size: 24105496899.94 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "music_align_lp_musiccaps_mtt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
erkam/clevr-full-v6
2023-09-13T14:20:00.000Z
[ "region:us" ]
erkam
null
null
null
0
101
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: val path: data/val-* - split: test path: data/test-* dataset_info: features: - name: image dtype: image - name: depth dtype: image - name: layout dtype: image - name: colored_layout dtype: image - name: objects sequence: int64 - name: boxes sequence: sequence: float32 - name: triplets sequence: sequence: int64 - name: objects_str dtype: string - name: depth_latent sequence: sequence: sequence: float32 - name: image_latent sequence: sequence: sequence: float32 splits: - name: train num_bytes: 104696506.0 num_examples: 960 - name: val num_bytes: 12961636.0 num_examples: 119 - name: test num_bytes: 12938095.0 num_examples: 119 download_size: 143558769 dataset_size: 130596237.0 --- # Dataset Card for "clevr-full-v6" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sabuhi1997/fine-tune-hebrew-dataset-2
2023-09-14T10:59:14.000Z
[ "region:us" ]
sabuhi1997
null
null
null
0
101
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string splits: - name: train num_bytes: 5715697.0 num_examples: 8 - name: validation num_bytes: 1760186.0 num_examples: 3 - name: test num_bytes: 1625785.0 num_examples: 4 download_size: 3211475 dataset_size: 9101668.0 --- # Dataset Card for "fine-tune-hebrew-dataset-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DDSC/angry-tweets
2023-07-20T00:34:34.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0", "region:us" ]
DDSC
null
null
null
1
100
--- annotations_creators: - crowdsourced language_creators: - found language: - da license: - cc-by-4.0 multilinguality: - monolingual pretty_name: AngryTweets size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for AngryTweets ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** https://aclanthology.org/2021.nodalida-main.53/ - **Direct Download**: https://danlp-downloads.alexandra.dk/datasets/game_tweets.zip ### Dataset Summary This dataset consists of anonymised Danish Twitter data that has been annotated for sentiment analysis through crowd-sourcing. All credits go to the authors of the following paper, who created the dataset: [Pauli, Amalie Brogaard, et al. "DaNLP: An open-source toolkit for Danish Natural Language Processing." Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa). 2021](https://aclanthology.org/2021.nodalida-main.53/) ### Supported Tasks and Leaderboards This dataset is suitable for sentiment analysis. ### Languages This dataset is in Danish. ## Dataset Structure ### Data Instances Every entry in the dataset has a tweet and an associated label. ### Data Fields An entry in the dataset consists of the following fields: - `text` (`str`): The tweet content. - `label` (`str`): The label of the `text`. Can be "positiv", "neutral" or "negativ" for positive, neutral and negative sentiment, respectively. 
### Data Splits A `train` and `test` split is available, with the test split being 30% of the dataset, randomly sampled in a stratified fashion. There are 2,437 tweets in the training split and 1,047 in the test split. ## Additional Information ### Dataset Curators The collection and annotation of the dataset is solely due to the authors of [the original paper](https://aclanthology.org/2021.nodalida-main.53/): Amalie Brogaard Pauli, Maria Barrett, Ophélie Lacroix and Rasmus Hvingelby. The tweets have been anonymised by [@saattrupdan](https://github.com/saattrupdan). ### Licensing Information The dataset is released under the CC BY 4.0 license. ### Citation Information ``` @inproceedings{pauli2021danlp, title={DaNLP: An open-source toolkit for Danish Natural Language Processing}, author={Pauli, Amalie Brogaard and Barrett, Maria and Lacroix, Oph{\'e}lie and Hvingelby, Rasmus}, booktitle={Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)}, pages={460--466}, year={2021} } ``` ### Contributions Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub.
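Since the `label` field holds the Danish strings described above rather than integer ids, a classifier head usually needs an explicit mapping. A minimal sketch (the id order is our choice, not fixed by the dataset, and the example tweet is invented):

```python
# Map the string labels described in the card to integer ids for training.
# The id order below is arbitrary; pick any fixed order and keep it consistent.
label2id = {"negativ": 0, "neutral": 1, "positiv": 2}
id2label = {i: lab for lab, i in label2id.items()}

example = {"text": "Sikke en dejlig dag!", "label": "positiv"}  # invented tweet
example["label_id"] = label2id[example["label"]]
```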
qwant/squad_fr
2023-04-19T14:37:09.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:extended|squad", "language:fr", "license:cc-by-4.0", "region:us" ]
qwant
SQuAD-fr is a French translated version of the Stanford Question Answering Dataset (SQuAD), the reference corpus to evaluate question answering models' performances in English. It consists of 100K question-answer pairs on 500+ articles derived from the original English dataset and represents a large-scale dataset for closed-domain question answering on factoid questions in French. SQuAD-fr serves as a means of data augmentation on FQuAD and PIAF benchmarks, with 90K+ translated training pairs.
@inproceedings{cattan:hal-03336060, TITLE = {{On the Usability of Transformers-based models for a French Question-Answering task}}, AUTHOR = {Cattan, Oralie and Servan, Christophe and Rosset, Sophie}, URL = {https://hal.archives-ouvertes.fr/hal-03336060}, BOOKTITLE = {{Recent Advances in Natural Language Processing (RANLP)}}, ADDRESS = {Varna, Bulgaria}, YEAR = {2021}, MONTH = Sep, PDF = {https://hal.archives-ouvertes.fr/hal-03336060/file/RANLP_2021_transformers_usability.pdf}, HAL_ID = {hal-03336060}, HAL_VERSION = {v1}, }
null
6
100
--- annotations_creators: - machine-generated language_creators: - machine-generated language: - fr license: - cc-by-4.0 multilinguality: - monolingual - translation paperswithcode_id: squad pretty_name: SQuAD-fr size_categories: - 10K<n<100K source_datasets: - extended|squad task_categories: - question-answering task_ids: - extractive-qa - closed-domain-qa --- # Dataset Card for "squad_fr" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Paper:** [On the Usability of Transformers-based models for a French Question-Answering task](https://hal.archives-ouvertes.fr/hal-03336060) - **Size of downloaded dataset files:** 10 MB - **Size of the generated dataset:** 73 MB - **Total amount of disk used:** 83 MB ### Dataset Summary SQuAD-fr: - a translated version of the Stanford Question Answering Dataset (SQuAD) into French - obtained through automatic translation of the English dataset - a reading comprehension dataset, consisting of approximately 90K factoid questions on Wikipedia articles, where the answer to every question is 
a segment of text, or span, from the corresponding reading passage - serves as a means of data augmentation on FQuAD and PIAF benchmarks ### Supported Tasks and Leaderboards - `closed-domain-qa`, `text-retrieval`: This dataset is intended to be used for `closed-domain-qa`, but can also be used for information retrieval tasks. ### Languages This dataset is exclusively in French. ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 10 MB - **Size of the generated dataset:** 73 MB - **Total amount of disk used:** 83 MB An example of 'train' looks as follows. ``` { "answers": { "answer_start": [1], "text": ["This is a test text"] }, "context": "This is a test context.", "id": "1", "question": "Is this a test?", "title": "train test" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits | name |train|validation| |----------|----:|---------:| |1.1.0|87514| 17492| ## Dataset Creation ### Curation Rationale Usability of Transformer-based models, instability relating to data scarcity, investigation of data augmentation, hyperparameters optimization and cross-lingual transfer on the performance of a question-answering task in French. ### Source Data #### Initial Data Collection and Normalization validation: manually collected gold standards, chrf scores and bleu evaluation #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) ### Citation Information ``` @inproceedings{cattan:hal-03336060, TITLE = {{On the Usability of Transformers-based models for a French Question-Answering task}}, AUTHOR = {Cattan, Oralie and Servan, Christophe and Rosset, Sophie}, URL = {https://hal.archives-ouvertes.fr/hal-03336060}, BOOKTITLE = {{Recent Advances in Natural Language Processing (RANLP)}}, ADDRESS = {Varna, Bulgaria}, YEAR = {2021}, MONTH = Sep, PDF = {https://hal.archives-ouvertes.fr/hal-03336060/file/RANLP_2021_transformers_usability.pdf}, HAL_ID = {hal-03336060}, HAL_VERSION = {v1}, } ```
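As the data fields above show, each answer is stored as a text span plus its character offset (`answer_start`) into the context, so the span can be recovered by slicing. A minimal sketch (the sample record below is invented for illustration, not taken from the dataset):

```python
# Recover the answer span from the context using the character offset.
# The sample record is invented; real records follow the same field layout.
sample = {
    "context": "Paris est la capitale de la France.",
    "question": "Quelle est la capitale de la France ?",
    "answers": {"answer_start": [0], "text": ["Paris"]},
}

start = sample["answers"]["answer_start"][0]
answer = sample["answers"]["text"][0]
span = sample["context"][start:start + len(answer)]  # slice out the span
```

For a well-formed record, `span` equals the stored answer text.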
mozilla-foundation/common_voice_10_0
2023-07-29T16:00:14.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "source_datasets:extended|common_voice", "license:cc0-1.0", "arxiv:1912.06670", "region:us" ]
mozilla-foundation
null
@inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 }
null
16
100
--- pretty_name: Common Voice Corpus 10.0 annotations_creators: - crowdsourced language_creators: - crowdsourced language_bcp47: - ab - ar - as - ast - az - ba - bas - be - bg - bn - br - ca - ckb - cnh - cs - cv - cy - da - de - dv - el - en - eo - es - et - eu - fa - fi - fr - fy-NL - ga-IE - gl - gn - ha - hi - hsb - hu - hy-AM - ia - id - ig - it - ja - ka - kab - kk - kmr - ky - lg - lt - lv - mdf - mhr - mk - ml - mn - mr - mt - myv - nan-tw - ne-NP - nl - nn-NO - or - pa-IN - pl - pt - rm-sursilv - rm-vallader - ro - ru - rw - sah - sat - sc - sk - sl - sr - sv-SE - sw - ta - th - tig - tok - tr - tt - ug - uk - ur - uz - vi - vot - yue - zh-CN - zh-HK - zh-TW license: - cc0-1.0 multilinguality: - multilingual size_categories: ab: - 10K<n<100K ar: - 100K<n<1M as: - 1K<n<10K ast: - n<1K az: - n<1K ba: - 100K<n<1M bas: - 1K<n<10K be: - 100K<n<1M bg: - 1K<n<10K bn: - 100K<n<1M br: - 10K<n<100K ca: - 1M<n<10M ckb: - 100K<n<1M cnh: - 1K<n<10K cs: - 10K<n<100K cv: - 10K<n<100K cy: - 100K<n<1M da: - 1K<n<10K de: - 100K<n<1M dv: - 10K<n<100K el: - 10K<n<100K en: - 1M<n<10M eo: - 1M<n<10M es: - 100K<n<1M et: - 10K<n<100K eu: - 100K<n<1M fa: - 100K<n<1M fi: - 10K<n<100K fr: - 100K<n<1M fy-NL: - 10K<n<100K ga-IE: - 1K<n<10K gl: - 10K<n<100K gn: - 1K<n<10K ha: - 1K<n<10K hi: - 10K<n<100K hsb: - 1K<n<10K hu: - 10K<n<100K hy-AM: - 1K<n<10K ia: - 10K<n<100K id: - 10K<n<100K ig: - 1K<n<10K it: - 100K<n<1M ja: - 10K<n<100K ka: - 1K<n<10K kab: - 100K<n<1M kk: - 1K<n<10K kmr: - 10K<n<100K ky: - 10K<n<100K lg: - 100K<n<1M lt: - 10K<n<100K lv: - 1K<n<10K mdf: - n<1K mhr: - 10K<n<100K mk: - n<1K ml: - 1K<n<10K mn: - 10K<n<100K mr: - 10K<n<100K mt: - 10K<n<100K myv: - 1K<n<10K nan-tw: - 10K<n<100K ne-NP: - n<1K nl: - 10K<n<100K nn-NO: - n<1K or: - 1K<n<10K pa-IN: - 1K<n<10K pl: - 100K<n<1M pt: - 100K<n<1M rm-sursilv: - 1K<n<10K rm-vallader: - 1K<n<10K ro: - 10K<n<100K ru: - 100K<n<1M rw: - 1M<n<10M sah: - 1K<n<10K sat: - n<1K sc: - n<1K sk: - 10K<n<100K sl: - 10K<n<100K sr: - 
1K<n<10K sv-SE: - 10K<n<100K sw: - 100K<n<1M ta: - 100K<n<1M th: - 100K<n<1M tig: - n<1K tok: - 1K<n<10K tr: - 10K<n<100K tt: - 10K<n<100K ug: - 10K<n<100K uk: - 10K<n<100K ur: - 100K<n<1M uz: - 100K<n<1M vi: - 10K<n<100K vot: - n<1K yue: - 10K<n<100K zh-CN: - 100K<n<1M zh-HK: - 100K<n<1M zh-TW: - 100K<n<1M source_datasets: - extended|common_voice task_categories: - automatic-speech-recognition paperswithcode_id: common-voice extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset." --- # Dataset Card for Common Voice Corpus 10.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://arxiv.org/abs/1912.06670 - **Leaderboard:** https://paperswithcode.com/dataset/common-voice - **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co) ### Dataset 
Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added. Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing. ### Supported Tasks and Leaderboards The results for models trained on the Common Voice datasets are available via the [🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) ### Languages ``` Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`. 
```python { 'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5', 'path': 'et/clips/common_voice_et_18318995.mp3', 'audio': { 'path': 'et/clips/common_voice_et_18318995.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000 }, 'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': '', 'locale': 'et', 'segment': '' } ``` ### Data Fields `client_id` (`string`): An id for which client (voice) made the recording `path` (`string`): The path to the audio file `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. `sentence` (`string`): The sentence the user was prompted to speak `up_votes` (`int64`): How many upvotes the audio file has received from reviewers `down_votes` (`int64`): How many downvotes the audio file has received from reviewers `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`) `gender` (`string`): The gender of the speaker `accent` (`string`): Accent of the speaker `locale` (`string`): The locale of the speaker `segment` (`string`): Usually an empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. 
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality. The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train. ## Data Preprocessing Recommended by Hugging Face The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice. Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_. In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation. ```python from datasets import load_dataset ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True) def prepare_dataset(batch): """Function to preprocess the dataset with the .map method""" transcription = batch["sentence"] if transcription.startswith('"') and transcription.endswith('"'): # we can remove trailing quotation marks as they do not affect the transcription transcription = transcription[1:-1] if transcription[-1] not in [".", "?", "!"]: # append a full-stop to sentences that do not end in punctuation transcription = transcription + "." 
batch["sentence"] = transcription return batch ds = ds.map(prepare_dataset, desc="preprocess dataset") ``` ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` @inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 } ```
dali-does/clevr-math
2022-10-31T11:28:31.000Z
[ "task_categories:visual-question-answering", "task_ids:visual-question-answering", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "source_datasets:clevr", "language:en", "license:cc-by-4.0", "reasoning", "neuro-symbolic", "multimodal", "arxiv:2208.05358", "region:us" ]
dali-does
CLEVR-Math is a dataset for compositional language, visual and mathematical reasoning. CLEVR-Math poses questions about mathematical operations on visual scenes using subtraction and addition, such as "Remove all large red cylinders. How many objects are left?". There are also adversarial (e.g. "Remove all blue cubes. How many cylinders are left?") and multihop questions (e.g. "Remove all blue cubes. Remove all small purple spheres. How many objects are left?").
@misc{https://doi.org/10.48550/arxiv.2208.05358, doi = {10.48550/ARXIV.2208.05358}, url = {https://arxiv.org/abs/2208.05358}, author = {Lindström, Adam Dahlgren and Abraham, Savitha Sam}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4}, title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} }
null
4
100
--- annotations_creators: - machine-generated language: - en language_creators: - machine-generated license: - cc-by-4.0 multilinguality: - monolingual pretty_name: CLEVR-Math - Compositional language, visual, and mathematical reasoning size_categories: #- 100K<n<1M source_datasets: [clevr] tags: - reasoning - neuro-symbolic - multimodal task_categories: - visual-question-answering task_ids: - visual-question-answering --- # Dataset Card for CLEVR-Math ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/dali-does/clevr-math - **Paper:** https://arxiv.org/abs/2208.05358 - **Leaderboard:** - **Point of Contact:** dali@cs.umu.se ### Dataset Summary Dataset for compositional multimodal mathematical reasoning based on CLEVR. 
#### Loading the data, preprocessing text with CLIP ``` from transformers import CLIPProcessor from datasets import load_dataset, DownloadConfig dl_config = DownloadConfig(resume_download=True, num_proc=8, force_download=True) # Load 'general' instance of dataset dataset = load_dataset('dali-does/clevr-math', download_config=dl_config) # Load version with only multihop in test data dataset_multihop = load_dataset('dali-does/clevr-math', 'multihop', download_config=dl_config) model_path = "openai/clip-vit-base-patch32" extractor = CLIPProcessor.from_pretrained(model_path) def transform_tokenize(e): e['image'] = [image.convert('RGB') for image in e['image']] return extractor(text=e['question'], images=e['image'], padding=True) dataset = dataset.map(transform_tokenize, batched=True, num_proc=8) dataset_subtraction = dataset.filter(lambda e: e['template'].startswith('subtraction'), num_proc=4) ``` ### Supported Tasks and Leaderboards Leaderboard will be announced at a later date. ### Languages The dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language. ## Dataset Structure ### Data Instances * `general` containing the default version with multihop questions in train and test * `multihop` containing multihop questions only in test data to test generalisation of reasoning ### Data Fields ``` features = datasets.Features( { "template": datasets.Value("string"), "id": datasets.Value("string"), "question": datasets.Value("string"), "image": datasets.Image(), "label": datasets.Value("int64") } ) ``` ### Data Splits train/val/test ## Dataset Creation Data is generated using code provided with the CLEVR-dataset, using blender and templates constructed by the dataset curators. 
## Considerations for Using the Data ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Adam Dahlgren Lindström - dali@cs.umu.se ### Licensing Information Licensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0). ### Citation Information [More Information Needed] ``` @misc{https://doi.org/10.48550/arxiv.2208.05358, doi = {10.48550/ARXIV.2208.05358}, url = {https://arxiv.org/abs/2208.05358}, author = {Lindström, Adam Dahlgren and Abraham, Savitha Sam}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4}, title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ``` ### Contributions Thanks to [@dali-does](https://github.com/dali-does) for adding this dataset.
dmayhem93/agieval-lsat-ar
2023-06-18T17:25:42.000Z
[ "arxiv:2304.06364", "arxiv:2104.06598", "region:us" ]
dmayhem93
null
null
null
1
100
--- dataset_info: features: - name: query dtype: string - name: choices sequence: string - name: gold sequence: int64 splits: - name: test num_bytes: 273902 num_examples: 230 download_size: 66495 dataset_size: 273902 --- # Dataset Card for "agieval-lsat-ar" Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo. Raw dataset: https://github.com/zhongwanjun/AR-LSAT MIT License Copyright (c) 2022 Wanjun Zhong Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
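Each test example pairs a `query` string with a list of `choices` and the `gold` answer index sequence. A small sketch (not taken from the AGIEval code) of rendering one example as a lettered multiple-choice prompt; the query and choices below are made up for illustration:

```python
import string

def format_mcq(query, choices):
    """Render a query and its answer options as a lettered multiple-choice prompt."""
    lines = [query]
    for letter, choice in zip(string.ascii_uppercase, choices):
        lines.append(f"({letter}) {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

# Hypothetical analytical-reasoning example
prompt = format_mcq(
    "Which one of the following could be the order of the presentations?",
    ["J, K, L", "K, J, L", "L, J, K"],
)
```

The `gold` sequence then indexes into `choices` (0 for A, 1 for B, and so on) when scoring a model's pick.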
@misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{zhong2021arlsat, title={AR-LSAT: Investigating Analytical Reasoning of Text}, author={Wanjun Zhong and Siyuan Wang and Duyu Tang and Zenan Xu and Daya Guo and Jiahai Wang and Jian Yin and Ming Zhou and Nan Duan}, year={2021}, eprint={2104.06598}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{wang2022lsat, title={From lsat: The progress and challenges of complex reasoning}, author={Wang, Siyuan and Liu, Zhongkun and Zhong, Wanjun and Zhou, Ming and Wei, Zhongyu and Chen, Zhumin and Duan, Nan}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, year={2022}, publisher={IEEE} }
quintic/rp_1b_131k
2023-08-05T06:33:35.000Z
[ "region:us" ]
quintic
null
null
null
0
100
--- dataset_info: features: - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: labels sequence: int64 splits: - name: train num_bytes: 17862486884 num_examples: 10483 download_size: 5031897691 dataset_size: 17862486884 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "rp_1b_131k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
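Each pretokenized example stores three aligned sequences (`input_ids`, `attention_mask`, `labels`). A small sanity check, written here as a sketch rather than taken from any preprocessing script; the `-100` ignore index in the toy example is an assumption (the common Hugging Face convention, not stated by this card):

```python
def check_example(example, expected_len=None):
    """Verify the three token-level sequences are aligned and the mask is binary."""
    n = len(example["input_ids"])
    assert len(example["attention_mask"]) == n, "mask length mismatch"
    assert len(example["labels"]) == n, "labels length mismatch"
    assert set(example["attention_mask"]) <= {0, 1}, "mask must be 0/1"
    if expected_len is not None:
        assert n == expected_len, "unexpected sequence length"
    return n

toy = {
    "input_ids": [101, 7592, 2088, 102],
    "attention_mask": [1, 1, 1, 1],
    "labels": [101, 7592, 2088, -100],  # -100 = ignored position (assumed convention)
}
length = check_example(toy, expected_len=4)
```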
AWfaw/ai-hdlcoder-dataset-clean
2023-09-29T14:58:31.000Z
[ "license:apache-2.0", "region:us" ]
AWfaw
null
null
null
0
100
--- license: apache-2.0 --- Cleaned dataset for experiments and fast pretokenizing. The SQL query to create the dataset is the following:

```sql
SELECT
  f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
  (select f.*, row_number() over (partition by id order by path desc) as seqnum
   from `bigquery-public-data.github_repos.files` AS f) f
JOIN `bigquery-public-data.github_repos.contents` AS c
  ON f.id = c.id AND seqnum = 1
JOIN `bigquery-public-data.github_repos.licenses` AS l
  ON f.repo_name = l.repo_name
WHERE
  NOT c.binary
  AND ((f.path LIKE '%.vhdl' OR f.path LIKE '%.vhd')
       AND (c.size BETWEEN 0 AND 1048575))
```
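The same filter can be expressed locally in Python when post-processing a clone of the data. A sketch mirroring the query's intent (non-binary `.vhd`/`.vhdl` files of at most 1048575 bytes); the function name is ours, not part of the dataset tooling:

```python
MAX_SIZE = 1048575  # upper bound used in the BigQuery filter, in bytes

def keep_file(path, size, is_binary):
    """Mirror the WHERE clause: non-binary VHDL sources within the size cap."""
    if is_binary:
        return False
    return path.endswith((".vhdl", ".vhd")) and 0 <= size <= MAX_SIZE

kept = keep_file("rtl/counter.vhd", 2048, is_binary=False)
dropped = keep_file("src/main.c", 2048, is_binary=False)
```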
erkam/clevr-full-v5
2023-09-07T21:56:08.000Z
[ "region:us" ]
erkam
null
null
null
0
100
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: val path: data/val-* - split: test path: data/test-* dataset_info: features: - name: image dtype: image - name: depth dtype: image - name: layout dtype: image - name: colored_layout dtype: image - name: objects sequence: int64 - name: boxes sequence: sequence: float32 - name: triplets sequence: sequence: int64 - name: objects_str dtype: string splits: - name: train num_bytes: 72217786.0 num_examples: 960 - name: val num_bytes: 8935628.0 num_examples: 119 - name: test num_bytes: 8912087.0 num_examples: 119 download_size: 88745185 dataset_size: 90065501.0 --- # Dataset Card for "clevr-full-v5" 25 objects with 4 spatial relationships [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
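`triplets` holds `(subject, predicate, object)` index triples over the `objects` sequence, with the 4 spatial relationships as predicates. A sketch that renders triplets as text; the predicate index order below is hypothetical, since the card does not specify it:

```python
# Hypothetical index order for the four spatial relationships
PREDICATES = ["left of", "right of", "in front of", "behind"]

def describe_triplets(objects, triplets):
    """Turn (subject, predicate, object) index triples into readable relations."""
    return [
        f"object {objects[s]} is {PREDICATES[p]} object {objects[o]}"
        for s, p, o in triplets
    ]

relations = describe_triplets(objects=[7, 3], triplets=[[0, 3, 1]])
```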
tyzhu/squad_wrong_id_train_10_eval_10
2023-09-19T15:54:56.000Z
[ "region:us" ]
tyzhu
null
null
null
0
100
--- dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 - name: context_id dtype: string - name: inputs dtype: string - name: targets dtype: string splits: - name: train num_bytes: 237881 num_examples: 150 - name: validation num_bytes: 59884 num_examples: 48 download_size: 28458 dataset_size: 297765 --- # Dataset Card for "squad_wrong_id_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
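The card keeps SQuAD's `answers` structure, where `answer_start` is a character offset into `context`. A small sanity check (a sketch, not part of the dataset tooling) that each offset really points at its answer text:

```python
def spans_consistent(example):
    """True iff every answer_start offset indexes its answer text in the context."""
    context = example["context"]
    answers = example["answers"]
    return all(
        context[start:start + len(text)] == text
        for text, start in zip(answers["text"], answers["answer_start"])
    )

# Toy record mirroring the documented fields
toy = {
    "context": "Warsaw is the capital of Poland.",
    "answers": {"text": ["Poland"], "answer_start": [25]},
}
ok = spans_consistent(toy)
```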
facat/sci-llm-part
2023-10-07T13:33:53.000Z
[ "region:us" ]
facat
null
null
null
0
100
--- configs: - config_name: default data_files: - split: gpt1 path: data/gpt1-* - split: gpt2 path: data/gpt2-* - split: gpt3 path: data/gpt3-* - split: gpt4 path: data/gpt4-* - split: gpt5 path: data/gpt5-* - split: gpt6 path: data/gpt6-* - split: han_40k path: data/han_40k-* - split: base_60k path: data/base_60k-* - split: test path: data/test-* - split: test2 path: data/test2-* dataset_info: features: - name: prompt dtype: string - name: context dtype: string - name: chosen dtype: string - name: A dtype: string - name: B dtype: string - name: C dtype: string - name: D dtype: string splits: - name: gpt1 num_bytes: 130420316 num_examples: 22113 - name: gpt2 num_bytes: 264545680 num_examples: 44859 - name: gpt3 num_bytes: 98018603 num_examples: 16648 - name: gpt4 num_bytes: 309111447 num_examples: 52813 - name: gpt5 num_bytes: 99277151 num_examples: 16795 - name: gpt6 num_bytes: 110054529 num_examples: 18325 - name: han_40k num_bytes: 236235210 num_examples: 40807 - name: base_60k num_bytes: 292172331 num_examples: 54209 - name: test num_bytes: 2214599 num_examples: 500 - name: test2 num_bytes: 1111116 num_examples: 200 download_size: 311808265 dataset_size: 1543160982 --- # Dataset Card for "sci-llm-part" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
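Each row combines a `prompt`, an optional `context`, four option strings `A`–`D`, and a `chosen` answer. A sketch (assumed layout, not from the dataset's own tooling) that assembles a row into one evaluation prompt and maps `chosen` back to a letter, whether it stores the letter itself or the option text:

```python
OPTION_KEYS = ("A", "B", "C", "D")

def build_prompt(row):
    """Join context, question, and lettered options into a single prompt string."""
    parts = [row["context"]] if row.get("context") else []
    parts.append(row["prompt"])
    parts.extend(f"{key}. {row[key]}" for key in OPTION_KEYS)
    parts.append("Answer:")
    return "\n".join(parts)

def resolve_choice(row):
    """Map `chosen` to an option letter, accepting either the letter or the text."""
    if row["chosen"] in OPTION_KEYS:
        return row["chosen"]
    return next((key for key in OPTION_KEYS if row[key] == row["chosen"]), None)

# Hypothetical row for illustration
toy = {
    "prompt": "Which gas is most abundant in Earth's atmosphere?",
    "context": "",
    "chosen": "Nitrogen",
    "A": "Oxygen", "B": "Nitrogen", "C": "Argon", "D": "Carbon dioxide",
}
text = build_prompt(toy)
letter = resolve_choice(toy)
```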
air_dialogue
2022-11-03T16:31:11.000Z
[ "task_categories:conversational", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-generation", "task_ids:dialogue-modeling", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "region:us" ]
null
AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. Then the human annotators are asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions.
@inproceedings{wei-etal-2018-airdialogue, title = "{A}ir{D}ialogue: An Environment for Goal-Oriented Dialogue Research", author = "Wei, Wei and Le, Quoc and Dai, Andrew and Li, Jia", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1419", doi = "10.18653/v1/D18-1419", pages = "3844--3854", abstract = "Recent progress in dialogue generation has inspired a number of studies on dialogue systems that are capable of accomplishing tasks through natural language interactions. A promising direction among these studies is the use of reinforcement learning techniques, such as self-play, for training dialogue agents. However, current datasets are limited in size, and the environment for training agents and evaluating progress is relatively unsophisticated. We present AirDialogue, a large dataset that contains 301,427 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. Our experimental results indicate that state-of-the-art dialogue models can only achieve a score of 0.17 while humans can reach a score of 0.91, which suggests significant opportunities for future improvement.", }
null
6
99
--- pretty_name: AirDialogue annotations_creators: - crowdsourced language_creators: - machine-generated language: - en license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - conversational - text-generation - fill-mask task_ids: - dialogue-generation - dialogue-modeling - language-modeling - masked-language-modeling paperswithcode_id: null dataset_info: - config_name: air_dialogue_data features: - name: action struct: - name: status dtype: string - name: name dtype: string - name: flight sequence: int32 - name: intent struct: - name: return_month dtype: string - name: return_day dtype: string - name: max_price dtype: int32 - name: departure_airport dtype: string - name: max_connections dtype: int32 - name: departure_day dtype: string - name: goal dtype: string - name: departure_month dtype: string - name: name dtype: string - name: return_airport dtype: string - name: timestamps sequence: int64 - name: dialogue sequence: string - name: expected_action struct: - name: status dtype: string - name: name dtype: string - name: flight sequence: int32 - name: search_info list: - name: button_name dtype: string - name: field_name dtype: string - name: field_value dtype: string - name: timestmamp dtype: int64 - name: correct_sample dtype: bool_ splits: - name: train num_bytes: 353721137 num_examples: 321459 - name: validation num_bytes: 44442238 num_examples: 40363 download_size: 272898923 dataset_size: 398163375 - config_name: air_dialogue_kb features: - name: kb list: - name: airline dtype: string - name: class dtype: string - name: departure_airport dtype: string - name: departure_day dtype: string - name: departure_month dtype: string - name: departure_time_num dtype: int32 - name: flight_number dtype: int32 - name: num_connections dtype: int32 - name: price dtype: int32 - name: return_airport dtype: string - name: return_day dtype: string - name: return_month dtype: string - name: 
return_time_num dtype: int32 - name: reservation dtype: int32 splits: - name: train num_bytes: 782592158 num_examples: 321459 - name: validation num_bytes: 98269789 num_examples: 40363 download_size: 272898923 dataset_size: 880861947 --- # Dataset Card for air_dialogue ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59 - **Repository:** https://github.com/google/airdialogue - **Paper:** https://www.aclweb.org/anthology/D18-1419/ - **Leaderboard:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59 - **Point of Contact:** [AirDialogue-Google](mailto:airdialogue@gmail.com) [Aakash Gupta](mailto:aakashg80@gmail.com) ### Dataset Summary AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions.
Then the human annotators are asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. ### Supported Tasks and Leaderboards We use perplexity and BLEU score to evaluate the quality of the language generated by the model. We also compare the dialogue state *s* generated by the model and the ground-truth state *s0*. Two categories of metrics are used: exact match scores and scaled scores. The inference competition & leaderboard can be found here: https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59 ### Languages The text in the dataset is in English. The BCP 47 code is `en` ## Dataset Structure ### Data Instances The data is provided in two sets of files. The first one has the dialogues (`air_dialogue_data`) and the second the knowledge base (`air_dialogue_kb`) BuilderConfig: `air_dialogue_data` ``` {"action": {"status": "book", "name": "Emily Edwards", "flight": [1027]}, "intent": {"return_month": "June", "return_day": "14", "max_price": 200, "departure_airport": "DFW", "return_time": "afternoon", "max_connections": 1, "departure_day": "12", "goal": "book", "departure_month": "June", "name": "Emily Edwards", "return_airport": "IAD"}, "timestamps": [1519233239, 1519233244, 1519233249, 1519233252, 1519233333, 1519233374, 1519233392, 1519233416, 1519233443, 1519233448, 1519233464, 1519233513, 1519233525, 1519233540, 1519233626, 1519233628, 1519233638], "dialogue": ["customer: Hello.", "agent: Hello.", "customer: My name is Emily Edwards.", "agent: How may I help you out?", "customer: I need some help in my flight ticket reservation to attend a convocation meeting, can you please help me?", "agent: Sure, I will help you out. 
May I know your travelling dates please?", "customer: Thank you and my dates are 06/12 and back on 06/14.", "agent: Can I know your airport codes?", "customer: The airport codes are from DFW to IAD.", "agent: Ok, please wait a moment.", "customer: Sure.", "agent: There is a flight with connection 1 and price 200, can I proceed with this flight?", "customer: Yes, do proceed with booking.", "agent: Ok, your ticket has been booked.", "customer: Thank you for your assistance in my flight ticket reservation.", "agent: Thank you for choosing us.", "customer: You are welcome."], "expected_action": {"status": "book", "name": "Emily Edwards", "flight": [1027]}, "correct_sample": true} ``` BuilderConfig: `air_dialogue_kb` ``` {"kb": [{"return_airport": "DTW", "airline": "Spirit", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1000, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 2, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Frontier", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1001, "departure_month": "June", "departure_time_num": 0, "class": "business", "return_time_num": 15, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 500}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1002, "departure_month": "June", "departure_time_num": 0, "class": "business", "return_time_num": 13, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 600}, {"return_airport": "IAD", "airline": "Hawaiian", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1003, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 5, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "AA", "departure_day": "12", 
"departure_airport": "DTW", "flight_number": 1004, "departure_month": "June", "departure_time_num": 9, "class": "economy", "return_time_num": 11, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "IAD", "airline": "AA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1005, "departure_month": "June", "departure_time_num": 3, "class": "economy", "return_time_num": 17, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Frontier", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1006, "departure_month": "June", "departure_time_num": 10, "class": "economy", "return_time_num": 10, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "IAD", "airline": "UA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1007, "departure_month": "June", "departure_time_num": 14, "class": "economy", "return_time_num": 20, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "AA", "departure_day": "13", "departure_airport": "DTW", "flight_number": 1008, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 8, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 400}, {"return_airport": "DFW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1009, "departure_month": "June", "departure_time_num": 18, "class": "economy", "return_time_num": 6, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "Frontier", "departure_day": "13", "departure_airport": "DTW", "flight_number": 1010, "departure_month": "June", "departure_time_num": 4, "class": "economy", "return_time_num": 2, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, 
{"return_airport": "DFW", "airline": "Southwest", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1011, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 22, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 100}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "11", "departure_airport": "DFW", "flight_number": 1012, "departure_month": "June", "departure_time_num": 13, "class": "economy", "return_time_num": 22, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Southwest", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1013, "departure_month": "June", "departure_time_num": 16, "class": "economy", "return_time_num": 13, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1014, "departure_month": "June", "departure_time_num": 0, "class": "economy", "return_time_num": 8, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Southwest", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1015, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 1, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 300}, {"return_airport": "DTW", "airline": "UA", "departure_day": "11", "departure_airport": "DFW", "flight_number": 1016, "departure_month": "June", "departure_time_num": 10, "class": "economy", "return_time_num": 4, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 200}, {"return_airport": "DFW", "airline": "AA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1017, "departure_month": "June", "departure_time_num": 14, "class": "economy", "return_time_num": 
23, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 400}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1018, "departure_month": "June", "departure_time_num": 3, "class": "economy", "return_time_num": 1, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Hawaiian", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1019, "departure_month": "June", "departure_time_num": 7, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1020, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 200}, {"return_airport": "IAD", "airline": "Delta", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1021, "departure_month": "June", "departure_time_num": 11, "class": "business", "return_time_num": 8, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 1000}, {"return_airport": "IAD", "airline": "JetBlue", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1022, "departure_month": "June", "departure_time_num": 4, "class": "economy", "return_time_num": 14, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 200}, {"return_airport": "IAD", "airline": "Frontier", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1023, "departure_month": "June", "departure_time_num": 19, "class": "economy", "return_time_num": 23, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "UA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1024, 
"departure_month": "June", "departure_time_num": 11, "class": "economy", "return_time_num": 19, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Hawaiian", "departure_day": "11", "departure_airport": "IAD", "flight_number": 1025, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 10, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "UA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1026, "departure_month": "June", "departure_time_num": 0, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 300}, {"return_airport": "IAD", "airline": "Delta", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1027, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 15, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "IAD", "airline": "Southwest", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1028, "departure_month": "June", "departure_time_num": 23, "class": "economy", "return_time_num": 13, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Spirit", "departure_day": "11", "departure_airport": "DTW", "flight_number": 1029, "departure_month": "June", "departure_time_num": 22, "class": "business", "return_time_num": 4, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 800}], "reservation": 0} ``` ### Data Fields BuilderConfig: `air_dialogue_data`: Provides for customer context, dialogue states and environment key name | Description | |---|---| |'search_action' | search action performed by customer | |'action' | Action taken by the agent | |'intent' | Intents from the conversation | |'timestamps' | 
Timestamp for each of the dialogues | |'dialogue' | Dialogue recorded between agent & customer | |'expected_action' | Expected action from agent (human-annotated)| |'correct_sample' | whether action performed by agent was same as expected_action | BuilderConfig: `air_dialogue_kb`: Provides for the Agent Context _ca_ = (_db_, _r_ ) key name | Description | |---|---| |'kb' | Available flights in the database | |'reservation' | whether customer has an existing reservation| ### Data Splits Data is split into Train/Dev & Test in the ratio of 80%, 10% and 10%. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information No personal or sensitive information is stored. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [AirDialogue team](mailto:airdialogue@gmail.com) For issues regarding HuggingFace Dataset Hub implementation [Aakash Gupta](mailto:aakashg80@gmail.com) ### Licensing Information cc-by-nc-4.0 ### Citation Information @inproceedings{wei-etal-2018-airdialogue, title = "{A}ir{D}ialogue: An Environment for Goal-Oriented Dialogue Research", author = "Wei, Wei and Le, Quoc and Dai, Andrew and Li, Jia", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1419", doi = "10.18653/v1/D18-1419", pages = "3844--3854", abstract = "Recent progress in dialogue generation has inspired a number of studies on dialogue systems that are capable of accomplishing tasks through natural language interactions. A promising direction among these studies is the use of reinforcement learning techniques, such as self-play, for training dialogue agents. However, current datasets are limited in size, and the environment for training agents and evaluating progress is relatively unsophisticated. We present AirDialogue, a large dataset that contains 301,427 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. 
Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. Our experimental results indicate that state-of-the-art dialogue models can only achieve a score of 0.17 while humans can reach a score of 0.91, which suggests significant opportunities for future improvement.", } ### Contributions Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
librispeech_lm
2023-04-05T10:09:21.000Z
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc0-1.0", "region:us" ]
null
Language modeling resources to be used in conjunction with the LibriSpeech ASR corpus.
@inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} }
null
0
99
--- annotations_creators: - no-annotation language: - en language_creators: - found license: - cc0-1.0 multilinguality: - monolingual pretty_name: LibrispeechLm size_categories: - 10M<n<100M source_datasets: - original task_categories: - text-generation task_ids: - language-modeling paperswithcode_id: null dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 4418577129 num_examples: 40418260 download_size: 1507274412 dataset_size: 4418577129 --- # Dataset Card for "librispeech_lm" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://www.openslr.org/11](http://www.openslr.org/11) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.51 GB - **Size of the generated dataset:** 4.42 GB - **Total amount of disk used:** 5.93 GB ### Dataset Summary Language modeling resources to be used in conjunction with the LibriSpeech ASR corpus. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.51 GB - **Size of the generated dataset:** 4.42 GB - **Total amount of disk used:** 5.93 GB An example of 'train' looks as follows. ``` { "text": "This is a test file" } ``` ### Data Fields The data fields are the same among all splits. #### default - `text`: a `string` feature. ### Data Splits | name | train | |-------|-------:| |default|40418260| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{panayotov2015librispeech, title={Librispeech: an ASR corpus based on public domain audio books}, author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev}, booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on}, pages={5206--5210}, year={2015}, organization={IEEE} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
wider_face
2023-01-25T15:02:08.000Z
[ "task_categories:object-detection", "task_ids:face-detection", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-wider", "language:en", "license:cc-by-nc-nd-4.0", "arxiv:1511.06523", "region:us" ]
null
WIDER FACE dataset is a face detection benchmark dataset, of which images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose and occlusion as depicted in the sample images. WIDER FACE dataset is organized based on 61 event classes. For each event class, we randomly select 40%/10%/50% data as training, validation and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we shall proceed to evaluate.
@inproceedings{yang2016wider, Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou}, Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, Title = {WIDER FACE: A Face Detection Benchmark}, Year = {2016}}
null
12
99
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-nc-nd-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|other-wider task_categories: - object-detection task_ids: - face-detection paperswithcode_id: wider-face-1 pretty_name: WIDER FACE dataset_info: features: - name: image dtype: image - name: faces sequence: - name: bbox sequence: float32 length: 4 - name: blur dtype: class_label: names: '0': clear '1': normal '2': heavy - name: expression dtype: class_label: names: '0': typical '1': exaggerate - name: illumination dtype: class_label: names: '0': normal '1': 'exaggerate ' - name: occlusion dtype: class_label: names: '0': 'no' '1': partial '2': heavy - name: pose dtype: class_label: names: '0': typical '1': atypical - name: invalid dtype: bool splits: - name: train num_bytes: 12049881 num_examples: 12880 - name: test num_bytes: 3761103 num_examples: 16097 - name: validation num_bytes: 2998735 num_examples: 3226 download_size: 3676086479 dataset_size: 18809719 --- # Dataset Card for WIDER FACE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - 
[Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://shuoyang1213.me/WIDERFACE/index.html - **Repository:** - **Paper:** [WIDER FACE: A Face Detection Benchmark](https://arxiv.org/abs/1511.06523) - **Leaderboard:** http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html - **Point of Contact:** shuoyang.1213@gmail.com ### Dataset Summary WIDER FACE dataset is a face detection benchmark dataset, of which images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose and occlusion as depicted in the sample images. WIDER FACE dataset is organized based on 61 event classes. For each event class, we randomly select 40%/10%/50% data as training, validation and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we shall proceed to evaluate. ### Supported Tasks and Leaderboards - `face-detection`: The dataset can be used to train a model for Face Detection. More information on evaluating the model's performance can be found [here](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html). ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its face annotations. 
``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1024x755 at 0x19FA12186D8>, 'faces': { 'bbox': [ [178.0, 238.0, 55.0, 73.0], [248.0, 235.0, 59.0, 73.0], [363.0, 157.0, 59.0, 73.0], [468.0, 153.0, 53.0, 72.0], [629.0, 110.0, 56.0, 81.0], [745.0, 138.0, 55.0, 77.0] ], 'blur': [2, 2, 2, 2, 2, 2], 'expression': [0, 0, 0, 0, 0, 0], 'illumination': [0, 0, 0, 0, 0, 0], 'occlusion': [1, 2, 1, 2, 1, 2], 'pose': [0, 0, 0, 0, 0, 0], 'invalid': [False, False, False, False, False, False] } } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `faces`: a dictionary of face attributes for the faces present on the image - `bbox`: the bounding box of each face (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `blur`: the blur level of each face, with possible values including `clear` (0), `normal` (1) and `heavy` (2) - `expression`: the facial expression of each face, with possible values including `typical` (0) and `exaggerate` (1) - `illumination`: the lighting condition of each face, with possible values including `normal` (0) and `exaggerate` (1) - `occlusion`: the level of occlusion of each face, with possible values including `no` (0), `partial` (1) and `heavy` (2) - `pose`: the pose of each face, with possible values including `typical` (0) and `atypical` (1) - `invalid`: whether the image is valid or invalid. ### Data Splits The data is split into training, validation and testing sets. WIDER FACE dataset is organized based on 61 event classes.
For each event class, 40%/10%/50% data is randomly selected as training, validation and testing sets. The training set contains 12880 images, the validation set 3226 images and the test set 16097 images. ## Dataset Creation ### Curation Rationale The curators state that current face detection datasets typically contain a few thousand faces, with limited variations in pose, scale, facial expression, occlusion, and background clutter, making it difficult to assess real-world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping with heavy occlusion, small scale, and atypical pose. ### Source Data #### Initial Data Collection and Normalization WIDER FACE dataset is a subset of the WIDER dataset. The images in WIDER were collected in the following three steps: 1) Event categories were defined and chosen following the Large Scale Ontology for Multimedia (LSCOM) [22], which provides around 1000 concepts relevant to video event analysis. 2) Images were retrieved using search engines like Google and Bing. For each category, 1000-3000 images were collected. 3) The data were cleaned by manually examining all the images and filtering out images without a human face. Then, similar images in each event category were removed to ensure large diversity in face appearance. A total of 32203 images are eventually included in the WIDER FACE dataset. #### Who are the source language producers? The images are selected from the publicly available WIDER dataset. ### Annotations #### Annotation process The curators label the bounding boxes for all the recognizable faces in the WIDER FACE dataset. The bounding box is required to tightly contain the forehead, chin, and cheek. If a face is occluded, they still label it with a bounding box but with an estimation on the scale of occlusion.
Similar to the PASCAL VOC dataset [6], they assign an ’Ignore’ flag to the face which is very difficult to be recognized due to low resolution and small scale (10 pixels or less). After annotating the face bounding boxes, they further annotate the following attributes: pose (typical, atypical) and occlusion level (partial, heavy). Each annotation is labeled by one annotator and cross-checked by two different people. #### Who are the annotators? Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang ### Licensing Information [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/). ### Citation Information ``` @inproceedings{yang2016wider, Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou}, Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, Title = {WIDER FACE: A Face Detection Benchmark}, Year = {2016}} ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
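The card above describes the `faces.bbox` field as using the COCO convention `[x, y, width, height]`. As an illustrative sketch only (the helper below is hypothetical, not part of the `datasets` loader), converting such a box to the corner format `[x_min, y_min, x_max, y_max]` that many detection pipelines expect looks like this:

```python
# Illustrative sketch only: the WIDER FACE card stores each bounding box in
# the COCO convention [x, y, width, height]. `coco_to_corners` is a
# hypothetical helper, not part of the dataset loader.

def coco_to_corners(bbox):
    """Convert a COCO-style [x, y, w, h] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First two boxes from the example instance shown in the card.
faces = {"bbox": [[178.0, 238.0, 55.0, 73.0], [248.0, 235.0, 59.0, 73.0]]}
corners = [coco_to_corners(b) for b in faces["bbox"]]
print(corners[0])  # [178.0, 238.0, 233.0, 311.0]
```

The same conversion applies to every box in a `faces` record, since each annotation list in the card is index-aligned per face.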
GEM/BiSECT
2022-09-02T21:58:17.000Z
[ "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:de", "language:en", "language:fr", "language:es", "license:other", "region:us" ]
GEM
BiSECT is a Split and Rephrase corpus created via bilingual pivoting.
@inproceedings{kim-etal-2021-bisect, title = "{B}i{SECT}: Learning to Split and Rephrase Sentences with Bitexts", author = "Kim, Joongwon and Maddela, Mounica and Kriz, Reno and Xu, Wei and Callison-Burch, Chris", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.500", pages = "6193--6209" }
null
2
99
--- annotations_creators: - none language_creators: - unknown language: - de - en - fr - es license: - other multilinguality: - unknown pretty_name: BiSECT size_categories: - unknown source_datasets: - original task_categories: - simplification task_ids: - unknown --- # Dataset Card for GEM/BiSECT ## Dataset Description - **Homepage:** https://github.com/mounicam/BiSECT - **Repository:** https://github.com/mounicam/BiSECT/tree/main/bisect - **Paper:** https://aclanthology.org/2021.emnlp-main.500/ - **Leaderboard:** N/A - **Point of Contact:** Joongwon Kim, Mounica Maddela, Reno Kriz ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/BiSECT). ### Dataset Summary This dataset is composed of 1 million complex sentences with the task to split and simplify them while retaining the full meaning. Compared to other simplification corpora, BiSECT requires more significant edits. BiSECT offers splits in English, German, French, and Spanish. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/BiSECT') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/BiSECT). #### website [Link](https://github.com/mounicam/BiSECT) #### paper [Link](https://aclanthology.org/2021.emnlp-main.500/) ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Link](https://github.com/mounicam/BiSECT) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Link](https://github.com/mounicam/BiSECT/tree/main/bisect) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [Link](https://aclanthology.org/2021.emnlp-main.500/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. 
Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{kim-etal-2021-bisect, title = "{B}i{SECT}: Learning to Split and Rephrase Sentences with Bitexts", author = "Kim, Joongwon and Maddela, Mounica and Kriz, Reno and Xu, Wei and Callison-Burch, Chris", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.500", pages = "6193--6209" } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Joongwon Kim, Mounica Maddela, Reno Kriz #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> jkim0118@seas.upenn.edu, mmaddela3@gatech.edu, rkriz1@jh.edu #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> yes #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English`, `German`, `French`, `Spanish, Castilian` #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> other: Other license #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> Split and Rephrase. #### Add. License Info <!-- info: What is the 'other' license of the dataset? 
--> <!-- scope: periscope --> The dataset is not licensed by itself, and the source OPUS data consists solely of publicly available parallel corpora. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Simplification #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> To rewrite a long, complex sentence into shorter, readable, meaning-equivalent sentences. ### Credit ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> - `gem_id` (string): a unique identifier for the instance - `source_sentence` (string): sentence to be simplified - `target_sentence` (string): simplified text that was split and rephrased #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { "gem_id": "bisect-train-0", "source_sentence": "The report on the visit to Bhutan states that the small community has made the task of coordination less complex and success is manifested in the synchronized programming cycles which now apply to all but one of the agencies ( the World Health Organization ) .", "target_sentence": "The report on the visit to Bhutan says that the small community has made the coordination work less complex . Success manifests itself in synchronized programming cycles that now apply to all but one organism ( the World Health Organization ) ." } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> For the main English BiSECT dataset, the splits are as follows: 1. Train (n=928440) 2. Validation (n=9079) 3. Test (n=583) Additional challenge sets were derived from the data presented in the paper. Please refer to the challenge set sections.
The train/validation/test splits for other languages are as follows: German (n=184638/n=864/n=735) Spanish (n=282944/n=3638/n=3081) French (n=491035/n=2400/n=1036) #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> While all training data were derived from subsets of the OPUS corpora, different source subsets were used for training vs. validation and testing. The training set comprised more web crawl data, whereas the development and test sets comprised EMEA and EU texts. Details can be found in the BiSECT paper. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> Understanding long and complex sentences is challenging for both humans and NLP models. The BiSECT dataset helps facilitate more research on Split and Rephrase as a task in itself, as well as on how it can benefit downstream NLP applications. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> yes #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? --> <!-- scope: microscope --> BiSECT is the largest available corpus for the Split and Rephrase task. In addition, it has been shown that BiSECT is of higher quality than previous Split and Rephrase corpora and contains a wider variety of splitting operations.
Most previous Split and Rephrase corpora (HSplit-Wiki, Cont-Benchmark, and Wiki-Benchmark) were manually written at a small scale and focused on evaluation, while the one corpus of comparable size, WikiSplit, contains significant errors in around 25% of its pairs. This is because Wikipedia editors are not only trying to split a sentence, but also often simultaneously modifying the sentence for other purposes, which results in changes to the initial meaning. ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to the original dataset? --> <!-- scope: periscope --> `data points added` #### Modification Details <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification --> <!-- scope: microscope --> The original BiSECT training, validation, and test splits are maintained to ensure a fair comparison. Note that the original BiSECT test set was created by manually selecting 583 high-quality Split and Rephrase instances from 1000 random source-target pairs sampled from the EMEA and JRC-Acquis corpora from OPUS. As the first challenge set, we include the HSPLIT-Wiki test set, containing 359 pairs. For each complex sentence, there are four reference splits. To ensure replicability, we again follow the BiSECT paper and present only the references from HSplit2-full. In addition to the two evaluation sets used in the original BiSECT paper, we also introduce a second challenge set. For this, we initially consider all 7,293 pairs from the EMEA and JRC-Acquis corpora. From there, we classify each pair using the classification algorithm from Section 4.2 of the original BiSECT paper. The three classes are as follows: 1.
Direct Insertion: when a long sentence l contains two independent clauses and requires only minor changes in order to make a fluent and meaning-preserving split s. 2. Changes near Split, when l contains one independent and one dependent clause, but modifications are restricted to the region where l is split. 3. Changes across Sentences, where major changes are required throughout l in order to create a fluent split s. We keep only pairs labeled as Type 3, and after filtering out pairs with significant length differences (signaling potential content addition/deletion), we present a second challenge set of 1,798 pairs. #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> The dataset can be downloaded from the original repository by the authors. The original BiSECT paper proposes several transformer-based models that can be used as baselines, and compares them against Copy512, an LSTM-based model that was the previous state of the art. The common metric used for automatic evaluation of Split and Rephrase, and of sentence simplification more generally, is SARI. The BiSECT paper also evaluates using BERTScore. Note that automatic evaluations tend not to correlate well with human judgments, so a human evaluation of quality is generally expected for publication. The original BiSECT paper provides templates for collecting quality annotations from Amazon Mechanical Turk. ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset?
--> <!-- scope: telescope --> Text comprehension (needed to generate meaning-equivalent output) and notions of complexity (what is more 'readable' in terms of syntactic structure, lexical choice, punctuation). #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `Other: Other Metrics`, `BERT-Score` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> SARI is a metric used for evaluating automatic text simplification systems. The metric compares the predicted simplified sentences against the reference and the source sentences. It explicitly measures the goodness of words that are added, deleted and kept by the system. #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> Existing automatic metrics, such as BLEU (Papineni et al., 2002) and SAMSA (Sulem et al., 2018), are not optimal for the Split and Rephrase task as they rely on lexical overlap between the output and the target (or source) and underestimate the splitting capability of the models that rephrase often. As such, the dataset creators focused on BERTScore (Zhang et al., 2020) and SARI (Xu et al., 2016). BERTScore captures meaning preservation and fluency well (Scialom et al., 2021). SARI can provide three separate F1/precision scores that explicitly measure the correctness of inserted, kept and deleted n-grams when compared to both the source and the target. The authors used an extended version of SARI that considers lexical paraphrases of the reference. #### Previous results available? <!-- info: Are previous results available? 
--> <!-- scope: telescope --> yes ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> BiSECT was constructed to satisfy the need for a Split and Rephrase corpus that is both large-scale and high-quality. Most previous Split and Rephrase corpora (HSplit-Wiki, Cont-Benchmark, and Wiki-Benchmark) were manually written at a small scale and focused on evaluation, while the one corpus of comparable size, WikiSplit, contains significant errors in around 25% of its pairs. This is because Wikipedia editors are not only trying to split a sentence, but also often simultaneously modifying the sentence for other purposes, which results in changes to the initial meaning. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The goal of Split and Rephrase is to break down longer sentences into multiple shorter sentences, which has downstream applications for many NLP tasks, including machine translation and dependency parsing. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Other` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> N/A. #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> There is a range of topics spanning domains such as web crawl and government documents (European Parliament, United Nations, EMEA). #### Data Validation <!-- info: Was the text validated by a different worker or a data curator?
--> <!-- scope: telescope --> validated by data curator #### Data Preprocessing <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) --> <!-- scope: microscope --> The construction of the BiSECT corpus relies on leveraging the sentence-level alignments from OPUS, a collection of bilingual parallel corpora over many language pairs. Given a target language A, this work extracts all 1-2 and 2-1 sentence alignments from parallel corpora between A and a set of foreign languages B. Next, the foreign sentences are translated into English using Google Translate’s Web API service to obtain sentence alignments between a single long sentence and two corresponding split sentences, both in the desired language. The authors further filtered the data in a hybrid fashion. #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> hybrid #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> To remove noise, the authors remove pairs where the single long sentence (l) contains a token with punctuation after the first two and before the last two alphabetic characters. The authors also removed instances where l contains more than one unconnected component in its dependency tree, generated via SpaCy. ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> Since this data is collected from OPUS, all instances are already in the public domain.
### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> unlikely #### Categories of PII <!-- info: What categories of PII are present or suspected in the data? --> <!-- scope: periscope --> `generic PII` #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> yes #### Details on how Dataset Addresses the Needs <!-- info: Describe how this dataset addresses the needs of underserved communities. --> <!-- scope: microscope --> The data as provided in GEMv2 is in English, which is a language with abundant existing resources. However, the original paper also provides Split and Rephrase pairs for French, Spanish, and German, while providing a framework for leveraging bilingual corpora from any language pair found within OPUS. ### Discussion of Biases #### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> The language produced in the dataset is limited to what is captured in the used subset of the OPUS corpora, which might not represent the full distribution of speakers from all locations. For example, the corpora used are from a limited set of relatively formal domains, so it is possible that high performance on the BiSECT test set may not transfer to more informal text. ## Considerations for Using the Data ### PII Risks and Liability #### Potential PII Risk <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. --> <!-- scope: microscope --> Since this data is collected from OPUS, all pairs are already in the public domain. ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `public domain` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? 
--> <!-- scope: periscope --> `public domain` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> The creation of English BiSECT relies on translating non-English text back to English. While machine translation systems tend to perform well on high-resource languages, there is still a non-negligible chance that these systems make errors; through a manual evaluation of a subset of BiSECT, it was found that 15% of pairs contained significant errors, while an additional 22% contained minor adequacy/fluency errors. This problem is exacerbated slightly when creating German BiSECT (22% significant errors, 24% minor errors), and these numbers would likely get larger if lower-resource languages were used.
koutch/intro_prog
2023-06-05T08:45:02.000Z
[ "region:us" ]
koutch
The Dublin programming dataset is a dataset composed of students' submissions to introductory programming assignments at the University of Dublin. Students submitted these programs for multiple programming courses over the duration of three academic years.
@inproceedings{azcona2019user2code2vec, title={user2code2vec: Embeddings for Profiling Students Based on Distributional Representations of Source Code}, author={Azcona, David and Arora, Piyush and Hsiao, I-Han and Smeaton, Alan}, booktitle={Proceedings of the 9th International Learning Analytics & Knowledge Conference (LAK’19)}, year={2019}, organization={ACM} } @inproceedings{DBLP:conf/edm/CleuziouF21, author = {Guillaume Cleuziou and Fr{\'{e}}d{\'{e}}ric Flouvat}, editor = {Sharon I{-}Han Hsiao and Shaghayegh (Sherry) Sahebi and Fran{\c{c}}ois Bouchet and Jill{-}J{\^{e}}nn Vie}, title = {Learning student program embeddings using abstract execution traces}, booktitle = {Proceedings of the 14th International Conference on Educational Data Mining, {EDM} 2021, virtual, June 29 - July 2, 2021}, publisher = {International Educational Data Mining Society}, year = {2021}, timestamp = {Wed, 09 Mar 2022 16:47:22 +0100}, biburl = {https://dblp.org/rec/conf/edm/CleuziouF21.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
0
99
--- dataset_info: - config_name: dublin_metadata features: - name: assignment_id dtype: string - name: func_name dtype: string - name: reference_solution dtype: string - name: description dtype: string - name: test dtype: string splits: - name: train num_bytes: 18983 num_examples: 36 - name: test num_bytes: 17403 num_examples: 35 download_size: 41873 dataset_size: 36386 - config_name: singapore_metadata features: - name: assignment_id dtype: string - name: func_name dtype: string - name: reference_solution dtype: string - name: description dtype: string - name: test dtype: string splits: - name: train num_bytes: 5577 num_examples: 5 download_size: 6139 dataset_size: 5577 - config_name: dublin_data features: - name: submission_id dtype: int32 - name: func_code dtype: string - name: assignment_id dtype: string - name: func_name dtype: string - name: description dtype: string - name: test dtype: string - name: correct dtype: bool - name: user dtype: string - name: academic_year dtype: int32 splits: - name: train num_bytes: 4412068 num_examples: 7486 - name: test num_bytes: 7737585 num_examples: 14259 download_size: 15756562 dataset_size: 12149653 - config_name: singapore_data features: - name: submission_id dtype: int32 - name: func_code dtype: string - name: assignment_id dtype: string - name: func_name dtype: string - name: description dtype: string - name: test dtype: string - name: correct dtype: bool splits: - name: train num_bytes: 5098928 num_examples: 4394 download_size: 5705043 dataset_size: 5098928 - config_name: dublin_repair features: - name: submission_id dtype: int32 - name: func_code dtype: string - name: assignment_id dtype: string - name: func_name dtype: string - name: description dtype: string - name: test dtype: string - name: annotation dtype: string - name: user dtype: string - name: academic_year dtype: int32 splits: - name: train num_bytes: 229683 num_examples: 307 - name: test num_bytes: 1451820 num_examples: 1698 download_size: 1929518 
dataset_size: 1681503 - config_name: singapore_repair features: - name: submission_id dtype: int32 - name: func_code dtype: string - name: assignment_id dtype: string - name: func_name dtype: string - name: description dtype: string - name: test dtype: string - name: annotation dtype: string splits: - name: train num_bytes: 18979 num_examples: 18 download_size: 21737 dataset_size: 18979 - config_name: newcaledonia_metadata features: - name: assignment_id dtype: string - name: func_name dtype: string - name: reference_solution dtype: string - name: description dtype: string - name: test dtype: string splits: - name: train num_bytes: 9053 num_examples: 9 download_size: 9760 dataset_size: 9053 - config_name: newcaledonia_data features: - name: submission_id dtype: int32 - name: func_code dtype: string - name: assignment_id dtype: string - name: func_name dtype: string - name: description dtype: string - name: test dtype: string - name: correct dtype: bool splits: - name: train num_bytes: 932024 num_examples: 1201 download_size: 1198518 dataset_size: 932024 --- # Dataset Card for intro_prog ## Dataset Description ### Dataset Summary IntroProg is a collection of students' submissions to assignments in various introductory programming courses offered at different universities. Currently, the dataset contains submissions collected from Dublin City University and the National University of Singapore. #### Dublin The Dublin programming dataset is composed of students' submissions to introductory programming assignments at Dublin City University. Students submitted these programs for multiple programming courses over the duration of three academic years. #### Singapore The Singapore dataset contains 2442 correct and 1783 buggy program attempts by 361 undergraduate students taking an introductory Python programming course at NUS (National University of Singapore).
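Each submission in the `data` configurations pairs a `func_code` string with a human-eval-style `test` string (see the Data Fields section). A hedged sketch of grading one submission against its tests; the sample strings below are hypothetical, and `exec` on untrusted student code should only ever run inside a sandbox:

```python
def run_submission(func_code: str, test: str) -> bool:
    """Execute a submitted function followed by its human-eval-style test
    string; True when every assertion passes, False otherwise.
    WARNING: exec() on untrusted code is unsafe outside a sandbox."""
    namespace = {}
    try:
        exec(func_code, namespace)
        exec(test, namespace)
        return True
    except Exception:
        return False

# Hypothetical submission shaped like the card's fields.
code = "def add(a, b):\n    return a + b\n"
tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0\n"
print(run_submission(code, tests))  # → True
```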
### Supported Tasks and Leaderboards #### "Metadata": Program synthesis Similarly to the [Most Basic Python Programs](https://huggingface.co/datasets/mbpp) (mbpp), the data split can be used to evaluate code generation models. #### "Data" The data configuration contains all the submissions as well as an indicator of whether these passed the required test. #### "repair": Program refinement/repair The "repair" configuration of each dataset is a subset of the "data" configuration augmented with educators' annotations on the corrections to the buggy programs. This configuration can be used for the task of program refinement. In [Computing Education Research](https://faculty.washington.edu/ajko/cer/) (CER), methods for automatically repairing student programs are used to provide students with feedback and help them debug their code. #### "bug": Bug classification [Coming soon] ### Languages The assignments were written in Python. ## Dataset Structure One configuration is defined by one source dataset *dublin* or *singapore* and one subconfiguration ("metadata", "data", or "repair"): * "dublin_metadata" * "dublin_data" * "dublin_repair" * "singapore_metadata" * "singapore_data" * "singapore_repair" ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] Some of the fields are configuration-specific: * submission_id: a unique number identifying the submission * user: a unique string identifying the (anonymized) student who submitted the solution * date: the timestamp at which the grading server received the submission * func_code: the cleaned code submitted * func_name: the name of the function that had to be implemented * assignment_id: the unique (string) identifier of the assignment that had to be completed * academic_year: the starting year of the academic year (e.g.
2015 for the academic year 2015-2016) * module: the course/module * test: a human eval-style string which can be used to execute the submitted solution on the provided test cases * description: a description of what the function is supposed to achieve * correct: whether the solution passed all tests or not ### Data Splits #### Dublin The Dublin dataset is split into a training and test set. The training set contains the submissions to the assignments written during the academic years 2015-2016 and 2016-2017, while the test set contains programs written during the academic year 2017-2018. #### Singapore The Singapore dataset only contains a training split, which can be used as a test split for evaluating how your feedback methods perform on an unseen dataset (if, for instance, you train your methods on the Dublin dataset). ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information #### Dublin #### Singapore The data was released under a [GNU Lesser General Public License v3.0](https://github.com/githubhuyang/refactory/blob/master/LICENSE) license ### Citation Information ``` @inproceedings{azcona2019user2code2vec, title={user2code2vec: Embeddings for Profiling Students Based on Distributional Representations of Source Code}, author={Azcona, David and Arora, Piyush and Hsiao, I-Han and Smeaton, Alan}, booktitle={Proceedings of the 9th International Learning Analytics & Knowledge Conference (LAK’19)}, year={2019}, organization={ACM} } @inproceedings{DBLP:conf/edm/CleuziouF21, author = {Guillaume Cleuziou and Fr{\'{e}}d{\'{e}}ric Flouvat}, editor = {Sharon I{-}Han Hsiao and Shaghayegh (Sherry) Sahebi and Fran{\c{c}}ois Bouchet and Jill{-}J{\^{e}}nn Vie}, title = {Learning student program embeddings using abstract execution traces}, booktitle = {Proceedings of the 14th International Conference on Educational Data Mining, {EDM} 2021, virtual, June 29 - July 2, 2021}, publisher = {International Educational Data Mining Society}, year = {2021}, timestamp = {Wed, 09 Mar 2022 16:47:22 +0100}, biburl = {https://dblp.org/rec/conf/edm/CleuziouF21.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions [More Information Needed]
metaeval/xnli
2023-05-23T12:38:22.000Z
[ "region:us" ]
metaeval
XNLI is a subset of a few thousand examples from MNLI which has been translated into 14 different languages (some low-ish resource). As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B); it is a classification task (given two sentences, predict one of three labels).
@InProceedings{conneau2018xnli, author = {Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R. and Schwenk, Holger and Stoyanov, Veselin}, title = {XNLI: Evaluating Cross-lingual Sentence Representations}, booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}, year = {2018}, publisher = {Association for Computational Linguistics}, location = {Brussels, Belgium}, }
null
0
99
The human-annotated part of XNLI. ``` @InProceedings{conneau2018xnli, author = {Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R. and Schwenk, Holger and Stoyanov, Veselin}, title = {XNLI: Evaluating Cross-lingual Sentence Representations}, booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}, year = {2018}, publisher = {Association for Computational Linguistics}, location = {Brussels, Belgium}, } ```
junelee/remon_without_nsfw
2023-06-04T13:57:20.000Z
[ "region:us" ]
junelee
null
null
null
7
99
Entry not found
pankajmathur/WizardLM_Orca
2023-06-26T14:39:38.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
pankajmathur
null
null
null
63
99
--- license: cc-by-nc-sa-4.0 task_categories: - text-generation language: - en size_categories: - 10K<n<100K --- An explain-tuned WizardLM dataset of ~55K examples created using approaches from the Orca research paper. We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets. This helps student models like orca_mini_13b learn the thought process from the teacher model, ChatGPT (gpt-3.5-turbo-0301 version). Please see how the system prompt is added before each instruction.
chiragtubakad/chart-to-table-mix
2023-09-05T05:48:07.000Z
[ "region:us" ]
chiragtubakad
null
null
null
0
99
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 102169807.41570717 num_examples: 2245 - name: test num_bytes: 25042009.85429284 num_examples: 562 download_size: 108880031 dataset_size: 127211817.27000001 --- # Dataset Card for "chart-to-table-mix" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gopalkalpande/bbc-news-summary
2022-06-22T13:08:15.000Z
[ "license:cc0-1.0", "region:us" ]
gopalkalpande
null
null
null
3
98
--- license: cc0-1.0 --- # About Dataset ### Context Text summarization is a way to condense a large amount of information into a concise form by selecting important information and discarding what is unimportant or redundant. With the amount of textual information present on the World Wide Web, the area of text summarization is becoming very important. Extractive summarization uses the exact sentences present in the document as summaries. Extractive summarization is simpler and is the general practice among automatic text summarization researchers at present. The extractive summarization process involves scoring sentences by some method and then using the sentences that achieve the highest scores as the summary. Because the exact sentences present in the document are used, the semantic factor can be ignored, which results in a less computation-intensive summarization procedure. This kind of summary is generally completely unsupervised and language-independent. Although this kind of summary does its job in conveying the essential information, it may not necessarily be smooth or fluent. Sometimes there can be almost no connection between adjacent sentences in the summary, resulting in text lacking readability. ### Content This dataset for extractive text summarization has four hundred and seventeen political news articles of the BBC from 2004 to 2005 in the News Articles folder. For each article, five summaries are provided in the Summaries folder. The first clause of the text of each article is the respective title. ### Acknowledgements This dataset was created using a dataset for document categorization that consists of 2225 documents from the BBC news website, corresponding to stories in five topical areas from 2004-2005, used in the paper of D. Greene and P. Cunningham, "Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering", Proc.
ICML 2006. All rights, including copyright, in the content of the original articles are owned by the BBC. More at http://mlg.ucd.ie/datasets/bbc.html **Kaggle Link:** https://www.kaggle.com/datasets/pariza/bbc-news-summary
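The sentence-scoring process described in the Context section (score every sentence, keep the highest-scoring ones) can be illustrated with a toy word-frequency scorer. This is a generic baseline sketch, not the method used to produce the reference summaries:

```python
from collections import Counter

def extractive_summary(sentences, k=2):
    """Toy extractive summarizer: score each sentence by its average
    word frequency over the document, keep the top-k in original order."""
    def norm(word):
        return word.lower().strip(".,")
    freq = Counter(norm(w) for s in sentences for w in s.split())
    scores = [sum(freq[norm(w)] for w in s.split()) / len(s.split())
              for s in sentences]
    top = sorted(range(len(sentences)),
                 key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

doc = ["the cat sat on the mat", "a dog", "the cat"]
print(extractive_summary(doc, k=2))  # → ['the cat sat on the mat', 'the cat']
```

Because selected sentences are returned in their original order, the output reads as a (possibly disjointed) excerpt of the document, which matches the readability caveat noted above.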
bigbio/mqp
2022-12-22T15:45:40.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
Medical Question Pairs dataset by McCreery et al. (2020) contains pairs of medical questions and paraphrased versions of the questions prepared by medical professionals. Paraphrased versions were labelled as similar (syntactically dissimilar but contextually similar) or dissimilar (syntactically may look similar but contextually dissimilar). Labels: 1: similar, 0: dissimilar
@article{DBLP:journals/biodb/LiSJSWLDMWL16, author = {Krallinger, M., Rabal, O., Lourenço, A.}, title = {Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs}, journal = {KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining}, volume = {3458–3465}, year = {2020}, url = {https://github.com/curai/medical-question-pair-dataset}, doi = {}, biburl = {}, bibsource = {} }
null
0
98
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: MQP homepage: https://github.com/curai/medical-question-pair-dataset bigbio_pubmed: False bigbio_public: True bigbio_tasks: - SEMANTIC_SIMILARITY --- # Dataset Card for MQP ## Dataset Description - **Homepage:** https://github.com/curai/medical-question-pair-dataset - **Pubmed:** False - **Public:** True - **Tasks:** STS Medical Question Pairs dataset by McCreery et al. (2020) contains pairs of medical questions and paraphrased versions of the questions prepared by medical professionals. Paraphrased versions were labelled as similar (syntactically dissimilar but contextually similar) or dissimilar (syntactically may look similar but contextually dissimilar). Labels: 1: similar, 0: dissimilar ## Citation Information ``` @article{DBLP:journals/biodb/LiSJSWLDMWL16, author = {Krallinger, M., Rabal, O., Lourenço, A.}, title = {Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs}, journal = {KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining}, volume = {3458–3465}, year = {2020}, url = {https://github.com/curai/medical-question-pair-dataset}, doi = {}, biburl = {}, bibsource = {} } ```
Francesco/smoke-uvylj
2023-03-30T09:32:38.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
null
0
98
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': smoke-0 '1': smoke annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: smoke-uvylj tags: - rf100 --- # Dataset Card for smoke-uvylj **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/smoke-uvylj - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary smoke-uvylj ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time.
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/smoke-uvylj ### Citation Information ``` @misc{ smoke-uvylj, title = { smoke uvylj Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/smoke-uvylj } }, url = { https://universe.roboflow.com/object-detection/smoke-uvylj }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
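The `bbox` field above follows the COCO convention `[x_min, y_min, width, height]`. A small sketch converting one box to the corner format `[x_min, y_min, x_max, y_max]` that many plotting and augmentation tools expect:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box into
    [x_min, y_min, x_max, y_max] corner coordinates."""
    x_min, y_min, width, height = bbox
    return [x_min, y_min, x_min + width, y_min + height]

# First example box from the data instance above.
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # → [302.0, 109.0, 375.0, 161.0]
```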
tasksource/disrpt
2023-09-27T08:04:41.000Z
[ "language:en", "license:apache-2.0", "region:us" ]
tasksource
null
null
null
1
98
--- license: apache-2.0 language: - en --- https://github.com/disrpt/sharedtask2023 scidtb: ``` @inproceedings{yang-li-2018-scidtb, title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts", author = "Yang, An and Li, Sujian", booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)", month = jul, year = "2018", address = "Melbourne, Australia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P18-2071", doi = "10.18653/v1/P18-2071", pages = "444--449", abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.", } ```
shi3z/alpaca_cleaned_ja_json
2023-08-25T23:18:42.000Z
[ "task_categories:text-generation", "language:ja", "license:cc-by-4.0", "region:us" ]
shi3z
null
null
null
4
98
--- license: cc-by-4.0 task_categories: - text-generation language: - ja configs: - config_name: default data_files: - split: train path: "alpaca_cleaned_ja.json" - split: test path: "alpaca_cleaned_ja.json" --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
YeungNLP/moss-003-sft-data
2023-06-09T10:55:57.000Z
[ "region:us" ]
YeungNLP
null
null
null
35
98
# moss-003-sft-data 本数据集可用于中文多轮对话指令微调,包含110万中英文多轮对话数据。该数据集来自[MOSS项目](https://github.com/OpenLMLab/MOSS#%E6%95%B0%E6%8D%AE) 中的moss-003-sft-data数据集。 在原数据集的基础上,我们去除了冗余信息,仅提取出有效的对话信息,并且调整数据格式,以便在训练中更加灵活地组织数据格式。更多详细信息,可参考MOSS项目介绍。 本数据集为jsonl格式,每行为一个多轮对话数据,格式如下: ```json { "conversation_id":1, "category":"Brainstorming", "conversation":[ { "human":"如何保障工作中遵循正确的安全准则?", "assistant":"为了保障工作中遵循正确的安全准则,以下是一些建议:\n\n1.了解相关安全规定和标准:了解相关的安全规定和标准,并遵守它们。这可以包括公司和政府的安全标准,以及行业标准和最佳实践。\n\n2.培训和教育:确保您和您的同事接受了必要的培训和教育,以了解正确的安全准则和行为。\n\n3.使用正确的工具和设备:确保您使用正确的工具和设备,并且它们得到了正确的维护和保养。\n\n4.个人防护装备:确保您和您的同事穿戴正确的个人防护装备,如安全鞋、透明眼镜或面罩、手套等。\n\n5.创建一个安全文化:鼓励个人对安全的责任感和意识,并创建一个安全文化,使人们始终关注他们自己和他人的安全。\n\n6.持续监测和改进:持续监测和改进安全准则和程序,以确保它们保持最新,并适应变化的情况。\n\n这些措施可以帮助您和您的同事在工作中保障正确的安全准则,确保您的工作场所是一个安全的环境。" }, { "human":"如何应对突发紧急情况?", "assistant":"突发紧急情况可能会在任何时间发生,为了应对这些情况,以下是一些建议:\n\n1.保持冷静:保持冷静和清醒,以便能够更好地应对情况。\n\n2.评估情况:评估紧急情况的性质和范围,以便了解应该采取哪些行动。\n\n3.触发应急计划:如果您的机构或公司有应急计划,立即触发它,并按照计划的步骤行动。\n\n4.联系相关人员:通知必要的人员,例如紧急服务、管理层或其他相关人员,并与他们保持联系。\n\n5.遵循安全准则:在处理紧急情况时,请遵循正确的安全准则,确保自己和其他人的安全。\n\n6.提供及时信息:尽快向相关人员提供准确的信息,以便他们能够做出正确的决策。\n\n7.合作应对:与其他人合作应对紧急情况并相互支持。\n\n总之,应对突发紧急情况需要保持冷静和清醒,评估情况并采取适当的行动。同时,需要遵循正确的安全准则,并与其他人合作应对紧急情况。" }, { "human":"在网络安全方面,如何避免遭受黑客攻击?", "assistant":"以下是一些避免遭受黑客攻击的建议:\n\n1.更新安全软件:确保您的操作系统、应用程序和安全软件是最新的,并随时更新它们。这可以帮助您及时发现和修复安全漏洞。\n\n2.使用强密码:使用强密码,包括字母、数字和特殊字符。不要使用容易猜测的密码,如生日、姓名或常用单词。\n\n3.使用多因素身份验证:使用多因素身份验证来增加账户的安全性,例如发送短信验证码或使用指纹识别。\n\n4.小心附件和链接:不要打开未知的附件或链接,因为它们可能包含恶意软件。\n\n5.备份重要文件:定期备份重要文件,并将其存储在安全的位置。\n\n6.保护个人信息:不要在互联网上公开个人信息,例如出生日期、地址或社交安全号码。\n\n7.使用加密连接:在访问敏感信息时,使用加密连接,如HTTPS。\n\n8.教育员工:向员工提供网络安全培训,并确保他们了解正确的安全措施和行为。\n\n总之,避免遭受黑客攻击需要采取多种措施,包括更新安全软件、使用强密码、使用多因素身份验证、小心附件和链接、备份重要文件、保护个人信息、使用加密连接以及教育员工。这些措施可以帮助您保护个人信息和数据免受黑客攻击。" }, { "human":"如何确保网络连接的安全性?", 
"assistant":"以下是一些确保网络连接安全性的建议:\n\n1.使用加密连接:使用加密连接,如HTTPS,以确保数据传输过程中不被窃听或篡改。\n\n2.使用防病毒软件:安装和更新防病毒软件,可以帮助检测和清除恶意软件。\n\n3.使用防火墙:使用防火墙保护网络和设备,可以帮助阻止未经授权的访问和攻击。\n\n4.禁止未经授权的访问:禁止未经授权的访问,包括限制网络访问和设备访问权限。\n\n5.定期更新操作系统和应用程序:定期更新操作系统和应用程序,以修复安全漏洞和缺陷。\n\n6.使用强密码:使用强密码,包括字母、数字和特殊字符,并定期更改密码。\n\n7.控制访问:控制访问网络和设备的人员,仅授权人员可访问重要资源。\n\n8.教育员工:向员工提供网络安全培训,并确保他们了解正确的安全措施和行为。\n\n总之,确保网络连接的安全性需要采取多种措施,包括使用加密连接、使用防病毒软件、使用防火墙、禁止未经授权的访问、定期更新操作系统和应用程序、使用强密码、控制访问和教育员工。这些措施可以帮助您保护网络和设备免受未经授权的访问和攻击。" }, { "human":"对于金融机构来说,如何保护客户的隐私和数据安全?", "assistant":"金融机构需要对客户的隐私和数据安全高度重视,以下是一些保护客户隐私和数据安全的建议:\n\n1.遵守合规要求:确保遵守所有适用的合规要求和监管标准,包括数据隐私法规和安全标准。\n\n2.加强身份验证:采用多因素身份验证,例如使用短信验证码、指纹识别等,以确保客户身份的真实性。\n\n3.加密敏感数据:使用加密技术对敏感数据进行加密,以保护客户信息不被未经授权的人员获得。\n\n4.建立安全控制:建立安全控制,包括访问控制、安全审计和安全事件响应,以保护客户信息的安全性。\n\n5.教育员工:向员工提供网络安全培训,并确保他们了解正确的安全措施和行为。\n\n6.定期测试:定期进行安全测试和漏洞扫描,以发现和修复潜在的安全漏洞。\n\n7.建立应急计划:建立应急计划,并对其进行定期测试,以确保在紧急情况下能够及时响应和恢复。\n\n总之,保护客户隐私和数据安全需要采取多种措施,包括遵守合规要求、加强身份验证、加密敏感数据、建立安全控制、教育员工、定期测试和建立应急计划。这些措施可以帮助金融机构保护客户信息的安全性和隐私。" } ] } ```
santoshtyss/billsum
2023-08-06T11:45:22.000Z
[ "region:us" ]
santoshtyss
null
null
null
0
98
--- dataset_info: features: - name: text dtype: string - name: summary dtype: string - name: title dtype: string splits: - name: train num_bytes: 186689203 num_examples: 16107 - name: test num_bytes: 37866257 num_examples: 3269 - name: ca_test num_bytes: 14945291 num_examples: 1237 - name: validation num_bytes: 32906887 num_examples: 2842 download_size: 113748846 dataset_size: 272407638 --- # Dataset Card for "billsum" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arazd/llama_features_alpaca
2023-08-19T21:44:10.000Z
[ "license:openrail", "region:us" ]
arazd
null
null
null
0
98
--- license: openrail --- Llama-2 representations extracted from the Alpaca instruction tuning dataset (the original order of examples is preserved). Representations are extracted from the final layer and averaged across all tokens. Dataset structure: key = sample id, value = feature vector in string format, with ";" separator.
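Given the structure described above, a value can be turned back into a numeric vector with a one-line parser (a minimal sketch; the exact key/value encoding on disk may differ):

```python
def parse_features(raw: str) -> list[float]:
    """Parse a ';'-separated feature string into a float vector."""
    return [float(part) for part in raw.split(";") if part]

print(parse_features("0.12;-3.4;5.0"))  # → [0.12, -3.4, 5.0]
```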
ubuntu_dialogs_corpus
2023-04-05T13:42:49.000Z
[ "task_categories:conversational", "task_ids:dialogue-generation", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:unknown", "arxiv:1506.08909", "region:us" ]
null
Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter.
@article{DBLP:journals/corr/LowePSP15, author = {Ryan Lowe and Nissan Pow and Iulian Serban and Joelle Pineau}, title = {The Ubuntu Dialogue Corpus: {A} Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems}, journal = {CoRR}, volume = {abs/1506.08909}, year = {2015}, url = {http://arxiv.org/abs/1506.08909}, archivePrefix = {arXiv}, eprint = {1506.08909}, timestamp = {Mon, 13 Aug 2018 16:48:23 +0200}, biburl = {https://dblp.org/rec/journals/corr/LowePSP15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
13
97
--- annotations_creators: - found language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: UDC (Ubuntu Dialogue Corpus) size_categories: - 1M<n<10M source_datasets: - original task_categories: - conversational task_ids: - dialogue-generation paperswithcode_id: ubuntu-dialogue-corpus dataset_info: - config_name: train features: - name: Context dtype: string - name: Utterance dtype: string - name: Label dtype: int32 splits: - name: train num_bytes: 525126729 num_examples: 1000000 download_size: 0 dataset_size: 525126729 - config_name: dev_test features: - name: Context dtype: string - name: Ground Truth Utterance dtype: string - name: Distractor_0 dtype: string - name: Distractor_1 dtype: string - name: Distractor_2 dtype: string - name: Distractor_3 dtype: string - name: Distractor_4 dtype: string - name: Distractor_5 dtype: string - name: Distractor_6 dtype: string - name: Distractor_7 dtype: string - name: Distractor_8 dtype: string splits: - name: test num_bytes: 27060502 num_examples: 18920 - name: validation num_bytes: 27663181 num_examples: 19560 download_size: 0 dataset_size: 54723683 --- # Dataset Card for "ubuntu_dialogs_corpus" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - 
[Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/rkadlec/ubuntu-ranking-dataset-creator - **Paper:** [The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems](https://arxiv.org/abs/1506.08909) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 65.49 MB - **Total amount of disk used:** 65.49 MB ### Dataset Summary Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### train - **Size of downloaded dataset files:** 0.00 MB - **Size of the generated dataset:** 65.49 MB - **Total amount of disk used:** 65.49 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "Context": "\"i think we could import the old comment via rsync , but from there we need to go via email . 
i think it be easier than cach the...", "Label": 1, "Utterance": "basic each xfree86 upload will not forc user to upgrad 100mb of font for noth __eou__ no someth i do in my spare time . __eou__" } ``` ### Data Fields The data fields are the same among all splits. #### train - `Context`: a `string` feature. - `Utterance`: a `string` feature. - `Label`: a `int32` feature. ### Data Splits |name |train | |-----|-----:| |train|127422| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{DBLP:journals/corr/LowePSP15, author = {Ryan Lowe and Nissan Pow and Iulian Serban and Joelle Pineau}, title = {The Ubuntu Dialogue Corpus: {A} Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems}, journal = {CoRR}, volume = {abs/1506.08909}, year = {2015}, url = {http://arxiv.org/abs/1506.08909}, archivePrefix = {arXiv}, eprint = {1506.08909}, timestamp = {Mon, 13 Aug 2018 16:48:23 +0200}, biburl = {https://dblp.org/rec/journals/corr/LowePSP15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
Chris1/GTA5
2022-04-06T14:44:22.000Z
[ "region:us" ]
Chris1
null
null
null
0
97
Entry not found
nlphuji/fairface_val_padding_025
2023-01-18T22:57:00.000Z
[ "region:us" ]
nlphuji
null
null
null
1
97
# FairFace (val set) Original paper: [Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf) Homepage: https://github.com/joojs/fairface Bibtex: ``` @inproceedings{karkkainenfairface, title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation}, author={Karkkainen, Kimmo and Joo, Jungseock}, booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision}, year={2021}, pages={1548--1558} } ```
SirNeural/flan_v2
2023-02-24T19:05:00.000Z
[ "license:apache-2.0", "flan", "flan 2022", "flan v2", "arxiv:2301.13688", "region:us" ]
SirNeural
null
null
null
146
97
--- license: apache-2.0 tags: - flan - flan 2022 - flan v2 pretty_name: Flan v2 --- # Dataset Card for Flan V2 ## Dataset Description - **Homepage:** https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html - **Repository:** https://github.com/google-research/FLAN/tree/main/flan/v2 - **Paper:** https://arxiv.org/abs/2301.13688 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a processed version of the Flan V2 dataset. I'm not affiliated with the creators; I'm just releasing the files in an easier-to-access format after processing. The authors of the Flan Collection recommend experimenting with different mixing ratios of tasks to get optimal results downstream. ## Setup Instructions Here are the steps I followed to get everything working: ### Build AESLC and WinoGrande datasets manually The repos for these datasets were updated recently and checksums need to be recomputed in TFDS: - `tfds build --dataset aeslc --register_checksums` - `tfds build --dataset winogrande --register_checksums` ### Fix dataset versions I've opened a PR [here](https://github.com/google-research/FLAN/pull/20) to get these updated in the upstream FLAN repo; until that gets merged in, run these locally to fix any dataset version errors. 
- `sed -i 's/glue\/cola:1.0.0/glue\/cola:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/gem\/common_gen:1.0.0/gem\/common_gen:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/gem\/dart:1.0.0/gem\/dart:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/gem\/e2e_nlg:1.0.0/gem\/e2e_nlg:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/gem\/web_nlg_en:1.0.0/gem\/web_nlg_en:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/gem\/common_gen:1.0.0/gem\/common_gen:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/paws_wiki:1.0.0/paws_wiki:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/mrpc:1.0.0/glue\/mrpc:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/qqp:1.0.0/glue\/qqp:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/sst2:1.0.0/glue\/sst2:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/mnli:1.0.0/glue\/mnli:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/qnli:1.0.0/glue\/qnli:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/wnli:1.0.0/glue\/wnli:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/glue\/stsb:1.0.0/glue\/stsb:2.0.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/hellaswag:0.0.1/hellaswag:1.1.0/g' flan/v2/task_configs_v1.py` - `sed -i 's/xsum:1.0.0/huggingface:xsum/g' flan/v2/task_configs_v1.py` ### Download and install manual steps Save these to `~/tensorflow_datasets/downloads/manual`. 
- [CzEng (deduped ignoring sections)](https://ufal.mff.cuni.cz/czeng/czeng16pre) - [Newsroom (extract)](https://lil.nlp.cornell.edu/newsroom/download/index.html) - [Yandex 1M Corpus](https://translate.yandex.ru/corpus?lang=en) - [Story Cloze (extract and rename to cloze_test_test__spring2016.csv and cloze_test_val__spring2016.csv)](https://cs.rochester.edu/nlp/) ### Finally, export tasks ```python import tensorflow as tf tf.config.set_visible_devices([], 'GPU') from flan.v2 import constants from flan.v2 import constants_t0 from flan.v2 import mixtures_utils from flan.v2 import mixtures from flan.v2 import tasks import json import t5 import seqio import itertools from multiprocessing import Pool seqio.add_global_cache_dirs(constants.CACHE_DIRS) seqio.set_global_cache_dirs(constants.CACHE_DIRS) vocab = t5.data.get_default_vocabulary() def prepare_task(split, shots, opt, task): dataset = seqio.get_mixture_or_task(f'palmflan_{task}_{shots}_{opt}').get_dataset( split=split, num_epochs=1, sequence_length={'inputs':4096,'targets':4096} ) print("starting", task, shots, opt, split) with open(f'./data/{task}_{shots}_{opt}_{split}.jsonl', 'w') as f: for ex in dataset.as_numpy_iterator(): f.write( json.dumps({ "inputs": vocab.decode(ex["inputs"]), "targets": vocab.decode(ex["targets"]), "task": task, })) f.write("\n") print("done with", task, shots, opt, split) # prepare_task("train", "zs", "noopt", "dialog") # use this to export a single task tasks = itertools.product(["train"], ["zs", "fs"], ["opt", "noopt"], ["dialog", "t0", "niv2", "flan", "cot"]) with Pool(5) as p: p.starmap(prepare_task, [(task[0], task[1], task[2], task[3]) for task in tasks]) ``` ## Dataset Structure ### Data Instances Flan 2021 (flan), P3 (t0), Super-Natural Instructions (niv2), Chain-of-thought (cot), and Dialog (dialog) ### Data Fields Instruction data comes in a few formats: - Few Shot (fs) - Zero Shot (zs) - Options Provided in context (i.e. 
multiple choice pick one) (opt) - No Options Provided (noopt) Each combination of the above tasks + formats are saved as a JSONL with following schema `{"input": ..., "target": ..., "task": ...}` ### Data Splits Everything is saved as a train split Note: FLAN-fs-opt-train is too big to be uploaded even when gzipped, so its split into 45gb chunks. To combine and recover, run `cat flan_fs_opt_train_*.gz | gunzip -c > flan_fs_opt_train.jsonl`
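A minimal sketch of reading one of the exported JSONL splits. The file name and field names are assumed from the export script and schema above (note the export script writes `inputs`/`targets`); the sample line below is illustrative, not taken from the dataset:

```python
import json

# Hypothetical JSONL reader for the exported splits, e.g.
# flan_zs_noopt_train.jsonl (file name assumed from the export script).
def read_jsonl(lines):
    """Yield one parsed record per non-empty JSONL line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

sample = ['{"inputs": "Translate to French: hello", "targets": "bonjour", "task": "flan"}']
records = list(read_jsonl(sample))
```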
MU-NLPC/Calc-ape210k
2023-10-07T20:04:16.000Z
[ "license:mit", "arxiv:2305.15017", "arxiv:2009.11506", "region:us" ]
MU-NLPC
null
null
null
8
97
--- license: mit configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: id dtype: int64 - name: question dtype: string - name: question_chinese dtype: string - name: chain dtype: string - name: result dtype: string - name: result_float dtype: float64 - name: equation dtype: string splits: - name: train num_bytes: 109265276 num_examples: 195179 - name: validation num_bytes: 2730588 num_examples: 4867 - name: test num_bytes: 2725460 num_examples: 4867 download_size: 51446429 dataset_size: 114721324 --- # Dataset Card for "Calc-ape210k" ## Summary This dataset is an instance of the Ape210K dataset, converted to a simple HTML-like language that can be easily parsed (e.g. by BeautifulSoup). The data contains 3 types of tags: - gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case) - output: An output of the external tool - result: The final answer of the mathematical problem (a number) ## Supported Tasks The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses. This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator. ## Construction Process First, we translated the questions into English using Google Translate. Next, we parsed the equations and the results. We linearized the equations into a sequence of elementary steps and evaluated them using a sympy-based calculator. We numerically compared the output with the result in the data and removed all examples where they did not match (less than 3% loss in each split). Finally, we saved the chain of steps in the HTML-like language in the `chain` column. We keep the original columns in the dataset for convenience. 
You can read more about this process in our [technical report](https://arxiv.org/abs/2305.15017). ## Content and Data splits Content and splits correspond to the original Ape210K dataset. See the [ape210k dataset github](https://github.com/Chenny0808/ape210k) and [the paper](https://arxiv.org/abs/2009.11506) for more info. Columns: - `id` - id of the example - `question` - the description of the math problem. Automatically translated from the `question_chinese` column into English using Google Translate - `question_chinese` - description of the math problem in Chinese - `chain` - linearized `equation`, a sequence of arithmetic steps in the HTML-like language that can be evaluated using our sympy-based calculator - `result` - result as a string (can be an integer, float, or fraction) - `result_float` - result as a float - `equation` - a nested expression that evaluates to the correct answer ## Licence MIT, consistent with the original dataset. ## Cite If you use this version of the dataset in research, please cite the [original Ape210k paper](https://arxiv.org/abs/2009.11506) and also [our technical report](https://arxiv.org/abs/2305.15017) as follows: ```bibtex @article{kadlcik2023calcx, title={Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems}, author={Marek Kadlčík and Michal Štefánik}, year={2023}, eprint={2305.15017}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
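A minimal sketch of pulling the tagged steps out of a `chain` string using only the standard library (a regex stands in for BeautifulSoup here). The sample chain below is made up to illustrate the gadget/output/result format described in the card, not taken from the dataset:

```python
import re

# Illustrative parser for the card's HTML-like chain language.
# Matches <gadget>...</gadget>, <output>...</output>, <result>...</result>;
# \1 backreferences the opening tag name so pairs must match.
TAG_RE = re.compile(r"<(gadget|output|result)>(.*?)</\1>")

def parse_chain(chain: str) -> list[tuple[str, str]]:
    """Return (tag, content) pairs in the order they appear in the chain."""
    return TAG_RE.findall(chain)

chain = "<gadget>1 + 2</gadget><output>3</output><result>3</result>"
steps = parse_chain(chain)
```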
wenhu/TheoremQA
2023-07-15T17:54:40.000Z
[ "task_categories:question-answering", "size_categories:n<1K", "language:en", "license:mit", "question answering", "math", "science", "visual question answering", "arxiv:2305.12524", "region:us" ]
wenhu
null
null
null
10
97
--- license: mit task_categories: - question-answering language: - en tags: - question answering - math - science - visual question answering pretty_name: TheoremQA size_categories: - n<1K --- ## Introduction We propose the first question-answering dataset driven by STEM theorems. We annotated 800 QA pairs covering 350+ theorems spanning Math, EE&CS, Physics, and Finance. The dataset was collected by human experts with very high quality. We provide the dataset as a new benchmark to test the limits of large language models in applying theorems to solve challenging university-level questions. We also provide a pipeline below to prompt LLMs and evaluate their outputs with WolframAlpha. ## How to use TheoremQA ``` from datasets import load_dataset dataset = load_dataset("wenhu/TheoremQA") for d in dataset['test']: print(d) ``` To use the images, download `images.zip` from https://huggingface.co/datasets/wenhu/TheoremQA/blob/main/images.zip. Each image is stored under the `Picture` field. ## Arxiv Paper: https://arxiv.org/abs/2305.12524 ## Code https://github.com/wenhuchen/TheoremQA/tree/main
MoritzLaurer/sentiment_economy_news
2023-06-28T10:28:33.000Z
[ "region:us" ]
MoritzLaurer
null
null
null
1
97
--- dataset_info: features: - name: text dtype: string - name: labels dtype: string - name: articleid dtype: string - name: relevance dtype: string - name: positivity dtype: string - name: split dtype: string - name: positivity_rounded dtype: string - name: idx dtype: int64 splits: - name: train num_bytes: 5122725 num_examples: 3000 - name: test num_bytes: 653059 num_examples: 382 - name: train_sample num_bytes: 1684685 num_examples: 1000 - name: train_sample_numeric num_bytes: 1720504 num_examples: 1000 download_size: 5611673 dataset_size: 9180973 --- # Dataset Card for "sentiment_economy_news" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chuyin0321/perimeter-stocks
2023-09-07T08:35:12.000Z
[ "region:us" ]
chuyin0321
null
null
null
0
97
--- dataset_info: features: - name: symbol dtype: string - name: security dtype: string - name: gics_sector dtype: string - name: gics_sub_industry dtype: string splits: - name: train num_bytes: 111767 num_examples: 1500 download_size: 44340 dataset_size: 111767 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "perimeter-stocks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)