Column schema for the dataset rows below (value types and reported length/value ranges):

| column | dtype | range |
|---|---|---|
| id | string | lengths 2 to 115 |
| lastModified | string | length 24 |
| tags | list | |
| author | string | lengths 2 to 42 |
| description | string | lengths 0 to 68.7k |
| citation | string | lengths 0 to 10.7k |
| cardData | null | |
| likes | int64 | 0 to 3.55k |
| downloads | int64 | 0 to 10.1M |
| card | string | lengths 0 to 1.01M |
sevin987/KoChatGpt
2023-09-27T14:31:08.000Z
[ "license:unknown", "region:us" ]
sevin987
null
null
null
0
56
--- license: unknown ---
aghilrs/qa-pquad-withtext-small
2023-10-09T05:00:21.000Z
[ "region:us" ]
aghilrs
null
null
null
0
56
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: text dtype: string splits: - name: train num_bytes: 8214960.6 num_examples: 2880 - name: test num_bytes: 912773.4 num_examples: 320 download_size: 4215893 dataset_size: 9127734.0 --- # Dataset Card for "qa-pquad-withtext-small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
allenai/cord19
2022-11-03T16:31:53.000Z
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nd-4.0", "license:cc-by-sa-4.0", "license:other", "arxiv:2004.07180", "region:us" ]
allenai
The COVID-19 Open Research Dataset (CORD-19) is a growing resource of scientific papers on COVID-19 and related historical coronavirus research. CORD-19 is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full-text papers. Since its release, CORD-19 has been downloaded over 75K times and has served as the basis of many COVID-19 text mining and discovery systems. The dataset itself does not define a specific task, but a Kaggle challenge defines 17 open research questions to be solved with the dataset: https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks
@article{Wang2020CORD19TC, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} }
null
2
55
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - cc-by-nd-4.0 - cc-by-sa-4.0 - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: cord-19 pretty_name: CORD-19 dataset_info: - config_name: metadata features: - name: cord_uid dtype: string - name: sha dtype: string - name: source_x dtype: string - name: title dtype: string - name: doi dtype: string - name: abstract dtype: string - name: publish_time dtype: string - name: authors dtype: string - name: journal dtype: string - name: url dtype: string splits: - name: train num_bytes: 496247275 num_examples: 368618 download_size: 6142360818 dataset_size: 496247275 - config_name: fulltext features: - name: cord_uid dtype: string - name: sha dtype: string - name: source_x dtype: string - name: title dtype: string - name: doi dtype: string - name: abstract dtype: string - name: publish_time dtype: string - name: authors dtype: string - name: journal dtype: string - name: url dtype: string - name: fulltext dtype: string splits: - name: train num_bytes: 3718245736 num_examples: 368618 download_size: 6142360818 dataset_size: 3718245736 - config_name: embeddings features: - name: cord_uid dtype: string - name: sha dtype: string - name: source_x dtype: string - name: title dtype: string - name: doi dtype: string - name: abstract dtype: string - name: publish_time dtype: string - name: authors dtype: string - name: journal dtype: string - name: url dtype: string - name: doc_embeddings sequence: float64 splits: - name: train num_bytes: 2759561943 num_examples: 368618 download_size: 6142360818 dataset_size: 2759561943 --- # Dataset Card for CORD-19 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) 
- [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.semanticscholar.org/cord19](https://www.semanticscholar.org/cord19) - **Repository:** [https://github.com/allenai/cord19](https://github.com/allenai/cord19) - **Paper:** [CORD-19: The COVID-19 Open Research Dataset](https://www.aclweb.org/anthology/2020.nlpcovid19-acl.1/) - **Leaderboard:** [Kaggle challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge) ### Dataset Summary CORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research. ### Supported Tasks and Leaderboards See the tasks defined in the related [Kaggle challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks). ### Languages The dataset is in English (en). 
## Dataset Structure ### Data Instances The following code block presents an overview of a sample in JSON-like syntax (abbreviated since some fields are very long): ``` { "abstract": "OBJECTIVE: This retrospective chart review describes the epidemiology and clinical features of 40 patients with culture-proven Mycoplasma pneumoniae infections at King Abdulaziz University Hospital, Jeddah, Saudi Arabia. METHODS: Patients with positive M. pneumoniae cultures from respiratory specimens from January 1997 through December 1998 were identified through the Microbiology records. Charts of patients were reviewed. RESULTS: 40 patients were identified [...]", "authors": "Madani, Tariq A; Al-Ghamdi, Aisha A", "cord_uid": "ug7v899j", "doc_embeddings": [ -2.939983606338501, -6.312200546264648, -1.0459030866622925, [...] 766 values in total [...] -4.107113361358643, -3.8174145221710205, 1.8976187705993652, 5.811529159545898, -2.9323840141296387 ], "doi": "10.1186/1471-2334-1-6", "journal": "BMC Infect Dis", "publish_time": "2001-07-04", "sha": "d1aafb70c066a2068b02786f8929fd9c900897fb", "source_x": "PMC", "title": "Clinical features of culture-proven Mycoplasma pneumoniae infections at King Abdulaziz University Hospital, Jeddah, Saudi Arabia", "url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC35282/" } ``` ### Data Fields Currently only the following fields are integrated: `cord_uid`, `sha`, `source_x`, `title`, `doi`, `abstract`, `publish_time`, `authors`, `journal`. With the `fulltext` configuration, the sections transcribed in `pdf_json_files` are converted into the `fulltext` feature. - `cord_uid`: A `str`-valued field that assigns a unique identifier to each CORD-19 paper. This is not necessarily unique per row, which is explained in the FAQs. - `sha`: A `List[str]`-valued field that is the SHA1 of all PDFs associated with the CORD-19 paper. Most papers will have either zero or one value here (since we either have a PDF or we don't), but some papers will have multiple. 
For example, the main paper might have supplemental information saved in a separate PDF. Or we might have two separate PDF copies of the same paper. If multiple PDFs exist, their SHA1 will be semicolon-separated (e.g. `'4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a; d4f0247db5e916c20eae3f6d772e8572eb828236'`) - `source_x`: A `List[str]`-valued field that is the names of sources from which we received this paper. Also semicolon-separated. For example, `'ArXiv; Elsevier; PMC; WHO'`. There should always be at least one source listed. - `title`: A `str`-valued field for the paper title - `doi`: A `str`-valued field for the paper DOI - `pmcid`: A `str`-valued field for the paper's ID on PubMed Central. Should begin with `PMC` followed by an integer. - `pubmed_id`: An `int`-valued field for the paper's ID on PubMed. - `license`: A `str`-valued field with the most permissive license we've found associated with this paper. Possible values include: `'cc0', 'hybrid-oa', 'els-covid', 'no-cc', 'cc-by-nc-sa', 'cc-by', 'gold-oa', 'biorxiv', 'green-oa', 'bronze-oa', 'cc-by-nc', 'medrxiv', 'cc-by-nd', 'arxiv', 'unk', 'cc-by-sa', 'cc-by-nc-nd'` - `abstract`: A `str`-valued field for the paper's abstract - `publish_time`: A `str`-valued field for the published date of the paper. This is in `yyyy-mm-dd` format. Not always accurate as some publishers will denote unknown dates with future dates like `yyyy-12-31` - `authors`: A `List[str]`-valued field for the authors of the paper. Each author name is in `Last, First Middle` format and semicolon-separated. - `journal`: A `str`-valued field for the paper journal. Strings are not normalized (e.g. `BMJ` and `British Medical Journal` can both exist). Empty string if unknown. - `mag_id`: Deprecated, but originally an `int`-valued field for the paper as represented in the Microsoft Academic Graph. - `who_covidence_id`: A `str`-valued field for the ID assigned by the WHO for this paper. Format looks like `#72306`. 
- `arxiv_id`: A `str`-valued field for the arXiv ID of this paper. - `pdf_json_files`: A `List[str]`-valued field containing paths from the root of the current data dump version to the parses of the paper PDFs into JSON format. Multiple paths are semicolon-separated. Example: `document_parses/pdf_json/4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a.json; document_parses/pdf_json/d4f0247db5e916c20eae3f6d772e8572eb828236.json` - `pmc_json_files`: A `List[str]`-valued field. Same as above, but corresponding to the full text XML files downloaded from PMC, parsed into the same JSON format as above. - `url`: A `List[str]`-valued field containing all URLs associated with this paper. Semicolon-separated. - `s2_id`: A `str`-valued field containing the Semantic Scholar ID for this paper. Can be used with the Semantic Scholar API (e.g. `s2_id=9445722` corresponds to `http://api.semanticscholar.org/corpusid:9445722`). Extra fields, depending on the configuration selected when loading: - `fulltext`: A `str`-valued field containing the concatenation of all text sections from the JSON (itself extracted from the PDF) - `doc_embeddings`: A `sequence` of float-valued elements containing document embeddings as a vector of floats (parsed from a comma-separated string of values). Details on the system used to extract the embeddings are available in: [SPECTER: Document-level Representation Learning using Citation-informed Transformers](https://arxiv.org/abs/2004.07180). TL;DR: it relies on a BERT model pre-trained on document-level relatedness using the citation graph. The system can be queried through REST (see [public API documentation](https://github.com/allenai/paper-embedding-public-apis)). ### Data Splits No annotations are provided in this dataset, so all instances are in the training split. 
The sizes of each configuration are: | | train | |------------|-------:| | metadata | 368618 | | fulltext | 368618 | | embeddings | 368618 | ## Dataset Creation ### Curation Rationale See [official readme](https://github.com/allenai/cord19/blob/master/README.md) ### Source Data See [official readme](https://github.com/allenai/cord19/blob/master/README.md) #### Initial Data Collection and Normalization See [official readme](https://github.com/allenai/cord19/blob/master/README.md) #### Who are the source language producers? See [official readme](https://github.com/allenai/cord19/blob/master/README.md) ### Annotations No annotations #### Annotation process N/A #### Who are the annotators? N/A ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{Wang2020CORD19TC, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ``` ### Contributions Thanks to [@ggdupont](https://github.com/ggdupont) for adding this dataset.
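The multi-value CORD-19 fields described above (`sha`, `source_x`, `authors`, `url`, `pdf_json_files`) are stored as semicolon-separated strings, and `s2_id` plugs into a Semantic Scholar API URL. A minimal sketch of handling both; the helper names are hypothetical, not part of the dataset or its loader:

```python
# Illustrative helpers for the field formats described in the card above.
def split_multivalue(field: str) -> list[str]:
    """Split a semicolon-separated CORD-19 field into a list of values."""
    return [v.strip() for v in field.split(";") if v.strip()]

def s2_api_url(s2_id: str) -> str:
    """Build the Semantic Scholar API URL corresponding to an s2_id."""
    return f"http://api.semanticscholar.org/corpusid:{s2_id}"

# Example from the card: two PDF SHA1s attached to one paper.
shas = split_multivalue(
    "4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a; "
    "d4f0247db5e916c20eae3f6d772e8572eb828236"
)
print(len(shas))          # 2
print(s2_api_url("9445722"))
```

The same `split_multivalue` helper applies unchanged to `source_x` values such as `'ArXiv; Elsevier; PMC; WHO'` and to semicolon-separated author lists.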
SetFit/toxic_conversations
2022-02-11T13:45:54.000Z
[ "region:us" ]
SetFit
null
null
null
4
55
# Toxic Conversation This is a version of the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview). It contains comments from the Civil Comments platform together with annotations indicating whether each comment is toxic. Each example was annotated by 10 annotators and, as recommended on the task page, a comment is marked as toxic when target >= 0.5. The dataset is imbalanced, with only about 8% of the comments marked as toxic.
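The thresholding rule above can be sketched as a one-line helper (hypothetical, shown only to make the rule concrete):

```python
def is_toxic(target: float, threshold: float = 0.5) -> bool:
    """Label a comment toxic when its aggregated annotator score
    meets the threshold recommended on the Kaggle task page."""
    return target >= threshold

print(is_toxic(0.7))   # True
print(is_toxic(0.3))   # False
```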
CLUTRR/v1
2022-10-25T10:03:19.000Z
[ "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:unknown", "arxiv:1908.06177", "region:us" ]
CLUTRR
CLUTRR (Compositional Language Understanding and Text-based Relational Reasoning), a diagnostic benchmark suite, is first introduced in (https://arxiv.org/abs/1908.06177) to test the systematic generalization and inductive reasoning capabilities of NLU systems.
@article{sinha2019clutrr, Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton}, Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text}, Year = {2019}, journal = {Empirical Methods of Natural Language Processing (EMNLP)}, arxiv = {1908.06177} }
null
2
55
--- language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K --- # Dataset Card for CLUTRR ## Table of Contents ## Dataset Description ### Dataset Summary **CLUTRR** (**C**ompositional **L**anguage **U**nderstanding and **T**ext-based **R**elational **R**easoning), a diagnostic benchmark suite, was first introduced in (https://arxiv.org/abs/1908.06177) to test the systematic generalization and inductive reasoning capabilities of NLU systems. The CLUTRR benchmark allows us to test a model’s ability for **systematic generalization** by testing on stories that contain unseen combinations of logical rules, and to test for various forms of **model robustness** by adding different kinds of superfluous noise facts to the stories. ### Dataset Task CLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story. Join the CLUTRR community at https://www.cs.mcgill.ca/~ksinha4/clutrr/ ## Dataset Structure We show detailed information for all 14 configurations of the dataset. 
### Configurations **id**: a unique series of characters and numbers that identifies each instance <br> **story**: one semi-synthetic story involving hypothetical families<br> **query**: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities<br> **target**: indicator for the correct relation for the query <br> **target_text**: text for the correct relation for the query <br> the indicator uses the following mapping: <br> "aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3, "sister": 4, "father": 5, "mother": 6, "grandmother": 7, "uncle": 8, "daughter-in-law": 9, "grandson": 10, "granddaughter": 11, "father-in-law": 12, "mother-in-law": 13, "nephew": 14, "son": 15, "daughter": 16, "niece": 17, "husband": 18, "wife": 19, "sister-in-law": 20 <br> **clean\_story**: the story without noise facts<br> **proof\_state**: the logical rule of the kinship generation <br> **f\_comb**: the kinships of the query followed by the logical rule<br> **task\_name**: the task of the sub-dataset, in the form "task_[num1].[num2]"<br> The first number [num1] indicates the kind of noise facts added to the story: 1- no noise facts; 2- Irrelevant facts*; 3- Supporting facts*; 4- Disconnected facts*.<br> The second number [num2] directly indicates the length of clauses for the task target.<br> *for example:*<br> *task_1.2 -- task requiring clauses of length 2 without adding noise facts*<br> *task_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story*<br> **story\_edges**: all the edges in the kinship graph<br> **edge\_types**: similar to f\_comb, another form of the query's kinships followed by the logical rule <br> **query\_edge**: the corresponding edge of the target query in the kinship graph<br> **genders**: genders of the names appearing in the story<br> **task\_split**: train, test <br> *Further explanation of Irrelevant facts, Supporting facts and Disconnected facts 
can be found in Section 3.5 (Robust Reasoning) of https://arxiv.org/abs/1908.06177 ### Data Instances An example of 'train' in Task 1.2 looks as follows. ``` { "id": "b2b9752f-d7fa-46a9-83ae-d474184c35b6", "story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.", "query": ('April', 'Ashley'), "target": 7, "target_text": "grandmother", "clean_story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.", "proof_state": [{('April', 'grandmother', 'Ashley'): [('April', 'mother', 'Lillian'), ('Lillian', 'mother', 'Ashley')]}], "f_comb": "mother-mother", "task_name": "task_1.2", "story_edges": [(0, 1), (1, 2)], "edge_types": ['mother', 'mother'], "query_edge": (0, 2), "genders": "April:female,Lillian:female,Ashley:female", "task_split": "train" } ``` ### Data Splits #### Data Split Name (corresponding with the name used in the paper) | task_split | split name in paper | train &validation task |test task | | :---: | :---: | :-: | :-: | | gen_train23_test2to10 | data_089907f8 | 1.2, 1.3 | 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 | | gen_train234_test2to10 | data_db9b8f04 | 1.2, 1.3, 1.4| 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 | | rob_train_clean_23_test_all_23 | data_7c5b0e70 | 1.2,1.3 | 1.2, 1.3, 2.3, 3.3, 4.3 | | rob_train_sup_23_test_all_23 | data_06b8f2a1 | 2.2, 2.3 | 2.2, 2.3, 1.3, 3.3, 4.3 | | rob_train_irr_23_test_all_23 | data_523348e6 | 3.2, 3.3 | 3.2, 3.3, 1.3, 2.3, 4.3 | | rob_train_disc_23_test_all_23 | data_d83ecc3e | 4.2, 4.3 | 4.2, 4.3, 1.3, 2.3, 3.3 | #### Data Split Summary Number of Instances in each split | task_split | train | validation | test | | :-: | :---: | :---: | :---: | | gen_train23_test2to10 | 9074 | 2020 | 1146 | | gen_train234_test2to10 | 12064 | 3019 | 1048 | | rob_train_clean_23_test_all_23 | 8098 | 2026 | 447 | | rob_train_disc_23_test_all_23 | 8080 | 2020 | 445 | | rob_train_irr_23_test_all_23 | 8079 | 2020 | 444 | | rob_train_sup_23_test_all_23 | 
8123 | 2031 | 447 | ## Citation Information ``` @article{sinha2019clutrr, Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton}, Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text}, Year = {2019}, journal = {Empirical Methods of Natural Language Processing (EMNLP)}, arxiv = {1908.06177} } ```
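The `target` to `target_text` mapping listed in the Configurations section above can be captured as a lookup table (an illustrative sketch, not an artifact shipped with the dataset):

```python
# Relation indices exactly as listed in the field description above.
RELATIONS = [
    "aunt", "son-in-law", "grandfather", "brother", "sister", "father",
    "mother", "grandmother", "uncle", "daughter-in-law", "grandson",
    "granddaughter", "father-in-law", "mother-in-law", "nephew", "son",
    "daughter", "niece", "husband", "wife", "sister-in-law",
]

def target_text(target: int) -> str:
    """Decode a numeric `target` label into its relation string."""
    return RELATIONS[target]

# Matches the Task 1.2 example above: target 7 -> "grandmother".
print(target_text(7))  # grandmother
```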
ai4bharat/IndicHeadlineGeneration
2022-10-13T06:08:20.000Z
[ "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:27K<n<341K", "source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.", "language:as", "language:bn", "language:gu", ...
ai4bharat
This is the new headline generation dataset released as part of the IndicNLG Suite. Each input document is paired with an output title. We create this dataset in eleven languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total size of the dataset is 1.43M.
@inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" }
null
0
55
--- annotations_creators: - no-annotation language_creators: - found language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te license: - cc-by-nc-4.0 multilinguality: - multilingual pretty_name: IndicHeadlineGeneration size_categories: - 27K<n<341K source_datasets: - original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages. task_categories: - conditional-text-generation task_ids: - conditional-text-generation-other-headline-generation --- # Dataset Card for "IndicHeadlineGeneration" ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
https://indicnlp.ai4bharat.org/indicnlg-suite - **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437) - **Point of Contact:** ### Dataset Summary IndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each input document is paired with an output as title. We create this dataset in eleven languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total size of the dataset is 1.4M. ### Supported Tasks and Leaderboards **Tasks:** Headline Generation **Leaderboards:** Currently there is no Leaderboard for this dataset. ### Languages - `Assamese (as)` - `Bengali (bn)` - `Gujarati (gu)` - `Kannada (kn)` - `Hindi (hi)` - `Malayalam (ml)` - `Marathi (mr)` - `Oriya (or)` - `Punjabi (pa)` - `Tamil (ta)` - `Telugu (te)` ## Dataset Structure ### Data Instances One random example from the `hi` dataset is given below in JSON format. ``` {'id': '14', 'input': "अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।अरियाना ग्रांडे नई दिल्लीः अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।वहीं इस वीडियो पर कमेंट्स की बाढ़ आ गई है।गाने में मीन गर्ल्स, ब्रिंग इट ऑन, लीगली ब्लॉंड और 13 गोइंग 30 के कुछ फेमस सीन्स को दिखाया गया है।गाने में क्रिस जैनर का कैमियो भी है।बता दें अभी कुछ महीने पहले ही अरियाना के एक्स ब्वॉयफ्रेंड मैक मिलर का 26 साल की उम्र में निधन हो गया था।इस खबर को सुनकर अरियाना टूट सी गई थीं।उन्होंने सोशल मीडिया पर पोस्ट कर कई बार अपनी भावनाएं व्यक्त की।अरियाना ग्रांडे और रैपर मैक मिलर ने करीब 2 साल तक एक दूसरे को डेट किया।मैक के निधन की वजह ड्रग्स की ओवरडोज बताई गई।दोनों 
की मुलाकात साल 2012 में हुई थी।दोनों ने एक कंसर्ट में साथ कई गानों पर परफॉर्म भी किया था।जिसके बाद दोनों एक दूसरे को डेट करने लगे लेकिन नशे की लत के कारण अरियाना ने उनसे ब्रेकअप कर लिया।पर देश-विदेश की ताजा और स्पेशल स्टोरी पढ़ते हुए अपने आप को रखिए अप-टू-डेट।के लिए क्लिक करें सिनेमा सेक्शन", 'target': 'अरियाना ग्रांडे का नया गाना रिलीज, सोशल मीडिया पर वायरल', 'url': 'https://www.indiatv.in/entertainment/hollywood-ariana-grande-shatters-24-hour-views-record-612835' } ``` ### Data Fields - `id (string)`: Unique identifier. - `input (string)`: News article as input. - `target (strings)`: Output as headline of the news article. - `url (string)`: Source web link of the news article. ### Data Splits Here is the number of samples in each split for all the languages. Language | ISO 639-1 Code | Train | Dev | Test | ---------- | ---------- | ---------- | ---------- | ---------- | Assamese | as | 29,631 | 14,592 | 14,808 | Bengali | bn | 113,424 | 14,739 | 14,568 | Gujarati | gu | 199,972 | 31,270 | 31,215 | Hindi | hi | 208,221 | 44,738 | 44,514 | Kannada | kn | 132,380 | 19,416 | 3,261 | Malayalam | ml | 10,358 | 5,388 | 5,220 | Marathi | mr | 114,042 | 14,253 | 14,340 | Oriya | or | 58,225 | 7,484 | 7,137 | Punjabi | pa | 48,441 | 6,108 | 6,086 | Tamil | ta | 60,650 | 7,616 | 7,688 | Telugu | te | 21,352 | 2,690 | 2,675 | ## Dataset Creation ### Curation Rationale [Detailed in the paper](https://arxiv.org/abs/2203.05437) ### Source Data For hindi, web sources like [Dainik Bhaskar](https://www.bhaskar.com), [Naidunia](https://www.naidunia.com/), [NDTV](https://ndtv.in/), [Business Standard](https://hindi.business-standard.com/) and [IndiaTV](https://www.indiatv.in/). For other languages, modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) dataset. #### Initial Data Collection and Normalization [Detailed in the paper](https://arxiv.org/abs/2203.05437) #### Who are the source language producers? 
[Detailed in the paper](https://arxiv.org/abs/2203.05437) ### Annotations [More information needed] #### Annotation process [More information needed] #### Who are the annotators? [More information needed] ### Personal and Sensitive Information [More information needed] ## Considerations for Using the Data ### Social Impact of Dataset [More information needed] ### Discussion of Biases [More information needed] ### Other Known Limitations [More information needed] ## Additional Information ### Dataset Curators [More information needed] ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use any of the datasets, models or code modules, please cite the following paper: ``` @inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" } ``` ### Contributions [Detailed in the paper](https://arxiv.org/abs/2203.05437)
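The four fields described in the IndicHeadlineGeneration Data Fields section can be represented as a small record type; a purely illustrative sketch in which the field names follow the card but the sample values are placeholders, not real data:

```python
from dataclasses import dataclass

@dataclass
class HeadlineExample:
    """One IndicHeadlineGeneration record, per the Data Fields section."""
    id: str      # unique identifier
    input: str   # full news article text
    target: str  # reference headline
    url: str     # source article link

# Placeholder values, not drawn from the dataset.
ex = HeadlineExample(id="14", input="<article body>",
                     target="<headline>", url="<source url>")
print(ex.target)
```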
amandakonet/climate_fever_adopted
2022-04-16T22:41:13.000Z
[ "region:us" ]
amandakonet
null
null
null
0
55
aps/dynahate
2022-05-18T00:11:13.000Z
[ "region:us" ]
aps
We present a human-and-model-in-the-loop process for dynamically generating datasets and training better performing and more robust hate detection models. We provide a new dataset of ~40,000 entries, generated and labelled by trained annotators over four rounds of dynamic data creation. It includes ~15,000 challenging perturbations and each hateful entry has fine-grained labels for the type and target of hate. Hateful entries make up 54% of the dataset, which is substantially higher than comparable datasets. We show that model performance is substantially improved using this approach. Models trained on later rounds of data collection perform better on test sets and are harder for annotators to trick. They also perform better on HATECHECK, a suite of functional tests for online hate detection. See https://arxiv.org/abs/2012.15761 for more details.
@inproceedings{vidgen2021learning, title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection}, author={Vidgen, Bertie and Thrush, Tristan and Waseem, Zeerak and Kiela, Douwe}, booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)}, pages={1667--1682}, year={2021} }
null
1
55
Entry not found
strombergnlp/x-stance
2022-10-25T21:45:25.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "language:de", "language:fr", "license:mit", "stance-detection", "arxiv:2003.08385", "region:us" ]
strombergnlp
The x-stance dataset contains more than 150 political questions, and 67k comments written by candidates on those questions. The comments are partly German, partly French and Italian. The data have been extracted from the Swiss voting advice platform Smartvote.
@inproceedings{vamvas2020xstance, author = "Vamvas, Jannis and Sennrich, Rico", title = "{X-Stance}: A Multilingual Multi-Target Dataset for Stance Detection", booktitle = "Proceedings of the 5th Swiss Text Analytics Conference (SwissText) \& 16th Conference on Natural Language Processing (KONVENS)", address = "Zurich, Switzerland", year = "2020", month = "jun", url = "http://ceur-ws.org/Vol-2624/paper9.pdf" }
null
1
55
--- annotations_creators: - crowdsourced language_creators: - found language: - de - fr license: - mit multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: [] task_categories: - text-classification task_ids: - fact-checking pretty_name: X-Stance tags: - stance-detection --- # Dataset Card for X-Stance ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/ZurichNLP/xstance](https://github.com/ZurichNLP/xstance) - **Paper:** [http://ceur-ws.org/Vol-2624/paper9.pdf](http://ceur-ws.org/Vol-2624/paper9.pdf), [https://arxiv.org/abs/2003.08385](https://arxiv.org/abs/2003.08385) - **Point of Contact:** [Jannis Vamvas](https://twitter.com/j_vamvas) ### Dataset Summary The x-stance dataset contains more than 150 political questions, and 67k comments written by candidates on those questions. The comments are partly German, partly French and Italian. 
The data have been extracted from the Swiss voting advice platform Smartvote. ### Languages German, French, Italian ## Dataset Structure ### Data Instances An example of 'train' looks as follows: ``` { 'id': '0', 'question': 'Eine Volksinitiative fordert, dass die Gesamtfläche der Bauzonen in der Schweiz für die nächsten 20 Jahre auf dem heutigen Stand begrenzt wird. Befürworten Sie dieses Anliegen?', 'comment': 'Eine fixe Grösse verbieten, ist das falsche Mittel', 'label': 0 } ``` ### Data Fields - `id`: a 'string' feature. - `question`: a 'string' expressing a claim/topic. - `comment`: a 'string' to be classified for its stance toward the question. - `label`: ``` 0: "AGAINST", 1: "FAVOR" ``` ### Data Splits |languages|name|instances| |---------|----|----:| |de|train|33850| |de|validation|2871| |de|test|11891| |fr|train|11790| |fr|validation|1055| |fr|test|5814| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [MIT License](https://github.com/ZurichNLP/xstance/blob/master/LICENSE) ### Citation Information ``` @article{vamvas2020x, title={X-stance: A multilingual multi-target dataset for stance detection}, author={Vamvas, Jannis and Sennrich, Rico}, journal={arXiv preprint arXiv:2003.08385}, year={2020} } ``` ### Contributions Thanks to [mkonxd](https://github.com/mkonxd), [leondz](https://github.com/leondz) for adding this dataset.
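The instance and label mapping in the x-stance card above can be exercised without downloading anything. A minimal sketch, using the record copied from the card's "Data Instances" section; note the `[SEP]`-style pairing of question and comment is an illustrative formatting choice, not one prescribed by the dataset authors:

```python
# Sketch: turning an x-stance record into a stance-classification example.
# The record is copied from the card; the input formatting is an assumption.
ID2LABEL = {0: "AGAINST", 1: "FAVOR"}

record = {
    "id": "0",
    "question": ("Eine Volksinitiative fordert, dass die Gesamtfläche der "
                 "Bauzonen in der Schweiz für die nächsten 20 Jahre auf dem "
                 "heutigen Stand begrenzt wird. Befürworten Sie dieses Anliegen?"),
    "comment": "Eine fixe Grösse verbieten, ist das falsche Mittel",
    "label": 0,
}

def to_classification_example(rec):
    """Pair the target question with the candidate's comment and decode the label."""
    text = f"{rec['question']} [SEP] {rec['comment']}"
    return text, ID2LABEL[rec["label"]]

text, stance = to_classification_example(record)
```

Because the task is multi-target, pairing the comment with its question (rather than classifying the comment alone) is what makes the stance well-defined.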
eraldoluis/faquad
2023-01-23T08:45:41.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|wikipedia", "language:pt", "license:cc-by-4.0", "region:us" ]
eraldoluis
Academic secretaries and faculty members of higher education institutions face a common problem: the abundance of questions sent by academics whose answers are found in available institutional documents. The official documents produced by Brazilian public universities are vast and dispersed, which discourages students from searching further for answers in such sources. In order to lessen this problem, we present FaQuAD: a novel machine reading comprehension dataset in the domain of Brazilian higher education institutions. FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016]. It comprises 900 questions about 249 reading passages (paragraphs), which were taken from 18 official documents of a computer science college from a Brazilian federal university and 21 Wikipedia articles related to the Brazilian higher education system. As far as we know, this is the first Portuguese reading comprehension dataset in this format.
@INPROCEEDINGS{ 8923668, author={Sayama, Hélio Fonseca and Araujo, Anderson Viçoso and Fernandes, Eraldo Rezende}, booktitle={2019 8th Brazilian Conference on Intelligent Systems (BRACIS)}, title={FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education}, year={2019}, volume={}, number={}, pages={443-448}, doi={10.1109/BRACIS.2019.00084} }
null
6
55
--- pretty_name: FaQuAD annotations_creators: - expert-generated language_creators: - found language: - pt license: - cc-by-4.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - extended|wikipedia task_categories: - question-answering task_ids: - extractive-qa # paperswithcode_id: faquad train-eval-index: - config: plain_text task: question-answering task_id: extractive_question_answering splits: train_split: train eval_split: validation col_mapping: question: question context: context answers: text: text answer_start: answer_start metrics: - type: squad name: SQuAD --- # Dataset Card for FaQuAD ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/liafacom/faquad - **Repository:** https://github.com/liafacom/faquad - **Paper:** https://ieeexplore.ieee.org/document/8923668/ <!-- - **Leaderboard:** --> - **Point of Contact:** Eraldo R. 
Fernandes <eraldoluis@gmail.com> ### Dataset Summary Academic secretaries and faculty members of higher education institutions face a common problem: the abundance of questions sent by academics whose answers are found in available institutional documents. The official documents produced by Brazilian public universities are vast and dispersed, which discourages students from searching further for answers in such sources. In order to lessen this problem, we present FaQuAD: a novel machine reading comprehension dataset in the domain of Brazilian higher education institutions. FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016]. It comprises 900 questions about 249 reading passages (paragraphs), which were taken from 18 official documents of a computer science college from a Brazilian federal university and 21 Wikipedia articles related to the Brazilian higher education system. As far as we know, this is the first Portuguese reading comprehension dataset in this format. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in Brazilian Portuguese (`pt`). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | name |train|validation| |---------|----:|----:| |faquad|837|63| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
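Because FaQuAD follows the SQuAD format, each answer is a character-offset span of its context (`answers.text` plus `answers.answer_start`). A minimal sketch of that invariant, using a made-up Portuguese record rather than real dataset content:

```python
# Sketch of the SQuAD-style invariant FaQuAD inherits: every answer string
# must equal the context slice its character offset points to.
# The record below is fabricated for illustration.
record = {
    "context": "A FaQuAD contém 900 perguntas sobre 249 parágrafos.",
    "question": "Quantas perguntas contém a FaQuAD?",
    "answers": {"text": ["900"], "answer_start": [16]},
}

def answer_spans_are_consistent(rec):
    """Check that each answer text matches the context slice it points to."""
    ctx = rec["context"]
    return all(
        ctx[start:start + len(txt)] == txt
        for txt, start in zip(rec["answers"]["text"], rec["answers"]["answer_start"])
    )
```

Running this consistency check over a new extractive-QA dataset is a cheap way to catch off-by-one annotation or tokenization errors before training.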
cannlytics/cannabis_licenses
2023-09-30T14:23:05.000Z
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "license:cc-by-4.0", "cannabis", "licenses", "region:us" ]
cannlytics
Cannabis Licenses is a dataset of curated cannabis license data. The dataset consists of sub-datasets for each state with permitted adult-use cannabis, as well as a sub-dataset that includes all licenses.
@inproceedings{cannlytics2023cannabis_licenses, author = {Skeate, Keegan and O'Sullivan-Sutherland, Candace}, title = {Cannabis Licenses}, booktitle = {Cannabis Data Science}, month = {August}, year = {2023}, address = {United States of America}, publisher = {Cannlytics} }
null
3
55
--- pretty_name: cannabis_licenses annotations_creators: - expert-generated language_creators: - expert-generated license: - cc-by-4.0 tags: - cannabis - licenses --- # Cannabis Licenses <!-- FIXME: <div align="center" style="text-align:center; margin-top:1rem; margin-bottom: 1rem;"> <img style="max-height:365px;width:100%;max-width:720px;" alt="" src="analysis/figures/cannabis-licenses-map.png"> </div> --> ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Data Collection and Normalization](#data-collection-and-normalization) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [License](#license) - [Citation](#citation) - [Contributions](#contributions) ## Dataset Description - **Homepage:** <https://github.com/cannlytics/cannlytics> - **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_licenses> - **Point of Contact:** <dev@cannlytics.com> ### Dataset Summary **Cannabis Licenses** is a collection of cannabis license data for each state with permitted adult-use cannabis. The dataset also includes a sub-dataset, `all`, that includes all licenses. ## Dataset Structure The dataset is partitioned into 18 subsets for each state and the aggregate. 
| State | Code | Status | |-------|------|--------| | [All](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/all) | `all` | ✅ | | [Alaska](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ak) | `ak` | ✅ | | [Arizona](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/az) | `az` | ✅ | | [California](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ca) | `ca` | ✅ | | [Colorado](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/co) | `co` | ✅ | | [Connecticut](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ct) | `ct` | ✅ | | [Delaware](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/de) | `de` | ✅ | | [Illinois](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/il) | `il` | ✅ | | [Maine](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/me) | `me` | ✅ | | [Maryland](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/md) | `md` | ✅ | | [Massachusetts](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ma) | `ma` | ✅ | | [Michigan](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mi) | `mi` | ✅ | | [Missouri](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mo) | `mo` | ✅ | | [Montana](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mt) | `mt` | ✅ | | [Nevada](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nv) | `nv` | ✅ | | [New Jersey](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nj) | `nj` | ✅ | | [New Mexico](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nm) | `nm` | ✅ | | [New York](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ny) | `ny` | ✅ | | 
[Oregon](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/or) | `or` | ✅ | | [Rhode Island](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ri) | `ri` | ✅ | | [Vermont](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/vt) | `vt` | ✅ | | Virginia | `va` | ⏳ Expected 2024 | | [Washington](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/wa) | `wa` | ✅ | The following states have issued medical cannabis licenses, but are not (yet) included in the dataset: - Alabama - Arkansas - District of Columbia (D.C.) - Florida - Kentucky (2024) - Louisiana - Minnesota - Mississippi - New Hampshire - North Dakota - Ohio - Oklahoma - Pennsylvania - South Dakota - Utah - West Virginia ### Data Instances You can load the licenses for each state. For example: ```py from datasets import load_dataset # Get the licenses for a specific state. dataset = load_dataset('cannlytics/cannabis_licenses', 'all') data = dataset['data'] ``` ### Data Fields Below is a non-exhaustive list of fields, used to standardize the various data that are encountered, that you may expect to find for each observation. | Field | Example | Description | |-------|-----|-------------| | `id` | `"1046"` | A state-unique ID for the license. | | `license_number` | `"C10-0000423-LIC"` | A unique license number. | | `license_status` | `"Active"` | The status of the license. Only licenses that are active are included. | | `license_status_date` | `"2022-04-20T00:00"` | The date the status was assigned, an ISO-formatted date if present. | | `license_term` | `"Provisional"` | The term for the license. | | `license_type` | `"Commercial - Retailer"` | The type of business license. | | `license_designation` | `"Adult-Use and Medicinal"` | A state-specific classification for the license. | | `issue_date` | `"2019-07-15T00:00:00"` | An issue date for the license, an ISO-formatted date if present. 
| | `expiration_date` | `"2023-07-14T00:00:00"` | An expiration date for the license, an ISO-formatted date if present. | | `licensing_authority_id` | `"BCC"` | A unique ID for the state licensing authority. | | `licensing_authority` | `"Bureau of Cannabis Control (BCC)"` | The state licensing authority. | | `business_legal_name` | `"Movocan"` | The legal name of the business that owns the license. | | `business_dba_name` | `"Movocan"` | The name the license is doing business as. | | `business_owner_name` | `"redacted"` | The name of the owner of the license. | | `business_structure` | `"Corporation"` | The structure of the business that owns the license. | | `activity` | `"Pending Inspection"` | Any relevant license activity. | | `premise_street_address` | `"1632 Gateway Rd"` | The street address of the business. | | `premise_city` | `"Calexico"` | The city of the business. | | `premise_state` | `"CA"` | The state abbreviation of the business. | | `premise_county` | `"Imperial"` | The county of the business. | | `premise_zip_code` | `"92231"` | The zip code of the business. | | `business_email` | `"redacted@gmail.com"` | The business email of the license. | | `business_phone` | `"(555) 555-5555"` | The business phone of the license. | | `business_website` | `"cannlytics.com"` | The business website of the license. | | `parcel_number` | `"A42"` | An ID for the business location. | | `premise_latitude` | `32.69035693` | The latitude of the business. | | `premise_longitude` | `-115.38987552` | The longitude of the business. | | `data_refreshed_date` | `"2022-09-21T12:16:33.3866667"` | An ISO-formatted time when the license data was updated. | ### Data Splits The data is split into subsets by state. You can retrieve all licenses by requesting the `all` subset. ```py from datasets import load_dataset # Get all cannabis licenses. 
dataset = load_dataset('cannlytics/cannabis_licenses', 'all') data = dataset['data'] ``` ## Dataset Creation ### Curation Rationale Data about organizations operating in the cannabis industry for each state is valuable for research. ### Source Data | State | Data Source URL | |-------|-----------------| | Alaska | <https://www.commerce.alaska.gov/abc/marijuana/Home/licensesearch> | | Arizona | <https://azcarecheck.azdhs.gov/s/?licenseType=null> | | California | <https://search.cannabis.ca.gov/> | | Colorado | <https://sbg.colorado.gov/med/licensed-facilities> | | Connecticut | <https://portal.ct.gov/DCP/Medical-Marijuana-Program/Connecticut-Medical-Marijuana-Dispensary-Facilities> | | Delaware | <https://dhss.delaware.gov/dhss/dph/hsp/medmarcc.html> | | Illinois | <https://www.idfpr.com/LicenseLookup/AdultUseDispensaries.pdf> | | Maine | <https://www.maine.gov/dafs/ocp/open-data/adult-use> | | Maryland | <https://mmcc.maryland.gov/Pages/Dispensaries.aspx> | | Massachusetts | <https://masscannabiscontrol.com/open-data/data-catalog/> | | Michigan | <https://michigan.maps.arcgis.com/apps/webappviewer/index.html?id=cd5a1a76daaf470b823a382691c0ff60> | | Missouri | <https://health.mo.gov/safety/cannabis/licensed-facilities.php> | | Montana | <https://mtrevenue.gov/cannabis/#CannabisLicenses> | | Nevada | <https://ccb.nv.gov/list-of-licensees/> | | New Jersey | <https://data.nj.gov/stories/s/ggm4-mprw> | | New Mexico | <https://nmrldlpi.force.com/bcd/s/public-search-license?division=CCD&language=en_US> | | New York | <https://cannabis.ny.gov/licensing> | | Oregon | <https://www.oregon.gov/olcc/marijuana/pages/recreational-marijuana-licensing.aspx> | | Rhode Island | <https://dbr.ri.gov/office-cannabis-regulation/compassion-centers/licensed-compassion-centers> | | Vermont | <https://ccb.vermont.gov/licenses> | | Washington | <https://lcb.wa.gov/records/frequently-requested-lists> | ### Data Collection and Normalization In the `algorithms` directory, you can find the 
algorithms used for data collection. You can use these algorithms to recreate the dataset. First, you will need to clone the repository: ``` git clone https://huggingface.co/datasets/cannlytics/cannabis_licenses ``` You can then install the algorithm Python (3.9+) requirements: ``` cd cannabis_licenses pip install -r requirements.txt ``` Then you can run all of the data-collection algorithms: ``` python algorithms/main.py ``` Or you can run each algorithm individually. For example: ``` python algorithms/get_licenses_ny.py ``` ### Personal and Sensitive Information This dataset includes names of individuals, public addresses, and contact information for cannabis licensees. It is important to take care to use these data points in a legal manner. ## Considerations for Using the Data ### Social Impact of Dataset Arguably, there is substantial social impact that could result from the study of permitted adult-use cannabis, therefore, researchers and data consumers alike should take the utmost care in the use of this dataset. ### Discussion of Biases Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration. ### Other Known Limitations The data is for adult-use cannabis licenses. It would be valuable to include medical cannabis licenses too. ## Additional Information ### Dataset Curators Curated by [🔥Cannlytics](https://cannlytics.com)<br> <contact@cannlytics.com> ### License ``` Copyright (c) 2022-2023 Cannlytics and the Cannabis Data Science Team The files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license. You can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY license, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. 
Note that further permission may be required for any content within the dataset that is identified as belonging to a third party. ``` ### Citation Please cite the following if you use the code examples in your research: ```bibtex @misc{cannlytics2023, title={Cannabis Data Science}, author={Skeate, Keegan and O'Sullivan-Sutherland, Candace}, journal={https://github.com/cannlytics/cannabis-data-science}, year={2023} } ``` ### Contributions Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
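To complement the `load_dataset` snippets in the cannabis-licenses card above, here is a dependency-free sketch of post-processing with the standardized fields the card describes (`license_status`, `premise_state`). The sample records are fabricated placeholders, not real licensees:

```python
# Sketch: filtering license records by the standardized fields from the card.
# The records are invented; the dataset itself states only active licenses
# are included, so the status check is defensive.
licenses = [
    {"license_number": "C10-0000423-LIC", "license_status": "Active", "premise_state": "CA"},
    {"license_number": "AK-0001", "license_status": "Expired", "premise_state": "AK"},
    {"license_number": "C12-0000001-LIC", "license_status": "Active", "premise_state": "CA"},
]

def active_in_state(records, state):
    """Return active licenses located in the given two-letter state code."""
    return [
        r for r in records
        if r["license_status"] == "Active" and r["premise_state"] == state
    ]

ca_active = active_in_state(licenses, "CA")
```

The same pattern applies to rows yielded by the `all` subset, since every state's data is normalized to these shared column names.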
bigbio/chemdner
2022-12-22T15:44:21.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial.
@article{Krallinger2015, title = {The CHEMDNER corpus of chemicals and drugs and its annotation principles}, author = { Krallinger, Martin and Rabal, Obdulia and Leitner, Florian and Vazquez, Miguel and Salgado, David and Lu, Zhiyong and Leaman, Robert and Lu, Yanan and Ji, Donghong and Lowe, Daniel M. and Sayle, Roger A. and Batista-Navarro, Riza Theresa and Rak, Rafal and Huber, Torsten and Rockt{\"a}schel, Tim and Matos, S{\'e}rgio and Campos, David and Tang, Buzhou and Xu, Hua and Munkhdalai, Tsendsuren and Ryu, Keun Ho and Ramanan, S. V. and Nathan, Senthil and {\v{Z}}itnik, Slavko and Bajec, Marko and Weber, Lutz and Irmer, Matthias and Akhondi, Saber A. and Kors, Jan A. and Xu, Shuo and An, Xin and Sikdar, Utpal Kumar and Ekbal, Asif and Yoshioka, Masaharu and Dieb, Thaer M. and Choi, Miji and Verspoor, Karin and Khabsa, Madian and Giles, C. Lee and Liu, Hongfang and Ravikumar, Komandur Elayavilli and Lamurias, Andre and Couto, Francisco M. and Dai, Hong-Jie and Tsai, Richard Tzong-Han and Ata, Caglar and Can, Tolga and Usi{\'e}, Anabel and Alves, Rui and Segura-Bedmar, Isabel and Mart{\'i}nez, Paloma and Oyarzabal, Julen and Valencia, Alfonso }, year = 2015, month = {Jan}, day = 19, journal = {Journal of Cheminformatics}, volume = 7, number = 1, pages = {S2}, doi = {10.1186/1758-2946-7-S1-S2}, issn = {1758-2946}, url = {https://doi.org/10.1186/1758-2946-7-S1-S2}, abstract = { The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. 
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: ttp://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ } }
null
0
55
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: CHEMDNER homepage: https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/ bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION - TEXT_CLASSIFICATION --- # Dataset Card for CHEMDNER ## Dataset Description - **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/ - **Pubmed:** True - **Public:** True - **Tasks:** NER,TXTCLASS We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. ## Citation Information ``` @article{Krallinger2015, title = {The CHEMDNER corpus of chemicals and drugs and its annotation principles}, author = { Krallinger, Martin and Rabal, Obdulia and Leitner, Florian and Vazquez, Miguel and Salgado, David and Lu, Zhiyong and Leaman, Robert and Lu, Yanan and Ji, Donghong and Lowe, Daniel M. and Sayle, Roger A. and Batista-Navarro, Riza Theresa and Rak, Rafal and Huber, Torsten and Rockt{\"a}schel, Tim and Matos, S{\'e}rgio and Campos, David and Tang, Buzhou and Xu, Hua and Munkhdalai, Tsendsuren and Ryu, Keun Ho and Ramanan, S. V. and Nathan, Senthil and {\v{Z}}itnik, Slavko and Bajec, Marko and Weber, Lutz and Irmer, Matthias and Akhondi, Saber A. and Kors, Jan A. and Xu, Shuo and An, Xin and Sikdar, Utpal Kumar and Ekbal, Asif and Yoshioka, Masaharu and Dieb, Thaer M. 
and Choi, Miji and Verspoor, Karin and Khabsa, Madian and Giles, C. Lee and Liu, Hongfang and Ravikumar, Komandur Elayavilli and Lamurias, Andre and Couto, Francisco M. and Dai, Hong-Jie and Tsai, Richard Tzong-Han and Ata, Caglar and Can, Tolga and Usi{\'e}, Anabel and Alves, Rui and Segura-Bedmar, Isabel and Mart{\'i}nez, Paloma and Oyarzabal, Julen and Valencia, Alfonso }, year = 2015, month = {Jan}, day = 19, journal = {Journal of Cheminformatics}, volume = 7, number = 1, pages = {S2}, doi = {10.1186/1758-2946-7-S1-S2}, issn = {1758-2946}, url = {https://doi.org/10.1186/1758-2946-7-S1-S2}, abstract = { The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. 
For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ } } ```
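CHEMDNER is primarily used for chemical NER, where character-offset mention annotations are projected onto token-level tags before training. A hedged sketch of that preprocessing step; the abstract snippet, offsets, and SACEM class below are invented for illustration, and the corpus's own distribution format may differ:

```python
# Sketch: projecting character-offset chemical mentions onto token-level
# BIO tags, a common preprocessing step for NER corpora like CHEMDNER.
# The text and spans are fabricated examples.
text = "Caffeine and theobromine are xanthine alkaloids."
mentions = [(0, 8, "TRIVIAL"), (13, 24, "TRIVIAL")]  # (start, end, SACEM class)

def char_spans_to_bio(text, spans):
    """Whitespace-tokenize and tag each token B-/I-/O by overlap with a span."""
    tokens, tags, pos = [], [], 0
    for tok in text.split():
        start = text.index(tok, pos)  # locate this token's character offset
        end = start + len(tok)
        pos = end
        label = "O"
        for s, e, cls in spans:
            if start >= s and end <= e:
                label = ("B-" if start == s else "I-") + cls
                break
        tokens.append(tok)
        tags.append(label)
    return tokens, tags

tokens, tags = char_spans_to_bio(text, mentions)
```

Whitespace tokenization is a simplification here; a real pipeline would use the model's own tokenizer and handle mentions that cross token boundaries.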
stacked-summaries/stacked-samsum-1024
2023-05-29T00:30:18.000Z
[ "task_categories:summarization", "size_categories:10K<n<100K", "source_datasets:samsum", "language:en", "license:apache-2.0", "stacked summaries", "region:us" ]
stacked-summaries
null
null
null
4
55
--- license: apache-2.0 source_datasets: - samsum task_categories: - summarization language: - en tags: - stacked summaries pretty_name: Stacked Samsum - 1024 size_categories: - 10K<n<100K --- # stacked samsum 1024 Created with the `stacked-booksum` repo version v0.25. It contains: 1. Original Dataset: copy of the base dataset 2. Stacked Rows: The original dataset is processed by stacking rows based on certain criteria: - Maximum Input Length: The maximum length for input sequences is 1024 tokens in the longt5 model tokenizer. - Maximum Output Length: The maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer. 3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage. ## stats ![stacked-samsum-1024-trainstats](https://i.imgur.com/BRPHWnQ.png) ## dataset details Default (train): ```python [2022-12-04 13:19:32] INFO:root:{'num_columns': 4, 'num_rows': 14732, 'num_unique_target': 14730, 'num_unique_text': 14265, 'summary - average chars': 110.13, 'summary - average tokens': 28.693727939180015, 'text input - average chars': 511.22, 'text input - average tokens': 148.88759163725223} ``` stacked (train) ```python [2022-12-05 00:49:04] INFO:root:stacked 14730 rows, 2 rows were ineligible [2022-12-05 00:49:04] INFO:root:dropped 20 duplicate rows, 29442 rows remain [2022-12-05 00:49:04] INFO:root:shuffling output with seed 182 [2022-12-05 00:49:04] INFO:root:STACKED - basic stats - train [2022-12-05 00:49:04] INFO:root:{'num_columns': 5, 'num_rows': 29442, 'num_unique_chapters': 28975, 'num_unique_summaries': 29441, 'summary - average chars': 452.8, 'summary - average tokens': 106.46820868147545, 'text input - average chars': 1814.09, 'text input - average tokens': 528.665579783982} ```
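The row-stacking procedure the stacked-samsum card describes can be sketched in a few lines. A character budget stands in for the 1024-token longt5 budget so the sketch stays dependency-free, and the dialogue/summary pairs are invented placeholders, not dataset rows:

```python
# Sketch of the row-stacking idea: (dialogue, summary) pairs are merged,
# with [NEXT_CONCEPT] marking the topic switch inside the stacked summary.
NEXT_CONCEPT = "[NEXT_CONCEPT]"

def stack_rows(rows, max_chars=1024):
    """Merge rows into one example while the combined input fits the budget.

    The real dataset enforces a 1024-token limit with the longt5 tokenizer;
    a character limit approximates it here.
    """
    dialogue = " ".join(r["dialogue"] for r in rows)
    summary = f" {NEXT_CONCEPT} ".join(r["summary"] for r in rows)
    if len(dialogue) > max_chars:
        raise ValueError("stacked input exceeds the budget")
    return {"dialogue": dialogue, "summary": summary}

stacked = stack_rows([
    {"dialogue": "Amanda: I baked cookies.", "summary": "Amanda baked cookies."},
    {"dialogue": "Tom: Match at 5?", "summary": "Tom proposes a match at 5."},
])
```

Consistent with the card's recommendation, a model fine-tuned on this data should have `[NEXT_CONCEPT]` registered as a token in its tokenizer so the marker is not split into subwords.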
RIW/small-coco
2023-02-06T01:06:04.000Z
[ "region:us" ]
RIW
null
null
null
0
55
--- dataset_info: features: - name: image dtype: image - name: caption dtype: string - name: url dtype: string - name: key dtype: string - name: status dtype: string - name: error_message dtype: 'null' - name: width dtype: int64 - name: height dtype: int64 - name: original_width dtype: int64 - name: original_height dtype: int64 - name: exif dtype: string - name: sha256 dtype: string splits: - name: train num_bytes: 1946738057.45 num_examples: 9890 - name: validation num_bytes: 1953823510.0 num_examples: 9893 download_size: 1313384992 dataset_size: 3900561567.45 --- # Dataset Card for "small-coco" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
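The small-coco schema above carries a `sha256` column alongside each image's `url`, which makes it possible to verify that re-downloaded bytes match what was originally crawled. A sketch with fabricated bytes and a fabricated record:

```python
# Sketch: integrity-checking image bytes against the record's sha256 field.
# The bytes and record below are placeholders, not real dataset content.
import hashlib

def image_matches_record(image_bytes, record):
    """Compare the SHA-256 hex digest of raw bytes with the record's sha256."""
    return hashlib.sha256(image_bytes).hexdigest() == record["sha256"]

fake_bytes = b"not-a-real-jpeg"
record = {
    "url": "https://example.com/img.jpg",
    "sha256": hashlib.sha256(fake_bytes).hexdigest(),
}
ok = image_matches_record(fake_bytes, record)
```

This kind of check is useful when re-crawling from `url`, since remote images can silently change or disappear after a dataset is published.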
venetis/symptom_text_to_disease_mk4
2023-03-10T04:30:59.000Z
[ "region:us" ]
venetis
null
null
null
2
55
--- dataset_info: features: - name: text dtype: string - name: labels dtype: class_label: names: '0': emotional pain '1': hair falling out '2': heart hurts '3': infected wound '4': foot ache '5': shoulder pain '6': injury from sports '7': skin issue '8': stomach ache '9': knee pain '10': joint pain '11': hard to breath '12': head ache '13': body feels weak '14': feeling dizzy '15': back pain '16': open wound '17': internal pain '18': blurry vision '19': acne '20': muscle pain '21': neck pain '22': cough '23': ear ache '24': feeling cold splits: - name: train num_bytes: 330494.3762197868 num_examples: 5328 - name: test num_bytes: 41373.82675273983 num_examples: 667 - name: valid num_bytes: 41311.79702747335 num_examples: 666 download_size: 144224 dataset_size: 413180.0 --- # Dataset Card for "symptom_text_to_disease_mk4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Francesco/coins-1apki
2023-03-30T09:10:17.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
null
0
55
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': coins '1': coin '2': nail '3': nut '4': screw annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: coins-1apki tags: - rf100 --- # Dataset Card for coins-1apki **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/coins-1apki - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary coins-1apki ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/coins-1apki ### Citation Information ``` @misc{ coins-1apki, title = { coins 1apki Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/coins-1apki } }, url = { https://universe.roboflow.com/object-detection/coins-1apki }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
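The `bbox` values above follow the COCO convention `[x_min, y_min, width, height]`; a small helper (illustrative, not shipped with the dataset) converts one box to corner coordinates:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] bounding box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x_min, y_min, width, height = bbox
    return [x_min, y_min, x_min + width, y_min + height]

# First box from the sample data instance above.
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # → [302.0, 109.0, 375.0, 161.0]
```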
seanghay/km-speech-corpus
2023-05-03T04:47:59.000Z
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "size_categories:10K<n<100K", "language:km", "license:cc-by-4.0", "region:us" ]
seanghay
null
null
null
0
55
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: raw_transcription dtype: string splits: - name: train num_bytes: 2401601016.002 num_examples: 14943 download_size: 2386178405 dataset_size: 2401601016.002 license: cc-by-4.0 task_categories: - automatic-speech-recognition - text-to-speech language: - km pretty_name: Khmer Speech Corpus size_categories: - 10K<n<100K --- # Dataset Card for "km-speech-corpus" ``` sampling_rate: 16000 mean_seconds: 2.5068187111021882 max_seconds: 19.392 min_seconds: 0.448 total_seconds: 37459.392 total_hrs: 10.405386666666667 ```
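The summary stats above are internally consistent: `mean_seconds` is `total_seconds` divided by the example count, and `total_hrs` is `total_seconds / 3600`. A quick sketch reproducing that arithmetic:

```python
num_examples = 14943       # from the train split above
total_seconds = 37459.392  # from the stats block above

mean_seconds = total_seconds / num_examples
total_hrs = total_seconds / 3600

print(mean_seconds)  # ≈ 2.5068 s per clip
print(total_hrs)     # ≈ 10.405 hours in total
```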
Ar4ikov/iemocap_audio_text
2023-05-02T22:25:17.000Z
[ "region:us" ]
Ar4ikov
null
null
null
1
55
--- dataset_info: features: - name: _id dtype: string - name: activation dtype: float64 - name: dominance dtype: float64 - name: emotion dtype: string - name: end_time dtype: float64 - name: start_time dtype: float64 - name: titre dtype: string - name: to_translate dtype: string - name: translated dtype: string - name: valence dtype: float64 - name: audio dtype: audio: sampling_rate: 16000 splits: - name: train num_bytes: 1479523969.375 num_examples: 10039 download_size: 1405738430 dataset_size: 1479523969.375 --- # Dataset Card for "iemocap_audio_text" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
frncscp/patacon-730
2023-05-04T01:51:07.000Z
[ "region:us" ]
frncscp
null
null
null
0
55
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': Patacon-False '1': Patacon-True splits: - name: train num_bytes: 114865007.0 num_examples: 874 - name: validation num_bytes: 18290064.0 num_examples: 143 - name: test num_bytes: 59447780.0 num_examples: 442 download_size: 192218294 dataset_size: 192602851.0 --- # Dataset Card for "patacon-730" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ewof/code-alpaca-instruct-unfiltered
2023-05-13T03:53:37.000Z
[ "region:us" ]
ewof
null
null
null
7
55
This dataset is HuggingFaceH4/CodeAlpaca_20K unfiltered, removing 36 instances of blatant alignment. 19986 instructions remain. https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K/blob/29ba7b7fdf0c55e5435c848cf6bbf9782fef62a6/data/test-00000-of-00001.parquet https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K/blob/a123ae447f02484d83c3457438b4422cd8417ad5/data/train-00000-of-00001.parquet I combined both of the files above into code_alpaca_data.jsonl with parquet2json and ran clean.py on it, then renamed it to .json (not .jsonl). The normal dedupe.py script didn't find any dupes here. Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered All credit to anon8231489123 for the cleanup script that I adapted into wizardlm_clean.py; I then took that script and adapted it into clean.py.
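A minimal sketch of the kind of exact-match dedupe check described above (the function name and field layout are assumptions; the actual dedupe.py may differ):

```python
def find_exact_dupes(records):
    """Return indices of records whose (instruction, input, output)
    triple exactly repeats an earlier record."""
    seen = set()
    dupes = []
    for i, rec in enumerate(records):
        key = (rec.get("instruction"), rec.get("input"), rec.get("output"))
        if key in seen:
            dupes.append(i)
        else:
            seen.add(key)
    return dupes

sample = [
    {"instruction": "Reverse a list in Python.", "input": "", "output": "lst[::-1]"},
    {"instruction": "Reverse a list in Python.", "input": "", "output": "lst[::-1]"},
    {"instruction": "Sum a list.", "input": "", "output": "sum(lst)"},
]
print(find_exact_dupes(sample))  # → [1]
```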
vmalperovich/SST5
2023-05-25T00:10:29.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "region:us" ]
vmalperovich
This data collection contains all the data used in our learning question classification experiments(see [1]), which has question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts and examples of semantically related word features. This work has been done by Xin Li and Dan Roth and supported by [2].
""" _TRAIN_DOWNLOAD_URL = "https://huggingface.co/datasets/vmalperovich/SST-5/raw/main/train.csv" _TEST_DOWNLOAD_URL = "https://huggingface.co/datasets/vmalperovich/SST-5/raw/main/test.csv" _VALID_DOWNLOAD_URL = "https://huggingface.co/datasets/vmalperovich/SST-5/raw/main/validation.csv" CATEGORY_MAPPING = {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4 } class SST5(datasets.GeneratorBasedBuilder):
null
0
55
--- task_categories: - text-classification language: - en pretty_name: sst-5 size_categories: - 1K<n<10K --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
blindsubmissions/M2CRB
2023-08-08T15:06:30.000Z
[ "task_categories:translation", "task_categories:summarization", "size_categories:1K<n<10K", "language:pt", "language:de", "language:fr", "language:es", "license:other", "code", "region:us" ]
blindsubmissions
null
null
null
0
55
--- dataset_info: features: - name: identifier dtype: string - name: parameters dtype: string - name: return_statement dtype: string - name: docstring dtype: string - name: docstring_summary dtype: string - name: function dtype: string - name: function_tokens sequence: string - name: start_point sequence: int64 - name: end_point sequence: int64 - name: argument_list dtype: 'null' - name: language dtype: string - name: docstring_language dtype: string - name: docstring_language_predictions dtype: string - name: is_langid_reliable dtype: string - name: is_langid_extra_reliable dtype: bool - name: type dtype: string splits: - name: test num_bytes: 15742687 num_examples: 7743 download_size: 5530793 dataset_size: 15742687 license: other task_categories: - translation - summarization language: - pt - de - fr - es tags: - code pretty_name: m size_categories: - 1K<n<10K --- # M2CRB ## Dataset Summary M2CRB contains pairs of text and code data with multiple natural and programming language pairs. Namely: Spanish, Portuguese, German, and French, each paired with code snippets for: Python, Java, and JavaScript. The data is curated via an automated filtering pipeline from source files within [The Stack](https://huggingface.co/datasets/bigcode/the-stack) followed by human verification to ensure accurate language classification I.e., humans were asked to filter out data for which natural language did not correspond to a target language. ## Supported Tasks M2CRB is a multilingual evaluation dataset for code-to-text and/or text-to-code models, both on information retrieval or conditional generation evaluations. 
## Currently Supported Languages ```python NATURAL_LANGUAGE_SET = {"es", "fr", "pt", "de"} PROGRAMMING_LANGUAGE_SET = {"python", "java", "javascript"} ``` ## How to get the data with a given language combination ```python from datasets import load_dataset def get_dataset(prog_lang, nat_lang): test_data = load_dataset("blindsubmissions/M2CRB") test_data = test_data.filter( lambda example: example["docstring_language"] == nat_lang and example["language"] == prog_lang ) return test_data ``` ## Dataset Structure ### Data Instances Each data instance corresponds to functions/methods occurring in licensed files that compose The Stack. That is, files with permissive licences collected from GitHub. ### Relevant Data Fields - identifier (string): Function/method name. - parameters (string): Function parameters. - return_statement (string): Return statement if found during parsing. - docstring (string): Complete docstring content. - docstring_summary (string): Summary/processed docstring dropping args and return statements. - function (string): Actual function/method content. - argument_list (null): List of arguments. - language (string): Programming language of the function. - docstring_language (string): Natural language of the docstring. - type (string): Return type if found during parsing. ## Summary of data curation pipeline - Filtering out repositories that appear in [CodeSearchNet](https://huggingface.co/datasets/code_search_net). - Filtering the files that belong to the programming languages of interest. - Pre-filtering the files that likely contain text in the natural languages of interest. - AST parsing with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/). - Perform language identification of docstrings in the resulting set of functions/methods. - Perform human verification/validation of the underlying language of docstrings. 
## Social Impact of the dataset M2CRB is released with the aim of increasing the coverage of the NLP for code research community by providing data from scarce combinations of languages. We expect this data to help enable more accurate information retrieval systems and text-to-code or code-to-text summarization on languages other than English. As a subset of The Stack, this dataset inherits de-risking efforts carried out when that dataset was built, though we highlight that risks remain and that malicious use of the data is possible, for instance to aid in the creation of malicious code. We highlight however that this is a risk shared by any code dataset made openly available. Moreover, we remark that while unlikely due to human filtering, the data may contain harmful or offensive language, which could be learned by the models. ## Discussion of Biases The data is collected from GitHub and naturally occurring text on that platform. As a consequence, certain language combinations are more or less likely to contain well-documented code and, as such, the resulting data will not be uniformly represented in terms of their natural and programming languages. ## Known limitations While we cover 16 scarce combinations of programming and natural languages, our evaluation dataset can be expanded to further improve its coverage. Moreover, we use text naturally occurring as comments or docstrings as opposed to human annotators. As such, the resulting data will have high variance in quality, depending on the practices of sub-communities of software developers. However, we remark that the task our evaluation dataset defines is reflective of what searching on a real codebase would look like. Finally, we note that some imbalance in the data is observed for the same reason, since certain language combinations are more or less likely to contain well-documented code. ## Maintenance plan: The data will be kept up to date by following The Stack releases. 
We should rerun our pipeline for every new release and add non-overlapping new content to both training and testing partitions of our data. This is so that we carry over opt-out updates and include fresh repos. ## Update plan: - Short term: - Cover all 6 programming languages from CodeSearchNet. - Long-term - Add an extra test set containing human-generated text/code pairs so the gap between in-the-wild and controlled performances can be measured. - Include extra natural languages. ## Licensing Information M2CRB is a subset filtered and pre-processed from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in M2CRB must abide by the terms of the original licenses.
mcipriano/stackoverflow-kubernetes-questions
2023-10-10T18:21:03.000Z
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:cc-by-sa-4.0", "Kubernetes", "Stackoverflow", "region:us" ]
mcipriano
null
null
null
8
55
--- license: cc-by-sa-4.0 task_categories: - question-answering - text-generation language: - en tags: - Kubernetes - Stackoverflow size_categories: - 10K<n<100K --- The purpose of this dataset is to support training, fine-tuning, and similar workflows for any language model. In the 'data' folder, you will find the dataset in Parquet format, one of the formats commonly used for these processes. In case it is useful for other purposes, I have also included the dataset in CSV format. All data in this dataset were retrieved from the Stack Exchange network using the Stack Exchange Data Explorer tool (https://github.com/StackExchange/StackExchange.DataExplorer). The dataset contains all the Question-Answer pairs from Stack Overflow tagged with Kubernetes. In each Question-Answer pair, the Answer is the one with the highest positive score. Posts on Stack Overflow with negative scores have been excluded from the dataset.
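A hedged sketch of reading the CSV variant with the standard library (the column names here are assumptions — check the actual file header before relying on them):

```python
import csv
import io

# Stand-in for the real CSV file; the real columns may be named differently.
sample_csv = (
    "Question,Answer\n"
    "How do I list pods in a namespace?,Use `kubectl get pods -n <namespace>`.\n"
)

rows = list(csv.DictReader(io.StringIO(sample_csv)))
print(rows[0]["Question"])
```

For the Parquet variant, loading via `datasets.load_dataset` or `pandas.read_parquet` would be the usual route.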
cryptom/ceval-exam
2023-06-24T00:40:14.000Z
[ "task_categories:text-classification", "task_categories:multiple-choice", "task_categories:question-answering", "size_categories:10K<n<100K", "language:zh", "license:cc-by-nc-sa-4.0", "arxiv:2305.08322", "region:us" ]
cryptom
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels.
@article{huang2023ceval, title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models}, author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian}, journal={arXiv preprint arXiv:2305.08322}, year={2023} }
null
0
55
--- license: cc-by-nc-sa-4.0 task_categories: - text-classification - multiple-choice - question-answering language: - zh pretty_name: C-Eval size_categories: - 10K<n<100K --- C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details. Each subject consists of three splits: dev, val, and test. The dev set per subject consists of five exemplars with explanations for few-shot evaluation. The val set is intended to be used for hyperparameter tuning, and the test set is for model evaluation. Labels on the test split are not released; users are required to submit their results to automatically obtain test accuracy. [How to submit?](https://github.com/SJTU-LIT/ceval/tree/main#how-to-submit) ### Load the data ```python from datasets import load_dataset dataset=load_dataset(r"ceval/ceval-exam",name="computer_network") print(dataset['val'][0]) # {'id': 0, 'question': '使用位填充方法,以01111110为位首flag,数据为011011111111111111110010,求问传送时要添加几个0____', 'A': '1', 'B': '2', 'C': '3', 'D': '4', 'answer': 'C', 'explanation': ''} ``` More details on loading and using the data are at our [github page](https://github.com/SJTU-LIT/ceval#data). Please cite our paper if you use our dataset. ``` @article{huang2023ceval, title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models}, author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian}, journal={arXiv preprint arXiv:2305.08322}, year={2023} } ```
psymon/namuwiki_alpaca_dataset
2023-06-29T07:29:01.000Z
[ "language:ko", "license:cc-by-nc-sa-2.0", "region:us" ]
psymon
null
null
null
8
55
--- license: cc-by-nc-sa-2.0 language: - ko --- ## namuwiki for Stanford Alpaca A dataset built from the Namuwiki dump file and adapted for Stanford Alpaca training. The data format is identical to Stanford Alpaca's. The instruction takes the form of a Namuwiki article title followed by the literal suffix '에 대해 설명해줘.' ("explain ... to me"),<br> and the output is the content of the article's == 개요 == (overview) section. Articles without an overview section, or whose overview is too short, were excluded.
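The instruction construction described above can be sketched as (the helper name is illustrative, not part of the dataset):

```python
def build_instruction(title: str) -> str:
    """Build the Alpaca-style instruction from a Namuwiki article title:
    the title followed by the literal suffix '에 대해 설명해줘.'
    ("explain ... to me")."""
    return title + "에 대해 설명해줘."

print(build_instruction("나무위키"))  # → 나무위키에 대해 설명해줘.
```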
aditijha/processed_lima
2023-08-29T05:26:26.000Z
[ "region:us" ]
aditijha
null
null
null
2
55
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string splits: - name: train num_bytes: 2942583 num_examples: 1000 - name: test num_bytes: 80137 num_examples: 300 download_size: 31591 dataset_size: 3022720 --- # Dataset Card for "processed_lima" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
richardr1126/spider-skeleton-context-instruct
2023-07-18T17:55:47.000Z
[ "source_datasets:spider", "language:en", "license:cc-by-4.0", "text-to-sql", "SQL", "Spider", "fine-tune", "region:us" ]
richardr1126
null
null
null
2
55
--- language: - en license: - cc-by-4.0 source_datasets: - spider pretty_name: Spider Skeleton Context Instruct tags: - text-to-sql - SQL - Spider - fine-tune dataset_info: features: - name: db_id dtype: string - name: text dtype: string --- # Dataset Card for Spider Skeleton Context Instruct ### Dataset Summary Spider is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases. This dataset was created to finetune LLMs in a `### Instruction:` and `### Response:` format with database context. ### Yale Lily Spider Leaderboards The leaderboard can be seen at https://yale-lily.github.io/spider ### Languages The text in the dataset is in English. ### Licensing Information The Spider dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) ### Citation ``` @article{yu2018spider, title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task}, author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others}, journal={arXiv preprint arXiv:1809.08887}, year={2018} } ```
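A hedged sketch of the `### Instruction:`/`### Response:` layout mentioned above (the exact template and context wording used to build this dataset may differ; the `### Context:` section here is an assumption):

```python
def format_example(question: str, schema_context: str, sql: str) -> str:
    """Assemble one fine-tuning example in an instruction/response layout;
    the '### Context:' section wording is an assumption, not the
    dataset's confirmed template."""
    return (
        "### Instruction:\n"
        f"{question}\n\n"
        "### Context:\n"
        f"{schema_context}\n\n"
        "### Response:\n"
        f"{sql}"
    )

example = format_example(
    "How many singers do we have?",
    "CREATE TABLE singer (singer_id INT, name TEXT)",
    "SELECT count(*) FROM singer",
)
print(example)
```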
Linhz/qag_viquad
2023-08-24T16:33:51.000Z
[ "region:us" ]
Linhz
null
null
null
0
55
Entry not found
mtc/german_seahorse_dataset_with_articles
2023-08-24T14:01:36.000Z
[ "region:us" ]
mtc
null
null
null
0
55
--- configs: - config_name: default data_files: - split: test path: data/test-* - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: gem_id dtype: string - name: worker_lang dtype: string - name: summary dtype: string - name: model dtype: string - name: question1 dtype: string - name: question2 dtype: string - name: question3 dtype: string - name: question4 dtype: string - name: question5 dtype: string - name: question6 dtype: string - name: article dtype: string splits: - name: test num_bytes: 9444778 num_examples: 2685 - name: train num_bytes: 32022408 num_examples: 9180 - name: validation num_bytes: 4677669 num_examples: 1373 download_size: 14666995 dataset_size: 46144855 --- # Dataset Card for "german_seahorse_dataset_with_articles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlexWortega/secret_chats
2023-09-04T12:49:58.000Z
[ "region:us" ]
AlexWortega
null
null
null
0
55
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: input dtype: string - name: output dtype: string - name: reward dtype: float64 splits: - name: train num_bytes: 8645384214 num_examples: 4470687 download_size: 5157410846 dataset_size: 8645384214 --- # Dataset Card for "secret_chats" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dongyoung4091/hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot
2023-09-08T11:37:16.000Z
[ "region:us" ]
dongyoung4091
null
null
null
0
55
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: chosen dtype: string - name: rejected dtype: string - name: helpfulness_chosen dtype: int64 - name: helpfulness_rejected dtype: int64 - name: specificity_chosen dtype: int64 - name: specificity_rejected dtype: int64 - name: intent_chosen dtype: int64 - name: intent_rejected dtype: int64 - name: factuality_chosen dtype: int64 - name: factuality_rejected dtype: int64 - name: easy-to-understand_chosen dtype: int64 - name: easy-to-understand_rejected dtype: int64 - name: relevance_chosen dtype: int64 - name: relevance_rejected dtype: int64 - name: readability_chosen dtype: int64 - name: readability_rejected dtype: int64 - name: enough-detail_chosen dtype: int64 - name: enough-detail_rejected dtype: int64 - name: biased:_chosen dtype: int64 - name: biased:_rejected dtype: int64 - name: fail-to-consider-individual-preferences_chosen dtype: int64 - name: fail-to-consider-individual-preferences_rejected dtype: int64 - name: repetetive_chosen dtype: int64 - name: repetetive_rejected dtype: int64 - name: fail-to-consider-context_chosen dtype: int64 - name: fail-to-consider-context_rejected dtype: int64 - name: too-long_chosen dtype: int64 - name: too-long_rejected dtype: int64 - name: human dtype: string - name: assistant_chosen dtype: string - name: assistant_rejected dtype: string - name: log_score_chosen dtype: float64 - name: log_score_rejected dtype: float64 - name: labels dtype: string - name: zeroshot_helpfulness_chosen dtype: int64 - name: zeroshot_helpfulness_rejected dtype: int64 - name: zeroshot_specificity_chosen dtype: int64 - name: zeroshot_specificity_rejected dtype: int64 - name: zeroshot_intent_chosen dtype: int64 - name: zeroshot_intent_rejected dtype: int64 - name: zeroshot_factuality_chosen dtype: int64 - name: zeroshot_factuality_rejected dtype: int64 - name: zeroshot_easy-to-understand_chosen dtype: int64 - 
name: zeroshot_easy-to-understand_rejected dtype: int64 - name: zeroshot_relevance_chosen dtype: int64 - name: zeroshot_relevance_rejected dtype: int64 - name: zeroshot_readability_chosen dtype: int64 - name: zeroshot_readability_rejected dtype: int64 - name: zeroshot_enough-detail_chosen dtype: int64 - name: zeroshot_enough-detail_rejected dtype: int64 - name: zeroshot_biased:_chosen dtype: int64 - name: zeroshot_biased:_rejected dtype: int64 - name: zeroshot_fail-to-consider-individual-preferences_chosen dtype: int64 - name: zeroshot_fail-to-consider-individual-preferences_rejected dtype: int64 - name: zeroshot_repetetive_chosen dtype: int64 - name: zeroshot_repetetive_rejected dtype: int64 - name: zeroshot_fail-to-consider-context_chosen dtype: int64 - name: zeroshot_fail-to-consider-context_rejected dtype: int64 - name: zeroshot_too-long_chosen dtype: int64 - name: zeroshot_too-long_rejected dtype: int64 splits: - name: train num_bytes: 16425816 num_examples: 9574 - name: test num_bytes: 16369741 num_examples: 9574 download_size: 16115109 dataset_size: 32795557 --- # Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bibidentuhanoi/BMO_vicuna
2023-10-10T14:05:17.000Z
[ "region:us" ]
bibidentuhanoi
null
null
null
0
55
--- dataset_info: features: - name: conversations list: - name: from dtype: string - name: value dtype: string - name: id dtype: string splits: - name: train num_bytes: 134020 num_examples: 118 download_size: 68555 dataset_size: 134020 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "BMO_vicuna" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kye/all-lucidrain-python-3
2023-09-16T00:00:14.000Z
[ "license:mit", "region:us" ]
kye
null
null
null
3
55
--- license: mit ---
Wabbina/moore_dataset_transcription_v1.0
2023-09-25T16:46:41.000Z
[ "region:us" ]
Wabbina
null
null
null
0
55
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: valid path: data/valid-* dataset_info: features: - name: audio dtype: audio - name: language dtype: string - name: translation dtype: string - name: transcription dtype: string - name: is_recorded dtype: int64 - name: is_valid dtype: bool - name: dialect dtype: string - name: source dtype: string - name: duration dtype: float64 - name: cur_fs dtype: int64 - name: bitrate dtype: string - name: status dtype: int64 splits: - name: train num_bytes: 143803884.73787105 num_examples: 4735 - name: test num_bytes: 18804046.053001966 num_examples: 618 - name: valid num_bytes: 18096760.686362773 num_examples: 607 download_size: 159543788 dataset_size: 180704691.4772358 --- # Dataset Card for "moore_dataset_transcription_v1.0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sc3069/zx
2023-09-27T09:47:20.000Z
[ "region:us" ]
sc3069
null
null
null
0
55
--- dataset_info: features: - name: input dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 10329536 num_examples: 350 download_size: 1991265 dataset_size: 10329536 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "zx" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
prnv19/MathProb
2023-10-05T09:04:07.000Z
[ "license:apache-2.0", "region:us" ]
prnv19
null
null
null
0
55
--- license: apache-2.0 ---
vishnupriyavr/spotify-million-song-dataset
2023-10-02T12:17:02.000Z
[ "license:cc0-1.0", "region:us" ]
vishnupriyavr
null
null
null
0
55
--- license: - cc0-1.0 converted_from: kaggle kaggle_id: notshrirang/spotify-million-song-dataset --- # Dataset Card for Spotify Million Song Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/notshrirang/spotify-million-song-dataset - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is the Spotify Million Song Dataset. It contains song names, artist names, links to the songs, and lyrics. It can be used for recommending, classifying, or clustering songs. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@notshrirang](https://kaggle.com/notshrirang) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
FelixdoingAI/IP2P-adwm-200
2023-10-04T05:02:36.000Z
[ "region:us" ]
FelixdoingAI
null
null
null
0
55
--- dataset_info: features: - name: original_prompt dtype: string - name: original_image dtype: image - name: edit_prompt dtype: string - name: edited_prompt dtype: string - name: edited_image dtype: image - name: adversarial_image dtype: image splits: - name: train num_bytes: 117698490.0 num_examples: 200 download_size: 117696517 dataset_size: 117698490.0 --- # Dataset Card for "IP2P-adwm-200" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
llama2d/llama2d-unscramble
2023-10-07T02:15:20.000Z
[ "region:us" ]
llama2d
null
null
null
0
55
--- dataset_info: features: - name: input_ids sequence: float32 - name: coords sequence: sequence: float32 - name: labels sequence: float32 - name: attention_mask sequence: float32 splits: - name: train num_bytes: 30080000 num_examples: 5000 download_size: 0 dataset_size: 30080000 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "llama2d-unscramble" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
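The features above suggest token-aligned 2-D coordinates alongside the usual language-model tensors. As a rough consistency check over one record (a sketch; the assumptions that `labels`, `attention_mask` and `coords` are token-aligned with `input_ids`, and that each `coords` entry is an (x, y) pair, are not stated in the card):

```python
def validate_record(rec: dict) -> bool:
    """Consistency check for one record against the features listed above.

    Assumptions (not stated in the card): `labels`, `attention_mask` and
    `coords` have one entry per token of `input_ids`, and each entry of
    `coords` is a 2-D (x, y) pair.
    """
    n = len(rec["input_ids"])
    aligned = all(len(rec[key]) == n for key in ("labels", "attention_mask", "coords"))
    planar = all(len(point) == 2 for point in rec["coords"])
    return aligned and planar
```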
ajgt_twitter_ar
2023-01-25T14:26:05.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
null
The Arabic Jordanian General Tweets (AJGT) corpus consists of 1,800 tweets, in Modern Standard Arabic (MSA) or Jordanian dialect, annotated as positive or negative.
@inproceedings{alomari2017arabic, title={Arabic tweets sentimental analysis using machine learning}, author={Alomari, Khaled Mohammad and ElSherif, Hatem M and Shaalan, Khaled}, booktitle={International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems}, pages={602--610}, year={2017}, organization={Springer} }
null
2
54
--- annotations_creators: - found language_creators: - found language: - ar license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Arabic Jordanian General Tweets dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': Negative '1': Positive config_name: plain_text splits: - name: train num_bytes: 175424 num_examples: 1800 download_size: 107395 dataset_size: 175424 --- # Dataset Card for Arabic Jordanian General Tweets ## Table of Contents - [Dataset Card for Arabic Jordanian General Tweets](#dataset-card-for-arabic-jordanian-general-tweets) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [|split|num examples|](#splitnum-examples) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) 
- [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Arabic Jordanian General Tweets](https://github.com/komari6/Arabic-twitter-corpus-AJGT) - **Paper:** [Arabic Tweets Sentimental Analysis Using Machine Learning](https://link.springer.com/chapter/10.1007/978-3-319-60042-0_66) - **Point of Contact:** [Khaled Alomari](mailto:khaled.alomari@adu.ac.ae) ### Dataset Summary The Arabic Jordanian General Tweets (AJGT) corpus consists of 1,800 tweets, in Modern Standard Arabic (MSA) or Jordanian dialect, annotated as positive or negative. ### Supported Tasks and Leaderboards The dataset was introduced in this [paper](https://link.springer.com/chapter/10.1007/978-3-319-60042-0_66). ### Languages The dataset is in Arabic. ## Dataset Structure ### Data Instances A binary dataset with negative and positive sentiment labels. ### Data Fields - `text` (str): Tweet text. - `label` (int): Sentiment. ### Data Splits The dataset is not split. | | train | |----------|------:| | no split | 1,800 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization Contains 1,800 tweets collected from Twitter. #### Who are the source language producers? From Twitter. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{alomari2017arabic, title={Arabic tweets sentimental analysis using machine learning}, author={Alomari, Khaled Mohammad and ElSherif, Hatem M and Shaalan, Khaled}, booktitle={International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems}, pages={602--610}, year={2017}, organization={Springer} } ``` ### Contributions Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
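As a small usage sketch (not part of the original card), the integer labels declared in `dataset_info` (`0` = Negative, `1` = Positive) can be decoded and the class balance inspected:

```python
from collections import Counter

# Label names as declared in the card's dataset_info: 0 = Negative, 1 = Positive.
LABEL_NAMES = {0: "Negative", 1: "Positive"}

def class_balance(labels: list[int]) -> dict[str, float]:
    """Return the fraction of each sentiment class in a list of integer labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {LABEL_NAMES[k]: counts[k] / total for k in sorted(counts)}
```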
eli5_category
2022-11-18T20:00:33.000Z
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:open-domain-abstractive-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|eli5", "language:en", "license:unknown", "regio...
null
The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. After 2017, a tagging system was introduced to the r/explainlikeimfive subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation sets are built from questions in different topics, the dataset is expected to alleviate the train/validation overlap issue of the original ELI5 dataset.
@inproceedings{eli5-category, author = {Jingsong Gao and Qingren Zhou and Rui Qiu}, title = {{ELI5-Category:} A categorized open-domain QA dataset}, year = {2021} }
null
4
54
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual paperswithcode_id: null pretty_name: ELI5-Category size_categories: - 100K<n<1M source_datasets: - extended|eli5 task_categories: - text2text-generation task_ids: - abstractive-qa - open-domain-abstractive-qa dataset_info: features: - name: q_id dtype: string - name: title dtype: string - name: selftext dtype: string - name: category dtype: string - name: subreddit dtype: string - name: answers struct: - name: a_id sequence: string - name: text sequence: string - name: score sequence: int32 - name: text_urls sequence: sequence: string - name: title_urls sequence: string - name: selftext_urls sequence: string splits: - name: train num_bytes: 166409797 num_examples: 91772 - name: validation1 num_bytes: 13150585 num_examples: 5446 - name: validation2 num_bytes: 4737744 num_examples: 2375 - name: test num_bytes: 10419098 num_examples: 5411 download_size: 72921829 dataset_size: 194717224 --- # Dataset Card for ELI5-Category ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - 
[Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ELI5-Category homepage](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/) - **Repository:** [ELI5-Category repository](https://github.com/rexarski/ANLY580-final-project) - **Point of Contact:** [Jingsong Gao](mailto:jg2109@georgetown.edu) ### Dataset Summary The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation sets are built from questions in different topics, the dataset is expected to alleviate the train/validation overlap issue of the original [ELI5 dataset](https://huggingface.co/datasets/eli5). ### Supported Tasks and Leaderboards - `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. ### Languages The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances The structure of this dataset is very similar to the original [ELI5 dataset](https://huggingface.co/datasets/eli5). 
A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. In addition to the original ELI5 dataset, the data point also has a `category` field. There are 11 common values of `category` in this dataset: `Biology`, `Chemistry`, `Culture`, `Earth Science`, `Economics`, `Engineering`, `Mathematics`, `Other`, `Physics`, `Psychology`, `Technology`, and a special `category`, `Repost`, which indicates that the same question has been asked before. An example from the ELI5-Category set looks as follows: ``` {'q_id': '5lcm18', 'title': 'Why do old games running on new hardware still have technical issues ?', 'selftext': 'I am playing some mega man games on my Xbox One and experience slowdown when there are a lot of enemies on screen . but the Xbox One is significantly more powerful than the NES , so why is there still slowdown on this hardware ?', 'category': 'Engineering', 'subreddit': 'explainlikeimfive', 'answers': {'a_id': ['dbuo48e', 'dbusfve'], 'text': ["The XBox is emulating NES hardware and running the emulation at a set speed . If it ran it at as fast as possible , then it would be several times faster than the original NES game and would be unplayable . I ca n't speak for Mega Man exactly , but older games tended to run on a cycle locked to the screen refresh which was a fixed 60Hz or 50Hz . There was only one piece of hardware they ran on , so there was no need to adjust for different hardware speeds .", "In that case , it 's probably on purpose - they want to emulate the experience as closely as possible , even including the slowdown and sprite flickering . Some emulators let you turn it off , but it 's usually turned on by default . 
In other cases , like if you 're trying to emulate PS2 games on your PC , the game might just run really slow in general . Even though your PC is way more powerful than a PS2 , it has to \" translate \" from PS2 language to PC language in realtime , which is much more difficult than running PS2 code on the PS2 itself ."], 'score': [13, 3], 'text_urls': [[],[]]}, 'title_urls': {'url': []}, 'selftext_urls': {'url': []}} ``` ### Data Fields - `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps - `subreddit`: always `explainlikeimfive`, indicating which subreddit the question came from - `category`: tag of the question, the possible values are listed above. - `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens - `title_urls`: list of the extracted URLs, whose `n`th element is the URL that was replaced by the `URL_n` token - `selftext`: either an empty string or an elaboration of the question - `selftext_urls`: similar to `title_urls` but for `selftext` - `answers`: a list of answers, each answer has: - `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps. - `text`: the answer text with the URLs normalized - `score`: the number of upvotes minus the number of downvotes the answer had received when the dumps were created - `text_urls`: lists of the extracted URLs for every answer ### Data Splits In order to avoid having duplicate questions across sets, three non-overlapping subsets of `category` are used in the training, validation and test sets. Also, a special validation set contains all the questions in the `Repost` category. A valid retriever-generator model should have consistent performance on both validation sets. 
The final split sizes are as follows: | | Train | Valid | Valid2 |Test | | ----- | ------ | ----- | ---- | ---- | | `Biology` | 32769 | | | | | `Chemistry` | 6633 | | | | | `Culture` | | 5446 | | | | `Earth Science` | 677 | | | | | `Economics` | 5901 | | | | | `Engineering` | | | | 5411 | | `Mathematics` | 1912 | | | | | `Other` | 19312 | | | | | `Physics` | 10196 | | | | | `Psychology` | 338 | | | | | `Technology` | 14034 | | | | | `Repost` | | | 2375 | | | **Total** | 91772 | 5446 | 2375 | 5411 | ## Dataset Creation ### Curation Rationale ELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain. ### Source Data #### Initial Data Collection and Normalization The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/). In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021. #### Who are the source language producers? 
The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source. ### Annotations The dataset contains the `category` as an additional annotation for the topics of questions. #### Annotation process The dataset is auto-annotated by the tags of posts in the [Reddit forum](https://www.reddit.com/). #### Who are the annotators? The annotators are users/administrators of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source. ### Personal and Sensitive Information The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news. ## Considerations for Using the Data ### Social Impact of Dataset The dataset has a similar social impact to the original ELI5 dataset [Social Impact of Dataset](https://huggingface.co/datasets/eli5#social-impact-of-dataset). ### Discussion of Biases The dataset has similar considerations of biases to the original ELI5 dataset [Discussion of Biases](https://huggingface.co/datasets/eli5#discussion-of-biases). ### Other Known Limitations The dataset has similar limitations to the original ELI5 dataset [Other Known Limitations](https://huggingface.co/datasets/eli5#other-known-limitations). ## Additional Information ### Dataset Curators The dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of `ANLY 580`: NLP for Data Analytics at Georgetown University. ### Licensing Information The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear. 
### Citation Information ``` @inproceedings{eli5-category, author = {Jingsong Gao and Qingren Zhou and Rui Qiu}, title = {{ELI5-Category:} A categorized open-domain QA dataset}, year = {2021} } ``` ### Contributions Thanks to [@jingshenSN2](https://github.com/jingshenSN2), [@QinrenZhou](https://github.com/QinrenZhou), [@rexarski](https://github.com/rexarski) for adding this dataset.
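As a usage sketch for the URL normalization described in the Data Fields section above, the placeholder tokens can be inverted back into URLs. This is illustrative only: the card does not state the exact token format or indexing convention, so 0-based `URL_n` indexing into the corresponding `*_urls` list is an assumption here.

```python
import re

def restore_urls(text: str, urls: list[str]) -> str:
    """Replace URL_n placeholder tokens with the n-th entry of `urls`.

    Assumes 0-based indexing of the tokens into the extracted-URL list;
    tokens whose index is out of range are left untouched.
    """
    def substitute(match: re.Match) -> str:
        i = int(match.group(1))
        return urls[i] if i < len(urls) else match.group(0)

    return re.sub(r"URL_(\d+)", substitute, text)
```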
the_pile_openwebtext2
2023-02-24T13:58:16.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:text-scoring", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_c...
null
OpenWebText2 is part of the EleutherAI/The Pile dataset and is an enhanced version of the original OpenWebTextCorpus, covering all Reddit submissions from 2005 up until April 2020, with further months becoming available after the corresponding PushShift dump files are released.
@article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} }
null
9
54
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - mit multilinguality: - monolingual pretty_name: OpenWebText2 size_categories: - 10M<n<100M source_datasets: - original task_categories: - text-generation - fill-mask - text-classification task_ids: - language-modeling - masked-language-modeling - text-scoring dataset_info: features: - name: title dtype: string - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 68571017395 num_examples: 17103059 download_size: 29344276480 dataset_size: 68571017395 --- # Dataset Card for the_pile_openwebtext2 ## Table of Contents - [Dataset Card for the_pile_openwebtext2](#dataset-card-for-the_pile_openwebtext2) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [|split|num examples|](#splitnum-examples) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) 
- [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://openwebtext2.readthedocs.io/en/latest/ - **Repository:** [GitHub](https://github.com/EleutherAI/openwebtext2) - **Paper:** https://arxiv.org/abs/2101.00027 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary OpenWebText2 is part of the EleutherAI/The Pile dataset and is an enhanced version of the original OpenWebTextCorpus, covering all Reddit submissions from 2005 up until April 2020, with further months becoming available after the corresponding PushShift dump files are released. |download_size|27.3 GiB| |dataset_size|63.8 GiB| ### Supported Tasks and Leaderboards This dataset is used for Language Modeling. ### Languages This dataset is in English. ## Dataset Structure ### Data Instances ``` This example was too long and was cropped: {'title': Xiaomi Mi Note 10 Gearbest Coupon Promo Code [6+128GB] [France Warehouse], 'text': '27% off Xiaomi Mi Note 10 (CC9 Pro) 108MP Penta Camera Mobile Phone Global Version Online Smartphone – Black Gearbest Coupon Promo Code\n\nGearbest Coupon Price :$439.99\n\nRegular Price : $603.19 Your Save : $163.20 Coupon Limit: 100 times Warehouse: France Expires : September 30, 2020 Coupon Valid for...', 'reddit_scores': [6],} ``` ### Data Fields - `title`: title of the web page - `text`: text content of the web page - `reddit_scores`: scores of the reddit submissions that mention this web page, as a list of integers ### Data Splits |split|num examples| |---|---| |train|17103059| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ``` ### Contributions [researcher2](https://github.com/researcher2) Wrote much of this code, with inspiration and some straight copying of the scraping code found [here](https://github.com/yet-another-account/openwebtext/).<br/> [sdtblck](https://github.com/sdtblck/) kindly put together the Colab notebook, and performed a chunk of the scraping. <br/> [leogao2](https://github.com/leogao2/) provided overall design guidance, lm_dataformat, and performed another chunk of scraping. <br /> [Colaboratory](https://colab.research.google.com/) VMs helped with about 10% of our overall scraping. <br /> [The Eye](http://the-eye.eu/) host the processed datasets.<br /> [Read The Docs](https://readthedocs.org/) host our documentation.<br /> [@richarddwang](https://github.com/richarddwang) added this dataset to HF/datasets.
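Since `reddit_scores` holds one score per Reddit submission that linked to the page, a simple quality filter can threshold on the best score. A minimal sketch (the helper name and threshold are illustrative, not part of the original pipeline):

```python
def filter_by_score(examples, min_score: int = 5):
    """Yield examples whose best Reddit submission score meets a threshold.

    Each example is a dict with a `reddit_scores` list, one integer per
    Reddit submission that linked to the page (see Data Fields above).
    """
    for example in examples:
        if example["reddit_scores"] and max(example["reddit_scores"]) >= min_score:
            yield example
```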
Lacito/pangloss
2022-09-06T18:02:34.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "source_datasets:original", "language:jya", "language:nru", "license:cc-by-nc-sa-4.0", "region:us" ]
Lacito
These datasets are extracts from the Pangloss collection and have been preprocessed for ASR experiments in Na and Japhug.
null
null
2
54
--- pretty_name: Pangloss annotations_creators: - expert-generated language_creators: - expert-generated language: - jya - nru language_bcp47: - x-japh1234 - x-yong1288 language_details: jya consists of japh1234 (Glottolog code); nru consists of yong1288 (Glottolog code) license: cc-by-nc-sa-4.0 multilinguality: - multilingual - translation size_categories: yong1288: - 10K<n<100K japh1234: - 10K<n<100K source_datasets: - original task_categories: - automatic-speech-recognition task_ids: - speech-recognition --- # Dataset Card for Pangloss ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Web interface of the Pangloss Collection, which hosts the data sets](https://pangloss.cnrs.fr/) - **Repository:** [GitHub repository of the Pangloss Collection, which hosts the data sets](https://github.com/CNRS-LACITO/Pangloss/) - **Paper:** [A paper about the Pangloss Collection, including a presentation of the Document Type Definition](https://halshs.archives-ouvertes.fr/halshs-01003734) [A paper in French about the
deposit in Zenodo](https://halshs.archives-ouvertes.fr/halshs-03475436) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Benjamin Galliot](mailto:b.g01lyon@gmail.com) ### Dataset Summary Two audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data. The Document Type Definition for the XML files is available here: http://cocoon.huma-num.fr/schemas/Archive.dtd ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Japhug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. Some of the documents have translations into French, English, and Chinese. ## Dataset Structure ### Data Instances A typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus). 
` { "path": "cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav", "audio": "{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}", "sentence": "ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩", "doctype": "WORDLIST", "translation:zh": "狐狸的耳朵", "translation:fr": "oreilles de renard", "translation:en": "fox's ears", } ` ### Data Fields - `path`: the path to the audio file; - `audio`: a dictionary containing the path to the audio file, the audio array and the sampling rate; - `sentence`: the sentence pronounced by the native speaker; - `doctype`: the document type (a text or a word list); - `translation:XX`: the translation of the sentence into language XX. ### Data Splits The train, test and validation splits have all been reviewed and were split randomly (ratio 8:1:1) at sentence level (after extraction from the various files). ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset The dataset was collected in immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. 
The impacts concern research, and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possibilities for uses exist, for the scientific and speaker communities and for the general public. ### Discussion of Biases The corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'. ### Other Known Limitations The translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...). ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
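The random 8:1:1 sentence-level split described under "Data Splits" can be sketched as follows. This is a minimal illustration using only the Python standard library; the helper name and seed are illustrative, not those used to produce the released splits.

```python
import random

def split_8_1_1(records, seed=0):
    """Shuffle sentence-level records and split them into
    train/validation/test with an 8:1:1 ratio."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = n // 10
    n_val = n // 10
    test = shuffled[:n_test]
    validation = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, validation, test

# With 100 dummy records, the split sizes come out as 80/10/10.
train, validation, test = split_8_1_1(range(100))
print(len(train), len(validation), len(test))  # 80 10 10
```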
pysentimiento/spanish-targeted-sentiment-headlines
2022-06-17T21:28:01.000Z
[ "region:us" ]
pysentimiento
null
null
null
1
54
Entry not found
knkarthick/xsum
2022-12-07T08:30:19.000Z
[ "task_categories:summarization", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license...
knkarthick
null
null
null
2
54
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-nc-nd-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization - text2text-generation - text-generation task_ids: [] paperswithcode_id: xsum pretty_name: XSum Corpus tags: - conversations-summarization --- # Dataset Card for XSum Corpus ## Dataset Description ### Links - **Homepage:** https://arxiv.org/abs/1808.08745 - **Repository:** https://github.com/EdinburghNLP/XSum - **Paper:** https://arxiv.org/abs/1808.08745 - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary This repository contains data and code for the EMNLP 2018 paper "[Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)". ### Languages English ## Dataset Structure ### Data Instances The XSum dataset is made of 226,711 BBC news articles, each paired with a one-sentence summary, split into train, test and validation sets. 
The first instance in the training set: {'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their 
homes and the impact on businesses."\nHe said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on selkirk.news@bbc.co.uk or dumfries@bbc.co.uk.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.', 'id': '35232142'} ### Data Fields - dialogue: the full text of the news article. - summary: a one-line, human-written summary of the article. - id: unique file id of an example. ### Data Splits - train: 204045 - val: 11332 - test: 11334 ## Dataset Creation ### Curation Rationale ### Who are the source language producers? linguists ### Who are the annotators? language experts ### Annotation process ## Licensing Information Non-commercial licence: CC BY-NC-ND 4.0 (see the `license` field in the metadata above). ## Citation Information ``` @InProceedings{xsum-emnlp, author = "Shashi Narayan and Shay B. Cohen and Mirella Lapata", title = "Don't Give Me the Details, Just the Summary! {T}opic-Aware Convolutional Neural Networks for Extreme Summarization", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", year = "2018", address = "Brussels, Belgium", } ``` ## Contributions Thanks to [@Edinburgh NLP](https://github.com/EdinburghNLP) for adding this dataset.
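As a quick arithmetic check (not part of the original card), the split sizes listed above sum to the 226,711 total quoted under "Data Instances" and correspond to roughly a 90/5/5 split:

```python
# Split sizes quoted in the card.
splits = {"train": 204045, "val": 11332, "test": 11334}
total = sum(splits.values())
assert total == 226711  # matches the total under "Data Instances"

# Approximate proportions of each split.
ratios = {name: round(n / total, 3) for name, n in splits.items()}
print(ratios)  # {'train': 0.9, 'val': 0.05, 'test': 0.05}
```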
bigscience/xP3all
2023-05-30T15:51:40.000Z
[ "task_categories:other", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100M<n<1B", "language:ak", "language:ar", "language:as", "language:bm", "language:bn", "language:ca", "language:code", "language:en", "lan...
bigscience
xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
@misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
17
54
--- annotations_creators: - expert-generated - crowdsourced language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript license: - apache-2.0 multilinguality: - multilingual pretty_name: xP3 size_categories: - 100M<n<1B task_categories: - other --- # Dataset Card for xP3 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/bigscience-workshop/xmtf - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co) ### Dataset Summary > xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. - **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). 
We provide this version to save processing time and ease reproducibility. - **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3)) - **xP3 Dataset Family:** <table> <tr> <th>Name</th> <th>Explanation</th> <th>Example models</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t> <td>Mixture of 17 tasks in 277 languages with English prompts</td> <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t> <td>Mixture of 13 training tasks in 46 languages with English prompts</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t> <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t> <td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> <td></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t> <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t> <td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a 
href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> </tr> </table> ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?", "targets": "Yes" } ``` ### Data Fields The data fields are the same among all splits: - `inputs`: the natural language input fed to the model - `targets`: the natural language target that the model has to generate ### Data Splits The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. |Language|Kilobytes|%|Samples|%| |--------|------:|-:|---:|-:| |tw|106288|0.11|265071|0.33| |bm|107056|0.11|265180|0.33| |ak|108096|0.11|265071|0.33| |ca|110608|0.11|271191|0.33| |eu|113008|0.11|281199|0.35| |fon|113072|0.11|265063|0.33| |st|114080|0.11|265063|0.33| |ki|115040|0.12|265180|0.33| |tum|116032|0.12|265063|0.33| |wo|122560|0.12|365063|0.45| |ln|126304|0.13|365060|0.45| |as|156256|0.16|265063|0.33| |or|161472|0.16|265063|0.33| |kn|165456|0.17|265063|0.33| |ml|175040|0.18|265864|0.33| |rn|192992|0.19|318189|0.39| |nso|229712|0.23|915051|1.13| |tn|235536|0.24|915054|1.13| |lg|235936|0.24|915021|1.13| |rw|249360|0.25|915043|1.13| |ts|250256|0.25|915044|1.13| |sn|252496|0.25|865056|1.07| |xh|254672|0.26|915058|1.13| |zu|263712|0.26|915061|1.13| |ny|272128|0.27|915063|1.13| |ig|325232|0.33|950097|1.17| |yo|352784|0.35|918416|1.13| |ne|393680|0.39|315754|0.39| |pa|523248|0.52|339210|0.42| |gu|560688|0.56|347499|0.43| |sw|566656|0.57|1130481|1.4| |mr|666240|0.67|417269|0.52| |bn|832720|0.83|428843|0.53| |ta|926912|0.93|415433|0.51| |te|1343232|1.35|584590|0.72| 
|ur|1918272|1.92|855756|1.06| |vi|3102512|3.11|1672106|2.07| |code|4330752|4.34|2707724|3.34| |hi|4403568|4.41|1554667|1.92| |zh|4599440|4.61|3589234|4.43| |id|4612256|4.62|2643418|3.27| |ar|4683456|4.69|2160181|2.67| |fr|6591120|6.6|5316403|6.57| |pt|6886800|6.9|3752156|4.63| |es|8587920|8.6|5413205|6.69| |en|39252528|39.33|32740750|40.44| |total|99807184|100.0|80956089|100.0| ## Dataset Creation ### Source Data #### Training datasets - Code Miscellaneous - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex) - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus) - [GreatCode](https://huggingface.co/datasets/great_code) - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes) - Closed-book QA - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa) - [Trivia QA](https://huggingface.co/datasets/trivia_qa) - [Web Questions](https://huggingface.co/datasets/web_questions) - [Wiki QA](https://huggingface.co/datasets/wiki_qa) - Extractive QA - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa) - [CMRC2018](https://huggingface.co/datasets/cmrc2018) - [DRCD](https://huggingface.co/datasets/clue) - [DuoRC](https://huggingface.co/datasets/duorc) - [MLQA](https://huggingface.co/datasets/mlqa) - [Quoref](https://huggingface.co/datasets/quoref) - [ReCoRD](https://huggingface.co/datasets/super_glue) - [ROPES](https://huggingface.co/datasets/ropes) - [SQuAD v2](https://huggingface.co/datasets/squad_v2) - [xQuAD](https://huggingface.co/datasets/xquad) - TyDI QA - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary) - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp) - Multiple-Choice QA - [ARC](https://huggingface.co/datasets/ai2_arc) - [C3](https://huggingface.co/datasets/c3) - [CoS-E](https://huggingface.co/datasets/cos_e) - [Cosmos](https://huggingface.co/datasets/cosmos) - [DREAM](https://huggingface.co/datasets/dream) - 
[MultiRC](https://huggingface.co/datasets/super_glue) - [OpenBookQA](https://huggingface.co/datasets/openbookqa) - [PiQA](https://huggingface.co/datasets/piqa) - [QUAIL](https://huggingface.co/datasets/quail) - [QuaRel](https://huggingface.co/datasets/quarel) - [QuaRTz](https://huggingface.co/datasets/quartz) - [QASC](https://huggingface.co/datasets/qasc) - [RACE](https://huggingface.co/datasets/race) - [SciQ](https://huggingface.co/datasets/sciq) - [Social IQA](https://huggingface.co/datasets/social_i_qa) - [Wiki Hop](https://huggingface.co/datasets/wiki_hop) - [WiQA](https://huggingface.co/datasets/wiqa) - Paraphrase Identification - [MRPC](https://huggingface.co/datasets/super_glue) - [PAWS](https://huggingface.co/datasets/paws) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [QQP](https://huggingface.co/datasets/qqp) - Program Synthesis - [APPS](https://huggingface.co/datasets/codeparrot/apps) - [CodeContests](https://huggingface.co/datasets/teven/code_contests) - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp) - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search) - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) - Structure-to-text - [Common Gen](https://huggingface.co/datasets/common_gen) - [Wiki Bio](https://huggingface.co/datasets/wiki_bio) - Sentiment - [Amazon](https://huggingface.co/datasets/amazon_polarity) - [App Reviews](https://huggingface.co/datasets/app_reviews) - [IMDB](https://huggingface.co/datasets/imdb) - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes) - [Yelp](https://huggingface.co/datasets/yelp_review_full) - Simplification - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT) - Summarization - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) - [Gigaword](https://huggingface.co/datasets/gigaword) - [MultiNews](https://huggingface.co/datasets/multi_news) - 
[SamSum](https://huggingface.co/datasets/samsum) - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua) - [XLSum](https://huggingface.co/datasets/GEM/xlsum) - [XSum](https://huggingface.co/datasets/xsum) - Topic Classification - [AG News](https://huggingface.co/datasets/ag_news) - [DBPedia](https://huggingface.co/datasets/dbpedia_14) - [TNEWS](https://huggingface.co/datasets/clue) - [TREC](https://huggingface.co/datasets/trec) - [CSL](https://huggingface.co/datasets/clue) - Translation - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200) - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) - Word Sense disambiguation - [WiC](https://huggingface.co/datasets/super_glue) - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic) #### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval) - Natural Language Inference - [ANLI](https://huggingface.co/datasets/anli) - [CB](https://huggingface.co/datasets/super_glue) - [RTE](https://huggingface.co/datasets/super_glue) - [XNLI](https://huggingface.co/datasets/xnli) - Coreference Resolution - [Winogrande](https://huggingface.co/datasets/winogrande) - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd) - Program Synthesis - [HumanEval](https://huggingface.co/datasets/openai_humaneval) - Sentence Completion - [COPA](https://huggingface.co/datasets/super_glue) - [Story Cloze](https://huggingface.co/datasets/story_cloze) - [XCOPA](https://huggingface.co/datasets/xcopa) - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze) #### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets - Coreference Resolution - [WSC (Fixed)](https://huggingface.co/datasets/super_glue) - Sentence Completion - [HellaSwag](https://huggingface.co/datasets/hellaswag) - Translation - [MultiEurlex](https://huggingface.co/datasets/multi_eurlex) ## Additional Information ### Licensing Information 
The dataset is released under Apache 2.0. ### Citation Information ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
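To illustrate the note above about byte versus sample percentages, here is a small back-of-the-envelope computation (figures taken from the per-language table under "Data Splits"; `tw` samples are mostly single-sentence translations, so they are much smaller on average than English samples):

```python
# (kilobytes, samples) for two languages from the "Data Splits" table.
table = {
    "tw": (106288, 265071),
    "en": (39252528, 32740750),
}
kb_per_sample = {lang: kb / n for lang, (kb, n) in table.items()}

# An average English sample is roughly three times the size of a `tw` sample,
# which is why `tw` has a much lower byte share than sample share.
ratio = kb_per_sample["en"] / kb_per_sample["tw"]
print(round(kb_per_sample["tw"], 2), round(kb_per_sample["en"], 2), round(ratio, 1))
```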
jonathanli/echr
2022-08-21T23:29:28.000Z
[ "license:cc-by-nc-sa-4.0", "arxiv:1906.02059", "region:us" ]
jonathanli
The ECHR Cases dataset is designed for experimentation of neural judgment prediction, as in the original 2019 ACL paper "Neural Legal Judgment Prediction in English".
@inproceedings{chalkidis-etal-2019-neural, title = "Neural Legal Judgment Prediction in {E}nglish", author = "Chalkidis, Ilias and Androutsopoulos, Ion and Aletras, Nikolaos", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1424", doi = "10.18653/v1/P19-1424", pages = "4317--4323", }
null
0
54
--- license: cc-by-nc-sa-4.0 --- # ECHR Cases The original data from [Chalkidis et al.](https://arxiv.org/abs/1906.02059), sourced from [archive.org](https://archive.org/details/ECHR-ACL2019). ## Preprocessing * Order is shuffled * Fact numbers preceding each fact are removed (using the Python regex `^[0-9]+\. `), as some cases didn't have fact numbers to begin with * Everything else is the same
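The fact-number removal step can be sketched like this (a hedged illustration; `strip_fact_number` is a hypothetical helper built around the regex quoted above, not the released preprocessing code):

```python
import re

# Regex quoted in the card: an integer followed by ". " at the start of a fact.
FACT_NUMBER = re.compile(r"^[0-9]+\. ")

def strip_fact_number(fact: str) -> str:
    # Remove at most one leading fact number; facts without one pass through.
    return FACT_NUMBER.sub("", fact, count=1)

print(strip_fact_number("12. The applicant was born in 1964."))
# -> "The applicant was born in 1964."
print(strip_fact_number("No leading number here."))
# -> "No leading number here."
```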
eytanc/FestAbilityTranscripts
2022-10-19T06:25:24.000Z
[ "license:cc-by-nc-sa-4.0", "region:us" ]
eytanc
null
null
null
0
54
--- license: cc-by-nc-sa-4.0 ---
dream-textures/textures-color-1k
2023-01-13T17:54:04.000Z
[ "task_categories:text-to-image", "size_categories:1K<n<10K", "language:en", "license:cc0-1.0", "region:us" ]
dream-textures
null
null
null
3
54
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 60933571.47 num_examples: 1426 download_size: 58351352 dataset_size: 60933571.47 license: cc0-1.0 task_categories: - text-to-image language: - en size_categories: - 1K<n<10K --- # textures-color-1k ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The `textures-color-1k` dataset is an image dataset of 1000+ color image textures in 512x512 resolution with associated text descriptions. The dataset was created for training/fine-tuning diffusion models on texture generation tasks. It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/). ### Languages The text descriptions are in English, and created by joining the tags of each material with a space character. ## Dataset Structure ### Data Instances Each data point contains a 512x512 image and an additional `text` feature containing the description of the texture. 
### Data Fields * `image`: the color texture as a PIL image * `text`: the associated text description created by merging the material's tags ### Data Splits | | train | | -- | ----- | | ambientCG | 1426 | ## Dataset Creation ### Curation Rationale `textures-color-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows. The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender. By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated. ### Source Data #### Initial Data Collection and Normalization The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the color maps were included in this dataset. Text descriptions were synthesized by joining the tags associated with each material with a space. ## Additional Information ### Dataset Curators The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/). ### Licensing Information All of the images used in this dataset are CC0. ### Citation Information [N/A] ### Contributions Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset.
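The tag-joining step described under "Source Data" amounts to the following (the tag list is illustrative, not taken from an actual material):

```python
def tags_to_text(tags):
    # Join a material's tags with a single space to form its `text` description.
    return " ".join(tags)

print(tags_to_text(["bricks", "red", "wall", "rough"]))
# -> "bricks red wall rough"
```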
nomic-ai/gpt4all_prompt_generations
2023-04-13T21:42:15.000Z
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
nomic-ai
null
null
null
120
54
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: source dtype: string splits: - name: train num_bytes: 782175193 num_examples: 437604 download_size: 397878357 dataset_size: 782175193 license: apache-2.0 task_categories: - text-generation language: - en size_categories: - 100K<n<1M --- # Dataset Card for GPT4All Prompt Generations ## Dataset Description Dataset used to train [GPT4All](https://huggingface.co/nomic-ai/gpt4all-lora) - **Homepage:** - **Repository:** [gpt4all](https://github.com/nomic-ai/gpt4all) - **Paper:** [Technical Report](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf) - **Atlas Map:** [Map of Cleaned Data](https://atlas.nomic.ai/map/gpt4all_data_clean)
Francesco/axial-mri
2023-03-30T09:39:28.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
null
0
54
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': axial-MRI '1': negative '2': positive annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: axial-mri tags: - rf100 --- # Dataset Card for axial-mri **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/axial-mri - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary axial-mri ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/axial-mri ### Citation Information ``` @misc{ axial-mri, title = { axial mri Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/axial-mri } }, url = { https://universe.roboflow.com/object-detection/axial-mri }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
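The `bbox` field follows the COCO convention `[x_min, y_min, width, height]`. A small helper (illustrative, not part of the dataset tooling) converts such a box into corner coordinates, using the first box from the sample instance above:

```python
def coco_to_corners(bbox):
    # COCO boxes are [x_min, y_min, width, height]; return [x_min, y_min, x_max, y_max].
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))
# -> [302.0, 109.0, 375.0, 161.0]
```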
distil-whisper/voxpopuli
2023-09-25T10:30:13.000Z
[ "task_categories:automatic-speech-recognition", "language:en", "license:cc0-1.0", "region:us" ]
distil-whisper
A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
@inproceedings{wang-etal-2021-voxpopuli, title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation", author = "Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.80", doi = "10.18653/v1/2021.acl-long.80", pages = "993--1003", }
null
0
54
--- license: cc0-1.0 task_categories: - automatic-speech-recognition language: - en pretty_name: VoxPopuli --- # Distil Whisper: VoxPopuli This is a variant of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset, augmented to return the pseudo-labelled Whisper Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2) model with *greedy* sampling. For information on how the original dataset was curated, refer to the original [dataset card](https://huggingface.co/datasets/facebook/voxpopuli). ## Standalone Usage First, install the latest version of the 🤗 Datasets package: ```bash pip install --upgrade pip pip install --upgrade datasets[audio] ``` The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset) function: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/voxpopuli", "en") # take the first sample of the validation set sample = dataset["validation"][0] ``` It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet). Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk: ```python from datasets import load_dataset dataset = load_dataset("distil-whisper/voxpopuli", "en", streaming=True) # take the first sample of the validation set sample = next(iter(dataset["validation"])) ``` ## Distil Whisper Usage To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the [Distil Whisper repository](https://github.com/huggingface/distil-whisper#training). ## License This dataset is licensed under cc0-1.0.
Hyeon2/riffusion-musiccaps-dataset
2023-07-15T15:43:17.000Z
[ "task_categories:text-to-image", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "music", "region:us" ]
Hyeon2
null
null
null
2
54
--- language: en license: cc-by-4.0 size_categories: - 10K<n<100K task_categories: - text-to-image pretty_name: riffusion manipulated google/musiccap viewer: true tags: - music dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 2521001438.24 num_examples: 20588 download_size: 2509138106 dataset_size: 2521001438.24 --- A Riffusion-manipulated version of the [google/MusicCaps](https://huggingface.co/datasets/google/MusicCaps) dataset.
DAMO-NLP-MT/multialpaca
2023-07-14T01:43:07.000Z
[ "license:apache-2.0", "region:us" ]
DAMO-NLP-MT
null
null
null
8
54
--- license: apache-2.0 ---
izumi-lab/wikinews-ja-20230728
2023-07-29T03:06:48.000Z
[ "language:ja", "license:cc-by-2.5", "region:us" ]
izumi-lab
null
null
null
3
54
--- dataset_info: features: - name: text dtype: string - name: title dtype: string - name: url dtype: string splits: - name: train num_bytes: 7998861 num_examples: 4283 download_size: 4086208 dataset_size: 7998861 license: cc-by-2.5 language: - ja --- # Dataset Card for "wikinews-ja-20230728" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
squarelike/ko_medical_chat
2023-08-19T06:45:48.000Z
[ "language:ko", "medical", "region:us" ]
squarelike
null
null
null
1
54
--- language: - ko tags: - medical --- [https://github.com/jwj7140/ko-medical-chat](https://github.com/jwj7140/ko-medical-chat) A Korean medical conversation dataset created by converting [MedText](https://huggingface.co/datasets/BI55/MedText) and [ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor).
argilla/emotion
2023-08-23T06:37:14.000Z
[ "size_categories:10K<n<100K", "rlfh", "argilla", "human-feedback", "region:us" ]
argilla
null
null
null
0
54
--- size_categories: 10K<n<100K tags: - rlfh - argilla - human-feedback --- # Dataset Card for emotion This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Dataset Description - **Homepage:** https://argilla.io - **Repository:** https://github.com/argilla-io/argilla - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla. * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.FeedbackDataset.from_huggingface("argilla/emotion") ``` ### Load with `datasets` To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("argilla/emotion") ``` ### Supported Tasks and Leaderboards This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. 
The dataset structure is described in the [Dataset Structure section](#dataset-structure). There are no leaderboards associated with this dataset. ### Languages [More Information Needed] ## Dataset Structure ### Data in Argilla The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**. The **fields** are the dataset records themselves; for the moment only text fields are supported. These are the ones that will be used to provide responses to the questions. | Field Name | Title | Type | Required | Markdown | | ---------- | ----- | ---- | -------- | -------- | | text | Text | TextField | True | False | The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | | label | Label | LabelQuestion | True | N/A | ['0', '1', '2', '3', '4', '5'] | **✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to those, containing the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above. Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section. 
### Data Instances An example of a dataset instance in Argilla looks as follows: ```json { "fields": { "text": "i didnt feel humiliated" }, "metadata": { "split": "train" }, "responses": [ { "status": "submitted", "values": { "label": { "value": "0" } } } ], "suggestions": [] } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "external_id": null, "label": [ { "status": "submitted", "user_id": null, "value": "0" } ], "label-suggestion": null, "label-suggestion-metadata": { "agent": null, "score": null, "type": null }, "metadata": "{\"split\": \"train\"}", "text": "i didnt feel humiliated" } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves; for the moment only text fields are supported. These are the ones that will be used to provide responses to the questions. * **text** is of type `TextField`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * **label** is of type `LabelQuestion` with the following allowed values ['0', '1', '2', '3', '4', '5']. * **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. * (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4', '5']. Additionally, we also have one more field which is optional and is the following: * **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines Argilla port of [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion). #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
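The Argilla record format shown in the Data Instances section nests the annotated label inside a list of responses; a small helper sketch for pulling out the first submitted value (the helper name is my own, and the record literal mirrors the example above):

```python
from typing import Optional


def submitted_label(record: dict) -> Optional[str]:
    """Return the value of the first submitted 'label' response, if any."""
    for response in record.get("responses", []):
        if response.get("status") == "submitted":
            return response["values"]["label"]["value"]
    return None


# record shaped like the Argilla example in Data Instances
record = {
    "fields": {"text": "i didnt feel humiliated"},
    "metadata": {"split": "train"},
    "responses": [{"status": "submitted", "values": {"label": {"value": "0"}}}],
    "suggestions": [],
}
print(submitted_label(record))  # 0
```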
manu/dila_legifrance
2023-08-31T21:22:47.000Z
[ "region:us" ]
manu
null
null
null
0
54
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 4399589467 num_examples: 2349748 download_size: 1326748165 dataset_size: 4399589467 --- # Dataset Card for "dila_legifrance" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0
2023-09-08T03:05:45.000Z
[ "region:us" ]
yzhuang
null
null
null
0
54
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 2364400000 num_examples: 100000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 873878506 dataset_size: 2600840000 --- # Dataset Card for "autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0
2023-09-08T04:55:50.000Z
[ "region:us" ]
yzhuang
null
null
null
0
54
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 2364400000 num_examples: 100000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 667958200 dataset_size: 2600840000 --- # Dataset Card for "autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DemonMaike/game
2023-09-10T20:07:01.000Z
[ "region:us" ]
DemonMaike
null
null
null
0
54
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 4259288.1572769955 num_examples: 766 - name: test num_bytes: 478196.8427230047 num_examples: 86 download_size: 2140477 dataset_size: 4737485.0 --- # Dataset Card for "game" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shnl/qg_vicoqa
2023-09-13T04:14:23.000Z
[ "region:us" ]
shnl
null
null
null
0
54
Entry not found
mychen76/ds_receipts_v2_eval
2023-09-20T21:38:26.000Z
[ "region:us" ]
mychen76
null
null
null
0
54
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 1469341.0 num_examples: 19 download_size: 1462479 dataset_size: 1469341.0 --- # Dataset Card for "ds_receipts_v2_eval" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/instagram_model_ocean_grunge_prompts
2023-09-21T06:29:44.000Z
[ "region:us" ]
Falah
null
null
null
0
54
--- dataset_info: features: - name: prompts dtype: string splits: - name: train num_bytes: 65721 num_examples: 1000 download_size: 1451 dataset_size: 65721 --- # Dataset Card for "instagram_model_ocean_grunge_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hasangoni/Electron_microscopy_dataset
2023-09-25T07:57:56.000Z
[ "task_categories:image-segmentation", "size_categories:10K<n<100K", "language:en", "microscopy", "EPFL", "image segmentation", "region:us" ]
hasangoni
null
null
null
0
54
--- task_categories: - image-segmentation language: - en tags: - microscopy - EPFL - image segmentation pretty_name: electron microscopy patch image size_categories: - 10K<n<100K --- The dataset: - Is a patch from the existing dataset available at https://www.epfl.ch/labs/cvlab/data/data-em/. - Contains patches of size (256, 256). - Removes any patches with empty masks to ensure quality. - Has the same license applied as the original dataset. - Please refer to the license for information on allowed usage. - If you have any questions or concerns about the dataset, please do not hesitate to contact me.
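The patch-extraction step described above can be sketched with NumPy; this is an illustration of the stated curation (non-overlapping 256×256 patches, patches with empty masks dropped), not the original extraction script:

```python
import numpy as np


def patchify(image: np.ndarray, mask: np.ndarray, size: int = 256):
    """Split an image/mask pair into non-overlapping size x size patches,
    keeping only patches whose mask is non-empty."""
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            m = mask[y:y + size, x:x + size]
            if m.any():  # discard patches with an empty mask
                patches.append((image[y:y + size, x:x + size], m))
    return patches


# toy example: a 512x512 slice where only the top-left quadrant is annotated
img = np.zeros((512, 512), dtype=np.uint8)
msk = np.zeros((512, 512), dtype=np.uint8)
msk[:256, :256] = 1
print(len(patchify(img, msk)))  # 1
```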
PurCL/marinda-type-inference-debuginfo-only-O1-shuffle
2023-09-28T05:10:33.000Z
[ "region:us" ]
PurCL
null
null
null
0
54
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: metadata struct: - name: binary_name dtype: string - name: function_addr dtype: int64 - name: function_name dtype: string - name: project_name dtype: string - name: code_w_type dtype: string - name: code dtype: string - name: data_dep dtype: string splits: - name: train num_bytes: 201535867.70075417 num_examples: 37113 - name: test num_bytes: 22394684.299245823 num_examples: 4124 download_size: 52386440 dataset_size: 223930552.0 --- # Dataset Card for "marinda-type-inference-debuginfo-only-O1-shuffle" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tianduo/rag_oasst
2023-09-28T18:21:58.000Z
[ "region:us" ]
Tianduo
null
null
null
0
54
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 35555913 num_examples: 12947 download_size: 20853725 dataset_size: 35555913 --- # Dataset Card for "rag_oasst" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
diegomiranda/small-dataset-img-test
2023-10-04T20:32:06.000Z
[ "arxiv:2308.16900", "region:us" ]
diegomiranda
null
@article{bender2023learning, title={Learning to Taste: A Multimodal Wine Dataset}, author={Bender, Thoranna and S{\o}rensen, Simon M{\o}e and Kashani, Alireza and Hjorleifsson, K Eldjarn and Hyldig, Grethe and Hauberg, S{\o}ren and Belongie, Serge and Warburg, Frederik}, journal={arXiv preprint arXiv:2308.16900}, year={2023} }
null
0
54
# Dataset Card for WineSensed ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [WineSensed Dataset](https://thoranna.github.io/learning_to_taste/) - **Repository:** - **Paper:** [Paper](https://arxiv.org/pdf/2308.16900.pdf) ### Dataset Summary The dataset encompasses 897k images of wine labels and 824k reviews of wines curated from the Vivino platform. It has over 350k unique vintages, annotated with year, region, rating, alcohol percentage, price, and grape composition. We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment with 256 participants who were asked to rank wines based on their similarity in flavor, resulting in more than 5k pairwise flavor distances. ### Languages English ## Dataset Structure ### Data Fields The dataset contains the file metadata.zip, consisting of the files participants.csv, which contains information connecting participants to annotations in the experiment, images_reviews_attributes.csv, which contains reviews, links to images, and wine attributes, and napping.csv, which contains the coordinates of each wine on the napping paper alongside information connecting each coordinate pair to the wine being annotated and the participant who annotated it. The chunk_<chunk num>.zip folders contain the images of the wines in the dataset in .jpg format. 
#### napping.csv contains the following fields: - session_round_name: session number during the event_name, at most three sessions per event (maps to experiment_round in participants.csv) - event_name: name of the data collection event (maps to the same attribute in participants.csv) - experiment_no: which number the napping paper was in the list of papers returned for this session_round_name (maps to experiment_no in participants.csv) - experiment_id: id the wine being annotated was given in the experiment - coor1: x-axis coordinate on the napping paper - coor2: y-axis coordinate on the napping paper - color: color of the sticker used #### participants.csv contains the following fields: - session_round_name: session number during the event_name, at most three sessions per event (maps to experiment_round in napping.csv) - event_name: name of data-collection event (maps to event_name in napping.csv) - experiment_no: which number the napping paper was in the list of papers returned for this session_round_name (maps to experiment_no in napping.csv) - round_id: round number (from 1-3) - participant_id: id the participant was given in the experiment #### images_reviews_attributes.csv contains the following fields: - vintage_id: vintage id of the wine - image: image link (each .jpg in chunk_<chunk num>.zip can be mapped to a corresponding image link in this column by removing the /p prefix from the link). 
- review: user review of the wine - experiment_id: id the wine got during data collection (each experiment_id can be mapped to the same column in napping.csv) - year: year the wine was produced - winery_id: id of the winery that produced the wine - wine: name of the wine - alcohol: the wine's alcohol percentage - country: the country where the wine was produced - region: the region where the wine was produced - price: price of the wine in USD (collected 05/2023) - rating: average rating of the wine (collected 05/2023) - grape: the wine's grape composition, represented as a comma-separated list ordered in descending sequence of the percentage contribution of each grape variety to the overall blend. ## Dataset Creation ### All Images Dataset 1) Unzip all the chunk_*.zip files 2) Copy the script create_all_images_dataset.sh to the output_images/ directory 3) Execute chmod +x create_all_images_dataset.sh 4) Execute ./create_all_images_dataset.sh ## Additional Information ### Licensing Information LICENSE AGREEMENT ================= - WineSensed by Thoranna Bender, Simon Møe Sørensen, Alireza Kashani, Kristjan Eldjarn, Grethe Hyldig, Søren Hauberg, Serge Belongie, Frederik Warburg is licensed under a CC BY-NC-ND 4.0 Licence ### Citation Information ``` @article{bender2023learning, title={Learning to Taste: A Multimodal Wine Dataset}, author={Bender, Thoranna and S{\o}rensen, Simon M{\o}e and Kashani, Alireza and Hjorleifsson, K Eldjarn and Hyldig, Grethe and Hauberg, S{\o}ren and Belongie, Serge and Warburg, Frederik}, journal={arXiv preprint arXiv:2308.16900}, year={2023} } ```
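As a usage sketch, the metadata files can be joined on their shared key with pandas. The toy rows below mirror the field descriptions above (the values are invented; only the column names come from the card), and the `/p`-prefix stripping follows the note under images_reviews_attributes.csv:

```python
import pandas as pd

# toy stand-ins for napping.csv and images_reviews_attributes.csv
napping = pd.DataFrame({
    "experiment_id": [1, 2],
    "coor1": [10.5, 42.0],
    "coor2": [3.2, 17.8],
})
attributes = pd.DataFrame({
    "experiment_id": [1, 2],
    "wine": ["Wine A", "Wine B"],
    "image": ["/p/images/abc.jpg", "/p/images/def.jpg"],
})

# experiment_id links each napping coordinate pair to a wine's attributes
merged = napping.merge(attributes, on="experiment_id", how="left")

# each .jpg filename corresponds to its image link with the /p prefix removed
merged["image_path"] = [link.removeprefix("/p") for link in merged["image"]]
print(merged[["experiment_id", "wine", "image_path"]])
```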
AlignmentLab-AI/caption_creation_0.4
2023-10-02T15:52:43.000Z
[ "region:us" ]
AlignmentLab-AI
null
null
null
0
54
Entry not found
JasiekKaczmarczyk/maestro-v1-sustain-masked
2023-10-02T10:34:44.000Z
[ "region:us" ]
JasiekKaczmarczyk
null
null
null
0
54
--- dataset_info: features: - name: midi_filename dtype: string - name: source dtype: string - name: pitch sequence: int16 length: 128 - name: dstart sequence: float32 length: 128 - name: duration sequence: float32 length: 128 - name: velocity sequence: int16 length: 128 - name: masking_spaces struct: - name: <Random Mask> sequence: bool length: 128 - name: <LH Mask> sequence: bool length: 128 - name: <RH Mask> sequence: bool length: 128 - name: <Harmonic Root Mask> sequence: bool length: 128 - name: <Harmonic Outliers Mask> sequence: bool length: 128 splits: - name: train num_bytes: 86282539 num_examples: 43738 - name: validation num_bytes: 9735862 num_examples: 4931 - name: test num_bytes: 11249478 num_examples: 5695 download_size: 40330447 dataset_size: 107267879 --- # Dataset Card for "maestro-v1-sustain-masked" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ContextualAI/lambada
2023-10-06T23:56:48.000Z
[ "region:us" ]
ContextualAI
null
null
null
0
54
--- dataset_info: features: - name: query dtype: string - name: gold_generation dtype: string splits: - name: dev num_bytes: 1698004 num_examples: 4869 - name: test num_bytes: 1786025 num_examples: 5153 download_size: 2261199 dataset_size: 3484029 --- # Dataset Card for "lambada" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
opus_dgt
2023-06-01T14:59:53.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de",...
null
A collection of translation memories provided by the JRC. Source: https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory 25 languages, 299 bitexts total number of files: 817,410 total number of tokens: 2.13G total number of sentence fragments: 113.52M
@InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} }
null
1
53
--- annotations_creators: - found language_creators: - found language: - bg - cs - da - de - el - en - es - et - fi - fr - ga - hr - hu - it - lt - lv - mt - nl - pl - pt - ro - sh - sk - sl - sv license: - unknown multilinguality: - multilingual size_categories: - 100K<n<1M - 10K<n<100K - 1M<n<10M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusDgt dataset_info: - config_name: bg-ga features: - name: id dtype: string - name: translation dtype: translation: languages: - bg - ga splits: - name: train num_bytes: 82972428 num_examples: 179142 download_size: 15935979 dataset_size: 82972428 - config_name: bg-hr features: - name: id dtype: string - name: translation dtype: translation: languages: - bg - hr splits: - name: train num_bytes: 239828651 num_examples: 701572 download_size: 46804111 dataset_size: 239828651 - config_name: bg-sh features: - name: id dtype: string - name: translation dtype: translation: languages: - bg - sh splits: - name: train num_bytes: 498884905 num_examples: 1488507 download_size: 97402723 dataset_size: 498884905 - config_name: fi-ga features: - name: id dtype: string - name: translation dtype: translation: languages: - fi - ga splits: - name: train num_bytes: 61313136 num_examples: 178619 download_size: 14385114 dataset_size: 61313136 - config_name: es-ga features: - name: id dtype: string - name: translation dtype: translation: languages: - es - ga splits: - name: train num_bytes: 63115666 num_examples: 178696 download_size: 14447359 dataset_size: 63115666 - config_name: ga-sh features: - name: id dtype: string - name: translation dtype: translation: languages: - ga - sh splits: - name: train num_bytes: 28666585 num_examples: 91613 download_size: 6963357 dataset_size: 28666585 - config_name: hr-sk features: - name: id dtype: string - name: translation dtype: translation: languages: - hr - sk splits: - name: train num_bytes: 170718371 num_examples: 689263 download_size: 
42579941 dataset_size: 170718371 - config_name: mt-sh features: - name: id dtype: string - name: translation dtype: translation: languages: - mt - sh splits: - name: train num_bytes: 368562443 num_examples: 1450424 download_size: 88598048 dataset_size: 368562443 - config_name: hr-sv features: - name: id dtype: string - name: translation dtype: translation: languages: - hr - sv splits: - name: train num_bytes: 171858392 num_examples: 696334 download_size: 41410203 dataset_size: 171858392 - config_name: ga-nl features: - name: id dtype: string - name: translation dtype: translation: languages: - ga - nl splits: - name: train num_bytes: 59065574 num_examples: 170644 download_size: 13730934 dataset_size: 59065574 config_names: - bg-ga - bg-hr - bg-sh - es-ga - fi-ga - ga-nl - ga-sh - hr-sk - hr-sv - mt-sh --- # Dataset Card for OpusDgt ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/DGT.php - **Repository:** None - **Paper:** 
http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary A collection of translation memories provided by the Joint Research Centre (JRC) Directorate-General for Translation (DGT): https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory The dataset contains 25 languages and 299 bitexts. To load a language pair which isn't part of the config, all you need to do is specify the two language codes as `lang1` and `lang2`, e.g. ```python dataset = load_dataset("opus_dgt", lang1="it", lang2="pl") ``` You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/DGT.php ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The languages in the dataset are: - bg - cs - da - de - el - en - es - et - fi - fr - ga - hr - hu - it - lt - lv - mt - nl - pl - pt - ro - sh - sk - sl - sv ## Dataset Structure ### Data Instances ``` { 'id': '0', 'translation': { "bg": "Протокол за поправка на Конвенцията относно компетентността, признаването и изпълнението на съдебни решения по граждански и търговски дела, подписана в Лугано на 30 октомври 2007 г.", "ga": "Miontuairisc cheartaitheach maidir le Coinbhinsiún ar dhlínse agus ar aithint agus ar fhorghníomhú breithiúnas in ábhair shibhialta agus tráchtála, a siníodh in Lugano an 30 Deireadh Fómhair 2007" } } ``` ### Data Fields - `id` (`str`): Unique identifier of the parallel sentence for the pair of languages. - `translation` (`dict`): Parallel sentences for the pair of languages. ### Data Splits The dataset contains a single `train` split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } ``` ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
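Each record keeps both sides of the bitext under a single `translation` dict, so a sentence pair can be pulled out with a one-line accessor (the record literal below abbreviates the bg–ga instance shown in Data Instances):

```python
def to_pair(example: dict, src: str, tgt: str):
    """Extract a (source, target) sentence pair from an OPUS-style record."""
    translation = example["translation"]
    return translation[src], translation[tgt]


# abbreviated copy of the bg-ga record from the Data Instances section
example = {
    "id": "0",
    "translation": {
        "bg": "Протокол за поправка на Конвенцията ...",
        "ga": "Miontuairisc cheartaitheach maidir le Coinbhinsiún ...",
    },
}
src_sentence, tgt_sentence = to_pair(example, "bg", "ga")
print(src_sentence[:8])  # Протокол
```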
wmt17
2023-04-05T13:43:57.000Z
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:10M<n<100M", "source_datasets:extended|europarl_bilingual", "source_datasets:extended|news_commentary", "source_datasets:extended|setimes", "source_datasets...
null
null
@InProceedings{bojar-EtAl:2017:WMT1, author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco}, title = {Findings of the 2017 Conference on Machine Translation (WMT17)}, booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers}, month = {September}, year = {2017}, address = {Copenhagen, Denmark}, publisher = {Association for Computational Linguistics}, pages = {169--214}, url = {http://www.aclweb.org/anthology/W17-4717} }
null
1
53
--- annotations_creators: - no-annotation language_creators: - found language: - cs - de - en - fi - lv - ru - tr - zh license: - unknown multilinguality: - translation size_categories: - 10M<n<100M source_datasets: - extended|europarl_bilingual - extended|news_commentary - extended|setimes - extended|un_multi task_categories: - translation task_ids: [] pretty_name: WMT17 paperswithcode_id: null dataset_info: - config_name: cs-en features: - name: translation dtype: translation: languages: - cs - en splits: - name: train num_bytes: 300698431 num_examples: 1018291 - name: validation num_bytes: 707870 num_examples: 2999 - name: test num_bytes: 674430 num_examples: 3005 download_size: 1784240523 dataset_size: 302080731 - config_name: de-en features: - name: translation dtype: translation: languages: - de - en splits: - name: train num_bytes: 1715537443 num_examples: 5906184 - name: validation num_bytes: 735516 num_examples: 2999 - name: test num_bytes: 729519 num_examples: 3004 download_size: 1945382236 dataset_size: 1717002478 - config_name: fi-en features: - name: translation dtype: translation: languages: - fi - en splits: - name: train num_bytes: 743856525 num_examples: 2656542 - name: validation num_bytes: 1410515 num_examples: 6000 - name: test num_bytes: 1388828 num_examples: 6004 download_size: 434531933 dataset_size: 746655868 - config_name: lv-en features: - name: translation dtype: translation: languages: - lv - en splits: - name: train num_bytes: 517419100 num_examples: 3567528 - name: validation num_bytes: 544604 num_examples: 2003 - name: test num_bytes: 530474 num_examples: 2001 download_size: 169634544 dataset_size: 518494178 - config_name: ru-en features: - name: translation dtype: translation: languages: - ru - en splits: - name: train num_bytes: 11000075522 num_examples: 24782720 - name: validation num_bytes: 1050677 num_examples: 2998 - name: test num_bytes: 1040195 num_examples: 3001 download_size: 3582640660 dataset_size: 11002166394 - 
config_name: tr-en features: - name: translation dtype: translation: languages: - tr - en splits: - name: train num_bytes: 60416617 num_examples: 205756 - name: validation num_bytes: 732436 num_examples: 3000 - name: test num_bytes: 752773 num_examples: 3007 download_size: 62263061 dataset_size: 61901826 - config_name: zh-en features: - name: translation dtype: translation: languages: - zh - en splits: - name: train num_bytes: 5529286149 num_examples: 25134743 - name: validation num_bytes: 589591 num_examples: 2002 - name: test num_bytes: 540347 num_examples: 2001 download_size: 2314906945 dataset_size: 5530416087 --- # Dataset Card for "wmt17" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://www.statmt.org/wmt17/translation-task.html](http://www.statmt.org/wmt17/translation-task.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.78 GB - **Size of the generated dataset:** 302.09 MB - **Total amount of disk used:** 2.09 GB ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Warning:</b> There are issues with the Common Crawl corpus data (<a href="https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz">training-parallel-commoncrawl.tgz</a>):</p> <ul> <li>Non-English files contain many English sentences.</li> <li>Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.</li> </ul> <p>We have contacted the WMT organizers.</p> </div> Translation dataset based on the data from statmt.org. Versions exist for different years using a combination of data sources. The base `wmt` allows you to create a custom dataset by choosing your own data/language pair. 
This can be done as follows: ```python from datasets import inspect_dataset, load_dataset_builder inspect_dataset("wmt17", "path/to/scripts") builder = load_dataset_builder( "path/to/scripts/wmt_utils.py", language_pair=("fr", "de"), subsets={ datasets.Split.TRAIN: ["commoncrawl_frde"], datasets.Split.VALIDATION: ["euelections_dev2019"], }, ) # Standard version builder.download_and_prepare() ds = builder.as_dataset() # Streamable version ds = builder.as_streaming_dataset() ``` ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### cs-en - **Size of downloaded dataset files:** 1.78 GB - **Size of the generated dataset:** 302.09 MB - **Total amount of disk used:** 2.09 GB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### cs-en - `translation`: a multilingual `string` variable, with possible languages including `cs`, `en`. ### Data Splits |name | train |validation|test| |-----|------:|---------:|---:| |cs-en|1018291| 2999|3005| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{bojar-EtAl:2017:WMT1, author = {Bojar, Ond {r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco}, title = {Findings of the 2017 Conference on Machine Translation (WMT17)}, booktitle = {Proceedings of the Second 
Conference on Machine Translation, Volume 2: Shared Task Papers}, month = {September}, year = {2017}, address = {Copenhagen, Denmark}, publisher = {Association for Computational Linguistics}, pages = {169--214}, url = {http://www.aclweb.org/anthology/W17-4717} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
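The `translation` feature described in the card above stores each example as a dict keyed by language code. A minimal sketch of turning such records into (source, target) pairs; the sample record is hand-built for illustration so it runs without a download (loading the real data would be `load_dataset("wmt17", "cs-en")`):

```python
# Sketch of working with wmt17-style records: every example holds its
# sentence pair in a `translation` dict keyed by language code.
# The sample below stands in for a real dataset row.

def to_pairs(examples, src_lang, tgt_lang):
    """Turn a list of wmt17-style examples into (source, target) tuples."""
    return [
        (ex["translation"][src_lang], ex["translation"][tgt_lang])
        for ex in examples
    ]

sample = [{"translation": {"cs": "Ahoj svete", "en": "Hello world"}}]
pairs = to_pairs(sample, "cs", "en")
print(pairs)  # [('Ahoj svete', 'Hello world')]
```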
CAiRE/ASCEND
2022-10-24T12:43:58.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:zh", "license:cc-by-sa-4.0", "speech-recognition", "code-s...
CAiRE
ASCEND (A Spontaneous Chinese-English Dataset) introduces a high-quality corpus of spontaneous, multi-turn, conversational Chinese-English code-switched speech collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into three sets (training, validation, and test) with a ratio of 8:1:1, while maintaining a balanced gender proportion in each set.
@inproceedings{lovenia2021ascend, title = {ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation}, author = {Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others}, booktitle = {Proceedings of the International Conference on Language Resources and Evaluation, {LREC} 2022, 20-25 June 2022, Marseille, Palais du Pharo, France}, publisher = {European Language Resources Association}, year = {2022}, pages = {} }
null
8
53
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en - zh license: - cc-by-sa-4.0 multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - automatic-speech-recognition task_ids: [] pretty_name: 'ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation' tags: - speech-recognition - code-switching --- # Dataset Card for ASCEND ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Usage](#usage) - [Dataset Structure](#dataset-structure) - [Data Splits](#data-instances) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/abs/2112.06223 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary ASCEND (A Spontaneous Chinese-English Dataset) introduces a high-quality resource of spontaneous multi-turn conversational dialogue Chinese-English code-switching corpus collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into 3 sets: training, validation, and test with a ratio of 8:1:1 while maintaining a balanced gender proportion on each set. ### Supported Tasks and Leaderboards Code-switching ### Languages Chinese and English ## Usage To obtain the full dataset (complete with train, validation, and test set), simply run this: ``` import datasets dataset = datasets.load_dataset("CAiRE/ASCEND") ``` ## Dataset Structure A typical data point comprises the path to the audio file, the loaded audio array, and its transcription. 
Additional fields include datapoint id, duration, language, speaker id, session id, and topic. ``` { 'id': '00644', 'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses2_spk3_L13101_189.900_5.490.wav', 'audio': { 'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses2_spk3_L13101_189.900_5.490.wav', 'array': array([-6.1035156e-05, -1.8310547e-04, 3.0517578e-05, ..., 0.0000000e+00, -3.0517578e-05, 0.0000000e+00 ], dtype = float32), 'sampling_rate': 16000 }, 'transcription': '因为你不可能邀你的female friends去说走我们去play basketball', 'duration': 5.489999771118164, 'language': 'mixed', 'original_speaker_id': 3, 'session_id': 2, 'topic': 'sports' } ``` ### Data Splits Number of utterances: 9,869 train, 1,130 validation, and 1,315 test. ## Additional Information For comprehensive explanations, please check [our paper](https://arxiv.org/pdf/2112.06223.pdf). ### Licensing Information Creative Commons Attribution Share-Alike 4.0 International (CC-BY-SA 4.0) ### Citation Information If you use our dataset, please cite us: ``` @inproceedings{lovenia2022ascend, title={ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation}, author={Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others}, booktitle={Proceedings of the 13th Language Resources and Evaluation Conference (LREC)}, year={2022} } ```
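The example record above carries a `language` field with the value `mixed` for code-switched utterances. A rough, purely illustrative heuristic (not part of the ASCEND toolchain, whose labels are human-annotated) classifies an utterance by the scripts it contains:

```python
# Illustrative heuristic only: classify an utterance as 'zh', 'en', or
# 'mixed' by checking for CJK and Latin characters, mirroring the
# dataset's `language` field. ASCEND's real labels are human-made.

def guess_language(transcription: str) -> str:
    has_cjk = any("\u4e00" <= ch <= "\u9fff" for ch in transcription)
    has_latin = any(ch.isascii() and ch.isalpha() for ch in transcription)
    if has_cjk and has_latin:
        return "mixed"
    return "zh" if has_cjk else "en"

print(guess_language("因为你不可能邀你的female friends去说走我们去play basketball"))  # mixed
```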
PaddlePaddle/dureader_robust
2022-03-10T05:14:18.000Z
[ "license:apache-2.0", "region:us" ]
PaddlePaddle
DuReader Robust is a Chinese reading comprehension dataset designed to evaluate MRC models from three aspects: over-sensitivity, over-stability, and generalization.
null
null
1
53
--- license: apache-2.0 ---
sepidmnorozy/English_sentiment
2022-08-16T08:58:35.000Z
[ "region:us" ]
sepidmnorozy
null
null
null
0
53
Entry not found
sepidmnorozy/Japanese_sentiment
2022-08-16T09:24:31.000Z
[ "region:us" ]
sepidmnorozy
null
null
null
1
53
Entry not found
mrmoor/cyber-threat-intelligence
2022-10-23T09:12:59.000Z
[ "license:unknown", "region:us" ]
mrmoor
null
null
null
3
53
--- license: unknown ---
bigbio/bionlp_st_2011_id
2022-12-22T15:43:52.000Z
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
bigbio
The dataset of the Infectious Diseases (ID) task of BioNLP Shared Task 2011.
@inproceedings{pyysalo-etal-2011-overview, title = "Overview of the Infectious Diseases ({ID}) task of {B}io{NLP} Shared Task 2011", author = "Pyysalo, Sampo and Ohta, Tomoko and Rak, Rafal and Sullivan, Dan and Mao, Chunhong and Wang, Chunxia and Sobral, Bruno and Tsujii, Jun{'}ichi and Ananiadou, Sophia", booktitle = "Proceedings of {B}io{NLP} Shared Task 2011 Workshop", month = jun, year = "2011", address = "Portland, Oregon, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W11-1804", pages = "26--35", }
null
0
53
--- language: - en bigbio_language: - English license: other multilinguality: monolingual bigbio_license_shortname: GENIA_PROJECT_LICENSE pretty_name: BioNLP 2011 ID homepage: https://github.com/openbiocorpora/bionlp-st-2011-id bigbio_pubmed: True bigbio_public: True bigbio_tasks: - EVENT_EXTRACTION - COREFERENCE_RESOLUTION - NAMED_ENTITY_RECOGNITION --- # Dataset Card for BioNLP 2011 ID ## Dataset Description - **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-id - **Pubmed:** True - **Public:** True - **Tasks:** EE,COREF,NER The dataset of the Infectious Diseases (ID) task of BioNLP Shared Task 2011. ## Citation Information ``` @inproceedings{pyysalo-etal-2011-overview, title = "Overview of the Infectious Diseases ({ID}) task of {B}io{NLP} Shared Task 2011", author = "Pyysalo, Sampo and Ohta, Tomoko and Rak, Rafal and Sullivan, Dan and Mao, Chunhong and Wang, Chunxia and Sobral, Bruno and Tsujii, Jun{'}ichi and Ananiadou, Sophia", booktitle = "Proceedings of {B}io{NLP} Shared Task 2011 Workshop", month = jun, year = "2011", address = "Portland, Oregon, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W11-1804", pages = "26--35", } ```
ibm/MedMentions-ZS
2022-11-25T16:49:58.000Z
[ "region:us" ]
ibm
null
null
null
0
53
Entry not found
tushar117/xalign
2023-01-01T20:39:30.000Z
[ "task_categories:table-to-text", "task_ids:rdf-to-text", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:as", "language:bn", "language:gu", "language:hi", "language:kn", "langua...
tushar117
It is an extensive, high-quality cross-lingual fact-to-text dataset of person biographies, where the facts are in English and the corresponding sentences are in the native language. The train and validation splits are created using distant supervision, and the test data is generated through human annotation.
@article{abhishek2022xalign, title={XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages}, author={Abhishek, Tushar and Sagare, Shivprasad and Singh, Bhavyajeet and Sharma, Anubhav and Gupta, Manish and Varma, Vasudeva}, journal={arXiv preprint arXiv:2202.00291}, year={2022} }
null
1
53
--- annotations_creators: - found configs: - release_v1 language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te - en language_creators: - crowdsourced license: - cc-by-nc-sa-4.0 - mit multilinguality: - multilingual paperswithcode_id: xalign pretty_name: 'XAlign' size_categories: - 100K<n<1M source_datasets: - original tags: - xalign - NLG - low-resource - LRL task_categories: - table-to-text task_ids: - rdf-to-text --- # Dataset Card for XAlign ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Known Limitations](#known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [XAlign homepage](https://github.com/tushar117/XAlign) - **Repository:** [XAlign repo](https://github.com/tushar117/XAlign) - **Paper:** [XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages](https://arxiv.org/abs/2202.00291) - **Leaderboard:** [Papers With Code Leaderboard for XAlign](https://paperswithcode.com/sota/data-to-text-generation-on-xalign) - **Point of Contact:** [Tushar Abhishek](tushar.abhishek@research.iiit.ac.in) ### Dataset Summary It consists of 
an extensive collection of a high quality cross-lingual fact-to-text dataset where facts are in English and corresponding sentences are in native language for person biographies. The Train & validation splits are created using distant supervision methods and Test data is generated through human annotations. ### Supported Tasks and Leaderboards - 'Data-to-text Generation': XAlign dataset can be used to train cross-lingual data-to-text generation models. The model performance can measured through any text generation evaluation metrics by taking average across all the languages. [Sagare et al. (2022)](https://arxiv.org/abs/2209.11252) reported average BLEU score of 29.27 and average METEOR score of 53.64 over the test set. - 'Relation Extraction': XAlign could also be used for cross-lingual relation extraction where relations in English can be extracted from associated native sentence. See [Papers With Code Leaderboard](https://paperswithcode.com/sota/data-to-text-generation-on-xalign) for more models. ### Languages Assamese (as), Bengali (bn), Gujarati (gu), Hindi (hi), Kannada (kn), Malayalam (ml), Marathi (mr), Oriya (or), Punjabi (pa), Tamil (ta), Telugu (te), and English (en). ## Dataset Structure ### Data Fields Each record consist of the following entries: - sentence (string) : Native language wikipedia sentence. (non-native language strings were removed.) - `facts` (List[Dict]) : List of facts associated with the sentence where each fact is stored as dictionary. - language (string) : Language identifier. The `facts` key contains list of facts where each facts is stored as dictionary. A single record within fact list contains following entries: - subject (string) : central entity. - object (string) : entity or a piece of information about the subject. - predicate (string) : relationship that connects the subject and the object. 
- qualifiers (List[Dict]) : It provide additional information about the fact, is stored as list of qualifier where each record is a dictionary. The dictionary contains two keys: qualifier_predicate to represent property of qualifer and qualifier_object to store value for the qualifier's predicate. ### Data Instances Example from English ``` { "sentence": "Mark Paul Briers (born 21 April 1968) is a former English cricketer.", "facts": [ { "subject": "Mark Briers", "predicate": "date of birth", "object": "21 April 1968", "qualifiers": [] }, { "subject": "Mark Briers", "predicate": "occupation", "object": "cricketer", "qualifiers": [] }, { "subject": "Mark Briers", "predicate": "country of citizenship", "object": "United Kingdom", "qualifiers": [] } ], "language": "en" } ``` Example from one of the low-resource languages (i.e. Hindi) ``` { "sentence": "बोरिस पास्तेरनाक १९५८ में साहित्य के क्षेत्र में नोबेल पुरस्कार विजेता रहे हैं।", "facts": [ { "subject": "Boris Pasternak", "predicate": "nominated for", "object": "Nobel Prize in Literature", "qualifiers": [ { "qualifier_predicate": "point in time", "qualifier_subject": "1958" } ] } ], "language": "hi" } ``` ### Data Splits The XAlign dataset has 3 splits: train, validation, and test. Below are the statistics the dataset. | Dataset splits | Number of Instances in Split | | --- | --- | | Train | 499155 | | Validation | 55469 | | Test | 7425 | ## Dataset Creation ### Curation Rationale Most of the existing Data-to-Text datasets are available in English. Also, the structured Wikidata entries for person entities in low resource languages are minuscule in number compared to that in English. Thus, monolingual Data-to-Text for low resource languages suffers from data sparsity. XAlign dataset would be useful in creation of cross-lingual Data-to-Text generation systems that take a set of English facts as input and generates a sentence capturing the fact-semantics in the specified language. 
### Source Data #### Initial Data Collection and Normalization The dataset creation process starts with an intial list of ~95K person entities selected from Wikidata and each of which has a link to a corresponding Wikipedia page in at least one of our 11 low resource languages. This leads to a dataset where every instance is a tuple containing entityID, English Wikidata facts, language identifier, Wikipedia URL for the entityID. The facts (in English) are extracted from the 20201221 WikiData dump for each entity using the [WikiData](https://query.wikidata.org) APIs. The facts are gathered only for the speficied Wikidata property (or relation) types that captures most useful factual information for person entities: WikibaseItem, Time, Quantity, and Monolingualtext.This leads to overall ~0.55M data instances across all the 12 languages. Also, for each language, the sentences (along with section information) are extracted from 20210520 Wikipedia XML dump using the pre-processing steps as described [here](https://arxiv.org/abs/2202.00291). For every (entity, language) pair, the pre-processed dataset contains a set of English Wikidata facts and a set of Wikipedia sentences in that language. In order to create train and validation dataset, these are later passed through a two-stage automatic aligner as proposed in [abhishek et al. (2022)](https://arxiv.org/abs/2202.00291) to associate a sentence with a subset of facts. #### Who are the source language producers? The text are extracted from Wikipedia and facts are retrieved from Wikidata. ### Annotations #### Annotation process The Manual annotation of Test dataset was done in two phases. For both the phases, the annotators were presented with (low resource language sentence, list of English facts). They were asked to mark facts present in the given sentence. There were also specific guidelines to ignore redundant facts, handle abbreviations, etc. 
More detailed annotation guidelines and ethical statement are mentioned [here](https://docs.google.com/document/d/1ucGlf-Jm1ywQ_Fjw9f2UqPeMWPlBnlZA46UY7KuZ0EE/edit) . In the first phase, we got 60 instances labeled per language by a set of 8 expert annotators (trusted graduate students who understood the task very well). In phase 2, we selected 8 annotators per language from the [National Register of Translators](https://www.ntm.org.in/languages/english/nrtdb.aspx}). We tested these annotators using phase 1 data as golden control set, and shortlisted up to 4 annotators per language who scored highest (on Kappa score with golden annotations). #### Who are the annotators? Human annotators were selected appropriately (after screening) from [National Translation Mission](https://www.ntm.org.in) for Test set creation. ### Personal and Sensitive Information The dataset does not involve collection or storage of any personally identifiable information or offensive information at any stage. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of the this dataset is to help develop cross-lingual Data-to-Text generation systems that are vital in many downstream Natural Language Processing (NLP) applications like automated dialog systems, domain-specific chatbots, open domain question answering, authoring sports reports, etc. These systems will be useful for powering business applications like Wikipedia text generation given English Infoboxes, automated generation of non-English product descriptions using English product attributes, etc. ### Known Limitations The XAlign dataset focus only on person biographies and system developed on this dataset might not be generalized to other domains. ## Additional Information ### Dataset Curators This dataset is collected by Tushar Abhishek, Shivprasad Sagare, Bhavyajeet Singh, Anubhav Sharma, Manish Gupta and Vasudeva Varma of Information Retrieval and Extraction Lab (IREL), Hyderabad, India. 
They released [scripts](https://github.com/tushar117/xalign) to collect and process the data into the Data-to-Text format. ### Licensing Information The XAlign dataset is released under the [MIT License](https://github.com/tushar117/XAlign/blob/main/LICENSE). ### Citation Information ``` @article{abhishek2022xalign, title={XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages}, author={Abhishek, Tushar and Sagare, Shivprasad and Singh, Bhavyajeet and Sharma, Anubhav and Gupta, Manish and Varma, Vasudeva}, journal={arXiv preprint arXiv:2202.00291}, year={2022} } ``` ### Contributions Thanks to [Tushar Abhishek](https://github.com/tushar117), [Shivprasad Sagare](https://github.com/ShivprasadSagare), [Bhavyajeet Singh](https://github.com/bhavyajeet), [Anubhav Sharma](https://github.com/anubhav-sharma13), [Manish Gupta](https://github.com/blitzprecision) and [Vasudeva Varma](vv@iiit.ac.in) for adding this dataset. Additional thanks to the annotators from National Translation Mission for their crucial contributions to creation of the test dataset: Bhaswati Bhattacharya, Aditi Sarkar, Raghunandan B. S., Satish M., Rashmi G.Rao, Vidyarashmi PN, Neelima Bhide, Anand Bapat, Krishna Rao N V, Nagalakshmi DV, Aditya Bhardwaj Vuppula, Nirupama Patel, Asir. T, Sneha Gupta, Dinesh Kumar, Jasmin Gilani, Vivek R, Sivaprasad S, Pranoy J, Ashutosh Bharadwaj, Balaji Venkateshwar, Vinkesh Bansal, Vaishnavi Udyavara, Ramandeep Singh, Khushi Goyal, Yashasvi LN Pasumarthy and Naren Akash.
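For cross-lingual fact-to-text generation, the fact lists shown in the card above are typically linearized into a single input string for a sequence-to-sequence model. A minimal sketch using the card's own example record; the separator tokens `<S>`, `<P>`, `<O>` are assumptions for illustration, not necessarily those used in the XAlign paper:

```python
# Sketch: flatten an XAlign fact list into one seq2seq input string.
# Separator tokens are illustrative assumptions, not the paper's format.

def linearize_facts(facts):
    parts = []
    for fact in facts:
        parts.append(f"<S> {fact['subject']} <P> {fact['predicate']} <O> {fact['object']}")
    return " ".join(parts)

facts = [
    {"subject": "Mark Briers", "predicate": "date of birth",
     "object": "21 April 1968", "qualifiers": []},
]
print(linearize_facts(facts))
# <S> Mark Briers <P> date of birth <O> 21 April 1968
```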
celikmus/symptom_text_to_disease_01
2023-03-10T10:09:08.000Z
[ "license:apache-2.0", "region:us" ]
celikmus
null
null
null
1
53
--- license: apache-2.0 dataset_info: features: - name: text dtype: string - name: labels dtype: class_label: names: '0': emotional pain '1': hair falling out '2': heart hurts '3': infected wound '4': foot ache '5': shoulder pain '6': injury from sports '7': skin issue '8': stomach ache '9': knee pain '10': joint pain '11': hard to breath '12': head ache '13': body feels weak '14': feeling dizzy '15': back pain '16': open wound '17': internal pain '18': blurry vision '19': acne '20': muscle pain '21': neck pain '22': cough '23': ear ache '24': feeling cold splits: - name: train num_bytes: 330494.3762197868 num_examples: 5328 - name: test num_bytes: 41373.82675273983 num_examples: 667 - name: valid num_bytes: 41311.79702747335 num_examples: 666 download_size: 145457 dataset_size: 413180.0 ---
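The card above defines 25 symptom classes as a `ClassLabel` feature. When loaded with `datasets`, that feature exposes `int2str`/`str2int` for the id-to-name mapping; the sketch below reproduces the mapping locally from the names listed in the card, so it runs without a download:

```python
# Local reconstruction of the symptom ClassLabel mapping from the card;
# with the real dataset this is features["labels"].int2str / .str2int.
names = [
    "emotional pain", "hair falling out", "heart hurts", "infected wound",
    "foot ache", "shoulder pain", "injury from sports", "skin issue",
    "stomach ache", "knee pain", "joint pain", "hard to breath",
    "head ache", "body feels weak", "feeling dizzy", "back pain",
    "open wound", "internal pain", "blurry vision", "acne",
    "muscle pain", "neck pain", "cough", "ear ache", "feeling cold",
]
int2str = dict(enumerate(names))
str2int = {name: i for i, name in int2str.items()}
print(int2str[22])           # cough
print(str2int["back pain"])  # 15
```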
mxeval/multi-humaneval
2023-03-20T19:20:48.000Z
[ "task_categories:text-generation", "language:en", "license:apache-2.0", "mxeval", "code-generation", "multi-humaneval", "humaneval", "arxiv:2210.14868", "region:us" ]
mxeval
A collection of execution-based multi-lingual benchmarks for code generation.
@article{mbxp_athiwaratkun2022, title = {Multi-lingual Evaluation of Code Generation Models}, author = {Athiwaratkun, Ben and Gouda, Sanjay Krishna and Wang, Zijian and Li, Xiaopeng and Tian, Yuchen and Tan, Ming and Ahmad, Wasi Uddin and Wang, Shiqi and Sun, Qing and Shang, Mingyue and Gonugondla, Sujan Kumar and Ding, Hantian and Kumar, Varun and Fulton, Nathan and Farahani, Arash and Jain, Siddhartha and Giaquinto, Robert and Qian, Haifeng and Ramanathan, Murali Krishna and Nallapati, Ramesh and Ray, Baishakhi and Bhatia, Parminder and Sengupta, Sudipta and Roth, Dan and Xiang, Bing}, doi = {10.48550/ARXIV.2210.14868}, url = {https://arxiv.org/abs/2210.14868}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} }
null
1
53
--- dataset_info: features: - name: task_id dtype: string - name: language dtype: string - name: prompt dtype: string - name: test dtype: string - name: entry_point dtype: string splits: - name: multi-humaneval_python num_bytes: 165716 num_examples: 164 download_size: 67983 dataset_size: 165716 license: apache-2.0 task_categories: - text-generation tags: - mxeval - code-generation - multi-humaneval - humaneval pretty_name: multi-humaneval language: - en --- # Multi-HumanEval ## Table of Contents - [multi-humaneval](#multi-humaneval) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Executional Correctness](#execution) - [Execution Example](#execution-example) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Social Impact of Dataset](#social-impact-of-dataset) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) # multi-humaneval ## Dataset Description - **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval) - **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8) ### Dataset Summary This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data, namely, a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval. 
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868). ### Related Tasks and Leaderboards * [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval) * [MBXP](https://huggingface.co/datasets/mxeval/mbxp) * [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x) ### Languages The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings. ## Dataset Structure To lookup currently supported datasets ```python get_dataset_config_names("mxeval/multi-humaneval") ['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript'] ``` To load a specific dataset and language ```python from datasets import load_dataset load_dataset("mxeval/multi-humaneval", "python") DatasetDict({ test: Dataset({ features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'], num_rows: 164 }) }) ``` ### Data Instances An example of a dataset instance: ```python { "task_id": "HumanEval/0", "language": "python", "prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True\n \"\"\"\n", "test": "\n\nMETADATA = {\n \"author\": \"jt\",\n \"dataset\": \"test\"\n}\n\n\ndef check(candidate):\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n assert 
candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\n", "entry_point": "has_close_elements", "canonical_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = abs(elem - elem2)\n if distance < threshold:\n return True\n\n return False\n", "description": "Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True" } ``` ### Data Fields - `task_id`: identifier for the data sample - `prompt`: input for the model containing function header and docstrings - `canonical_solution`: solution for the problem in the `prompt` - `description`: task description - `test`: contains function to test generated code for correctness - `entry_point`: entry point for test - `language`: programming language identifier used to select the appropriate subprocess call for program execution ### Data Splits - HumanXEval - Python - Csharp - Go - Java - Javascript - Kotlin - Perl - Php - Ruby - Scala - Swift - Typescript ## Dataset Creation ### Curation Rationale Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps. ### Personal and Sensitive Information None. ### Social Impact of Dataset With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when such models are used. ## Execution ### Execution Example Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python >>> from datasets import load_dataset >>> from mxeval.execution import check_correctness >>> humaneval_python = load_dataset("mxeval/multi-humaneval", "python", split="test") >>> example_problem = humaneval_python[0] >>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0) {'task_id': 'HumanEval/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.636878967285156} ``` ### Considerations for Using the Data Make sure to sandbox the execution environment. ### Dataset Curators AWS AI Labs ### Licensing Information [LICENSE](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/multi-humaneval-LICENSE) <br> [THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/THIRD_PARTY_LICENSES) ### Citation Information ``` @article{mbxp_athiwaratkun2022, title = {Multi-lingual Evaluation of Code Generation Models}, author = {Athiwaratkun, Ben and Gouda, Sanjay Krishna and Wang, Zijian and Li, Xiaopeng and Tian, Yuchen and Tan, Ming and Ahmad, Wasi Uddin and Wang, Shiqi and Sun, Qing and Shang, Mingyue and Gonugondla, Sujan Kumar and Ding, Hantian and Kumar, Varun and Fulton, Nathan and Farahani, Arash and Jain, Siddhartha and Giaquinto, Robert and Qian, Haifeng and Ramanathan, Murali Krishna and Nallapati, Ramesh and Ray, Baishakhi and Bhatia, Parminder and Sengupta, Sudipta and Roth, Dan and Xiang, Bing}, doi = {10.48550/ARXIV.2210.14868}, url = {https://arxiv.org/abs/2210.14868}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ### Contributions [skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
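As a minimal sketch of the sandboxing advice above, one common pattern is to run each untrusted snippet in a fresh interpreter process with a wall-clock timeout. The helper below is illustrative and not part of `mxeval` (which ships its own `check_correctness`):

```python
import subprocess
import sys

def run_sandboxed(src: str, timeout: float = 20.0) -> str:
    """Run an untrusted Python snippet in a separate interpreter with a timeout."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", src],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return "timed out"
    return "passed" if proc.returncode == 0 else "failed"

# For a HumanEval-style problem, the snippet would be assembled as
# prompt + completion + test + f"check({entry_point})".
print(run_sandboxed("assert 1 + 1 == 2"))  # passed
```

A fresh subprocess is only a weak isolation boundary; for model-generated code, containers or seccomp-style restrictions are advisable on top.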
Multimodal-Fatima/COCO_captions_test
2023-03-17T21:23:22.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
null
0
53
--- dataset_info: features: - name: image dtype: image - name: filepath dtype: string - name: sentids list: int32 - name: filename dtype: string - name: imgid dtype: int32 - name: split dtype: string - name: sentences_tokens list: list: string - name: sentences_raw list: string - name: sentences_sentid list: int32 - name: cocoid dtype: int32 - name: id dtype: int64 - name: clip_tags_ViT_L_14 sequence: string - name: clip_tags_LAION_ViT_H_14_2B sequence: string - name: blip_caption_beam_5 dtype: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14 sequence: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B sequence: string - name: DETA_detections_deta_swin_large_o365_coco_classes list: - name: attribute dtype: string - name: box sequence: float32 - name: label dtype: string - name: location dtype: string - name: ratio dtype: float32 - name: size dtype: string - name: tag dtype: string splits: - name: test num_bytes: 831189492.0 num_examples: 5000 download_size: 823516792 dataset_size: 831189492.0 --- # Dataset Card for "COCO_captions_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aisquared/databricks-dolly-15k
2023-04-12T18:14:46.000Z
[ "language:en", "license:cc-by-sa-3.0", "databricks", "dolly", "arxiv:2203.02155", "region:us" ]
aisquared
null
null
null
3
53
--- license: cc-by-sa-3.0 language: - en tags: - databricks - dolly pretty_name: databricks-dolly-15k --- # databricks-dolly-15k **This dataset was not originally created by AI Squared.** This dataset was curated and created by [Databricks](https://databricks.com). The text below comes from the original release of the dataset's README file on GitHub (available at https://github.com/databrickslabs/dolly/tree/master/data): # Summary `databricks-dolly-15k` is an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode). Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: English Version: 1.0 **Owner: Databricks, Inc.** # Dataset Overview `databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly. For certain categories, contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications. # Intended Uses While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories. Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short response, with the resulting text associated with the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets. # Dataset ## Purpose of Collection As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
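The few-shot synthetic-generation idea described under Intended Uses can be sketched as follows. The `instruction`/`category` fields match the dataset's schema, but the helper and the sample records are illustrative, not part of the dataset:

```python
def few_shot_prompt(records, category="brainstorming", k=3):
    """Build a numbered few-shot prompt from human-written instructions of one
    category, leaving the next number open so a model continues with a new one."""
    examples = [r["instruction"] for r in records if r["category"] == category][:k]
    shots = "\n".join(f"{i + 1}. {ex}" for i, ex in enumerate(examples))
    return f"Here are {category} instructions:\n{shots}\n{len(examples) + 1}."

# Illustrative records mimicking the dataset's `instruction`/`category` fields:
records = [
    {"instruction": "List five uses for a paperclip.", "category": "brainstorming"},
    {"instruction": "Suggest names for a coffee shop.", "category": "brainstorming"},
    {"instruction": "Who wrote Hamlet?", "category": "open_qa"},
]
print(few_shot_prompt(records, k=2))
```

Submitting such prompts to an open language model and collecting its continuations yields new synthetic instructions in the same category.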
## Sources - **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories. - **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization), contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages. ## Annotator Guidelines To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance with an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor. The annotation guidelines for each of the categories are as follows: - **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better. - **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form. - **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form. - **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form. - **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better. - **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas. ## Personal or Sensitive Data This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information. ## Language American English # Known Limitations - Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia - Some annotators may not be native English speakers - Annotator demographics and subject matter may reflect the makeup of Databricks employees # License/Attribution **Copyright (2023) Databricks, Inc.** This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license. 
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license: Wikipedia (various pages) - https://www.wikipedia.org/ Copyright © Wikipedia editors and contributors.
fnlp/moss-002-sft-data
2023-04-20T16:17:16.000Z
[ "task_categories:conversational", "task_categories:text-generation", "size_categories:1M<n<10M", "language:en", "language:zh", "license:cc-by-4.0", "arxiv:2212.10560", "region:us" ]
fnlp
null
null
null
81
53
--- license: cc-by-4.0 task_categories: - conversational - text-generation language: - en - zh size_categories: - 1M<n<10M --- # Dataset Card for "moss-002-sft-data" ## Dataset Description - **Homepage:** [https://txsun1997.github.io/blogs/moss.html](https://txsun1997.github.io/blogs/moss.html) - **Repository:** [https://github.com/OpenLMLab/MOSS](https://github.com/OpenLMLab/MOSS) - **Total amount of disk used:** 2.16 GB ### Dataset Summary An open-source conversational dataset that was used to train MOSS-002. The user prompts are extended based on a small set of human-written seed prompts in a way similar to [Self-Instruct](https://arxiv.org/abs/2212.10560). The AI responses are generated using `text-davinci-003`. The user prompts of `en_harmlessness` are from [Anthropic red teaming data](https://github.com/anthropics/hh-rlhf/tree/master/red-team-attempts). ### Data Splits | name | \# samples | |----------------------|-----------:| | en_helpfulness.json | 419049 | | en_honesty.json | 112580 | | en_harmlessness.json | 38873 | | zh_helpfulness.json | 447750 | | zh_honesty.json | 142885 |
thu-coai/esconv
2023-07-15T08:26:36.000Z
[ "language:en", "license:cc-by-nc-4.0", "arxiv:2106.01144", "region:us" ]
thu-coai
null
null
null
0
53
--- license: cc-by-nc-4.0 language: - en --- The ESConv dataset. [GitHub repo](https://github.com/thu-coai/Emotional-Support-Conversation). [Original paper](https://arxiv.org/abs/2106.01144). ```bib @inproceedings{liu-etal-2021-towards, title={Towards Emotional Support Dialog Systems}, author={Liu, Siyang and Zheng, Chujie and Demasi, Orianna and Sabour, Sahand and Li, Yu and Yu, Zhou and Jiang, Yong and Huang, Minlie}, booktitle={ACL}, year={2021} } ```
lexlms/lex_files
2023-07-24T11:59:58.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended", "language:en", ...
lexlms
null
null
null
4
53
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - extended task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling pretty_name: LexFiles tags: - legal - law --- # Dataset Card for "LexFiles" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Specifications](#dataset-specifications) ## Dataset Description - **Homepage:** https://github.com/coastalcph/lexlms - **Repository:** https://github.com/coastalcph/lexlms - **Paper:** https://arxiv.org/abs/2305.07507 - **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk) ### Dataset Summary The LeXFiles is a new, diverse English multinational legal corpus that we created, comprising 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India). The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent. ### Dataset Specifications | Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl.
(a=0.2) | |-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------| | EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% | | EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% | | ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% | | UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% | | UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% | | Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% | | Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% | | Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% | | U.S. Court Decisions [1] | `us-court-cases` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% | | U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% | | U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% | | Total | `lexlms/lex_files` | 5.8M | 18.8B | 100% | 100% | 100% | [1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents. [2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019). Additional corpora not considered for pre-training, since they do not represent factual legal knowledge. | Corpus | Corpus alias | Documents | Tokens | |----------------------------------------|------------------------|-----------|--------| | Legal web pages from C4 | `legal-c4` | 284K | 340M | ### Usage Load a specific sub-corpus, given the corpus alias, as presented above. 
```python from datasets import load_dataset dataset = load_dataset('lexlms/lex_files', name='us-court-cases') ``` ### Citation [*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.* *LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.* *2022. In the Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/) ``` @inproceedings{chalkidis-etal-2023-lexfiles, title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development", author = "Chalkidis, Ilias and Garneau, Nicolas and Goanta, Catalina and Katz, Daniel and S{\o}gaard, Anders", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.865", pages = "15513--15535", } ```
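The exponential sampling referenced in the table footnotes (Lample et al., 2019) can be sketched as follows. The token shares are the Pct. column of the table above; the helper name is illustrative:

```python
def sampling_ratios(token_shares, a=0.5):
    """Exponentially smoothed sampling: p_i = q_i**a / sum_j q_j**a."""
    weights = {name: share ** a for name, share in token_shares.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Token shares (Pct. column) of the eleven sub-corpora:
shares = {
    "eu-legislation": 0.012, "eu-court-cases": 0.009, "ecthr-cases": 0.004,
    "uk-legislation": 0.007, "uk-court-cases": 0.019, "indian-court-cases": 0.006,
    "canadian-legislation": 0.002, "canadian-court-cases": 0.002,
    "us-court-cases": 0.592, "us-legislation": 0.074, "us-contracts": 0.273,
}
print(round(sampling_ratios(shares, a=0.5)["us-court-cases"], 3))  # 0.347, as in the table
```

Smaller exponents flatten the distribution: with a=0.2 the U.S. court decisions share drops from 59.2% of tokens to roughly 17.5% of samples, matching the last column of the table.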
clarin-knext/scidocs-pl
2023-06-07T08:10:24.000Z
[ "language:pl", "arxiv:2305.19840", "region:us" ]
clarin-knext
null
null
null
0
53
--- language: - pl --- Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**. Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf Contact: konrad.wojtasik@pwr.edu.pl
WelfCrozzo/kupalinka
2023-09-03T14:47:32.000Z
[ "task_categories:translation", "size_categories:1M<n<10M", "language:be", "language:en", "language:ru", "license:mit", "region:us" ]
WelfCrozzo
null
null
null
0
53
--- license: mit dataset_info: features: - name: input dtype: string - name: output dtype: string - name: input_lang dtype: string - name: output_lang dtype: string splits: - name: train num_bytes: 7142399468 num_examples: 4499046 - name: validation num_bytes: 792724656 num_examples: 499504 download_size: 1554482578 dataset_size: 7935124124 task_categories: - translation language: - be - en - ru size_categories: - 1M<n<10M ---
tasksource/symbolic-instruction-tuning-sql
2023-06-15T13:19:03.000Z
[ "task_categories:text2text-generation", "language:en", "license:mit", "arxiv:2304.07995", "region:us" ]
tasksource
null
null
null
1
53
--- dataset_info: features: - name: inputs dtype: string - name: targets dtype: string splits: - name: train num_bytes: 290434558 num_examples: 200000 download_size: 148817199 dataset_size: 290434558 license: mit task_categories: - text2text-generation language: - en --- # Dataset Card for "symbolic-instruction-tuning-sql" The original SQL component (i.e., without the Flan data) of the symbolic instruction tuning dataset, with Flan-style column names. [From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning](https://arxiv.org/abs/2304.07995). The training code can be found [here](https://github.com/sail-sg/symbolic-instruction-tuning). ``` @article{liu2023zero, title={From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning}, author={Liu, Qian and Zhou, Fan and Jiang, Zhengbao and Dou, Longxu and Lin, Min}, eprint={2304.07995}, year={2023} } ```
C-MTEB/ThuNewsClusteringP2P
2023-07-27T17:29:09.000Z
[ "region:us" ]
C-MTEB
null
null
null
0
53
--- configs: - config_name: default data_files: - split: test path: data/test-* dataset_info: features: - name: sentences sequence: string - name: labels sequence: string splits: - name: test num_bytes: 31552896 num_examples: 10 download_size: 23299710 dataset_size: 31552896 --- # Dataset Card for "ThuNewsClusteringP2P" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
C-MTEB/CLSClusteringP2P
2023-07-27T17:29:48.000Z
[ "region:us" ]
C-MTEB
null
null
null
0
53
--- configs: - config_name: default data_files: - split: test path: data/test-* dataset_info: features: - name: sentences sequence: string - name: labels sequence: string splits: - name: test num_bytes: 56780231 num_examples: 10 download_size: 37254736 dataset_size: 56780231 --- # Dataset Card for "CLSClusteringP2P" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mattlc/fake-reward-model-dataset-melodic-acid
2023-08-20T15:52:51.000Z
[ "region:us" ]
mattlc
null
null
null
0
53
--- dataset_info: features: - name: text dtype: string - name: audio_chosen struct: - name: array sequence: float64 - name: audio_rejected struct: - name: array sequence: float64 splits: - name: train num_bytes: 1536002400 num_examples: 100 download_size: 1174735475 dataset_size: 1536002400 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "fake-reward-model-dataset-melodic-acid" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LibrAI/do-not-answer
2023-08-28T18:22:52.000Z
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:apache-2.0", "safety", "large language model", "region:us" ]
LibrAI
null
null
null
2
53
--- license: apache-2.0 task_categories: - text-generation language: - en tags: - safety - large language model pretty_name: do-not-answer size_categories: - n<1K dataset_info: features: - name: id dtype: int64 - name: risk_area dtype: string - name: types_of_harm dtype: string - name: specific_harms dtype: string - name: question dtype: string - name: GPT4_response dtype: string - name: GPT4_harmful dtype: int64 - name: GPT4_action dtype: int64 - name: ChatGPT_response dtype: string - name: ChatGPT_harmful dtype: int64 - name: ChatGPT_action dtype: int64 - name: Claude_response dtype: string - name: Claude_harmful dtype: int64 - name: Claude_action dtype: int64 - name: ChatGLM2_response dtype: string - name: ChatGLM2_harmful dtype: int64 - name: ChatGLM2_action dtype: int64 - name: llama2-7b-chat_response dtype: string - name: llama2-7b-chat_harmful dtype: int64 - name: llama2-7b-chat_action dtype: int64 - name: vicuna-7b_response dtype: string - name: vicuna-7b_harmful dtype: int64 - name: vicuna-7b_action dtype: int64 splits: - name: train num_bytes: 3997193 num_examples: 939 download_size: 1709142 dataset_size: 3997193 --- # Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ## Overview *Do not answer* is an open-source dataset to evaluate LLMs' safety mechanism at a low cost. The dataset is curated and filtered to consist only of prompts to which responsible language models do not answer. Besides human annotations, *Do not answer* also implements model-based evaluation, where a 600M fine-tuned BERT-like evaluator achieves comparable results with human and GPT-4. ## Instruction Taxonomy Instructions are organized by a three-level hierarchical taxonomy, covering 61 specific harms. 
![taxonomy](assets/dna.png) *Do not answer* collects 939 instructions; their distribution across five risk areas and 12 harm types is shown below: ![risk_area](assets/instruction_dist.png) ## Response Assessment We assess six models' responses (GPT-4, ChatGPT, Claude, LLaMA-2, Vicuna, ChatGLM2) at two levels. * Harmfulness: whether a response is harmful (a binary label). * Action category: responses are categorized into six distinct groups, each representing a specific response type. ![action](assets/action.png) ### Human Evaluation The high percentage of safe responses across the five risk areas for all six models demonstrates that they are basically safe. LLaMA-2 ranks first, followed by ChatGPT, Claude, GPT-4, and Vicuna, with ChatGLM2 ranking last with 85 harmful responses out of 939. * Harmlessness Rank ![harmful_num](assets/harmful_num.png) * Action Category Distribution ![action_distribution](assets/action_dist.png) ### Automatic Evaluation Longformer evaluators achieve results comparable to human and GPT-4 evaluation. * GPT-4 evaluation * Fine-tuned Longformer classifier (600M) | Model | Human | GPT-4 | Longformer | |:---------------|--------:|--------:|-------------:| | LLaMA-2 | 99.7 | 99.4 | 99.3 | | ChatGPT | 98.5 | 97.7 | 97.9 | | Claude | 98.3 | 98.3 | 97.6 | | GPT-4 | 97.6 | 96.5 | 97.2 | | vicuna | 94.5 | 94.9 | 95.0 | | ChatGLM2 | 90.9 | 92.9 | 92.9 | ## Citation If our work is useful for your own, you can cite us with the following BibTeX entry: ```bibtex @misc{wang2023donotanswer, author = {Wang, Yuxia and Li, Haonan and Han, Xudong and Nakov, Preslav and Baldwin, Timothy}, title = {Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs}, year = {2023}, howpublished = {arXiv preprint arXiv:2308.13387}, } ``` ## License All datasets in this repository are released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
All source files in this repository are released under the Apache 2.0 license, the text of which can be found in the LICENSE file.
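As an illustration, the per-model safety percentages reported in the evaluation tables above can be recomputed from the dataset's `*_harmful` columns. The helper and the sample rows below are illustrative:

```python
def harmless_rate(rows, model="GPT4"):
    """Percentage of a model's responses labeled not harmful (harmful == 0)."""
    labels = [row[f"{model}_harmful"] for row in rows]
    return 100.0 * sum(1 for label in labels if label == 0) / len(labels)

# Illustrative rows mimicking the dataset's binary harmfulness labels:
rows = [
    {"GPT4_harmful": 0, "ChatGLM2_harmful": 1},
    {"GPT4_harmful": 0, "ChatGLM2_harmful": 0},
    {"GPT4_harmful": 1, "ChatGLM2_harmful": 0},
    {"GPT4_harmful": 0, "ChatGLM2_harmful": 0},
]
print(harmless_rate(rows, "GPT4"))  # 75.0
```

Running the same computation over the full 939 rows reproduces the harmlessness ranking discussed in the Human Evaluation section.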
twang2218/chinese-law-and-regulations
2023-09-05T10:51:53.000Z
[ "license:apache-2.0", "region:us" ]
twang2218
null
null
null
4
53
--- license: apache-2.0 dataset_info: - config_name: default features: - name: publish_date dtype: timestamp[ns] - name: effective_date dtype: timestamp[ns] - name: type dtype: string - name: status dtype: string - name: title dtype: string - name: office dtype: string - name: office_level dtype: string - name: office_category dtype: string - name: effective_period dtype: string - name: content dtype: string splits: - name: train num_bytes: 363619544 num_examples: 22552 download_size: 159516785 dataset_size: 363619544 - config_name: metadata features: - name: publish_date dtype: timestamp[ns] - name: effective_date dtype: timestamp[ns] - name: type dtype: string - name: status dtype: string - name: title dtype: string - name: office dtype: string - name: office_level dtype: string - name: office_category dtype: string - name: effective_period dtype: string splits: - name: train num_bytes: 4529871 num_examples: 22552 download_size: 740438 dataset_size: 4529871 configs: - config_name: default data_files: - split: train path: data/train-* - config_name: metadata data_files: - split: train path: metadata/train-* ---
yzhuang/autotree_automl_100000_credit_sgosdt_l256_dim10_d3_sd0
2023-09-07T19:45:41.000Z
[ "region:us" ]
yzhuang
null
null
null
0
53
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 2364400000 num_examples: 100000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 725608022 dataset_size: 2600840000 --- # Dataset Card for "autotree_automl_100000_credit_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
librarian-bot/librarian-bot-stats
2023-10-10T01:24:48.000Z
[ "region:us" ]
librarian-bot
null
null
null
0
53
--- dataset_info: features: - name: createdAt dtype: timestamp[us] - name: pr_number dtype: int64 - name: status dtype: large_string - name: repo_id dtype: large_string - name: type dtype: large_string - name: isPullRequest dtype: bool splits: - name: train num_bytes: 1112843 num_examples: 12722 download_size: 407995 dataset_size: 1112843 --- # Dataset Card for "librarian-bot-stats" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Trelis/touch-rugby-rules-embeddings
2023-09-15T10:43:22.000Z
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "fine-tuning", "touch rugby", "region:us" ]
Trelis
null
null
null
0
53
--- task_categories: - text-generation language: - en tags: - fine-tuning - touch rugby size_categories: - n<1K --- # Touch Rugby Rules Dataset (for embeddings) train.csv is taken from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf). test.csv is copy-pasted from the abbreviated rules on the [UK Touch website](https://www.englandtouch.org.uk/develop/coaching/the-rules/). Note that I'm bypassing the PDF-to-text stage. All text is chunked to a length of 100 tokens with 50% overlap. For educational and non-commercial use only.
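The chunking described above (100-token windows with 50% overlap) can be sketched as follows; the function name and details are illustrative, not the exact preprocessing used:

```python
def chunk_tokens(tokens, size=100, overlap=0.5):
    """Split a token list into fixed-size windows with fractional overlap."""
    step = max(1, int(size * (1 - overlap)))  # 50-token stride for 50% overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):  # last window reached the end
            break
    return chunks

tokens = list(range(250))
chunks = chunk_tokens(tokens, size=100, overlap=0.5)
print(len(chunks), len(chunks[0]))  # 4 100
```

Each consecutive pair of windows shares its second half with the next window's first half, so no sentence near a chunk boundary is seen only once.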
wentingzhao/one-million-paired-instructions
2023-09-16T14:49:21.000Z
[ "region:us" ]
wentingzhao
null
null
null
0
53
--- dataset_info: features: - name: user dtype: string - name: system dtype: string - name: source dtype: string splits: - name: train num_bytes: 441090052 num_examples: 1797967 download_size: 124201734 dataset_size: 441090052 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "one-million-paired-instructions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)