# Dataset Card for "riffusion-musiccaps-datasets-768" Converted google/musicCaps to spectograms with audio_to_spectrum with riffusion cli. Random 7.68 sec for each music in musicCaps. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset card for "george-chou/pianos_mel" ## Usage ``` from datasets import load_dataset data = load_dataset("george-chou/pianos_mel") trainset = data['train'] validset = data['validation'] testset = data['test'] labels = trainset.features['label'].names for item in trainset: print('image: ', item['image'].convert('RGB')) print('label name: ' + labels[item['label']]) for item in validset: print('image: ', item['image'].convert('RGB')) print('label name: ' + labels[item['label']]) for item in testset: print('image: ', item['image'].convert('RGB')) print('label name: ' + labels[item['label']]) ``` ## Maintenance ``` git clone git@hf.co:datasets/george-chou/pianos_mel ``` ## Cite ``` @dataset{zhaorui_liu_2021_5676893, author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li}, title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}}, month = nov, year = 2021, publisher = {Zenodo}, version = {1.1}, doi = {10.5281/zenodo.5676893}, url = {https://doi.org/10.5281/zenodo.5676893} } ```
# Dataset Card for Peewee Issues

## Dataset Summary

Peewee Issues is a dataset containing all the issues in the [Peewee github repository](https://github.com/coleifer/peewee) up to the last date of extraction (5/3/2023). It was made with educational purposes in mind (specifically, to get me used to using Hugging Face's datasets), but it can be used for multi-label classification or semantic search. The contents are all in English and concern SQL databases and ORM libraries.
## Dataset Description

- **Repository:** [Link to repo](https://github.com/VityaVitalich/IMAD)
- **Paper:** [IMage Augmented multi-modal Dialogue: IMAD](https://arxiv.org/abs/2305.10512v1)
- **Point of Contact:** [Contacts Section](https://github.com/VityaVitalich/IMAD#contacts)

### Dataset Summary

This dataset contains data from the paper [IMage Augmented multi-modal Dialogue: IMAD](https://arxiv.org/abs/2305.10512v1). The main feature of this dataset is the novelty of the task: it has been generated specifically for the purpose of image interpretation in a dialogue context. Some of the dialogue utterances have been replaced with images, allowing a generative model to be trained to restore the initial utterance. The dialogues are sourced from multiple dialogue datasets (DailyDialog, Commonsense, PersonaChat, MuTual, Empathetic Dialogues, Dream) and have been filtered using a technique described in the paper. A significant portion of the data has been labeled by assessors, resulting in a high inter-rater reliability score. The combination of these methods has led to a well-filtered dataset and consequently a high BLEU score. We hope that this dataset will be beneficial for the development of multi-modal deep learning.

### Data Fields

The dataset contains 5 fields:

- `image_id`: `string` containing the id of the image in the full Unsplash Dataset
- `source_data`: `string` containing the name of the source dataset
- `utter`: `string` containing the utterance that was replaced with an image in this dialogue
- `context`: `list` of `string` containing the sequence of utterances in the dialogue before the replaced utterance
- `image_like`: `int` indicating whether the data was collected with assessors or via the filtering technique

### Licensing Information

The textual part of IMAD is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). The full dataset with images can be requested by contacting the authors directly, or obtained by matching `image_id` against the full Unsplash dataset.

### Contacts

Feel free to reach out to us at [vvmoskvoretskiy@yandex.ru] for inquiries, collaboration suggestions, or data requests related to our work.

### Citation Information

To cite this dataset, please use this BibTeX reference:

```bibtex
@misc{viktor2023imad,
      title={IMAD: IMage-Augmented multi-modal Dialogue},
      author={Moskvoretskii Viktor and Frolov Anton and Kuznetsov Denis},
      year={2023},
      eprint={2305.10512},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Or via MLA citation:

```
Viktor, Moskvoretskii et al. "IMAD: IMage-Augmented multi-modal Dialogue." (2023).
```
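For a quick look at these fields, a minimal sketch (the Hub id below is inferred from the GitHub repository name and is an assumption, as is the split name):

```python
from datasets import load_dataset

# Hypothetical Hub id inferred from the GitHub repository name; adjust as needed.
dataset = load_dataset("VityaVitalich/IMAD", split="train")

sample = dataset[0]
print(sample["image_id"])     # id of the image in the full Unsplash Dataset
print(sample["source_data"])  # name of the source dialogue dataset
print(sample["utter"])        # utterance that was replaced with an image
print(sample["context"])      # utterances preceding the replaced one
print(sample["image_like"])   # int flag: assessor-labeled vs. filtered (see paper)
```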
# MAP

An SQLite database of video URLs and captions/descriptions.
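Since the card does not document the schema, here is a minimal inspection sketch (the database file name is an assumption; table and column names are discovered at runtime rather than assumed):

```python
import sqlite3

# Hypothetical path to the downloaded database file; adjust as needed.
conn = sqlite3.connect("map.sqlite")
cur = conn.cursor()

# List all tables in the database.
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
tables = [row[0] for row in cur.fetchall()]
print("tables:", tables)

# Peek at the first few rows of each table.
for table in tables:
    cur.execute(f'SELECT * FROM "{table}" LIMIT 3')
    print(table, "->", cur.fetchall())

conn.close()
```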
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.

This is an Indonesian-translated version of the [squad](https://huggingface.co/datasets/squad) dataset, translated from [sentence-transformers/embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data) using [Helsinki-NLP/opus-mt-en-id](https://huggingface.co/Helsinki-NLP/opus-mt-en-id).
# Dataset Card for Tapir-Cleaned

This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.

## Tapir Dataset Summary

Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform. After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to 67,697 high-quality recipes. This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models, allowing them to more accurately follow instructions and achieve superior performance.

The latest version of Tapir includes a correlation score that helps to identify the most appropriate description-rule pairs for instruction tuning. Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.

### Supported Tasks and Leaderboards

The Tapir dataset is designed for instruction-tuning pretrained language models.

### Languages

The data in Tapir are mainly in English (BCP-47 en).

# Dataset Structure

### Data Instances

```json
{
    "instruction": "From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
    "input": "If it's raining outside, you'll want some nice warm colors inside!",
    "output": "IF Weather Underground Current condition changes to THEN LIFX Change color of lights",
    "score": "0.788197",
    "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf it's raining outside, you'll want some nice warm colors inside!\n\n### Response:\nIF Weather Underground Current condition changes to THEN LIFX Change color of lights"
}
```

### Data Fields

The data fields are as follows:

* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 67K inputs is unique.
* `output`: the answer taken from the original Tapir Dataset, formatted as an IFTTT recipe.
* `score`: the correlation score obtained via BertForNextSentencePrediction.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models (see the sketch after this card).

### Data Splits

|       | train |
|-------|------:|
| tapir | 67697 |

### Licensing Information

The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

### Citation Information

```
@misc{tapir,
  author = {Mattia Limone, Gaetano Cimino, Annunziata Elefante},
  title = {TAPIR: Trigger Action Platform for Information Retrieval},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
```
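For illustration, a minimal sketch of how the `text` field can be rebuilt from `instruction`, `input`, and `output` with the Alpaca prompt template shown above (this mirrors the template; it is not necessarily the authors' exact code):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(example: dict) -> str:
    """Format one Tapir record with the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(
        instruction=example["instruction"],
        input=example["input"],
        output=example["output"],
    )

# Example record taken from the data instance above.
example = {
    "instruction": "From the description of a rule: identify the 'trigger', "
                   "identify the 'action', write a IF 'trigger' THEN 'action' rule.",
    "input": "If it's raining outside, you'll want some nice warm colors inside!",
    "output": "IF Weather Underground Current condition changes to "
              "THEN LIFX Change color of lights",
}
print(build_text(example))
```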
# Dataset Card for Tapir-Cleaned

This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.

## Tapir Dataset Summary

Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform. After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to 116,862 high-quality recipes. This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models, allowing them to more accurately follow instructions and achieve superior performance.

The latest version of Tapir includes a correlation score that helps to identify the most appropriate description-rule pairs for instruction tuning. Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning (see the filtering sketch after this card).

### Supported Tasks and Leaderboards

The Tapir dataset is designed for instruction-tuning pretrained language models.

### Languages

The data in Tapir are mainly in English (BCP-47 en).

# Dataset Structure

### Data Instances

```json
{
    "instruction": "From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
    "input": "If lostphone is texted to my phone the volume will turn up to 100 so I can find it.",
    "output": "IF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
    "score": "0.804322",
    "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf lostphone is texted to my phone the volume will turn up to 100 so I can find it.\n\n### Response:\nIF Android SMS New SMS received matches search THEN Android Device Set ringtone volume"
}
```

### Data Fields

The data fields are as follows:

* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 116K inputs is unique.
* `output`: the answer taken from the original Tapir Dataset, formatted as an IFTTT recipe.
* `score`: the correlation score obtained via BertForNextSentencePrediction.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.

### Data Splits

|       | train  |
|-------|-------:|
| tapir | 116862 |

### Licensing Information

The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

### Citation Information

```
@misc{tapir,
  author = {Mattia Limone, Gaetano Cimino, Annunziata Elefante},
  title = {TAPIR: Trigger Action Platform for Information Retrieval},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
```
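Since pairs scoring above 0.75 are the ones recommended for tuning, a minimal filtering sketch (the Hub id below is a placeholder, and note that `score` is stored as a string in the data instances, so it is cast to float here):

```python
from datasets import load_dataset

# Placeholder Hub id; replace with the actual Tapir-Cleaned dataset id.
tapir = load_dataset("your-org/tapir-cleaned", split="train")

# Keep only description-rule pairs deemed good enough for instruction tuning.
good_pairs = tapir.filter(lambda ex: float(ex["score"]) > 0.75)
print(len(good_pairs), "of", len(tapir), "pairs pass the 0.75 threshold")
```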
# Dataset Card for DIALOGSum Corpus

## Dataset Description

### Links

- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449

### Dataset Summary

DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.

### Languages

Russian (translated from English by Google Translate).

## Dataset Structure

### Data Fields

- dialogue: text of the dialogue.
- summary: human-written summary of the dialogue.
- topic: human-written topic/one-liner for the dialogue.
- id: unique file id of an example.

### Data Splits

- train: 12460
- val: 500
- test: 1500
- holdout: 100 [only 3 features: id, dialogue, topic]

## Dataset Creation

### Curation Rationale

From the paper: We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.

Compared with previous datasets, dialogues from DialogSum have distinct characteristics: under rich real-life scenarios, including more diverse task-oriented scenarios; have clear communication patterns and intents, which is valuable to serve as summarization sources; have a reasonable length, which comforts the purpose of automatic summarization.

We ask annotators to summarize each dialogue based on the following criteria: convey the most salient information; be brief; preserve important named entities within the conversation; be written from an observer perspective; be written in formal language.

### Who are the source language producers?

Linguists.

### Who are the annotators?

Language experts.

## Licensing Information

MIT License

## Citation Information

```
@inproceedings{chen-etal-2021-dialogsum,
    title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
    author = "Chen, Yulong and Liu, Yang and Chen, Liang and Zhang, Yue",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.449",
    doi = "10.18653/v1/2021.findings-acl.449",
    pages = "5062--5074",
}
```

## Contributions

Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset.
Current size: 53,081 videos. Goal (TODO): 100,000+.
# Dataset Card for "code-search-net-ruby" ## Dataset Description - **Homepage:** None - **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-go - **Paper:** None - **Leaderboard:** None - **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do) ### Dataset Summary This dataset is the Ruby portion of the CodeSarchNet annotated with a summary column. The code-search-net dataset includes open source functions that include comments found at GitHub. The summary is a short description of what the function does. ### Languages The dataset's comments are in English and the functions are coded in Ruby ### Data Splits Train, test, validation labels are included in the dataset as a column. ## Dataset Creation May of 2023 ### Curation Rationale This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs ### Source Data The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet ### Annotations This datasets include a summary column including a short description of the function. #### Annotation process The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models. A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython The annontations have been cleaned to make sure there are no repetitions and/or meaningless summaries. (some may still be present in the dataset) ### Licensing Information Apache 2.0
MMC4-130k is a dataset obtained by sampling roughly 130k image-text pairs with relatively high similarity from MMC4. We plan to translate this subset step by step.

We will gradually publish more datasets on HF, including:

- [ ] Chinese translation of Coco Caption
- [ ] Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open-domain QA data
- [x] Chinese translation of WizardLM

If you are also preparing these datasets, feel free to contact us so we can avoid duplicating the expense.

# Luotuo (骆驼): Open-Source Chinese Large Language Models

[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)

The Luotuo (骆驼) project is an open-source Chinese large language model project launched by [Ziang Leng (冷子昂)](https://blairleng.github.io) @ SenseTime, Qiyuan Chen (陈启源) @ Central China Normal University, and Cheng Li (李鲁鲁) @ SenseTime, and it comprises a series of language models.

(Note: [Qiyuan Chen](https://qiyuan-chen.github.io/) is looking for a supervisor for 2024 recommendation-based postgraduate admission; feel free to get in touch.)

The Luotuo project is **not** an official SenseTime product.

## Citation

Please cite the repo if you use the data or code in this repo.

```
@misc{alpaca,
  author={Ziang Leng, Qiyuan Chen and Cheng Li},
  title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
```
# AutoTrain Dataset for project: doodles-30

## Dataset Description

This dataset has been automatically processed by AutoTrain for project doodles-30.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<256x256 RGB PIL image>",
    "target": 1
  },
  {
    "image": "<256x256 RGB PIL image>",
    "target": 3
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['ant', 'bear', 'bee', 'bird', 'cat', 'dog', 'dolphin', 'elephant', 'giraffe', 'horse', 'lion', 'mosquito', 'tiger', 'whale'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
|------------|-------------|
| train      | 336         |
| valid      | 84          |
- subset from https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K
- train: 21000
- val seen: 3000
- val unseen: 2100
- test: 6000
# Dataset Card for Tapir-Cleaned

This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.

## Tapir Dataset Summary

Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform. After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to 32,403 high-quality recipes. This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models, allowing them to more accurately follow instructions and achieve superior performance.

The latest version of Tapir includes a correlation score that helps to identify the most appropriate description-rule pairs for instruction tuning. Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.

### Supported Tasks and Leaderboards

The Tapir dataset is designed for instruction-tuning pretrained language models.

### Languages

The data in Tapir are mainly in English (BCP-47 en).

# Dataset Structure

### Data Instances

```json
{
    "instruction": "From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
    "input": "If it's raining outside, you'll want some nice warm colors inside!",
    "output": "IF Weather Underground Current condition changes to THEN LIFX Change color of lights",
    "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf it's raining outside, you'll want some nice warm colors inside!\n\n### Response:\nIF Weather Underground Current condition changes to THEN LIFX Change color of lights"
}
```

### Data Fields

The data fields are as follows:

* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 32k inputs is unique.
* `output`: the answer taken from the original Tapir Dataset, formatted as an IFTTT recipe.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.

### Data Splits

|       | train |
|-------|------:|
| tapir | 32403 |

### Licensing Information

The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

### Citation Information

```
@misc{tapir,
  author = {Mattia Limone, Gaetano Cimino, Annunziata Elefante},
  title = {TAPIR: Trigger Action Platform for Information Retrieval},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
```
# Dataset Card for "instructional_code-search-net-ruby" ## Dataset Description - **Homepage:** None - **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-ruby - **Paper:** None - **Leaderboard:** None - **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do) ### Dataset Summary This is an instructional dataset for Ruby. The dataset contains two different kind of tasks: - Given a piece of code generate a description of what it does. - Given a description generate a piece of code that fulfils the description. ### Languages The dataset is in English. ### Data Splits There are no splits. ## Dataset Creation May of 2023 ### Curation Rationale This dataset was created to improve the coding capabilities of LLMs. ### Source Data The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-ruby ### Annotations The dataset includes an instruction and response columns. #### Annotation process The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses. A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython The annontations have been cleaned to make sure there are no repetitions and/or meaningless summaries. ### Licensing Information Apache 2.0
# Dataset Card for "instructional_code-search-net-php" ## Dataset Description - **Homepage:** None - **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-php - **Paper:** None - **Leaderboard:** None - **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do) ### Dataset Summary This is an instructional dataset for PHP. The dataset contains two different kind of tasks: - Given a piece of code generate a description of what it does. - Given a description generate a piece of code that fulfils the description. ### Languages The dataset is in English. ### Data Splits There are no splits. ## Dataset Creation May of 2023 ### Curation Rationale This dataset was created to improve the coding capabilities of LLMs. ### Source Data The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-php ### Annotations The dataset includes an instruction and response columns. #### Annotation process The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses. A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython The annontations have been cleaned to make sure there are no repetitions and/or meaningless summaries. ### Licensing Information Apache 2.0
STS 2012-2016 datasets
This dataset contains more than 250k articles obtained from the Polish news site `tvp.info.pl`. The main purpose of collecting the data was to create a transformer-based model for text summarization.

Columns:
* `link` - link to the article
* `title` - original title of the article
* `headline` - lead/headline of the article - the first paragraph, visible directly from the page
* `content` - full textual contents of the article

Link to original repo: https://github.com/WiktorSob/scraper-tvp

Download the data:
```python
from datasets import load_dataset

dataset = load_dataset("WiktorS/polish-news")
```
# Dataset Card for "hotel_reviews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The dataset contains 20,703 records. It was created by removing from the original 27k dataset all items that had a BLEU score of 0 or greater than 0.3388 (a filtering sketch follows).
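A minimal sketch of that filtering rule (the `bleu` field name and the sample records are hypothetical):

```python
# Hypothetical records; the real field name for the BLEU score may differ.
items = [
    {"text": "a", "bleu": 0.0},   # dropped: score of 0
    {"text": "b", "bleu": 0.21},  # kept
    {"text": "c", "bleu": 0.52},  # dropped: score above 0.3388
]

def keep(item: dict) -> bool:
    """Apply the card's rule: drop items with a BLEU score of 0 or above 0.3388."""
    return 0 < item["bleu"] <= 0.3388

filtered = [item for item in items if keep(item)]
print(len(filtered))  # 1
```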
# VoxCeleb 1

VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.

## Verification Split

|               | train  | validation | test |
|:-------------:|:------:|:----------:|:----:|
| # of speakers | 1211   | 1211       | 40   |
| # of samples  | 299246 | 33672      | 4874 |

## References

- https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html
Simple anime image rating prediction task. Data is randomly scraped from Sankaku Complex.

Please note that due to the often unclear boundaries between the `safe`, `r15` and `r18` levels, there is no objective ground truth for this task, and the data is scraped without any manual filtering. Therefore, the models trained on this dataset can only provide rough checks. **If you require an accurate solution for classifying `R18` images, it is recommended to consider a solution based on keypoint object detection.**

| Dataset | Safe Images | R15 Images | R18 Images | Description                          |
|:-------:|:-----------:|:----------:|:----------:|--------------------------------------|
| v1      | 5991        | 4960       | 5070       | Simply crawled from Sankaku Complex. |
Conversation Ending Check
This dataset contains a selection of Q&A-related tasks gathered and cleaned from the webGPT_comparisons set and the databricks-dolly-15k set. Unicode escapes were explicitly removed, and Wikipedia citations in the "output" were stripped through regex to hopefully help any end-product model ignore these artifacts within their input context.

This data is formatted for use in the Alpaca instruction format; however, the instruction, input, and output columns are kept separate in the raw data to allow for other configurations. The data has been filtered so that every entry is less than our chosen truncation length of 1024 (LLaMA-style) tokens with the format:

```
"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{inputt}

### Response:
{output}"""
```

<h3>webGPT</h3>

This set was filtered from the webGPT_comparisons data by taking any Q&A option that was positively or neutrally rated by humans (i.e. "score" >= 0). This might not provide the ideal answer, but this dataset was assembled specifically for extractive Q&A with less regard for how humans feel about the results. This selection comprises 23826 of the total entries in the data.

<h3>Dolly</h3>

The dolly data was selected primarily to focus on closed-qa tasks. For this purpose, only entries in the "closed-qa", "information_extraction", "summarization", "classification", and "creative_writing" categories were used. While not all of these include a context, they were judged to help flesh out the training set. This selection comprises 5362 of the total entries in the data.
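A rough sketch of the length filter described above, assuming a LLaMA-style tokenizer from `transformers` (the tokenizer id is a stand-in, not necessarily the one used to build this dataset):

```python
from transformers import AutoTokenizer

# Stand-in tokenizer id; any LLaMA-style tokenizer gives comparable counts.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{inputt}\n\n"
    "### Response:\n{output}"
)

def fits(instruction: str, inputt: str, output: str, limit: int = 1024) -> bool:
    """Check that the fully formatted prompt stays under the truncation length."""
    text = TEMPLATE.format(instruction=instruction, inputt=inputt, output=output)
    return len(tokenizer(text)["input_ids"]) < limit
```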
The Face Masks ensemble dataset is no longer limited to [Kaggle](https://www.kaggle.com/datasets/henrylydecker/face-masks); it is now coming to Huggingface! This dataset was created to help train and/or fine-tune models for detecting masked and un-masked faces.

I created a new face masks object detection dataset by combining three publicly available face masks object detection datasets on Kaggle that used the YOLO annotation format. To combine the datasets, I used Roboflow. All three original datasets had different class dictionaries, so I recoded the classes into two classes: "Mask" and "No Mask". One dataset included a class for incorrectly worn face masks; images with this class were removed from the dataset. Approximately 50 images had corrupted annotations, so they were manually re-annotated in the Roboflow platform.

The final dataset includes 9,982 images, with 24,975 annotated instances. Image resolution was on average 0.49 MP, with a median size of 750 x 600 pixels. To improve model performance on out-of-sample data, I used 90-degree rotational augmentation, which saved duplicate versions of each image at 90, 180, and 270 degree rotations. I then split the data into 85% training, 10% validation, and 5% testing. After removing the images whose classes had been dropped, 16,000 images remained in training, 1,900 in validation, and 1,000 in testing.
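For reference, YOLO-format annotations (which all three source datasets used) are plain text files with one object per line; a minimal parsing sketch (the file path is hypothetical, and the class-id-to-name order is an assumption):

```python
def parse_yolo_labels(path: str, names=("Mask", "No Mask")):
    """Read one YOLO annotation file: class_id cx cy w h, all normalized to [0, 1]."""
    boxes = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            class_id, cx, cy, w, h = line.split()
            boxes.append({
                "label": names[int(class_id)],  # class order is assumed
                "cx": float(cx), "cy": float(cy),
                "w": float(w), "h": float(h),
            })
    return boxes

# boxes = parse_yolo_labels("labels/train/image_0001.txt")  # hypothetical path
```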
# Dataset Card for News_Articles_Categorization

## Table of Contents

- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)

## Dataset Description

29,000 news headlines classified into 13 different labels, namely: "Playful", "Infuriating", "Sentimental", "Cynical", "Depressing", "Awe-inspiring", "Patriotic", "Begrudging", "Educational", "Hopeful", "Sarcastic", "Disrespectful", "Disparaging".

## Languages

The text in the dataset is in English.

## Dataset Structure

The dataset consists of 14 columns: Headline, plus 13 columns representing the labels mentioned above. The Headline column contains the news headlines, and each label column indicates whether the headline belongs to that label.

## Source Data

The dataset is collected from the database of otherweb.com
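Given the one-column-per-label layout, a multi-label target matrix can be assembled roughly like this (the file name is hypothetical; the label columns are as listed above):

```python
import pandas as pd

LABELS = ["Playful", "Infuriating", "Sentimental", "Cynical", "Depressing",
          "Awe-inspiring", "Patriotic", "Begrudging", "Educational",
          "Hopeful", "Sarcastic", "Disrespectful", "Disparaging"]

# Hypothetical file name for the exported dataset.
df = pd.read_csv("news_articles_categorization.csv")

X = df["Headline"]       # the headline text
Y = df[LABELS].values    # 13-column binary indicator matrix, one column per label
```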
# Dataset Card for Leading Decision Summarization

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset contains the text and summaries of Swiss leading decisions.

### Supported Tasks and Leaderboards

### Languages

Switzerland has four official languages, of which three (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.

| Language | Subset | Number of Documents |
|----------|--------|---------------------|
| German   | **de** | 12K                 |
| French   | **fr** | 5K                  |
| Italian  | **it** | 835                 |

## Dataset Structure

### Data Fields

- decision_id: unique identifier for the decision
- header: a short header for the decision
- regeste: the summary of the leading decision
- text: the main text of the leading decision
- law_area: area of law of the decision
- law_sub_area: sub-area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision

### Data Instances

[More Information Needed]

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.

#### Who are the source language producers?

The decisions are written by the judges and clerks in the language of the proceedings.

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

We release the data under CC-BY-4.0, which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf):

© Swiss Federal Supreme Court, 2002-2022

The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.

Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf

### Citation Information

*Visu, Ronja, Joel*

*Title: Blabliblablu*

*Name of conference*

```
cit
```

### Contributions
# Dataset Card for Dataset Name ## Dataset Description - **Repository:** https://github.com/danielsteinigen/nlp-legal-texts - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
# Dataset Card for Dataset Name

### Dataset Summary

This is simply dolly-15k-ja (*1) converted to JSON Lines form so that it can be used with the dataset_text_field property of SFTTrainer (*2).

(*1) https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
(*2) https://huggingface.co/docs/trl/main/en/sft_trainer

### Languages

ja

### Licensing Information

This dataset is licensed under CC BY SA 3.0.

Special Thanks: https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
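A minimal sketch of the intended use with TRL's SFTTrainer (the JSONL file name, the text field name `text`, and the base model id are all assumptions; the exact SFTTrainer signature varies across TRL versions):

```python
from datasets import load_dataset
from trl import SFTTrainer

# Hypothetical local file name for this dataset's JSON Lines export.
dataset = load_dataset("json", data_files="dolly-15k-ja.jsonl", split="train")

trainer = SFTTrainer(
    model="rinna/japanese-gpt-neox-3.6b",  # stand-in base model id
    train_dataset=dataset,
    dataset_text_field="text",             # assumed name of the text field
    max_seq_length=512,
)
trainer.train()
```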
## LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card

This is a Korean translation of the 595K CC3M visual instruction dataset released by [LLaVA](https://llava-vl.github.io/). The dataset was built by taking the Korean captions published in the existing [Ko-conceptual-captions](https://github.com/QuoQA-NLP/Ko-conceptual-captions) repository. Since the translation quality is somewhat poor, it may be re-translated with DeepL later.

License: complies with [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE)
# Dataset Card for Cryptonews articles with price momentum labels

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/SahandNZ/IUST-NLP-project-spring-2023
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The dataset was gathered from two prominent sources in the cryptocurrency industry: Cryptonews.com and Binance.com. The aim of the dataset is to evaluate the impact of news on crypto price movements. News events such as regulatory changes, technological advancements, and major partnerships can have a significant impact on the price of cryptocurrencies. By analyzing the data collected from these sources, this dataset aims to provide insights into the relationship between news events and crypto market trends.

### Supported Tasks and Leaderboards

- **Text Classification**
- **Sentiment Analysis**

### Languages

The language data in this dataset is in English (BCP-47 en).

## Dataset Structure

### Data Instances

Todo

### Data Fields

Todo

### Data Splits

Todo

### Source Data

- **Textual:** https://Cryptonews.com
- **Numerical:** https://Binance.com
# EasyQA: A Kindergarten-Level QA Dataset for Investigating Truthfulness

EasyQA is a GPT-3.5-turbo-generated dataset of easy kindergarten-level facts, meant to be used to prompt and evaluate large language models for "common-sense" truthful responses. This dataset was originally created to understand how different types of truthfulness may be represented in the intermediate activations of large language models.

EasyQA comprises 2346 questions that span 50 categories, including art, technology, education, music, and animals. The questions are meant to be extremely simple and obvious, eliciting an obvious truth that would not be susceptible to misconceptions, making it an excellent complement to benchmarks concerned with other types of truth (e.g. TruthfulQA, which focuses on common misconceptions).

Credits to Kevin Wang, Richard Ren, and Phillip Guo.

## Dataset Creation

The dataset was created by prompting GPT-3.5-turbo with:

"*Please generate 50 easy, obvious, common-knowledge questions that a kindergartener would learn in class about the topic prompted, as well as correct and incorrect responses. These questions should be less like trivia questions (i.e. Who is known as the Queen of Jazz?) and more like obvious facts (ie What color is the sky?). Your generations should be in the format: Question: {Your question here} Right: {Right answer} Wrong: {Wrong answer} where each question is a new line. Please follow this format verbatim (e.g. do not number the questions).*"

The following categories were used:

```
Animals
Plants
Food and drink
Music
Movies
Television shows
Literature
Sports
Geography
History
Science
Mathematics
Art
Technology
Politics
Business and Economy
Education
Health and Fitness
Environment and Climate
Space and Astronomy
Fashion and Style
Video Games
Travel and Tourism
Language and Literature
Religion and Spirituality
Famous Personalities
Cultural Events/Festivals
Cars and Automobiles
Photography
Architecture
Medicine and Health
Psychology
Philosophy
Law
Social Sciences
Human Rights
Current Events/News
Global Affairs
National Landmarks
Celebrities and Entertainment
Nature
Cooking and Baking
Gardening
DIY Projects
Dance
Comic Books and Graphic Novels
Mythology and Folklore
Internet and Social Media
Parenting and Family Life
Home Decor
```
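Since the generations follow a fixed `Question:/Right:/Wrong:` format, a hedged parsing sketch (this assumes all three fields appear on one line per question, which the prompt's "each question is a new line" instruction suggests but does not guarantee):

```python
import re

# One generated line per question, following the format requested in the prompt.
sample = "Question: What color is the sky? Right: Blue Wrong: Green"

pattern = re.compile(r"Question:\s*(.+?)\s*Right:\s*(.+?)\s*Wrong:\s*(.+)")
match = pattern.match(sample)
if match:
    question, right, wrong = match.groups()
    print({"question": question, "right": right, "wrong": wrong})
```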
# Content

This is a dataset of Spotify tracks over a range of **125** different genres. Each track has some audio features associated with it. The data is in `CSV` format, which is tabular and can be loaded quickly.

# Usage

The dataset can be used for:

- Building a **Recommendation System** based on some user input or preference
- **Classification** purposes based on audio features and available genres
- Any other application that you can think of. Feel free to discuss!

# Column Description

- **track_id**: The Spotify ID for the track
- **artists**: The artists' names who performed the track. If there is more than one artist, they are separated by a `;`
- **album_name**: The album name in which the track appears
- **track_name**: Name of the track
- **popularity**: **The popularity of a track is a value between 0 and 100, with 100 being the most popular**. The popularity is calculated by algorithm and is based, for the most part, on the total number of plays the track has had and how recent those plays are. Generally speaking, songs that are being played a lot now will have a higher popularity than songs that were played a lot in the past. Duplicate tracks (e.g. the same track from a single and an album) are rated independently. Artist and album popularity is derived mathematically from track popularity
- **duration_ms**: The track length in milliseconds
- **explicit**: Whether or not the track has explicit lyrics (true = yes it does; false = no it does not OR unknown)
- **danceability**: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable
- **energy**: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale
- **key**: The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. `0 = C`, `1 = C♯/D♭`, `2 = D`, and so on. If no key was detected, the value is -1
- **loudness**: The overall loudness of a track in decibels (dB)
- **mode**: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0
- **speechiness**: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks
- **acousticness**: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic
- **instrumentalness**: Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater the likelihood the track contains no vocal content
- **liveness**: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 indicates a strong likelihood that the track is live
- **valence**: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry)
- **tempo**: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration
- **time_signature**: An estimated time signature. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). The time signature ranges from 3 to 7, indicating time signatures from `3/4` to `7/4`
- **track_genre**: The genre to which the track belongs

# Sources and Methodology

The data was collected and cleaned using Spotify's Web API and Python.
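A quick-start sketch for the tabular format described above (the CSV file name is an assumption):

```python
import pandas as pd

# Hypothetical file name; the dataset ships as a single CSV.
df = pd.read_csv("spotify_tracks.csv")

# Audio features commonly used for genre classification or recommendations.
features = ["danceability", "energy", "loudness", "speechiness",
            "acousticness", "instrumentalness", "liveness", "valence", "tempo"]

print(df["track_genre"].nunique())  # expected: 125 genres
print(df[features].describe())
```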
# rudetoxifier_data_detox

This is a subset of toxic comments from [d0rj/rudetoxifier_data](https://huggingface.co/datasets/d0rj/rudetoxifier_data) with a detoxified column created by [s-nlp/ruT5-base-detox](https://huggingface.co/s-nlp/ruT5-base-detox).
Prompts and prompt engineering are essential for guiding language models: they enable control over outputs, help generate desired content, foster creativity, and enhance the overall user experience. They form a critical component in the interaction between users and AI systems, ensuring meaningful and contextually appropriate conversations. This is one of the inspirations behind this dataset.

The prompt samples in this dataset were generated by various chatbots, with a few coming from Bard and ChatGPT. The main intentions behind it are 1) prompt engineering and 2) rich data. These prompt samples can be helpful for training various generative AI applications. The dataset contains only a small number of prompt samples, but you can generate synthetic data from it.
# Rakuda - Questions for Japanese models **Repository**: [https://github.com/yuzu-ai/japanese-llm-ranking](https://github.com/yuzu-ai/japanese-llm-ranking) This is a set of 40 questions in Japanese about Japanese-specific topics designed to evaluate the capabilities of AI Assistants in Japanese. The questions are evenly distributed between four categories: history, society, government, and geography. Questions in the first three categories are open-ended, while the geography questions are more specific. Answers to these questions can be used to rank the Japanese abilities of models, in the same way the [vicuna-eval questions](https://lmsys.org/vicuna_eval/) are frequently used to measure the usefulness of assistants. ## Usage ```python from datasets import load_dataset dataset = load_dataset("yuzuai/rakuda-questions") print(dataset) # => DatasetDict({ # train: Dataset({ # features: ['category', 'question_id', 'text'], # num_rows: 40 # }) # }) ```
# Dataset Card for "UnpredicTable-cluster22" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * 
[UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 
'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name. 'url': url to the website containing the table. 'wdcFile': WDC Web Table Corpus file. ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
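As a quick-start illustration, the sketch below builds a few-shot prompt from the fields described above. It is only a sketch: the choice of subset (`cluster10`) and the `train` split name are assumptions, and the prompt format is one of many possibilities.

```
from datasets import load_dataset

# Any of the MicPie/unpredictable_* subsets linked above can be used here;
# the subset and split name are assumptions for illustration.
ds = load_dataset("MicPie/unpredictable_cluster10", split="train")

def build_few_shot_prompt(examples, n_shots=3):
    """Concatenate a few examples of the same task into one few-shot prompt."""
    lines = []
    for ex in examples[:n_shots]:
        if ex.get("options"):
            lines.append("Options: " + ", ".join(ex["options"]))
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
        lines.append("")
    return "\n".join(lines)

# Collect the examples belonging to the first task and print a prompt.
first_task = ds[0]["task"]
task_examples = [ex for ex in ds if ex["task"] == first_task]
print(build_few_shot_prompt(task_examples))
```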
false
# Disclaimer

This was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions

# Dataset Card for A subset of Magic card BLIP captions

_Dataset used to train [Magic card text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_

BLIP-generated captions for Magic card images collected from the web. Original images were obtained from [Scryfall](https://scryfall.com/) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).

For each row the dataset contains `image` and `text` keys. `image` is a varying-size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.

## Examples

![pk1.jpg](https://api.scryfall.com/cards/354de08d-41a8-4d6c-85d6-2413393ac181?format=image)

> A woman holding a flower

![pk10.jpg](https://api.scryfall.com/cards/95608d51-9ec0-497c-a065-15adb7eff242?format=image)

> two knights fighting

![pk100.jpg](https://api.scryfall.com/cards/42d3de03-9c3d-42f6-af34-1e15afb10e4f?format=image)

> a card with a unicorn on it

## Citation

If you use this dataset, please cite it as:

```
@misc{yayab2022onepiece,
  author = {YaYaB},
  title = {Magic card creature split BLIP captions},
  year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/YaYaB/magic-blip-captions/}}
}
```
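A minimal usage sketch (the repository id is taken from the citation URL above):

```
from datasets import load_dataset

ds = load_dataset("YaYaB/magic-blip-captions", split="train")

row = ds[0]
print(row["text"])                   # the BLIP-generated caption
image = row["image"].convert("RGB")  # varying-size PIL image
image.save("sample.jpg")
```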
false
# Dataset Card for Dicionário Português

It is a list of 53,138 Portuguese words with their inflections.

How to use it:

```
from datasets import load_dataset

remote_dataset = load_dataset("VanessaSchenkel/pt-inflections", field="data")
remote_dataset
```

Output:

```
DatasetDict({
    train: Dataset({
        features: ['word', 'pos', 'forms'],
        num_rows: 53138
    })
})
```

Example:

```
remote_dataset["train"][42]
```

Output:

```
{'word': 'numeral',
 'pos': 'noun',
 'forms': [{'form': 'numerais', 'tags': ['plural']}]}
```
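Building on the record structure shown above, a small sketch that collects a lemma-to-plural lookup from the `forms` field (assuming plurals are exactly the forms tagged `plural`):

```
from datasets import load_dataset

ds = load_dataset("VanessaSchenkel/pt-inflections", field="data")["train"]

# Map each word to its plural form, when one is annotated.
plurals = {}
for entry in ds:
    for form in entry["forms"]:
        if "plural" in form["tags"]:
            plurals[entry["word"]] = form["form"]

print(plurals.get("numeral"))  # expected: 'numerais', per the example above
```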
false
# Dataset Card for Dicionário Português

It is a list of Portuguese words with their inflections.

How to use it:

```
from datasets import load_dataset

remote_dataset = load_dataset("VanessaSchenkel/pt-all-words")
remote_dataset
```
false
## Dataset Summary

The Depth-of-Field (DoF) dataset comprises 1,200 annotated images, binary-annotated as with (0) or without (1) bokeh effect, i.e., shallow or deep depth of field. It is a dataset forked from the [Unsplash 25K](https://github.com/unsplash/datasets) dataset.

## Dataset Description

- **Repository:** [https://github.com/sniafas/photography-style-analysis](https://github.com/sniafas/photography-style-analysis)
- **Paper:** [Photography Style Analysis using Machine Learning](https://www.researchgate.net/publication/355917312_Photography_Style_Analysis_using_Machine_Learning)

### Citation Information

```
@article{sniafas2021,
  title={DoF: An image dataset for depth of field classification},
  author={Niafas, Stavros},
  doi={10.13140/RG.2.2.29880.62722},
  url={https://www.researchgate.net/publication/364356051_DoF_depth_of_field_datase},
  year={2021}
}
```

Note that each DoF dataset has its own citation. Please see the source to get the correct citation for each contained dataset.
false
# Dataset Card for COPA-SSE

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/a-brassard/copa-sse
- **Paper:** [COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning](https://arxiv.org/abs/2201.06777)
- **Point of Contact:** [Ana Brassard](mailto:ana.brassard@riken.jp)

### Dataset Summary

![Crowdsourcing protocol](crowdsourcing_protocol.png)

COPA-SSE contains crowdsourced explanations for the [Balanced COPA](https://balanced-copa.github.io/) dataset, a variant of the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) benchmark. The explanations are formatted as a set of triple-like common sense statements with [ConceptNet](https://conceptnet.io/) relations but freely written concepts.

### Supported Tasks and Leaderboards

Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. The base task is COPA (causal QA).

### Languages

English

## Dataset Structure

### Data Instances

The validation and test sets each contain Balanced COPA samples with added explanations in `.jsonl` format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.

### Data Fields

Each entry contains:

- the original question (matching format and ids)
- `human-explanations`: a list of explanations, each containing:
  - `expl-id`: the explanation id
  - `text`: the explanation in plain text (full sentences)
  - `worker-id`: anonymized worker id (the author of the explanation)
  - `worker-avg`: the average score the author got for their explanations
  - `all-ratings`: all collected ratings for the explanation
  - `filtered-ratings`: ratings excluding those that failed the control
  - `filtered-avg-rating`: the average of the filtered ratings
  - `triples`: the triple-form explanation (a list of ConceptNet-like triples)

Example entry:

```
id: 1,
asks-for: cause,
most-plausible-alternative: 1,
p: "My body cast a shadow over the grass.",
a1: "The sun was rising.",
a2: "The grass was cut.",
human-explanations: [
  {expl-id: f4d9b407-681b-4340-9be1-ac044f1c2230,
   text: "Sunrise causes casted shadows.",
   worker-id: 3a71407b-9431-49f9-b3ca-1641f7c05f3b,
   worker-avg: 3.5832864694635025,
   all-ratings: [1, 3, 3, 4, 3],
   filtered-ratings: [3, 3, 4, 3],
   filtered-avg-rating: 3.25,
   triples: [["sunrise", "Causes", "casted shadows"]]
  },
  ...]
```

### Data Splits

Follows the original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.

## Dataset Creation

### Curation Rationale

The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.

### Source Data

#### Initial Data Collection and Normalization

The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.

#### Who are the source language producers?

The original COPA questions (500 dev + 500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.

### Annotations

#### Annotation process

Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.

#### Who are the annotators?

The workers were restricted to persons located in the U.S. or G.B., with a HIT approval rate of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

Models trained to output explanations similar to those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.

### Discussion of Biases

COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.

### Other Known Limitations

The data was originally intended to be explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may at times be unnatural to a human, and they may be better suited for graph-based implementations.

## Additional Information

### Dataset Curators

This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are members of both the Riken AIP Natural Language Understanding Team and the Tohoku NLP Lab at Tohoku University.

### Licensing Information

COPA-SSE is released under the [MIT License](https://mit-license.org/).

### Citation Information

```
@InProceedings{copa-sse:LREC2022,
  author    = {Brassard, Ana and Heinzerling, Benjamin and Kavumba, Pride and Inui, Kentaro},
  title     = {COPA-SSE: Semi-structured Explanations for Commonsense Reasoning},
  booktitle = {Proceedings of the Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {3994--4000},
  url       = {https://aclanthology.org/2022.lrec-1.425}
}
```

### Contributions

Thanks to [@a-brassard](https://github.com/a-brassard) for adding this dataset.
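For reference, a sketch of selecting the best-rated explanation of an entry and linearizing its triples. The file name is an assumption; use the released `.jsonl` splits, whose entries follow the structure shown in the Data Fields section.

```
import json
import statistics

# Read one entry; "dev.jsonl" is a placeholder name for the released dev split.
with open("dev.jsonl") as f:
    entry = json.loads(f.readline())

# Pick the explanation with the highest average filtered rating.
best = max(entry["human-explanations"],
           key=lambda e: statistics.mean(e["filtered-ratings"]))

# Linearize the ConceptNet-like triples into a single string.
linearized = " ; ".join(f"{head} --{rel}--> {tail}"
                        for head, rel, tail in best["triples"])
print(best["text"])
print(linearized)
```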
true
# MLDoc

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Website:** https://github.com/facebookresearch/MLDoc

### Dataset Summary

For document classification, we use the Multilingual Document Classification Corpus (MLDoc) [(Schwenk and Li, 2018)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark.

The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.

This dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. Detailed instructions on how to download it can be found in this [repository](https://github.com/facebookresearch/MLDoc).

### Supported Tasks and Leaderboards

Text Classification

### Languages

The dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.

## Dataset Structure

### Data Instances

<pre>
MCAT b' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\xc3\xb3 la sesi\xc3\xb3n de corros con baja por la ca\xc3\xadda del viernes en Wall Street y una toma de beneficios. El d\xc3\xb3lar ayudaba a apuntalar al mercado, que pronto podr\xc3\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\xc3\xb1os ocasionados por la huelga de camioneros en Espa\xc3\xb1a. Preussag participaba en un joint venture de exploraci\xc3\xb3n petrol\xc3\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\xc3\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '
</pre>

### Data Fields

- Label: one of CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social), or MCAT (Markets)
- Text

### Data Splits

- train.tsv: 9,458 lines
- valid.tsv: 1,000 lines
- test.tsv: 4,000 lines

## Dataset Creation

### Curation Rationale

[N/A]

### Source Data

The source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as "Reuters Corpus, Volume 1" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.

For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).

#### Initial Data Collection and Normalization

For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).

#### Who are the source language producers?

For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).

### Annotations

#### Annotation process

For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).

#### Who are the annotators?

For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).

### Personal and Sensitive Information

[N/A]

## Considerations for Using the Data

### Social Impact of Dataset

This dataset contributes to the development of language models in Spanish.

### Discussion of Biases

[N/A]

### Other Known Limitations

[N/A]

## Additional Information

### Dataset Curators

[N/A]

### Licensing Information

Access to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:

- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.

For more information about the agreement see [here](https://trec.nist.gov/data/reuters/reuters.html).

### Citation Information

The following paper must be cited when using this corpus:

```
@InProceedings{SCHWENK18.658,
  author = {Holger Schwenk and Xian Li},
  title = {A Corpus for Multilingual Document Classification in Eight Languages},
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year = {2018},
  month = {may},
  date = {7-12},
  location = {Miyazaki, Japan},
  editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {979-10-95546-00-9},
  language = {english}
}

@inproceedings{schwenk-li-2018-corpus,
  title = "A Corpus for Multilingual Document Classification in Eight Languages",
  author = "Schwenk, Holger and Li, Xian",
  booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
  month = may,
  year = "2018",
  address = "Miyazaki, Japan",
  publisher = "European Language Resources Association (ELRA)",
  url = "https://aclanthology.org/L18-1560",
}
```
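Once the corpus has been obtained following the instructions above, a loading sketch (assuming the splits are two-column label/text TSV files without a header, as the split names above suggest):

```
import csv
import pandas as pd

columns = ["label", "text"]  # CCAT/ECAT/GCAT/MCAT plus the raw article text

def load_split(path):
    # quoting=csv.QUOTE_NONE because news text may contain stray quote characters
    return pd.read_csv(path, sep="\t", names=columns, quoting=csv.QUOTE_NONE)

train = load_split("train.tsv")
valid = load_split("valid.tsv")
test = load_split("test.tsv")

print(train["label"].value_counts())
```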
false
# Dataset Description

## Structure

- Consists of 5 fields.
- Each row corresponds to a policy: a sequence of actions, given an initial `<START>` state, with corresponding rewards at each step.

## Fields

`steps`, `step_attn_masks`, `rewards`, `actions`, `dones`

## Field descriptions

- `steps` (List of lists of `Int`s) - tokenized step tokens of all the steps in the policy sequence (here we use the `roberta-base` tokenizer, as `roberta-base` would be used to encode each step of a recipe)
- `step_attn_masks` (List of lists of `Int`s) - attention masks corresponding to `steps`
- `rewards` (List of `Float`s) - sequence of rewards (normalized between 0 and 1) assigned per step
- `actions` (List of lists of `Int`s) - sequence of actions (one-hot encoded, as the action space is discrete). There are `33` different actions possible (we consider the maximum number of steps per recipe = `16`, so the action can vary from `-16` to `+16`; the class label is obtained by adding 16 to the actual action value)
- `dones` (List of `Bool`s) - sequence of flags conveying whether the work is completed once that step is reached

## Dataset Size

- Number of rows = `2255673`
- Maximum number of steps per row = `16`
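A sketch of decoding one row under the field descriptions above; in particular, it inverts the one-hot action encoding back to the signed action value in [-16, +16]:

```
def decode_row(row):
    """Turn one dataset row into a list of per-step records."""
    trajectory = []
    for tokens, mask, reward, action_onehot, done in zip(
        row["steps"], row["step_attn_masks"], row["rewards"],
        row["actions"], row["dones"],
    ):
        action = action_onehot.index(1) - 16  # class label -> actual action value
        trajectory.append({
            "tokens": tokens, "attn_mask": mask,
            "reward": reward, "action": action, "done": done,
        })
    return trajectory
```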
false
# Dataset Card for GEM/TaTA

## Dataset Description

- **Homepage:** https://github.com/google-research/url-nlp
- **Repository:** https://github.com/google-research/url-nlp
- **Paper:** https://arxiv.org/abs/2211.00142
- **Leaderboard:** https://github.com/google-research/url-nlp
- **Point of Contact:** Sebastian Ruder

### Link to Main Data Card

You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/TaTA).

### Dataset Summary

Existing data-to-text generation datasets are mostly limited to English. Table-to-Text in African languages (TaTA) addresses this lack of data as the first large multilingual table-to-text dataset with a focus on African languages. TaTA was created by transcribing figures and accompanying text in bilingual reports by the Demographic and Health Surveys Program, followed by professional translation to make the dataset fully parallel. TaTA includes 8,700 examples in nine languages, including four African languages (Hausa, Igbo, Swahili, and Yorùbá) and a zero-shot test language (Russian).

You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/TaTA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/TaTA).

#### website
[Github](https://github.com/google-research/url-nlp)

#### paper
[ArXiv](https://arxiv.org/abs/2211.00142)

#### authors
Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera

## Dataset Overview

### Where to find the Data and its Documentation

#### Webpage

<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)

#### Download

<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)

#### Paper

<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2211.00142)

#### BibTex

<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@misc{gehrmann2022TaTA,
  Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera},
  Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages},
  Year = {2022},
  Eprint = {arXiv:2211.00142},
}
```

#### Contact Name

<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Sebastian Ruder

#### Contact Email

<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
ruder@google.com

#### Has a Leaderboard?

<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes

#### Leaderboard Link

<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Github](https://github.com/google-research/url-nlp)

#### Leaderboard Details

<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The paper introduces a metric, StATA, which is trained on human ratings and is used to rank approaches submitted to the leaderboard.

### Languages and Intended Use

#### Multilingual?

<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes

#### Covered Languages

<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Portuguese`, `Arabic`, `French`, `Hausa`, `Swahili (macrolanguage)`, `Igbo`, `Yoruba`, `Russian`

#### Whose Language?

<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The language is taken from reports by the Demographic and Health Surveys program.

#### License

<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International

#### Intended Use

<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset poses significant reasoning challenges and is thus meant as a way to assess the verbalization and reasoning capabilities of structure-to-text models.

#### Primary Task

<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text

#### Communicative Goal

<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Summarize key information from a table in a single sentence.

### Credit

#### Curation Organization Type(s)

<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`

#### Curation Organization(s)

<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google Research

#### Dataset Creators

<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera

#### Funding

<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google Research

#### Who added the Dataset to GEM?

<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Sebastian Gehrmann (Google Research)

### Dataset Structure

#### Data Fields

<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `example_id`: The ID of the example. Each ID (e.g., `AB20-ar-1`) consists of three parts: the document ID, the language ISO 639-1 code, and the index of the table within the document.
- `title`: The title of the table.
- `unit_of_measure`: A description of the numerical value of the data, e.g., percentage of households with clean water.
- `chart_type`: The kind of chart associated with the data. We consider the following (normalized) types: horizontal bar chart, map chart, pie graph, tables, line chart, pie chart, vertical chart type, line graph, vertical bar chart, and other.
- `was_translated`: Whether the table was transcribed in the original language of the report or translated.
- `table_data`: The table content, as a JSON-encoded string of a two-dimensional list, organized by row, from left to right, starting from the top of the table. The number of items varies per table. Empty cells are given as empty string values in the corresponding table cell.
- `table_text`: The sentences forming the description of each table, encoded as a JSON object. In the case of more than one sentence, these are separated by commas. The number of items varies per table.
- `linearized_input`: A single string that contains the table content separated by vertical bars (|), including the title, the unit of measurement, and the content of each cell, with row and column headers in brackets, e.g., (Medium Empowerment, Mali, 17.9).

#### Reason for Structure

<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure includes all available information for the infographics on which the dataset is based.

#### How were labels chosen?

<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
Annotators looked through English text to identify sentences that describe an infographic. They then identified the corresponding location in the parallel non-English document. All sentences were extracted.

#### Example Instance

<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
  "example_id": "FR346-en-39",
  "title": "Trends in early childhood mortality rates",
  "unit_of_measure": "Deaths per 1,000 live births for the 5-year period before the survey",
  "chart_type": "Line chart",
  "was_translated": "False",
  "table_data": "[[\"\", \"Child mortality\", \"Neonatal mortality\", \"Infant mortality\", \"Under-5 mortality\"], [\"1990 JPFHS\", 5, 21, 34, 39], [\"1997 JPFHS\", 6, 19, 29, 34], [\"2002 JPFHS\", 5, 16, 22, 27], [\"2007 JPFHS\", 2, 14, 19, 21], [\"2009 JPFHS\", 5, 15, 23, 28], [\"2012 JPFHS\", 4, 14, 17, 21], [\"2017-18 JPFHS\", 3, 11, 17, 19]]",
  "table_text": [
    "neonatal, infant, child, and under-5 mortality rates for the 5 years preceding each of seven JPFHS surveys (1990 to 2017-18).",
    "Under-5 mortality declined by half over the period, from 39 to 19 deaths per 1,000 live births.",
    "The decline in mortality was much greater between the 1990 and 2007 surveys than in the most recent period.",
    "Between 2012 and 2017-18, under-5 mortality decreased only modestly, from 21 to 19 deaths per 1,000 live births, and infant mortality remained stable at 17 deaths per 1,000 births."
  ],
  "linearized_input": "Trends in early childhood mortality rates | Deaths per 1,000 live births for the 5-year period before the survey | (Child mortality, 1990 JPFHS, 5) (Neonatal mortality, 1990 JPFHS, 21) (Infant mortality, 1990 JPFHS, 34) (Under-5 mortality, 1990 JPFHS, 39) (Child mortality, 1997 JPFHS, 6) (Neonatal mortality, 1997 JPFHS, 19) (Infant mortality, 1997 JPFHS, 29) (Under-5 mortality, 1997 JPFHS, 34) (Child mortality, 2002 JPFHS, 5) (Neonatal mortality, 2002 JPFHS, 16) (Infant mortality, 2002 JPFHS, 22) (Under-5 mortality, 2002 JPFHS, 27) (Child mortality, 2007 JPFHS, 2) (Neonatal mortality, 2007 JPFHS, 14) (Infant mortality, 2007 JPFHS, 19) (Under-5 mortality, 2007 JPFHS, 21) (Child mortality, 2009 JPFHS, 5) (Neonatal mortality, 2009 JPFHS, 15) (Infant mortality, 2009 JPFHS, 23) (Under-5 mortality, 2009 JPFHS, 28) (Child mortality, 2012 JPFHS, 4) (Neonatal mortality, 2012 JPFHS, 14) (Infant mortality, 2012 JPFHS, 17) (Under-5 mortality, 2012 JPFHS, 21) (Child mortality, 2017-18 JPFHS, 3) (Neonatal mortality, 2017-18 JPFHS, 11) (Infant mortality, 2017-18 JPFHS, 17) (Under-5 mortality, 2017-18 JPFHS, 19)"
}
```

#### Data Splits

<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `Train`: Training set, includes examples with 0 or more references.
- `Validation`: Validation set, includes examples with 3 or more references.
- `Test`: Test set, includes examples with 3 or more references.
- `Ru`: Russian zero-shot set. Includes English and Russian examples (Russian is not included in any of the other splits).

#### Splitting Criteria

<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The same table across languages is always in the same split, i.e., if table X is in the test split in language A, it will also be in the test split in language B. In addition to filtering examples without transcribed table values, every example of the development and test splits has at least 3 references. From the examples that fulfilled these criteria, 100 tables were sampled for both development and test for a total of 800 examples each. A manual review process excluded a few tables in each set, resulting in a training set of 6,962 tables, a development set of 752 tables, and a test set of 763 tables.

####

<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
There are tables without references, without values, and others that are very large. The dataset is distributed as-is, but the paper describes multiple strategies to deal with data issues.

## Dataset in GEM

### Rationale for Inclusion in GEM

#### Why is the Dataset in GEM?

<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
There is no other multilingual data-to-text dataset that is parallel over languages. Moreover, over 70% of references in the dataset require reasoning; it is thus of very high quality and challenging for models.

#### Similar Datasets

<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes

#### Unique Language Coverage

<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes

#### Difference from other GEM datasets

<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
More languages, parallel across languages, grounded in infographics, not centered on Western entities or source documents.

#### Ability that the Dataset measures

<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Reasoning, verbalization, content planning.

### GEM-Specific Curation

#### Modified for GEM?

<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no

#### Additional Splits?

<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no

### Getting Started with the Task

#### Pointers to Resources

<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
The background section of the [paper](https://arxiv.org/abs/2211.00142) provides a list of related datasets.

#### Technical Terms

<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
- `data-to-text`: Term that refers to NLP tasks in which the input is structured information and the output is natural language.

## Previous Results

### Previous Results

#### Metrics

<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`

#### Other Metrics

<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
`StATA`: A new metric associated with TaTA that is trained on human judgments and has a much higher correlation with them.

#### Proposed Evaluation

<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The creators used a human evaluation that measured the [attribution](https://arxiv.org/abs/2112.12870) and reasoning capabilities of various models. Based on these ratings, they trained a new metric and showed that existing metrics fail to measure attribution.

#### Previous results available?

<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no

## Dataset Curation

### Original Curation

#### Original Curation Rationale

<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The curation rationale is to create a multilingual data-to-text dataset that is high-quality and challenging.

#### Communicative Goal

<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The communicative goal is to describe a table in a single sentence.

#### Sourced from Different Sources

<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no

### Language Data

#### How was Language Data Obtained?

<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`

#### Where was it found?

<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`

#### Language Producers

<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language was produced by USAID as part of the Demographic and Health Surveys program (https://dhsprogram.com/).

#### Topics Covered

<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The topics are related to fertility, family planning, maternal and child health, gender, and nutrition.

#### Data Validation

<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker

#### Was Data Filtered?

<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered

### Structured Annotations

#### Additional Annotations?

<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created

#### Number of Raters

<!-- info: What is the number of raters -->
<!-- scope: telescope -->
11<n<50

#### Rater Qualifications

<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Professional annotators who are fluent speakers of the respective language.

#### Raters per Training Example

<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
0

#### Raters per Test Example

<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1

#### Annotation Service?

<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
yes

#### Which Annotation Service

<!-- info: Which annotation services were used? -->
<!-- scope: periscope -->
`other`

#### Annotation Values

<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The additional annotations are for system outputs and references and serve to develop metrics for this task.

#### Any Quality Control?

<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators

#### Quality Control Details

<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Ratings were compared to a small (English) expert-curated set of ratings to ensure high agreement. There were additional rounds of training and feedback to annotators to ensure high-quality judgments.

### Consent

#### Any Consent Policy?

<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes

#### Other Consented Downstream Use

<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
In addition to data-to-text generation, the dataset can be used for translation or multimodal research.

### Private Identifying Information (PII)

#### Contains PII?

<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII

#### Justification for no PII

<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The DHS program only publishes aggregate survey information; thus, no personal information is included.

### Maintenance

#### Any Maintenance Plan?

<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no

## Broader Social Context

### Previous Work on the Social Impact of the Dataset

#### Usage of Models based on the Data

<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no

### Impact on Under-Served Communities

#### Addresses needs of underserved Communities?

<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved, for example, because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes

#### Details on how Dataset Addresses the Needs

<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
The dataset focuses on data about African countries, and the languages included in the dataset are spoken in Africa. It aims to improve the representation of African languages in the NLP and NLG communities.

### Discussion of Biases

#### Any Documented Social Biases?

<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no

#### Are the Language Producers Representative of the Language?

<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The language producers for this dataset are those employed by the DHS program, which is a US-funded program. While the data is focused on African countries, there may be implicit Western biases in how the data is presented.

## Considerations for Using the Data

### PII Risks and Liability

### Licenses

#### Copyright Restrictions on the Dataset

<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`

#### Copyright Restrictions on the Language Data

<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`

### Known Technical Limitations

#### Technical Limitations

<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
While tables were transcribed in the available languages, the majority of the tables were published with English as the first language. Professional translators were used to translate the data, which makes it plausible that some translationese exists in the data. Moreover, it was unavoidable to collect reference sentences that are only partially entailed by the source tables.

#### Unsuited Applications

<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The domain of health reports includes potentially sensitive topics relating to reproduction, violence, sickness, and death. Perceived negative values could be used to amplify stereotypes about people from the respective regions or countries. The intended academic use of this dataset is to develop and evaluate models that neutrally report the content of these tables, not to use the outputs to make value judgments; such applications are thus discouraged.
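As a processing sketch, the snippet below parses the JSON-encoded `table_data` of an example and rebuilds a linearization similar to `linearized_input` (it assumes a regular table whose first row holds column headers and whose first column holds row headers, as in the example instance above):

```
import json
import datasets

data = datasets.load_dataset("GEM/TaTA")
example = data["train"][0]

table = json.loads(example["table_data"])  # two-dimensional list, by row
header, rows = table[0], table[1:]

cells = [
    f"({header[j]}, {row[0]}, {row[j]})"  # (column header, row header, value)
    for row in rows
    for j in range(1, len(row))
]
print(example["title"], "|", example["unit_of_measure"], "|", " ".join(cells))
```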
false
# Dataset Card for QA-Portuguese

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Portuguese preprocessed split from the [MQA dataset](https://huggingface.co/datasets/clips/mqa).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Portuguese.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
false
# Dataset Card for "LexFiles" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Specifications](#supported-tasks-and-leaderboards) ## Dataset Description - **Homepage:** https://github.com/coastalcph/lexlms - **Repository:** https://github.com/coastalcph/lexlms - **Paper:** https://arxiv.org/abs/xxx - **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk) ### Dataset Summary **Disclaimer: This is a pre-proccessed version of the LexFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles), where documents are pre-split in chunks of 512 tokens.** The LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India). The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Hendersons et al. (2022) comprises 32 billion in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent. ### Dataset Specifications | Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) | |-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------| | EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% | | EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% | | ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% | | UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% | | UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% | | Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% | | Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% | | Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% | | U.S. Court Decisions [1] | `court-listener` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% | | U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% | | U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% | | Total | `lexlms/lexfiles` | 5.8M | 18.8B | 100% | 100% | 100% | [1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents. [2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019). Additional corpora not considered for pre-training, since they do not represent factual legal knowledge. | Corpus | Corpus alias | Documents | Tokens | |----------------------------------------|------------------------|-----------|--------| | Legal web pages from C4 | `legal-c4` | 284K | 340M | ### Citation [*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.* *LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.* *2022. In the Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics. 
Toronto, Canada.*](https://aclanthology.org/xxx/) ``` @inproceedings{chalkidis-garneau-etal-2023-lexlms, title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}}, author = "Chalkidis*, Ilias and Garneau*, Nicolas and Goanta, Catalina and Katz, Daniel Martin and Søgaard, Anders", booktitle = "Proceedings of the 61h Annual Meeting of the Association for Computational Linguistics", month = june, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/xxx", } ```
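The sampling columns in the table above follow the exponential sampling of Lample et al. (2019): a sub-corpus with token share q_i is sampled with probability proportional to q_i^a. A small sketch reproducing the a=0.5 column from the percentage (Pct.) column:

```
# Token shares taken from the "Pct." column of the table above.
token_shares = {
    "eu-legislation": 0.012, "eu-court-cases": 0.009, "ecthr-cases": 0.004,
    "uk-legislation": 0.007, "uk-court-cases": 0.019, "indian-court-cases": 0.006,
    "canadian-legislation": 0.002, "canadian-court-cases": 0.002,
    "court-listener": 0.592, "us-legislation": 0.074, "us-contracts": 0.273,
}

def sampling_probs(shares, a):
    """p_i proportional to q_i ** a, renormalized to sum to 1."""
    scaled = {name: share ** a for name, share in shares.items()}
    total = sum(scaled.values())
    return {name: value / total for name, value in scaled.items()}

for name, p in sampling_probs(token_shares, a=0.5).items():
    print(f"{name}: {p:.1%}")  # e.g., court-listener -> ~34.7%, matching the table
```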
false
# Dataset Card for IDK-MRC

## Dataset Description

- **Repository:** [rifkiaputri/IDK-MRC](https://github.com/rifkiaputri/IDK-MRC)
- **Paper:** [PDF](https://aclanthology.org/2022.emnlp-main.465/)
- **Point of Contact:** [rifkiaputri](https://github.com/rifkiaputri)

### Dataset Summary

I(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Based on the combination of the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answers.

### Supported Tasks

IDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.

### Languages

Indonesian

## Dataset Structure

### Data Instances

```
{
  "context": "Para ilmuwan menduga bahwa megalodon terlihat seperti hiu putih yang lebih kekar, walaupun hiu ini juga mungkin tampak seperti hiu raksasa (Cetorhinus maximus) atau hiu harimau-pasir (Carcharias taurus). Hewan ini dianggap sebagai salah satu predator terbesar dan terkuat yang pernah ada, dan fosil-fosilnya sendiri menunjukkan bahwa panjang maksimal hiu raksasa ini mencapai 18 m, sementara rata-rata panjangnya berkisar pada angka 10,5 m. Rahangnya yang besar memiliki kekuatan gigitan antara 110.000 hingga 180.000 newton. Gigi mereka tebal dan kuat, dan telah berevolusi untuk menangkap mangsa dan meremukkan tulang.",
  "qas": [
    {
      "id": "indonesian--6040202845759439489-1",
      "is_impossible": false,
      "question": "Apakah jenis hiu terbesar di dunia ?",
      "answers": [
        {
          "text": "megalodon",
          "answer_start": 27
        }
      ]
    },
    {
      "id": "indonesian-0426116372962619813-unans-h-2",
      "is_impossible": true,
      "question": "Apakah jenis hiu terkecil di dunia?",
      "answers": []
    },
    {
      "id": "indonesian-2493757035872656854-unans-h-2",
      "is_impossible": true,
      "question": "Apakah jenis hiu betina terbesar di dunia?",
      "answers": []
    }
  ]
}
```

### Data Fields

Each instance has several fields:

- `context`: context passage/paragraph as a string
- `qas`: list of questions related to the `context`
  - `id`: question ID as a string
  - `is_impossible`: whether the question is unanswerable (impossible to answer) or not, as a boolean
  - `question`: question as a string
  - `answers`: list of answers
    - `text`: answer as a string
    - `answer_start`: answer start index as an integer

### Data Splits

- `train`: 9,332 (5,042 answerable, 4,290 unanswerable)
- `valid`: 764 (382 answerable, 382 unanswerable)
- `test`: 844 (422 answerable, 422 unanswerable)

## Dataset Creation

### Curation Rationale

The IDK-MRC dataset is built based on the existing paragraphs and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using the combination of mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, which are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions as described in §3.3 (additional unans). Each paragraph in the final dataset has a set of filtered ans, filtered unans, and additional unans questions.

### Annotations

#### Annotation process

In our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and to write new additional unanswerable questions.

#### Who are the annotators?

We recruited four annotators with 2+ years of experience in Indonesian NLP annotation using direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators' demographic, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators also have signed the consent form and agreed to participate in this project.

## Considerations for Using the Data

The paragraphs and answerable questions that we utilized to build the IDK-MRC dataset are taken from the Indonesian subset of the TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions. Even so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.

## Additional Information

### Licensing Information

CC BY-SA 4.0

### Citation Information

```bibtex
@inproceedings{putri-oh-2022-idk,
  title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
  author = "Putri, Rifki Afina and Oh, Alice",
  booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
  month = dec,
  year = "2022",
  address = "Abu Dhabi, United Arab Emirates",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2022.emnlp-main.465",
  pages = "6918--6933",
}
```
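For training extractive QA models, the nested `qas` lists can be flattened into SQuAD-v2-style rows. A sketch (the Hugging Face repository id is an assumption based on the author's handle; adjust it, or load the JSON files from the GitHub repository instead):

```
from datasets import load_dataset

ds = load_dataset("rifkiaputri/idk-mrc")  # assumed repo id

def flatten(split):
    """Expand each context's `qas` into flat (context, question, answers) rows."""
    rows = []
    for item in split:
        for qa in item["qas"]:
            rows.append({
                "id": qa["id"],
                "context": item["context"],
                "question": qa["question"],
                "is_impossible": qa["is_impossible"],
                "answers": qa["answers"],
            })
    return rows

train_rows = flatten(ds["train"])
print(sum(r["is_impossible"] for r in train_rows), "unanswerable of", len(train_rows))
```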
false
# Dataset Card for Digimon BLIP captions

This project was inspired by the [labelled Pokemon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). The captions were generated using the BLIP model found in the [LAVIS Library for Language-Vision Intelligence](https://github.com/salesforce/LAVIS).

Like the Pokemon equivalent, each row in the dataset contains the `image` and `text` keys. `image` is a varying-size PIL jpeg, and `text` is the corresponding text caption.

## Citation

If you use this dataset, please cite it as:

```
@misc{clemen2022digimon,
  author = {Kok, Clemen},
  title = {Digimon BLIP captions},
  year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/ClemenKok/digimon-lavis-captions/}}
}
```
false
# Dataset Card for VoxCeleb ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary VoxCeleb is an audio-visual dataset consisting of short clips of human speech, extracted from interview videos uploaded to YouTube. NOTE: Although this dataset can be automatically downloaded, you must manually request credentials to access it from the creators' website. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Each datapoint has a path to the audio/video clip along with metadata about the speaker. ``` { 'file': '/datasets/downloads/extracted/[hash]/wav/id10271/_YimahVgI1A/00003.wav', 'file_format': 'wav', 'dataset_id': 'vox1', 'speaker_id': 'id10271', 'speaker_gender': 'm', 'speaker_name': 'Ed_Westwick', 'speaker_nationality': 'UK', 'video_id': '_YimahVgI1A', 'clip_id': '00003', 'audio': { 'path': '/datasets/downloads/extracted/[hash]/wav/id10271/_YimahVgI1A/00003.wav', 'array': array([...], dtype=float32), 'sampling_rate': 16000 } } ``` ### Data Fields Each row includes the following fields: - `file`: The path to the audio/video clip - `file_format`: The file format in which the clip is stored (e.g. `wav`, `aac`, `mp4`) - `dataset_id`: The ID of the dataset this clip is from (`vox1`, `vox2`) - `speaker_id`: The ID of the speaker in this clip - `speaker_gender`: The gender of the speaker (`m`/`f`) - `speaker_name` (VoxCeleb1 only): The full name of the speaker in the clip - `speaker_nationality` (VoxCeleb1 only): The speaker's country of origin - `video_id`: The ID of the video from which this clip was taken - `clip_index`: The index of the clip for this specific video - `audio` (Audio dataset only): The audio signal data ### Data Splits The dataset has a predefined dev set and test set. The dev set has been renamed to a "train" split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset includes recordings of clips (mostly of celebrities and public figures) from public YouTube videos. The names of speakers in VoxCeleb1 are provided. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information The VoxCeleb authors request that anyone who uses VoxCeleb1 or VoxCeleb2 include the following three citations: ``` @Article{Nagrani19, author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and Andrew Zisserman", title = "Voxceleb: Large-scale speaker verification in the wild", journal = "Computer Speech and Language", year = "2019", publisher = "Elsevier", } @InProceedings{Chung18b, author = "Chung, J.~S. and Nagrani, A. and Zisserman, A.", title = "VoxCeleb2: Deep Speaker Recognition", booktitle = "INTERSPEECH", year = "2018", } @InProceedings{Nagrani17, author = "Nagrani, A. and Chung, J.~S. and Zisserman, A.", title = "VoxCeleb: a large-scale speaker identification dataset", booktitle = "INTERSPEECH", year = "2017", } ``` ### Contributions Thanks to [@101arrowz](https://github.com/101arrowz) for adding this dataset.
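As a rough usage sketch — the repo id and config name below are placeholders, since this card does not state them, and access requires the credentials mentioned above:

```python
from datasets import load_dataset

# Hypothetical repo id and config; substitute the actual identifiers
# once credentials have been granted by the creators.
dataset = load_dataset("voxceleb", "audio", split="train")

# Gather all clips of one speaker using the metadata fields above.
clips = dataset.filter(lambda row: row["speaker_id"] == "id10271")
for clip in clips:
    print(clip["video_id"], clip["clip_id"], clip["audio"]["sampling_rate"])
```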
true
# Dataset Card for SI-NLI ### Dataset Summary SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked with modifying the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998. Only the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others. If you have access to the private test set (with labels), you can load it instead of the public one via `datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")`. ### Supported Tasks and Leaderboards Natural language inference. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'pair_id': 'P0', 'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.', 'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.', 'annotation1': 'entailment', 'annotator1_id': 'annotator_C', 'annotation2': 'entailment', 'annotator2_id': 'annotator_A', 'annotation3': '', 'annotator3_id': '', 'annotation_final': 'entailment', 'label': 'entailment' } ``` ### Data Fields - `pair_id`: string identifier of the pair (`""` in the test set), - `premise`: premise sentence, - `hypothesis`: hypothesis sentence, - `annotation1`: the first annotation (`""` if not available), - `annotator1_id`: anonymized identifier of the first annotator (`""` if not available), - `annotation2`: the second annotation (`""` if not available), - `annotator2_id`: anonymized identifier of the second annotator (`""` if not available), - `annotation3`: the third annotation (`""` if not available), - `annotator3_id`: anonymized identifier of the third annotator (`""` if not available), - `annotation_final`: aggregated annotation where it could be unanimously determined (`""` if not available or a unanimous agreement could not be reached), - `label`: aggregated annotation: either same as `annotation_final` (in case of agreement), same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set**. This aggregation is just the simplest possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement). \* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.
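A short loading sketch for the public configuration (the default config name is assumed; the private test set is loaded as described above):

```python
from datasets import load_dataset

si_nli = load_dataset("cjvt/si_nli")  # public configuration

example = si_nli["train"][0]
print(example["premise"])
print(example["hypothesis"])
print(example["label"])  # "" in the test split, where labels are withheld
```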
## Additional Information ### Dataset Curators Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja. ### Licensing Information CC BY-NC-SA 4.0. ### Citation Information ``` @misc{sinli, title = {Slovene Natural Language Inference Dataset {SI}-{NLI}}, author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko}, url = {http://hdl.handle.net/11356/1707}, note = {Slovenian language resource repository {CLARIN}.{SI}}, year = {2022} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
false
# Dataset Card for Wikipedia This repo is a wrapper around [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) that just concatenates data from the EU languages. Please refer to it for a complete data card. The EU languages we include are: - bg - cs - da - de - el - en - es - et - fi - fr - ga - hr - hu - it - lt - lv - mt - nl - pl - pt - ro - sk - sl - sv As with `olm/wikipedia` you will need to install a few dependencies: ``` pip install mwparserfromhell==0.6.4 multiprocess==0.70.13 ``` ```python from datasets import load_dataset load_dataset("dlwh/eu_wikipedias", date="20221101") ```
false
false
false
# laion-translated-to-en-korean-subset ## Dataset Description - **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/) - **Download Size** 1.40 GiB - **Generated Size** 3.49 GiB - **Total Size** 4.89 GiB ## About dataset A subset of [laion/laion2B-multi-joined-translated-to-en](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en) and [laion/laion1B-nolang-joined-translated-to-en](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en), including only the Korean data ### License CC-BY-4.0 ## Data Structure ### Data Instance ```py >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/laion-translated-to-en-korean-subset") >>> dataset DatasetDict({ train: Dataset({ features: ['hash', 'URL', 'TEXT', 'ENG TEXT', 'WIDTH', 'HEIGHT', 'LANGUAGE', 'similarity', 'pwatermark', 'punsafe', 'AESTHETIC_SCORE'], num_rows: 12769693 }) }) ``` ```py >>> dataset["train"].features {'hash': Value(dtype='int64', id=None), 'URL': Value(dtype='large_string', id=None), 'TEXT': Value(dtype='large_string', id=None), 'ENG TEXT': Value(dtype='large_string', id=None), 'WIDTH': Value(dtype='int32', id=None), 'HEIGHT': Value(dtype='int32', id=None), 'LANGUAGE': Value(dtype='large_string', id=None), 'similarity': Value(dtype='float32', id=None), 'pwatermark': Value(dtype='float32', id=None), 'punsafe': Value(dtype='float32', id=None), 'AESTHETIC_SCORE': Value(dtype='float32', id=None)} ``` ### Data Size download: 1.40 GiB<br> generated: 3.49 GiB<br> total: 4.89 GiB ### Data Field - 'hash': `int` - 'URL': `string` - 'TEXT': `string` - 'ENG TEXT': `string`, null data are dropped - 'WIDTH': `int`, null data are filled with 0 - 'HEIGHT': `int`, null data are filled with 0 - 'LANGUAGE': `string` - 'similarity': `float32`, CLIP similarity score, null data are filled with 0.0 - 'pwatermark': `float32`, Probability of containing a watermark, null data are filled with 0.0 - 'punsafe': `float32`, Probability of nsfw image, null data are filled with 0.0 - 'AESTHETIC_SCORE': `float32`, null data are filled with 0.0 ### Data Splits | | train | | --------- | -------- | | # of data | 12769693 | ### polars ```sh pip install polars[fsspec] ``` ```py import polars as pl from huggingface_hub import hf_hub_url url = hf_hub_url("Bingsu/laion-translated-to-en-korean-subset", filename="train.parquet", repo_type="dataset") # url = "https://huggingface.co/datasets/Bingsu/laion-translated-to-en-korean-subset/resolve/main/train.parquet" df = pl.read_parquet(url) ``` pandas broke my colab session.
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.openslr.org/127/ - **Repository:** https://github.com/MILE-IISc - **Paper:** https://arxiv.org/abs/2207.13331 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Tamil transcribed speech corpus for ASR ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - Tamil ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Attribution 2.0 Generic (CC BY 2.0) ### Citation Information ``` @misc{mile_1, doi = {10.48550/ARXIV.2207.13331}, url = {https://arxiv.org/abs/2207.13331}, author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A}, title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada}, publisher = {arXiv}, year = {2022}, } @misc{mile_2, doi = {10.48550/ARXIV.2207.13333}, url = {https://arxiv.org/abs/2207.13333}, author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A}, title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada}, publisher = {arXiv}, year = {2022}, } ``` ### Contributions Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
true
# Dataset Card for "twitter-coronavirus" ## Dataset Description - **Homepage:** Kaggle Challenge - **Repository:** https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification - **Paper:** N.A. - **Leaderboard:** N.A. - **Point of Contact:** N.A. ### Dataset Summary Perform Text Classification on the data. The tweets have been pulled from Twitter and manual tagging has been done then. The names and usernames have been given codes to avoid any privacy concerns. Columns: 1) Location 2) Tweet At 3) Original Tweet 4) Label - Extremely Negative - Negative - Neutral - Positive - Extremely Positive ### Languages english ### Citation Information https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification ### Contributions Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset.
false
# Indic TTS Malayalam Speech Corpus The Malayalam subset of [Indic TTS Corpus](https://www.iitm.ac.in/donlab/tts/index.php), taken from [this Kaggle dataset](https://www.kaggle.com/datasets/kavyamanohar/indic-tts-malayalam-speech-corpus). The corpus contains one male and one female speaker, with a 2:1 ratio of samples due to missing files for the female speaker. The license is given in the repository.
false
# Dataset Card for "lmqg/qag_koquad" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is the question & answer generation dataset based on the KOQuAD. ### Supported Tasks and Leaderboards * `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail). ### Languages Korean (ko) ## Dataset Structure An example of 'train' looks as follows. ``` { "paragraph": ""3.13 만세운동" 은 1919년 3.13일 전주에서 일어난 만세운동이다. 지역 인사들과 함께 신흥학교 학생들이 주도적인 역할을 하며, 만세운동을 이끌었다. 박태련, 김신극 등 전주 지도자들은 군산에서 4일과 5일 독립만세 시위가 감행됐다는 소식에 듣고 준비하고 있었다. 천도교와 박태련 신간회 총무집에서 필요한 태극기를 인쇄하기로 했었다. 서울을 비롯한 다른 지방에서 시위가 계속되자 일본경찰은 신흥학교와 기전학교를 비롯한 전주시내 학교에 강제 방학조치를 취했다. 이에 최종삼 등 신흥학교 학생 5명은 밤을 이용해 신흥학교 지하실에서 태극기 등 인쇄물을 만들었다. 준비를 마친 이들은 13일 장터로 모이기 시작했고, 채소가마니로 위장한 태극기를 장터로 실어 나르고 거사 직전 시장 입구인 완산동과 전주교 건너편에서 군중들에게 은밀히 배부했다. 낮 12시20분께 신흥학교와 기전학교 학생 및 천도교도 등은 태극기를 들고 만세를 불렀다. 남문 밖 시장, 제2보통학교(현 완산초등학교)에서 모여 인쇄물을 뿌리며 시가지로 구보로 행진했다. 시위는 오후 11시까지 서너차례 계속됐다. 또 다음날 오후 3시에도 군중이 모여 만세를 불렀다. 이후 고형진, 남궁현, 김병학, 김점쇠, 이기곤, 김경신 등 신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 1년을 언도 받았다. 이외 신흥학교 학생 3명은 일제의 고문에 옥사한 것으로 알려졌다. 또 시위를 지도한 김인전 목사는 이후 중국 상해로 거처를 옮겨 임시정부에서 활동했다. 현재 신흥학교 교문 옆에 만세운동 기념비가 세워져 있다.", "questions": [ "만세운동 기념비가 세워져 있는 곳은?", "일본경찰의 강제 방학조치에도 불구하고 학생들은 신흥학교 지하실에 모여서 어떤 인쇄물을 만들었는가?", "여러 지방에서 시위가 일어나자 일본경찰이 전주시내 학교에 감행한 조치는 무엇인가?", "지역인사들과 신흥고등학교 학생들이 주도적인 역할을 한 3.13 만세운동이 일어난 해는?", "신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 몇년을 언도 받았는가?", "만세운동에서 주도적인 역할을 한 이들은?", "1919년 3.1 운동이 일어난 지역은 어디인가?", "3.13 만세운동이 일어난 곳은?" ], "answers": [ "신흥학교 교문 옆", "태극기", "강제 방학조치", "1919년", "1년", "신흥학교 학생들", "전주", "전주" ], "questions_answers": "question: 만세운동 기념비가 세워져 있는 곳은?, answer: 신흥학교 교문 옆 | question: 일본경찰의 강제 방학조치에도 불구하고 학생들은 신흥학교 지하실에 모여서 어떤 인쇄물을 만들었는가?, answer: 태극기 | question: 여러 지방에서 시위가 일어나자 일본경찰이 전주시내 학교에 감행한 조치는 무엇인가?, answer: 강제 방학조치 | question: 지역인사들과 신흥고등학교 학생들이 주도적인 역할을 한 3.13 만세운동이 일어난 해는?, answer: 1919년 | question: 신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 몇년을 언도 받았는가?, answer: 1년 | question: 만세운동에서 주도적인 역할을 한 이들은?, answer: 신흥학교 학생들 | question: 1919년 3.1 운동이 일어난 지역은 어디인가?, answer: 전주 | question: 3.13 만세운동이 일어난 곳은?, answer: 전주" } ``` The data fields are the same among all splits. - `questions`: a `list` of `string` features. - `answers`: a `list` of `string` features. - `paragraph`: a `string` feature. - `questions_answers`: a `string` feature. ## Data Splits |train|validation|test | |----:|---------:|----:| |9600 | 960 | 4442| ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
false
### Roboflow Dataset Page [https://universe.roboflow.com/material-identification/garbage-classification-3/dataset/2](https://universe.roboflow.com/material-identification/garbage-classification-3/dataset/2?ref=roboflow2huggingface) ### Dataset Labels ``` ['biodegradable', 'cardboard', 'glass', 'metal', 'paper', 'plastic'] ``` ### Citation ``` @misc{ garbage-classification-3_dataset, title = { GARBAGE CLASSIFICATION 3 Dataset }, type = { Open Source Dataset }, author = { Material Identification }, howpublished = { \\url{ https://universe.roboflow.com/material-identification/garbage-classification-3 } }, url = { https://universe.roboflow.com/material-identification/garbage-classification-3 }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { mar }, note = { visited on 2023-01-02 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on July 27, 2022 at 5:44 AM GMT. Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 10464 images. GARBAGE-GARBAGE-CLASSIFICATION annotations are in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 416x416 (Stretch) The following augmentation was applied to create 1 version of each source image: * 50% probability of horizontal flip * 50% probability of vertical flip * Equal probability of one of the following 90-degree rotations: none, clockwise, counter-clockwise, upside-down
false
A dataset of Prolog code/query pairs and their execution results.
false
# Dataset Card for MNIST ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://yann.lecun.com/exdb/mnist/ - **Repository:** - **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the test dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class. Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets). ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusive. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist). ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its label: ``` { 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>, 'label': 5 } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `label`: an integer between 0 and 9 representing the digit. ### Data Splits The data is split into a training and a test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images. ## Dataset Creation ### Curation Rationale The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students.
In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students. The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set. ### Source Data #### Initial Data Collection and Normalization The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field. #### Who are the source language producers? Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable. ### Annotations #### Annotation process The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them. #### Who are the annotators? Same as the source data creators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Chris Burges, Corinna Cortes and Yann LeCun ### Licensing Information MIT Licence ### Citation Information ``` @article{lecun2010mnist, title={MNIST handwritten digit database}, author={LeCun, Yann and Cortes, Corinna and Burges, CJ}, journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist}, volume={2}, year={2010} } ``` ### Contributions Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset.
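A minimal loading sketch (assuming the canonical `mnist` repo id on the Hugging Face Hub):

```python
from datasets import load_dataset

mnist = load_dataset("mnist")

example = mnist["train"][0]  # query the sample index first, as noted above
example["image"]             # 28x28 grayscale PIL image
print(example["label"])      # integer between 0 and 9

print(mnist["train"].num_rows, mnist["test"].num_rows)  # 60000 10000
```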
false
# Dataset Card for `disks45/nocr` The `disks45/nocr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=528,155 This dataset is used by: [`disks45_nocr_trec-robust-2004`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004), [`disks45_nocr_trec-robust-2004_fold1`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold1), [`disks45_nocr_trec-robust-2004_fold2`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold2), [`disks45_nocr_trec-robust-2004_fold3`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold3), [`disks45_nocr_trec-robust-2004_fold4`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold4), [`disks45_nocr_trec-robust-2004_fold5`](https://huggingface.co/datasets/irds/disks45_nocr_trec-robust-2004_fold5), [`disks45_nocr_trec7`](https://huggingface.co/datasets/irds/disks45_nocr_trec7), [`disks45_nocr_trec8`](https://huggingface.co/datasets/irds/disks45_nocr_trec8) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/disks45_nocr', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'body': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Voorhees1996Disks45, title = {NIST TREC Disks 4 and 5: Retrieval Test Collections Document Set}, author = {Ellen M. Voorhees}, doi = {10.18434/t47g6m}, year = {1996}, publisher = {National Institute of Standards and Technology} } ```
false
# Dataset Card for `lotte/technology/test/search` The `lotte/technology/test/search` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/technology/test/search). # Data This dataset provides: - `queries` (i.e., topics); count=596 - `qrels`: (relevance assessments); count=2,045 - For `docs`, use [`irds/lotte_technology_test`](https://huggingface.co/datasets/irds/lotte_technology_test) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/lotte_technology_test_search', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/lotte_technology_test_search', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Santhanam2021ColBERTv2, title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction", author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia", journal= "arXiv preprint arXiv:2112.01488", year = "2021", url = "https://arxiv.org/abs/2112.01488" } ```
false
# Dataset Card for `mmarco/fr` The `mmarco/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/fr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 This dataset is used by: [`mmarco_fr_dev`](https://huggingface.co/datasets/irds/mmarco_fr_dev), [`mmarco_fr_train`](https://huggingface.co/datasets/irds/mmarco_fr_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/mmarco_fr', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `mmarco/pt/dev` The `mmarco/pt/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/dev). # Data This dataset provides: - `queries` (i.e., topics); count=101,619 - `qrels`: (relevance assessments); count=59,273 - For `docs`, use [`irds/mmarco_pt`](https://huggingface.co/datasets/irds/mmarco_pt) This dataset is used by: [`mmarco_pt_dev_v1.1`](https://huggingface.co/datasets/irds/mmarco_pt_dev_v1.1) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mmarco_pt_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mmarco_pt_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `mmarco/pt/dev/small` The `mmarco/pt/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/dev/small). # Data This dataset provides: - `queries` (i.e., topics); count=7,000 - `qrels`: (relevance assessments); count=7,437 - For `docs`, use [`irds/mmarco_pt`](https://huggingface.co/datasets/irds/mmarco_pt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mmarco_pt_dev_small', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mmarco_pt_dev_small', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `mmarco/pt/dev/v1.1` The `mmarco/pt/dev/v1.1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/dev/v1.1). # Data This dataset provides: - `queries` (i.e., topics); count=101,093 - For `docs`, use [`irds/mmarco_pt`](https://huggingface.co/datasets/irds/mmarco_pt) - For `qrels`, use [`irds/mmarco_pt_dev`](https://huggingface.co/datasets/irds/mmarco_pt_dev) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mmarco_pt_dev_v1.1', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `mmarco/pt/train/v1.1` The `mmarco/pt/train/v1.1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/train/v1.1). # Data This dataset provides: - `queries` (i.e., topics); count=808,731 - For `docs`, use [`irds/mmarco_pt`](https://huggingface.co/datasets/irds/mmarco_pt) - For `qrels`, use [`irds/mmarco_pt_train`](https://huggingface.co/datasets/irds/mmarco_pt_train) - For `docpairs`, use [`irds/mmarco_pt_train`](https://huggingface.co/datasets/irds/mmarco_pt_train) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mmarco_pt_train_v1.1', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `mmarco/v2/pt` The `mmarco/v2/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/pt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 This dataset is used by: [`mmarco_v2_pt_dev`](https://huggingface.co/datasets/irds/mmarco_v2_pt_dev), [`mmarco_v2_pt_train`](https://huggingface.co/datasets/irds/mmarco_v2_pt_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/mmarco_v2_pt', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `mmarco/v2/pt/dev` The `mmarco/v2/pt/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/pt/dev). # Data This dataset provides: - `queries` (i.e., topics); count=101,093 - `qrels`: (relevance assessments); count=59,273 - For `docs`, use [`irds/mmarco_v2_pt`](https://huggingface.co/datasets/irds/mmarco_v2_pt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mmarco_v2_pt_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mmarco_v2_pt_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `nyt/wksup` The `nyt/wksup` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup). # Data This dataset provides: - `queries` (i.e., topics); count=1,864,661 - `qrels`: (relevance assessments); count=1,864,661 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
false
# Dataset Card for `wikiclir/pt` The `wikiclir/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=973,057 - `queries` (i.e., topics); count=611,732 - `qrels`: (relevance assessments); count=1,741,889 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_pt', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_pt', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_pt', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
false
# Dataset Card for `wikiclir/ru` The `wikiclir/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,413,945 - `queries` (i.e., topics); count=664,924 - `qrels`: (relevance assessments); count=2,321,384 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ru', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ru', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ru', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
false
A hand-collected set of 57,817 pictures, mostly from the Russian internet, without captions. The dataset consists of those classic "funny pictures" distributed on CDs and the like; all pictures in the root directory were collected entirely by hand. The data is not annotated.
false
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain ## Table of Contents - [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contact](#contact) ## Dataset Description - **Homepage:** https://deft2023.univ-avignon.fr/ - **Repository:** https://deft2023.univ-avignon.fr/ - **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document) - **Leaderboard:** Coming soon - **Point of Contact:** [Yanis LABRAK](mailto:yanis.labrak@univ-avignon.fr) ### Dataset Summary This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online. ### Supported Tasks and Leaderboards Multiple-Choice Question Answering (MCQA) ### Languages The questions and answers are available in French. ## Dataset Structure ### Data Instances ```json { "id": "1863462668476003678", "question": "Parmi les propositions suivantes, laquelle (lesquelles) est (sont) exacte(s) ? Les chylomicrons plasmatiques :", "answers": { "a": "Sont plus riches en cholestérol estérifié qu'en triglycérides", "b": "Sont synthétisés par le foie", "c": "Contiennent de l'apolipoprotéine B48", "d": "Contiennent de l'apolipoprotéine E", "e": "Sont transformés par action de la lipoprotéine lipase" }, "correct_answers": [ "c", "d", "e" ], "subject_name": "pharmacie", "type": "multiple" } ``` ### Data Fields - `id` : a string question identifier for each example - `question` : question text (a string) - `answer_a` : Option A - `answer_b` : Option B - `answer_c` : Option C - `answer_d` : Option D - `answer_e` : Option E - `correct_answers` : Correct options, i.e., A, D and E - `choice_type` ({"single", "multiple"}): Question choice type. 
- "single": Single-choice question, where each choice contains a single option. - "multiple": Multi-choice question, where each choice contains a combination of multiple options. ### Data Splits | # Answers | Training | Validation | Test | Total | |:---------:|:--------:|:----------:|:----:|:-----:| | 1 | 595 | 164 | 321 | 1,080 | | 2 | 528 | 45 | 97 | 670 | | 3 | 718 | 71 | 141 | 930 | | 4 | 296 | 30 | 56 | 382 | | 5 | 34 | 2 | 7 | 43 | | Total | 2171 | 312 | 622 | 3,105 | ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is of 13k words, of which 3.8k are estimated medical domain-specific words (i.e. a word related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17 % of the words) and 2 in each answer (36 % of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers. ### Personal and Sensitive Information The corpora is free of personal or sensitive information. ## Additional Information ### Dataset Curators The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael. ### Licensing Information Apache 2.0 ### Citation Information If you find this useful in your research, please consider citing the dataset paper : ```latex @inproceedings{labrak-etal-2022-frenchmedmcqa, title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain", author = "Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Daille, Beatrice and Gourraud, Pierre-Antoine and Morin, Emmanuel and Rouvier, Mickael", booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.louhi-1.5", pages = "41--46", abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. 
Corpus, models and tools are available online.", } ``` ### Contact Please contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
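To illustrate the instance format shown above, a small scoring sketch that checks a predicted set of option letters against `correct_answers` with exact match (how the instances are obtained depends on where you download the corpus):

```python
def exact_match(predicted, instance):
    """True only when the predicted option letters equal the gold set."""
    return set(predicted) == set(instance["correct_answers"])

# A stripped-down instance in the format documented above.
instance = {"correct_answers": ["c", "d", "e"], "type": "multiple"}

print(exact_match({"c", "d", "e"}, instance))  # True
print(exact_match({"c", "d"}, instance))       # False
```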
false
# Dataset cointelegraph English ## Dataset Description A dataset collecting the title, description, author, and other metadata of articles from https://cointelegraph.com/. Approximate size: 10,041 rows. Categories: #cryptocurrency, #Bitcoin, #Ethereum ...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a multilingual dataset containing ~130k annotated sentence boundaries. It contains laws and court decisions in 6 different languages. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English, French, Italian, German, Portuguese, Spanish ## Dataset Structure It is structured in the following format: {language}\_{type}\_{shard}.jsonl.xz type is one of the following: - laws - judgements Use the dataset like this: ``` from datasets import load_dataset config = 'fr_laws' #{language}_{type} | to load all languages and/or all types, use 'all_all' dataset = load_dataset('rdcs/MultiLegalSBD', config) ``` ### Data Instances [More Information Needed] ### Data Fields - text: the original text - spans: - start: offset of the first character - end: offset of the last character - label: One label only -> Sentence - token_start: id of the first token - token_end: id of the last token - tokens: - text: token text - start: offset of the first character - end: offset of the last character - id: token id - ws: whether the token is followed by whitespace ### Data Splits There is only one split available ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
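Given the span layout documented above, a small sketch that recovers sentence strings from the character offsets (the `fr_laws` config from the snippet and a `train` split name are assumed):

```python
from datasets import load_dataset

dataset = load_dataset("rdcs/MultiLegalSBD", "fr_laws")

example = dataset["train"][0]
sentences = [
    example["text"][span["start"]:span["end"]]  # every span is labeled "Sentence"
    for span in example["spans"]
]
print(sentences[:3])
```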
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
true
# AutoTrain Dataset for project: books-rating-analysis ## Dataset Description This dataset has been automatically processed by AutoTrain for project books-rating-analysis. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_Unnamed: 0": 1976, "feat_user_id": "792500e85277fa7ada535de23e7eb4c3", "feat_book_id": 18243288, "feat_review_id": "7f8219233a62bde2973ddd118e8162e2", "target": 2, "text": "This book is kind of tricky. It is pleasingly written stylistically and it's an easy read so I cruised along on the momentum of the smooth prose and the potential of what this book could have and should have been for a while before I realized that it is hollow and aimless. \n This is a book where the extraordinary is deliberately made mundane for some reason and characters are stubbornly underdeveloped. It is as if all the drama has been removed from this story, leaving a bloodless collection of 19th industrial factoids sprinkled amidst a bunch of ciphers enduring an oddly dull series of tragedies. \n Mildly entertaining for a while but ultimately unsatisfactory.", "feat_date_added": "Mon Apr 27 11:37:36 -0700 2015", "feat_date_updated": "Mon May 04 08:50:42 -0700 2015", "feat_read_at": "Mon May 04 08:50:42 -0700 2015", "feat_started_at": "Mon Apr 27 00:00:00 -0700 2015", "feat_n_votes": 0, "feat_n_comments": 0 }, { "feat_Unnamed: 0": 523, "feat_user_id": "01ec1a320ffded6b2dd47833f2c8e4fb", "feat_book_id": 18220354, "feat_review_id": "c19543fab6b2386df92c1a9ba3cf6e6b", "target": 4, "text": "4.5 stars!! I am always intrigued to read a novel written from a male POV. I am equally fascinated by pen names, and even when the writer professes to be one gender or the other (or leaves it open to the imagination such as BG Harlen), I still wonder at the back of my mind whether the author is a male or female. Do some female writers have a decidedly masculine POV? Yes, there are several that come to mind. Do some male writers have a feminine \"flavor\" to their writing? It seems so. \n And so we come to the fascinating Thou Shalt Not. I loved Luke's story, as well as JJ Rossum's writing style, and don't want to be pigeon-holed into thinking that the author is male or female. That's just me. Either way, it's a very sexy and engaging book with plenty of steamy scenes to satisfy even the most jaded erotic romance reader (such as myself). The story carries some very weighty themes (domestic violence, adultery, the nature of beauty), but the book is very fast-paced and satisfying. Will Luke keep himself out of trouble with April? Will he learn to really love someone again? No spoilers here, but the author answers these questions while exploring what qualities are really important and what makes someone worthy of love. \n This book has a very interesting conclusion that some readers will love, and some might find a little challenging. I loved it and can't wait to read more from this author. 
\n *ARC provided by the author in exchange for an honest review.", "feat_date_added": "Mon Jul 29 16:04:04 -0700 2013", "feat_date_updated": "Thu Dec 12 21:43:54 -0800 2013", "feat_read_at": "Fri Dec 06 00:00:00 -0800 2013", "feat_started_at": "Thu Dec 05 00:00:00 -0800 2013", "feat_n_votes": 10, "feat_n_comments": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_Unnamed: 0": "Value(dtype='int64', id=None)", "feat_user_id": "Value(dtype='string', id=None)", "feat_book_id": "Value(dtype='int64', id=None)", "feat_review_id": "Value(dtype='string', id=None)", "target": "ClassLabel(names=['0', '1', '2', '3', '4', '5'], id=None)", "text": "Value(dtype='string', id=None)", "feat_date_added": "Value(dtype='string', id=None)", "feat_date_updated": "Value(dtype='string', id=None)", "feat_read_at": "Value(dtype='string', id=None)", "feat_started_at": "Value(dtype='string', id=None)", "feat_n_votes": "Value(dtype='int64', id=None)", "feat_n_comments": "Value(dtype='int64', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 2397 | | valid | 603 |
false
# Paraphrase Dataset (Urdu) This dataset contains paraphrases in Urdu. It is provided in Parquet format and consists of a single training split with 393,000 rows. ## Dataset Details - Columns: - `sentence1`: The first sentence in a pair of paraphrases (string). - `sentence2`: The second sentence in a pair of paraphrases (string). ## Usage You can use this dataset for various natural language processing tasks such as text similarity, paraphrase identification, and language generation.
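A minimal reading sketch (the Parquet file name is an assumption; use the actual file in this repository):

```python
import pandas as pd

df = pd.read_parquet("train.parquet")  # assumed file name

print(len(df))  # ~393,000 rows
print(df[["sentence1", "sentence2"]].head())
```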
false
# Dataset Card for spaeti_store ## Dataset Description The dataset consists of 10 pictures of one späti (German convenience store) from different angles. The data is unlabeled. The dataset was created to fine-tune a text-to-image Stable Diffusion model as part of the DreamBooth Hackathon. Visit the [organization's page](https://huggingface.co/dreambooth-hackathon) for more info.
true
# Dataset for the project: reviews-sentiment-analysis ## Dataset Description This dataset is for project reviews-sentiment-analysis. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Now, I won't deny that when I purchased this off eBay, I had high expectations. This was an incredible out-of-print work from the master of comedy that I so enjoy. However, I was soon to be disappointed. Apologies to those who enjoyed it, but I just found the Compleat Al to be very difficult to watch. I got a few smiles, sure, but the majority of the funny came from the music videos (which I've got on DVD) and the rest was basically filler. You could tell that this was not Al's greatest video achievement (that honor goes to UHF). Honestly, I doubt if this will ever make the jump to DVD, so if you're an ultra-hardcore Al fan and just HAVE to own everything, buy the tape off eBay. Just don't pay too much for it.", "target": 0 }, { "text": "The saddest thing about this \"tribute\" is that almost all the singers (including the otherwise incredibly talented Nick Cave) seem to have missed the whole point where Cohen's intensity lies: by delivering his lines in an almost tuneless poise, Cohen transmits the full extent of his poetry, his irony, his all-round humanity, laughter and tears in one.<br /><br />To see some of these singer upstarts make convoluted suffering faces, launch their pathetic squeals in the patent effort to scream \"I'm a singer!,\" is a true pain. It's the same feeling many of you probably had listening in to some horrendous operatic versions of simple songs such as Lennon's \"Imagine.\" Nothing, simply nothing gets close to the simplicity and directness of the original. If there is a form of art that doesn't need embellishments, it's Cohen's art. Embellishments cast it in the street looking like the tasteless make-up of sex for sale.<br /><br />In this Cohen's tribute I found myself suffering and suffering through pitiful tributes and awful reinterpretations, all of them entirely lacking the original irony of the master and, if truth be told, several of these singers sounded as if they had been recruited at some asylum talent show. It's Cohen doing a tribute to them by letting them sing his material, really, not the other way around: they may have been friends, or his daughter's, he could have become very tender-hearted and in the mood for a gift. Too bad it didn't stay in the family.<br /><br />Fortunately, but only at the very end, Cohen himself performed his majestic \"Tower of Song,\" but even that flower was spoiled by the totally incongruous background of the U2, all of them carrying the expression that bored kids have when they visit their poor grandpa at the nursing home.<br /><br />A sad show, really, and sadder if you truly love Cohen as I do.", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(names=['Negative', 'Positive'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 7499 | | valid | 2497 |
false
# Dataset for project: food-classification

## Dataset Description

This dataset has been processed for the project food-classification.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<308x512 RGB PIL image>",
    "target": 0
  },
  {
    "image": "<512x512 RGB PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['apple_pie', 'falafel', 'french_toast', 'ice_cream', 'ramen', 'sushi', 'tiramisu'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and a validation split. The split sizes are as follows:

| Split name | Num samples |
| ------------ | ------------------- |
| train | 1050 |
| valid | 350 |
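A minimal sketch of loading an example and resolving its class label; the repository id is a placeholder for the actual AutoTrain dataset path:

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual AutoTrain dataset path
data = load_dataset("user/food-classification", split="train")

label_names = data.features["target"].names  # 'apple_pie', 'falafel', ...
example = data[0]
image = example["image"].convert("RGB")      # PIL image, decoded on access
print(image.size, label_names[example["target"]])
```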
false
# MIRACL (id) embedded with cohere.ai `multilingual-22-12` encoder

We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.

The query embeddings can be found in [Cohere/miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12).

For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.

## Embeddings

We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).

## Loading the dataset

In [miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-id-corpus-22-12", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-id-corpus-22-12", split="train", streaming=True)

for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

## Search

Have a look at [miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) where we provide the query embeddings for the MIRACL dataset.

To search in the documents, you must use **dot-product**: compare the query embedding either against a vector database (recommended) or by computing the dot product with the document embeddings directly.

A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets; for large datasets, use a vector DB.

from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset("Cohere/miracl-id-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-id-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, dim)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```

You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here

texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # Get the embedding for the first text
```

## Performance

In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.

We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.

Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |

Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
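To make the hit@3 metric concrete, here is a small illustrative sketch (not part of the official evaluation code) of how it can be computed for one query; averaged over all queries, this yields the hit@3 numbers reported above:

```python
def hit_at_k(ranked_doc_ids, relevant_doc_ids, k=3):
    """Return 1.0 if any of the top-k ranked documents is relevant, else 0.0."""
    return float(any(d in relevant_doc_ids for d in ranked_doc_ids[:k]))

# Toy example: one query whose judged-relevant docs are {"d7", "d2"}
ranking = ["d9", "d2", "d5", "d1"]        # system output, best first
print(hit_at_k(ranking, {"d7", "d2"}))    # 1.0 -- "d2" appears in the top 3
```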
false
# MIRACL (ru) embedded with cohere.ai `multilingual-22-12` encoder

We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.

The query embeddings can be found in [Cohere/miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12).

For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.

## Embeddings

We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).

## Loading the dataset

In [miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-ru-corpus-22-12", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-ru-corpus-22-12", split="train", streaming=True)

for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

## Search

Have a look at [miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) where we provide the query embeddings for the MIRACL dataset.

To search in the documents, you must use **dot-product**: compare the query embedding either against a vector database (recommended) or by computing the dot product with the document embeddings directly.

A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets; for large datasets, use a vector DB.

from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset("Cohere/miracl-ru-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-ru-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, dim)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```

You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here

texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # Get the embedding for the first text
```

## Performance

In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.

We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.

Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |

Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
false
false
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder

We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.

The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).

For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.

## Embeddings

We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).

## Loading the dataset

In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.

You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-en-corpus-22-12", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-en-corpus-22-12", split="train", streaming=True)

for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

## Search

Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.

To search in the documents, you must use **dot-product**: compare the query embedding either against a vector database (recommended) or by computing the dot product with the document embeddings directly.

A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets; for large datasets, use a vector DB.

from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset("Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-en-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, dim)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```

You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # Add your Cohere API key here

texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # Get the embedding for the first text
```

## Performance

In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.

We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.

Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |

Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
false
# Dataset Card for NST Swedish Speech Synthesis (44 kHz)

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [NST Swedish Speech Synthesis (44 kHz)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-18/)

### Dataset Summary

The corpus consists of recordings of a single speaker, comprising 5277 segments.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

The audio is in Swedish.

## Dataset Structure

[Needs More Information]

### Data Instances

[Needs More Information]

### Data Fields

[Needs More Information]

### Data Splits

[Needs More Information]

## Dataset Creation

### Curation Rationale

(The text below is a partially corrected machine translation from [here](https://www.nb.no/sbfil/dok/nst_taledat_se.pdf).)

The data was developed by Nordisk språkteknologi holding AS (NST), which went bankrupt in 2003. In 2006, a group jointly owned by the University of Oslo, the University of Bergen, the Norwegian University of Science and Technology, the Language Council and IBM bought the assets of NST to ensure that the linguistic resources NST had developed were taken care of. The National Library was commissioned by the Ministry of Culture to build a Norwegian language bank in 2009, and started this work in 2010. The resources from NST were transferred to the National Library in May 2011, and they are now made available in the Language Bank, initially without further processing.

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[The Norwegian Language Bank](https://www.nb.no/sprakbanken/en/sprakbanken/)

### Licensing Information

[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)

### Citation Information

[Needs More Information]

### Contributions

[Needs More Information]
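As a sketch of how such a speech-synthesis corpus could be consumed with 🤗 Datasets, assuming a hypothetical repository id and an `audio` column (the card does not document the actual fields):

```python
from datasets import load_dataset, Audio

# Hypothetical repository id and column name -- the card does not document them
corpus = load_dataset("user/nst-swedish-tts-44khz", split="train")

# Decode audio at its native 44.1 kHz sampling rate
corpus = corpus.cast_column("audio", Audio(sampling_rate=44100))

sample = corpus[0]
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```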
false
# Dataset Card for NeuMARCO

## Dataset Description

- **Website:** https://neuclir.github.io/

### Dataset Summary

This is the dataset created for the TREC 2022 NeuCLIR Track. The collection consists of documents from [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) translated into Chinese, Persian, and Russian.

### Languages

- Chinese
- Persian
- Russian

## Dataset Structure

### Data Instances

| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 8.8M |
| `rus` (Russian) | 8.8M |
| `zho` (Chinese) | 8.8M |

### Data Fields

- `doc_id`: unique identifier for this document
- `text`: translated passage text

## Dataset Usage

Using 🤗 Datasets:

```python
from datasets import load_dataset

dataset = load_dataset('neuclir/neumarco')

dataset['fas']  # Persian passages
dataset['rus']  # Russian passages
dataset['zho']  # Chinese passages
```
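Since each language holds roughly 8.8M passages, streaming can avoid a full download. A sketch, assuming the split names match the language keys shown above (worth verifying against the hub):

```python
from datasets import load_dataset

# Stream the Persian passages instead of materializing all ~8.8M rows
docs = load_dataset('neuclir/neumarco', split='fas', streaming=True)

for doc in docs:
    print(doc['doc_id'], doc['text'][:80])
    break  # just peek at the first record
```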
false
# Dataset Card for Beans

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A

### Dataset Summary

Coffee bean grading.

### Supported Tasks and Leaderboards

- `image-classification`: Based on a coffee bean grading scale, the goal of this task is to grade individual beans for clustering.

### Languages

Indonesian

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{
  'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/0aaa78294d4bf5114f58547e48d91b7826649919505379a167decb629aa92b0a/train/bean_rust/bean_rust_train.109.jpg',
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x16BAA72A4A8>,
  'labels': 1
}
```

### Data Fields

The data instances have the following fields:

- `image_file_path`: a `string` filepath to an image.
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column, `dataset[0]["image"]`, the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.

Class Label Mappings:

```json
{
  "1": 0,
  "2": 1,
  "3": 2
}
```

### Data Splits

| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1400 |400 |200 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

### Contributions
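The indexing note above matters in practice. A small sketch of the efficient access pattern, with a placeholder repository id:

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the actual dataset path
ds = load_dataset("user/coffee-beans-grading", split="train")

# Efficient: decodes only the single requested image
img = ds[0]["image"]

# Inefficient: ds["image"][0] would decode every image in the column first
print(img.size, ds[0]["labels"])
```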
false
# Dataset Card for "Europarl-ST" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://www.mllp.upv.es/europarl-st/ - **Paper:** https://ieeexplore.ieee.org/document/9054626 - **Point of Contact:** https://www.mllp.upv.es/ ### Dataset Summary Europarl-ST is a Multilingual Speech Translation Corpus, that contains paired audio-text samples for Speech Translation, constructed using the debates carried out in the European Parliament in the period between 2008 and 2012. ### Languages Spanish, German, English, French, Dutch, Polish, Portuguese, Romanian, Italian ## Dataset Structure ### Data Fields - **original_audio:** The original speech that is heard on the recording. - **original_language:** The language of the audio - **audio_path:** Path to the audio file - **segment_start:** Second in which the speech begins - **segment_end:** Second in which the speech ends - **transcriptions:** Dictionary containing transcriptions into different languages ### Data Splits - **train split:** 116138 samples - **valid split:** 17538 samples - **test split:** 18901 samples Train set (hours): | src/tgt | en | fr | de | it | es | pt | pl | ro | nl | |---------|----|----|----|----|----|----|----|----|----| | en | - | 81 | 83 | 80 | 81 | 81 | 79 | 72 | 80 | | fr | 32 | - | 21 | 20 | 21 | 22 | 20 | 18 | 22 | | de | 30 | 18 | - | 17 | 18 | 18 | 17 | 17 | 18 | | it | 37 | 21 | 21 | - | 21 | 21 | 21 | 19 | 20 | | es | 22 | 14 | 14 | 14 | - | 14 | 13 | 12 | 13 | | pt | 15 | 10 | 10 | 10 | 10 | - | 9 | 9 | 9 | | pl | 28 | 18 | 18 | 17 | 18 | 18 | - | 16 | 18 | | ro | 24 | 12 | 12 | 12 | 12 | 12 | 12 | - | 12 | | nl | 7 | 5 | 5 | 4 | 5 | 4 | 4 | 4 | - | Valid/Test sets are all between 3 and 6 hours. ## Additional Information ### Licensing Information * The work carried out for constructing the Europarl-ST corpus is released under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0) * All rights of the data belong to the European Union and respective copyright holders. ### Citation Information If you use the corpus in your research please cite the following reference: @INPROCEEDINGS{jairsan2020a, author={J. {Iranzo-Sánchez} and J. A. {Silvestre-Cerdà} and J. {Jorge} and N. {Roselló} and A. {Giménez} and A. {Sanchis} and J. {Civera} and A. {Juan}}, booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, title={Europarl-ST: A Multilingual Corpus for Speech Translation of Parliamentary Debates}, year={2020}, pages={8229-8233},}
false
# Neuro CNN Project - Fernando Feltrin

# Brain Meningioma images (39 classes) for image classification

## Dataset Description

- **More info: fernando2rad@gmail.com**

### Dataset Summary

A collection of T1, contrast-enhanced, and T2-weighted MRI images of meningiomas, sorted according to location in the brain. The images carry no markings or patient identification; they were interpreted by radiologists and are provided for study purposes.

Images are separated by location: clivus / petroclival, sphenoid / cavernous sinus, anterior cranial fossa, medial cranial fossa, posterior cranial fossa, frontal / frontoparietal, frontotemporal, infratentorial / cerebellar, interhemispheric / suprasellar, intracisternal, intraventricular / parafalcine, parietal / parietooccipital, supratentorial, temporal / temporoparietal.
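Since the images are sorted by location, a local copy could be loaded as an image-classification dataset with the `imagefolder` builder; the directory layout below is an assumption:

```python
from datasets import load_dataset

# Assumes a local copy with one sub-folder per meningioma location,
# e.g. meningioma/clivus_petroclival/img001.jpg
dataset = load_dataset("imagefolder", data_dir="meningioma", split="train")

print(dataset.features["label"].names)  # class names derived from folder names
print(dataset[0]["image"].size)
```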