dataset_name | description | prompt |
|---|---|---|
KnowIT VQA | KnowIT VQA is a video dataset with 24,282 human-generated question-answer pairs about The Big Bang Theory. The dataset combines visual, textual and temporal coherence reasoning with knowledge-based questions, which require experience gained from watching the series to be answered. | Provide a detailed description of the following dataset: KnowIT VQA |
Korean HateSpeech Dataset | Presents 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. | Provide a detailed description of the following dataset: Korean HateSpeech Dataset |
KorNLI | **KorNLI** is a Korean Natural Language Inference (NLI) dataset. The dataset is constructed by automatically translating the training sets of the SNLI, XNLI and MNLI datasets. To ensure translation quality, two professional translators with at least seven years of experience, who specialize in academic papers/books as well as business contracts, each post-edited half of the dataset and cross-checked each other's translations afterward.
It contains 942,854 training examples translated automatically and 7,500 evaluation (development and test) examples translated manually.
Source: [https://github.com/kakaobrain/KorNLUDatasets](https://github.com/kakaobrain/KorNLUDatasets) | Provide a detailed description of the following dataset: KorNLI |
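As a rough illustration of working with an NLI corpus like the one above, the sketch below parses premise/hypothesis/label rows from TSV text. The three column names (`sentence1`, `sentence2`, `gold_label`) are the usual NLI-distribution convention and are assumed here, not confirmed from the KorNLI files.

```python
import csv
import io

def parse_nli_tsv(tsv_text):
    """Parse NLI-style TSV rows into (premise, hypothesis, label) tuples.

    Assumes the common three-column layout (sentence1, sentence2,
    gold_label); check the actual KorNLI files before relying on this.
    """
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [(r["sentence1"], r["sentence2"], r["gold_label"]) for r in reader]

sample = "sentence1\tsentence2\tgold_label\nA dog runs.\tAn animal moves.\tentailment\n"
pairs = parse_nli_tsv(sample)
```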
KPTimes | KPTimes is a large-scale dataset of news texts paired with editor-curated keyphrases. | Provide a detailed description of the following dataset: KPTimes |
KQA Pro | A large-scale dataset for Complex KBQA. | Provide a detailed description of the following dataset: KQA Pro |
Kuzushiji-49 | Kuzushiji-49 is an MNIST-like dataset that has 49 classes (28x28 grayscale, 270,912 images) from 48 Hiragana characters and one Hiragana iteration mark. | Provide a detailed description of the following dataset: Kuzushiji-49 |
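Since Kuzushiji-49 is MNIST-like, the usual preprocessing applies directly. The sketch below uses a synthetic stand-in batch (random 28x28 uint8 images with labels in [0, 49)); the real release's array names and file layout are not assumed here.

```python
import numpy as np

# Synthetic stand-in for a Kuzushiji-49 batch: 28x28 grayscale images
# with integer class labels in [0, 49), mimicking the real format.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(16, 28, 28), dtype=np.uint8)
labels = rng.integers(0, 49, size=16)

# Typical MNIST-style preprocessing: scale to [0, 1], one-hot encode.
x = images.astype(np.float32) / 255.0
y = np.eye(49, dtype=np.float32)[labels]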
Kvasir-Instrument | Consists of annotated frames containing GI procedure tools such as snares, balloons and biopsy forceps. Besides the images, the dataset includes ground-truth masks and bounding boxes and has been verified by two expert GI endoscopists. | Provide a detailed description of the following dataset: Kvasir-Instrument |
LABR | LABR is the largest sentiment analysis dataset to date for the Arabic language. It consists of over 63,000 book reviews, each rated on a scale of 1 to 5 stars. | Provide a detailed description of the following dataset: LABR |
LAD | LAD (Large-scale Attribute Dataset) has 78,017 images across 5 super-classes and 230 classes. LAD contains more images than the four most popular attribute datasets (AwA, CUB, aP/aY and SUN) combined. 359 attributes of visual, semantic and subjective properties are defined and annotated at the instance level. | Provide a detailed description of the following dataset: LAD |
LAG | Includes 5,824 fundus images labeled with either positive glaucoma (2,392) or negative glaucoma (3,432). | Provide a detailed description of the following dataset: LAG |
LAMA | LAnguage Model Analysis (**LAMA**) consists of a set of knowledge sources, each comprised of a set of facts. LAMA is a probe for analyzing the factual and commonsense knowledge contained in pretrained language models. | Provide a detailed description of the following dataset: LAMA |
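LAMA probes a pretrained model by turning facts into cloze statements. The LAMA relation templates use `[X]` for the subject and `[Y]` for the object slot; the sketch below builds such a query (the mask token depends on the model being probed, so `[MASK]` here is just BERT's convention).

```python
def make_cloze(template, subject, mask_token="[MASK]"):
    """Turn a LAMA-style relation template into a cloze query.

    Templates use [X] for the subject and [Y] for the object slot;
    the filled [Y] position is what the probed model must predict.
    """
    return template.replace("[X]", subject).replace("[Y]", mask_token)

query = make_cloze("[X] is the capital of [Y].", "Paris")
# query == "Paris is the capital of [MASK]."
```

The model's fill-in for the mask is then compared against the gold object ("France") to measure how much factual knowledge it has absorbed.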
LaMem | A large annotated image memorability dataset, with 60,000 labeled images from a diverse array of sources. | Provide a detailed description of the following dataset: LaMem |
LAOFIW Dataset | An ancestral origin database of 14,000 images of individuals from East Asia, the Indian subcontinent, sub-Saharan Africa, and Western Europe. | Provide a detailed description of the following dataset: LAOFIW Dataset |
LaPa | A large-scale Landmark guided face Parsing dataset (LaPa) for face parsing. It consists of more than 22,000 facial images with abundant variations in expression, pose and occlusion, and each image of LaPa is provided with an 11-category pixel-level label map and 106-point landmarks. | Provide a detailed description of the following dataset: LaPa |
LasVR | A large-scale video database for rain removal (LasVR), which consists of 316 rain videos. | Provide a detailed description of the following dataset: LasVR |
Lazaro Corpus | A corpus of 21,570 newspaper headlines written in European Spanish annotated with emergent anglicisms. | Provide a detailed description of the following dataset: Lazaro Corpus |
LC25000 | The **LC25000** dataset contains 25,000 color images with 5 classes of 5,000 images each. All images are 768 x 768 pixels in size and are in jpeg file format. The 5 classes are: colon adenocarcinomas, benign colonic tissues, lung adenocarcinomas, lung squamous cell carcinomas and benign lung tissues. | Provide a detailed description of the following dataset: LC25000 |
LCCC | LCCC (Large-scale Cleaned Chinese Conversation) contains a base version (6.8 million dialogues) and a large version (12.0 million dialogues). | Provide a detailed description of the following dataset: LCCC |
LC-QuAD 2.0 | LC-QuAD 2.0 is a Large Question Answering dataset with 30,000 pairs of question and its corresponding SPARQL query. The target knowledge base is Wikidata and DBpedia, specifically the 2018 version. | Provide a detailed description of the following dataset: LC-QuAD 2.0 |
LCSTS | LCSTS is a large corpus of Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo, which is released to the public. This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. The authors also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. | Provide a detailed description of the following dataset: LCSTS |
LEAF Benchmark | A suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments. | Provide a detailed description of the following dataset: LEAF Benchmark |
Leaf counting dataset | Dataset containing 9372 RGB images of weeds with the number of leaves counted. The images are collected in fields across Denmark using Nokia and Samsung cell phone cameras; Samsung, Nikon, Canon and Sony consumer cameras; and a Point Grey industrial camera. | Provide a detailed description of the following dataset: Leaf counting dataset |
LectureBank | **LectureBank** Dataset is a manually collected dataset of lecture slides. It contains 1,352 online lecture files from 60 courses covering 5 different domains, including Natural Language Processing (nlp), Machine Learning (ml), Artificial Intelligence (ai), Deep Learning (dl) and Information Retrieval (ir). In addition, it also contains the corresponding annotations for each slide.
Source: [https://github.com/Yale-LILY/LectureBank](https://github.com/Yale-LILY/LectureBank)
Image Source: [https://github.com/Yale-LILY/LectureBank](https://github.com/Yale-LILY/LectureBank) | Provide a detailed description of the following dataset: LectureBank |
Legal Documents Entity Recognition | Court decisions from 2017 and 2018 were selected for the dataset, published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG). | Provide a detailed description of the following dataset: Legal Documents Entity Recognition |
LEMMA | The **LEMMA** dataset aims to explore the essence of complex human activities in a goal-directed, multi-agent, multi-task setting with ground-truth labels of compositional atomic-actions and their associated tasks. By quantifying the scenarios to up to two multi-step tasks with two agents, the authors strive to address human multi-task and multi-agent interactions in four scenarios: single-agent single-task (1 x 1), single-agent multi-task (1 x 2), multi-agent single-task (2 x 1), and multi-agent multi-task (2 x 2). Task instructions are only given to one agent in the 2 x 1 setting to resemble the robot-helping scenario, hoping that the learned perception models could be applied in robotic tasks (especially in HRI) in the near future.
Both the third-person views (TPVs) and the first-person views (FPVs) were recorded to account for different perspectives of the same activities. The authors densely annotate atomic-actions (in the form of compositional verb-noun pairs) and tasks of each atomic-action, as well as the spatial location of each participating agent (bounding boxes) to facilitate the learning of multi-agent multi-task task scheduling and assignment. | Provide a detailed description of the following dataset: LEMMA |
Lenta Short Sentences | The **Lenta Short Sentences** dataset is a text dataset for language modelling for the Russian language. It consists of 236K sentences sampled from the Lenta News dataset.
Source: [https://arxiv.org/pdf/2005.02470.pdf](https://arxiv.org/pdf/2005.02470.pdf) | Provide a detailed description of the following dataset: Lenta Short Sentences |
Libri-Adapt | Libri-Adapt aims to support unsupervised domain adaptation research on speech recognition models. | Provide a detailed description of the following dataset: Libri-Adapt |
LibriCSS | Continuous speech separation (CSS) is an approach to handling overlapped speech in conversational audio signals. A real recorded dataset, called **LibriCSS**, is derived from LibriSpeech by concatenating the corpus utterances to simulate a conversation and capturing the audio replays with far-field microphones. | Provide a detailed description of the following dataset: LibriCSS |
Libri-Light | Libri-Light is a collection of spoken English audio suitable for training speech recognition systems under limited or no supervision. It is derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio. | Provide a detailed description of the following dataset: Libri-Light |
LibriMix | LibriMix is an open-source alternative to wsj0-2mix. Based on LibriSpeech, LibriMix consists of two- or three-speaker mixtures combined with ambient noise samples from WHAM!. | Provide a detailed description of the following dataset: LibriMix |
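The core operation behind a mixture dataset like LibriMix is summing source signals with noise at a chosen signal-to-noise ratio. The sketch below is a minimal version of that mixing step on synthetic waveforms; the actual LibriMix generation scripts additionally handle resampling, loudness normalization and clipping, none of which are modeled here.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then sum.

    A minimal sketch of SNR-controlled mixing, not the official
    LibriMix recipe.
    """
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # 1 s of audio at 16 kHz
noise = rng.standard_normal(16000)
mixture = mix_at_snr(speech, noise, snr_db=5.0)
```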
LibriVoxDeEn | LibriVoxDeEn is a corpus of sentence-aligned triples of German audio, German text, and English translation, based on German audiobooks. The speech translation data consist of 110 hours of audio material aligned to over 50k parallel sentences. An even larger dataset comprising 547 hours of German speech aligned to German text is available for speech recognition. The audio data is read speech and thus low in disfluencies. | Provide a detailed description of the following dataset: LibriVoxDeEn |
LiMiT | The LiMiT dataset contains ~24K sentences that describe literal motion (~14K sentences), along with sentences not describing motion or describing other types of motion (e.g. fictive motion). Sentences were extracted from electronic books categorized as fiction or novels, and a portion from the ActivityNet Captions dataset. | Provide a detailed description of the following dataset: LiMiT |
LinCE | A centralized benchmark for Linguistic Code-switching Evaluation (LinCE) that combines ten corpora covering four different code-switched language pairs (i.e., Spanish-English, Nepali-English, Hindi-English, and Modern Standard Arabic-Egyptian Arabic) and four tasks (i.e., language identification, named entity recognition, part-of-speech tagging, and sentiment analysis). | Provide a detailed description of the following dataset: LinCE |
Liputan6 | A large-scale Indonesian summarization dataset consisting of harvested articles from Liputan6.com, an online news portal, resulting in 215,827 document-summary pairs. | Provide a detailed description of the following dataset: Liputan6 |
LISA Gaze Dataset | LISA Gaze is a dataset for driver gaze estimation comprising 11 long drives, driven by 10 subjects in two different cars. | Provide a detailed description of the following dataset: LISA Gaze Dataset |
Live Comment Dataset | The Live Comment Dataset is a large-scale dataset with 2,361 videos and 895,929 live comments that were written while the videos were streamed. | Provide a detailed description of the following dataset: Live Comment Dataset |
LiveQA | A new question answering dataset constructed from play-by-play live broadcast. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, which are collected from the Chinese Hupu (https://nba.hupu.com/games) website. | Provide a detailed description of the following dataset: LiveQA |
LIVE-YT-HFR | LIVE-YT-HFR comprises 480 videos at 6 different frame rates, obtained from 16 diverse contents. | Provide a detailed description of the following dataset: LIVE-YT-HFR |
LKS | **LKS** is a dataset of 684 Liver-Kidney-Stomach immunofluorescence whole slide images (WSIs) used in the investigation of autoimmune liver disease.
Source: [https://arxiv.org/abs/2003.05080](https://arxiv.org/abs/2003.05080)
Image Source: [https://github.com/cradleai/LKS-Dataset](https://github.com/cradleai/LKS-Dataset) | Provide a detailed description of the following dataset: LKS |
WI-LOCNESS | WI-LOCNESS is part of the [Building Educational Applications 2019 Shared Task for Grammatical Error Correction](https://www.cl.cam.ac.uk/research/nl/bea2019st/). It consists of two datasets:
- **LOCNESS**: is a corpus consisting of essays written by native English students.
- **Cambridge English Write & Improve** (**W&I**): Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native English students with their writing. Specifically, students from around the world submit letters, stories, articles and essays in response to various prompts, and the W&I system provides instant feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these submissions and assigned them a CEFR level. | Provide a detailed description of the following dataset: WI-LOCNESS |
LoDoPaB-CT | LoDoPaB-CT is a dataset of computed tomography images and simulated low-dose measurements. It contains over 40,000 scan slices from around 800 patients selected from the LIDC/IDRI Database. | Provide a detailed description of the following dataset: LoDoPaB-CT |
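Low-dose measurements of the kind LoDoPaB-CT provides are typically simulated with a Beer-Lambert plus Poisson photon-counting model. The sketch below applies that generic model to a synthetic projection array; the photon count and all array shapes are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def simulate_low_dose(projection, photons_per_pixel=4096, rng=None):
    """Simulate noisy low-dose measurements from clean line integrals.

    Uses the standard Beer-Lambert + Poisson counting model that
    low-dose CT simulations are commonly based on; parameters here
    are illustrative, not LoDoPaB-CT's official configuration.
    """
    rng = rng or np.random.default_rng(0)
    expected_counts = photons_per_pixel * np.exp(-projection)
    counts = rng.poisson(expected_counts)
    # Convert counts back to noisy line integrals, avoiding log(0).
    return -np.log(np.maximum(counts, 1) / photons_per_pixel)

clean = np.abs(np.random.default_rng(1).standard_normal((4, 8)))
noisy = simulate_low_dose(clean)
```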
Logic2Text | **Logic2Text** is a large-scale dataset with 10,753 descriptions involving common logic types paired with the underlying logical forms. The logical forms show diversified graph structure of free schema, which poses great challenges on the model's ability to understand the semantics.
Source: [https://github.com/czyssrs/Logic2Text](https://github.com/czyssrs/Logic2Text) | Provide a detailed description of the following dataset: Logic2Text |
LogiQA | LogiQA consists of 8,678 QA instances, covering multiple types of deductive reasoning. Results show that state-of-the-art neural models perform far worse than the human ceiling. The dataset can also serve as a benchmark for reinvestigating logical AI under the deep learning NLP setting. | Provide a detailed description of the following dataset: LogiQA |
Logo-2K+ | **Logo-2K+**: A Large-Scale Logo Dataset for Scalable Logo Classification
The Logo-2K+ dataset contains a diverse range of logo classes from real-world logo images. It contains 167,140 images with 10 root categories and 2,341 leaf categories.
The 10 different root categories are: Food, Clothes, Institution, Accessories, Transportation, Electronic, Necessities, Cosmetic, Leisure and Medical.
Source: [https://github.com/msn199959/Logo-2k-plus-Dataset](https://github.com/msn199959/Logo-2k-plus-Dataset)
Image Source: [https://github.com/msn199959/Logo-2k-plus-Dataset](https://github.com/msn199959/Logo-2k-plus-Dataset) | Provide a detailed description of the following dataset: Logo-2K+ |
LogoDet-3K | A logo detection dataset with full annotation, which has 3,000 logo categories, about 200,000 manually annotated logo objects and 158,652 images. LogoDet-3K creates a more challenging benchmark for logo detection, for its higher comprehensive coverage and wider variety in both logo categories and annotated objects compared with existing datasets. | Provide a detailed description of the following dataset: LogoDet-3K |
LOGO-Net | A large-scale logo image database for logo detection and brand recognition from real-world product images. | Provide a detailed description of the following dataset: LOGO-Net |
Long-Term Crowd Flow | A synthetic dataset of procedurally generated environments, dynamically simulated crowd flows, and statically derived “proxy” crowd flows (which have more error but are more efficient to compute), for model training and evaluation. | Provide a detailed description of the following dataset: Long-Term Crowd Flow |
Long-term visual localization | Long-term visual localization provides benchmark datasets aimed at evaluating 6-DoF pose estimation accuracy under large appearance variations caused by seasonal (summer, winter, spring, etc.) and illumination (dawn, day, sunset, night) changes. Each dataset consists of a set of reference images, together with their corresponding ground truth poses, and a set of query images. | Provide a detailed description of the following dataset: Long-term visual localization |
Lost and Found | **Lost and Found** is a lost-cargo image sequence dataset comprising more than two thousand frames with pixel-wise annotations of obstacles and free space, along with a thorough comparison to several stereo-based baseline methods. The dataset was released to the community to foster further research on this topic. | Provide a detailed description of the following dataset: Lost and Found |
LPW | **Labeled Pedestrian in the Wild (LPW)** is a pedestrian detection dataset that contains 2,731 pedestrians in three different scenes, where each annotated identity is captured by 2 to 4 cameras. LPW features a notable scale of 7,694 tracklets with over 590,000 images, as well as the cleanliness of its tracklets. It is distinguished from existing datasets in three aspects: large scale with cleanliness, automatically detected bounding boxes, and far more crowded scenes with a greater age span. This dataset provides a more realistic and challenging benchmark, which facilitates the further exploration of more powerful algorithms. | Provide a detailed description of the following dataset: LPW |
LRS3-TED | LRS3-TED is a multi-modal dataset for visual and audio-visual speech recognition. It includes face tracks from over 400 hours of TED and TEDx videos, along with the corresponding subtitles and word alignment boundaries. The new dataset is substantially larger in scale compared to other public datasets that are available for general research. | Provide a detailed description of the following dataset: LRS3-TED |
LS3D-W | A 3D facial landmark dataset of around 230,000 images. | Provide a detailed description of the following dataset: LS3D-W |
LSHTC | LSHTC is a dataset for large-scale text classification. The data used in the LSHTC challenges originates from two popular sources: DBpedia and the ODP (Open Directory Project) directory, also known as DMOZ. DBpedia instances were selected from the English, non-regional Extended Abstracts provided by the DBpedia site. The DMOZ instances consist of either Content vectors, Description vectors or both. A Content vector is obtained by directly indexing the web page using a standard indexing chain (preprocessing, stemming/lemmatization, stop-word removal). | Provide a detailed description of the following dataset: LSHTC |
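The indexing chain described above can be sketched as a toy term-frequency vectorizer. The stop list below is illustrative, and real LSHTC indexing also stems or lemmatizes tokens, which this sketch omits.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "of", "is"}  # illustrative stop list only

def content_vector(text):
    """Build a simple term-frequency 'content vector' from raw text.

    Mimics the indexing chain (lowercasing, stop-word removal) in
    simplified form; stemming/lemmatization is left out.
    """
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    return Counter(tokens)

vec = content_vector("The cat sat on the mat")
```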
LSICC | Large Scale Informal Chinese Corpus (LSICC) is a large-scale corpus of informal Chinese. This corpus contains around 37 million book reviews and 50 thousand netizens' comments on the news. | Provide a detailed description of the following dataset: LSICC |
LSLF | Consists of a large number of unconstrained multi-view and partially occluded faces. | Provide a detailed description of the following dataset: LSLF |
LSMDC-Context | The Large Scale Movie Description Challenge (LSMDC) - Context is an augmented version of the original LSMDC dataset with movie scripts as contextual text.
Source: [https://github.com/primle/LSMDC-Context](https://github.com/primle/LSMDC-Context) | Provide a detailed description of the following dataset: LSMDC-Context |
LVIS | LVIS is a dataset for long tail instance segmentation. It has annotations for over 1000 object categories in 164k images. | Provide a detailed description of the following dataset: LVIS |
Lyft Level 5 Prediction | A self-driving dataset for motion prediction, containing over 1,000 hours of data. This was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California, over a four-month period. It consists of 170,000 scenes, where each scene is 25 seconds long and captures the perception output of the self-driving system, which encodes the precise positions and motions of nearby vehicles, cyclists, and pedestrians over time. | Provide a detailed description of the following dataset: Lyft Level 5 Prediction |
m2caiSeg | Created from endoscopic video feeds of real-world surgical procedures. Overall, the data consists of 307 images, each of which is annotated for the organs and different surgical instruments present in the scene. | Provide a detailed description of the following dataset: m2caiSeg |
M2E2 | Aims to extract events and their arguments from multimedia documents. Develops the first benchmark and collects a dataset of 245 multimedia news articles with extensively annotated events and arguments. | Provide a detailed description of the following dataset: M2E2 |
Machine Number Sense | Consists of visual arithmetic problems automatically generated using a grammar model--And-Or Graph (AOG). These visual arithmetic problems are in the form of geometric figures: each problem has a set of geometric shapes as its context and embedded number symbols. | Provide a detailed description of the following dataset: Machine Number Sense |
Mafiascum | A collection of over 700 games of Mafia, in which players are randomly assigned either deceptive or non-deceptive roles and then interact via forum postings. Over 9,000 documents were compiled from the dataset, each of which contained all messages written by a single player in a single game. This corpus was used to construct a set of hand-picked linguistic features based on prior deception research, as well as a set of average word vectors enriched with subword information. | Provide a detailed description of the following dataset: Mafiascum |
Makeup Datasets | A dataset of female face images assembled for studying the impact of makeup on face recognition. | Provide a detailed description of the following dataset: Makeup Datasets |
Malaria Dataset | The dataset contains a total of 27,558 cell images with equal instances of parasitized and uninfected cells. | Provide a detailed description of the following dataset: Malaria Dataset |
MalayalamMixSentiment | MalayalamMixSentiment is a Sentiment Analysis Dataset for Code-Mixed Malayalam-English. | Provide a detailed description of the following dataset: MalayalamMixSentiment |
MAMe | The **MAMe** dataset contains images of high-resolution and variable shape of artworks from 3 different museums:
- The Metropolitan Museum of Art of New York
- The Los Angeles County Museum of Art
- The Cleveland Museum of Art | Provide a detailed description of the following dataset: MAMe |
MANTRA | An annotated dataset of 4869 transient and 71207 non-transient object lightcurves built from the Catalina Real Time Transient Survey. | Provide a detailed description of the following dataset: MANTRA |
ManyModalQA | Collects data by scraping Wikipedia and then utilizes crowdsourcing to collect question-answer pairs. | Provide a detailed description of the following dataset: ManyModalQA |
Market1203-Reid-Dataset | This dataset contains 1203 individuals captured from two disjoint camera views. To each person, one to twelve images are captured from one to six different orientations under one camera view and are normalized to 128x64 pixels. This dataset is constructed based on the Market-1501 benchmark data and the orientation label for each image has been manually annotated.
Source: [https://github.com/charliememory/Market1203-Reid-Dataset](https://github.com/charliememory/Market1203-Reid-Dataset) | Provide a detailed description of the following dataset: Market1203-Reid-Dataset |
Marmara Turkish Coreference Resolution Corpus | Describes the Marmara Turkish Coreference Corpus, an annotation of the whole METU-Sabanci Turkish Treebank with mentions and coreference chains. | Provide a detailed description of the following dataset: Marmara Turkish Coreference Resolution Corpus |
MASATI | The MASATI dataset contains color images in dynamic marine environments, and it can be used to evaluate ship detection methods. Each image may contain one or multiple targets in different weather and illumination conditions. The dataset is composed of 7,389 satellite images labeled according to the following seven classes: land, coast, sea, ship, multi, coast-ship, and detail. In addition, labeling with the bounding box for the location of the vessels is also included. | Provide a detailed description of the following dataset: MASATI |
MaskedFace-Net | Proposes three masked face detection datasets: the Correctly Masked Face Dataset (CMFD), the Incorrectly Masked Face Dataset (IMFD), and their combination for global masked face detection (MaskedFace-Net). | Provide a detailed description of the following dataset: MaskedFace-Net |
MASRI-HEADSET | MASRI-HEADSET is a corpus that was developed by the MASRI project at the University of Malta. It consists of 8 hours of speech paired with text, recorded by using short text snippets in a laboratory environment. The speakers were recruited from different geographical locations all over the Maltese islands, and were roughly evenly distributed by gender. | Provide a detailed description of the following dataset: MASRI-HEADSET |
MaSS | MaSS (Multilingual corpus of Sentence-aligned Spoken utterances) is an extension of the CMU Wilderness Multilingual Speech Dataset, a speech dataset based on recorded readings of the New Testament.
MaSS extends it by providing a large and clean dataset of 8,130 parallel spoken utterances across 8 languages (56 language pairs). The covered languages are: Basque, English, Finnish, French, Hungarian, Romanian, Russian and Spanish. | Provide a detailed description of the following dataset: MaSS |
MathQA | MathQA significantly enhances the AQuA dataset with fully-specified operational programs. | Provide a detailed description of the following dataset: MathQA |
MatterportLayout | MatterportLayout extends the Matterport3D dataset with general Manhattan layout annotations. It has 2,295 RGBD panoramic images from Matterport3D which are extended with ground truth 3D layouts. | Provide a detailed description of the following dataset: MatterportLayout |
MAVEN | Contains 4,480 Wikipedia documents, 118,732 event mention instances, and 168 event types. | Provide a detailed description of the following dataset: MAVEN |
MCAD | Designed to evaluate the open view classification problem under the surveillance environment. In total, MCAD contains 14,298 action samples from 18 action categories, which are performed by 20 subjects and independently recorded with 5 cameras. | Provide a detailed description of the following dataset: MCAD |
MC-AFP | A dataset of around 2 million examples for machine reading-comprehension. | Provide a detailed description of the following dataset: MC-AFP |
MCIC-COCO | A large-scale machine comprehension dataset (based on the COCO images and captions). | Provide a detailed description of the following dataset: MCIC-COCO |
MC-TACO | MC-TACO is a dataset of 13k question-answer pairs that require temporal commonsense comprehension. The dataset covers five temporal properties: (1) duration (how long an event takes), (2) temporal ordering (typical order of events), (3) typical time (when an event occurs), (4) frequency (how often an event occurs), and (5) stationarity (whether a state is maintained for a very long time or indefinitely). | Provide a detailed description of the following dataset: MC-TACO |
MD4K | A small-scale training set, which only contains 4K images. | Provide a detailed description of the following dataset: MD4K |
MD Gender | Provides eight automatically annotated large scale datasets with gender information. | Provide a detailed description of the following dataset: MD Gender |
mEBAL | A multimodal database for eye blink detection and attention level estimation. | Provide a detailed description of the following dataset: mEBAL |
MED | **MED** is an evaluation dataset covering a wide range of monotonicity reasoning. It was constructed by collecting naturally-occurring examples through crowdsourcing and well-designed ones from linguistics publications.
It consists of 5,382 examples. | Provide a detailed description of the following dataset: MED |
MeDAL | The Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (**MeDAL**) is a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. It was published at the ClinicalNLP workshop at EMNLP. | Provide a detailed description of the following dataset: MeDAL |
MedDG | **MedDG** is a large-scale high-quality Medical Dialogue dataset related to 12 types of common Gastrointestinal diseases. It contains more than 17K conversations collected from the online health consultation community. Five different categories of entities, including diseases, symptoms, attributes, tests, and medicines, are annotated in each conversation of MedDG as additional labels.
Two kinds of medical dialogue tasks are proposed for this dataset:
* Next entity prediction
* Doctor response generation
Source: [https://github.com/lwgkzl/MedDG](https://github.com/lwgkzl/MedDG) | Provide a detailed description of the following dataset: MedDG |
Medical Case Report Corpus | Medical Case Report Corpus is a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. | Provide a detailed description of the following dataset: Medical Case Report Corpus |
MedICaT | **MedICaT** is a dataset of medical images, captions, subfigure-subcaption annotations, and inline textual references.
Figures and captions are extracted from open access articles in PubMed Central and corresponding reference text is derived from S2ORC.
The dataset consists of:
* 217,060 figures from 131,410 open access papers
* 7,507 subcaption and subfigure annotations for 2,069 compound figures
* Inline references for ~25K figures in the ROCO dataset
Source: [https://github.com/allenai/medicat](https://github.com/allenai/medicat) | Provide a detailed description of the following dataset: MedICaT |
MEDIQA-AnS | The first summarization collection containing question-driven summaries of answers to consumer health questions. This dataset can be used to evaluate single or multi-document summaries generated by algorithms using extractive or abstractive approaches. | Provide a detailed description of the following dataset: MEDIQA-AnS |
medisim | **medisim** is a collection of new large-scale medical term similarity datasets based on SNOMED-CT.
Source: [https://github.com/babylonhealth/medisim](https://github.com/babylonhealth/medisim) | Provide a detailed description of the following dataset: medisim |
Medley2K | Medley2K is a dataset of 2,000 medleys with 7,712 labeled transitions. | Provide a detailed description of the following dataset: Medley2K |
MedMentions | MedMentions is a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. | Provide a detailed description of the following dataset: MedMentions |
MedMNIST | A collection of 10 pre-processed medical open datasets. MedMNIST is standardized to perform classification tasks on lightweight 28x28 images, which requires no background knowledge. | Provide a detailed description of the following dataset: MedMNIST |
MedQuAD | MedQuAD includes 47,457 medical question-answer pairs created from 12 NIH websites (e.g. cancer.gov, niddk.nih.gov, GARD, MedlinePlus Health Topics). The collection covers 37 question types (e.g. Treatment, Diagnosis, Side Effects) associated with diseases, drugs and other medical entities such as tests. | Provide a detailed description of the following dataset: MedQuAD |
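QA collections like MedQuAD are commonly distributed as XML with typed questions. The sketch below extracts question-answer pairs and question types from such a document; the tag and attribute names (`QAPair`, `Question`, `Answer`, `qtype`) are assumptions for illustration, so verify them against the actual MedQuAD files.

```python
import xml.etree.ElementTree as ET

# Illustrative XML mimicking a typed QA record; tag names are assumed.
sample = """
<Document>
  <QAPair>
    <Question qtype="treatment">How is X treated?</Question>
    <Answer>With Y.</Answer>
  </QAPair>
</Document>
"""

root = ET.fromstring(sample)
pairs = [
    (qa.findtext("Question"), qa.findtext("Answer"), qa.find("Question").get("qtype"))
    for qa in root.iter("QAPair")
]
```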
MegaAge | MegaAge is a large dataset that consists of 41,941 faces annotated with age posterior distributions. | Provide a detailed description of the following dataset: MegaAge |
Mega-COV | **Mega-COV** is a billion-scale dataset from Twitter for studying COVID-19. The dataset is diverse (covers 234 countries), longitudinal (goes as back as 2007), multilingual (comes in 65 languages), and has a significant number of location-tagged tweets (~32M tweets).
Source: [https://github.com/UBC-NLP/megacov](https://github.com/UBC-NLP/megacov)
Image Source: [https://github.com/UBC-NLP/megacov](https://github.com/UBC-NLP/megacov) | Provide a detailed description of the following dataset: Mega-COV |
MegaDepth | The MegaDepth dataset is a dataset for single-view depth prediction that includes 196 different locations reconstructed from COLMAP SfM/MVS. | Provide a detailed description of the following dataset: MegaDepth |
MeGlass | **MeGlass** is an eyeglass dataset originally designed for eyeglass face recognition evaluation. All the face images are selected and cleaned from MegaFace. Each identity has at least two face images with eyeglass and two face images without eyeglass. It contains 47,817 images from 1,710 different identities.
Source: [https://github.com/cleardusk/MeGlass](https://github.com/cleardusk/MeGlass)
Image Source: [https://github.com/cleardusk/MeGlass](https://github.com/cleardusk/MeGlass) | Provide a detailed description of the following dataset: MeGlass |
MEIR | MEIR is a substantially more challenging dataset than those previously available to support research into image repurposing detection. The dataset includes location, person, and organization manipulations on real-world data sourced from Flickr. | Provide a detailed description of the following dataset: MEIR |