| dataset_name | description | prompt |
|---|---|---|
Alexa Point of View | The **Alexa Point of View** dataset is a point-of-view conversion dataset: a parallel corpus of messages spoken to a virtual assistant and the converted messages for delivery.
The dataset contains a parallel corpus of input messages (input column) and POV-converted messages (output column). An example pair is `tell @CN@ that i'll be late [\t] hi @CN@, @SCN@ would like you to know that they'll be late.` The input and POV-converted output in each pair are tab separated. The `@CN@` tag is a placeholder for the contact name (receiver) and the `@SCN@` tag is a placeholder for the source contact name (sender).
The dataset has 46,563 pairs in total, split into test/train/dev sets of 6,985/32,594/6,985 pairs.
Source: [https://github.com/alexa/alexa-point-of-view-dataset](https://github.com/alexa/alexa-point-of-view-dataset) | Provide a detailed description of the following dataset: Alexa Point of View |
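The tab-separated placeholder format above can be sketched in a few lines of Python; the helper names and the filled-in contact names (Alice, Bob) are illustrative, not part of the dataset:

```python
def parse_pov_pair(line: str):
    """Split one tab-separated dataset line into (input, pov_output)."""
    source, converted = line.rstrip("\n").split("\t")
    return source, converted

def fill_placeholders(text: str, contact: str, sender: str) -> str:
    """Substitute the @CN@ (receiver) and @SCN@ (sender) placeholders."""
    return text.replace("@CN@", contact).replace("@SCN@", sender)

pair = "tell @CN@ that i'll be late\thi @CN@, @SCN@ would like you to know that they'll be late."
src, tgt = parse_pov_pair(pair)
delivered = fill_placeholders(tgt, "Alice", "Bob")
# delivered: "hi Alice, Bob would like you to know that they'll be late."
```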
ALFRED | ALFRED (Action Learning From Realistic Environments and Directives) is a new benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. | Provide a detailed description of the following dataset: ALFRED |
ALGAD | Repository of a generative art dataset by computer artist Andy Lomas.
Source: [https://github.com/SensiLab/Andy-Lomas-Generative-Art-Dataset](https://github.com/SensiLab/Andy-Lomas-Generative-Art-Dataset)
Image Source: [https://github.com/SensiLab/Andy-Lomas-Generative-Art-Dataset](https://github.com/SensiLab/Andy-Lomas-Generative-Art-Dataset) | Provide a detailed description of the following dataset: ALGAD |
Allegro Reviews | A comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. | Provide a detailed description of the following dataset: Allegro Reviews |
AlloCine | A new dataset for sentiment analysis, scraped from Allociné.fr user reviews. It contains 100k positive and 100k negative reviews divided into 3 balanced splits: train (160k reviews), val (20k) and test (20k). | Provide a detailed description of the following dataset: AlloCine |
ALT | The ALT project aims to advance state-of-the-art Asian natural language processing (NLP) techniques through open collaboration for developing and using ALT. It was first conducted by NICT and UCSY, as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016), and was then developed under ASEAN IVO. The process of building ALT began with sampling about 20,000 sentences from English Wikinews; these sentences were then translated into the other languages. ALT now covers 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, and Chinese (Simplified). | Provide a detailed description of the following dataset: ALT |
AMASS | AMASS is a large database of human motion unifying different optical marker-based motion capture datasets by representing them within a common framework and parameterization. AMASS is readily useful for animation, visualization, and generating training data for deep learning. | Provide a detailed description of the following dataset: AMASS |
Amazon Product Data | This dataset contains product reviews and metadata from Amazon, including 142.8 million reviews spanning May 1996 - July 2014.
This dataset includes reviews (ratings, text, helpfulness votes), product metadata (descriptions, category information, price, brand, and image features), and links (also viewed/also bought graphs). | Provide a detailed description of the following dataset: Amazon Product Data |
AmbigQA | AmbigQA is a new open-domain question answering task that involves predicting a set of question-answer pairs, where every plausible answer is paired with a disambiguated rewrite of the original question. The dataset covers 14,042 questions from NQ-open, an existing open-domain QA benchmark. | Provide a detailed description of the following dataset: AmbigQA |
AML Robot Cutting Dataset | The AML Robot Cutting Dataset consists of approximately 1,500 seconds of real data collected on a Kinova Jaco 2 robot retrofitted with a custom end-effector fixture and a Dremel, performing cutting tasks on wood specimens across 5 materials and 5 thicknesses. | Provide a detailed description of the following dataset: AML Robot Cutting Dataset |
ANETAC | An English-Arabic named entity transliteration and classification dataset built from freely available parallel translation corpora. The dataset contains 79,924 instances; each instance is a triplet (e, a, c), where e is the English named entity, a is its Arabic transliteration, and c is its class, which can be Person, Location, or Organization. The ANETAC dataset is mainly aimed at researchers working on Arabic named entity transliteration, but it can also be used for named entity classification purposes. | Provide a detailed description of the following dataset: ANETAC |
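The (e, a, c) triplet structure lends itself to a simple sketch; the sample triplets below are illustrative, not real ANETAC entries:

```python
from collections import Counter

# Illustrative (e, a, c) triplets: English entity, Arabic transliteration, class.
triplets = [
    ("London", "لندن", "Location"),
    ("Sarah", "سارة", "Person"),
    ("UNESCO", "اليونسكو", "Organization"),
]

def by_class(data, cls):
    """Keep only triplets whose class c matches cls."""
    return [(e, a, c) for (e, a, c) in data if c == cls]

class_counts = Counter(c for _, _, c in triplets)
people = by_class(triplets, "Person")
```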
Animal-Pose Dataset | **Animal-Pose Dataset** is an animal pose dataset to facilitate training and evaluation. It provides animal pose annotations for five categories: dog, cat, cow, horse, and sheep, with more than 6,000 instances in more than 4,000 images in total. In addition, the dataset contains bounding box annotations for 7 other animal categories. | Provide a detailed description of the following dataset: Animal-Pose Dataset |
AnimalWeb | A large-scale, hierarchically annotated dataset of animal faces, featuring 21.9K faces from 334 diverse species and 21 animal orders across the biological taxonomy. These faces were captured under in-the-wild conditions and are consistently annotated with 9 landmarks on key facial features. The dataset is structured and scalable by design; its development underwent four systematic stages involving rigorous manual annotation effort of over 6K man-hours. | Provide a detailed description of the following dataset: AnimalWeb |
ANTIQUE | ANTIQUE is a collection of 2,626 open-domain non-factoid questions from a diverse set of categories. The dataset contains 34,011 manual relevance annotations. The questions were asked by real users in a community question answering service, i.e., Yahoo! Answers. Relevance judgments for all the answers to each question were collected through crowdsourcing. | Provide a detailed description of the following dataset: ANTIQUE |
AO-CLEVr | **AO-CLEVr** is a new synthetic-images dataset containing images of "easy" Attribute-Object categories, based on CLEVr. AO-CLEVr has attribute-object pairs created from 8 attributes: {red, purple, yellow, blue, green, cyan, gray, brown} and 3 object shapes {sphere, cube, cylinder}, yielding 24 attribute-object pairs. Each pair consists of 7,500 images. Each image has a single object exhibiting the attribute-object pair. The object is randomly assigned one of two sizes (small/large), one of two materials (rubber/metallic), a random position, and random lighting according to CLEVr defaults.
Source: [https://github.com/nv-research-israel/causal_comp](https://github.com/nv-research-israel/causal_comp)
Image Source: [https://github.com/nv-research-israel/causal_comp](https://github.com/nv-research-israel/causal_comp) | Provide a detailed description of the following dataset: AO-CLEVr |
ApartmenTour | Contains a large number of online videos and subtitles. | Provide a detailed description of the following dataset: ApartmenTour |
APE | The APE dataset is used to evaluate machine translation automatic post-editing (**APE**): the task of improving the output of a black-box MT system by automatically fixing its mistakes. The act of post-editing text can be fully specified as a sequence of delete and insert actions at given positions. | Provide a detailed description of the following dataset: APE |
APRICOT | APRICOT is a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle. | Provide a detailed description of the following dataset: APRICOT |
APT-Malware | The APT Malware dataset is used to train classifiers that predict whether a given malware sample belongs to the "Advanced Persistent Threat" (APT) type or not. It contains 3,131 samples spread over 24 unique malware classes.
Source: [https://arxiv.org/pdf/1810.07321.pdf](https://arxiv.org/pdf/1810.07321.pdf) | Provide a detailed description of the following dataset: APT-Malware |
AQUA | The question-answer (QA) pairs are automatically generated using state-of-the-art question generation methods based on paintings and comments provided in an existing art understanding dataset. The QA pairs are cleansed by crowdsourcing workers with respect to their grammatical correctness, answerability, and answers' correctness. The dataset inherently consists of visual (painting-based) and knowledge (comment-based) questions. | Provide a detailed description of the following dataset: AQUA |
Aqualoc | A new underwater dataset recorded in a harbor, providing several sequences with synchronized measurements from a monocular camera, a MEMS-IMU and a pressure sensor. | Provide a detailed description of the following dataset: Aqualoc |
aquamuse | 5,519 query-based summaries, each associated with an average of 6 input documents selected from an index of 355M documents from Common Crawl. | Provide a detailed description of the following dataset: aquamuse |
AQUA-RAT | Algebra Question Answering with Rationales (AQUA-RAT) is a dataset that contains algebraic word problems with rationales. The dataset consists of about 100,000 algebraic word problems with natural language rationales. Each problem is a JSON object consisting of four parts:
* question - A natural language definition of the problem to solve
* options - 5 possible options (A, B, C, D and E), among which one is correct
* rationale - A natural language description of the solution to the problem
* correct - The correct option | Provide a detailed description of the following dataset: AQUA-RAT |
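A minimal sketch of loading one such problem; the JSON object below is a made-up illustration of the four fields, not a real AQUA-RAT entry:

```python
import json

record = json.loads("""
{
  "question": "If x + 3 = 7, what is x?",
  "options": ["A) 2", "B) 3", "C) 4", "D) 5", "E) 6"],
  "rationale": "Subtract 3 from both sides: x = 7 - 3 = 4.",
  "correct": "C"
}
""")

# Resolve the letter in "correct" back to the full option string.
answer = next(o for o in record["options"] if o.startswith(record["correct"]))
# answer: "C) 4"
```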
Arabic Dataset for Commonsense Validation | A benchmark Arabic dataset for commonsense understanding and validation, together with baseline research and models trained on the dataset. | Provide a detailed description of the following dataset: Arabic Dataset for Commonsense Validation |
Arabic Handwritten Digits Dataset | Contains Arabic handwritten digit images (60,000 training and 10,000 testing images). | Provide a detailed description of the following dataset: Arabic Handwritten Digits Dataset |
Arabic Text Diacritization | Extracted from the Tashkeela Corpus, the dataset consists of 55K lines containing about 2.3M words.
Source: [https://github.com/AliOsm/arabic-text-diacritization](https://github.com/AliOsm/arabic-text-diacritization) | Provide a detailed description of the following dataset: Arabic Text Diacritization |
ArabicWeb16 | It includes 150M (150,211,934) Arabic Web pages.
Web pages in ArabicWeb16 are collected into files that conform to the WARC ISO 28500 version 0.18 standard ("WARC files"). | Provide a detailed description of the following dataset: ArabicWeb16 |
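A WARC file is a sequence of records, each with a version line, named headers, a blank line, and a payload of `Content-Length` bytes. The toy record below is a minimal illustration of that layout; production code should use a dedicated WARC library:

```python
# Toy single WARC record (illustrative bytes, not from ArabicWeb16).
raw = (b"WARC/0.18\r\n"
       b"WARC-Type: response\r\n"
       b"WARC-Target-URI: http://example.com/\r\n"
       b"Content-Length: 5\r\n"
       b"\r\n"
       b"hello")

# Split the header block from the payload at the first blank line.
header_block, _, body = raw.partition(b"\r\n\r\n")
lines = header_block.decode("utf-8").split("\r\n")
version = lines[0]                                   # e.g. "WARC/0.18"
headers = dict(l.split(": ", 1) for l in lines[1:])  # named headers
payload = body[: int(headers["Content-Length"])]
```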
ARCD | Composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). | Provide a detailed description of the following dataset: ARCD |
ArCOV-19 | ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27 January to 30 April 2020. ArCOV-19 is the first publicly available Arabic Twitter dataset covering the COVID-19 pandemic; it includes over 1M tweets alongside the propagation networks of the most popular subset of them (i.e., the most retweeted and liked). | Provide a detailed description of the following dataset: ArCOV-19 |
ArCOV19-Rumors | ArCOV19-Rumors is an Arabic COVID-19 Twitter dataset for misinformation detection composed of tweets containing claims from 27th January till the end of April 2020. | Provide a detailed description of the following dataset: ArCOV19-Rumors |
ARDIS | ARDIS (Arkiv Digital Sweden) is a new image-based handwritten historical digit dataset. The images in the ARDIS dataset are extracted from 15,000 Swedish church records written by different priests with various handwriting styles in the nineteenth and twentieth centuries. The constructed dataset consists of three single-digit datasets and one digit-strings dataset. The digit-strings dataset includes 10,000 samples in Red-Green-Blue (RGB) color space, whereas the other datasets contain 7,600 single-digit images in different color spaces. | Provide a detailed description of the following dataset: ARDIS |
Armenian Paraphrase Detection Corpus | This dataset contains 2,360 paraphrases in Armenian that can be used for paraphrase detection. The dataset is constructed by back-translating sentences from Armenian to English twice, and manually filtering the result.
Source: [https://github.com/ivannikov-lab/arpa-paraphrase-corpus](https://github.com/ivannikov-lab/arpa-paraphrase-corpus) | Provide a detailed description of the following dataset: Armenian Paraphrase Detection Corpus |
ArraMon | A dataset (in English; and also extended to Hindi) with human-written navigation and assembling instructions, and the corresponding ground truth trajectories. | Provide a detailed description of the following dataset: ArraMon |
ArSentD-LEV | The Arabic Sentiment Twitter Dataset for the Levantine dialect (ArSenTD-LEV) is a dataset of 4,000 tweets with the following annotations: the overall sentiment of the tweet, the target to which the sentiment was expressed, how the sentiment was expressed, and the topic of the tweet. | Provide a detailed description of the following dataset: ArSentD-LEV |
ART Dataset | ART consists of over 20k commonsense narrative contexts and 200k explanations. | Provide a detailed description of the following dataset: ART Dataset |
Arxiv Academic Paper Dataset | A dataset to enable automatic academic paper rating. | Provide a detailed description of the following dataset: Arxiv Academic Paper Dataset |
arXiv Summarization Dataset | This is a dataset for evaluating summarisation methods for research papers. | Provide a detailed description of the following dataset: arXiv Summarization Dataset |
ASAYAR | The first public dataset dedicated to Latin (French) and Arabic scene text detection in highway panels. It comprises more than 1,800 well-annotated images. The dataset was collected from Moroccan highways and has been manually annotated. ASAYAR data can be used to develop and evaluate traffic sign detection and French or Arabic text detection. | Provide a detailed description of the following dataset: ASAYAR |
AskParents | **AskParents** is a dataset for advice classification extracted from Reddit. In this dataset, posts are annotated for whether they contain advice or not. It contains 8,701 samples for training, 802 for validation and 1,091 for testing.
Source: [https://github.com/venkatasg/Advice-EMNLP2020](https://github.com/venkatasg/Advice-EMNLP2020) | Provide a detailed description of the following dataset: AskParents |
ASNQ | A large-scale dataset built by exploiting the Natural Questions dataset, designed to enable the transfer step of a transfer-then-adapt approach to answer sentence selection. | Provide a detailed description of the following dataset: ASNQ |
ASSET Corpus | A crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations. | Provide a detailed description of the following dataset: ASSET Corpus |
ASSIN | ASSIN (Avaliação de Similaridade Semântica e INferência textual) is a dataset with semantic similarity score and entailment annotations. It was used in a shared task in the PROPOR 2016 conference.
The full dataset has 10,000 sentence pairs, half in Brazilian Portuguese and half in European Portuguese. Each language variant has 2,500 pairs for training, 500 for validation and 2,000 for testing. This differs from the split used in the shared task, in which the training set had 3,000 pairs and there was no validation set. The shared-task training set can be reconstructed by simply merging the training and validation sets. | Provide a detailed description of the following dataset: ASSIN |
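The split arithmetic can be checked directly; per language variant, train (2,500) plus validation (500) reproduces the 3,000-pair shared-task training set:

```python
# Per-variant split sizes as described above.
splits = {"train": 2500, "validation": 500, "test": 2000}

shared_task_train = splits["train"] + splits["validation"]  # 3000 pairs
per_variant_total = sum(splits.values())                    # 5000 pairs
total_pairs = 2 * per_variant_total                         # 10000 across both variants
```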
ASSIN2 | ASSIN 2 is the second edition of the Semantic Similarity Assessment and Textual Inference evaluation, held as a workshop in conjunction with STIL 2019. | Provide a detailed description of the following dataset: ASSIN2 |
Astyx HiRes2019 | A radar-centric automotive dataset based on radar, lidar and camera data for the purpose of 3D object detection. | Provide a detailed description of the following dataset: Astyx HiRes2019 |
Atari Grand Challenge | The **Atari Grand Challenge** dataset is a large dataset of human Atari 2600 replays. It consists of replays for 5 different games:
* Space Invaders (445 episodes, 2M frames)
* Q*bert (659 episodes, 1.6M frames)
* Ms.Pacman (384 episodes, 1.7M frames)
* Video Pinball (211 episodes, 1.5M frames)
* Montezuma’s revenge (668 episodes, 2.7M frames) | Provide a detailed description of the following dataset: Atari Grand Challenge |
Atlas | **Atlas** is a dataset for e-commerce clothing product categorization. It consists of a high-quality product taxonomy dataset focusing on clothing products, containing 186,150 images under the clothing category, with 3 levels and 52 leaf nodes in the taxonomy. | Provide a detailed description of the following dataset: Atlas |
ATOMIC | **ATOMIC** is an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on inferential knowledge organized as typed if-then relations with variables (e.g., "if X pays Y a compliment, then Y will likely return the compliment"). | Provide a detailed description of the following dataset: ATOMIC |
ATRW | The **ATRW Dataset** contains over 8,000 video clips from 92 Amur tigers, with bounding box, pose keypoint, and tiger identity annotations. | Provide a detailed description of the following dataset: ATRW |
AU-AIR | **AU-AIR** is a multi-modal aerial dataset captured by a UAV. With visual data, object annotations, and flight data (time, GPS, altitude, IMU sensor data, velocities), AU-AIR bridges computer vision and robotics for UAVs.
Source: [https://github.com/bozcani/auairdataset](https://github.com/bozcani/auairdataset)
Image Source: [https://github.com/bozcani/auairdataset](https://github.com/bozcani/auairdataset) | Provide a detailed description of the following dataset: AU-AIR |
Automatic Keyphrase Extraction Dataset | Dataset for automatic keyphrase extraction task. | Provide a detailed description of the following dataset: Automatic Keyphrase Extraction Dataset |
Automating Dynamic Consent | This dataset is used to evaluate a predictive consent model for users’ information shared in social media. In this task, the goal is to predict whether the users will give their consent to share that data with different hypothetical audiences within a medical context. The dataset is built from information the users posted on Facebook and their consent answers about each piece of information.
Source: [https://github.com/cnorval/automating-dynamic-consent-dataset](https://github.com/cnorval/automating-dynamic-consent-dataset) | Provide a detailed description of the following dataset: Automating Dynamic Consent |
AuxAD | **AuxAD** is a distantly supervised dataset for acronym disambiguation.
Source: [https://github.com/PrimerAI/sdu-data](https://github.com/PrimerAI/sdu-data) | Provide a detailed description of the following dataset: AuxAD |
AVA-ActiveSpeaker | Contains temporally labeled face tracks in video, where each face instance is labeled as speaking or not, and whether the speech is audible. This dataset contains about 3.65 million human labeled frames or about 38.5 hours of face tracks, and the corresponding audio. | Provide a detailed description of the following dataset: AVA-ActiveSpeaker |
AVA-LAEO | Dataset to address the problem of detecting people Looking At Each Other (LAEO) in video sequences. | Provide a detailed description of the following dataset: AVA-LAEO |
AVA-Speech | Contains densely labeled speech activity in YouTube videos, with the goal of creating a shared, available dataset for this task. | Provide a detailed description of the following dataset: AVA-Speech |
AVD | AVD focuses on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. | Provide a detailed description of the following dataset: AVD |
AVE | A dataset to investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. | Provide a detailed description of the following dataset: AVE |
AVECL-UMons | A dataset for audio-visual event classification and localization in the context of office environments. The audio-visual dataset is composed of 11 event classes recorded at several realistic positions in two different rooms. Two types of sequences are recorded according to the number of events in the sequence. The dataset comprises 2662 unilabel sequences and 2724 multilabel sequences corresponding to a total of 5.24 hours. | Provide a detailed description of the following dataset: AVECL-UMons |
BanFakeNews | An annotated dataset of ~50K news that can be used for building automated fake news detection systems for a low resource language like Bangla. | Provide a detailed description of the following dataset: BanFakeNews |
BanglaLekha-Isolated | This dataset contains Bangla handwritten numerals, basic characters and compound characters. It was collected from multiple geographical locations within Bangladesh and includes samples from a variety of age groups. The dataset can also be used for other classification problems, e.g., gender, age, or district. | Provide a detailed description of the following dataset: BanglaLekha-Isolated |
BanglaWriting | The **BanglaWriting** dataset contains single-page handwriting of 260 individuals of different personalities and ages. Each page includes bounding boxes that bound each word, along with the Unicode representation of the writing. The dataset contains 21,234 words and 32,787 characters in total, including 5,470 unique words of the Bangla vocabulary. Apart from the usual words, the dataset comprises 261 instances of comprehensible overwriting and 450 of incomprehensible overwriting. All of the bounding boxes and word labels are manually generated. The dataset can be used for complex optical character/word recognition, writer identification, and handwritten word segmentation. Furthermore, it is suitable for extracting age-based and gender-based variation of handwriting.
Source: [https://github.com/QuwsarOhi/BanglaWriting](https://github.com/QuwsarOhi/BanglaWriting)
Image Source: [https://github.com/QuwsarOhi/BanglaWriting](https://github.com/QuwsarOhi/BanglaWriting) | Provide a detailed description of the following dataset: BanglaWriting |
BAR | The **Biased Action Recognition** (**BAR**) dataset is a real-world image dataset with six action classes that are biased towards distinct places. The authors settled on these six action classes by inspecting imSitu, which provides still action images from Google Image Search with action and place labels. In detail, they chose action classes whose images share common place characteristics, while keeping the place characteristics of different classes distinct, so that the action can be classified from place attributes alone. The selected pairs are six typical action-place pairs: (Climbing, RockWall), (Diving, Underwater), (Fishing, WaterSurface), (Racing, APavedTrack), (Throwing, PlayingField), and (Vaulting, Sky). | Provide a detailed description of the following dataset: BAR |
BASIL | 300 news articles annotated with 1,727 bias spans; the annotations provide evidence that informational bias appears in news articles more frequently than lexical bias. | Provide a detailed description of the following dataset: BASIL |
BBDB | A new large-scale baseball video dataset which is produced semi-automatically by using play-by-play texts available online. The BBDB contains 4200 hours of baseball game videos with 400k temporally annotated activity segments. | Provide a detailed description of the following dataset: BBDB |
BCWS | A dataset for evaluating English-Chinese bilingual contextual word similarity. It consists of 2,091 English-Chinese word pairs with the corresponding sentential contexts and their human-annotated similarity scores. | Provide a detailed description of the following dataset: BCWS |
BD-4SK-ASR | The **Basic Dataset for Sorani Kurdish Automatic Speech Recognition** (**BD-4SK-ASR**) is a dataset for automatic speech recognition for Sorani Kurdish.
Source: [https://arxiv.org/abs/1911.13087](https://arxiv.org/abs/1911.13087) | Provide a detailed description of the following dataset: BD-4SK-ASR |
BdSLImset | Bangladeshi Sign Language Image Dataset (BdSLImset) is a dataset that contains images of different Bangladeshi sign letters. | Provide a detailed description of the following dataset: BdSLImset |
Bengali Hate Speech | Introduces three datasets, containing expressions of hate, commonly discussed topics, and opinions, for hate speech detection, document classification, and sentiment analysis, respectively. | Provide a detailed description of the following dataset: Bengali Hate Speech |
Berkeley DeepDrive Video | A dataset comprised of real driving videos and GPS/IMU data. The BDDV dataset contains diverse driving scenarios including cities, highways, towns, and rural areas in several major cities in the US. | Provide a detailed description of the following dataset: Berkeley DeepDrive Video |
Bianet | Bianet is a parallel news corpus in Turkish, Kurdish and English.
It contains 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper. | Provide a detailed description of the following dataset: Bianet |
BigEarthNet | BigEarthNet consists of 590,326 Sentinel-2 image patches, each of which is a section of i) 120x120 pixels for 10m bands; ii) 60x60 pixels for 20m bands; and iii) 20x20 pixels for 60m bands. | Provide a detailed description of the following dataset: BigEarthNet |
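Each patch covers the same ground footprint at every band resolution, since pixels per side times ground-sample distance is constant:

```python
# Metres per pixel -> pixels per side, as listed above.
patch_sizes = {10: 120, 20: 60, 60: 20}

# Ground footprint of one patch side, in metres, per band resolution.
footprints = {gsd: px * gsd for gsd, px in patch_sizes.items()}
# Every band covers a 1200 m x 1200 m footprint.
```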
BigHand2.2M Benchmark | A large-scale hand pose dataset, collected using a novel capture method. | Provide a detailed description of the following dataset: BigHand2.2M Benchmark |
Billion Word Benchmark | The **One Billion Word** dataset is a dataset for language modeling. The training/held-out data was produced from the WMT 2011 News Crawl data using a combination of Bash shell and Perl scripts. | Provide a detailed description of the following dataset: Billion Word Benchmark |
BIMCV COVID-19 | The BIMCV-COVID19+ dataset is a large dataset with chest X-ray (CXR; CR, DX) and computed tomography (CT) imaging of COVID-19 patients, along with their radiographic findings, pathologies, polymerase chain reaction (PCR), immunoglobulin G (IgG) and immunoglobulin M (IgM) diagnostic antibody tests, and radiographic reports from the Valencian Region Medical Image Bank (BIMCV). The findings are mapped onto standard Unified Medical Language System (UMLS) terminology and cover a wide spectrum of thoracic entities, contrasting with the much smaller number of entities annotated in previous datasets. Images are stored in high resolution and entities are localized with anatomical labels in the Medical Imaging Data Structure (MIDS) format. In addition, 23 images were annotated by a team of expert radiologists to include semantic segmentation of radiographic findings. Extensive information is also provided, including the patient's demographic information, type of projection, and acquisition parameters for the imaging study, among others. These iterations of the database include 7,377 CR, 9,463 DX and 6,687 CT studies. | Provide a detailed description of the following dataset: BIMCV COVID-19 |
BIOMRC | A large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). | Provide a detailed description of the following dataset: BIOMRC |
Blackbird | The Blackbird unmanned aerial vehicle (UAV) dataset is a large-scale, aggressive indoor flight dataset collected using a custom-built quadrotor platform for use in evaluation of agile perception. The Blackbird dataset contains over 10 hours of flight data from 168 flights over 17 flight trajectories and 5 environments. Each flight includes sensor data from 120Hz stereo and downward-facing photorealistic virtual cameras, 100Hz IMU, motor speed sensors, and 360Hz millimeter-accurate motion capture ground truth. Camera images for each flight were photorealistically rendered using FlightGoggles across a variety of environments to facilitate easy experimentation of high performance perception algorithms. | Provide a detailed description of the following dataset: Blackbird |
BlendedMVS | **BlendedMVS** is a novel large-scale dataset, to provide sufficient training ground truth for learning-based MVS. The dataset was created by applying a 3D reconstruction pipeline to recover high-quality textured meshes from images of well-selected scenes. Then, these mesh models were rendered to color images and depth maps. | Provide a detailed description of the following dataset: BlendedMVS |
Blended Skill Talk | A dataset built to analyze how conversational skills (such as displaying personality, knowledge, and empathy) mesh together in a natural conversation, and to compare the performance of different architectures and training schemes. | Provide a detailed description of the following dataset: Blended Skill Talk |
Blog Authorship Corpus | The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person. | Provide a detailed description of the following dataset: Blog Authorship Corpus |
BLUE | The BLUE benchmark consists of five different biomedicine text-mining tasks with ten corpora. These tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedicine text-mining challenges. | Provide a detailed description of the following dataset: BLUE |
BLVD | BLVD is a large scale 5D semantics dataset collected by the Visual Cognitive Computing and Intelligent Vehicles Lab.
This dataset contains 654 high-resolution video clips comprising 120k frames, extracted from Changshu, Jiangsu Province, China, where the Intelligent Vehicle Proving Center of China (IVPCC) is located. The frame rate is 10 fps for both RGB data and 3D point clouds. The dataset contains fully annotated frames yielding 249,129 3D annotations, 4,902 independent individuals for tracking with an overall length of 214,922 points, 6,004 valid fragments for 5D interactive event recognition, and 4,900 individuals for 5D intention prediction. These tasks span four kinds of scenarios depending on object density (low and high) and light conditions (daytime and nighttime). | Provide a detailed description of the following dataset: BLVD |
Book Cover Dataset | A new and challenging dataset of book cover images that can be used for many pattern recognition tasks. | Provide a detailed description of the following dataset: Book Cover Dataset
BotNet | The **BotNet** dataset is a set of topological botnet detection datasets for graph neural networks.
Source: [https://github.com/harvardnlp/botnet-detection](https://github.com/harvardnlp/botnet-detection)
Image Source: [https://github.com/harvardnlp/botnet-detection](https://github.com/harvardnlp/botnet-detection) | Provide a detailed description of the following dataset: BotNet |
BreizhCrops | **BreizhCrops** is a satellite image time series dataset for crop type classification. It consists of aggregated label data and Sentinel-2 top-of-atmosphere as well as bottom-of-atmosphere time series in the region of Brittany (Breizh in the local language), north-west France.
Source: [https://github.com/TUM-LMF/BreizhCrops](https://github.com/TUM-LMF/BreizhCrops) | Provide a detailed description of the following dataset: BreizhCrops |
Brno-Urban-Dataset | This self-driving dataset, collected in Brno, Czech Republic, contains data from four WUXGA cameras, two 3D LiDARs, an inertial measurement unit, an infrared camera, and notably a differential RTK GNSS receiver with centimetre accuracy.
Source: [https://arxiv.org/abs/1909.06897](https://arxiv.org/abs/1909.06897)
Image Source: [https://github.com/RoboticsBUT/Brno-Urban-Dataset](https://github.com/RoboticsBUT/Brno-Urban-Dataset) | Provide a detailed description of the following dataset: Brno-Urban-Dataset |
BRWAC | BrWaC is a large web corpus of Brazilian Portuguese. It is composed of 2.7 billion tokens and has been annotated with tagging and parsing information. | Provide a detailed description of the following dataset: BRWAC
BSTLD | This dataset contains 13,427 camera images at a resolution of 1280x720 pixels and about 24,000 annotated traffic lights. The annotations include bounding boxes of traffic lights as well as the current state (active light) of each traffic light.
The camera images are provided both as raw 12-bit HDR images taken with a red-clear-clear-blue filter and as reconstructed 8-bit RGB color images. The RGB images are provided for debugging and can also be used for training. However, the RGB conversion process has some drawbacks: some of the converted images may contain artifacts, and the color distribution may seem unusual. | Provide a detailed description of the following dataset: BSTLD
BVI-DVC | Contains 800 sequences at various spatial resolutions from 270p to 2160p and has been evaluated on ten existing network architectures for four different coding tools. | Provide a detailed description of the following dataset: BVI-DVC |
C3 | C3 is a free-form multiple-Choice Chinese machine reading Comprehension dataset. | Provide a detailed description of the following dataset: C3 |
CADC | The Canadian Adverse Driving Conditions (CADC) dataset was collected with the Autonomoose autonomous vehicle platform, based on a modified Lincoln MKZ. | Provide a detailed description of the following dataset: CADC
CADP | A novel dataset for traffic accident analysis. | Provide a detailed description of the following dataset: CADP
CAIL2019-SCM | Chinese AI and Law 2019 Similar Case Matching dataset. CAIL2019-SCM contains 8,964 triplets of cases published by the Supreme People's Court of China. CAIL2019-SCM focuses on detecting similar cases, and the participants are required to check which two cases are more similar in the triplets. | Provide a detailed description of the following dataset: CAIL2019-SCM |
CALFW | CALFW (Cross-Age LFW) is a renovation of Labeled Faces in the Wild (LFW), the de facto standard testbed for unconstrained face verification. | Provide a detailed description of the following dataset: CALFW
Caltech Pedestrian Dataset | The Caltech Pedestrian Dataset consists of approximately 10 hours of 640x480 30Hz video taken from a vehicle driving through regular traffic in an urban environment. About 250,000 frames (in 137 segments of approximately one minute each) with a total of 350,000 bounding boxes and 2,300 unique pedestrians were annotated. The annotations include temporal correspondence between bounding boxes and detailed occlusion labels. | Provide a detailed description of the following dataset: Caltech Pedestrian Dataset
capes | A parallel corpus of theses and dissertation abstracts in Portuguese and English collected from the CAPES website. Approximately 240,000 documents were collected and aligned using the Hunalign tool. | Provide a detailed description of the following dataset: capes
CapGaze | Consists of eye movements and verbal descriptions recorded synchronously over images. | Provide a detailed description of the following dataset: CapGaze |
CARD-660 | An expert-annotated word similarity dataset which provides a highly reliable, yet challenging, benchmark for rare word representation techniques. | Provide a detailed description of the following dataset: CARD-660 |
CARRADA | CARRADA is a dataset of synchronized camera and radar recordings with range-angle-Doppler annotations. | Provide a detailed description of the following dataset: CARRADA |
CASIA-SURF | A large dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of 1,000 subjects with 21,000 videos, and each sample has three modalities (i.e., RGB, Depth and IR). | Provide a detailed description of the following dataset: CASIA-SURF
CATER | Rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. | Provide a detailed description of the following dataset: CATER |