| dataset_name | description | prompt |
|---|---|---|
Bend the Truth | "Bend the Truth" dataset contains news in six different domains: technology, education, business, sports, politics, and entertainment. The real news included in the dataset were collected from a variety of mainstream news websites predominantly in Pakistan, India, UK, and the USA. These news channels are BBC Urdu News, CNN Urdu, Express-News, Jung News, Noway Waqat, and many other reliable news websites. The fake news included in this dataset consist of fake versions of the real news in the dataset, written by professional journalists. | Provide a detailed description of the following dataset: Bend the Truth |
Urdu Sentiment Corpus | Consists of Urdu tweets for sentiment analysis and polarity detection. The tweets are political in nature, reflecting a competitive environment between two separate political parties and the government of Pakistan. Overall, the dataset comprises over 17,185 tokens, with 52% of records labeled positive and 48% labeled negative.
Source: [Urdu Sentiment Corpus (v1.0): Linguistic Exploration and Visualization of Labeled Dataset for Urdu Sentiment Analysis](https://ieeexplore.ieee.org/abstract/document/9080043) | Provide a detailed description of the following dataset: Urdu Sentiment Corpus |
UR-FUNNY | For understanding multimodal language used in expressing humor. | Provide a detailed description of the following dataset: UR-FUNNY |
US-4 | The **US-4** is a dataset of Ultrasound (US) images. It is a video-based image dataset that contains over 23,000 high-resolution images from four US video sub-datasets, where two sub-datasets are newly collected by experienced doctors for this dataset.
Source: [https://github.com/983632847/USCL](https://github.com/983632847/USCL)
Image Source: [https://github.com/983632847/USCL](https://github.com/983632847/USCL) | Provide a detailed description of the following dataset: US-4 |
UTA-RLDD | Consists of around 30 hours of video, with contents ranging from subtle signs of drowsiness to more obvious ones. | Provide a detailed description of the following dataset: UTA-RLDD |
UT Zappos50K | **UT Zappos50K** is a large shoe dataset consisting of 50,025 catalog images collected from Zappos.com. The images are divided into 4 major categories — shoes, sandals, slippers, and boots — followed by functional types and individual brands. The shoes are centered on a white background and pictured in the same orientation for convenient analysis. | Provide a detailed description of the following dataset: UT Zappos50K |
UW IOM | Comprises twenty individuals picking up objects of varying weights from, and placing them onto, cabinet and table locations at various heights. | Provide a detailed description of the following dataset: UW IOM |
V2C | Contains ~9K videos of human agents performing various actions, annotated with 3 types of commonsense descriptions. | Provide a detailed description of the following dataset: V2C |
VDQG | The **Visual Discriminative Question Generation (VDQG)** dataset contains 11202 ambiguous image pairs collected from Visual Genome. Each image pair is annotated with 4.6 discriminative questions and 5.9 non-discriminative questions on average. | Provide a detailed description of the following dataset: VDQG |
VehicleX | **VehicleX** is a large-scale synthetic dataset. Created in Unity, it contains 1,362 vehicles of various 3D models with fully editable attributes.
Source: [https://github.com/yorkeyao/VehicleX](https://github.com/yorkeyao/VehicleX)
Image Source: [https://arxiv.org/pdf/1912.08855.pdf](https://arxiv.org/pdf/1912.08855.pdf) | Provide a detailed description of the following dataset: VehicleX |
VeRi Dataset | To facilitate research on vehicle re-identification (Re-Id), a large-scale benchmark dataset named "VeRi" was built for vehicle Re-Id in real-world urban surveillance scenarios. The featured properties of VeRi include:
- It contains over 50,000 images of 776 vehicles captured by 20 cameras covering a 1.0 km² area over 24 hours, which makes the dataset scalable enough for vehicle Re-Id and other related research.
- The images are captured in a real-world unconstrained surveillance scene and labeled with varied attributes, e.g. BBoxes, types, colors, and brands, so that complex models can be learned and evaluated for vehicle Re-Id.
- Each vehicle is captured by 2 to 18 cameras under different viewpoints, illuminations, resolutions, and occlusions, which provides a high recurrence rate for vehicle Re-Id in practical surveillance environments.
- It is also labeled with sufficient license plate and spatiotemporal information, such as the BBoxes of plates, plate strings, the timestamps of vehicles, and the distances between neighbouring cameras. | Provide a detailed description of the following dataset: VeRi Dataset |
VG-Depth | Enables visual relation detection and serves as an extension to Visual Genome (VG). | Provide a detailed description of the following dataset: VG-Depth |
VGG-Sound | Consists of more than 210k videos for 310 audio classes. | Provide a detailed description of the following dataset: VGG-Sound |
VIA | The **VIA** dataset is a dataset for aiding the visually impaired. It consists of 342 images divided into two classes: 175 "clear path" images and 167 "non-clear path" images. They were taken using a smartphone camera and resized to 750 × 1000 pixels. The smartphone was placed at the user's chest height and inclined approximately 30° to 60° from the ground, so it could capture a few meters of the path ahead, beyond the reach of a regular white cane.
Source: [https://arxiv.org/abs/2005.04473](https://arxiv.org/abs/2005.04473) | Provide a detailed description of the following dataset: VIA |
VideoMem | Composed of 10,000 videos annotated with memorability scores. In contrast to previous work on image memorability -- where memorability was measured a few minutes after memorization -- memory performance is measured twice: a few minutes after memorization and again 24-72 hours later. | Provide a detailed description of the following dataset: VideoMem |
Video Storytelling | A dataset of textual stories describing events in video. | Provide a detailed description of the following dataset: Video Storytelling |
VIDIT | **VIDIT** is a reference evaluation benchmark intended to push forward the development of illumination manipulation methods. VIDIT includes 390 different Unreal Engine scenes, each captured with 40 illumination settings, resulting in 15,600 images. The illumination settings are all the combinations of 5 color temperatures (2500K, 3500K, 4500K, 5500K and 6500K) and 8 light directions (N, NE, E, SE, S, SW, W, NW). Original image resolution is 1024x1024.
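The 40 settings per scene are the Cartesian product of the two factors; a minimal sketch of the enumeration (variable names are illustrative, not from the dataset's tooling):
```python
from itertools import product

# 5 color temperatures (Kelvin) x 8 compass light directions = 40 settings
TEMPERATURES = [2500, 3500, 4500, 5500, 6500]
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

settings = list(product(TEMPERATURES, DIRECTIONS))
assert len(settings) == 40            # settings per scene
assert 390 * len(settings) == 15600   # total images in VIDIT
```
| Provide a detailed description of the following dataset: VIDIT |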
VidSet | A large video dataset with dynamic content. | Provide a detailed description of the following dataset: VidSet |
VidSTG | The **VidSTG** dataset is a spatio-temporal video grounding dataset constructed from the video relation dataset VidOR. VidOR contains 7,000 training, 835 validation, and 2,165 test videos. The goal of the Spatio-Temporal Video Grounding (STVG) task is to localize the spatio-temporal section of an untrimmed video that matches a given sentence depicting an object.
Source: [https://github.com/Guaranteer/VidSTG-Dataset](https://github.com/Guaranteer/VidSTG-Dataset)
Image Source: [https://github.com/Guaranteer/VidSTG-Dataset](https://github.com/Guaranteer/VidSTG-Dataset) | Provide a detailed description of the following dataset: VidSTG |
VIENA2 | Covers 5 generic driving scenarios, with a total of 25 distinct action classes. It contains more than 15K full-HD, 5s-long videos acquired in various driving conditions, weather, times of day, and environments, complemented with a common and realistic set of sensor measurements. This amounts to more than 2.25M frames, each annotated with an action label, corresponding to 600 samples per action class. | Provide a detailed description of the following dataset: VIENA2 |
ViMMRC | A challenging machine comprehension corpus with multiple-choice questions, intended for research on the machine comprehension of Vietnamese text. This corpus includes 2,783 multiple-choice questions and answers based on a set of 417 Vietnamese texts used for teaching reading comprehension for 1st to 5th graders. Answers may be extracted from the contents of single or multiple sentences in the corresponding reading text. | Provide a detailed description of the following dataset: ViMMRC |
Violin | Video-and-Language Inference is the task of joint multimodal understanding of video and text. Given a video clip with aligned subtitles as premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. The **Violin** dataset is a dataset for this task which consists of 95,322 video-hypothesis pairs from 15,887 video clips, spanning over 582 hours of video. These video clips contain rich content with diverse temporal dynamics, event shifts, and people interactions, collected from two sources: (i) popular TV shows, and (ii) movie clips from YouTube channels.
Source: [https://github.com/jimmy646/violin](https://github.com/jimmy646/violin)
Image Source: [https://github.com/jimmy646/violin](https://github.com/jimmy646/violin) | Provide a detailed description of the following dataset: Violin |
VIPL-HR | The VIPL-HR database is a database for remote heart rate (HR) estimation from face videos under less-constrained situations. It contains 2,378 visible light (VIS) videos and 752 near-infrared (NIR) videos of 107 subjects. Nine different conditions, including various head movements and illumination conditions, are taken into consideration. All the videos are recorded using a Logitech C310, a RealSense F200 and the front camera of a HUAWEI P9 smartphone, and the ground-truth HR is recorded using a CONTEC CMS60C BVP sensor (an FDA-approved device). | Provide a detailed description of the following dataset: VIPL-HR |
Virtual Gallery | The Virtual Gallery dataset is a synthetic dataset that targets multiple challenges such as varying lighting conditions and different occlusion levels for various tasks such as depth estimation, instance segmentation and visual localization.
It consists of a scene containing 3-4 rooms, in which a total of 42 free-for-use famous paintings are placed on the walls.
The virtual model and the captured images were generated with Unity software, allowing us to extract ground-truth information such as depth, semantic and instance segmentation, 2D-2D and 2D-3D correspondences. | Provide a detailed description of the following dataset: Virtual Gallery |
VisDrone | **VisDrone** is a large-scale benchmark with carefully annotated ground-truth for various important computer vision tasks, to make vision meet drones. The VisDrone2019 dataset was collected by the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China. The benchmark dataset consists of 288 video clips formed by 261,908 frames and 10,209 static images, captured by various drone-mounted cameras, covering a wide range of aspects including location (taken from 14 different cities separated by thousands of kilometers in China), environment (urban and country), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes). Note that the dataset was collected using various drone platforms (i.e., drones with different models), in different scenarios, and under various weather and lighting conditions. The frames are manually annotated with more than 2.6 million bounding boxes of targets of frequent interest, such as pedestrians, cars, bicycles, and tricycles. Some important attributes, including scene visibility, object class and occlusion, are also provided for better data utilization. | Provide a detailed description of the following dataset: VisDrone |
Vistas-NP | The **Vistas-NP** dataset is an out-of-distribution detection dataset based on the Mapillary Vistas dataset. The original Vistas dataset consists of 18,000 training images and 2,000 validation images with 66 classes. In Vistas-NP the human classes are used as outliers due to their dispersion across scenes and visual diversity from other objects. The dataset is created by excluding all images with class person and the three rider classes to the test subset. Consequently, the dataset has 8,003 train images and 830 validation images. The test set contains 11,167.
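A quick arithmetic check of the split sizes (a sketch; it simply verifies that the images removed from the original train and validation sets account for the reported test set size):
```python
# Original Mapillary Vistas split sizes
orig_train, orig_val = 18000, 2000
# Vistas-NP sizes after moving images with person/rider classes to the test set
np_train, np_val = 8003, 830

moved = (orig_train - np_train) + (orig_val - np_val)
assert moved == 11167  # matches the reported Vistas-NP test set size
```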
Source: [https://github.com/matejgrcic/Vistas-NP](https://github.com/matejgrcic/Vistas-NP) | Provide a detailed description of the following dataset: Vistas-NP |
Visual Question Answering | **Visual Question Answering (VQA)** is a dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer. The first version of the dataset was released in October 2015. [VQA v2.0](/dataset/visual-question-answering-v2-0) was released in April 2017. | Provide a detailed description of the following dataset: Visual Question Answering |
VQG | **VQG** is a collection of datasets for visual question generation. VQG questions were collected by crowdsourcing the task on Amazon Mechanical Turk (AMT). The authors provide details on the prompt and the specific instructions for all the crowdsourcing tasks in the paper's supplementary material. The prompt was successful at capturing nonliteral questions. Images were taken from the MSCOCO dataset. | Provide a detailed description of the following dataset: VQG |
Visual Relationship Detection Dataset | A dataset containing 5,000 images with 37,993 relationships. The dataset contains 100 object categories and 70 predicate categories connecting those objects together. | Provide a detailed description of the following dataset: Visual Relationship Detection Dataset |
ViText2SQL | **ViText2SQL** is a dataset for the Vietnamese Text-to-SQL semantic parsing task, consisting of about 10K question and SQL query pairs.
Source: [https://github.com/VinAIResearch/ViText2SQL](https://github.com/VinAIResearch/ViText2SQL) | Provide a detailed description of the following dataset: ViText2SQL |
ViTT | The ViTT dataset consists of human produced segment-level annotations for 8,169 videos. Of these, 5,840 videos have been annotated once, and the rest of the videos have been annotated twice or more. A total of 12,461 sets of annotations are released. The videos in the dataset are from the [Youtube-8M dataset](https://paperswithcode.com/dataset/youtube-8m).
An annotation has the following format:
```
{
  "id": "FmTp",
  "annotations": [
    {
      "timestamp": 260,
      "tag": "Opening"
    },
    {
      "timestamp": 16000,
      "tag": "Displaying technique"
    },
    {
      "timestamp": 23990,
      "tag": "Showing foot positioning"
    },
    {
      "timestamp": 55530,
      "tag": "Demonstrating crossover"
    },
    {
      "timestamp": 114100,
      "tag": "Closing"
    }
  ]
}
```
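A minimal sketch of reading one such record in Python (the file name is hypothetical, and treating timestamps as milliseconds is an assumption, not something stated above):
```python
import json

# Load a single ViTT annotation record (path is hypothetical).
with open("vitt_annotation.json") as f:
    record = json.load(f)

# Print each segment boundary; timestamps are assumed to be in milliseconds.
for seg in record["annotations"]:
    seconds = seg["timestamp"] / 1000.0  # assumption: ms -> s
    print(f"{seconds:8.2f}s  {seg['tag']}")
```
| Provide a detailed description of the following dataset: ViTT |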
VizWiz-Captions | Consists of over 39,000 images taken by people who are blind, each paired with five captions. | Provide a detailed description of the following dataset: VizWiz-Captions |
VizWiz-Priv | VizWiz-Priv includes 8,862 regions showing private content across 5,537 images taken by blind people. Of these, 1,403 are paired with questions and 62% of those directly ask about the private content. | Provide a detailed description of the following dataset: VizWiz-Priv |
VizWiz-QualityIssues | A large-scale dataset that links the assessment of image quality issues to two practical vision tasks: image captioning and visual question answering. | Provide a detailed description of the following dataset: VizWiz-QualityIssues |
VLEngagement | A novel dataset that consists of content-based and video-specific features extracted from publicly available scientific video lectures and several metrics related to user engagement. | Provide a detailed description of the following dataset: VLEngagement |
VMSMO | The Video-based Multimodal Summarization with Multimodal Output (**VMSMO**) corpus consists of 184,920 document-summary pairs, with 180,000 training pairs, 2,460 validation pairs, and 2,460 test pairs. The task for this dataset is generating an appropriate textual summary of an article and choosing a proper cover frame from the video accompanying the article.
Source: [https://github.com/yingtaomj/VMSMO](https://github.com/yingtaomj/VMSMO) | Provide a detailed description of the following dataset: VMSMO |
VocalFolds | The Vocal Folds dataset is a dataset for automatic segmentation of laryngeal endoscopic images.
The dataset consists of 8 sequences from 2 patients, containing 536 hand-segmented in vivo colour images of the larynx captured during two different resection interventions, with a resolution of 512x512 pixels.
Source: [https://github.com/imesluh/vocalfolds](https://github.com/imesluh/vocalfolds)
Image Source: [https://github.com/imesluh/vocalfolds](https://github.com/imesluh/vocalfolds) | Provide a detailed description of the following dataset: VocalFolds |
VoxClamantis | A large-scale corpus for phonetic typology, with aligned segments and estimated phoneme-level labels in 690 readings spanning 635 languages, along with acoustic-phonetic measures of vowels and sibilants. | Provide a detailed description of the following dataset: VoxClamantis |
VoxPopuli | VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours. | Provide a detailed description of the following dataset: VoxPopuli |
VQA-OV | Collects 60 reference sequences and 540 impaired sequences. | Provide a detailed description of the following dataset: VQA-OV |
VT5000 | Includes 5000 spatially aligned RGBT image pairs with ground truth annotations. VT5000 has 11 challenges collected in different scenes and environments for exploring the robustness of algorithms. | Provide a detailed description of the following dataset: VT5000 |
Ward2ICU | **Ward2ICU** is a vital signs dataset of inpatients from the general ward. It contains vital signs with class labels indicating patient transitions from the ward to intensive care units.
Source: [https://github.com/3778/Ward2ICU](https://github.com/3778/Ward2ICU) | Provide a detailed description of the following dataset: Ward2ICU |
Waymo Open Dataset | The Waymo Open Dataset comprises high-resolution sensor data collected by autonomous vehicles operated by the Waymo Driver in a wide variety of conditions.
The Waymo Open Dataset currently contains 1,950 segments. The authors plan to grow this dataset in the future. Currently the dataset includes:
* 1,950 segments of 20s each, collected at 10Hz (390,000 frames; see the quick check after this list) in diverse geographies and conditions
* Sensor data
* 1 mid-range lidar
* 4 short-range lidars
* 5 cameras (front and sides)
* Synchronized lidar and camera data
* Lidar to camera projections
* Sensor calibrations and vehicle poses
* Labeled data
* Labels for 4 object classes - Vehicles, Pedestrians, Cyclists, Signs
* High-quality labels for lidar data in 1,200 segments
* 12.6M 3D bounding box labels with tracking IDs on lidar data
* High-quality labels for camera data in 1,000 segments
* 11.8M 2D bounding box labels with tracking IDs on camera data
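The reported frame count follows directly from the segment count, duration, and capture rate; a quick arithmetic check:
```python
# 1,950 segments x 20 s each, captured at 10 Hz
segments = 1950
seconds_per_segment = 20
frames_per_second = 10

assert segments * seconds_per_segment * frames_per_second == 390_000
```
| Provide a detailed description of the following dataset: Waymo Open Dataset |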
WeatherBench | A benchmark dataset for data-driven medium-range weather forecasting, a topic of high scientific interest for atmospheric and computer scientists alike. | Provide a detailed description of the following dataset: WeatherBench |
WebCaricature Dataset | Aims to facilitate research in caricature recognition. All the caricatures and face images were collected from the Web. Compared with two existing datasets, this dataset is much more challenging, with a much greater number of available images, artistic styles and larger intra-personal variations. | Provide a detailed description of the following dataset: WebCaricature Dataset |
WebChild | One of the largest commonsense knowledge bases available, describing over 2 million disambiguated concepts and activities, connected by over 18 million assertions. | Provide a detailed description of the following dataset: WebChild |
WOS | Web of Science (WOS) is a document classification dataset that contains 46,985 documents with 134 categories, which include 7 parent categories. | Provide a detailed description of the following dataset: WOS |
WGISD | Embrapa Wine Grape Instance Segmentation Dataset (WGISD) contains grape clusters properly annotated in 300 images and a novel annotation methodology for segmentation of complex objects in natural images. | Provide a detailed description of the following dataset: WGISD |
WHOI-Plankton | WHOI-Plankton is a collection of annotated plankton images. It contains > 3.5 million images of microscopic marine plankton, organized according to category labels provided by researchers at the Woods Hole Oceanographic Institution (WHOI). The images are currently placed into one of 103 categories. | Provide a detailed description of the following dataset: WHOI-Plankton |
WHU | Created for MVS tasks and is a large-scale multi-view aerial dataset generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters. | Provide a detailed description of the following dataset: WHU |
WiC | WiC is a benchmark for the evaluation of context-sensitive word embeddings. WiC is framed as a binary classification task. Each instance in WiC has a target word w, either a verb or a noun, for which two contexts are provided. Each of these contexts triggers a specific meaning of w. The task is to identify whether the occurrences of w in the two contexts correspond to the same meaning or not. The dataset can also be viewed as an application of Word Sense Disambiguation in practice.
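An illustrative instance in Python (the sentences below are invented for illustration and are not drawn from the dataset):
```python
# A hypothetical WiC-style instance: one target word, two contexts,
# and a binary label indicating whether the two senses match.
instance = {
    "target": "bank",
    "context_1": "She sat on the bank of the river.",
    "context_2": "He deposited the check at the bank.",
    "same_sense": False,  # riverbank vs. financial institution
}
```
| Provide a detailed description of the following dataset: WiC |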
WIDER | **WIDER** is a dataset for complex event recognition from static images. As of v0.1, it contains 61 event categories and around 50,574 images annotated with event class labels. | Provide a detailed description of the following dataset: WIDER |
WIDER Attribute Dataset | The **WIDER Attribute** dataset is a human attribute recognition dataset with human attribute and image event annotations. Images are selected from the WIDER dataset. There are a total of 13,789 images. A bounding box is annotated for each person in these images, with no more than 20 people (with top resolutions) in a crowd image, resulting in 57,524 boxes in total and 4+ boxes per image on average. For each bounding box, 14 distinct human attributes are labelled. There are 805,336 labels in total. | Provide a detailed description of the following dataset: WIDER Attribute Dataset |
WikiDataSets | Topical subsets of WikiData, assembled using the WikiDataSets python library. Extracted from Wikidata in April 2020. | Provide a detailed description of the following dataset: WikiDataSets |
Wiki-40B | A new multilingual language model benchmark that is composed of 40+ languages spanning several scripts and linguistic families, containing around 40 billion characters, and aimed to accelerate research on multilingual modeling. | Provide a detailed description of the following dataset: Wiki-40B |
WikiAnn | WikiAnn is a dataset for cross-lingual name tagging and linking based on Wikipedia articles in 295 languages. | Provide a detailed description of the following dataset: WikiAnn |
WikiAsp | A large-scale dataset for multi-domain aspect-based summarization that attempts to spur research in the direction of open-domain aspect-based summarization. | Provide a detailed description of the following dataset: WikiAsp |
WikiAtomicEdits | WikiAtomicEdits is a corpus of 43 million atomic edits across 8 languages. These edits are mined from Wikipedia edit history and consist of instances in which a human editor has inserted a single contiguous phrase into, or deleted a single contiguous phrase from, an existing sentence. | Provide a detailed description of the following dataset: WikiAtomicEdits |
WikiCatSum | **WikiCatSum** is a domain specific Multi-Document Summarisation (MDS) dataset. It assumes the summarisation task of generating Wikipedia lead sections for Wikipedia entities of a certain domain (e.g. Companies) from the set of documents cited in Wikipedia articles or returned by Google (using article titles as queries). The dataset includes three domains: Companies, Films, and Animals.
Source: [https://datashare.ed.ac.uk/handle/10283/3368](https://datashare.ed.ac.uk/handle/10283/3368) | Provide a detailed description of the following dataset: WikiCatSum |
WikiConv | A corpus that encompasses the complete history of conversations between contributors to Wikipedia, one of the largest online collaborative communities. By recording the intermediate states of conversations---including not only comments and replies, but also their modifications, deletions and restorations---this data offers an unprecedented view of online conversation. | Provide a detailed description of the following dataset: WikiConv |
WikiCoref | WikiCoref is an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. | Provide a detailed description of the following dataset: WikiCoref |
WikiDocEdits | A dataset of single-sentence edits crawled from Wikipedia. | Provide a detailed description of the following dataset: WikiDocEdits |
Wiki-en | **Wiki-en** is an annotated English dataset for domain detection extracted from Wikipedia. It includes texts from 7 different domains: “Business and Commerce” (BUS), “Government and Politics” (GOV), “Physical and Mental Health” (HEA), “Law and Order” (LAW), “Lifestyle” (LIF), “Military” (MIL), and “General Purpose” (GEN).
Source: [https://arxiv.org/pdf/1907.11499.pdf](https://arxiv.org/pdf/1907.11499.pdf) | Provide a detailed description of the following dataset: Wiki-en |
Wiki-Flickr Event Dataset | The Wiki-Flickr Event dataset is a well-labelled but weakly aligned dataset collected for cross-modal event retrieval. The dataset consists of 28,825 images from Flickr and 11,960 text articles from hundreds of social media outlets, belonging to 82 categories of events.
Source: [https://github.com/zhengyang5/Wiki-Flickr-Event-Dataset](https://github.com/zhengyang5/Wiki-Flickr-Event-Dataset) | Provide a detailed description of the following dataset: Wiki-Flickr Event Dataset |
WikiLingua | WikiLingua includes ~770k article and summary pairs in 18 languages from WikiHow. Gold-standard article-summary alignments across languages are extracted by aligning the images that are used to describe each how-to step in an article. | Provide a detailed description of the following dataset: WikiLingua |
WikiLinks | A method for automatically gathering massive amounts of naturally occurring cross-document reference data was used to create the WikiLinks dataset, comprising 40 million mentions over 3 million entities. | Provide a detailed description of the following dataset: WikiLinks |
WikiMatrix | **WikiMatrix** is a dataset of parallel sentences in the textual content of Wikipedia for all possible language pairs. The mined data consists of:
- 85 different languages, 1620 language pairs
- 134M parallel sentences, out of which 34M are aligned with English | Provide a detailed description of the following dataset: WikiMatrix |
Wikipedia Title | **Wikipedia Title** is a dataset for learning character-level compositionality from the character visual characteristics. It consists of a collection of Wikipedia titles in Chinese, Japanese or Korean labelled with the category to which the article belongs.
Source: [https://arxiv.org/abs/1704.04859](https://arxiv.org/abs/1704.04859) | Provide a detailed description of the following dataset: Wikipedia Title |
WikiQAar | A publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. | Provide a detailed description of the following dataset: WikiQAar |
WikiReading Recycled | A newly developed public dataset and the task of multiple property extraction. It uses the same data as WikiReading but does not inherit its predecessor's identified disadvantages. | Provide a detailed description of the following dataset: WikiReading Recycled |
WikiSection | A publicly available dataset with 242k labeled sections in English and German from two distinct domains: diseases and cities. | Provide a detailed description of the following dataset: WikiSection |
WikiSem500 | The **WikiSem500** dataset contains around 500 per-language cluster groups for English, Spanish, German, Chinese, and Japanese (a total of 13,314 test cases).
Source: [https://arxiv.org/abs/1611.01547](https://arxiv.org/abs/1611.01547) | Provide a detailed description of the following dataset: WikiSem500 |
WikiSplit | Contains one million naturally occurring sentence rewrites, providing sixty times more distinct split examples and a ninety times larger vocabulary than the WebSplit corpus introduced by Narayan et al. (2017) as a benchmark for this task. | Provide a detailed description of the following dataset: WikiSplit |
WikiSRS | **WikiSRS** is a novel dataset of similarity and relatedness judgments of paired Wikipedia entities (people, places, and organizations), as assigned by Amazon Mechanical Turk workers.
Source: [https://github.com/OSU-slatelab/WikiSRS](https://github.com/OSU-slatelab/WikiSRS) | Provide a detailed description of the following dataset: WikiSRS |
WikiText-TL-39 | WikiText-TL-39 is a benchmark language modeling dataset in Filipino that has 39 million tokens in the training set. | Provide a detailed description of the following dataset: WikiText-TL-39 |
Wilds | Builds on recent data collection efforts by domain experts across a range of real-world applications and provides a unified collection of datasets with evaluation metrics and train/test splits that are representative of real-world distribution shifts.
The v2.0 update adds unlabeled data to 8 datasets. The labeled data and evaluation metrics are exactly the same, so all previous results are directly comparable. | Provide a detailed description of the following dataset: Wilds |
WildDash | WildDash is a benchmark with an evaluation method that uses meta-information to calculate the robustness of a given algorithm with respect to individual hazards. | Provide a detailed description of the following dataset: WildDash |
WildDeepfake | **WildDeepfake** is a dataset for real-world deepfakes detection which consists of 7,314 face sequences extracted from 707 deepfake videos that are collected completely from the internet. WildDeepfake is a small dataset that can be used, in addition to existing datasets, to develop more effective detectors against real-world deepfakes.
Source: [https://github.com/deepfakeinthewild/deepfake-in-the-wild](https://github.com/deepfakeinthewild/deepfake-in-the-wild)
Image Source: [https://github.com/deepfakeinthewild/deepfake-in-the-wild](https://github.com/deepfakeinthewild/deepfake-in-the-wild) | Provide a detailed description of the following dataset: WildDeepfake |
WiLI-2018 | WiLI-2018 is a benchmark dataset for monolingual written natural language identification. WiLI-2018 is a publicly available, free-of-charge dataset of short text extracts from Wikipedia. It contains 1,000 paragraphs for each of 235 languages, totaling 235,000 paragraphs. WiLI is a classification dataset: given an unknown paragraph written in one dominant language, it has to be decided which language it is. | Provide a detailed description of the following dataset: WiLI-2018 |
Winogender Schemas | Winogender Schemas is a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. | Provide a detailed description of the following dataset: Winogender Schemas |
WinoGrande | WinoGrande is a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. | Provide a detailed description of the following dataset: WinoGrande |
WISDOM | Synthetic training dataset of 50,000 depth images and 320,000 object masks using simulated heaps of 3D CAD models. | Provide a detailed description of the following dataset: WISDOM |
Wisesight Sentiment Corpus | Social media messages with sentiment labels (positive, neutral, negative, question). | Provide a detailed description of the following dataset: Wisesight Sentiment Corpus |
WLASL | **WLASL** is a large video dataset for **Word-Level American Sign Language** (ASL) recognition, which features 2,000 common ASL words. | Provide a detailed description of the following dataset: WLASL |
WLD | **WildLife Documentary (WLD)** is an animal object detection dataset. It contains 15 documentary films downloaded from YouTube, varying in length from 9 to 50 minutes, with resolutions ranging from 360p to 1080p. A unique property of this dataset is that all videos are accompanied by subtitles automatically generated from speech by YouTube; the subtitles were revised manually to correct obvious spelling mistakes. All the animals in the videos are annotated, resulting in more than 4,098 object tracklets of 60 different visual concepts, e.g., 'tiger', 'koala', 'langur', and 'ostrich'. | Provide a detailed description of the following dataset: WLD |
WNLaMPro | The **WordNet Language Model Probing** (**WNLaMPro**) dataset consists of relations between keywords and words. It contains 4 different kinds of relations: Antonym, Hypernym, Cohyponym and Corruption.
Source: [https://arxiv.org/pdf/1904.06707.pdf](https://arxiv.org/pdf/1904.06707.pdf) | Provide a detailed description of the following dataset: WNLaMPro |
WoodScape | Fisheye cameras are commonly employed for obtaining a large field of view in surveillance, augmented reality and, in particular, automotive applications. In spite of their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images.
**WoodScape** is an extensive fisheye automotive dataset named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images, and annotations for the other tasks are provided for over 100,000 images. | Provide a detailed description of the following dataset: WoodScape |
Workplace Sexual Harassment | The goal of this dataset is to understand how people experience sexism and sexual harassment in the workplace by discovering themes in 2,362 experiences posted on the Everyday Sexism Project's website.
Source: [https://arxiv.org/abs/1907.00510](https://arxiv.org/abs/1907.00510) | Provide a detailed description of the following dataset: Workplace Sexual Harassment |
WritingPrompts | WritingPrompts is a large dataset of 300K human-written stories paired with writing prompts from an online forum. | Provide a detailed description of the following dataset: WritingPrompts |
WSVD | The Web Stereo Video Dataset consists of 553 stereoscopic videos from YouTube. This dataset has a wide variety of scene types, and features many nonrigid objects. | Provide a detailed description of the following dataset: WSVD |
xBD | The xBD dataset contains over 45,000 km² of polygon-labeled pre- and post-disaster imagery. The dataset provides post-disaster imagery with building polygons transposed from the pre-disaster imagery, along with damage classification labels. | Provide a detailed description of the following dataset: xBD |
XED | XED is a multilingual fine-grained emotion dataset. The dataset consists of human-annotated Finnish (25k) and English sentences (30k), as well as projected annotations for 30 additional languages, providing new resources for many low-resource languages. | Provide a detailed description of the following dataset: XED |
XKCDColors | A balanced dataset of color names and RGB values for training classifiers.
Source: [https://github.com/Smoltbob/XKCDColors-Dataset](https://github.com/Smoltbob/XKCDColors-Dataset) | Provide a detailed description of the following dataset: XKCDColors |
XL-R2R | The **XL-R2R** dataset is built upon the R2R dataset and extends it with Chinese instructions. XL-R2R preserves the same splits as in R2R and thus consists of train, val-seen, and val-unseen splits with both English and Chinese instructions, and test split with English instructions only.
Source: [https://github.com/zzxslp/Crosslingual-VLN](https://github.com/zzxslp/Crosslingual-VLN) | Provide a detailed description of the following dataset: XL-R2R |
XL-WiC | A large multilingual benchmark, XL-WiC, featuring gold standards in 12 new languages from varied language families and with different degrees of resource availability, opening room for evaluation scenarios such as zero-shot cross-lingual transfer. | Provide a detailed description of the following dataset: XL-WiC |
X-MARS | The **X-MARS** dataset proposes new splits for the MARS dataset, to allow for cross-evaluation with the Market-1501 dataset without training and test overlap between the two datasets.
Source: [https://github.com/andreas-eberle/x-mars](https://github.com/andreas-eberle/x-mars) | Provide a detailed description of the following dataset: X-MARS |
XOR-TYDI QA | A large-scale dataset built on questions from TyDi QA lacking same-language answers. | Provide a detailed description of the following dataset: XOR-TYDI QA |
LAReQA | A challenging new benchmark for language-agnostic answer retrieval from a multilingual candidate pool. | Provide a detailed description of the following dataset: LAReQA |
X-ray and Visible Spectra Circular Motion Images Dataset | Collections of images of the same rotating plastic object made in the X-ray and visible spectra. Both parts of the dataset contain 400 images. The images are made every 0.5 degrees of the object's axial rotation. The collection is designed for evaluating the performance of circular motion estimation algorithms, as well as for studying the influence of the X-ray modality on image analysis algorithms such as keypoint detection and description. | Provide a detailed description of the following dataset: X-ray and Visible Spectra Circular Motion Images Dataset |
xR-EgoPose | xR-EgoPose is an egocentric synthetic dataset for egocentric 3D human pose estimation. It consists of ~380 thousand photo-realistic egocentric camera images in a variety of indoor and outdoor spaces. | Provide a detailed description of the following dataset: xR-EgoPose |