Columns: dataset_name (string, 2–128 chars); description (string, 1–9.7k chars); prompt (string, 59–185 chars)
ImageNet-X
**ImageNet-X** is a set of human annotations pinpointing failure types for the popular ImageNet dataset. ImageNet-X labels distinguishing object factors such as pose, size, color, lighting, occlusions, co-occurrences, etc. for each image in the validation set and a random subset of 12,000 training samples. It is designed to study the types of mistakes as a function of a model's architecture, learning paradigm, and training procedures.
Provide a detailed description of the following dataset: ImageNet-X
JECC
**Jericho Environment Commonsense Comprehension** (JECC) is a dataset for commonsense reasoning. It consists of 29 games in multiple domains from the Jericho Environment (Hausknecht et al., 2019).
Provide a detailed description of the following dataset: JECC
COSIAN
**COSIAN** is an annotation collection of Japanese popular (J-POP) songs, focusing on the singing styles and expression of famous solo singers. It covers 168 songs by 21 female and 21 male singers; each singer has four songs with moods that differ from each other.
Provide a detailed description of the following dataset: COSIAN
NLI4Wills Corpus
**NLI4Wills Corpus** can be used to train transformer and sentence-transformer models for validity evaluation of legal will statements. Our dataset consists of ID numbers, three types of inputs (legal will statements, laws, and conditions), and classifications (support, refute, or unrelated).
Provide a detailed description of the following dataset: NLI4Wills Corpus
SpaRTUN
**SpaRTUN** is a dataset synthesized for transfer learning on spatial question answering (SQA) and spatial role labeling (SpRL). Recent research shows that synthetic data as a source of supervision helps pretrained language models (PLMs) transfer to new target tasks/domains; however, this idea is less explored for spatial language. Compared to previous SQA datasets, SpaRTUN includes a larger variety of spatial relation types and spatial expressions, and its data generation process is easily extendable with new spatial expression lexicons. It is accompanied by a second, real-world SQA dataset with human-generated questions built on an existing corpus with SpRL annotations, which can be used to evaluate spatial language processing models in realistic situations. Pretraining with the automatically generated data significantly improves the SOTA results on several SQA and SpRL benchmarks, particularly when the training data in the target domain is small.
Provide a detailed description of the following dataset: SpaRTUN
USGS Landsat 8 Collection 1 Tier 1
Landsat 8 Collection 1 Tier 1 and Real-Time data DN values, representing scaled, calibrated at-sensor radiance. Landsat scenes with the highest available data quality are placed into Tier 1 and are considered suitable for time-series analysis. Tier 1 includes Level-1 Precision Terrain (L1TP) processed data that have well-characterized radiometry and are inter-calibrated across the different Landsat sensors. The georegistration of Tier 1 scenes will be consistent and within prescribed tolerances [<=12 m root mean square error (RMSE)]. All Tier 1 Landsat data can be considered consistent and inter-calibrated (regardless of sensor) across the full collection. See more information [in the USGS docs](https://www.usgs.gov/core-science-systems/nli/landsat/landsat-collections). The T1_RT collection contains both Tier 1 and Real-Time (RT) assets. Newly-acquired Landsat 7 ETM+ and Landsat 8 OLI/TIRS data are processed upon downlink but use predicted ephemeris, initial bumper mode parameters, or initial TIRS line of sight model parameters. The data is placed in the Real-Time tier and made available for immediate download. Once the data have been reprocessed with definitive ephemeris, updated bumper mode parameters and refined TIRS parameters, the products are transitioned to either Tier 1 or Tier 2 and removed from the Real-Time tier. The transition delay from Real-Time to Tier 1 or Tier 2 is between 14 and 26 days.
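The collection is published in the Google Earth Engine data catalog; a minimal access sketch in the Earth Engine Python API might look like the following (the asset ID `LANDSAT/LC08/C01/T1` follows the Collection 1 naming convention and the point coordinates are only an illustration; check the current catalog before relying on either):

```python
# Hedged sketch: count Tier 1 Landsat 8 scenes over a point of interest.
# Assumes the earthengine-api package and an authenticated session.
import ee

ee.Initialize()

tier1 = (
    ee.ImageCollection('LANDSAT/LC08/C01/T1')   # Collection 1, Tier 1
    .filterDate('2017-01-01', '2017-12-31')
    .filterBounds(ee.Geometry.Point([-122.45, 37.75]))
)
print('Tier 1 scenes found:', tier1.size().getInfo())
```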
Provide a detailed description of the following dataset: USGS Landsat 8 Collection 1 Tier 1
Cornell (48%/32%/20% fixed splits)
Node classification on Cornell with the fixed 48%/32%/20% splits provided by Geom-GCN.
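As a practical note, the Geom-GCN split masks for Cornell (and for the Wisconsin and Texas entries below) are bundled with common graph libraries. A hedged loading sketch using PyTorch Geometric's `WebKB` loader, which to our knowledge ships the ten pre-defined 48%/32%/20% masks:

```python
# Hedged sketch: load Cornell with the fixed Geom-GCN splits.
# Assumes torch and torch_geometric are installed; the mask layout
# ([num_nodes, 10], one column per pre-defined split) is the
# convention used by this loader.
from torch_geometric.datasets import WebKB

dataset = WebKB(root='data/webkb', name='Cornell')
data = dataset[0]

split = 0  # pick one of the ten fixed splits
train_mask = data.train_mask[:, split]
val_mask = data.val_mask[:, split]
test_mask = data.test_mask[:, split]
print(int(train_mask.sum()), int(val_mask.sum()), int(test_mask.sum()))
```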
Provide a detailed description of the following dataset: Cornell (48%/32%/20% fixed splits)
Wisconsin (48%/32%/20% fixed splits)
Node classification on Wisconsin with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Wisconsin (48%/32%/20% fixed splits)
Texas (48%/32%/20% fixed splits)
Node classification on Texas with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Texas (48%/32%/20% fixed splits)
Film (48%/32%/20% fixed splits)
Node classification on Film with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Film (48%/32%/20% fixed splits)
Chameleon (48%/32%/20% fixed splits)
Node classification on Chameleon with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Chameleon (48%/32%/20% fixed splits)
Squirrel (48%/32%/20% fixed splits)
Node classification on Squirrel with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Squirrel (48%/32%/20% fixed splits)
xView3-SAR
Unsustainable fishing practices worldwide pose a major threat to marine resources and ecosystems. Identifying vessels that do not show up in conventional monitoring systems, known as "dark vessels", is key to managing and securing the health of marine environments. With the rise of satellite-based synthetic aperture radar (SAR) imaging and modern machine learning (ML), it is now possible to automate detection of dark vessels day or night, under all-weather conditions. SAR images, however, require a domain-specific treatment and are not widely accessible to the ML community. Maritime objects (vessels and offshore infrastructure) are relatively small and sparse, challenging traditional computer vision approaches. We present the largest labeled dataset for training ML models to detect and characterize vessels and ocean structures in SAR imagery. xView3-SAR consists of nearly 1,000 analysis-ready SAR images from the Sentinel-1 mission that are, on average, 29,400-by-24,400 pixels each. The images are annotated using a combination of automated and manual analysis. Co-located bathymetry and wind state rasters accompany every SAR image. We also provide an overview of the xView3 Computer Vision Challenge, an international competition using xView3-SAR for ship detection and characterization at large scale. We release the data ([https://iuu.xview.us/](https://iuu.xview.us/)) and code ([https://github.com/DIUx-xView](https://github.com/DIUx-xView)) to support ongoing development and evaluation of ML approaches for this important application.
Provide a detailed description of the following dataset: xView3-SAR
HumSet
Timely and effective response to humanitarian crises requires quick and accurate analysis of large amounts of text data, a process that can highly benefit from expert-assisted NLP systems trained on validated and annotated data in the humanitarian response domain. To enable creation of such NLP systems, we introduce and release **HumSet**, a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. The dataset provides documents in three languages (English, French, Spanish) and covers a variety of humanitarian crises from 2018 to 2021 across the globe. For each document, **HumSet** provides selected snippets (entries) as well as assigned classes to each entry annotated using common humanitarian information analysis frameworks. **HumSet** also provides novel and challenging entry extraction and multi-label entry classification tasks. In this paper, we take a first step towards approaching these tasks and conduct a set of experiments on Pre-trained Language Models (PLM) to establish strong baselines for future research in this domain. The dataset is available at [https://blog.thedeep.io/humset/](https://blog.thedeep.io/humset/).
Provide a detailed description of the following dataset: HumSet
Cora (48%/32%/20% fixed splits)
Node classification on Cora with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Cora (48%/32%/20% fixed splits)
Citeseer (48%/32%/20% fixed splits)
Node classification on Citeseer with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: Citeseer (48%/32%/20% fixed splits)
PubMed (48%/32%/20% fixed splits)
Node classification on PubMed with the fixed 48%/32%/20% splits provided by Geom-GCN.
Provide a detailed description of the following dataset: PubMed (48%/32%/20% fixed splits)
Raw-Microscopy and Raw-Drone
Raw-Microscopy:
* 940 raw bright-field microscopy images of human blood smear slides for leukocyte classification (microscopy/images/raw_scale100) with corresponding labels (microscopy/labels)
* 5,640 variations measured at six additional different intensities (microscopy/images/raw_scale001-raw_scale0075)
* 11,280 images of the raw sensor data processed through twelve different pipelines (microscopy/images/processed_views)

Raw-Drone:
* 548 raw drone camera images for car segmentation (drone/images_tiles_256/raw_scale100) with corresponding binary segmentation masks (drone/masks_tiles_256). The images and the masks are cropped from 12 raw drone camera images (drone/images_full/raw_scale100) and 12 masks (drone/masks_full) of size 3648 by 5472.
* 3,288 variations measured at six additional different intensities (drone/images_tiles_256/raw_scale001-raw_scale075)
* 6,576 images of the raw sensor data processed through twelve different pipelines (drone/images_tiles_256/processed_views)

Detailed datasheets for the two datasets can be found in the appendices of https://arxiv.org/abs/2211.02578.
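Given the directory layout above, a minimal sketch for pairing the full-intensity raw microscopy images with their labels might look like this (file extensions, the label format, and the stem-matching convention are assumptions, not part of the official release notes):

```python
# Hedged sketch: walk the described folder structure and pair images
# with label files by shared filename stem (assumed convention).
from pathlib import Path

root = Path('microscopy')
raw_images = sorted((root / 'images' / 'raw_scale100').glob('*'))
labels_dir = root / 'labels'

for img_path in raw_images[:5]:
    matches = list(labels_dir.glob(img_path.stem + '*'))
    print(img_path.name, '->', [m.name for m in matches])
```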
Provide a detailed description of the following dataset: Raw-Microscopy and Raw-Drone
CLSE
**CLSE** is an augmented version of the Schema-Guided Dialog Dataset. The corpus includes 34 languages and covers 74 different semantic types to support various applications from airline ticketing to video games.
Provide a detailed description of the following dataset: CLSE
QDax
**QDax** is a benchmark suite designed for Deep Neuroevolution in Reinforcement Learning domains for robot control. The suite includes the definition of tasks, environments, behavioral descriptors, and fitness. It specifies different benchmarks based on the complexity of both the task and the agent controlled by a deep neural network. The benchmark uses standard Quality-Diversity metrics, including coverage, QD-score, maximum fitness, and an archive profile metric to quantify the relation between coverage and fitness.
Provide a detailed description of the following dataset: QDax
TempWikiBio
**TempWikiBio** is a new data-to-text generation dataset containing more than 4 million chronologically ordered revisions of biographical articles from English Wikipedia, each paired with structured personal profiles.
Provide a detailed description of the following dataset: TempWikiBio
xP3
**xP3** is a multilingual dataset for multitask prompted finetuning. It is a composite of supervised datasets in 46 languages with English and machine-translated prompts.
Provide a detailed description of the following dataset: xP3
20NewsGroups
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.
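The collection is bundled with scikit-learn, so a minimal loading sketch for the standard by-date train/test split looks roughly like this:

```python
# Load 20 Newsgroups via scikit-learn; stripping headers, footers and
# quotes is a common choice to avoid overfitting to metadata.
from sklearn.datasets import fetch_20newsgroups

train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'))

print(len(train.data), 'training documents')  # roughly 11,314
print(len(test.data), 'test documents')       # roughly 7,532
print(train.target_names[:5])                 # first few of the 20 labels
```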
Provide a detailed description of the following dataset: 20NewsGroups
Laval Indoor HDR Dataset
This dataset contains 2100+ high resolution indoor panoramas, captured using a Canon 5D Mark III and a robotic panoramic tripod head. Each capture was multi-exposed (22 f-stops) and is fully HDR, without any saturation. Panoramas were stitched from 6 captures (60 degrees azimuth increment) and were captured in a wide variety of indoor environments.
Provide a detailed description of the following dataset: Laval Indoor HDR Dataset
Paper2Fig100k
**Paper2Fig100k** is a dataset with over 100k images of figures and texts from research papers. The figures show architecture diagrams and methodologies of articles available at arXiv.org from fields like artificial intelligence and computer vision. Figures usually include text and discrete objects, e.g., boxes in a diagram, with lines and arrows that connect them.
Provide a detailed description of the following dataset: Paper2Fig100k
FISHTRAC
A dataset of real-world underwater videos annotated with multi-object tracking labels. The data was collected off the coast of the Big Island of Hawaii, with the primary goal of helping scientists who study fish behavior in order to conserve rare and beautiful fish species. Features:
* Real-world video collected primarily by divers: results in complex camera motion and complex background effects (coral), which pose a challenge for many tracking systems.
* Limited training data: only 3 videos are reserved for training; the other 11 are reserved for testing. This allows one to examine how approaches cope with little training data.
* Provided deep detections: unfiltered deep detections are provided from RetinaNet. These detections are reasonable but imperfect, allowing one to examine how to cope with unfiltered predictions.
* Complete labels: every frame is annotated, and every fish that can be identified with 100% certainty from a single frame is marked.
Provide a detailed description of the following dataset: FISHTRAC
RealHDRTV_dataset
The RealHDRTV dataset is the first real-world paired SDRTV-HDRTV dataset, containing SDRTV-HDRTV pairs at 8K resolution captured by a smartphone camera in its "SDR" and "HDR10" modes. To avoid possible misalignment, a professional steady tripod was used and captures were made only indoors or in controlled static scenes. After acquisition, regions with obvious motion (10+ pixels) or changes in lighting conditions were cut out, the remaining content was cropped into 4K image pairs, and a global 2D translation was applied to align the cropped pairs. Pairs still showing obvious misalignment were then removed, yielding final 4K SDRTV-HDRTV pairs with misalignment of no more than 1 pixel as the labeled inference dataset.
Provide a detailed description of the following dataset: RealHDRTV_dataset
Malimg
The Malimg Dataset contains 9,339 malware byteplot images from 25 different families.
Provide a detailed description of the following dataset: Malimg
DPCSpell-Bangla-SEC-Corpus
**DPCSpell-Bangla-SEC-Corpus** is a large-scale parallel corpus for Bangla spelling error correction, released under the MIT license.
Provide a detailed description of the following dataset: DPCSpell-Bangla-SEC-Corpus
codecov-benchs-for-4.3&5.3
slopt_fuzzbench_and_bandit_plot_data.tar.gz contains all `plot_data` of fuzzer instances that were run in the FuzzBench benchmark (Section 5.3) and Bandit Algorithm Comparison (Section 4.3).
Provide a detailed description of the following dataset: codecov-benchs-for-4.3&5.3
bugcov-benchs-for-5.4
slopt_magma_jsons.tar.gz contains the summary of the Magma benchmark (Section 5.4) as JSON files, which was generated by exp2json.py.
Provide a detailed description of the following dataset: bugcov-benchs-for-5.4
DAIR-V2X
**DAIR-V2X** is a large-scale, multi-modality, multi-view dataset from real scenarios for vehicle-infrastructure cooperative autonomous driving (VICAD). DAIR-V2X comprises 71,254 LiDAR frames and 71,254 camera frames, all captured from real scenes with 3D annotations.
Provide a detailed description of the following dataset: DAIR-V2X
TAP-Vid
**TAP-Vid** is a benchmark which contains both real-world videos with accurate human annotations of point tracks, and synthetic videos with perfect ground-truth point tracks. This is designed for a new task called tracking any point.
Provide a detailed description of the following dataset: TAP-Vid
CRIPP-VQA
**CRIPP-VQA** is a video question answering dataset for reasoning about the implicit physical properties of objects in a scene. It contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects.
Provide a detailed description of the following dataset: CRIPP-VQA
INSANE Cross-Domain UAV Data Set
This data set contains over 600GB of multimodal data from a Mars analog mission, including accurate 6DoF outdoor ground truth, indoor-outdoor transitions with continuous cross-domain ground truth, and indoor data with Optitrack measurements as ground truth. With 26 flights and a combined distance of 2.5km, this data set provides various distinct challenges for testing and proving your algorithms.

The UAV carries 18 sensors, including a high-resolution navigation camera and a stereo camera with an overlapping field of view, two RTK GNSS sensors with centimeter accuracy, as well as three IMUs placed at strategic locations: hardware-dampened at the center, off-center with a lever arm, and a 1kHz IMU rigidly attached to the UAV (in case you want to work with unfiltered data). The sensors are fully pre-calibrated, and the data set is ready to use. However, if you want to use your own calibration algorithms, the raw calibration data is also available for download.

The cross-domain outdoor-to-indoor transition segments are especially challenging because of realistic sensor behavior such as GNSS degradation and dropouts, changes in the measured magnetic field, and changing light conditions when transitioning to the indoor environment. The data set also provides 4 hours of static sensor data and vibration data with accurate RPM measurements, giving additional information about sensor properties and vehicle integrity.

The data set provides everything you need to test your research: first, test your algorithms with the indoor data sets in a controlled environment, then switch to more challenging flight scenarios, such as the transition data, which requires sensor switching, or the Mars analog data with higher velocities, multiple touchdowns, challenging ground structures, and constant-velocity segments. The Mars analog data also contains cliff flyovers and traverses with the stereo camera facing the cliff, in case you want to perform a 3D reconstruction or challenge your SLAM algorithm. The critical aspects of each data set are shown on the website (https://sst.aau.at/cns/datasets), making it easy to find the best data to test or challenge your algorithm.
Provide a detailed description of the following dataset: INSANE Cross-Domain UAV Data Set
legal_NER
**legal_NER** is a corpus of 46,545 annotated legal named entities mapped to 14 legal entity types. It is designed for named entity recognition in Indian court judgments.
Provide a detailed description of the following dataset: legal_NER
Halpe-FullBody
**Halpe-FullBody** is a full body keypoints dataset in which each person is annotated with 136 keypoints, including 20 for the body, 6 for the feet, 42 for the hands and 68 for the face. It is designed for the task of whole-body human pose estimation.
Provide a detailed description of the following dataset: Halpe-FullBody
Interiorverse
**Interiorverse** is a high-quality indoor scene dataset with rich details, including complex furniture and decorations. It is rendered with the GGX BRDF model, which has stronger material modeling capability than simpler BRDF models.
Provide a detailed description of the following dataset: Interiorverse
EventEA
**EventEA** is an event-centric entity alignment dataset, harvested from EventKG, DBpedia and Wikidata.
Provide a detailed description of the following dataset: EventEA
GLOBEM
**GLOBEM** is a multi-year passive sensing dataset collection, containing over 700 user-years of data from 497 unique users collected from mobile and wearable sensors, together with a wide range of well-being metrics. The datasets can support multiple cross-dataset evaluations of behavior modeling algorithms' generalizability across different users and years.
Provide a detailed description of the following dataset: GLOBEM
Amharic - English Parallel Corpus for Machine Translation
**Amharic - English Parallel Corpus for Machine Translation** contains 33,955 sentence pairs of text extracted from news platforms such as the Ethiopian Press Agency, Fana Broadcasting Corporate, and Walta Information Center. As the data comes from different sources, it covers various domains, such as religious texts (Bible and Quran), politics, economics, sports, and news, among others.
Provide a detailed description of the following dataset: Amharic - English Parallel Corpus for Machine Translation
Concise
**Concise** comprises two datasets of 2,000 sentences each, annotated by two and five human annotators, respectively. They are designed for the new task of making sentences concise.
Provide a detailed description of the following dataset: Concise
DIALOCONAN
**DIALOCONAN** is a dataset comprising over 3000 fictitious multi-turn dialogues between a hater and an NGO operator, covering 6 targets of hate.
Provide a detailed description of the following dataset: DIALOCONAN
CoP3D
**CoP3D** is a collection of crowd-sourced videos showing around 4,200 distinct pets. It is a large-scale dataset for benchmarking non-rigid 3D reconstruction "in the wild".
Provide a detailed description of the following dataset: CoP3D
CELLS
**CELLS** is a large (63k pairs) and broad-ranging (12 journals) parallel corpus for lay language generation. Each abstract and its corresponding lay language summary are written by domain experts, assuring the quality of the dataset.
Provide a detailed description of the following dataset: CELLS
VGGSound-Sparse
The dataset uses VGG-Sound, which consists of 10s clips collected from YouTube for 309 sound classes. A subset of 'temporally sparse' classes is selected using the following procedure: 5–15 videos are randomly picked from each of the 309 VGGSound classes and manually annotated as to whether audio-visual cues are only sparsely available. As a result, 12 classes (∼4%) are selected, corresponding to 6.5k and 0.6k videos in the train and test sets, respectively. The classes include 'dog barking', 'chopping wood', 'lion roaring', 'skateboarding', etc.
Provide a detailed description of the following dataset: VGGSound-Sparse
FinRL-Meta
**FinRL-Meta** is a universe of market environments for data-driven financial reinforcement learning. It follows the de facto standard of OpenAI Gym and the lean principle of software development. Its unique features include a layered structure with extensibility, a training-testing-trading pipeline, and a plug-and-play mode.
Provide a detailed description of the following dataset: FinRL-Meta
Conversational Stance Detection
**Conversational Stance Detection (CSD)** is a dataset with annotations of stances and the structures of conversation threads. It consists of 500 conversation threads (comprising 500 posts and 5,376 comments) from six major social media platforms in Hong Kong.
Provide a detailed description of the following dataset: Conversational Stance Detection
SciHTC
**SciHTC** is a dataset for hierarchical multi-label text classification (HMLTC) of scientific papers which contains 186,160 papers and 1,233 categories from the ACM CCS tree.
Provide a detailed description of the following dataset: SciHTC
CochlScene
**CochlScene** is a dataset for acoustic scene classification. The dataset consists of 76k samples collected from 831 participants in 13 acoustic scenes.
Provide a detailed description of the following dataset: CochlScene
NJH
**NJH** is a dataset of over 40,000 tweets about immigration from the US and UK, annotated with six labels for different aspects of incivility and intolerance. It takes a more fine-grained, multi-label approach to predicting incivility and hateful or intolerant content.
Provide a detailed description of the following dataset: NJH
MCXFACE
The Multi-Channel Heterogeneous Face Recognition dataset (MCXFace) is derived from the HQ-WMCA dataset (https://www.idiap.ch/en/dataset/hq-wmca). It contains images of 51 subjects collected in three different sessions and under various illumination conditions. The available channels are color (RGB), thermal, near-infrared (850 nm), short-wave infrared (1300 nm), depth, stereo depth, and depth estimated from RGB images using the 3DDFA method; all channels are registered spatially and temporally across all modalities. Overall, 7,406 images together with landmark annotations and standard protocols are available in this dataset.

The MCXFace dataset contains only bonafide samples. The files are divided into train and dev sets with disjoint sets of identities, making experiments in different homogeneous and heterogeneous settings possible. For each protocol, five different folds were created by randomly dividing the subjects into train and dev partitions.

In addition to the images, annotations for the left and right eye centers of all images are provided in a JSON file. Specifically, for each image, annotations for the left eye, right eye, mouth left corner, mouth right corner, nose, top left, and top right are provided. Each image file is stored as a ".jpg" file with a resolution of 1920x1200.
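A minimal sketch for reading the landmark annotations might look like the following; the JSON file name, the filename-keyed layout, and the landmark key names are assumptions inferred from the description rather than the official schema:

```python
# Hedged sketch: inspect a few MCXFace landmark annotations.
import json

with open('annotations.json') as f:   # hypothetical file name
    annotations = json.load(f)

for image_name, points in list(annotations.items())[:3]:
    # Assumed keys, mirroring the description: left eye, right eye,
    # mouth corners, nose, top left, top right.
    print(image_name, points.get('left_eye'), points.get('right_eye'))
```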
Provide a detailed description of the following dataset: MCXFACE
MEVID
**Multi-view Extended Videos with Identities dataset (MEVID)** is a dataset for large-scale, video person re-identification (ReID) in the wild. It spans an extensive indoor and outdoor environment across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, it contains labels of the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen in 33 camera views from the very large-scale MEVA person activities dataset.
Provide a detailed description of the following dataset: MEVID
ToM-in-AMC
**ToM-in-AMC** (short for Theory-of-Mind meta-learning Assessment with Movie Characters) is a novel NLP benchmark. It consists of 1,000 parsed movie scripts, each corresponding to a few-shot character understanding task.
Provide a detailed description of the following dataset: ToM-in-AMC
WIKIPerson
**WIKIPerson** is a high-quality human-annotated visual person linking dataset based on Wikipedia. The dataset contains a total of 48k different news images, covering 13k of 120K Person named entities, each of which corresponds to a celebrity in Wikipedia. Unlike datasets previously commonly used in entity linking (EL), a mention in WIKIPerson is only an image containing the person entity with its bounding box; the corresponding label identifies a unique entity in Wikipedia. For each entity in Wikipedia, textual descriptions as well as images are provided to satisfy the needs of three sub-tasks.
Provide a detailed description of the following dataset: WIKIPerson
DOORS
**DOORS** is a dataset designed for boulder recognition, centroid regression, segmentation, and navigation applications. The dataset is divided into two sets:
- Regression: contains images, masks, and labels for 4 splits of single boulders positioned on the surface of a spherical mesh. It can be used for navigation, boulder recognition, segmentation, and centroid regression.
- Segmentation: contains images, masks, and labels of 2 datasets, DS1 and DS2. DS1 is made of the same images as the Regression dataset but is specifically designed for segmentation. DS2 is made of images with multiple instances of boulders appearing on the surface of the Didymos asteroid model.
Provide a detailed description of the following dataset: DOORS
Stanceosaurus
**Stanceosaurus** is a corpus of 28,033 tweets in English, Hindi, and Arabic annotated with stance towards 251 misinformation claims. The claims in Stanceosaurus originate from 15 fact-checking sources that cover diverse geographical regions and cultures. Unlike existing stance datasets, it introduces a more fine-grained 5-class labeling strategy with additional subcategories to distinguish implicit stance.
Provide a detailed description of the following dataset: Stanceosaurus
FSDSoundScapes
A synthetic sound mixture specification dataset for the Target Sound Extraction (TSE) task. Dataset samples consist of a [.jams](https://jams.readthedocs.io/en/stable/) file specifying the mixture components, and a metadata file with target labels. Mixtures are 6 seconds long and contain 3-5 unique foreground sounds over a 6 second long background sound. Each sample is provided with 3 target labels, and sounds corresponding to all target labels are guaranteed to be present in the mixture. [FSDKaggle2018](https://zenodo.org/record/2552860) is used as the source for foreground sounds and [TAU Urban Acoustic Scenes 2019](https://dcase.community/challenge2019/task-acoustic-scene-classification) is used as the source for background sounds.
##### Splits
* Train: 50K
* Val: 5K
* Test: 10K
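Since each sample is specified as a .jams file, a minimal inspection sketch with the `jams` library might look like this (the file name is hypothetical and the annotation namespaces used by this dataset are an assumption; inspect a real file to confirm):

```python
# Hedged sketch: open one mixture specification and list its annotations.
import jams

jam = jams.load('sample_mixture.jams')  # hypothetical file name
print('duration (s):', jam.file_metadata.duration)
for ann in jam.annotations:
    print('namespace:', ann.namespace, '| observations:', len(ann.data))
```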
Provide a detailed description of the following dataset: FSDSoundScapes
MMDialog
**MMDialog** is a large-scale multi-turn dialogue dataset containing multi-modal open-domain conversations derived from real human-human chat content in social media. MMDialog contains 1.08M dialogue sessions and 1.53M associated images. On average, one dialogue session has 2.59 images, which can be located anywhere at any conversation turn.
Provide a detailed description of the following dataset: MMDialog
AnimeRun
**AnimeRun** is a 2D animation visual correspondence dataset. It is built by converting open-source three-dimensional (3D) movies to full scenes in 2D style, including simultaneously moving backgrounds and interactions of multiple subjects.
Provide a detailed description of the following dataset: AnimeRun
EntitySeg
The **EntitySeg** dataset contains 33,227 images with high-quality mask annotations. Compared with existing datasets, EntitySeg has three distinct properties. First, 71.25% and 86.23% of the images are high-resolution, with at least 2000px×2000px and 1000px×1000px respectively, which is more consistent with current digital imaging trends. Second, the dataset is open-world and is not limited to predefined classes. Third, the mask annotations along object boundaries are more accurate than in existing datasets.
Provide a detailed description of the following dataset: EntitySeg
Visual Commonsense Immorality benchmark
**Visual Commonsense Immorality benchmark** is a benchmark designed to evaluate commonsense immorality. It contains 2,172 immoral images for general and extensive immoral image detection.
Provide a detailed description of the following dataset: Visual Commonsense Immorality benchmark
AtyPict
**AtyPict** is a dataset of atypical sketch content designed for atypical sketch content detection tasks.
Provide a detailed description of the following dataset: AtyPict
MACSum
**MACSum** is a human-annotated summarization dataset for controlling mixed attributes. It contains source texts from two domains, news articles and dialogues, with human-annotated summaries controlled by five designed attributes (Length, Extractiveness, Specificity, Topic, and Speaker).
Provide a detailed description of the following dataset: MACSum
Spatial Monitoring and Insect Behavioural Analysis Dataset
Insects are the most important global pollinator of crops and play a key role in maintaining the sustainability of natural ecosystems. Insect pollination monitoring and management are therefore essential for improving crop production and food security. Computer vision-facilitated pollinator monitoring can intensify data collection over what is feasible using manual approaches. We introduce a novel system to facilitate markerless data capture for insect counting, insect motion tracking, behaviour analysis and pollination prediction across large agricultural areas. Our system is comprised of edge computing multi-point video recording, offline automated multi-species insect counting, tracking and behavioural analysis. We implement and test our system on a commercial berry farm to demonstrate its capabilities.

This dataset contains movement tracks for four insect varieties and flowers recorded across nine monitoring stations within polytunnels in a strawberry farm. Insect tracks include details on flower visits, recorded locations and sighted times. Insect varieties included in the dataset are honeybees (1805 tracks), Syrphidae (85 tracks), Lepidoptera (100 tracks) and Vespids (385 tracks).

This upload also consists of software to analyse pollination and visualise insect trajectories. In addition, it contains an annotated dataset of images from the four classes for YOLOv4 object detection model training and testing, and a dataset of ten videos used for the system evaluation.

Associated Publication: M.N. Ratnayake, D.C. Amarathunga, A. Zaman, A.G. Dyer and A. Dorin, "Spatial Monitoring and Insect Behavioural Analysis Using Computer Vision for Precision Pollination", accepted for publication in the International Journal of Computer Vision (acceptance date: 7th November 2022).
Provide a detailed description of the following dataset: Spatial Monitoring and Insect Behavioural Analysis Dataset
Re-DocRED
The Re-DocRED dataset resolved the following problems of DocRED:
1. Resolved the incompleteness problem by supplementing large amounts of relation triples.
2. Addressed the logical inconsistencies in DocRED.
3. Corrected the coreferential errors within DocRED.
Provide a detailed description of the following dataset: Re-DocRED
KAMEL
KAMEL comprises knowledge about 234 relations from Wikidata, with large training, validation, and test splits. All facts are verified to also be present in Wikipedia, so that they have been seen during the pre-training procedure of the LMs being probed. Most importantly, it overcomes the limitations of existing probing datasets: (1) it covers a larger variety of knowledge graph relations, (2) it contains single- and multi-token entities, (3) it uses relations with literals, (4) it has alternative labels for entities, (5) it adds an evaluation procedure for higher-cardinality relations, which was missing in previous works, and (6) it can be used with causal LMs.
Provide a detailed description of the following dataset: KAMEL
QuALITY
**QuALITY** (**Question Answering with Long Input Texts, Yes!**) is a multiple-choice question answering dataset for long document comprehension. The dataset consists of context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, the questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts.
Provide a detailed description of the following dataset: QuALITY
CoRAL dataset
**CoRAL** is a language- and culture-aware Croatian abusive language dataset covering phenomena of implicitness and reliance on local and global context.
Provide a detailed description of the following dataset: CoRAL dataset
ICLR Database
**ICLR Database** is a maintained database that tracks ICLR submissions and reviews, augmented with author profiles and higher-level textual features.
Provide a detailed description of the following dataset: ICLR Database
QV-Pipe
The QV-Pipe dataset consists of 9.6k videos, which are collected from real-world urban pipes. The total duration of all videos exceeds 55 hours. Moreover, there are 1 normal class and 16 defect classes. Because the pipe situation is complex and multiple defects often appear at the same time, each video is annotated with multiple labels.
Provide a detailed description of the following dataset: QV-Pipe
CCTV-Pipe
Our CCTV-Pipe dataset consists of 16 defect categories, including structural and functional defects in the pipe. It contains 575 videos totaling 87 hours, collected from real-world urban pipe systems. Different from traditional temporal action localization, our goal in this realistic scenario is to find preferable temporal locations of defects in an untrimmed CCTV video, instead of exact temporal boundaries.
Provide a detailed description of the following dataset: CCTV-Pipe
High Altitude Georeferenced UAV Images
The dataset contains 179 photographs taken by a UAV flying at an altitude of 120 meters. The photographs are high-resolution georeferenced (latitude and longitude) orthoimages. It was originally used for developing a visual-based localization algorithm for UAVs. The dataset can be used for training machine learning models for localization purposes or for building maps.
Provide a detailed description of the following dataset: High Altitude Georeferenced UAV Images
#chinahate
The **#chinahate** dataset contains a total of 2,172,333 tweets hashtagged #china posted during the collection period. It is designed for the task of hate speech detection.
Provide a detailed description of the following dataset: #chinahate
Placenta
**Placenta** is a benchmark dataset for node classification in an underexplored domain: predicting microanatomical tissue structures from cell graphs in placenta histology whole slide images. Cell graphs are large (>1 million nodes per image), node features are varied (64-dimensions of 11 types of cells), class labels are imbalanced (9 classes ranging from 0.21% of the data to 40.0%), and cellular communities cluster into heterogeneously distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for a single structure).
Provide a detailed description of the following dataset: Placenta
BWB
The **BWB** corpus consists of Chinese novels translated by experts into English, and the annotated test set is designed to probe the ability of machine translation systems to model various discourse phenomena.
Provide a detailed description of the following dataset: BWB
DTBM
**DTBM** is a benchmark dataset for Digital Twins that reflects the characteristics of industrial Digital Twin data and examines the scaling challenges of different knowledge graph technologies.
Provide a detailed description of the following dataset: DTBM
S3E
**S3E** is a novel large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor scenes, each exceeding 200 seconds, with well-synchronized and calibrated high-quality stereo camera, LiDAR, and high-frequency IMU data.
Provide a detailed description of the following dataset: S3E
REFIT
Smart meter roll-outs provide easy access to granular meter measurements, enabling advanced energy services, ranging from demand response measures, tailored energy feedback and smart home/building automation. To design such services, train and validate models, access to data that resembles what is expected of smart meters, collected in a real-world setting, is necessary. The REFIT electrical load measurements dataset described in this paper includes whole house aggregate loads and nine individual appliance measurements at 8-second intervals per house, collected continuously over a period of two years from 20 houses. During monitoring, the occupants were conducting their usual routines. At the time of publishing, the dataset has the largest number of houses monitored in the United Kingdom at less than 1-minute intervals over a period greater than one year. The dataset comprises 1,194,958,790 readings, that represent over 250,000 monitored appliance uses. The data is accessible in an easy-to-use comma-separated format, is time-stamped and cleaned to remove invalid measurements, correctly label appliance data and fill in small gaps of missing data.
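As an illustration of working with the comma-separated files, here is a hedged pandas sketch; the file name and column names (`Time`, `Aggregate`, `Appliance1`..`Appliance9`) are assumptions based on common REFIT releases and should be checked against the actual CSVs:

```python
# Hedged sketch: load one REFIT house and downsample the aggregate load.
import pandas as pd

df = pd.read_csv('CLEAN_House1.csv',   # hypothetical file name
                 parse_dates=['Time'],
                 index_col='Time')

# Resample the 8-second readings to 1-minute means for the whole-house load.
aggregate_1min = df['Aggregate'].resample('1min').mean()
print(aggregate_1min.head())
```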
Provide a detailed description of the following dataset: REFIT
HUI speech corpus
The data set contains several speakers; the 5 largest are listed individually and the rest are summarized as "other". All audio files have a sampling rate of 44.1 kHz. For each speaker there is, in addition to the full data set, a clean variant of even higher quality. Various statistics are also provided. The dataset can also be used for automatic speech recognition (ASR) if the audio files are converted to 16 kHz.
Provide a detailed description of the following dataset: HUI speech corpus
Thorsten voice 21.02 neutral
Thorsten-Voice (Thorsten-21.02-neutral) is a neutrally spoken voice dataset recorded by Thorsten Müller, audio-optimized by Dominik Kreutz, and licensed under CC0 to provide it to anybody without any financial or license struggle. It is intended to be used for speech synthesis in German as a single-speaker dataset. It contains about 23 hours of high-quality audio.
Provide a detailed description of the following dataset: Thorsten voice 21.02 neutral
Voxforge German
VoxForge is an open speech dataset that was set up to collect transcribed speech for use with Free and Open Source Speech Recognition Engines (on Linux, Windows and Mac). We will make available all submitted audio files under the GPL license, and then 'compile' them into acoustic models for use with Open Source speech recognition engines such as CMU Sphinx, ISIP, Julius (github) and HTK (note: HTK has distribution restrictions).
Provide a detailed description of the following dataset: Voxforge German
M-AILabs speech dataset
The M-AILABS Speech Dataset is the first large dataset that we are providing free of charge, freely usable as training data for speech recognition and speech synthesis. Most of the data is based on LibriVox and Project Gutenberg. The training data consists of nearly a thousand hours of audio and text files in a prepared format. A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds; the approximate total length per language is given in the respective info.txt files. The texts were published between 1884 and 1964 and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain.
Provide a detailed description of the following dataset: M-AILabs speech dataset
Tobacco800
Tobacco800 is a public subset of the complex document image processing (CDIP) test collection constructed by Illinois Institute of Technology, assembled from 42 million pages of documents (in 7 million multi-page TIFF images) released by tobacco companies under the Master Settlement Agreement and originally hosted at UCSF. Tobacco800, composed of 1290 document images, is a realistic database for document image analysis research as these documents were collected and scanned using a wide variety of equipment over time. In addition, a significant percentage of Tobacco800 are consecutively numbered multi-page business documents, making it a valuable testbed for various content-based document image retrieval approaches. Resolutions of documents in Tobacco800 vary significantly from 150 to 300 DPI and the dimensions of images range from 1200 by 1600 to 2500 by 3200 pixels.
Provide a detailed description of the following dataset: Tobacco800
NSCLC-Radiomics
This collection contains images from 422 non-small cell lung cancer (NSCLC) patients. For these patients pretreatment CT scans, manual delineation by a radiation oncologist of the 3D volume of the gross tumor volume and clinical outcome data are available.
Provide a detailed description of the following dataset: NSCLC-Radiomics
NSCLC-Radiogenomics
Unique radiogenomic dataset from a Non-Small Cell Lung Cancer (NSCLC) cohort of 211 subjects. The dataset comprises Computed Tomography (CT), Positron Emission Tomography (PET)/CT images, semantic annotations of the tumors as observed on the medical images using a controlled vocabulary, segmentation maps of tumors in the CT scans, and quantitative values obtained from the PET/CT scans. Imaging data are also paired with gene mutation, RNA sequencing data from samples of surgically excised tumor tissue, and clinical data, including survival outcomes.
Provide a detailed description of the following dataset: NSCLC-Radiogenomics
Harry Potter Dialogue Dataset
Harry Potter Dialogue is the first dialogue dataset that integrates scene, attribute, and relation information which changes dynamically as the storyline goes on. It can facilitate research on constructing more human-like conversational systems in practice, e.g., virtual assistants or NPCs in games. Moreover, HPD supports both dialogue generation and retrieval tasks.
Provide a detailed description of the following dataset: Harry Potter Dialogue Dataset
SURL
"Identifying Sensitive URLs at Web-Scale" dataset at IMC20
Provide a detailed description of the following dataset: SURL
Criteo Display Advertising Challenge
This dataset contains feature values and click feedback for millions of display ads. Its purpose is to benchmark algorithms for clickthrough rate (CTR) prediction. It has been used for the Display Advertising Challenge hosted by Kaggle: https://www.kaggle.com/c/criteo-display-ad-challenge/

Full description: This dataset contains 2 files, train.txt and test.txt, corresponding to the training and test parts of the data.

Dataset construction: The training dataset consists of a portion of Criteo's traffic over a period of 7 days. Each row corresponds to a display ad served by Criteo, and the first column indicates whether this ad has been clicked or not. The positive (clicked) and negative (non-clicked) examples have both been subsampled (but at different rates) in order to reduce the dataset size. There are 13 features taking integer values (mostly count features) and 26 categorical features. The values of the categorical features have been hashed onto 32 bits for anonymization purposes. The semantics of these features are undisclosed. Some features may have missing values. The rows are chronologically ordered. The test set is computed in the same way as the training set, but it corresponds to events on the day following the training period, and the first column (label) has been removed.

Format: The columns are tab-separated with the following schema: <label> <integer feature 1> ... <integer feature 13> <categorical feature 1> ... <categorical feature 26>. When a value is missing, the field is just empty. There is no label field in the test set.

Dataset assembled by Olivier Chapelle (o.chapelle@criteo.com)
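Given the schema above (1 label, 13 integer features, 26 categorical features, tab-separated, empty field = missing), a minimal parsing sketch:

```python
# Parse one line of train.txt into (label, integer features, categorical
# features); empty fields become None, matching the "missing value" rule.
def parse_train_line(line: str):
    fields = line.rstrip('\n').split('\t')   # 1 + 13 + 26 = 40 fields
    label = int(fields[0])
    ints = [int(v) if v else None for v in fields[1:14]]
    cats = [v if v else None for v in fields[14:40]]
    return label, ints, cats

with open('train.txt') as f:
    for _ in range(3):  # peek at the first few rows
        print(parse_train_line(next(f)))
```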
Provide a detailed description of the following dataset: Criteo Display Advertising Challenge
MedleyVox
**MedleyVox** is an evaluation dataset for multiple singing voices separation. The problem definition in this dataset is categorised into i) duet, ii) unison, iii) main vs. rest, and iv) N-singing separation.
Provide a detailed description of the following dataset: MedleyVox
MAVEN-ERE
**MAVEN-ERE** is a dataset designed for event relation extraction tasks containing 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations.
Provide a detailed description of the following dataset: MAVEN-ERE
UGIF
**UGIF** is a multi-lingual, multi-modal UI grounded dataset for step-by-step task completion on the smartphone. It contains 523 natural language instructions with paired sequences of multilingual UI screens and actions that show how to execute the task in eight languages.
Provide a detailed description of the following dataset: UGIF
Ambiguous VQA
The **Ambiguous VQA** dataset is a dataset of ambiguous questions about images, paired with their answers. It is used to train and evaluate question generation models in English.
Provide a detailed description of the following dataset: Ambiguous VQA
Marine Microalgae Detection in Microscopy Images
The **Marine Microalgae Detection in Microscopy Images** dataset contains a total of 937 images, and all objects in these images were annotated. The total number of annotated objects is 4,201. The training set contains 537 images and the testing set contains 430 images.
Provide a detailed description of the following dataset: Marine Microalgae Detection in Microscopy Images
Hinglish-TOP
**Hinglish-TOP** is a human annotated code-switched semantic parsing dataset containing 10k human annotations for Hindi-English (HINGLISH) code switched utterances, and over 170K CST5 generated code-switched utterances from the TOPv2 dataset.
Provide a detailed description of the following dataset: Hinglish-TOP
UFPR-Periocular
The UFPR-Periocular dataset has 16,830 images of both eyes (33,660 cropped images of each eye) from 1,122 subjects (2,244 classes). All the images were captured by the participant using their own smartphone through a mobile application (app) developed by the authors. There are 15 samples from each subject's eye, obtained in 3 sessions (5 images per session) with a minimum interval of 8 hours between the sessions. The images were collected from June 2019 to January 2020 and have several resolutions varying from 360×160 to 1862×1008 pixels – depending on the mobile device used to capture the image. In total, the dataset has images captured from 196 different mobile devices. Each subject captured their images using the same device model. This dataset's main intra- and inter-class variability are caused by lighting variation, occlusion, specular reflection, blur, motion blur, eyeglasses, off-angle, eye-gaze, makeup, and facial expression. The authors manually annotated the eye corner of all images with 4 points (inside and outside eye corners) and used it to normalize the periocular region regarding scale and rotation. All the original and cropped periocular images, eye-corner annotations, and experimental protocol files are publicly available for the research community (upon request). The paper contains information about images' distributions by gender, age, resolution, and other experiments' details and benchmarks.
Provide a detailed description of the following dataset: UFPR-Periocular
DIO
**Discovering Interacted Objects (DIO)** is a benchmark containing 51 interactions and 1,000+ objects designed for Spatio-temporal Human-Object Interaction (ST-HOI) detection.
Provide a detailed description of the following dataset: DIO
PeMS08
PeMS08 is a traffic forecasting dataset built from road traffic measurements collected by the Caltrans Performance Measurement System (PeMS).
Provide a detailed description of the following dataset: PeMS08
CVE
CVE stands for Common Vulnerabilities and Exposures. CVE is a catalog that classifies publicly disclosed vulnerabilities; each catalogued vulnerability is then evaluated with the Common Vulnerability Scoring System (CVSS) to assess its threat level. A CVE score is often used for prioritizing the security of vulnerabilities.
Provide a detailed description of the following dataset: CVE
standard atomic contexts
The dataset contains the standard contexts of all atomic lattices, in the Concept Explorer format.
Provide a detailed description of the following dataset: standard atomic contexts