Columns: dataset_name (string, 2 to 128 chars), description (string, 1 to 9.7k chars), prompt (string, 59 to 185 chars)
V4V
Over the past few years, a number of research groups have made rapid advances in remote PPG methods for estimating heart rate from digital video and obtained impressive results. How these various methods compare in naturalistic conditions, where spontaneous behavior, facial expressions, and illumination changes are present, is relatively unknown. To enable comparisons among alternative methods, the Vision for Vitals (V4V) dataset was introduced. It is a novel dataset containing high-resolution videos time-locked with varied physiological signals from a diverse population. It contains 150+ subjects with 1300+ videos, along with ground-truth heart rate and respiration rate annotations. It also includes blood pressure waveform signals as part of its physiological data.
Provide a detailed description of the following dataset: V4V
EvoGym
**EvoGym** is a large-scale benchmark for co-optimizing the design and control of soft robots.
Provide a detailed description of the following dataset: EvoGym
TraVLR
**TraVLR** is a synthetic dataset comprising four visio-linguistic reasoning tasks. Each example encodes the scene bimodally such that either modality can be dropped during training/testing with no loss of relevant information. TraVLR's training and testing distributions are also constrained along task-relevant dimensions, enabling the evaluation of out-of-distribution generalisation.
Provide a detailed description of the following dataset: TraVLR
Iconary
The **Iconary** dataset is for testing multimodal communication with drawings and text.
Provide a detailed description of the following dataset: Iconary
image-goal-nav-dataset
A dataset for Image-Goal Navigation in Habitat based on Gibson scenes.
Provide a detailed description of the following dataset: image-goal-nav-dataset
2D Moving Clusters
Contains $10^7$ points sampled from 20 clusters, with incremental concept drift: on each batch (of size 1000), the mean of each cluster moves a random (small) length in a random direction, and the means move independently of each other. This dataset should be used sequentially, in batches of $1000$.
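As a concrete illustration of this generation process, below is a minimal sketch of a compatible batch generator. The cluster shape, initial means, and step-size distribution are not specified by the dataset, so the isotropic Gaussian clusters and all parameter values here are assumptions:

```python
import numpy as np

def moving_clusters(n_batches=10_000, batch_size=1_000, n_clusters=20,
                    drift_scale=0.01, cluster_std=0.05, seed=0):
    """Yield (points, labels) batches from 2D clusters whose means drift each batch."""
    rng = np.random.default_rng(seed)
    means = rng.uniform(0.0, 1.0, size=(n_clusters, 2))  # assumed initialization
    for _ in range(n_batches):
        # Each point is drawn from a randomly chosen cluster (assumed isotropic Gaussian).
        labels = rng.integers(0, n_clusters, size=batch_size)
        points = means[labels] + rng.normal(0.0, cluster_std, size=(batch_size, 2))
        yield points, labels
        # Incremental drift: each mean moves a small random length in an
        # independent random direction before the next batch.
        angles = rng.uniform(0.0, 2.0 * np.pi, size=n_clusters)
        steps = rng.uniform(0.0, drift_scale, size=n_clusters)
        means += steps[:, None] * np.column_stack((np.cos(angles), np.sin(angles)))
```

With the defaults, 10,000 batches of 1,000 points reproduce the $10^7$-point scale described above.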
Provide a detailed description of the following dataset: 2D Moving Clusters
KMIR
**KMIR** (**Knowledge Memorization, Identification, and Reasoning**) is a benchmark that covers 3 types of knowledge, including general knowledge, domain-specific knowledge, and commonsense, and provides 184,348 well-designed questions. KMIR can be used for evaluating knowledge memorization, identification and reasoning abilities of language models.
Provide a detailed description of the following dataset: KMIR
Tecnocampus Hand Image Database
The VIS and TIR data were acquired with a commercial thermal camera, a Testo 882-3. We used a second, external camera to obtain the NIR data: we built a NIR camera from a webcam by replacing its default optical filter with a pair of Kodak IR filters, and we used a printed circuit board with 16 infrared LEDs to provide the infrared illumination. To reduce the variability in the way users present their hand, we used a removable mask/template: users placed their hand on a neoprene surface with the help of a hand mask; once the hand was in place, the mask was removed and the three images (VIS, NIR, TIR) were shot. The same process was repeated for the palmar side of the hand, with the mask flipped top to bottom. Once the first acquisition was finished (taking no more than a minute), the user performed some exercise in order to change the heat conditions of the hand. This step was carried out in less than 30 seconds, after which the user proceeded with the second acquisition. In total, 12 hand images were captured per user in each session.
Provide a detailed description of the following dataset: Tecnocampus Hand Image Database
CARL Database
Visible and thermal images were acquired using a TESTO 880-3 thermographic camera, equipped with an uncooled detector with a spectral sensitivity range from 8 to 14 μm and a germanium optical lens, at an approximate cost of 8,000 EUR. For the NIR, a customized Logitech Quickcam Messenger E2500 was used, with a silicon-based CMOS image sensor sensitive to the whole visible spectrum and the lower half of the NIR (up to approximately 1,000 nm), at a cost of approximately 30 EUR. We replaced the default optical filter of this camera with a pair of Kodak daylight IR filters interposed between the optics and the sensor. They both have similar spectral responses and are coded as Wratten filters 87 and 87C, respectively. In addition, we used a special-purpose printed circuit board (PCB) with a set of 16 infrared LEDs (IREDs) with an emission range from 820 to 1,000 nm, in order to provide the required illumination. The thermographic camera provides a resolution of 160×120 pixels for thermal images and 640×480 for visible images, while the webcam provides a still-picture maximum resolution of 640×480 for near-infrared images; this was the final resolution selected for our experiments. A pair of halogen lamps, positioned 30 degrees away from the frontal direction and about 3 m away from the user, complement the artificial light of the room. Note that all the tripods and structures have fixed markings on the ground.
Provide a detailed description of the following dataset: CARL Database
EasyCall corpus
EasyCall corpus is a dysarthric speech command dataset in Italian. The dataset consists of 21386 audio recordings from 24 healthy and 31 dysarthric speakers, whose individual degree of speech impairment was assessed by neurologists through the Therapy Outcome Measure.
Provide a detailed description of the following dataset: EasyCall corpus
TOPv2
Task Oriented Parsing v2 (TOPv2): representations for intent-slot based dialog systems. Provided under the CC-BY-SA license. Please cite the accompanying paper when using this dataset:

```
@inproceedings{chen-etal-2020-low-resource,
  title = {Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing},
  author = {Xilun Chen and Asish Ghoshal and Yashar Mehdad and Luke Zettlemoyer and Sonal Gupta},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2020},
  publisher = "Association for Computational Linguistics"
}
```

CHANGELOG:
03/10/2021 (V1.1): Added the low-resource splits used in the paper.
09/18/2020 (V1.0): Initial release.

TOPv2 is a multi-domain task-oriented semantic parsing dataset. It is an extension of the TOP dataset (http://fb.me/semanticparsingdialog) with 6 additional domains and 137k new samples. In total, TOPv2 has 8 domains (alarm, event, messaging, music, navigation, reminder, timer, weather) and 180k samples randomly split into train, eval, and test sets for each domain. Please refer to the paper for more data statistics.

Note: as TOPv2 data is provided on a per-domain basis, the UNSUPPORTED utterances in the original TOP dataset were removed, since they could not be mapped to any domain.

The training, evaluation, and test sets for each domain are provided as tab-separated value (TSV) files with file names of the form "domain_split.tsv". The first row of each file contains the column headers, while each following row has the format:

domain <tab> utterance <tab> semantic_parse

where the semantic_parse follows the same format as the original TOP dataset, e.g.

event <tab> Art fairs this weekend in Detroit <tab> [IN:GET_EVENT [SL:CATEGORY_EVENT Art fairs ] [SL:DATE_TIME this weekend ] in [SL:LOCATION Detroit ] ]

The low-resource splits used in our experiments are provided in the `low_resource_splits` subdirectory, including training and validation sets from the reminder and weather domains at 10, 25, 50, 100, 250, 500, and 1000 SPIS.
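A minimal loader for the TSV layout described above might look as follows; the filename in the usage comment follows the "domain_split.tsv" pattern, and the helper itself is an illustration, not part of the release:

```python
import csv

def load_topv2_split(path):
    """Read one TOPv2 TSV split into a list of dicts."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        # QUOTE_NONE: semantic parses may contain quote characters,
        # so fields should be split on tabs only.
        reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        next(reader)  # skip the header row
        for domain, utterance, semantic_parse in reader:
            rows.append({"domain": domain,
                         "utterance": utterance,
                         "semantic_parse": semantic_parse})
    return rows

# Example (hypothetical path following the "domain_split.tsv" naming):
# data = load_topv2_split("event_train.tsv")
```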
Provide a detailed description of the following dataset: TOPv2
Microsoft Academic Graph
The Microsoft Academic Graph is a heterogeneous graph containing scientific publication records, citation relationships between those publications, as well as authors, institutions, journals, conferences, and fields of study. [Documentation](https://docs.microsoft.com/en-us/academic-services/graph/)
Provide a detailed description of the following dataset: Microsoft Academic Graph
RefSeer
A data set containing citations, citation contexts, and papers. [Download instructions](https://github.com/chbrown/refseer)
Provide a detailed description of the following dataset: RefSeer
GLips
The German Lipreading dataset consists of 250,000 publicly available videos of the faces of speakers of the Hessian Parliament, which were processed for word-level lip reading using an automatic pipeline. The format is similar to that of the English-language Lip Reading in the Wild (LRW) dataset, with each H264-compressed MPEG-4 video encoding one word of interest in a context of 1.16 seconds duration, which yields compatibility for studying transfer learning between the two datasets. Choosing video material based on naturally spoken language in a natural environment ensures more robust results for real-world applications than artificially generated datasets with as little noise as possible. The 500 different spoken words, ranging between 4 and 18 characters in length, each have 500 instances and separate MPEG-4 audio and text-metadata files, originating from 1,018 parliamentary sessions. Additionally, the complete TextGrid files containing the segmentation information of those sessions are included. The size of the uncompressed dataset is 15 GB.
Provide a detailed description of the following dataset: GLips
Burr classification images
Original images and images with RUSTICO filters applied. A CSV with the classes is also included.
Provide a detailed description of the following dataset: Burr classification images
DFDM
We created a new dataset, named DFDM, with 6,450 Deepfake videos generated by different Autoencoder models. Specifically, five Autoencoder models, with variations in encoder, decoder, intermediate layer, and input resolution, respectively, were selected to generate Deepfakes based on the same input. We observed visible but subtle visual differences among the different Deepfakes, demonstrating evidence of model attribution artifacts.
Provide a detailed description of the following dataset: DFDM
Intel Lab Data
This dataset contains data collected from 54 sensors deployed in the Intel Berkeley Research lab between February 28th and April 5th, 2004. Mica2Dot sensors with weatherboards collected timestamped topology information, along with humidity, temperature, light, and voltage values once every 31 seconds. Data was collected using the TinyDB in-network query processing system, built on the TinyOS platform.
Provide a detailed description of the following dataset: Intel Lab Data
Moléne Dataset
The French national meteorological service published an open-access dataset of hourly weather observations in Brittany, France, for the month of January 2014. In addition to the graph of ground weather stations, the dataset contains hourly readings of those stations. Readings include temperatures, wind characteristics, rain, and other information.
Provide a detailed description of the following dataset: Moléne Dataset
VID Dataset
The Visual-Inertial-Dynamical (VID) dataset not only focuses on traditional six degrees of freedom (6-DOF) pose estimation, but also provides dynamical characteristics of the flight platform for external force perception or dynamics-aided estimation. The VID dataset contains hardware synchronized imagery and inertial measurements, with accurate ground truth trajectories for evaluating common visual-inertial estimators. Moreover, the proposed dataset highlights rotor speed and motor current measurements, control inputs, and ground truth 6-axis force data to evaluate external force estimation. To the best of our knowledge, the proposed VID dataset is the first public dataset containing visual-inertial and complete dynamical information in the real world for pose and external force evaluation.
Provide a detailed description of the following dataset: VID Dataset
NVALT-8
The NVALT-8 study ($m=200$ participants) examined whether nadroparin combined with chemotherapy could reduce cancer relapse after surgical removal of a non-small cell lung tumour.
Provide a detailed description of the following dataset: NVALT-8
NVALT-11
The NVALT-11 study considered the effect of prophylactic brain radiation versus observation in $m=174$ patients with advanced non-small cell lung cancer.
Provide a detailed description of the following dataset: NVALT-11
AKB-48
**AKB-48** is a large-scale **A**rticulated object **K**nowledge **B**ase which consists of 2,037 real-world 3D articulated object models of **48** categories.
Provide a detailed description of the following dataset: AKB-48
MetaShift
**MetaShift** is a collection of 12,868 sets of natural images across 410 classes. It can be used to benchmark and evaluate how robust machine learning models are to data shifts.
Provide a detailed description of the following dataset: MetaShift
Dataset for the Article "Does the Venue of Scientific Conferences Leverage their Impact? A Large Scale study on Computer Science"
Is there any correlation between the impact of a scientific conference and the venue where it takes place? It seems that no one has tackled this issue before, so we decided to explore the possible implications. On the one hand, we considered the number of citations as an indicator of the impact of a conference; on the other hand, we considered specific touristic indexes that characterize the venue. In this work we report on the results of the large-scale analysis we conducted on the bibliographic data we extracted from nearly 4000 conference series in the Computer Science area and over 2.5 million papers spanning more than 30 years of research. Interestingly, we found out that the two aspects are indeed related, as shown by the detailed analysis of the data.
Provide a detailed description of the following dataset: Dataset for the Article "Does the Venue of Scientific Conferences Leverage their Impact? A Large Scale study on Computer Science"
WSJ Dow Jones Stock Data
Please see code repository. [https://github.com/nlandolfi/acc2022treelinearcascades_stocks](https://github.com/nlandolfi/acc2022treelinearcascades_stocks)
Provide a detailed description of the following dataset: WSJ Dow Jones Stock Data
ZInd
The Zillow Indoor Dataset (ZInD) provides extensive visual data that covers a real-world distribution of unfurnished residential homes. It consists of primary 360º panoramas with annotated room layouts, windows, doors and openings (W/D/O), merged rooms, secondary localized panoramas, and final 2D floor plans. Beyond capture, the various representations include: room layout with W/D/O annotations, merged layouts, 3D textured mesh, and final 2D floor plan.
Provide a detailed description of the following dataset: ZInd
IMDB-Clean
We have cleaned the noisy IMDB-WIKI dataset using a constrained clustering method, resulting in this new benchmark for in-the-wild age estimation. The annotations also allow this dataset to be used for other tasks, such as gender classification and face recognition/verification. For more details, please refer to our FPAge paper.
Provide a detailed description of the following dataset: IMDB-Clean
TriggerCit 2021 Thailand / Nepal floods
Twitter dataset related to flood event onsets in Thailand and Nepal, focused on September 26/27, 2021, June 16/17, 2021, and July 01/02, 2021. The dataset has been processed with a VisualCit pipeline in order to automatically filter a relevant subset of posts through automated image analysis, using deep learning techniques. The posts were then geolocated using the CIME algorithm. Additional information about the data collection and data processing is described in http://arxiv.org/abs/2202.12014
Provide a detailed description of the following dataset: TriggerCit 2021 Thailand / Nepal floods
CRC100K
* This is a set of 100,000 non-overlapping image patches from hematoxylin & eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue.
* All images are 224×224 pixels (px) at 0.5 microns per pixel (MPP).
* For tissue classification, the classes are: Adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR), and colorectal adenocarcinoma epithelium (TUM).
* The images were manually extracted from N=86 H&E-stained human cancer tissue slides from formalin-fixed paraffin-embedded (FFPE) samples from the NCT Biobank (National Center for Tumor Diseases, Heidelberg, Germany) and the UMM pathology archive (University Medical Center Mannheim, Mannheim, Germany). Tissue samples contained CRC primary tumor slides and tumor tissue from CRC liver metastases; normal tissue classes were augmented with non-tumorous regions from gastrectomy specimens to increase variability.
Provide a detailed description of the following dataset: CRC100K
Dafonts Free
This is a dataset of 18624 fonts labeled as 100% Free and Public domain / GPL / OFL on https://www.dafont.com/ with .ttf and .otf extensions. Code used to create it can be found at: https://github.com/duskvirkus/dafonts-free
Provide a detailed description of the following dataset: Dafonts Free
Mathematical Mathematics Memes
### Dataset Description

This dataset contains 10k+ math memes. Memes were approved by admins before being shared with group members; thus all memes follow the community standards. Memes are about college math or above.

### Acknowledgements

Thanks to the [Mathematical Mathematics Memes](https://web.facebook.com/groups/1567682496877142/) community for sharing OC memes.

### Inspiration

- Generate more high-quality math memes.
- Detect hateful or abusive memes.
- Study the popularity of the memes.
- Extract text and predict popularity.

### Copyright

Copyright of all images is kept by their respective owners. All images posted on Facebook are subject to fair use.

**Disclaimer**: These memes were collected by blind web scraping, and I have not reviewed the vast majority of them. I do not agree with any of the sentiments contained therein.
Provide a detailed description of the following dataset: Mathematical Mathematics Memes
EmoSpeech
**EmoSpeech** contains keywords with diverse emotions and background sounds, presented to explore new challenges in audio analysis.
Provide a detailed description of the following dataset: EmoSpeech
Earth on Canvas
A Zero-Shot Sketch-based Inter-Modal Object Retrieval Scheme for Remote Sensing Images.

With the advancement in sensor technology, huge amounts of data are being collected from various satellites. Hence, the task of target-based data retrieval and acquisition has become exceedingly challenging. Existing satellites essentially scan a vast overlapping region of the Earth using various sensing techniques, like multi-spectral, hyperspectral, Synthetic Aperture Radar (SAR), video, and compressed sensing, to name a few. With increasing complexity and different sensing techniques at our disposal, it has become our primary interest to design efficient algorithms to retrieve data from multiple data modalities, given the complementary information that is captured by different sensors. This type of problem is referred to as inter-modal data retrieval.

In remote sensing (RS), there are primarily two important types of problems: land-cover classification and object detection. In this work, we focus on target-based object retrieval, which falls under the realm of object detection in RS. Object retrieval essentially requires high-resolution imagery for objects to be distinctly visible in the image. The main challenge with the conventional retrieval approach using large-scale databases is that, quite often, we do not have any query image sample of the target class at our disposal: the target of interest solely exists as a perception to the user, in the form of an imprecise sketch. In such situations, where a photo query is absent, it can be immensely useful if we can promptly make a quick hand-made sketch of the target. Sketches are a highly symbolic and hieroglyphic representation of data, and one can exploit this minimalistic representation in a sketch-based image retrieval (SBIR) framework.

While dealing with satellite images, it is imperative to collect as many image samples as possible for each object class in order to achieve object recognition with a high success rate. However, in general, there exists a considerable number of classes for which we seldom have any training samples. For such classes, we can use the zero-shot learning (ZSL) strategy. The ZSL approach aims to solve a task without receiving any example of that task during the training phase; this makes the network capable of handling an unseen-class (new-class) sample obtained during the inference phase, upon deployment of the network. Hence, we propose the aerial sketch-image dataset, namely the Earth on Canvas dataset.

Classes in this dataset: Airplane, Baseball Diamond, Buildings, Freeway, Golf Course, Harbor, Intersection, Mobile home park, Overpass, Parking lot, River, Runway, Storage tank, Tennis court.
Provide a detailed description of the following dataset: Earth on Canvas
NTU-X
NTU-X is an extended version of popular [NTU](/dataset/ntu-rgb-d/) dataset.
Provide a detailed description of the following dataset: NTU-X
LSFB Datasets
# Sign Language Datasets for French Belgian Sign Language

This dataset is built upon the work of Belgian linguists from the University of Namur. Over eight years, they collected and annotated 50 hours of videos depicting sign language conversation. 100 signers were recorded, making it one of the most representative sign language corpora. The annotation has been sanitized and enriched with metadata to construct two easy-to-use datasets for sign language recognition: one for continuous sign language recognition and the other for isolated sign recognition.

## LSFB-CONT

The dataset for continuous sign language recognition is made of over 25 hours of video clips. Each clip is associated with a time-aligned annotation file containing the start and the end of each sign, along with a gloss (label) associated with each unique sign. MediaPipe pose and hands information was also computed for each video clip, and these metadata are made available in the dataset.

## LSFB-ISOL

The isolated version of the dataset contains only clips showing one isolated sign, extracted from the LSFB-CONT dataset. We chose to keep all the signs with at least 40 examples, leading to a dataset containing over 50,000 clips for 635 different glosses (labels). The MediaPipe metadata is also available for this dataset.
Provide a detailed description of the following dataset: LSFB Datasets
TR_AR_S2S
Dubbed series have gained a lot of popularity in recent years, with strong support from major media service providers. Such popularity is fueled by studies showing that dubbed versions of TV shows are more popular than their subtitled equivalents. This work proposes an unsupervised approach to construct a speech-to-speech corpus, aligned at the short-segment level, to produce a parallel speech corpus in the source and target languages. Our methodology exploits video frames, speech recognition, machine translation, and noisy-frame removal algorithms to match segments in both languages.
Provide a detailed description of the following dataset: TR_AR_S2S
Kubric
**Kubric** is a data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation masks, depth maps, and optical flow. It also presents a series of 13 different generated datasets for tasks ranging from studying 3D NeRF models to optical flow estimation. *Kubric is mainly built on-top of pybullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.*
Provide a detailed description of the following dataset: Kubric
Pretrained Models of the Benchmarking Algorithms for UVCGAN
The pretrained models from four image translation algorithms (ACL-GAN, Council-GAN, CycleGAN, and U-GAT-IT) on three benchmarking datasets: Selfie2Anime, CelebA_gender, and CelebA_glasses. We trained the models to provide benchmarks for the algorithm detailed in the paper "UVCGAN: UNet Vision Transformer Cycle-consistent GAN for Unpaired Image-to-Image Translation". We only trained a model if a pretrained model was provided by the benchmarking algorithm.
Provide a detailed description of the following dataset: Pretrained Models of the Benchmarking Algorithms for UVCGAN
Fitness-AQA
The largest, first-of-its-kind, in-the-wild, fine-grained workout/exercise posture analysis dataset, covering three different exercises: BackSquat, Barbell Row, and Overhead Press. Seven different types of exercise errors are covered. Unlabeled data is also provided to facilitate self-supervised learning.
Provide a detailed description of the following dataset: Fitness-AQA
WITS
This dataset is an extension of MASAC, a multimodal, multi-party, Hindi-English code-mixed dialogue dataset compiled from the popular Indian TV show 'Sarabhai v/s Sarabhai'. WITS was created by augmenting MASAC with natural language explanations for each sarcastic dialogue. The dataset consists of the transcribed sarcastic dialogues from 55 episodes of the TV show, along with audio and video multimodal signals. It was designed to facilitate Sarcasm Explanation in Dialogue (SED), a novel task aimed at generating a natural language explanation for a given sarcastic dialogue that spells out the intended irony. Each data instance in WITS is associated with a corresponding video, audio, and textual transcript, where the last utterance is sarcastic in nature. All the final selected explanations contain the following attributes:

• Sarcasm source: the speaker in the dialogue who is being sarcastic.
• Sarcasm target: the person/thing towards whom the sarcasm is directed.
• Action word: the verb/action used to describe how the sarcasm takes place, e.g. mocking, insults, taunts, etc.
• Description: a description of the scene to help contextualize the sarcasm.
Provide a detailed description of the following dataset: WITS
FloW
- Marine wastes are severely threatening marine animals and their habitat, and also affect human life through the transport and accumulation of toxic substances. To prevent wastes, especially plastic trash, from getting into the ocean, it is essential to detect and clean floating wastes efficiently in inland waters such as rivers, lakes, and canals.
- FloW is the first dataset for floating waste detection in inland waters. It contains a vision-based sub-dataset, FloW-Img, and a multimodal sub-dataset, FloW-RI, which contains spatially and temporally calibrated image and millimeter-wave radar data.
- By publishing FloW, it is hoped that more attention from research communities will be paid to floating waste detection in inland waters, as well as to the challenge of small object detection over the water surface. In addition, waste detection based on millimeter-wave radar data, or on the fusion of image and radar data, is also a novel task, and FloW provides accessible real-world data.
Provide a detailed description of the following dataset: FloW
ATOM3D
**ATOM3D** is a unified collection of datasets concerning the three-dimensional structure of biomolecules, including proteins, small molecules, and nucleic acids. These datasets are specifically designed to provide a benchmark for machine learning methods which operate on 3D molecular structure, and represent a variety of important structural, functional, and engineering tasks. All datasets are provided in a standardized format along with a Python package containing processing code, utilities, models, and dataloaders for common machine learning frameworks such as PyTorch. ATOM3D is designed to be a living database, where datasets are updated and tasks are added as the field progresses. Description from: [https://www.atom3d.ai/](https://www.atom3d.ai/)
Provide a detailed description of the following dataset: ATOM3D
SILG
**Symbolic Interactive Language Grounding** (**SILG**) is a multi-environment benchmark which unifies a collection of diverse grounded language learning environments under a common interface. SILG consists of grid-world environments that require generalization to new dynamics, entities, and partially observed worlds (RTFM, Messenger, NetHack), as well as symbolic counterparts of visual worlds that require interpreting rich natural language with respect to complex scenes (ALFWorld, Touchdown). Together, these environments provide diverse grounding challenges in richness of observation space, action space, language specification, and plan complexity.
Provide a detailed description of the following dataset: SILG
MPSGaze
This is a synthetic dataset containing full images (instead of only cropped faces) that provides ground truth 3D gaze directions for multiple people in one image.
Provide a detailed description of the following dataset: MPSGaze
I.PHI
**I.PHI** processes the Packard Humanities Institute (PHI) database of ancient Greek inscriptions including the geographical and chronological metadata into a machine actionable format. The processed dataset is referred to as I.PHI.
Provide a detailed description of the following dataset: I.PHI
Human Activity Recognition
We provide six different datasets with a diverse range of activities.
Provide a detailed description of the following dataset: Human Activity Recognition
ImageNet-Patch
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches

Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a dataset to benchmark machine-learning models against adversarial patches. It consists of a set of patches, optimized to generalize across different models, and readily applicable to ImageNet data after preprocessing them with affine transformations. This process enables an approximate yet faster robustness evaluation, leveraging the transferability of adversarial perturbations.
Provide a detailed description of the following dataset: ImageNet-Patch
RENOIR
A dataset of color images corrupted by natural noise due to low-light conditions, together with spatially and intensity-aligned low noise images of the same scenes.
Provide a detailed description of the following dataset: RENOIR
Fingerprint inpainting and denoising
Synthetic training set: this set is constructed in the following two steps and is used for estimation/training purposes. (i) 84,000 275×400-pixel ground-truth fingerprint images, without any noise or scratches but with random transformations (at most five pixels of translation and +/- 10 degrees of rotation), were generated using the software Anguli: Synthetic Fingerprint Generator. (ii) 84,000 275×400-pixel degraded fingerprint images were generated by applying random artifacts (blur, brightness, contrast, elastic transformation, occlusion, scratch, resolution, rotation) and backgrounds to the ground-truth fingerprint images. In total, it contains 168,000 fingerprint images (84,000 fingerprints, and two impressions per fingerprint: one ground-truth and one degraded).

Synthetic test set: this set is constructed similarly to the synthetic training set and is used to evaluate reconstruction performance. In total, it contains 16,800 fingerprint images (8,400 fingerprints and two impressions per fingerprint: one ground-truth and one degraded). Since this set is used for evaluating reconstruction performance, only the degraded fingerprint images, and not the ground-truth ones, are provided to participants.

Real test set: this set is constructed by systematically drawing fingerprint images of varying sizes from publicly available datasets. In total, it contains 1,680 fingerprint images (140 fingerprints and 12 impressions per fingerprint: high-quality scans under operational conditions).

Description from: [Fingerprint inpainting and denoising (WCCI'18, ECCV'18)](https://chalearnlap.cvc.uab.cat/dataset/32/description/)
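For intuition, a sketch of step (ii) is given below for a grayscale image, applying a random subset of the listed artifact types with PIL. The sampling probabilities and parameter ranges are assumptions, and the elastic transformation, scratch, and background steps of the actual pipeline are not reproduced here:

```python
import random
from PIL import Image, ImageEnhance, ImageFilter

def degrade(img, seed=None):
    """Apply a random subset of artifacts to a grayscale ('L' mode) fingerprint image."""
    rng = random.Random(seed)
    out = img.copy()
    if rng.random() < 0.5:  # blur
        out = out.filter(ImageFilter.GaussianBlur(radius=rng.uniform(0.5, 2.0)))
    if rng.random() < 0.5:  # brightness
        out = ImageEnhance.Brightness(out).enhance(rng.uniform(0.6, 1.4))
    if rng.random() < 0.5:  # contrast
        out = ImageEnhance.Contrast(out).enhance(rng.uniform(0.6, 1.4))
    if rng.random() < 0.5:  # rotation (assumed +/- 10 degree range, as for the ground truth)
        out = out.rotate(rng.uniform(-10, 10), fillcolor=255)
    if rng.random() < 0.5:  # occlusion: blank out a random rectangle
        w, h = out.size
        x, y = rng.randrange(w // 2), rng.randrange(h // 2)
        out.paste(255, (x, y, x + w // 4, y + h // 4))
    return out
```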
Provide a detailed description of the following dataset: Fingerprint inpainting and denoising
Cross-View Time Dataset
The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Every day billions of images capture this complex relationship, many of which are associated with precise time and location metadata. We propose to use these images to construct a global-scale, dynamic map of visual appearance attributes. Such a map enables fine-grained understanding of the expected appearance at any geographic location and time. Our approach integrates dense overhead imagery with location and time metadata into a general framework capable of mapping a wide variety of visual attributes. A key feature of our approach is that it requires no manual data annotation. We demonstrate how this approach can support various applications, including image-driven mapping, image geolocalization, and metadata verification.
Provide a detailed description of the following dataset: Cross-View Time Dataset
Cross-View Time Dataset (Cross-Camera Split)
The standard evaluation protocol of the Cross-View Time dataset allows for certain cameras to be shared between training and testing sets. This protocol can emulate scenarios in which we need to verify the authenticity of images from a particular set of devices and locations. Considering the ubiquity of surveillance systems (CCTV) nowadays, this is a common scenario, especially for big cities and high visibility events (e.g., protests, musical concerts, terrorist attempts, sports events). In such cases, we can leverage the availability of historical photographs of that device and collect additional images from previous days, months, and years. This would allow the model to better capture the particularities of how time influences the appearance of that specific place, probably leading to a better verification accuracy.

However, there might be cases in which data originates from heterogeneous sources, such as social media. In this sense, it is essential that models are optimized on camera-disjoint sets to avoid learning sensor-specific characteristics that might not generalize accordingly for new imagery during inference. With this in mind, we propose a novel organization for the CVT dataset. We split available data into training and testing sets, ensuring that all images from a single camera are assigned to the same set. During this division, we aimed to keep the size of each set roughly similar to the original splits, allowing models to be optimized with similar amounts of data.
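A camera-disjoint split of this kind can be sketched greedily, as below; this is an illustration rather than the authors' exact procedure, and the input format (a mapping from image id to camera id) is assumed:

```python
import random
from collections import defaultdict

def camera_disjoint_split(image_cameras, train_frac=0.8, seed=0):
    """Assign whole cameras to train or test so the sets stay camera-disjoint.

    image_cameras: dict mapping image_id -> camera_id (assumed input format).
    Cameras are shuffled, then assigned to train until roughly `train_frac`
    of all images are covered; the remaining cameras go to test.
    """
    by_camera = defaultdict(list)
    for img, cam in image_cameras.items():
        by_camera[cam].append(img)
    cameras = list(by_camera)
    random.Random(seed).shuffle(cameras)
    target = train_frac * len(image_cameras)
    train, test, n_train = [], [], 0
    for cam in cameras:
        if n_train < target:
            train.extend(by_camera[cam])
            n_train += len(by_camera[cam])
        else:
            test.extend(by_camera[cam])
    return train, test
```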
Provide a detailed description of the following dataset: Cross-View Time Dataset (Cross-Camera Split)
MVTEC 3D-AD
MVTec 3D Anomaly Detection Dataset (MVTec 3D-AD) is a comprehensive 3D dataset for the task of unsupervised anomaly detection and localization. It contains over 4000 high-resolution scans acquired by an industrial 3D sensor. Each of the 10 different object categories comprises a set of defect-free training and validation samples and a test set of samples with various kinds of defects. Precise ground-truth annotations are provided for each anomalous test sample.
Provide a detailed description of the following dataset: MVTEC 3D-AD
BODMAS
We collaborate with Blue Hexagon to release a dataset containing timestamped malware samples and well-curated family information for research purposes. The BODMAS dataset contains 57,293 malware samples and 77,142 benign samples collected from August 2019 to September 2020, with carefully curated family information (581 families). We also provide preprocessed feature vectors and metadata available to everyone. The malware binaries can be obtained per request.
Provide a detailed description of the following dataset: BODMAS
StepGame
A Benchmark for Robust Multi-Hop Spatial Reasoning in Texts
Provide a detailed description of the following dataset: StepGame
NASA Perseverance
Samples from NASA Perseverance and a set of GAN-generated synthetic images from Neural Mars.
Provide a detailed description of the following dataset: NASA Perseverance
Moroccan Money dataset
A dataset of all Moroccan money
Provide a detailed description of the following dataset: Moroccan Money dataset
CANDOR Corpus
The CANDOR corpus is a large, novel, multimodal corpus of 1,656 recorded conversations in spoken English. This 7+ million word, 850-hour corpus totals over 1TB of audio, video, and transcripts, with moment-to-moment measures of vocal, facial, and semantic expression, along with an extensive survey of speakers' post-conversation reflections.
Provide a detailed description of the following dataset: CANDOR Corpus
ML guided Logic synthesis
Logic synthesis is a challenging and widely researched combinatorial optimization problem in integrated circuit (IC) design. It transforms a high-level description of hardware in a programming language like Verilog into an optimized digital circuit netlist, a network of interconnected Boolean logic gates, that implements the function. Spurred by the success of ML in solving combinatorial and graph problems in other domains, there is growing interest in the design of ML-guided logic synthesis tools. Yet, there are no standard datasets or prototypical learning tasks defined for this problem domain. Here, we describe OpenABC-D, a large-scale, labeled dataset produced by synthesizing open-source designs with a leading open-source logic synthesis tool, and illustrate its use in developing, evaluating, and benchmarking ML-guided logic synthesis. OpenABC-D has intermediate and final outputs in the form of 870,000 And-Inverter Graphs (AIGs) produced from 1,500 synthesis runs, plus labels such as the optimized node counts and delay. We define a generic learning problem on this dataset and benchmark existing solutions for it. The code related to dataset creation and the benchmark models is available at https://github.com/NYU-MLDA/OpenABC.git
Provide a detailed description of the following dataset: ML guided Logic synthesis
NELA-GT-2021
**NELA-GT-2021** is the fourth installment of the NELA-GT datasets. The dataset contains 1.8M articles from 367 outlets between January 1st, 2021 and December 31st, 2021. Just as in past releases, NELA-GT-2021 includes outlet-level veracity labels from Media Bias/Fact Check and tweets embedded in collected news articles.
Provide a detailed description of the following dataset: NELA-GT-2021
K-SportsSum
K-SportsSum is a sports game summarization dataset with two characteristics: (1) K-SportsSum collects a large amount of data from massive games. It has 7,854 commentary-news pairs. To improve the quality, K-SportsSum employs a manual cleaning process; (2) Different from existing datasets, to narrow the knowledge gap, K-SportsSum further provides a large-scale knowledge corpus that contains the information of 523 sports teams and 14,724 sports players.
Provide a detailed description of the following dataset: K-SportsSum
SportsSum
SportsSum is a Chinese sports game summarization dataset that contains 5,428 soccer games of live commentaries and the corresponding news articles.
Provide a detailed description of the following dataset: SportsSum
SKM-TEA
The **SKM-TEA** dataset pairs raw quantitative knee MRI (qMRI) data, image data, and dense labels of tissues and pathology for end-to-end exploration and evaluation of the MR imaging pipeline. This 1.6TB dataset consists of raw-data measurements of ~25,000 slices (155 patients) of anonymized patient knee MRI scans, the corresponding scanner-generated DICOM images, manual segmentations of four tissues, and bounding box annotations for sixteen clinically relevant pathologies.

## Challenge Tracks

**DICOM Track**: The DICOM benchmarking track uses scanner-generated DICOM images as the input for image segmentation and detection tasks.

**Raw Data Track**: The Raw Data benchmarking track uses raw MRI data (i.e. k-space) as the input for image reconstruction, segmentation, and detection tasks.
Provide a detailed description of the following dataset: SKM-TEA
ILPC22-Small
A small dataset from the Inductive Link Prediction Challenge 2022. Training graph contains 10K entities, 96 relations, 78K triples. Inference graph contains 7K entities, 96 relations, 21K triples. Validation and test triples to predict belong to the inference graph.
Provide a detailed description of the following dataset: ILPC22-Small
ILPC22-Large
A large dataset from the Inductive Link Prediction Challenge 2022. Training graph contains 46K entities, 130 relations, 202K triples. Inference graph contains 30K entities, 130 relations, 77K triples. Validation and test triples to predict belong to the inference graph.
Provide a detailed description of the following dataset: ILPC22-Large
PRIME
This dataset contains both infeasible and feasible data points, as described in [PRIME](https://arxiv.org/abs/2110.11346). The descriptors of the collected data are presented in the table below.

| | # of Infeasible | # of Feasible | Max Runtime (ms) | Min Runtime (ms) | Average Runtime (ms) |
|------------------|-----------------|---------------|------------------|------------------|----------------------|
| MobileNetEdgeTPU | 384355 | 115711 | 16352.26 | 252.22 | 529.13 |
| MobilenetV2 | 744718 | 255414 | 7398.13 | 191.35 | 375.05 |
| MobilenetV3 | 797460 | 202672 | 7001.46 | 405.19 | 993.75 |
| M4 | 791984 | 208148 | 35881.35 | 335.59 | 794.33 |
| M5 | 698618 | 301514 | 35363.55 | 202.55 | 440.52 |
| M6 | 756468 | 243664 | 4236.90 | 127.79 | 301.74 |
| UNet | 449578 | 51128 | 124987.51 | 610.96 | 3681.75 |
| T-RNN Dec | 405607 | 94459 | 4447.74 | 128.05 | 662.44 |
| T-RNN Enc | 410933 | 88880 | 5112.82 | 127.97 | 731.20 |
Provide a detailed description of the following dataset: PRIME
BBAI Dataset
This dataset is for evaluating the task of Black-box Multi-agent Integration, which focuses on combining the capabilities of multiple black-box conversational agents at scale. It provides data to explore two main frameworks: question-agent pairing and question-response pairing. Overall, this dataset contains 5,550 utterances with 19 question-response pairs per question (one from each of the 19 agents), 105,450 in total across 37 domains. The utterances are split into 3,700 utterances (100 examples per domain) for the training set and 1,850 (50 per domain) for the test set. The train and test sets respectively contain 2,399 and 1,186 utterances with at least one positive question-response pair. In the remaining examples, none of the agents was able to achieve annotator agreement (>= 3).
Provide a detailed description of the following dataset: BBAI Dataset
MSP-Podcast
The MSP-Podcast corpus contains speech segments from podcast recordings which are perceptually annotated using crowdsourcing. The collection of this corpus is an ongoing process. Version 1.7 of the corpus has 62,140 speaking turns (100 hrs). Key features of this corpus:

* We download available audio recordings with a common license. We only use podcasts that have less restrictive licenses, so we can modify, sell, and distribute the corpus (you can use it for a commercial product!).
* Most of the segments in a regular podcast are neutral. We use machine learning techniques trained with available data to retrieve candidate segments. These segments are emotionally annotated with crowdsourcing. This approach allows us to spend our resources on speech segments that are likely to convey emotions.
* We annotate categorical emotions and attribute-based labels at the speaking-turn level.
* This is an ongoing effort, where we currently have 62,140 speaking turns (100 hrs). We collect approximately 10,000-13,000 new speaking turns per year. Our goal is to reach 400 hours.
Provide a detailed description of the following dataset: MSP-Podcast
Thermal focus image database
The database was acquired using a TESTO 880-3 thermographic camera. This camera is equipped with an uncooled detector and has a spectral sensitivity range from 8 to 14 μm. It has a removable German optic lens. Its main features are: image resolution 160×120 pixels; optical field/min. focus distance 32°×24°/0.1 m; thermal sensitivity (NETD) <0.1 °C at 30 °C; geometric resolution 3.5 mrad; detector type FPA 160×120 pixels, temperature-stabilized.

The database consists of several image sets. In each set, the camera acquires one image of the scene at each lens position. We manually moved the lens in 1 mm steps, which provides a total of 96 positions; thus, each set consists of 96 different images of the one scene. For this purpose, we attached a millimeter tape to the objective and used a stable tripod in order to acquire the same scene at each lens position.

We acquired different kinds of images according to their information content and depth of focus. It should be easier to focus an image with a large amount of detail, because blurring in such an image will generally be more evident. As in visible images, it should be more difficult to completely focus an image with several objects when each object is located at a different focal distance. We analyzed only static scenes, because of the need for comparability (same position and temperature). We constructed 10 different databases, as follows:

- Telematic equipment (TE): four sets of images of the same scene (an item of telematic equipment). Set one (TE1) is acquired at one meter distance from the scene to the camera, set two (TE2) at two meters, set three (TE3) at three meters, and set four (TE4) at 4 m. Obviously, when moving the camera away from the scene, more objects appear in the image. On the other hand, these four databases contain a scene that can be considered to be contained in a flat plane; thus, they capture a mainly two-dimensional object with a single point of focus.
- Electronic circuit (EC): a single set of images of the same scene (an electronic circuit with components at different temperatures and distances from the camera). It is important to emphasize that, in this case, we are acquiring a very near object with a range of depth; thus, it is not possible to focus the whole image simultaneously.
- Laptop transformer (LT): a single set of images of the same scene (the transformer of a laptop computer).
- Corridor and fluorescents (CF): a single set of images of a single scene (a corridor at the university, illuminated by several ceiling fluorescents).
- Heater (H): a single set of images of a heater. This scene contains a large amount of detail because the metallic parts are warmer than the spaces between them.
- Face (F): a single set of images of a human face. This sequence contains a scene that is not fully static because of involuntary physical movement (eyes, breathing, etc.).
- Hand (Ha): a single set of images of a hand. The hand rests on a black surface.

The database consists of 10 × 96 = 960 images.
Provide a detailed description of the following dataset: Thermal focus image database
TR-News
This dataset is collected from various global and local news sources. Toponyms are manually annotated in the articles with the corresponding entries from GeoNames. In total, the dataset consists of 118 articles.
Provide a detailed description of the following dataset: TR-News
Reddit Conversation Corpus
Reddit Conversation Corpus (RCC) consists of conversations, scraped from Reddit, for a 20 month period from November 2016 until August 2018. To ensure the quality and diversity of topics, 95 subreddits are selected from which conversations are collected. In total, RCC contains 9.2 million 3-turn conversations.
Provide a detailed description of the following dataset: Reddit Conversation Corpus
IEEE-CIS Fraud Detection
#### Can you detect fraud from customer transactions?

Imagine standing at the check-out counter at the grocery store with a long line behind you and the cashier not-so-quietly announces that your card has been declined. In this moment, you probably aren’t thinking about the data science that determined your fate. Embarrassed, and certain you have the funds to cover everything needed for an epic nacho party for 50 of your closest friends, you try your card again. Same result. As you step aside and allow the cashier to tend to the next customer, you receive a text message from your bank. “Press 1 if you really tried to spend $500 on cheddar cheese.”

While perhaps cumbersome (and often embarrassing) in the moment, this fraud prevention system is actually saving consumers millions of dollars per year. Researchers from the IEEE Computational Intelligence Society (IEEE-CIS) want to improve this figure, while also improving the customer experience. With higher-accuracy fraud detection, you can get on with your chips without the hassle.

IEEE-CIS works across a variety of AI and machine learning areas, including deep neural networks, fuzzy systems, evolutionary computation, and swarm intelligence. Today they’re partnering with the world’s leading payment service company, Vesta Corporation, seeking the best solutions for the fraud prevention industry, and now you are invited to join the challenge.

In this competition, you’ll benchmark machine learning models on a challenging large-scale dataset. The data comes from Vesta's real-world e-commerce transactions and contains a wide range of features, from device type to product features. You also have the opportunity to create new features to improve your results.

If successful, you’ll improve the efficacy of fraudulent transaction alerts for millions of people around the world, helping hundreds of thousands of businesses reduce their fraud loss and increase their revenue. And of course, you will save party people just like you the hassle of false positives.

Acknowledgements: Vesta Corporation provided the dataset for this competition. Vesta Corporation is the forerunner in guaranteed e-commerce payment solutions. Founded in 1995, Vesta pioneered the process of fully guaranteed card-not-present (CNP) payment transactions for the telecommunications industry. Since then, Vesta has firmly expanded data science and machine learning capabilities across the globe and solidified its position as the leader in guaranteed ecommerce payments. Today, Vesta guarantees more than $18B in transactions annually.
Provide a detailed description of the following dataset: IEEE-CIS Fraud Detection
Kinetics-100
Kinetics-100 is a dataset split created from the Kinetics dataset to evaluate the performance of few-shot action recognition models. 100 classes are randomly selected from a total of 400 categories, each composed of 100 examples. The 100 classes are further split into 64, 12, and 24 non-overlapping classes to use as the meta-training set, meta-validation set, and meta-testing set, respectively. Link to the selected samples can be found here: https://github.com/ffmpbgrnn/CMN/tree/master/kinetics-100
Provide a detailed description of the following dataset: Kinetics-100
Something-Something-100
Something-Something-100 is a dataset split created from Something-Something V2. A total of 100 classes are selected, and each comprises 100 samples. The 100 classes were split into 64, 12, and 24 non-overlapping classes to use as the meta-training set, meta-validation set, and meta-testing set, respectively. A link to the exact samples selected can be found here: https://github.com/ffmpbgrnn/CMN/tree/master/smsm-100
Provide a detailed description of the following dataset: Something-Something-100
Slovenian Twitter dataset 2018-2020
A comprehensive set of all Slovenian tweets posted in the 2018-2020 period, with retweet links and assigned hate speech classes. Available at a public language resource repository CLARIN.SI.
Provide a detailed description of the following dataset: Slovenian Twitter dataset 2018-2020
AIT-LDSv2.0
Synthetic log data suitable for evaluation of intrusion detection systems, federated learning, and alert aggregation. Each of the 8 datasets corresponds to a testbed representing a small enterprise network, including mail server, file share, WordPress server, VPN, firewall, etc. Normal user behavior is simulated to generate background noise over a time span of 4-6 days. At some point, a sequence of attack steps is launched against the network. Log data is collected from all hosts and includes Apache access and error logs, authentication logs, DNS logs, VPN logs, audit logs, Suricata logs, network traffic packet captures, horde logs, exim logs, syslog, and system monitoring logs. Attacks include scans (nmap, WPScan, dirb), webshell upload, password cracking, privilege escalation, remote command execution, and data exfiltration.
Provide a detailed description of the following dataset: AIT-LDSv2.0
eVED
**Extended Vehicle Energy Dataset** (**eVED**) is an extended version of the Vehicle Energy Dataset (VED), a large-scale dataset for vehicle energy consumption analysis. Compared with its original version, the extended VED (eVED) dataset is enhanced with accurate vehicle trip GPS coordinates, serving as a reliable basis to associate the VED trip records with external information, e.g., road speed limits and intersections, from accessible map services, in order to accumulate attributes that are relevant and essential in analyzing vehicle energy consumption.
Provide a detailed description of the following dataset: eVED
Multi-focus thermal database
The database was acquired using a TESTO 882-3 thermographic camera equipped with an uncooled detector and a spectral sensitivity range from 8 to 14 μm. It has a removable German optic lens with these main features: image resolution 320×240 px; spectral sensitivity 8 to 14 μm; thermal sensitivity (NETD) <0.06 °C at 30 °C; geometric resolution (IFOV) 1.7 mrad; detector type silicon microbolometer, uncooled, temperature-stabilized; FOV 32°×23°; focal distance 15 mm; fixed aperture f/0.95.

The database consists of six image sets. In each set, the camera acquires one image of the scene at each lens position. We manually moved the lens in 1 mm steps, which provides a total of 96 positions; thus, each set consists of 96 different images of the one scene. For this purpose, we attached a millimeter tape to the objective. We also used a stable tripod, in order to acquire the same scene at each lens position, and a dimmer to fix the bulb current. We acquired six image sets:

- Image set 1: the scene is made up of a mobile phone and an RS-232 interface at different distances, with a homogeneous heat-absorbing background. The distance between the camera and the first object is 35 cm and its temperature is 41.2 °C. The distance between objects is 40 cm for all image sets. The maximum temperature of the second object is 32.9 °C.
- Image set 2: the scene is made up of a mobile phone and an RS-232 interface at different distances, with a homogeneous heat-absorbing background. The distance between the camera and the first object is only 15 cm and its temperature is 39.4 °C. The maximum temperature of the second object is 55.9 °C.
- Image set 3: the scene is made up of two bulbs at different distances and a non-homogeneous background (partially black and partially white). The bulbs are acquired with a view of the holders. The distance between the camera and the first object is 30 cm, as in all bulb image sets. The temperature of the first bulb is 51.7 °C and of the second 50.4 °C.
- Image set 4: the scene is made up of two bulbs at different distances and a homogeneous white background. The bulbs are acquired with a view of the holders. The temperature of the first bulb is 43.3 °C and of the second 41.3 °C.
- Image set 5: the scene is made up of two bulbs at different distances and a homogeneous white background. The bulbs are acquired without a view of the holders. The temperature of the first bulb is 57.0 °C and of the second 53.6 °C.
- Image set 6: the scene is made up of two bulbs at different distances and a homogeneous heat-absorbing black background. The bulbs are acquired with a view of the holders. The temperature of the first bulb is 57.9 °C and of the second 54.7 °C.

The reference images, where both objects are sharp, were created very easily using this command in MATLAB:

img = [img1(:,1:thr) img2(:,thr+1:end)];

where img1 and img2 are the images with a perfectly sharp object 1 and object 2, respectively. The variable thr determines the border between these two objects.
Provide a detailed description of the following dataset: Multi-focus thermal database
i3DMM Test Dataset
A new dataset consisting of 64 people with different expressions and hairstyles.
Provide a detailed description of the following dataset: i3DMM Test Dataset
NOAA Atmospheric Temperature Dataset
This dataset contains meteorological observations (temperature) at land-based weather stations in the United States, collected from the Online Climate Data Directory of the National Oceanic and Atmospheric Administration (NOAA). The weather stations are sampled from Western and Southeastern states that actively recorded meteorological observations during 2015. The one-year sequence of hourly temperature records is divided into short sequences of 24 hours. For training, validation, and testing, a sequential 8-2-2 (months) split is used.
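A minimal sketch of this preprocessing, assuming a single station's year of hourly readings in a numpy array; the 30-day month approximation and all variable names are illustrative, not part of the official release:

```python
import numpy as np

# Hypothetical one-year hourly temperature record for a single
# station: 365 days x 24 hours = 8760 readings.
hourly_temps = np.random.randn(8760)

# Divide the record into non-overlapping 24-hour sequences.
sequences = hourly_temps.reshape(-1, 24)   # shape: (365, 24)

# Sequential 8-2-2 (months) split; months are approximated here as
# 30-day blocks, whereas the actual split uses calendar months.
days_per_month = 30
train = sequences[: 8 * days_per_month]                       # months 1-8
val   = sequences[8 * days_per_month : 10 * days_per_month]   # months 9-10
test  = sequences[10 * days_per_month : 12 * days_per_month]  # months 11-12
```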
Provide a detailed description of the following dataset: NOAA Atmospheric Temperature Dataset
NEMO Sea Surface Temperature Dataset
This dataset contains spatiotemporal sequences of sea surface temperature (SST) generated by the NEMO ocean engine. The observations correspond to 250 randomly selected data sites within a [0, 550] × [100, 650] square cropped from the area between 50° N–65° N and 75° W–10° W, covering 01-01-2016 to 12-31-2017. The data are divided into 24 sequences, each lasting 30 days (extra days in each month are truncated). Data from 2016 are used for training, and the remainder is used for validation and testing in an equal sequential split.
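A minimal sketch of this split, assuming the 24 monthly sequences are stacked into one array and that "equal sequential split" divides the 2017 sequences in half; shapes and names here are illustrative:

```python
import numpy as np

# Hypothetical stack of the 24 monthly sequences: 30 days each,
# 250 data sites (extra days per month already truncated).
sst = np.random.randn(24, 30, 250)   # (month, day, site)

# 2016 (first 12 sequences) for training; the 2017 sequences are
# divided sequentially in half for validation and testing.
train = sst[:12]
val   = sst[12:18]
test  = sst[18:]
```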
Provide a detailed description of the following dataset: NEMO Sea Surface Temperature Dataset
JaNLI
The Japanese Adversarial NLI (JaNLI) dataset is designed to require understanding of Japanese linguistic phenomena and illuminate the vulnerabilities of models. Please see the paper [Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference](https://aclanthology.org/2021.blackboxnlp-1.26.pdf) for details.
Provide a detailed description of the following dataset: JaNLI
Sachs
Sachs dataset measures the expression level of different proteins and phospholipids in human cells. It includes the simultaneous measurements of 11 phosphorylated proteins and phospholipids derived from thousands of individual primary immune system cells, subjected to both general and specific molecular interventions.
Provide a detailed description of the following dataset: Sachs
Full-Spectral Autofluorescence Lifetime Microscopic Images
- The dataset contains full-spectral autofluorescence lifetime microscopic images (FS-FLIM) acquired from unstained ex-vivo human lung tissue: 100 4D hypercubes of 256 × 256 (spatial resolution) × 32 (time bins) × 512 (spectral channels, from 500 nm to 780 nm). The dataset is associated with the papers "Deep Learning-Assisted Co-registration of Full-Spectral Autofluorescence Lifetime Microscopic Images with H&E-Stained Histology Images" (https://arxiv.org/abs/2202.07755) and "Full spectrum fluorescence lifetime imaging with 0.5 nm spectral and 50 ps temporal resolution" (https://doi.org/10.1038/s41467-021-26837-0).
- The FS-FLIM images provide transformative insights into human lung cancer through extra-dimensional information, enabling visual and precise detection of early lung cancer. With the methodology in the co-registration paper, FS-FLIM images can be registered with H&E-stained histology images, allowing characterisation of tumour and surrounding cells at a cellular level with absolute autofluorescence lifetime.
- The dataset can be used for various purposes, including signal processing for optimal lifetime reconstruction, advanced image analysis for automatic feature extraction of lung cancer, and cellular-level characterisation of lung cancer with absolute label-free autofluorescence lifetime values.
- The dataset is available on the University of Edinburgh's DataShare (https://doi.org/10.7488/ds/3099 and https://doi.org/10.7488/ds/3421).
Provide a detailed description of the following dataset: Full-Spectral Autofluorescence Lifetime Microscopic Images
XLING
The XLING BLI dataset contains bilingual dictionaries for 28 language pairs. For each language pair there are 5 dictionary files: 4 training dictionaries of varying sizes (500, 1K, 3K, and 5K translation pairs) and one test dictionary containing 2K word pairs. All results reported in the accompanying paper were obtained on the test dictionaries of the respective language pairs.
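As a hedged sketch, a dictionary file could be loaded as below, assuming the common BLI convention of one whitespace-separated translation pair per line; the file name is hypothetical and the format should be checked against the actual release:

```python
def load_bli_dictionary(path):
    """Load a bilingual dictionary as (source, target) word pairs.

    Assumes one whitespace-separated translation pair per line, a
    common convention for BLI dictionaries; verify against the
    actual XLING files before relying on this.
    """
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                pairs.append((parts[0], parts[1]))
    return pairs

# Hypothetical file name for a 5K training dictionary.
train_pairs = load_bli_dictionary("en-de.train.5k.txt")
```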
Provide a detailed description of the following dataset: XLING
Code Smells in Elixir
Dataset used in research submitted to ICPC ERA 2022
Provide a detailed description of the following dataset: Code Smells in Elixir
Nations
The Nations dataset is a small knowledge graph with 14 entities, 55 relations, and 1992 triples describing countries and their political relationships. This dataset is available for download from https://github.com/ZhenfengLei/KGDatasets.
Provide a detailed description of the following dataset: Nations
PanLex-BLI
PanLex-based bilingual lexicons for 210 language pairs
Provide a detailed description of the following dataset: PanLex-BLI
FE108
Large-scale single-object tracking dataset containing 108 sequences with a total length of 1.5 hours. FE108 provides ground-truth annotations in both the frame and event domains. The annotation frequency is up to 40 Hz for the frame domain and 240 Hz for the event domain. FE108 is the largest event-frame-based dataset for single-object tracking, and it also offers the highest annotation frequency in the event domain.
Provide a detailed description of the following dataset: FE108
IntHarmony
This newly curated synthetic dataset specifies an additional reference region to guide image harmonization. There are 118,287 training images and 959 test images. The dataset consists of objects, backgrounds, and people. IntHarmony provides the following for each data instance: the composite image, the ground truth, a foreground mask of the composite foreground, and a guide mask marking the reference region that guides harmonization. IntHarmony is built on top of the MS-COCO dataset and uses the instance masks provided in MS-COCO to simulate foreground and reference regions. First, a random instance mask is selected as the foreground region. The selected foreground region is then augmented with a wide set of meaningful augmentations focusing on luminance, contrast, and color. Another random instance mask is used as the reference guide mask. The original image is taken as the ground truth. The instance masks and the augmentations are chosen at random to make trained networks generalize better.
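A minimal sketch of this construction procedure, assuming numpy arrays for images and masks; the function name, the channel-wise gain jitter, and the random mask selection are illustrative stand-ins for the paper's wider augmentation set:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_composite(gt_image, instance_masks):
    """Simplified IntHarmony-style composite construction.

    gt_image:       float array (H, W, 3) in [0, 1] -- the ground truth.
    instance_masks: list of binary (H, W) masks (e.g., from MS-COCO).
    """
    fg_mask = instance_masks[rng.integers(len(instance_masks))]
    guide_mask = instance_masks[rng.integers(len(instance_masks))]

    # Augment only the foreground region of the ground-truth image;
    # a per-channel gain stands in for the luminance/contrast/color set.
    gain = rng.uniform(0.6, 1.4, size=(1, 1, 3))
    augmented = np.clip(gt_image * gain, 0.0, 1.0)
    m = fg_mask[..., None].astype(gt_image.dtype)
    composite = gt_image * (1.0 - m) + augmented * m

    # One instance: composite, ground truth, foreground mask, guide mask.
    return composite, gt_image, fg_mask, guide_mask
```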
Provide a detailed description of the following dataset: IntHarmony
ToxiGen
A large-scale, machine-generated dataset of 274,186 toxic and benign statements about 13 minority groups. The dataset uses a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pre-trained language model (GPT-3). Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. ToxiGen can be used to fight human-written and machine-generated toxicity.
Provide a detailed description of the following dataset: ToxiGen
PET
The dataset contains 45 documents with narrative descriptions of business processes and their annotations, covering activities, gateways, actors, and flow information. Each document is composed of three files:
- Doc_name.txt: the process description in CoNLL format.
- Doc_name.process-elements.IOB2.txt: process elements annotated with the IOB2 scheme, in CoNLL format.
- Doc_name.relations.tsv: process relations between process elements; each line is a triple (source, relation tag, target), where source and target are given in the form of an n_sent_x word range. A hedged parsing sketch follows this list.
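As an illustration of working with the relations files, the sketch below reads a Doc_name.relations.tsv into triples; the file name is a placeholder, and the exact span encoding ("n_sent_x words range") should be verified against the released files:

```python
import csv

def load_relations(path):
    """Parse a Doc_name.relations.tsv file into (source, relation tag,
    target) triples. Assumes one tab-separated triple per line; check
    the span encoding against the released files before use."""
    with open(path, encoding="utf-8") as f:
        return [tuple(row[:3]) for row in csv.reader(f, delimiter="\t")]
```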
Provide a detailed description of the following dataset: PET
PET: A new Dataset for Process Extraction from Natural Language Text
The dataset contains 45 documents with narrative descriptions of business processes and their annotations, covering activities, gateways, actors, and flow information. Each document is composed of three files:
- Doc_name.txt: the process description in CoNLL format.
- Doc_name.process-elements.IOB2.txt: process elements annotated with the IOB2 scheme, in CoNLL format.
- Doc_name.relations.tsv: process relations between process elements; each line is a triple (source, relation tag, target), where source and target are given in the form of an n_sent_x word range.
Provide a detailed description of the following dataset: PET: A new Dataset for Process Extraction from Natural Language Text
EGDB
This dataset contains transcriptions of electric guitar performances of 240 tablatures, rendered with different tones. The goal is to contribute to automatic music transcription (AMT) of guitar music, a technically challenging task. String-activity signals were captured by attaching a special hexaphonic pickup to each string of an electric guitar, and a JUCE program controlled a digital audio workstation (DAW) to automatically re-render the "Direct Input" (DI) audio recordings through different amplifiers (amps), including both low-gain and high-gain ones. A new collection pipeline was employed to reduce the effort of manual inspection. The final dataset contains six copies of 118 minutes of guitar playing, each copy associated with a different timbre. The dataset, named "EGDB", is constructed in this way to account for the diverse timbres of the electric guitar; having multiple guitar tones makes it possible to test a trained model on held-out unseen tones for generalizability.
Provide a detailed description of the following dataset: EGDB
DanceTrack
A large-scale multi-object tracking dataset for human tracking under occlusion, frequent crossovers, uniform appearance, and diverse body gestures. It is proposed to emphasize the importance of motion analysis in multi-object tracking, as opposed to the mainly appearance-matching-based paradigm.
Provide a detailed description of the following dataset: DanceTrack
ChildCIdb
A large-scale, first-of-its-kind database aimed at generating a better understanding of the way children interact with mobile devices during their development process. ChildCIdbv1 comprises data collected from 438 children, from 18 months to 8 years old, encompassing the first three development stages of Piaget's theory. Data collected spans interaction with screens using both finger and pen stylus, information regarding the previous experience of the child with mobile devices, the child’s grade level, and whether attention-deficit/hyperactivity disorder (ADHD) is present. Use cases: Child age detection based on device interaction.
Provide a detailed description of the following dataset: ChildCIdb
VidHarm
**VidHarm** is a professionally annotated dataset for detecting harmful content in video. It includes 3,589 annotated video clips drawn from a variety of film trailers. In contrast to previous approaches, which mostly use metadata from long sequences, it uses the raw video and focuses on short clips.
Provide a detailed description of the following dataset: VidHarm
Spatial Commonsense Graph Dataset
Dataset built from partial reconstructions of real-world indoor scenes using RGB-D sequences from ScanNet, aimed at estimating the unknown position of an object (e.g. where is the bag?) given a partial 3D scan of a scene. The dataset mostly consists of bedrooms, bathrooms, and living rooms. Some room types like closet and gym only have a few instances.
Provide a detailed description of the following dataset: Spatial Commonsense Graph Dataset
HOPE-Image
The NVIDIA HOPE datasets consist of RGBD images and video sequences with labeled 6-DoF poses for 28 toy grocery objects. The toy grocery objects are readily available for purchase and have ideal size and weight for robotic manipulation. 3D textured meshes for generating synthetic training data are provided. The HOPE-Image dataset shows the objects in 50 scenes from 10 household/office environments, and contains 188 test images taken in 8 environments, with a total of 40 scenes (unique camera and object poses). Up to 5 lighting variations are captured for each scene, including backlighting and angled direct lighting with cast shadows. Scenes are cluttered with varying levels of occlusion. An additional 50 validation images are included from 2 environments in 10 scene arrangements. Within each scene, up to 5 lighting variations are captured with the same camera and object poses. For example, the captures in `valid/scene_0000/*.json` all depict the same camera pose and arrangement of objects, but each individual capture (0000.json, 0001.json, ...) has a different lighting condition. For this reason, each image should be treated independently for purposes of pose prediction. The most favorable lighting condition for each scene is found in `image 0000.json`. Images were captured using a RealSense D415 RGBD camera. Systematic errors were observed in the depth values relative to the estimated distance of a calibration grid. To correct for this, depth frames are scaled by a factor of 0.98042517 before registering to RGB. Annotations were made manually using these corrected RGBD frames. NOTE: Only validation set annotations are included. Test annotations are managed by the BOP challenge.
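A minimal sketch of the depth correction described above; the function name and array convention are illustrative, and only the 0.98042517 factor comes from the dataset notes:

```python
import numpy as np

# Correction factor reported for the RealSense D415 depth frames.
DEPTH_SCALE = 0.98042517

def correct_depth(raw_depth):
    """Scale a raw (H, W) depth frame before registering it to RGB,
    as described in the dataset documentation."""
    return raw_depth.astype(np.float32) * DEPTH_SCALE
```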
Provide a detailed description of the following dataset: HOPE-Image
HOPE-Video
The HOPE-Video dataset contains 10 video sequences (2038 frames) with 5-20 objects in a tabletop scene, captured by a robot-arm-mounted RealSense D415 RGBD camera. In each sequence, the camera is moved to capture multiple views of a set of objects in the robotic workspace. First, COLMAP was applied to refine the camera poses (keyframes at 6 fps) provided by forward kinematics and RGB calibration from the RealSense to Baxter's wrist camera. A 3D dense point cloud was then generated via CascadeStereo (included for each sequence in 'scene.ply'). Ground-truth poses for the HOPE object models in the world coordinate system were annotated manually using the CascadeStereo point clouds. The following are provided for each frame:
- Camera intrinsics/extrinsics
- RGB images of 640x480
- Depth images of 640x480
- 3D scene reconstruction from CascadeStereo
- Object pose annotations in the camera frame

Objects consist of a set of 28 toy grocery items selected for compatibility with robot manipulation and widespread availability. Textured models were generated by an EinScan-SE 3D Scanner, units were converted to centimeters, and the centers/rotations of the meshes were aligned to a canonical pose.
Provide a detailed description of the following dataset: HOPE-Video
Heritage Health Prize
Heritage Provider Network is providing Competition Entrants with deidentified member data collected during a forty-eight month period, allocated among three data sets (the "Data Sets"). Competition Entrants will use the Data Sets to develop and test their algorithms for accurately predicting the number of days that members will spend in a hospital (inpatient or emergency room visit) during the 12-month period following the Data Set cut-off date. HHP_release3.zip contains the latest files, so you can ignore HHP_release2.zip. SampleEntry.CSV shows how an entry should look.

Data Sets will be released to Entrants after registration on the Website according to the following schedule:
- April 4, 2011: Claims Table - Y1 and DaysInHospital Table - Y2
- May 4, 2011: all other Data Sets except the Labs Table and Rx Table
- June 4, 2011: Labs Table and Rx Table

Entrants are welcome to use other data to develop and test their algorithms and entries until 11:59:59 UTC on April 4, 2012, provided the data are (i) freely available to all other Entrants and (ii) published (or a link provided) in the External Data portion of the Forum within one (1) week of an entry submission using the other data. Entrants may not use any data other than the Data Sets after 11:59:59 UTC on April 4, 2012 without prior approval.

Tables. Each of the Data Sets will be comprised of tables as follows:

a. Members Table, which will include:
   i. MemberID (a unique member ID)
   ii. AgeAtFirstClaim (member's age when the first claim was made in the Data Set period)
   iii. Sex

b. Claims Table, which will include:
   i. MemberID
   ii. ProviderID (the ID of the doctor or specialist providing the service)
   iii. Vendor (the company that issues the bill)
   iv. PCP (member's primary care physician)
   v. Year (the year of the claim: Y1, Y2, Y3)
   vi. Specialty
   vii. PlaceSvc (place where the member was treated)
   viii. PayDelay (the delay between the claim and the day the claim was paid)
   ix. LengthOfStay
   x. DSFS (days since first service that year)
   xi. PrimaryConditionGroup (a generalization of the primary diagnosis codes)
   xii. CharlsonIndex (a generalization of the diagnosis codes in the form of a categorized comorbidity score)
   xiii. ProcedureGroup (a generalization of the CPT code or treatment code)
   xiv. SupLOS (a flag indicating that LengthOfStay is null because it has been suppressed)

c. Labs Table, which will contain certain details of lab tests provided to members.

d. Rx Table, which will contain certain details of prescriptions filled by members.

e. DaysInHospital Tables - Y2 and Y3, which will contain the number of days of hospitalization for each eligible member during Y2 and Y3 and will include:
   i. MemberID
   ii. ClaimsTruncated (a flag for members who have had claims suppressed; if the flag is 1 for member xxx in DaysInHospital_Y2, some claims for member xxx will have been suppressed in Y1)
   iii. DaysInHospital (the number of days in hospital in Y2 or Y3, as applicable)
These two tables are intended for use by Entrants to train and validate their algorithms. The DaysInHospital tables are based on the Claims Table, with admissions in Y2 or Y3 as applicable. As a privacy measure, any member who spent more than two weeks in hospital is capped and treated as though they spent 15 days in hospital.

f. Target, which is "DaysInHospital_Y4" but does not include DaysInHospital values. DaysInHospital data for Y4 are to be filled in by Entrants to produce entries. See SampleEntry.csv for an example.
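As an illustrative, hedged sketch of assembling a Year-2 training frame (file names are assumptions; column names follow the table description above but should be checked against the actual HHP_release3.zip contents):

```python
import pandas as pd

# Hypothetical file names; adjust to the actual release contents.
claims = pd.read_csv("Claims.csv")
days_y2 = pd.read_csv("DaysInHospital_Y2.csv")

# Simple per-member feature from Year-1 claims: the claim count.
y1 = claims[claims["Year"] == "Y1"]
features = y1.groupby("MemberID").size().rename("NumClaimsY1").reset_index()

# Join with the Year-2 hospitalization target. Note that stays longer
# than two weeks are already capped at 15 days in the Data Sets.
train = days_y2.merge(features, on="MemberID", how="left").fillna(0)
print(train[["MemberID", "DaysInHospital", "NumClaimsY1"]].head())
```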
Provide a detailed description of the following dataset: Heritage Health Prize