Columns: dataset_name (string, 2 to 128 chars), description (string, 1 to 9.7k chars), prompt (string, 59 to 185 chars)
Parkour-dataset
The LAAS Parkour dataset contains 28 RGB videos capturing human subjects performing four typical parkour techniques: safety-vault, kong vault, pull-up and muscle-up. These are highly dynamic motions with rich contact interactions with the environment. The dataset is provided with the ground truth 3D positions of 16 pre-defined human joints, together with the contact forces at the human subjects' hand and foot joints exerted by the environment.
Provide a detailed description of the following dataset: Parkour-dataset
REALY
The REALY benchmark introduces a region-aware evaluation pipeline to measure the fine-grained normalized mean square error (NMSE) of 3D face reconstruction methods on under-controlled image sets. Given the mesh reconstructed from a 2D image in REALY by a specific method, the benchmark computes the similarity between the ground-truth scan and the predicted mesh on four regions (nose, mouth, forehead, cheek).
Provide a detailed description of the following dataset: REALY
Locount
Locount is a retail object detection and counting dataset with rich annotations collected in retail stores. It consists of 50,394 images with more than 1.9 million object instances in 140 categories.
Provide a detailed description of the following dataset: Locount
Replication Data for: Do uHear? Validation of uHear App for Preliminary Screening of Hearing Ability in Soundscape Studies
Audiogram data of 163 participants, obtained with both a "gold standard" audiometer and the uHear iOS application.
Provide a detailed description of the following dataset: Replication Data for: Do uHear? Validation of uHear App for Preliminary Screening of Hearing Ability in Soundscape Studies
VisRecall
Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. We propose a question-answering paradigm to study visualisation recallability and present VisRecall, a novel dataset consisting of 200 information visualisations that are annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types, which relate to titles, filtering information, finding extrema, retrieving values, and understanding visualisations. It aims to make fundamental contributions towards a new generation of methods to assist designers in optimising information visualisations. This dataset contains the stimuli and collected participant data of VisRecall. The structure of the dataset is described in the README file. If you are interested in the code accompanying the publication, please refer to the code repository in Metadata for Research Software.
Provide a detailed description of the following dataset: VisRecall
Phrase-in-Context
Phrase in Context is a curated benchmark for phrase understanding and semantic search, consisting of three tasks of increasing difficulty: Phrase Similarity (PS), Phrase Retrieval (PR) and Phrase Sense Disambiguation (PSD). The datasets are annotated by 13 linguistic experts on Upwork and verified by two groups: ~1000 AMT crowdworkers and another set of 5 linguistic experts. The PiC benchmark is distributed under CC BY-NC 4.0.
Provide a detailed description of the following dataset: Phrase-in-Context
PoserNet ECCV 2022 data
This data is derived from the 7Scenes dataset. It contains graphs used for training PoserNet and for evaluating its performance.
Provide a detailed description of the following dataset: PoserNet ECCV 2022 data
RSBlur
The RSBlur dataset provides pairs of real and synthetic blurred images with ground truth sharp images. The dataset enables the evaluation of deblurring methods and blur synthesis methods on real-world blurred images. Training, validation, and test sets consist of 8,878, 1,120, and 3,360 blurred images, respectively.
Provide a detailed description of the following dataset: RSBlur
UHDM
The first ultra-high-definition image demoireing dataset, consisting of 4,500 4K resolution training pairs and 500 standard 4K resolution validation pairs.
Provide a detailed description of the following dataset: UHDM
ELEVATER
The ELEVATER benchmark is a collection of resources for training, evaluating, and analyzing language-image models on image classification and object detection. ELEVATER consists of:
- Benchmark: A benchmark suite that consists of 20 image classification datasets and 35 object detection datasets, augmented with external knowledge.
- Toolkit: An automatic hyper-parameter tuning toolkit and strong language-augmented, efficient model adaptation methods.
- Baseline: Pre-trained language-free and language-augmented visual models.
- Knowledge: A platform to study the benefit of external knowledge for vision problems.
- Evaluation Metrics: Sample efficiency (zero-, few-, and full-shot) and parameter efficiency.
- Leaderboard: A public leaderboard to track performance on the benchmark.

The ultimate goal of ELEVATER is to drive research in the development of language-image models to tackle core computer vision problems in the wild.
Provide a detailed description of the following dataset: ELEVATER
Oracle-MNIST
We introduce the Oracle-MNIST dataset, comprising 28x28 grayscale images of 30,222 ancient characters from 10 categories, for benchmarking pattern classification, with particular challenges from image noise and distortion. The training set consists of 27,222 images, and the test set contains 300 images per class. Oracle-MNIST shares the same data format with the original MNIST dataset, allowing for direct compatibility with all existing classifiers and systems, but it constitutes a more challenging classification task than MNIST. The images of ancient characters suffer from 1) extremely serious and unique noise caused by three thousand years of burial and aging and 2) dramatically varied writing styles of ancient Chinese writers, which make them realistic for machine learning research. The dataset is freely available at https://github.com/wm-bupt/oracle-mnist.
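Because Oracle-MNIST follows the MNIST IDX format, it can be read with a few lines of NumPy. A minimal sketch, assuming the files follow the usual MNIST naming convention (the file names below are hypothetical; check the repository for the actual ones):
```
import gzip
import numpy as np

def load_idx_images(path):
    """Read a gzip-compressed MNIST-style IDX image file."""
    with gzip.open(path, "rb") as f:
        data = f.read()
    # Header: magic number, image count, rows, cols as big-endian uint32
    n, rows, cols = np.frombuffer(data, dtype=">u4", count=4)[1:4]
    return np.frombuffer(data, dtype=np.uint8, offset=16).reshape(int(n), int(rows), int(cols))

def load_idx_labels(path):
    """Read a gzip-compressed MNIST-style IDX label file."""
    with gzip.open(path, "rb") as f:
        data = f.read()
    return np.frombuffer(data, dtype=np.uint8, offset=8)

# Hypothetical file names, following the MNIST convention
train_x = load_idx_images("train-images-idx3-ubyte.gz")
train_y = load_idx_labels("train-labels-idx1-ubyte.gz")
print(train_x.shape, train_y.shape)  # expected: (27222, 28, 28) (27222,)
```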
Provide a detailed description of the following dataset: Oracle-MNIST
HelixNet
Large-scale and open-access LiDAR dataset intended for the evaluation of real-time semantic segmentation algorithms. In contrast to other large-scale datasets, HelixNet includes fine-grained data about the sensor's rotation and position, as well as the points' release time.
Provide a detailed description of the following dataset: HelixNet
MNIST Multiview Datasets
MNIST is a publicly available dataset consisting of 70,000 images of handwritten digits distributed over ten classes. We generated 2 four-view datasets where each view is a vector of R<sup>14 x 14</sup>:
* MNIST<sub>1</sub>: generated by considering the 4 quarters of the image as 4 views (see the sketch after the references below).
* MNIST<sub>2</sub>: generated by considering 4 overlapping views around the centre of the image; this dataset brings redundancy between the views.

Related Papers:
```
Goyal, Anil, Emilie Morvant, Pascal Germain, and Massih-Reza Amini. "Multiview Boosting by Controlling the Diversity and the Accuracy of View-specific Voters." Neurocomputing, 358, 2019, pp. 81-92.
Link to the ArXiv version: https://arxiv.org/abs/1808.05784
Published Version: https://doi.org/10.1016/j.neucom.2019.04.072
```
```
Goyal, Anil, Emilie Morvant, and Massih-Reza Amini. "Multiview Learning of Weighted Majority Vote by Bregman Divergence Minimization." In International Symposium on Intelligent Data Analysis, pp. 124-136. Springer, Cham, 2018.
Link to the ArXiv version: https://arxiv.org/abs/1805.10212
Published Version: https://doi.org/10.1007/978-3-030-01768-2_11
```
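A minimal NumPy sketch of how the MNIST<sub>1</sub> quarter views can be reproduced from a 28x28 digit; the overlapping MNIST<sub>2</sub> windows are sketched with an assumed offset, since the exact crop positions are not stated here. This is an illustration, not the authors' generation script:
```
import numpy as np

def quarter_views(image):
    """Split a 28x28 digit into the four 14x14 quarter views (MNIST_1 style)."""
    assert image.shape == (28, 28)
    return [image[:14, :14], image[:14, 14:], image[14:, :14], image[14:, 14:]]

def overlapping_views(image, size=14, offset=4):
    """Four overlapping 14x14 windows around the image centre (MNIST_2 style; offset is an assumption)."""
    c = 14  # image centre
    starts = [c - size + offset, c - offset]
    return [image[t:t + size, l:l + size] for t in starts for l in starts]

digit = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)  # stand-in digit
views = [v.reshape(-1) for v in quarter_views(digit)]             # each view is a 196-d vector
print([v.shape for v in views])
```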
Provide a detailed description of the following dataset: MNIST Multiview Datasets
Long Range Graph Benchmark (LRGB)
The Long Range Graph Benchmark (LRGB) is a collection of 5 graph learning datasets that arguably require long-range reasoning to achieve strong performance on a given task. The 5 datasets in this benchmark can be used to prototype new models that can capture long-range dependencies in graphs (a loading sketch follows the table).

| Dataset | Domain | Task |
|---|---|---|
| PascalVOC-SP | Computer Vision | Node Classification |
| COCO-SP | Computer Vision | Node Classification |
| PCQM-Contact | Quantum Chemistry | Link Prediction |
| Peptides-func | Chemistry | Graph Classification |
| Peptides-struct | Chemistry | Graph Regression |
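A minimal loading sketch, assuming a recent PyTorch Geometric release that ships an `LRGBDataset` loader for these five datasets:
```
# Assumes torch_geometric >= 2.2 with the LRGBDataset class available
from torch_geometric.datasets import LRGBDataset
from torch_geometric.loader import DataLoader

# name is one of: 'PascalVOC-SP', 'COCO-SP', 'PCQM-Contact', 'Peptides-func', 'Peptides-struct'
train_set = LRGBDataset(root="data/lrgb", name="Peptides-func", split="train")
loader = DataLoader(train_set, batch_size=32, shuffle=True)

batch = next(iter(loader))
print(batch)  # batched graphs with node features, edge_index, and graph-level labels
```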
Provide a detailed description of the following dataset: Long Range Graph Benchmark (LRGB)
KonIQ-10k
**KonIQ-10k** is a large-scale IQA dataset consisting of 10,073 quality scored images. This is the first in-the-wild database aiming for ecological validity, with regard to the authenticity of distortions, the diversity of content, and quality-related indicators. Through the use of crowdsourcing, we obtained 1.2 million reliable quality ratings from 1,459 crowd workers, paving the way for more general IQA models.
Provide a detailed description of the following dataset: KonIQ-10k
KADID-10k
Konstanz artificially distorted image quality database (KADID-10k) contains 81 pristine images, each degraded by 25 distortions in 5 levels.
Provide a detailed description of the following dataset: KADID-10k
HELMET
The HELMET dataset contains 910 videoclips of motorcycle traffic, recorded at 12 observation sites in Myanmar in 2016. Each videoclip has a duration of 10 seconds, recorded with a framerate of 10fps and a resolution of 1920x1080. The dataset contains 10,006 individual motorcycles, surpassing the number of motorcycles available in existing datasets. Each motorcycle in the 91,000 annotated frames of the dataset is annotated with a bounding box, and rider number per motorcycle as well as position specific helmet use data is available.
Provide a detailed description of the following dataset: HELMET
MAVERICS
**Manually vAlidated Vq2a Examples fRom Image/Caption datasetS** (**MAVERICS**) is a suite of test-only visual question answering datasets.
Provide a detailed description of the following dataset: MAVERICS
Urban Hyperspectral Image
Urban is one of the most widely used hyperspectral images in hyperspectral unmixing studies. It contains 307x307 pixels, each corresponding to a 2x2 m² area. The image has 210 wavelengths ranging from 400 nm to 2500 nm, resulting in a spectral resolution of 10 nm. After channels 1-4, 76, 87, 101-111, 136-153 and 198-210 are removed (due to dense water vapor and atmospheric effects), 162 channels remain; this is a common preprocessing step for hyperspectral unmixing analyses. There are three versions of ground truth, containing 4, 5 and 6 endmembers respectively.

Reference: Linda S. Kalman and Edward M. Bassett III, "Classification and material identification in an urban environment using HYDICE hyperspectral data", Proc. SPIE 3118, Imaging Spectrometry III, 31 October 1997; https://doi.org/10.1117/12.283843

Hosted at:
* https://rslab.ut.ac.ir/data
* http://lesun.weebly.com/hyperspectral-data-set.html
* https://erdc-library.erdc.dren.mil/jspui/handle/11681/2925
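The band-removal step can be reproduced with a few lines of NumPy; the index arithmetic below confirms that 162 of the 210 bands remain (the data cube here is a stand-in, not the actual Urban file):
```
import numpy as np

# 1-based band indices removed due to water vapour and atmospheric effects
removed = set(
    list(range(1, 5)) + [76, 87] + list(range(101, 112))
    + list(range(136, 154)) + list(range(198, 211))
)
kept = [b for b in range(1, 211) if b not in removed]
print(len(kept))  # 162

# Given a cube of shape (307, 307, 210), keep only the retained bands
cube = np.zeros((307, 307, 210), dtype=np.float32)  # stand-in for the actual Urban cube
cube_162 = cube[:, :, [b - 1 for b in kept]]         # convert to 0-based indices
print(cube_162.shape)  # (307, 307, 162)
```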
Provide a detailed description of the following dataset: Urban Hyperspectral Image
EgoProceL
EgoProceL is a large-scale dataset for procedure learning. It consists of 62 hours of egocentric videos recorded by 130 subjects performing 16 tasks for procedure learning. EgoProceL contains videos and key-step annotations for multiple tasks from CMU-MMAC, EGTEA Gaze+, and individual tasks like toy-bike assembly, tent assembly, PC assembly, and PC disassembly. EgoProceL overcomes the limitations of third-person videos: in third-person footage the manipulated object appears small and is often occluded by the actor, leading to significant errors. In contrast, videos obtained from first-person (egocentric) wearable cameras provide an unobstructed and clear view of the action.
Provide a detailed description of the following dataset: EgoProceL
PSG Dataset
The PSG dataset has 48,749 images with 133 object classes (80 thing and 53 stuff classes) and 56 predicate classes. It annotates inter-segment relations based on COCO panoptic segmentation.
Provide a detailed description of the following dataset: PSG Dataset
DEAP City Dataset
Data for the paper "Deciphering Environmental Air Pollution with Large Scale City Data".

## Main Dataset
`city_pollution_data.csv`

Relevant columns (see the loading sketch after the citation):
* `Date`: Date of the sample
* `City`: City of the sample
* `X_median`: Median value of the pollutant/meteorological feature X for the day
* `mil_miles`: Total vehicle travel distance for the sample
* `pp_feat`: Calculated feature for the influence of neighboring power plants
* `Population Staying at Home`: Used as a measure of domestic emissions

**Pollutants**: `PM2.5`, `PM10`, `NO2`, `O3`, `CO`, `SO2`
**Meteorological features**: `Temperature`, `Pressure`, `Humidity`, `Dew`, `Wind Speed`, `Wind Gust`

## Power Plant Generation and Location Dataset [Extra]
`pp_gen_data.csv`

Relevant columns:
* `Month`: Month of the data
* `Netgen`: Net generation for that month

If you find the data or code useful in your work, please cite:
```
@inproceedings{ijcai2022p698,
  title     = {Deciphering Environmental Air Pollution with Large Scale City Data},
  author    = {Bhattacharyya, Mayukh and Nag, Sayan and Ghosh, Udita},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  year      = {2022},
}
```
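A minimal pandas sketch for loading the main file and pulling out the pollutant medians for one city. The exact pollutant column names (e.g. `PM2.5_median`) are inferred from the `X_median` pattern above, and the city value is a placeholder:
```
import pandas as pd

df = pd.read_csv("city_pollution_data.csv", parse_dates=["Date"])

# Column names inferred from the X_median naming pattern described above
pollutant_cols = [f"{p}_median" for p in ["PM2.5", "PM10", "NO2", "O3", "CO", "SO2"]]

# "ExampleCity" is a placeholder; substitute any city present in the data
city_df = df[df["City"] == "ExampleCity"].sort_values("Date")
cols = ["Date", "mil_miles", "pp_feat", "Population Staying at Home"] + pollutant_cols
print(city_df[cols].head())
```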
Provide a detailed description of the following dataset: DEAP City Dataset
Breast Lesion Detection in Ultrasound Videos (CVA-Net)
The **breast lesion detection in ultrasound videos** dataset accompanies a clip-level and video-level feature aggregated network (CVA-Net) and consists of 188 ultrasound videos, of which 113 are labeled malignant and 75 benign. Overall these comprise 25,272 ultrasound images, with the number of images per video varying from 28 to 413. 150 videos were used for training and 38 for testing. The primary intended use case is computer-aided breast cancer diagnosis, i.e., systems that assist radiologists. Here are more details summarising the approach:
* A novel network: CVA-Net, a new state-of-the-art clip-level and video-level feature aggregated network, aggregates clip-level temporal features and video-level lesion classification features and fuses them into a prediction classifier. It outperformed existing methods, which mainly focused on 2D images or on fusing with unlabeled videos.
* The need for increased accuracy motivated the work, given the detection challenges posed by blurry breast lesion boundaries, inhomogeneous distributions, and changeable breast lesion sizes and positions in dynamic video.
* Each video contains a complete scan of the tumor, from where it becomes visible to where it is no longer visible, as well as its largest section; all videos were acquired with LOGIQ-E9 and PHILIPS TIS L9-3 ultrasound machines.
* Two pathologists with 8 years of experience were invited to manually annotate the breast lesion rectangles inside each frame and give the corresponding classification.

Second source in addition to the separate homepage URL below: [https://github.com/jhl-Det/CVA-Net/tree/main/datasets](https://github.com/jhl-Det/CVA-Net/tree/main/datasets)
Provide a detailed description of the following dataset: Breast Lesion Detection in Ultrasound Videos (CVA-Net)
Western Mediterranean Wetlands Birds - Version 2
The **Western Mediterranean Wetlands Bird** dataset is a collection of birds' vocalizations of different lengths. It primarily consists of 5,795 labelled audio clips derived from 1,098 recordings, totalling 201.6 minutes (12,096 seconds), along with corresponding annotations. It also comes with a Mel-spectrogram version of the data, where each image represents a 1-second window of the original audio, resulting in a total of 17,536 spectrographic images stored in matrix form within .npy files (see the sketch after the species list). These are the species covered:
* Acrocephalus arundinaceus
* Acrocephalus melanopogon
* Acrocephalus scirpaceus
* Alcedo atthis
* Anas strepera
* Anas platyrhynchos
* Ardea purpurea
* Botaurus stellaris
* Charadrius alexandrinus
* Ciconia ciconia
* Circus aeruginosus
* Coracias garrulus
* Dendrocopos minor
* Fulica atra
* Gallinula chloropus
* Himantopus himantopus
* Ixobrychus minutus
* Motacilla flava
* Porphyrio porphyrio
* Tachybaptus ruficollis

*It can be considered a subset retrieved from the Xeno-Canto citizen science portal, a database containing 716,298 recordings of bird sounds at the time of writing.*
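The spectrographic version described above can be approximated with librosa; the sample rate and mel parameters below are assumptions, since the original preprocessing settings are not given here:
```
import numpy as np
import librosa

def one_second_mel_windows(audio_path, sr=22050, n_mels=128):
    """Cut a recording into non-overlapping 1-second windows and compute a mel spectrogram per window.
    sr and n_mels are assumptions, not the authors' settings."""
    y, sr = librosa.load(audio_path, sr=sr)
    win = sr  # one second of samples
    specs = []
    for start in range(0, len(y) - win + 1, win):
        mel = librosa.feature.melspectrogram(y=y[start:start + win], sr=sr, n_mels=n_mels)
        specs.append(librosa.power_to_db(mel, ref=np.max))
    return np.stack(specs)

# specs = one_second_mel_windows("some_clip.wav")   # path is a placeholder
# np.save("some_clip_mels.npy", specs)              # matching the .npy storage described above
```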
Provide a detailed description of the following dataset: Western Mediterranean Wetlands Birds - Version 2
UMass Citation Field Extraction
The **University of Massachusetts Amherst citation field extraction** dataset contains labels and segments for citations extracted from articles found on arXiv. Compared to previous standard datasets for citation field extraction, it has 4 times more data and provides detailed nested labels rather than coarse-grained flat labels, and it draws from 4 academic disciplines rather than 1, namely computer science, mathematics, physics, and quantitative biology. It consists of 6,000 unlabeled citation strings, of which 1,829 were labeled as of its last publication (2,476 according to the more recent paper 'Using BibTeX to Automatically Generate Labeled Data for Citation Field Extraction' by Dung Thai, Zhiyang Xu, Nicholas Monath, Boris Veytsman, and Andrew McCallum). Each citation string is labeled hierarchically, separating coarse-grained and fine-grained segments.

*Dataset introduced in the following paper:* Sam Anzaroot and Andrew McCallum. A new dataset for fine-grained citation field extraction. In ICML Workshop on Peer Reviewing and Publishing Models (PEER), 2013.
Provide a detailed description of the following dataset: UMass Citation Field Extraction
27 Class ASL Sign Language
This **27 Class American Sign Language** dataset consists of photographs collected from 173 individuals who were asked to display gestures with their hands. The photographs were taken with a camera at a frame size of 3024 by 3024 pixels in RGB color space. 130 photos were taken of each person, 5 per class (minor variations in per-class sample sizes can be observed): 26 classes containing phrases, letters, and numbers, plus a 27th null class of 314 images for control purposes. The main motivation was to contribute to technologies that can reduce the communication challenges faced by speech-impaired people, providing new data with the diversity and sample size necessary for intelligent computer vision studies and sign language applications.
Provide a detailed description of the following dataset: 27 Class ASL Sign Language
AIH
AIH is a dataset created for hand de-occlusion and removal.
Provide a detailed description of the following dataset: AIH
Genocide Transcript Corpus (GTC): Topic-Based Paragraph Classification in Genocide-Related Court Transcripts
The **Topic-Based Paragraph Classification in Genocide-Related Court Transcripts (GTC)** dataset is the first reference corpus annotated with samples from genocide tribunals in different international criminal courts. It is made up of witness statements about violence experienced. The material consists of 1,475 text passages, with about 40 to 120 pages per transcript, covering 3 tribunals: the Extraordinary Chambers in the Courts of Cambodia (ECCC) - 438 pages, the International Criminal Tribunal for Rwanda (ICTR) - 566 pages, and the International Criminal Tribunal for the Former Yugoslavia (ICTY) - 416 pages. Since no datasets containing genocide court transcripts, nor other forms of pre-structured or annotated text data in this field of research, had been published, the aim was to address this gap by providing a systematically annotated dataset. Potential use cases include genocide-related inquiry by those who need to better access, explore, and search through extensive documentation on these topics, including researchers, lawyers, and other practitioners. Broadly, its stated aim is to serve 3 purposes:
* (1) to provide a first reference corpus for the community
* (2) to establish benchmark performances (using state-of-the-art transformer-based approaches) for the new classification task of identifying violence-related witness statements at the paragraph level
* (3) to explore first steps towards transfer learning within the domain
Provide a detailed description of the following dataset: Genocide Transcript Corpus (GTC): Topic-Based Paragraph Classification in Genocide-Related Court Transcripts
Visual Knowledge Tracing
**Visual Knowledge Tracing** contains images and human response data for a visual classification task on three datasets. The datasets are intended to serve as a benchmark for visual knowledge tracing algorithms.
Provide a detailed description of the following dataset: Visual Knowledge Tracing
Reflective essays on CS TA experience
Teaching assistants (TAs) are heavily used in computer science courses as a way to handle high enrollment while still being able to offer students individual tutoring and detailed assessments. This data is the result of a multi-institutional, multi-national study of the challenges that TAs in computer science face. 180 reflective essays written by TAs from three institutions across Europe were analyzed and coded. The thematic analysis resulted in five main challenges: becoming a professional TA, student-focused challenges, assessment, defining and using best practice, and threats to best practice. In addition, these challenges were all identified within the essays from all three institutions, indicating that they are not particularly context-dependent. (2021-04-11)
Provide a detailed description of the following dataset: Reflective essays on CS TA experience
12 Scenes
Dataset containing RGB-D data of 4 large scenes, comprising a total of 12 rooms, for the purpose of RGB and RGB-D camera relocalization. The RGB-D data was captured using a Structure.io depth sensor coupled with an iPad color camera. Each room was scanned multiple times, with the multiple sequences run through a global bundle adjustment in order to obtain globally aligned camera poses through all sequences of the same scene.
Provide a detailed description of the following dataset: 12 Scenes
CelebV-HQ
**CelebV-HQ** is a large-scale video facial attributes dataset with annotations. CelebV-HQ contains 35,666 video clips involving 15,653 identities and 83 manually labeled facial attributes covering appearance, action, and emotion. GitHub repository: [https://github.com/celebv-hq/celebv-hq]( https://github.com/celebv-hq/celebv-hq)
Provide a detailed description of the following dataset: CelebV-HQ
S2B
A suite of OpenAI Gym-compatible multi-agent reinforcement learning environments centered around meta-referential games, designed to benchmark behavioural traits pertaining to symbolic behaviours, as described in [Santoro et al., 2021, "Symbolic Behaviours in Artificial Intelligence"](https://arxiv.org/abs/2102.03406), with a primary focus on the following behavioural traits:
* receptive,
* constructive,
* malleable, and
* separable.
Provide a detailed description of the following dataset: S2B
ScanNet200
The ScanNet200 benchmark studies 200-class 3D semantic segmentation, an order of magnitude more class categories than previous 3D scene understanding benchmarks. The scene data is identical to ScanNet, but a larger vocabulary is parsed for semantic and instance segmentation.
Provide a detailed description of the following dataset: ScanNet200
Real-time Election Results: Portugal 2019 Data Set
Data Set Information: A data set describing the evolution of results in the Portuguese Parliamentary Elections of October 6th, 2019. The data spans a time interval of 4 hours and 25 minutes, in intervals of 5 minutes, concerning the results of the 27 parties involved in the electoral event. The data set is tailored for predictive modelling tasks, mostly focused on numerical forecasting. Regardless, it allows for other tasks such as ordinal regression or learning to rank. Additional (and updated) information may be found in [Web Link]:
- Raw data sets
- R code to build the final data set
- Basic operations to build predictive modelling tasks using this data set

Attribute Information:
- TimeElapsed (numeric): Time (minutes) passed since the first data acquisition
- time (timestamp): Date and time of the data acquisition
- territoryName (string): Short name of the location (district or nation-wide)
- totalMandates (numeric): MPs elected at the moment
- availableMandates (numeric): MPs left to elect at the moment
- numParishes (numeric): Total number of parishes in this location
- numParishesApproved (numeric): Number of parishes approved in this location
- blankVotes (numeric): Number of blank votes
- blankVotesPercentage (numeric): Percentage of blank votes
- nullVotes (numeric): Number of null votes
- nullVotesPercentage (numeric): Percentage of null votes
- votersPercentage (numeric): Percentage of voters
- subscribedVoters (numeric): Number of subscribed voters in the location
- totalVoters (numeric): Percentage of blank votes
- pre.blankVotes (numeric): Number of blank votes (previous election)
- pre.blankVotesPercentage (numeric): Percentage of blank votes (previous election)
- pre.nullVotes (numeric): Number of null votes (previous election)
- pre.nullVotesPercentage (numeric): Percentage of null votes (previous election)
- pre.votersPercentage (numeric): Percentage of voters (previous election)
- pre.subscribedVoters (numeric): Number of subscribed voters in the location (previous election)
- pre.totalVoters (numeric): Percentage of blank votes (previous election)
- Party (string): Political party
- Mandates (numeric): MPs elected at the moment for the party in a given district
- Percentage (numeric): Percentage of votes for a party
- validVotesPercentage (numeric): Percentage of valid votes for a party
- Votes (numeric): Percentage of party votes
- Hondt (numeric): Number of MPs according to the distribution of votes now
- FinalMandates (numeric): Target: final number of elected MPs at the district/national level

Relevant Papers: Nuno Moniz (2019) Real-time 2019 Portuguese Parliament Election Results Dataset. arXiv. Code + Data in [Web Link]

Citation Request: Nuno Moniz (2019) Real-time 2019 Portuguese Parliament Election Results Dataset. arXiv
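A minimal sketch of setting up the numerical forecasting task described above with pandas and scikit-learn. The file name is a placeholder and the feature selection is an illustration, not the paper's experimental protocol:
```
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# File name is a placeholder; the raw data and R build scripts are linked from the dataset page
df = pd.read_csv("portugal_elections_2019.csv")

target = "FinalMandates"
features = ["TimeElapsed", "totalMandates", "availableMandates",
            "numParishesApproved", "votersPercentage", "Percentage", "Hondt"]

X_train, X_test, y_train, y_test = train_test_split(df[features], df[target], random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out rows:", model.score(X_test, y_test))
```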
Provide a detailed description of the following dataset: Real-time Election Results: Portugal 2019 Data Set
YouTube-Hands
**YouTube-Hands** includes 240 videos which are annotated with hand trajectories.
Provide a detailed description of the following dataset: YouTube-Hands
MSU NR VQA Database
The dataset was created for the video quality assessment problem. It was formed from 36 clips from Vimeo, which were selected from 18,000+ open-source clips with high bitrate (CC BY or CC0 license). The clips include videos recorded by both professionals and amateurs. Almost half of the videos contain scene changes and high dynamism. Moreover, the synthetic-to-natural lighting ratio is approximately 1 to 3.
* Content type: nature, sport, humans close up, gameplays, music videos, water stream or steam, CGI
* Effects and distortions: shaking, slow-motion, grain/noise, too dark/bright regions, macro shooting, captions (text), extraneous objects on the camera lens or just close to it
* Resolution: 1920x1080 as the most popular modern video resolution (more in the future)
* Format: yuv420p
* FPS: 24, 25, 30, 39, 50, 60
* Video duration: mainly 10 seconds

Such content diversity helps simulate near-realistic conditions. The videos collected for the benchmark dataset were chosen by clustering in terms of space-time complexity to obtain a representative distribution. For compression we used 40 codecs of 10 compression standards (H.264, AV1, H.265, VVC, etc.). Each video was compressed with 3 target bitrates (1,000 Kbps, 2,000 Kbps, and 4,000 Kbps) and different real-life encoding modes: constant quality (CRF) and variable bitrate (VBR). The choice of bitrate range simplifies the subjective comparison procedure, since video quality is more difficult to distinguish visually at higher bitrates. The subjective assessment involved pairwise comparisons using the crowdsourcing service Subjectify.us. To increase the relevance of the results, each pair of videos received at least 10 responses from participants. In total, 766,362 valid answers were collected from more than 10,800 unique participants.
Provide a detailed description of the following dataset: MSU NR VQA Database
PCSOD
PCSOD is a newly proposed dataset for point cloud salient object detection, with 2,000 training samples and 872 testing samples.
Provide a detailed description of the following dataset: PCSOD
Replication Data for: Assessment of a Cost-Effective Headphone Calibration Procedure for Soundscape Evaluations
This dataset contains the data used for all statistical comparisons in our ICSV 2022 submission "Assessment of a Cost-Effective Headphone Calibration Procedure for Soundscape Evaluations", summarised in a single .csv file.

To obtain the data in this dataset, 17 participants were invited to each rate 27 stimuli twice, once with the stimuli calibrated with a head-and-torso simulator ("HATS method") and once with the stimuli calibrated via an open-circuit voltage method ("OCV method"). This resulted in a total of 17*27*2 = 918 data samples, corresponding to the number of rows in the .csv file.

For more details on the calibration method and listening test procedure, please refer to our manuscript: B. Lam, K. Ooi, K. N. Watcharasupat, Y.-T. Lau, Z.-T. Ong, T. Wong, W.-S. Gan, "Assessment of a Cost-Effective Headphone Calibration Procedure for Soundscape Evaluations", in *Proceedings of the 28th International Congress on Sound and Vibration*, ICSV28, Singapore, 2022.

A short explanation of the columns in the .csv file is as follows (a loading sketch follows the list):
* `calib_method`: The method used to calibrate the stimulus for which the present row of responses was obtained (either `HATS` or `OCV`)
* `participant_idx`: The index of the participant who provided the present row of responses. Each participant is assigned a unique index. Indices do not start from 1 because we omitted data from participants who adjusted volume settings when listening to the stimuli (thus rendering the calibration invalid), as well as participants who were unable to provide a complete set of responses for stimuli calibrated using both calibration methods.
* `stimulus_idx`: The index of the stimulus for which the present row of responses was obtained (integers from 1 to 27, inclusive)
* `pleasant`, `chaotic`, `vibrant`, `uneventful`, `calm`, `annoying`, `eventful`, `monotonous`: The rating given by the participant for the present stimulus on a scale of 0 to 100 (inclusive) for the corresponding attribute
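A minimal pandas sketch of a paired comparison between the two calibration methods for one attribute, using the columns listed above. The .csv file name is a placeholder, and the choice of a Wilcoxon signed-rank test is an illustration, not necessarily the analysis used in the manuscript:
```
import pandas as pd
from scipy.stats import wilcoxon

df = pd.read_csv("calibration_ratings.csv")  # file name is a placeholder

# Pivot so each (participant, stimulus) pair has one HATS and one OCV rating for "pleasant"
wide = df.pivot_table(index=["participant_idx", "stimulus_idx"],
                      columns="calib_method", values="pleasant")

stat, p = wilcoxon(wide["HATS"], wide["OCV"])
print(f"Wilcoxon signed-rank test on 'pleasant': statistic={stat:.1f}, p={p:.3f}")
```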
Provide a detailed description of the following dataset: Replication Data for: Assessment of a Cost-Effective Headphone Calibration Procedure for Soundscape Evaluations
SCVD
We create a new benchmark called the Smart-City CCTV Violence Detection dataset (SCVD). Current datasets for violence detection contain videos recorded with phone cameras, which can differ from the distribution of CCTV footage. Furthermore, this dataset contains a weaponized-violence class, so DNNs can learn the distribution of potential weapons and make inferences that enable quicker action by the authorities. The dataset is thus built on the premise that any handheld object that could be used to harm humans or property can be regarded as a weapon.
Provide a detailed description of the following dataset: SCVD
VITON-HD
**VITON-HD** dataset is a dataset for high-resolution (i.e., 1024x768) virtual try-on of clothing items. Specifically, it consists of 13,679 frontal-view woman and top clothing image pairs.
Provide a detailed description of the following dataset: VITON-HD
AnimeCeleb
We present a novel Animation CelebHeads dataset (AnimeCeleb) to address animation head reenactment. Different from previous animation head datasets, we utilize 3D animation models as controllable image samplers, which can provide a large number of head images with corresponding detailed pose annotations. To facilitate the data creation process, we build a semi-automatic pipeline leveraging open 3D computer graphics software with a developed annotation system. After training with AnimeCeleb, recent head reenactment models produce high-quality animation head reenactment results, which are not achievable with existing datasets. Furthermore, motivated by metaverse applications, we propose a novel pose mapping method and architecture to tackle the cross-domain head reenactment task. During inference, a user can easily transfer their motion to an arbitrary animation head. Experiments demonstrate the usefulness of AnimeCeleb for training animation head reenactment models, and the superiority of our cross-domain head reenactment model compared to state-of-the-art methods. Our dataset and code are available at https://github.com/kangyeolk/AnimeCeleb.
Provide a detailed description of the following dataset: AnimeCeleb
Multiface
**Multiface** consists of high quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions. Each subject was captured at 30 fps for an average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames. Each frame includes roughly 40 (v1) to 160 (v2) different camera views under uniform illumination, yielding a total dataset size of 65 TB. The dataset provides the raw captured images from each camera view at a resolution of 2048 x 1334 pixels, tracked meshes including head poses, unwrapped textures at 1024 x 1024 pixels, metadata including intrinsic and extrinsic camera calibrations, and audio.
Provide a detailed description of the following dataset: Multiface
GBUSV
### Description
GBUSV is an un-annotated dataset consisting of ultrasound videos of patients with either a malignant or a non-malignant gallbladder. The ultrasound videos were obtained from patients referred to the radiology department of PGIMER, Chandigarh (a high-input hospital in Northern India) for abdominal ultrasound examinations of suspected gallbladder pathologies. Patients had fasted for at least 6 hours. A 1-5 MHz curved array transducer (C-1-5D, Logiq S8, GE Healthcare) was used. The scanning was intended to include the entire gallbladder and the lesion or pathology. The length of the video sequences varies from 43 to 888 frames. The dataset consists of 32 malignant and 32 non-malignant videos containing a total of 12,251 and 3,549 frames, respectively. The video frames are cropped around the center to anonymize the patient information and annotations. The processed frames are 360x480 pixels.
### Annotations
The images of the video sequences are un-annotated and suitable for unsupervised learning tasks. We provide a high-level categorization for each video indicating whether it is malignant or non-malignant.
Provide a detailed description of the following dataset: GBUSV
DNA mutations
In bioinformatics, mutation discovery and type determination remains a significant concern. Researchers divide the problem into binary classification and multi-class problems. When the user only wants to know whether a DNA sequence has been altered, the issue is a binary classification problem. When it is desirable to identify the principal class of a mutation or its sub-classes, the problem becomes more challenging. The primary classes of mutations are deletion, insertion, and replacement, and their sub-classes are deletion frameshift, deletion in-frame, insertion frameshift, insertion in-frame, silent, missense, nonsense, and read-through. Additionally, mutation detection techniques must address related issues such as DNA sequence alignment. Due to the scarcity of labeled databases, this data set was created by taking an unlabeled database and generating random mutations of all kinds, so that researchers in bioinformatics can use it to analyze DNA sequences and study the impact of mutations on humans. The unlabeled source is a database published on the NCBI GenBank website (NC_045512.2, Severe acute respiratory syndrome coronavirus 2 isolate Wuhan-Hu-1, complete genome).
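A minimal sketch of how random mutations of the three primary classes can be introduced into a DNA string; this illustrates the idea only and is not the authors' generation code, and the reference sequence below is a stand-in rather than the actual genome:
```
import random

BASES = "ACGT"

def mutate(seq, kind, rng=random):
    """Apply one random mutation of the given kind ('deletion', 'insertion', 'replacement')."""
    pos = rng.randrange(len(seq))
    if kind == "deletion":
        return seq[:pos] + seq[pos + 1:]
    if kind == "insertion":
        return seq[:pos] + rng.choice(BASES) + seq[pos:]
    if kind == "replacement":
        new_base = rng.choice([b for b in BASES if b != seq[pos]])
        return seq[:pos] + new_base + seq[pos + 1:]
    raise ValueError(f"unknown mutation kind: {kind}")

reference = "ATGGCGTACGTTAGC"  # stand-in sequence; the dataset uses the NC_045512.2 genome
print(reference, "->", mutate(reference, "deletion"))
```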
Provide a detailed description of the following dataset: DNA mutations
NMED-H
The NMED-H dataset contains scalp EEG responses recorded from 48 adults as they heard intact and scrambled versions of full-length vocal works (Hindi pop songs). Sixteen stimuli were included in the experiment: Four songs in four conditions per song. Twelve participants were assigned to each stimulus, and each participant heard their assigned stimuli twice (24 trials total per stimulus). Dense-array EEG was recorded using the Electrical Geodesics, Inc. (EGI) GES 300 platform. Data are published in Matlab format. The dataset contains (1) raw EEG (individual recordings, 97 files), (2) clean EEG (aggregated by stimulus and listen, 32 files), (3) spatially filtered EEG (aggregated by stimulus condition, four files), (4) behavioral responses (grouped by listen, two files), and (5) participant-stimulus assignment file. Items (1) - (3) are compressed in .zip archives (500 MB - 2 GB each); an example file from each archive can be downloaded separately. Items (4) and (5) are < 1 KB each. This dataset can be used in combination with the Naturalistic Music EEG Dataset - Tempo (NMED-T).
Provide a detailed description of the following dataset: NMED-H
Compressive measurements DD-CASSI
We captured hyperspectral images in our lab using the multishot DD-CASSI architecture. The algorithm can be found on GitHub.
Provide a detailed description of the following dataset: Compressive measurements DD-CASSI
KID-F
# Description
K-pop Idol Dataset - Female (KID-F) is the first dataset of high-quality K-pop idol face images. It consists of about 6,000 high-quality face images at 512x512 resolution and identity labels for each image. We collected about 90,000 K-pop female idol images, cropped the face from each image, and then selected the high-quality face images. As a result, there are about 6,000 high-quality face images in this dataset. There are 300 test images for a benchmark, and there are no duplicate images between the test and train sets. Some identities in the test images do not appear in the train images (meaning some test images show identities that are new to the trained model). Each test image has a degraded pair; you can use these degraded test images for testing face super-resolution performance. We also provide identity labels for each image. You can download the csv file from our [github](https://github.com/PCEO-AI-CLUB/KID-F).
# Download
You can download the dataset here: [Google Drive](https://drive.google.com/drive/folders/15RbdHeLymfKA_Xm96rIrGe4Dt5iCQ75E?usp=sharing)
# Agreement
- The use of this software is RESTRICTED to **non-commercial** research and educational purposes.
- All images of the KID-F dataset are obtained from the internet and are not property of EDA (PCEO-AI-CLUB). EDA is not responsible for the content nor the meaning of these images.
- You agree **not to** reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes any portion of the images or any portion of derived data.
- You agree **not to** further copy, publish or distribute any portion of the KID-F dataset, except that making copies for internal use at a single site within the same organization is allowed.
- EDA reserves the right to terminate your access to the KID-F dataset at any time.
Provide a detailed description of the following dataset: KID-F
VideoLQ
VideoLQ consists of videos downloaded from various video hosting sites such as Flickr and YouTube, with a Creative Commons license.
Provide a detailed description of the following dataset: VideoLQ
RFMiD
According to the WHO World Report on Vision 2019, the number of visually impaired people worldwide is estimated to be 2.2 billion, of whom at least 1 billion have a vision impairment that could have been prevented or is yet to be addressed. The world faces considerable challenges in terms of eye care, including inequalities in the coverage and quality of prevention, treatment, and rehabilitation services. Early detection and diagnosis of ocular pathologies would help forestall visual impairment. One challenge that limits the adoption of computer-aided diagnosis tools by ophthalmologists is that sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, are usually ignored. In the past two decades, many publicly available datasets of color fundus images have been collected with a primary focus on diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. The challenge for which this dataset was introduced aimed to unite the medical image analysis community to develop methods for automatic ocular disease classification of frequent diseases along with rare pathologies. The Retinal Fundus Multi-disease Image Dataset (RFMiD) consists of a total of 3,200 fundus images captured using three different fundus cameras, with 46 conditions annotated through the adjudicated consensus of two senior retinal experts. To the best of the authors' knowledge, RFMiD is the only publicly available dataset that covers such a wide variety of diseases as appear in routine clinical settings. The aforementioned challenge promoted the development of generalizable models for retinal screening, unlike previous efforts that focused on the detection of specific diseases.
Provide a detailed description of the following dataset: RFMiD
Retinal Fundus MultiDisease Image Dataset (RFMiD)
According to the WHO World Report on Vision 2019, the number of visually impaired people worldwide is estimated to be 2.2 billion, of whom at least 1 billion have a vision impairment that could have been prevented or is yet to be addressed. The world faces considerable challenges in terms of eye care, including inequalities in the coverage and quality of prevention, treatment, and rehabilitation services. Early detection and diagnosis of ocular pathologies would help forestall visual impairment. One challenge that limits the adoption of computer-aided diagnosis tools by ophthalmologists is that sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, are usually ignored. In the past two decades, many publicly available datasets of color fundus images have been collected with a primary focus on diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. The challenge for which this dataset was introduced aimed to unite the medical image analysis community to develop methods for automatic ocular disease classification of frequent diseases along with rare pathologies. The Retinal Fundus Multi-disease Image Dataset (RFMiD) consists of a total of 3,200 fundus images captured using three different fundus cameras, with 46 conditions annotated through the adjudicated consensus of two senior retinal experts. To the best of the authors' knowledge, RFMiD is the only publicly available dataset that covers such a wide variety of diseases as appear in routine clinical settings. The aforementioned challenge promoted the development of generalizable models for retinal screening, unlike previous efforts that focused on the detection of specific diseases.
Provide a detailed description of the following dataset: Retinal Fundus MultiDisease Image Dataset (RFMiD)
BACC-18
The developed BACC-18 contains texts from 18 famous authors of Bengali literature. To build this corpus, we crawled texts from four online sources, namely the NLTR Society for Natural Language Technology Research [36], Ebanglalibrary [37], a Git repository [38], and blogs [39]-[41]. The maximum number of texts (13,308) was collected from the NLTR source, whereas the minimum number of texts (240) was crawled from blogs. Self-built automatic web crawlers were used to scrape the data from the four sources. Because the HTML page structure varies across sources, we used several web crawlers instead of a single typical crawler. In particular, this research developed 31 Python crawlers that automatically crawl textual data based on the robots.txt policy. The robots.txt policy tells a crawler whether or not it may crawl particular text contents from a source. Initially, we manually selected the hyperlinks of famous and authentic web portals to collect the authors' texts. Each web crawler starts with a hyperlink, and a spider explores all pages under that hyperlink to scrape the author's text. After collecting all the authors' texts, we prepared the authorship classification corpus with annotations based on the hyperlinks. A single hyperlink contains text by only a single author. This hyperlink-based web crawling reduces manual annotation time and the cost of human effort. Corpus link: https://data.mendeley.com/datasets/y64fcp2nzz
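A minimal sketch of robots.txt-aware fetching with Python's standard library; the URL is a placeholder, and this only illustrates the policy check, not the authors' crawlers:
```
from urllib import robotparser, request

BASE = "https://example.org"           # placeholder; not one of the actual sources
page = BASE + "/some/author/page.html"

rp = robotparser.RobotFileParser()
rp.set_url(BASE + "/robots.txt")
rp.read()                               # fetch and parse the site's robots.txt

if rp.can_fetch("*", page):             # only crawl pages the policy allows
    html = request.urlopen(page).read()
    print(len(html), "bytes fetched")
else:
    print("robots.txt disallows crawling", page)
```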
Provide a detailed description of the following dataset: BACC-18
Two Coiling Spirals
The Two Coiling Spirals is a 2D classification dataset composed of two classes; each spiral corresponds to one class. Gaussian noise can be added, resulting in a greater thickness of the spirals.
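A minimal NumPy sketch for generating a two-spirals dataset of this kind; the radii, number of turns, and noise level are assumptions, not the dataset's exact parameters:
```
import numpy as np

def two_spirals(n_per_class=500, turns=2.0, noise=0.1, seed=None):
    """Generate two interleaved spirals; Gaussian noise widens each spiral."""
    rng = np.random.default_rng(seed)
    t = np.sqrt(rng.uniform(0, 1, n_per_class)) * turns * 2 * np.pi
    r = t  # radius grows with angle
    spiral_a = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
    spiral_b = -spiral_a  # second class: the same spiral rotated by 180 degrees
    X = np.concatenate([spiral_a, spiral_b]) + rng.normal(scale=noise, size=(2 * n_per_class, 2))
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

X, y = two_spirals()
print(X.shape, y.shape)  # (1000, 2) (1000,)
```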
Provide a detailed description of the following dataset: Two Coiling Spirals
Pavementscapes
**Pavementscapes** is a large-scale dataset to develop and evaluate methods for pavement damage segmentation. It comprises 4,000 images with a resolution of 1024x2048, recorded in real-world pavement inspection projects covering 15 different pavements. A total of 8,680 damage instances are manually labeled with six damage classes at the pixel level.
Provide a detailed description of the following dataset: Pavementscapes
Fashion4Events
A dataset of fashion images for social events. To collect Fashion4Events, a dataset of garment images paired with social event labels, we exploited two different sources of data: the DeepFashion2 dataset and the USED dataset. Each image is paired with a social event category among the following: `orig_class_names = ["concert", "graduation", "meeting", "mountain-trip", "picnic", "sea-holiday", "ski-holiday", "wedding", "conference", "exhibition", "fashion", "protest", "sport", "theater-dance"]`
Provide a detailed description of the following dataset: Fashion4Events
ViQuAE
ViQuAE is a dataset for KVQAE (Knowledge-based Visual Question Answering about named Entities), a task which consists in answering questions about named entities grounded in a visual context using a Knowledge Base. It is the first KVQAE dataset to cover a wide range of entity types (e.g. persons, landmarks, and products). We argue that KVQAE is a clear, well-defined task that can be evaluated easily, making it suitable for tracking progress in the quality of multimodal entity representations. Multimodal entity representation is a central issue that will help make human-machine interactions more natural. For example, while watching a movie, one might wonder "Where did I already see this actress?" or "Did she ever win an Oscar?"
Provide a detailed description of the following dataset: ViQuAE
French Dialect Samples
We collated subcorpora, each between 50,000 and 70,000 words, containing samples of national dialects of French from different countries: Algeria, the Democratic Republic of Congo, France, Ivory Coast, Morocco, and Senegal.
Provide a detailed description of the following dataset: French Dialect Samples
LGI-PPGI
LGI-PPGI is a dataset for heart rate estimation from face videos in the wild.
Provide a detailed description of the following dataset: LGI-PPGI
GAFA
We introduce a new dataset of annotated surveillance videos of freely moving people taken from a distance in both indoor and outdoor scenes. The videos are captured with multiple cameras placed in eight different daily environments. People in the videos undergo large pose variations and are frequently occluded by various environmental factors. Most importantly, their eyes are mostly not clearly visible, as is often the case in surveillance videos. We introduce the first rigorously annotated dataset of 3D gaze directions of freely moving people captured from afar.
Provide a detailed description of the following dataset: GAFA
Cryptocurrency User Attitudes Towards the Environmental Impact of Proof-of-Work in Nigeria: Online Survey Results
The "Cryptocurrency User Attitudes Towards the Environmental Impact of Proof-of-Work in Nigeria" online survey is a convenience sample survey of residents of Nigeria aged 16 and over that have participated in Bitcoin transactions in the past. Between November 2021 and March 2022, participants were asked their opinions on cryptocurrencies, the environmental effects of using them, and attitudes towards these effects.
Provide a detailed description of the following dataset: Cryptocurrency User Attitudes Towards the Environmental Impact of Proof-of-Work in Nigeria: Online Survey Results
Swissmetro
A Stated Preference Survey on mode choice https://transp-or.epfl.ch/documents/technicalReports/CS_SwissmetroDescription.pdf
Provide a detailed description of the following dataset: Swissmetro
The Game of 2048
The 2048 game task involves training an agent to achieve high scores in the game [2048 (Wikipedia)](https://en.wikipedia.org/wiki/2048_(video_game)).
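For reference, the core move mechanic of the game can be written in a few lines; the sketch below implements a single left move on one row (a full environment would also handle the other directions, random tile spawns, and episode termination):
```
def merge_row_left(row):
    """Slide one 2048 row to the left, merging equal adjacent tiles once each.
    Returns the new row and the score gained by merges."""
    tiles = [v for v in row if v != 0]           # drop empty cells
    merged, score, i = [], 0, 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)          # merge a pair of equal tiles
            score += tiles[i] * 2
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (len(row) - len(merged)), score

print(merge_row_left([2, 2, 4, 0]))  # ([4, 4, 0, 0], 4)
```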
Provide a detailed description of the following dataset: The Game of 2048
Ultra-processed Food Dataset
The raw data are obtained from an industrial plant for ultra-processed food production. Sampling was carried out every 5 minutes, while the total production cycle takes approximately 3 hours, from raw ingredients to final semi-finished products. The extracted data represent approximately 80 days of production. Variables 2-14 belong to 4 specific phases of the process and influence the qualitative variable 17. Variables 15 and 16 are external variables not controlled by the process which affect the final product. It should also be noted that some variation may be due to changes in raw materials, in production flow (variable 1), or to possible reconfiguration between weeks. However, while the magnitude of effects may change between weeks, the causal relationships are dictated by the plant and process dynamics and are consistent (barring potential unobserved confounders and faults) throughout production.
Provide a detailed description of the following dataset: Ultra-processed Food Dataset
Auditory Detection of Sound (ADS)
Test dataset for unsupervised anomaly detection in sound (ADS).
Provide a detailed description of the following dataset: Auditory Detection of Sound (ADS)
Sequence Consistency Evaluation (SCE) tests
Sequence Consistency Evaluation (SCE) is a benchmark task for evaluating sequence consistency.
Provide a detailed description of the following dataset: Sequence Consistency Evaluation (SCE) tests
Bone Age
At RSNA 2017 there was a contest to correctly identify the age of a child from an X-ray of their hand.
Provide a detailed description of the following dataset: Bone Age
PodcastFillers
The PodcastFillers dataset consists of 199 full-length podcast episodes in English with manually annotated filler words and automatically generated transcripts. The podcast audio recordings, sourced from SoundCloud, are CC-licensed, gender-balanced, and total 145 hours of audio from over 350 speakers. The annotations are provided under a non-commercial license and consist of 85,803 manually annotated audio events including approximately 35,000 filler words (“uh” and “um”) and 50,000 non-filler events such as breaths, music, laughter, repeated words, and noise. The annotated events are also provided as pre-processed 1-second audio clips. The dataset also includes automatically generated speech transcripts from a speech-to-text system. A detailed description is provided in Dataset.
Provide a detailed description of the following dataset: PodcastFillers
MPHOI-72
MPHOI-72 is a multi-person human-object interaction dataset that can be used for a wide variety of HOI/activity recognition and pose estimation/object tracking tasks. The dataset is challenging due to many body occlusions among the humans and objects. It consists of 72 videos captured from 3 different angles at 30 fps, with a total of 26,383 frames and an average length of 12 seconds. It involves 5 humans performing in pairs, 6 object types, 3 activities and 13 sub-activities. The dataset includes color video, depth video, human skeletons, and human and object bounding boxes.
Provide a detailed description of the following dataset: MPHOI-72
NeRF
Neural Radiance Fields (NeRF) is a method for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. The dataset contains three parts; the first two are synthetic renderings of objects, called Diffuse Synthetic 360° and Realistic Synthetic 360°, while the third consists of real images of complex scenes. Diffuse Synthetic 360° consists of four Lambertian objects with simple geometry. Each object is rendered at 512x512 pixels from viewpoints sampled on the upper hemisphere. Realistic Synthetic 360° consists of eight objects with complicated geometry and realistic non-Lambertian materials. Six of them are rendered from viewpoints sampled on the upper hemisphere and the remaining two from viewpoints sampled on a full sphere, all at 800x800 pixels. The real images of complex scenes consist of 8 forward-facing scenes captured with a cellphone at a size of 1008x756 pixels.
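For context, the continuous volumetric scene function mentioned above is rendered with the volume-rendering integral from the NeRF paper: the expected colour of a camera ray r(t) = o + t d between near and far bounds t_n, t_f is
```
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```
where sigma is the predicted volume density, c is the view-dependent colour, and T(t) is the accumulated transmittance along the ray.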
Provide a detailed description of the following dataset: NeRF
LLFF
Local Light Field Fusion (LLFF) is a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. The dataset consists of both renderings and real images of natural scenes. The synthetic images are rendered from SUNCG and UnrealCV, where SUNCG contains 45,000 simplistic house and room environments with texture-mapped surfaces and low geometric complexity, and UnrealCV contains a few large-scale environments modeled and rendered with extreme detail. The real images are 24 scenes captured with a handheld cellphone.
Provide a detailed description of the following dataset: LLFF
Mip-NeRF 360
Mip-NeRF 360 is an extension of Mip-NeRF that uses a non-linear parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenge of unbounded scenes. The dataset consists of 9 scenes, 5 outdoor and 4 indoor, each containing a complex central object or area with a detailed background.
Provide a detailed description of the following dataset: Mip-NeRF 360
VLN-CE
Vision and Language Navigation in Continuous Environments (VLN-CE) is an instruction-guided navigation task with crowdsourced instructions, realistic environments, and unconstrained agent navigation. The dataset consists of 4,475 trajectories converted from the Room-to-Room train and validation splits. For each trajectory, multiple natural language instructions from Room-to-Room are provided, together with a pre-computed shortest path that follows the waypoints via low-level actions.
Provide a detailed description of the following dataset: VLN-CE
ImageCoDe
Given 10 minimally contrastive (highly similar) images and a complex description of one of them, the task is to retrieve the correct image. Most images are sourced from videos, and both the descriptions and the retrievals come from humans.
Provide a detailed description of the following dataset: ImageCoDe
WHU Building Dataset
We manually edited an aerial and a satellite imagery dataset of building samples and named it the WHU Building Dataset. The aerial dataset consists of more than 220,000 independent buildings extracted from aerial images with 0.075 m spatial resolution, covering 450 km² in Christchurch, New Zealand. The satellite imagery dataset consists of two subsets. One of them is collected from cities all over the world and from various remote sensing resources, including QuickBird, Worldview series, IKONOS, ZY-3, etc. The other satellite building sub-dataset consists of 6 neighboring satellite images covering 550 km² in East Asia with 2.7 m ground resolution.
Provide a detailed description of the following dataset: WHU Building Dataset
Handwritten Devanagari Character Recognition
This is an image database of Handwritten Devanagari characters. There are 46 classes of characters with 2000 examples each. The dataset is split into a training set (85%) and a testing set (15%). Citation: S. Acharya, A. K. Pant and P. K. Gyawali, "Deep Learning Based Large Scale Handwritten Devanagari Character Recognition," in Proceedings of the 9th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), pp. 121-126, 2015.
Provide a detailed description of the following dataset: Handwritten Devanagari Character Recognition
H2O (2 Hands and Objects)
We present a comprehensive framework for egocentric interaction recognition using markerless 3D annotations of two hands manipulating objects. To this end, we propose a method to create a unified dataset for egocentric 3D interaction recognition. Our method produces annotations of the 3D pose of two hands and the 6D pose of the manipulated objects, along with their interaction labels for each frame. Our dataset, called H2O (2 Hands and Objects), provides synchronized multi-view RGB-D images, interaction labels, object classes, ground-truth 3D poses for the left and right hands, 6D object poses, ground-truth camera poses, object meshes and scene point clouds. To the best of our knowledge, this is the first benchmark that enables the study of first-person actions with the use of the poses of both left and right hands manipulating objects, and it presents an unprecedented level of detail for egocentric 3D interaction recognition. We further propose a method to predict interaction classes by estimating the 3D pose of two hands and the 6D pose of the manipulated objects, jointly from RGB images. Our method models both inter- and intra-dependencies between both hands and objects by learning the topology of a graph convolutional network that predicts interactions. We show that our method, facilitated by this dataset, establishes a strong baseline for joint hand-object pose estimation and achieves state-of-the-art accuracy for first-person interaction recognition.
Provide a detailed description of the following dataset: H2O (2 Hands and Objects)
OpenTTGames
OSAI introduces OpenTTGames, an open dataset aimed at the evaluation of different computer vision tasks in table tennis: ball detection; semantic segmentation of humans, the table, and the scoreboard; and fast in-game event spotting. It includes full-HD videos of table tennis games recorded at 120 fps with an industrial camera. Every video is accompanied by an annotation containing the frame numbers and the corresponding targets for each frame: manually labeled in-game events (ball bounces, net hits, or empty event targets) and/or ball coordinates and segmentation masks, which were labeled with deep learning-aided annotation models.
Provide a detailed description of the following dataset: OpenTTGames
FullTextPeerRead
FullTextPeerRead is a dataset created by Jeong et al. for context-aware citation recommendation. It contains context sentences for cited references and paper metadata, which makes it a well-organized dataset for context-aware paper recommendation. From the paper: A context-aware citation recommendation model with BERT and graph convolutional networks.
Provide a detailed description of the following dataset: FullTextPeerRead
arXiv-200
A newly proposed dataset for local citation recommendation, consisting of 3.2 million local citation sentences along with the titles and abstracts of both the citing and the cited papers. Around 1.66 million papers' titles and abstracts are available in the database.
Provide a detailed description of the following dataset: arXiv-200
MIMIC PERform Testing Dataset
The MIMIC PERform Testing dataset contains the following physiological signals recorded from 200 critically-ill patients during routine clinical care: electrocardiogram (ECG), photoplethysmogram (PPG), impedance pneumography (imp), also known as the respiratory (resp) signal, and, in some cases, arterial blood pressure (abp). Each signal is sampled at 125 Hz. The dataset also contains some fixed parameters for each subject (such as whether the subject was an adult or a neonate). The dataset is available in CSV, Matlab, and WaveForm Database formats [here](https://doi.org/10.5281/zenodo.6807402). Further details are provided in the [ppg-beats project documentation](https://ppg-beats.readthedocs.io/en/latest/). The dataset was extracted from the [MIMIC III Waveform Database](https://physionet.org/content/mimic3wdb/1.0/).
Provide a detailed description of the following dataset: MIMIC PERform Testing Dataset
HuTics
HuTics contains 2040 images showing how humans use deictic gestures to interact with various daily-life objects. The images are annotated with segmentation masks of the object(s) of interest. The data was originally collected for gesture-aware, object-agnostic segmentation tasks.
Provide a detailed description of the following dataset: HuTics
Wendi
Null
Provide a detailed description of the following dataset: Wendi
IMDB-WIKI
https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/
Provide a detailed description of the following dataset: IMDB-WIKI
ICBHI Respiratory Sound Database
The Respiratory Sound database was originally compiled to support the scientific challenge organized at Int. Conf. on Biomedical Health Informatics - ICBHI 2017. The database consists of a total of 5.5 hours of recordings containing 6898 respiratory cycles, of which 1864 contain crackles, 886 contain wheezes, and 506 contain both crackles and wheezes, in 920 annotated audio samples from 126 subjects. The cycles were annotated by respiratory experts as including crackles, wheezes, a combination of them, or no adventitious respiratory sounds. The recordings were collected using heterogeneous equipment and their duration ranged from 10s to 90s. For more information about the dataset, the annotation files, or to download it, please visit [the challenge official page](https://bhichallenge.med.auth.gr/ICBHI_2017_Challenge).
Provide a detailed description of the following dataset: ICBHI Respiratory Sound Database
MOS Dataset
This dataset was used in the paper 'Template-based Abstractive Microblog Opinion Summarisation' (to be published at TACL, 2022). The data is structured as follows: each file represents a cluster of tweets and contains the tweet IDs and a summary of the tweets written by journalists. The gold standard summary follows a template structure and, depending on its opinion content, contains a main story, a majority opinion (if any) and/or minority opinions (if any).
Provide a detailed description of the following dataset: MOS Dataset
Box-Jenkins
The Box-Jenkins gas furnace dataset, a well-known time series forecasting problem.
Provide a detailed description of the following dataset: Box-Jenkins
Air Quality Index
The AQI dataset was collected from 12 observing stations around Beijing from 2013 to 2017. The data is accessible at the University of California, Irvine (UCI) Machine Learning Repository.
Provide a detailed description of the following dataset: Air Quality Index
32vis
A dataset covering the 32 years of IEEE VIS.
Provide a detailed description of the following dataset: 32vis
Bus Stop Spacings for Transit Providers in the US
Transit agencies use the General Transit Feed Specification (GTFS) to publish transit data. More and more cities across the globe are adopting the GTFS format to represent their transit networks. However, the GTFS format is not convenient, as the information is spread across multiple files and comes with cumbersome rules. This dataset covers over 600 transit agencies in the US and provides a concise, easy-to-use representation of their GTFS data in the form of segments.
Provide a detailed description of the following dataset: Bus Stop Spacings for Transit Providers in the US
mini-Imagenet
mini-Imagenet was proposed in **Matching Networks for One Shot Learning** (NeurIPS, 2016). The dataset consists of 50000 training images and 10000 testing images, evenly distributed across 100 classes.
Provide a detailed description of the following dataset: mini-Imagenet
OASIS-1
The Open Access Series of Imaging Studies (OASIS) is a project aimed at making neuroimaging data sets of the brain freely available to the scientific community. By compiling and freely distributing neuroimaging data sets, we hope to facilitate future discoveries in basic and clinical neuroscience. The OASIS-1 set consists of a cross-sectional collection of 416 subjects aged 18 to 96. For each subject, 3 or 4 individual T1-weighted MRI scans obtained in single scan sessions are included. The subjects are all right-handed and include both men and women. 100 of the included subjects over the age of 60 have been clinically diagnosed with very mild to moderate Alzheimer’s disease (AD). Additionally, a reliability data set is included containing 20 nondemented subjects imaged on a subsequent visit within 90 days of their initial session.
Provide a detailed description of the following dataset: OASIS-1
ZJU-MoCap
ZJU-MoCap (LightStage) is a multi-view dataset proposed in NeuralBody. The dataset captures multiple dynamic human videos using a multi-camera system with 20+ synchronized cameras. The humans perform complex motions, including twirling, Taichi, arm swings, warmup, punching, and kicking. We provide the SMPL-X parameters recovered with EasyMocap, which contain the motions of the body, hands, and face.
Provide a detailed description of the following dataset: ZJU-MoCap
Bengali Ekman's Six Basic Emotions Corpus
The dataset contains 36000 Bangla text samples based on Ekman's six basic emotions. The data was first introduced in the paper Alternative non-BERT model choices for the textual classification in low-resource languages and environments. The whole dataset is balanced and evenly distributed among all six classes.
Provide a detailed description of the following dataset: Bengali Ekman's Six Basic Emotions Corpus
RSCD: Large-scale Road Surface Classification Dataset for Autonomous Vehicles
Previewing road surface conditions is essential for improving the safety and ride comfort of autonomous vehicles. This dataset consists of 1 million road surface images (240 x 360 pixels) captured under a wide range of road and weather conditions in China. The original pictures are acquired with a vehicle-mounted camera, and patches containing only the road surface area are then cropped. The images are classified into 27 categories covering friction level, material, and unevenness properties. The dataset is divided into a train set (~960k samples), a validation set (~20k samples), and a test set (~50k samples). This large-scale dataset is useful for developing vision-based road sensing modules to improve the performance of driving assistance systems. For more details, please visit GitHub: https://github.com/ztsrxh/RSCD-Road_Surface_Classification_Dataset
Provide a detailed description of the following dataset: RSCD: Large-scale Road Surface Classification Dataset for Autonomous Vehicles
An Extension of XNLI
https://github.com/salesforce/xnli_extension
Provide a detailed description of the following dataset: An Extension of XNLI
EBB!
This dataset contains around 5K pairs of aligned images captured using a Canon 70D DSLR with low and high apertures, modeling normal photos and photos with a bokeh (blur) effect. The height of each image in the dataset is 1024 pixels, while the width varies across images.
Provide a detailed description of the following dataset: EBB!
ImageNet ctest10k
Colorization validation set for unconditional/conditional colorization tasks. It is a subset of the ImageNet validation images and excludes any grayscale (single-channel) images.
Provide a detailed description of the following dataset: ImageNet ctest10k
Flickr-8k
Contains 8k Flickr images with captions. Visit [this](http://hockenmaier.cs.illinois.edu/8k-pictures.html) page to explore the data. Cite this paper if you find it useful in your research: [Framing image description as a ranking task: data, models and evaluation metrics](http://hockenmaier.cs.illinois.edu/Framing_Image_Description/KCCA.html)
Provide a detailed description of the following dataset: Flickr-8k
KRAUTS
KRAUTS (Korpus of newspapeR Articles with Underlined Temporal expressionS) is a German temporally annotated news corpus accompanied by TimeML annotation guidelines for German. It was developed at Fondazione Bruno Kessler, Trento, Italy and at the Max Planck Institute for Informatics, Saarbrücken, Germany. Our goal is to boost temporal tagging research for German.
Provide a detailed description of the following dataset: KRAUTS
TimeBankPT
TimeBankPT is a corpus of Portuguese text with annotations about time. The annotation scheme used is similar to TimeML. TimeBankPT is the result of adapting the English corpus used in the first TempEval challenge to the Portuguese language. As regards the temporal relation type, the corpus was annotated with *six* labels, namely: BEFORE, AFTER, OVERLAP, BEFORE-OR-OVERLAP, OVERLAP-OR-AFTER and VAGUE.

|           | Train | Test |
|-----------|-------|------|
| Documents | 162   | 20   |
| Sentences | 2281  | 351  |
| Words     | 60782 | 8920 |
| Events    | 6790  | 1097 |
| Timexs    | 1244  | 165  |
| Tlinks    | 5781  | 758  |
Provide a detailed description of the following dataset: TimeBankPT