Columns: dataset_name (string, 2–128 chars) · description (string, 1–9.7k chars) · prompt (string, 59–185 chars)
MovingFashion
**MovingFashion** is a dataset for video-to-shop, the task of retrieving clothes which are worn in social media videos. MovingFashion is composed of 14,855 social videos, each one of them associated with e-commerce "shop" images where the corresponding clothing items are clearly portrayed.
Provide a detailed description of the following dataset: MovingFashion
BANKING77
Dataset composed of online banking queries annotated with their corresponding intents. BANKING77 dataset provides a very fine-grained set of intents in a banking domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection.
Provide a detailed description of the following dataset: BANKING77
Mila Simulated Floods
Mila Simulated Floods Dataset is a 1.5 square km virtual world built with the Unity3D game engine, including urban, suburban and rural areas. The *urban* environment contains skyscrapers, large buildings, and roads, as well as objects such as traffic items and vehicles. The *rural* environment consists of a landscape of grassy hills, forests, and mountains, with sparse houses and other buildings such as a church, and no roads. The rural and urban areas together make up 1 square km of the virtual world. The *suburban* environment is a residential area of 0.5 square km with many individual houses with front yards. To gather the simulated dataset, we captured *before* and *after* flood pairs from 2000 viewpoints with the following modalities:
- non-flooded RGB image, depth map, segmentation map
- flooded RGB image, binary mask of the flooded area, segmentation map

The camera was placed about 1.5 m above ground with a field of view of *120 degrees*, and the resolution of the images is *1200 x 900*. At each viewpoint, we took 10 pictures, slightly varying the position of the camera in order to augment the dataset.
Provide a detailed description of the following dataset: Mila Simulated Floods
ETH Kinect Dataset
This dataset contains 27 ROS bags of point clouds produced by a Kinect, along with ground truth obtained from a Vicon pose capture system. These runs cover 3 environments of increasing complexity, with 3 types of motions at 3 different speeds. This dataset can be used with our ICP Mapper to track the pose of the Kinect and to explore parameters of ICP algorithms.
Provide a detailed description of the following dataset: ETH Kinect Dataset
ETH Laser Registration Datasets
This group of datasets was recorded with the aim of testing point cloud registration algorithms in specific environments and conditions. Special care was taken regarding the precision of the "ground truth" positions of the scanner, which is in the millimeter range, obtained using a theodolite.
Provide a detailed description of the following dataset: ETH Laser Registration Datasets
SpaceNet 7
Satellite imagery analytics have numerous human development and disaster response applications, particularly when time series methods are involved. For example, quantifying population statistics is fundamental to 67 of the 232 United Nations Sustainable Development Goals, but the World Bank estimates that more than 100 countries currently lack effective Civil Registration systems. The SpaceNet 7 Multi-Temporal Urban Development Challenge aims to help address this deficit and develop novel computer vision methods for non-video time series data. In this challenge, participants will identify and track buildings in satellite imagery time series collected over rapidly urbanizing areas. The competition centers around a new open source dataset of Planet satellite imagery mosaics, which includes 24 images (one per month) covering ~100 unique geographies. The dataset will comprise over 40,000 square kilometers of imagery and exhaustive polygon labels of building footprints in the imagery, totaling over 10 million individual annotations. Challenge participants will be asked to track building construction over time, thereby directly assessing urbanization.
Provide a detailed description of the following dataset: SpaceNet 7
Labeled Retinal Optical Coherence Tomography Dataset for Classification of Normal, Drusen, and CNV Cases
This dataset consists of more than 16,000 retinal OCT B-scans from 441 cases (Normal: 120, Drusen: 160, CNV: 161), acquired at Noor Eye Hospital, Tehran, Iran. Images are labeled by a retinal specialist. The structure of the folders is as below:
- CNV, DRUSEN, NORMAL folders
- Within each class, folders are separated patient-wise with numbers from 1 to <number_of_patients>.
- Within each patient folder, images (B-scans) are labeled in the <0XX_LABEL> format, where <XX> is the B-scan number and <LABEL> is the specialist's selected label for that specific B-scan.

The spreadsheet (data_information.csv) includes information such as "Patient ID", "Class", "Eye", "B-scan", "Label", and "Directory" for all images (16823 rows, 6 columns). The Python code (read_data.py) includes code for loading images and labels as NumPy arrays. The provided function outputs the input data as an array with shape (number_of_images, imageSize, imageSize, 3) and the output data as a list of labels (Normal: 0, Drusen: 1, CNV: 2). There are two different options for reading the files:
- Option 1: Reading all images. This results in 16822 images.
- Option 2: Reading only the worst-case images for each volume (i.e., if a patient was detected as a CNV case, only CNV-appearing B-scans were included in the training procedure, and normal and drusen B-scans of that patient were excluded from the dataset). This results in 12649 images.
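As a rough illustration of how the folder layout and CSV described above could be consumed, here is a minimal sketch in Python. It assumes the data_information.csv columns and directory layout exactly as listed; the image size and the exact label strings are assumptions, and this is not the official read_data.py:

```python
import numpy as np
import pandas as pd
from PIL import Image

LABEL_MAP = {"NORMAL": 0, "DRUSEN": 1, "CNV": 2}  # label coding from the description

def load_oct_dataset(csv_path="data_information.csv", image_size=128):
    """Load the B-scans listed in data_information.csv into NumPy arrays."""
    info = pd.read_csv(csv_path)  # columns: Patient ID, Class, Eye, B-scan, Label, Directory
    images, labels = [], []
    for _, row in info.iterrows():
        img = Image.open(row["Directory"]).convert("RGB").resize((image_size, image_size))
        images.append(np.asarray(img))
        labels.append(LABEL_MAP[str(row["Label"]).upper()])
    # shape (number_of_images, imageSize, imageSize, 3), as stated in the description
    return np.stack(images), labels
```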
Provide a detailed description of the following dataset: Labeled Retinal Optical Coherence Tomography Dataset for Classification of Normal, Drusen, and CNV Cases
Symbolic Mathematics
A personalized subset of Symbolic Mathematics dataset, initially introduced in the paper Deep Learning for Symbolic Mathematics (Lample et al.). We used this subset for our paper Pretrained Language Models are Symbolic Mathematics Solvers Too! (Noorbakhsh et al.).
Provide a detailed description of the following dataset: Symbolic Mathematics
AraCovid19-SSD
AraCovid19-SSD is a manually annotated Arabic COVID-19 sarcasm and sentiment detection dataset containing 5,162 tweets.
Provide a detailed description of the following dataset: AraCovid19-SSD
eBDtheque
The eBDtheque database is a selection of one hundred comic pages from America, Japan (manga) and Europe. Image source: [http://ebdtheque.univ-lr.fr/database/](http://ebdtheque.univ-lr.fr/database/)
Provide a detailed description of the following dataset: eBDtheque
DCM
The DCM dataset is composed of 772 annotated images from 27 golden age comic books, collected from the public domain collection of digitized comic books at the [Digital Comics Museum](http://digitalcomicmuseum.com/). One album per available publisher was selected to get as many different styles as possible. We provide ground-truth bounding boxes for all panels and all characters (body + faces), small or big, human-like or animal-like. Image source: [https://gitlab.univ-lr.fr/crigau02/dcm-dataset/-/tree/master](https://gitlab.univ-lr.fr/crigau02/dcm-dataset/-/tree/master)
Provide a detailed description of the following dataset: DCM
Aristo-v4
The Aristo Tuple KB contains a collection of high-precision, domain-targeted (subject,relation,object) tuples extracted from text using a high-precision extraction pipeline, and guided by domain vocabulary constraints. The dataset was introduced by the paper [_Domain-Targeted, High Precision Knowledge Extraction_](https://aclanthology.org/Q17-1017.pdf).
Provide a detailed description of the following dataset: Aristo-v4
HowSumm
HowSumm is a large-scale query-focused multi-document summarization dataset. It focuses on summarizing various sources to create HowTo guides and is derived from wikiHow articles. HowSumm is partitioned into HowSumm-Step, where the target summary is relatively short (avg. 90 words), and HowSumm-Method, where the target summary is a concatenation of several steps and therefore longer (avg. 500 words). HowSumm-Method and HowSumm-Step contain 11,121 and 84,348 instances, respectively. Description from: [HowSumm](https://github.com/odelliab/HowSumm) Image source: [HowSumm](https://github.com/odelliab/HowSumm)
Provide a detailed description of the following dataset: HowSumm
DSIOD
This dataset contains data that enables the evaluation of metamodels and approaches for targeted test case selection without setting up test environments or performing test runs. The dataset is split into different scenarios: each scenario comes with one or more tabular datasets containing the inputs and outputs of different test cases (concrete scenarios). A configuration file describes which of the columns are inputs and outputs and explains the different parameters. The config also contains verbal descriptions of the scenarios. Additionally, animations of the scenarios are available.
Provide a detailed description of the following dataset: DSIOD
RWD-10K
The Rogue Wave Dataset-10K (RWD-10K) consists of 10,191 rogue wave images.
Provide a detailed description of the following dataset: RWD-10K
Odysseus
A major reason for the lack of a realistic Trojan detection method has been the unavailability of a large-scale benchmark dataset consisting of clean and Trojan models. Here we introduce Odysseus, the largest public dataset of its kind, containing over 3,000 trained clean and Trojan models based on PyTorch. While creating Odysseus, we focused on several factors such as mapping type, model architecture, fooling rate and validation accuracy of each model, and the type of trigger. These models are trained on the CIFAR10, Fashion-MNIST, and MNIST datasets. For each dataset, clean and Trojan models are trained for 4 different architectures: ResNet18, VGG19, DenseNet, and GoogLeNet for CIFAR10 and Fashion-MNIST, and 4 custom-designed architectures for MNIST. We also considered various source-to-target label mappings for the Trojan models.
Provide a detailed description of the following dataset: Odysseus
IndicTTS
A special corpus of Indian languages covering 13 major languages of India. It comprises 10,000+ spoken sentences/utterances in each of the native language (mono) and English, recorded by both male and female native speakers. Speech waveform files are available in .wav format along with the corresponding text. We hope that these recordings will be useful for researchers and speech technologists working on synthesis and recognition. You can request zip archives of the entire database here.
Provide a detailed description of the following dataset: IndicTTS
CGHD1152
- 1,152 Images
- 144 Circuits
- 12 Drafters
- 48,563 Object (Symbol, Structural, Text) Annotations
Provide a detailed description of the following dataset: CGHD1152
A Curb Dataset
This is a dataset with curb annotations on 3D LiDAR data, built on top of the SemanticKITTI dataset.
Provide a detailed description of the following dataset: A Curb Dataset
WenetSpeech
WenetSpeech is a multi-domain Mandarin corpus consisting of 10,000+ hours of high-quality labeled speech, 2,400+ hours of weakly labeled speech, and about 10,000 hours of unlabeled speech, for 22,400+ hours in total. The authors collected the data from YouTube and podcasts, covering a variety of speaking styles, scenarios, domains, topics, and noisy conditions. An optical character recognition (OCR) based method is introduced to generate audio/text segmentation candidates for the YouTube data from its corresponding video captions. Image source: [https://github.com/wenet-e2e/wenetspeech](https://github.com/wenet-e2e/wenetspeech)
Provide a detailed description of the following dataset: WenetSpeech
MOD20
**MOD20** is an action recognition dataset consisting of videos collected from YouTube and our own drone. The dataset contains 2,324 videos lasting a total of 240 minutes. The actions were selected from challenging and complex scenarios, and cover multiple viewpoints, from ground-level to bird's-eye view. The substantial variation in body size, number of people, viewpoints, camera motion, and background makes this dataset challenging for action recognition. The action classes, undistorted 720×720 clips, and multi-viewpoint video selection extend the dataset's applicability to a wider research community.
Provide a detailed description of the following dataset: MOD20
NGAFID-MC
NGAFID-MC consists of over 7,500 labeled flights, representing over 11,500 hours of per-second flight data recorder readings of 23 sensor parameters.
Provide a detailed description of the following dataset: NGAFID-MC
FOD-A
FOD in Airports (FOD-A) is an image dataset of FOD (Foreign Object Debris) which consists of 31 object categories and over 30,000 annotation instances. The object categories were selected based on guidance from prior documentation and related research by the Federal Aviation Administration (FAA).
Provide a detailed description of the following dataset: FOD-A
V-HICO
**V-HICO** is a dataset for human-object interaction in videos. There are 6,594 videos of human-object interaction, including 5,297 training videos, 635 validation videos, 608 test videos, and 54 unseen test videos. To test the performance of models on common human-object interaction classes and their generalization to new ones, we provide two test splits: the first has the same human-object interaction classes as the training split, while the second consists of unseen novel classes. V-HICO consists of 244 object classes and 99 action classes. There are 756 action-object pairwise classes in total. The unseen test dataset contains 51 object classes and 32 action classes with 52 action-object pairwise classes. All videos are labeled with text annotations of the human action and the associated object. The test and unseen test sets contain annotations of both human and object bounding boxes.
Provide a detailed description of the following dataset: V-HICO
VOC 2012
See the detailed use case in the code implementation of the paper 'Tell Me Where To Look: Guided Attention Inference Networks'.
Provide a detailed description of the following dataset: VOC 2012
SignalTrain LA2A Dataset
LA-2A compressor data to accompany the paper "SignalTrain: Profiling Audio Compressors with Deep Neural Networks," https://arxiv.org/abs/1905.11928

Accompanying computer code: https://github.com/drscotthawley/signaltrain

A collection of recorded data from an analog Teletronix LA-2A opto-electronic compressor, for various settings of the Peak Reduction knob. Other knobs were kept constant. Audio samples present in these files are either randomly generated, downloaded audio clips with Creative Commons licenses, or the property of Scott Hawley, freely distributed as part of this dataset. Data taken by Ben Colburn, supervised by Scott Hawley.

**Dataset used in:**
* "Efficient neural networks for real-time analog audio effect modeling" by C. Steinmetz & J. Reiss, 2021. https://arxiv.org/abs/2102.06200
* "Exploring quality and generalizability in parameterized neural audio effects," by W. Mitchell and S. H. Hawley, 149th Audio Engineering Society Convention (AES), 2020. https://arxiv.org/abs/2006.05584
* "SignalTrain: Profiling Audio Compressors with Deep Neural Networks," 147th Audio Engineering Society Convention (AES), 2019. https://arxiv.org/abs/1905.11928
Provide a detailed description of the following dataset: SignalTrain LA2A Dataset
CityUHK-X-BEV
A bird's-eye-view (BEV) crowd-counting dataset extended from CityUHK-X.
Provide a detailed description of the following dataset: CityUHK-X-BEV
TBCOV
TBCOV is a large-scale Twitter dataset comprising more than two billion multilingual tweets related to the COVID-19 pandemic collected worldwide over a continuous period of more than one year. Several state-of-the-art deep learning models are used to enrich the data with important attributes, including sentiment labels, named-entities (e.g., mentions of persons, organizations, locations), user types, and gender information. A geotagging method is proposed to assign country, state, county, and city information to tweets, enabling a myriad of data analysis tasks to understand real-world issues. Description from: [TBCOV: Two Billion Multilingual COVID-19 Tweets with Sentiment, Entity, Geo, and Gender Labels](https://arxiv.org/pdf/2110.03664v1.pdf) Image source: [https://arxiv.org/pdf/2110.03664v1.pdf](https://arxiv.org/pdf/2110.03664v1.pdf)
Provide a detailed description of the following dataset: TBCOV
Bridge Data
Bridge Data is a large multi-domain and multi-task dataset collected using a low-cost yet versatile 6-DoF WidowX250 robot arm. It contains 7,200 demonstrations of a robot performing 71 kitchen tasks across 10 environments with varying lighting, robot positions, and backgrounds. It can be used to boost the generalization of robotic skills and to empirically study how it improves the learning of new tasks in new environments. Image source: [https://arxiv.org/pdf/2109.13396v1.pdf](https://arxiv.org/pdf/2109.13396v1.pdf)
Provide a detailed description of the following dataset: Bridge Data
BuildingNet
**BuildingNet** is a large-scale dataset of 3D building models whose exteriors are consistently labeled. The dataset consists of 513K annotated mesh primitives, grouped into 292K semantic part components across 2K building models. The dataset covers several building categories, such as houses, churches, skyscrapers, town halls, libraries, and castles.
Provide a detailed description of the following dataset: BuildingNet
EDFace-Celeb-1M
**EDFace-Celeb-1M** is a public Ethnically Diverse Face dataset which is used to benchmark the task of face hallucination. The dataset includes 1.7 million photos that cover different countries, with balanced race composition.
Provide a detailed description of the following dataset: EDFace-Celeb-1M
MetFaces
MetFaces is an image dataset of human faces extracted from works of art. The dataset consists of 1336 high-quality PNG images at 1024×1024 resolution. The images were downloaded via the Metropolitan Museum of Art Collection API, and automatically aligned and cropped using dlib. Various automatic filters were used to prune the set.
Provide a detailed description of the following dataset: MetFaces
DCCW
| Name | Purpose |
|------|---------|
| [FM100P](FM100P/README.md) | Evaluation of single palette sorting |
| [KHTP](KHTP/README.md) | Evaluation of palette pair sorting |
| [LHSP](LHSP/README.md) | Evaluation of palette similarity measurement |
| [Perceptual Study](perceptual-study/README.md) | Perceptual study |
Provide a detailed description of the following dataset: DCCW
Energy Consumption Curves of 499 Customers from Spain
Predictions of energy consumption are crucial for energy retailers to minimize deviations between the energy acquired in the day-ahead market and the actual consumption of their customers. The increasing spread of smart meters means that retailers have access to hourly consumption values of all their contracted customers in real time. Using machine learning algorithms, these hourly values can be used to predict the future energy consumption of the customers. The present dataset allows the training and validation of AI-based prediction models.
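As a minimal sketch of the prediction workflow described above (hourly values in, forecasts out), one could build lag features from a customer's hourly series and fit an off-the-shelf regressor. This is illustrative only and makes no assumption about the dataset's actual file format:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_lag_features(hourly, n_lags=24):
    """Turn an hourly consumption series into (features, target) pairs,
    where each hour is predicted from the previous n_lags hours."""
    X = np.array([hourly[i:i + n_lags] for i in range(len(hourly) - n_lags)])
    y = hourly[n_lags:]
    return X, y

# hourly: 1-D array of one customer's hourly consumption (placeholder values here)
hourly = np.random.rand(24 * 365)
X, y = make_lag_features(hourly)
model = GradientBoostingRegressor().fit(X[:-24], y[:-24])  # hold out the last day
next_day_pred = model.predict(X[-24:])
```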
Provide a detailed description of the following dataset: Energy Consumption Curves of 499 Customers from Spain
EM-POSE
Electromagnetic measurements obtained from 12 wireless sensors, paired with the corresponding ground-truth SMPL poses. Approx. 37 minutes recorded with 5 participants.
Provide a detailed description of the following dataset: EM-POSE
GeoMNIST
A simple dataset consisting of three geometric shapes (Triangle, Rectangle, Ellipsoid) of similar sizes but different orientations.
Provide a detailed description of the following dataset: GeoMNIST
Gait Dataset
Details about the creation of the dataset can be seen in https://arxiv.org/abs/2110.06139.
```
@misc{sa2021classification,
  title={Classification of anomalous gait using Machine Learning techniques and embedded sensors},
  author={T. R. D. Sa and C. M. S. Figueiredo},
  year={2021},
  eprint={2110.06139},
  archivePrefix={arXiv},
  primaryClass={eess.SP}
}
```
The dataset can be downloaded through the link posted in the GitHub repository.
Provide a detailed description of the following dataset: Gait Dataset
STR-2021
The **STR-2021** dataset has 5,500 English sentence pairs manually annotated for semantic relatedness using a comparative annotation framework.
Provide a detailed description of the following dataset: STR-2021
MUNO21
**MUNO21** is a large-scale and comprehensive dataset for the map update task. It includes time series of aerial images and map data to capture the evolution of both the physical road network and real street maps over time -- we collect NAIP aerial images at each of four years over the eight-year timespan from 2012–2019, and OSM extracts from each year during the same timespan.
Provide a detailed description of the following dataset: MUNO21
KOHTD
The Kazakh Offline Handwritten Text Dataset (KOHTD) contains 3,000 handwritten exam papers, more than 140,335 segmented images, and approximately 922,010 symbols. It can serve researchers in the field of handwriting recognition using deep learning and machine learning. Image source: [https://github.com/abdoelsayed2016/KOHTD](https://github.com/abdoelsayed2016/KOHTD)
Provide a detailed description of the following dataset: KOHTD
QMAR
QMAR is an RGB multi-view Quality of Human Movement Assessment dataset. QMAR was recorded using 6 Primesense cameras (3 different frontal views and 3 different side views) with 38 healthy subjects, 8 female and 30 male. The subjects were trained by a physiotherapist to perform two different types of movements while simulating three ailments, resulting in five overall possibilities: a return walk to approximately the original position while simulating Parkinson's (W-P), stroke (W-S), and limp (W-L), and standing up and sitting down with Parkinson's (SS-P) and stroke (SS-S). The dataset includes RGB (and depth and skeleton) data, although the current version contains only the RGB data. Depth/skeleton data can be obtained on request; however, note they are available for two views only. All documents and papers that use the QMAR dataset, or any derived part of the dataset, should cite the following paper: F. Sardari, A. Paiement, S. Hannuna, M. Mirmehdi; VI-Net: View-Invariant Quality of Human Movement Assessment, Sensors, 2020, 20, 5258.
Provide a detailed description of the following dataset: QMAR
Real Life Violence Situations Dataset
This dataset has the following citation: M. Soliman, M. Kamal, M. Nashed, Y. Mostafa, B. Chawky, D. Khattab, "Violence Recognition from Videos using Deep Learning Techniques", Proc. 9th International Conference on Intelligent Computing and Information Systems (ICICIS'19), Cairo, pp. 79-84, 2019. Please use it if you use the dataset for research or engineering purposes. When we started our graduation project on violence recognition from videos, we found that there was a shortage of available datasets related to violence between individuals, so we decided to create a new, large dataset with a variety of scenes. Content: the dataset contains 1,000 violence and 1,000 non-violence videos collected from YouTube. The violence videos contain many real street-fight situations in several environments and conditions, and the non-violence videos are collected from many different human actions like sports, eating, walking, etc.
Provide a detailed description of the following dataset: Real Life Violence Situations Dataset
ImageNet 50 samples per class
This ImageNet version contains only 50 training images per class while the original testing set remains unchanged. It is one of the datasets comprising the data-efficient image classification (DEIC) benchmark. It was proposed to challenge the generalization capabilities of modern image classifiers.
Provide a detailed description of the following dataset: ImageNet 50 samples per class
DEIC Benchmark
DEIC is a benchmark for measuring the data efficiency of models in the context of image classification. It is composed of 6 datasets that contain a small number of training samples per class (i.e., 30 < x < 80). It covers multiple image domains (i.e., natural images, fine-grained recognition, medical images, remote sensing, handwriting recognition) and data types (i.e., RGB, grayscale, multi-spectral).
Provide a detailed description of the following dataset: DEIC Benchmark
Deep Sea Treasure Pareto-Front
The dataset contains two Pareto-fronts:
- The Pareto-front for the 2-objective problem
- The Pareto-front for the 3-objective problem

Each Pareto-front contains a set of points, with coordinates given by their objectives. For each point, the dataset also contains 1 possible action sequence that leads to that point. If multiple paths lead to the same point, only 1 was kept.
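For readers unfamiliar with the concept: a point is on the Pareto-front if no other point dominates it (is at least as good in every objective and strictly better in at least one). A minimal sketch, assuming all objectives are maximized; the function names are illustrative and not part of the dataset's tooling:

```python
import numpy as np

def dominates(a, b):
    """True if point a dominates point b (maximization in every objective)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# e.g. 2-objective points (treasure value, negated time cost)
points = [(1, -1), (2, -3), (3, -5), (1, -4)]
print(pareto_front(points))  # (1, -4) is dominated by (1, -1) and dropped
```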
Provide a detailed description of the following dataset: Deep Sea Treasure Pareto-Front
Multilingual Dataset for Training and Evaluating Diacritics Restoration Systems
The dataset contains training and evaluation data for 12 languages:
- Vietnamese
- Romanian
- Latvian
- Czech
- Polish
- Slovak
- Irish
- Hungarian
- French
- Turkish
- Spanish
- Croatian

For each language, one training, one development, and one testing set acquired from Wikipedia articles are provided. Moreover, each language dataset contains a (substantially larger) training set collected from (general) Web texts. All sets, except for the Wikipedia and Web training sets, which can contain similar sentences, are disjoint. Data are segmented into sentences, which are further word-tokenized.
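A common way to build input/target pairs for diacritics restoration is to strip the diacritics from the provided text and train a model to put them back. A minimal sketch of that preprocessing step (this is not part of the dataset's own tooling, and note it only removes combining marks, not letter substitutions like Turkish dotless ı):

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove combining marks, e.g. 'Příliš' -> 'Prilis'."""
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

target = "Příliš žluťoučký kůň"    # ground-truth sentence with diacritics
source = strip_diacritics(target)  # model input: "Prilis zlutoucky kun"
```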
Provide a detailed description of the following dataset: Multilingual Dataset for Training and Evaluating Diacritics Restoration Systems
Ego4D
Ego4D is a massive-scale egocentric video dataset and benchmark suite. It offers 3,025 hours of daily life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 855 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to collection is designed to uphold rigorous privacy and ethics standards with consenting participants and robust de-identification procedures where relevant. Ego4D dramatically expands the volume of diverse egocentric video footage publicly available to the research community. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. Furthermore, a host of new benchmark challenges are presented, centered around understanding the first-person visual experience in the past (querying an episodic memory), present (analyzing hand-object manipulation, audio-visual conversation, and social interactions), and future (forecasting activities). By publicly sharing this massive annotated dataset and benchmark suite, the aim is to push the frontier of first-person perception. Description from: [Facebook AI](https://ai.facebook.com/research/publications/ego4d-unscripted-first-person-video-from-around-the-world-and-a-benchmark-suite-for-egocentric-perception) Paper: [Ego4D: Around the World in 3,000 Hours of Egocentric Video](https://ai.facebook.com/research/publications/ego4d-unscripted-first-person-video-from-around-the-world-and-a-benchmark-suite-for-egocentric-perception) GitHub: [https://github.com/EGO4D](https://github.com/EGO4D)
Provide a detailed description of the following dataset: Ego4D
EigenWorms
Caenorhabditis elegans is a roundworm commonly used as a model organism in the study of genetics. The movement of these worms is known to be a useful indicator for understanding behavioural genetics. Brown et al. [1] describe a system for recording the motion of worms on an agar plate and measuring a range of human-defined features [2]. It has been shown that the space of shapes Caenorhabditis elegans adopts on an agar plate can be represented by combinations of six base shapes, or eigenworms. Once the worm outline is extracted, each frame of worm motion can be captured by six scalars representing the amplitudes along each dimension when the shape is projected onto the six eigenworms. Using data collected for the work described in [1], we address the problem of classifying individual worms as wild-type or mutant based on the time series. The data were extracted from the C. elegans behavioural database [3]. We have 259 cases, which we split into 131 train and 128 test. We have truncated each series to the shortest usable length; each series has 17,984 observations. Each worm is classified as either wild-type (the N2 reference strain) or one of four mutant types: goa-1, unc-1, unc-38 and unc-63.

[1] A. Brown, E. Yemini, L. Grundy, T. Jucikas, and W. Schafer, "A dictionary of behavioral motifs reveals clusters of genes affecting Caenorhabditis elegans locomotion," Proceedings of the National Academy of Sciences of the United States of America (PNAS), vol. 10, no. 2, pp. 791–796, 2013.
[2] E. Yemini, T. Jucikas, L. Grundy, A. Brown, and W. Schafer, "A database of Caenorhabditis elegans behavioral phenotypes," Nature Methods, vol. 10, pp. 877–879, 2013.
[3] C. elegans behavioural database
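The eigenworm representation described above is simply a linear projection: each frame's posture is projected onto six basis shapes to obtain six amplitudes. A schematic sketch, with made-up dimensions and random basis vectors since the raw posture format is not specified here:

```python
import numpy as np

n_segments = 48  # hypothetical number of body-angle samples per frame
rng = np.random.default_rng(0)

# Six orthonormal basis shapes ("eigenworms"); rows are basis vectors.
eigenworms, _ = np.linalg.qr(rng.standard_normal((n_segments, 6)))
eigenworms = eigenworms.T                  # shape (6, n_segments)

posture = rng.standard_normal(n_segments)  # one frame's body angles
amplitudes = eigenworms @ posture          # six scalars per frame, as in the text

# A full series in this dataset has 17,984 such six-dimensional frames.
```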
Provide a detailed description of the following dataset: EigenWorms
HUMAN4D
HUMAN4D is a large and multimodal 4D dataset that contains a variety of human activities, simultaneously captured by a professional marker-based MoCap, a volumetric capture, and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical, and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric, and audio data. Description from: [HUMAN4D: A Human-Centric Multimodal Dataset for Motions and Immersive Media](https://paperswithcode.com/paper/human4d-a-human-centric-multimodal-dataset) Image source: [https://github.com/tofis/human4d_dataset](https://github.com/tofis/human4d_dataset)
Provide a detailed description of the following dataset: HUMAN4D
ConditionalQA
ConditionalQA is a Question Answering (QA) dataset that contains complex questions with conditional answers, i.e. the answers are only applicable when certain conditions apply.
Provide a detailed description of the following dataset: ConditionalQA
PyTorrent
PyTorrent contains 218,814 Python package libraries from the PyPI and Anaconda environments. These sources were chosen because earlier studies have shown that much code elsewhere is redundant, while Python packages from these environments are better in quality and well-documented. PyTorrent enables users (such as data scientists, students, etc.) to build off-the-shelf machine learning models directly, without spending months of effort on large infrastructure.
Provide a detailed description of the following dataset: PyTorrent
Multimodal Emoji Prediction
The Twitter emoji dataset obtained from CodaLab comprises 50 thousand tweets along with their associated emoji labels. Each tweet in the dataset has a corresponding numerical label which maps to a specific emoji. The emojis are the 20 most frequent ones, so the labels range from 0 to 19.
Provide a detailed description of the following dataset: Multimodal Emoji Prediction
DeepGlobe
We observe that satellite imagery is a powerful source of information, as it contains more structured and uniform data compared to traditional images. Although the computer vision community has been accomplishing hard tasks on everyday image datasets using deep learning, satellite images are only recently gaining attention for maps and population analysis. This workshop aims at bringing together a diverse set of researchers to advance the state-of-the-art in satellite image analysis. To direct more attention to such approaches, we propose the DeepGlobe Satellite Image Understanding Challenge, structured around three different satellite image understanding tasks. The datasets created and released for this competition may serve as reference benchmarks for future research in satellite image analysis. Furthermore, since the challenge tasks will involve "in the wild" forms of classic computer vision problems, these datasets have the potential to become valuable testbeds for the design of robust vision algorithms, beyond the area of remote sensing.
Provide a detailed description of the following dataset: DeepGlobe
Massachusetts Roads Dataset
The datasets introduced in Chapter 6 of my PhD thesis are below. See the thesis for more details. If you use any of these datasets for research purposes you should use the following citation in any resulting publications:
```
@phdthesis{MnihThesis,
  author = {Volodymyr Mnih},
  title = {Machine Learning for Aerial Image Labeling},
  school = {University of Toronto},
  year = {2013}
}
```
Provide a detailed description of the following dataset: Massachusetts Roads Dataset
Chikusei Dataset
The airborne hyperspectral dataset was taken by a Headwall Hyperspec-VNIR-C imaging sensor over agricultural and urban areas in Chikusei, Ibaraki, Japan, on July 29, 2014, between 9:56 and 10:53 UTC+9. The central point of the scene is located at coordinates 36.294946N, 140.008380E. The hyperspectral dataset has 128 bands in the spectral range from 363 nm to 1018 nm. The scene consists of 2517x2335 pixels and the ground sampling distance is 2.5 m. Ground truth of 19 classes was collected via a field survey and visual inspection using high-resolution color images obtained by a Canon EOS 5D Mark II together with the hyperspectral data. The hyperspectral data and ground truth were made available to the scientific community in the ENVI and MATLAB formats at http://park.itc.u-tokyo.ac.jp/sal/hyperdata. More details of the experiment are presented in the technical report given below. In order to use the datasets, please fulfill the following three requirements:
1) Give an acknowledgement as follows: "The authors gratefully acknowledge Space Application Laboratory, Department of Advanced Interdisciplinary Studies, the University of Tokyo for providing the hyperspectral data."
2) Use the following license for the hyperspectral data: http://creativecommons.org/licenses/by/3.0/
3) This dataset was made public by Dr. Naoto Yokoya and Prof. Akira Iwasaki from the University of Tokyo. Please cite:

In Word: N. Yokoya and A. Iwasaki, "Airborne hyperspectral data over Chikusei," Space Appl. Lab., Univ. Tokyo, Japan, Tech. Rep. SAL-2016-05-27, May 2016.

In LaTeX:
```
@techreport{NYokoya2016,
  author = {N. Yokoya and A. Iwasaki},
  title = {Airborne hyperspectral data over Chikusei},
  institution = {Space Application Laboratory, University of Tokyo},
  number = {SAL-2016-05-27},
  address = {Japan},
  month = {May},
  year = 2016,
}
```
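Since the data are distributed in MATLAB format, loading it in Python might look like the following sketch; the .mat file name and variable key are assumptions, so check them against the actual download:

```python
import scipy.io

# File name and variable key below are hypothetical; inspect the archive you download.
# Note: if the .mat file is saved in MATLAB v7.3 format, use h5py instead of scipy.
mat = scipy.io.loadmat("HyperspecVNIR_Chikusei_20140729.mat")
print(mat.keys())                  # locate the actual variable name first
cube = mat["chikusei"]             # expected shape: (2517, 2335, 128)
print(cube.shape, cube.dtype)
```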
Provide a detailed description of the following dataset: Chikusei Dataset
LoveDA
1. 5987 high spatial resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan
2. Focus on different geographical environments between Urban and Rural
3. Advance both semantic segmentation and domain adaptation tasks
4. Three considerable challenges:
   * Multi-scale objects
   * Complex background samples
   * Inconsistent class distributions

Two contests are held on CodaLab: [<b>LoveDA Semantic Segmentation Challenge</b>](https://competitions.codalab.org/competitions/35865#), [<b>LoveDA Unsupervised Domain Adaptation Challenge</b>](https://competitions.codalab.org/competitions/35874)
Provide a detailed description of the following dataset: LoveDA
FMFCC-A
FMFCC-A is a large publicly available Mandarin dataset for synthetic speech detection. It contains 40,000 synthesized Mandarin utterances generated by 11 Mandarin TTS systems and two Mandarin VC systems, and 10,000 genuine Mandarin utterances collected from 58 speakers. The FMFCC-A dataset is divided into training, development, and evaluation sets, which are used for research on detecting synthesized Mandarin speech under various previously unknown speech synthesis systems or audio post-processing operations.
Provide a detailed description of the following dataset: FMFCC-A
POG
An object detection dataset featuring people walking on grass, captured aboard a UAV. The dataset includes precise metadata about altitude, viewing angle, and more.
Provide a detailed description of the following dataset: POG
DialFact
DialFact is a testing benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia. There are three sub-tasks in DialFact: 1) Verifiable claim detection task distinguishes whether a response carries verifiable factual information; 2) Evidence retrieval task retrieves the most relevant Wikipedia snippets as evidence; 3) Claim verification task predicts a dialogue response to be supported, refuted, or not enough information. Description from: [DialFact: A Benchmark for Fact-Checking in Dialogue](https://arxiv.org/abs/2110.08222)
Provide a detailed description of the following dataset: DialFact
BBQ
Bias Benchmark for QA (BBQ) is a dataset consisting of question-sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine different social dimensions relevant for U.S. English-speaking contexts.
Provide a detailed description of the following dataset: BBQ
Acappella
Acappella comprises around 46 hours of a cappella solo singing videos sourced from YouTube, sampled across different singers and languages. Four language categories are considered: English, Spanish, Hindi, and others. This dataset was designed for **audiovisual singing voice separation**, although it can be used for any self-supervised audio-visual task, such as audio-guided lip reading or audio-visual learning.
Provide a detailed description of the following dataset: Acappella
CCQA
CCQA is a new web-scale dataset for in-domain model pre-training. It is a novel QA dataset based on the Common Crawl project. Using the readily available schema.org annotation, around 130 million multilingual question-answer pairs are extracted, including about 60 million English data points.
Provide a detailed description of the following dataset: CCQA
SHREC'19
Shape matching plays an important role in geometry processing and shape analysis. In the last decades, much research has been devoted to improving the quality of matching between surfaces. This huge effort is motivated by several applications such as object retrieval, animation, and information transfer, to name a few. Shape matching is usually divided into two main categories: rigid and non-rigid matching. In both cases, the standard evaluation is usually performed on shapes that share the same connectivity, in other words, shapes represented by the same mesh. This is mainly due to the availability of a "natural" ground truth that is given for these shapes. Indeed, in most cases the consistent connectivity directly induces a ground truth correspondence between vertices. However, this standard practice obviously does not allow to estimate the robustness of a method with respect to different connectivity. With this track, we propose a benchmark to evaluate the performance of point-to-point matching pipelines when the shapes to be matched have different connectivity (see Figure 1). We consider the concurrent presence of 1) different meshing, 2) rigid transformation in 3D space, 3) non-rigid deformations, 4) different vertex density, ranging from 5K to more than 50K, and 5) topological changes induced by mesh gluing in areas of contact. The correspondence between these shapes is obtained through the recently proposed registration pipeline FARM [1]. This method provides a high-quality registration of the SMPL model [2] to a large set of human meshes coming from different datasets, from which we obtain a well-defined correspondence for all the registered meshes and SMPL itself.
Provide a detailed description of the following dataset: SHREC'19
TOSCA
High-resolution three-dimensional nonrigid shapes in a variety of poses for non-rigid shape similarity and correspondence experiments. The database contains a total of 80 objects, including 11 cats, 9 dogs, 3 wolves, 8 horses, 6 centaurs, 4 gorillas, 12 female figures, and two different male figures with 7 and 20 poses, respectively. Typical vertex count is about 50,000. Objects within the same class have the same triangulation and an equal number of vertices numbered in a compatible way. This can be used as a per-vertex ground truth correspondence in correspondence experiments. Two representations are available: a MATLAB file (.mat) and ASCII text files containing the 1-based list of triangular faces (.tri) and a list of vertex XYZ coordinates (.vert). A .png thumbnail is available for each object.
Provide a detailed description of the following dataset: TOSCA
SHREC'16 Partial Benchmark
Finding a correspondence between two shapes is a fundamental task in computer graphics and geometry processing with applications ranging from texture mapping to animation. A particularly challenging and widely studied setting is when shapes are allowed to undergo quasi-isometric deformations, as it happens when we consider articulated bodies in different poses. An even more interesting scenario is partial correspondence, where one is shown only a subset of the shape, and has to match it with a deformable version thereof. Partial correspondence problems arise in numerous applications that involve real data acquisition by 3D sensors, which inevitably leads to missing parts due to occlusions and partial views. We propose a benchmark to evaluate the performance of algorithms for establishing correspondences between a full shape and its deformed versions, under the presence of different amounts and kinds of partiality.
Provide a detailed description of the following dataset: SHREC'16 Partial Benchmark
BEAMetrics
**BEAMetrics** (Benchmark to Evaluate Automatic Metrics) is a resource to make research into new metrics for evaluating generated language easier. BEAMetrics users can quickly compare existing and new metrics with human judgements across a diverse set of tasks, quality dimensions (fluency vs. coherence vs. informativeness, etc.), and languages.
Provide a detailed description of the following dataset: BEAMetrics
CoDa
The Color Dataset (CoDa) is a probing dataset to evaluate the representation of visual properties in language models. CoDa consists of color distributions for 521 common objects, which are split into 3 groups: Single, Multi, and Any. The default configuration of CoDa uses 10 CLIP-style templates (e.g. "A photo of a [object]") and 10 cloze-style templates (e.g. "Everyone knows most [object] are [color].").
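A small sketch of how such templates might be instantiated for probing; the template strings follow the two examples above, while the object/color fill-ins are illustrative and not taken from the dataset:

```python
# Template instantiation for probing, following the two example styles above.
clip_templates = ["A photo of a {obj}"]
cloze_templates = ["Everyone knows most {obj} are {color}."]

def fill(templates, **slots):
    """Fill each template string with the provided slot values."""
    return [t.format(**slots) for t in templates]

print(fill(clip_templates, obj="banana"))
print(fill(cloze_templates, obj="bananas", color="yellow"))
```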
Provide a detailed description of the following dataset: CoDa
NYU-VPR
**NYU-VPR** is a dataset for Visual place recognition (VPR) that contains more than 200,000 images over a 2km×2km area near the New York University campus, taken within the whole year of 2016.
Provide a detailed description of the following dataset: NYU-VPR
CodRep
Five curated datasets of one-liner commits from open-source projects. In total, they comprise 58,069 one-liner commits.
Provide a detailed description of the following dataset: CodRep
MAAD
The **Model for Attended Awareness in Driving** (**MAAD**) is a dataset of third-person estimates of a driver’s attended awareness. It consists of videos of a scene, as seen by a person performing a task in the scene, along with noisily registered ego-centric gaze sequences from that person.
Provide a detailed description of the following dataset: MAAD
PubTables-1M
The goal of PubTables-1M is to create a large, detailed, high-quality dataset for training and evaluating a wide variety of models for the tasks of **table detection**, **table structure recognition**, and **functional analysis**. It contains:
- 460,589 annotated document pages containing tables for table detection.
- 947,642 fully annotated tables including text content and complete location (bounding box) information for table structure recognition and functional analysis.
- Full bounding boxes in both image and PDF coordinates for all table rows, columns, and cells (including blank cells), as well as other annotated structures such as column headers and projected row headers.
- Rendered images of all tables and pages.
- Bounding boxes and text for all words appearing in each table and page image.
- Additional cell properties not used in the current model training.

Additionally, cells in the headers are *canonicalized* and we implement multiple *quality control* steps to ensure the annotations are as free of noise as possible. For more details, please see [our paper](https://arxiv.org/pdf/2110.00061.pdf).
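Since boxes are given in both image and PDF coordinates (see the list above), converting between the two is a common first step. A minimal sketch under the usual conventions (PDF units of 1/72 inch with a bottom-left origin, images with a top-left origin); the rendering DPI and page height are parameters you would read from the actual files, not values fixed by the dataset:

```python
def pdf_box_to_image_box(box, page_height_pts, dpi=72):
    """Convert (x0, y0, x1, y1) from PDF points (bottom-left origin)
    to image pixels (top-left origin) rendered at the given DPI."""
    scale = dpi / 72.0  # PDF points are 1/72 inch
    x0, y0, x1, y1 = box
    # Flip the y-axis: PDF y grows upward, image y grows downward.
    return (x0 * scale,
            (page_height_pts - y1) * scale,
            x1 * scale,
            (page_height_pts - y0) * scale)

# e.g. a table cell on a US-letter page (792 pt tall) rendered at 144 DPI
print(pdf_box_to_image_box((100, 600, 300, 650), page_height_pts=792, dpi=144))
```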
Provide a detailed description of the following dataset: PubTables-1M
Experiment-data-for-UM-S-TM
This is the experiment data for the following article:
```
@misc{liu2021topic,
  title={Topic Model Supervised by Understanding Map},
  author={Gangli Liu},
  year={2021},
  eprint={2110.06043},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
1. The *.txt files are the data of Table 4 of the paper.
2. The top lines of all the *.txt files are the contents of the artificial documents. Column names are: "Topic", "Distance", "Topic-len", "alpha"/"Noise", "doc concept-length", and "Votes counter".
3. The coding of the file names of the *.txt files is explained in "Table 4: Discovered SCOM of six documents". "all_topic" means the candidate topic set is all the topics in a domain.
4. For the "300docs-mentioned-in-section3.2.xlsx" file, its name tells its contents.
Provide a detailed description of the following dataset: Experiment-data-for-UM-S-TM
CNewSum
**CNewSum** is a large-scale Chinese news summarization dataset which consists of 304,307 documents and human-written summaries for the news feed. It has long documents with high-abstractive summaries, which can encourage document-level understanding and generation for current summarization models. An additional distinguishing feature of CNewSum is that its test set contains adequacy and deducibility annotations for the summaries.
Provide a detailed description of the following dataset: CNewSum
UVO
UVO is a new benchmark for open-world class-agnostic object segmentation in videos. Besides shifting the problem focus to the open-world setup, UVO is significantly larger, providing approximately 8 times more videos compared with [DAVIS](/dataset/davis), and 7 times more mask (instance) annotations per video compared with [YouTube-VOS](/dataset/youtube-vos) and [YouTube-VIS](/dataset/youtubevis). UVO is also more challenging, as it includes many videos with crowded scenes and complex background motions. Some highlights of the dataset include:
- High-quality instance masks densely annotated at 30 fps on 1024 YouTube videos and at 1 fps on 10337 videos from the Kinetics dataset
- Open-world: annotating all objects in each video, 13.5 objects per video on average
- Diverse object categories: 57% of objects are not covered by COCO categories
Provide a detailed description of the following dataset: UVO
ASIRRA
Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords. Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun! Asirra is unique because of its partnership with Petfinder.com, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research.
Provide a detailed description of the following dataset: ASIRRA
OpenImages-v6
OpenImages V6 is a large-scale dataset consisting of 9 million training images, 41,620 validation samples, and 125,456 test samples. It is a partially annotated dataset with 9,600 trainable classes.
Provide a detailed description of the following dataset: OpenImages-v6
NACA Airfoils
The training datasets consisting of NACA 4- and 5-digit airfoils, at different flight conditions, were generated using Javafoil.
Provide a detailed description of the following dataset: NACA Airfoils
TAU-NIGENS Spatial Sound Events 2020
The TAU-NIGENS Spatial Sound Events 2020 dataset contains multiple spatial sound-scene recordings, consisting of sound events of distinct categories integrated into a variety of acoustical spaces, and from multiple source directions and distances as seen from the recording position. The spatialization of all sound events is based on filtering through real spatial room impulse responses (RIRs), captured in multiple rooms of various shapes, sizes, and acoustical absorption properties. Furthermore, each scene recording is delivered in two spatial recording formats, a microphone array one (MIC), and first-order Ambisonics one (FOA). The sound events are spatialized as either stationary sound sources in the room, or moving sound sources, in which case time-variant RIRs are used. Each sound event in the sound scene is associated with a trajectory of its direction-of-arrival (DoA) to the recording point, and a temporal onset and offset time. The isolated sound event recordings used for the synthesis of the sound scenes are obtained from the NIGENS general sound events database. These recordings serve as the development dataset for the DCASE 2020 Sound Event Localization and Detection Task of the DCASE 2020 Challenge.
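The spatialization procedure described here (filtering dry sound events through captured room impulse responses) is, at its core, a convolution per channel. A toy sketch of that operation with placeholder signals, not the challenge's actual synthesis code:

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(event, rirs):
    """Convolve a mono sound event with one RIR per output channel.
    event: (n_samples,); rirs: (n_channels, rir_len) -> (n_channels, n_out)."""
    return np.stack([fftconvolve(event, rir) for rir in rirs])

fs = 24000                                 # placeholder sample rate
event = np.random.randn(fs)                # 1 s placeholder for an isolated event
rirs = np.random.randn(4, fs // 4) * 0.01  # placeholder 4-channel RIRs (e.g. FOA)
scene_contrib = spatialize(event, rirs)    # this event's contribution to the scene
```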
Provide a detailed description of the following dataset: TAU-NIGENS Spatial Sound Events 2020
ICASSP 2021 Acoustic Echo Cancellation Challenge
The ICASSP 2021 Acoustic Echo Cancellation Challenge is intended to stimulate research in the area of acoustic echo cancellation (AEC), which is an important part of speech enhancement and still a top issue in audio communication and conferencing systems. Many recent AEC studies report good performance on synthetic datasets where the train and test samples come from the same underlying distribution. However, the AEC performance often degrades significantly on real recordings. Also, most of the conventional objective metrics such as echo return loss enhancement (ERLE) and perceptual evaluation of speech quality (PESQ) do not correlate well with subjective speech quality tests in the presence of background noise and reverberation found in realistic environments. In this challenge, we open source two large datasets to train AEC models under both single talk and double talk scenarios. These datasets consist of recordings from more than 2,500 real audio devices and human speakers in real environments, as well as a synthetic dataset. We open source two large test sets, and we open source an online subjective test framework for researchers to quickly test their results. The winners of this challenge will be selected based on the average Mean Opinion Score (MOS) achieved across all different single talk and double talk scenarios.
Provide a detailed description of the following dataset: ICASSP 2021 Acoustic Echo Cancellation Challenge
INTERSPEECH 2021 Acoustic Echo Cancellation Challenge
The INTERSPEECH 2021 Acoustic Echo Cancellation Challenge is intended to stimulate research in the area of acoustic echo cancellation (AEC), which is an important part of speech enhancement and still a top issue in audio communication and conferencing systems. Many recent AEC studies report reasonable performance on synthetic datasets where the train and test samples come from the same underlying distribution. However, the AEC performance often degrades significantly on real recordings. Also, most of the conventional objective metrics such as echo return loss enhancement (ERLE) and perceptual evaluation of speech quality (PESQ) do not correlate well with subjective speech quality tests in the presence of background noise and reverberation found in realistic environments. In this challenge, we open source two large datasets to train AEC models under both single talk and double talk scenarios. These datasets consist of recordings from more than 5,000 real audio devices and human speakers in real environments, as well as a synthetic dataset. We open source an online subjective test framework based on ITU-T P.808 for researchers to quickly test their results. The winners of this challenge will be selected based on the average P.808 Mean Opinion Score (MOS) achieved across all different single talk and double talk scenarios.
Provide a detailed description of the following dataset: INTERSPEECH 2021 Acoustic Echo Cancellation Challenge
SCICAP
SCICAP is a large-scale image captioning dataset that contains real-world scientific figures and captions. SCICAP was constructed using more than two million images from over 290,000 papers collected and released by arXiv. Image source: [https://arxiv.org/pdf/2110.11624v1.pdf](https://arxiv.org/pdf/2110.11624v1.pdf)
Provide a detailed description of the following dataset: SCICAP
FIRESTARTER 2 - dataset and notebooks
Data used in the paper "FIRESTARTER 2: Dynamic Code Generation for Processor Stress Tests", as well as notebooks to generate plots.
Provide a detailed description of the following dataset: FIRESTARTER 2 - dataset and notebooks
NTU RGB+D 2D
**NTU RGB+D 2D** is a curated version of [NTU RGB+D](https://paperswithcode.com/dataset/ntu-rgb-d) often used for skeleton-based action prediction and synthesis. It contains a smaller number of actions.
Provide a detailed description of the following dataset: NTU RGB+D 2D
Small-Bench NLP
Small-Bench NLP is a benchmark for small, efficient neural language models trained on a single GPU. It comprises eight NLP tasks on the publicly available GLUE datasets and a leaderboard to track the progress of the community.
Provide a detailed description of the following dataset: Small-Bench NLP
SpaceNet 2
*SpaceNet 2: Building Detection v2* - is a dataset for building footprint detection in geographically diverse settings from very high resolution satellite images. It contains over 302,701 building footprints, 3/8-band Worldview-3 satellite imagery at 0.3m pixel res., across 5 cities (Rio de Janeiro, Las Vegas, Paris, Shanghai, Khartoum), and covers areas that are both urban and suburban in nature. The dataset was split using 60%/20%/20% for train/test/validation. The main use case for the detection of building footprints from satellite imagery is to aid foundational mapping.
Provide a detailed description of the following dataset: SpaceNet 2
Turn-Level Goals Dataset
This dataset is a record of the active learning data collected from interacting with PersonaGPT to fine-tune its actions toward turn-level goals, which are text descriptions of decoding goals for each response in a conversation. There are 11 possible turn-level goals that can be used to condition the PersonaGPT response at each turn of the conversation. To encode new turn-level goals, use the ActiveGym environment in https://github.com/af1tang/convogym to collect new active learning data.
Provide a detailed description of the following dataset: Turn-Level Goals Dataset
Coveo Data Challenge Dataset
The 2021 SIGIR workshop on eCommerce is hosting the Coveo Data Challenge for "In-session prediction for purchase intent and recommendations". The challenge addresses the growing need for reliable predictions within the boundaries of a shopping session, as customer intentions can be different depending on the occasion. The need for efficient procedures for personalization is even clearer if we consider the e-commerce landscape more broadly: outside of giant digital retailers, the constraints of the problem are stricter, due to smaller user bases and the realization that most users are not frequently returning customers. We release a new session-based dataset including more than 30M fine-grained browsing events (product detail, add, purchase), enriched by linguistic behavior (queries made by shoppers, with items clicked and items not clicked after the query) and catalog meta-data (images, text, pricing information). On this dataset, we ask participants to showcase innovative solutions for two open problems: a recommendation task (where a model is shown some events at the start of a session, and it is asked to predict future product interactions); an intent prediction task, where a model is shown a session containing an add-to-cart event, and it is asked to predict whether the item will be bought before the end of the session.
Provide a detailed description of the following dataset: Coveo Data Challenge Dataset
VerSe
Spine (vertebral) segmentation is a crucial step in all applications regarding automated quantification of spinal morphology and pathology. With the advent of deep learning, large and varied data are a primary sought-after resource for this task on computed tomography (CT) scans; however, a large-scale public dataset was previously unavailable. VerSe is a large-scale, multi-detector, multi-site CT spine dataset consisting of 374 scans from 355 patients. The challenge was held in two iterations, in conjunction with MICCAI 2019 and 2020, and evaluated two tasks: vertebra labelling and segmentation. Image source: [https://arxiv.org/pdf/2001.09193v5.pdf](https://arxiv.org/pdf/2001.09193v5.pdf)
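VerSe scans and vertebra masks are commonly distributed as NIfTI volumes; a minimal inspection sketch with `nibabel` might look like the following (file names are placeholders):

```python
import numpy as np
import nibabel as nib  # common reader for NIfTI CT volumes

# Placeholder paths; the actual VerSe file layout may differ.
ct = nib.load("verse_case_ct.nii.gz")
seg = nib.load("verse_case_seg.nii.gz")

ct_data = ct.get_fdata()              # CT intensities (Hounsfield units)
labels = seg.get_fdata().astype(int)  # nonzero values index vertebrae

print("volume shape:", ct_data.shape)
print("voxel spacing:", ct.header.get_zooms())
print("vertebra labels present:", np.unique(labels[labels > 0]))
```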
Provide a detailed description of the following dataset: VerSe
The Berka Dataset
The Berka dataset is a collection of financial information from a Czech bank. It covers over 5,300 bank clients and approximately 1,000,000 transactions. Additionally, the bank represented in the dataset has extended close to 700 loans and issued nearly 900 credit cards, all of which are represented in the data.
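Since the Berka data is relational, analyses typically start by joining its tables. A hedged `pandas` sketch, assuming the original semicolon-separated `.asc` export with its published column names:

```python
import pandas as pd

# Assumed file names and delimiter from the original PKDD'99 release.
accounts = pd.read_csv("account.asc", sep=";")
loans = pd.read_csv("loan.asc", sep=";")

# Join each loan to the account it was extended on.
loans_accounts = loans.merge(accounts, on="account_id", suffixes=("_loan", "_acct"))

# Example aggregate: loan count and volume per account statement frequency.
print(loans_accounts.groupby("frequency")["amount"].agg(["count", "sum"]))
```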
Provide a detailed description of the following dataset: The Berka Dataset
SSP-3D
SSP-3D is an evaluation dataset consisting of 311 images of sportspersons in tight-fitting clothes, with a variety of body shapes and poses. The images were collected from the [Sports-1M dataset](https://cs.stanford.edu/people/karpathy/deepvideo/). SSP-3D is intended for use as a benchmark for body **shape** prediction methods. Pseudo-ground-truth 3D shape labels (using the SMPL body model) were obtained via multi-frame optimisation with shape consistency between frames, as described [here](https://arxiv.org/abs/2009.10013).
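Body-shape benchmarks of this kind are usually scored with a per-vertex error between predicted and pseudo-ground-truth SMPL meshes. Below is a minimal numpy sketch of that metric; note that SSP-3D's reported protocol additionally evaluates in T-pose with scale correction, which this sketch omits.

```python
import numpy as np

def mean_per_vertex_error(pred_verts, gt_verts):
    """Mean Euclidean distance between corresponding mesh vertices.

    Both arrays have shape (V, 3), e.g. V = 6890 for the SMPL body model.
    """
    return np.linalg.norm(pred_verts - gt_verts, axis=1).mean()

# Toy usage with random stand-ins for SMPL vertices:
rng = np.random.default_rng(0)
pred = rng.normal(size=(6890, 3))
gt = pred + rng.normal(scale=0.01, size=(6890, 3))
print(f"PVE: {mean_per_vertex_error(pred, gt):.4f}")
```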
Provide a detailed description of the following dataset: SSP-3D
SpaceNet 1
SpaceNet 1: Building Detection v1 is a dataset for building footprint detection. The data comprises 382,534 building footprints, covering an area of 2,544 sq. km of 3/8-band WorldView-2 imagery (0.5 m pixel resolution) across the city of Rio de Janeiro, Brazil. The images are processed as 200m×200m tiles with associated building footprint vectors for training. The main use case for the detection of building footprints from satellite imagery is to aid foundational mapping.
Provide a detailed description of the following dataset: SpaceNet 1
Carbon Intensity 2020
Energy production and carbon intensity datasets for Germany, Great Britain, and France (all via the [ENTSO-E Transparency Platform](https://transparency.entsoe.eu/)) and for California (via [California ISO](https://www.caiso.com/)), covering the entire year 2020 ±10 days.
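Carbon intensity is conventionally computed as a generation-weighted average of per-source emission factors. A sketch of that calculation in `pandas`; the factors below are illustrative placeholders, not necessarily those used to build this dataset:

```python
import pandas as pd

# Illustrative lifecycle emission factors in gCO2/kWh (assumed values).
EMISSION_FACTORS = {"coal": 820, "gas": 490, "wind": 11, "solar": 45, "hydro": 24}

def carbon_intensity(generation: pd.DataFrame) -> pd.Series:
    """Generation-weighted average intensity (gCO2/kWh) per time interval.

    `generation` has one column per source and one row per interval;
    any consistent unit (MW, MWh) works since it cancels in the ratio.
    """
    factors = pd.Series(EMISSION_FACTORS)
    cols = generation.columns.intersection(factors.index)
    emissions = generation[cols].mul(factors[cols], axis=1).sum(axis=1)
    return emissions / generation[cols].sum(axis=1)

gen = pd.DataFrame({"coal": [100, 80], "wind": [50, 120], "solar": [0, 30]})
print(carbon_intensity(gen))
```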
Provide a detailed description of the following dataset: Carbon Intensity 2020
SMLM CEP152-Complex FITS Images
The following files comprise 19 sets of 40,000 images, each set corresponding to a different rendering sigma as described in the paper.

**Extracting.** To extract the files, execute the following command (under Linux): `cat paper_data.tar.gz.* | tar xzvf -`

**Organisation.** The data are grouped into 19 directories, corresponding to the sigma value they were rendered at. These values are 10, 9, 8.1, 7.29, 6.56, 5.9, 5.31, 4.78, 4.3, 3.87, 3.65, 3.28, 2.95, 2.66, 2.39, 2.15, 1.94, 1.743, 1.57, and 1.41. Each directory contains 40,000 FITS files (FITS is a NASA floating-point image standard). The images are single channel and un-normalised. This structure is ready to be used.

**Recreating the data.** If you have the time and compute power, you can regenerate this data set with as many or as few images as you prefer, at any sigma level. The original experimental data is available at <fill in later>. To recreate the data set you need to download the CEPRender program, available on GitHub: https://github.com/OniDaito/CEPrender - details on how to use this program are available with the code.
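FITS images can be read in Python with `astropy`; a minimal loading sketch (the path into a sigma directory is a placeholder):

```python
from astropy.io import fits  # standard reader for the FITS image format

# Placeholder path into one of the 19 sigma directories.
with fits.open("10/image_00000.fits") as hdul:
    hdul.info()           # list the HDUs contained in the file
    image = hdul[0].data  # single-channel, un-normalised float image

print(image.shape, image.dtype, image.min(), image.max())
```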
Provide a detailed description of the following dataset: SMLM CEP152-Complex FITS Images
IndoNLI
IndoNLI is the first human-elicited NLI dataset for Indonesian consisting of nearly 18K sentence pairs annotated by crowd workers and experts.
Provide a detailed description of the following dataset: IndoNLI
SALAMI
From https://ddmal.music.mcgill.ca/research/SALAMI/: SALAMI is an innovative and ambitious computational musicology project. To date, musical analysis has been conducted by individuals and on a small scale. Our computational approach, combined with the huge volume of data now available from such sources as the Internet Archive, will: a) deliver a very substantive corpus of musical analyses in a common framework for use by music scholars, students, and beyond; and b) establish a methodology and tooling which will enable others to add to this in the future and to broaden the application of the techniques we establish. A resource of SALAMI's magnitude empowers musicologists to approach their work in a new and different way, starting with the data, and to ask research questions that have not been possible before. Several resources are available on the site:
- Annotation data: access the annotation data and learn about how it was collected.
- Blog: updates about SALAMI features and tools; the older posts recount the data collection process.
- Background: an overview of the SALAMI project, consisting mainly of the proposal for the Digging Into Data grant that SALAMI was awarded in 2009.

Through a Digging Into Data grant, this research was supported by the Social Sciences and Humanities Research Council of Canada, by the National Science Foundation, and by JISC.
Provide a detailed description of the following dataset: SALAMI
Lower-limb Kinematics and Kinetics During Continuously Varying Human Locomotion
This dataset reports the lower-limb kinematics and kinetics of ten able-bodied subjects walking at multiple inclines (±0°, 5°, and 10°) and speeds (0.8 m/s, 1 m/s, and 1.2 m/s), running over level ground at multiple speeds (1.8 m/s, 2 m/s, 2.2 m/s, and 2.4 m/s), walking and running with constant acceleration and deceleration (±0.2 and ±0.5 m/s²), and ascending/descending stairs at multiple inclines (±20°, 25°, 30°, and 35°). This dataset also includes sit-stand transitions, walk-run transitions, and walk-stairs transitions. Data were recorded by a Vicon motion capture system and, for applicable tasks, a Bertec instrumented treadmill. This dataset can aid in the development of kinematic models of multi-activity human locomotion and in the design and control of agile wearable robots.
Provide a detailed description of the following dataset: Lower-limb Kinematics and Kinetics During Continuously Varying Human Locomotion
TMBuD
**TMBuD** is a dataset for building recognition and 3D reconstruction of human-made structures in urban scenarios. The dataset features 160 images of buildings from Timişoara, Romania, each with a resolution of 768 x 1024 pixels. The dataset enables proper evaluation of salient edges and semantic segmentation of images from the street-view perspective.
Provide a detailed description of the following dataset: TMBuD
Surgical Hands
Surgical Hands is a dataset that provides multi-instance articulated hand pose annotations for in-vivo videos. The dataset contains 76 video clips from 28 publicly available surgical videos and over 8.1k annotated hand pose instances.
Provide a detailed description of the following dataset: Surgical Hands
A Dataset of Multispectral Potato Plants Images
The dataset contains aerial agricultural images of a potato field with manual labels of healthy and stressed plant regions. The images were collected with a Parrot Sequoia multispectral camera carried by a 3DR Solo drone flying at an altitude of 3 meters. The dataset consists of RGB images with a resolution of 750×750 pixels; spectral monochrome red, green, red-edge, and near-infrared images with a resolution of 416×416 pixels; and XML files with annotated bounding boxes of healthy and stressed potato crop.
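If the XML annotations follow the common Pascal-VOC layout (an assumption; the dataset's actual schema may differ), the bounding boxes can be parsed with the standard library alone:

```python
import xml.etree.ElementTree as ET

def read_boxes(xml_path):
    """Parse (label, xmin, ymin, xmax, ymax) tuples from a VOC-style XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")  # e.g. "healthy" or "stressed"
        bb = obj.find("bndbox")
        boxes.append((
            label,
            int(bb.findtext("xmin")), int(bb.findtext("ymin")),
            int(bb.findtext("xmax")), int(bb.findtext("ymax")),
        ))
    return boxes
```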
Provide a detailed description of the following dataset: A Dataset of Multispectral Potato Plants Images
CIFAR-10N
This work presents two new benchmark datasets, CIFAR-10N and CIFAR-100N, which equip the training datasets of CIFAR-10 and CIFAR-100 with human-annotated, real-world noisy labels collected from Amazon Mechanical Turk.
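A hedged loading sketch: the file name and dictionary keys below follow the authors' public CIFAR-N release as we understand it, so verify them against the actual download:

```python
import torch
from torchvision.datasets import CIFAR10

# Assumed label file from the CIFAR-N release; recent torch versions
# may additionally require torch.load(..., weights_only=False).
noise = torch.load("CIFAR-10_human.pt")
print(noise.keys())  # expected: clean_label, aggre_label, worse_label, random_label1..3

train = CIFAR10(root="./data", train=True, download=True)
train.targets = list(noise["worse_label"])  # swap in one noisy label set
```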
Provide a detailed description of the following dataset: CIFAR-10N