OLGA
The OLGA dataset contains artist similarities from AllMusic, together with content features from AcousticBrainz. With 17,673 artists, this is the largest academic artist similarity dataset that includes content-based features to date.
Provide a detailed description of the following dataset: OLGA
TMED
TMED is a clinically-motivated benchmark dataset for computer vision and machine learning from limited labeled data. Two overall goals inspired this dataset:

1) We wish to improve timely diagnosis and treatment of aortic stenosis (AS), a common degenerative cardiac valve condition. AS is a particularly important condition where automation holds substantial promise. Automated screening for AS can increase referral and treatment rates for patients with this life-threatening condition.

2) We wish to provide an authentic assessment of semi-supervised learning (SSL) methods to the computer vision and ML research community. Especially in medical contexts, labels are often difficult and expensive to acquire. SSL is a promising way to combine a small labeled set (images plus expert annotations) with a large, easy-to-acquire unlabeled set (images only). However, most existing benchmark datasets don't represent the challenges of truly uncurated unlabeled sets in a medical context. We hope our data release catalyzes work on methods for effective multi-task SSL.

The dataset is available for academic use to any researcher who [applies for access](https://TMED.cs.tufts.edu/data_access.html) on our website and agrees to a standard data use agreement (do not share the data with non-approved users, no commercial use, no attempt to reidentify patients, etc.).

### Dataset contents

The TMED dataset contains imagery from 2773 patients and supervised labels for two classification tasks from a *small subset* of 260 patients (because labels are difficult to acquire). All data is **de-identified** and approved for release by our IRB.

Imagery comes from transthoracic echocardiograms acquired in the course of routine care consistent with American Society of Echocardiography (ASE) guidelines, all obtained from 2015-2020 at Tufts Medical Center. When gathering echocardiogram imagery for each patient, a sonographer manipulates a handheld transducer over the patient's chest, manually choosing different acquisition angles in order to fully assess the heart's complex anatomy. This imaging process results in multiple cineloop video clips of the heart depicting various anatomical views. We extract one still image from each available video clip, so each patient study is represented in our dataset as multiple images (typically ~100). Each image is preprocessed to a grayscale 64x64 PNG.

Two kinds of labels are available for the labeled subset of patients:

* View labels (PLAX/PSAX/Other), indicating which standard anatomical view is shown by the image. Each *image* in our fully-labeled set is annotated.
* Diagnostic labels (no AS, mild/moderate AS, severe AS), indicating the severity of disease. Each *patient* in our fully-labeled set is annotated.

For more information, see [our website](https://TMED.cs.tufts.edu/) and our [published paper at MLHC '21](https://TMED.cs.tufts.edu/publications.html)
Provide a detailed description of the following dataset: TMED
DUC 2006
There is currently much interest and activity aimed at building powerful multi-purpose information systems. The agencies involved include DARPA, ARDA and NIST. Their programmes, for example DARPA's TIDES (Translingual Information Detection Extraction and Summarization) programme, ARDA's Advanced Question & Answering Program and NIST's TREC (Text Retrieval Conferences) programme, cover a range of subprogrammes. These focus on different tasks requiring their own evaluation designs. Within TIDES, and among other researchers interested in document understanding, a group grew up which has been focusing on summarization and the evaluation of summarization systems. Part of the initial evaluation for TIDES called for a workshop to be held in the fall of 2000 to explore different ways of summarizing a common set of documents. Additionally, a roadmapping effort was started in March 2000 to lay plans for a long-term evaluation effort in summarization. Out of the initial workshop and the roadmapping effort has grown a continuing evaluation in the area of text summarization called the Document Understanding Conferences (DUC). Sponsored by the Advanced Research and Development Activity (ARDA), the conference series is run by the National Institute of Standards and Technology (NIST) to further progress in summarization and enable researchers to participate in large-scale experiments.
Provide a detailed description of the following dataset: DUC 2006
DUC 2007
There is currently much interest and activity aimed at building powerful multi-purpose information systems. The agencies involved include DARPA, ARDA and NIST. Their programmes, for example DARPA's TIDES (Translingual Information Detection Extraction and Summarization) programme, ARDA's Advanced Question & Answering Program and NIST's TREC (Text Retrieval Conferences) programme, cover a range of subprogrammes. These focus on different tasks requiring their own evaluation designs. Within TIDES, and among other researchers interested in document understanding, a group grew up which has been focusing on summarization and the evaluation of summarization systems. Part of the initial evaluation for TIDES called for a workshop to be held in the fall of 2000 to explore different ways of summarizing a common set of documents. Additionally, a roadmapping effort was started in March 2000 to lay plans for a long-term evaluation effort in summarization. Out of the initial workshop and the roadmapping effort has grown a continuing evaluation in the area of text summarization called the Document Understanding Conferences (DUC). Sponsored by the Advanced Research and Development Activity (ARDA), the conference series is run by the National Institute of Standards and Technology (NIST) to further progress in summarization and enable researchers to participate in large-scale experiments.
Provide a detailed description of the following dataset: DUC 2007
raw gaze data
Data was collected with a Tobii Fusion screen-based eye tracker. This study collected drivers' gaze data by having participants watch dashcam-captured videos of driving scenes in a lab setting. The original videos were downloaded from [https://github.com/Cogito2012/CarCrashDataset](https://github.com/Cogito2012/CarCrashDataset). Each video lasts 5 seconds, and the frame rate of the videos is 10 Hz. The full data and description can be found at [https://github.com/yuli1102/eye_tracker_data](https://github.com/yuli1102/eye_tracker_data)
Provide a detailed description of the following dataset: raw gaze data
EXTREME CLASSIFICATION
The objective in extreme multi-label classification is to learn feature architectures and classifiers that can automatically tag a data point with the most relevant subset of labels from an extremely large label set. This repository provides resources that can be used for evaluating the performance of extreme multi-label algorithms including datasets, code, and metrics. For more details please visit the link http://manikvarma.org/downloads/XC/XMLRepository.html
Provide a detailed description of the following dataset: EXTREME CLASSIFICATION
ImageNet-VidVRD
The ImageNet-VidVRD dataset contains 1,000 videos selected from the ILSVRC2016-VID dataset based on whether the video contains clear visual relations. It is split into a training set of 800 videos and a test set of 200 videos, and covers common subjects/objects of 35 categories and predicates of 132 categories. Ten people contributed to labeling the dataset, which includes object trajectory labeling and relation labeling. Since the ILSVRC2016-VID dataset already has object trajectory annotations for 30 categories, we supplemented the annotations by labeling the remaining 5 categories. To save the labor of relation labeling, we labeled typical segments of the videos in the training set and the whole of the videos in the test set.
Provide a detailed description of the following dataset: ImageNet-VidVRD
VidOR
VidOR (Video Object Relation) dataset contains 10,000 videos (98.6 hours) from YFCC100M collection together with a large amount of fine-grained annotations for relation understanding. In particular, 80 categories of objects are annotated with bounding-box trajectory to indicate their spatio-temporal location in the videos; and 50 categories of relation predicates are annotated among all pairs of annotated objects with starting and ending frame index. This results in around 50,000 object and 380,000 relation instances annotated. To use the dataset for model development, the dataset is split into 7,000 videos for training, 835 videos for validation, and 2,165 videos for testing.
Provide a detailed description of the following dataset: VidOR
Euro-PVI
The Euro-PVI dataset contains trajectories of pedestrians and bicyclists, with dense interactions with the ego-vehicle. The dataset is collected in Brussels and Leuven, Belgium. The goal of this dataset is to address the challenge of future trajectory prediction in urban environments with dense pedestrian (bicyclist) - vehicle interactions.
Provide a detailed description of the following dataset: Euro-PVI
Knot128
Knot128 is a dataset for testing knot untangling algorithms, i.e., highly tangled configurations that can be difficult to smooth out into a canonical knot embedding. Knot128 comprises knots from 128 different isotopy classes; for each class, a tangled embedding and a canonical embedding are provided.
Provide a detailed description of the following dataset: Knot128
Trefoil100
Trefoil100 is a dataset for testing knot untangling algorithms, i.e., highly tangled configurations that can be difficult to smooth out into a canonical knot embedding. Trefoil100 contains 100 tangled embeddings of the [trefoil knot](https://en.wikipedia.org/wiki/Trefoil_knot).
Provide a detailed description of the following dataset: Trefoil100
DONeRF: Evaluation Dataset
This is the dataset for the CGF 2021 paper "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks". Please note the original creators of the individual 3D scenes themselves (individual license files can be found in the individual .zip archives in the dataset):

* Bulldozer by "Heinzelnisse" (CC-BY-NC): https://www.blendswap.com/blend/11490
* Forest by Robin Tran (CC-BY-SA 3.0): https://cloud.blender.org/p/gallery/5fbd186ec57d586577c57417
* Classroom by Christophe Seux (CC-0): https://download.blender.org/demo/test/classroom.zip
* San Miguel by Guillermo M. Leal Llaguno (CC-BY 3.0): https://casual-effects.com/g3d/data10/index.html#
* Pavillon by Hamza Cheggour / "eMirage" (CC-BY): https://download.blender.org/demo/test/pabellon_barcelona_v1.scene_.zip
* Barbershop by Blender Animation Studio (CC-BY): https://svn.blender.org/svnroot/bf-blender/trunk/lib/benchmarks/cycles/barbershop_interior/
Provide a detailed description of the following dataset: DONeRF: Evaluation Dataset
CF-mMIMO data - measurement at USC
This repo contains open-source channel measurement data for research and development purposes. Copyright Thomas Choi, University of Southern California. The data may be used for non-commercial purposes only. Redistribution prohibited. If you use this data for results presented in research papers, please cite as follows: Data were obtained from [Choi2021Using], whose data are available at [WiDeS_Choi2021Using].

[Choi2021Using] T. Choi et al., "Using a drone sounder to measure channels for cell-free massive MIMO systems," arXiv preprint arXiv:2106.15276, 2021.

[WiDeS_Choi2021Using] T. Choi et al., "Open-Source Cell-Free Massive MIMO Channel Data 2020". URL: https://wides.usc.edu/research_matlab.html
Provide a detailed description of the following dataset: CF-mMIMO data - measurement at USC
Bizarre Pose Dataset
Human keypoint dataset of anime/manga-style character illustrations. Extension of the [AnimeDrawingsDataset](https://github.com/dragonmeteor/AnimeDrawingsDataset), with additional features:

* all 17 COCO-compliant human keypoints
* character bounding boxes
* 2000 additional samples (4000 total) from [Danbooru](https://www.gwern.net/Danbooru2020) with difficult tags

Useful for pose estimation of illustrated characters, which enables downstream tasks such as pose-guided reference drawing retrieval (e.g. Hermit Purple).
Provide a detailed description of the following dataset: Bizarre Pose Dataset
Anime Drawings Dataset
A dataset for 2D pose estimation of anime/manga images.
Provide a detailed description of the following dataset: Anime Drawings Dataset
EMOPIA
The EMOPIA (pronounced ‘yee-mò-pi-uh’) dataset is a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, built to facilitate research on various tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs, with clip-level emotion labels annotated by four dedicated annotators.
Provide a detailed description of the following dataset: EMOPIA
ailabs1k7
https://github.com/YatingMusic/compound-word-transformer
Provide a detailed description of the following dataset: ailabs1k7
I2L-140K
Introduced by Singh, Sumeet S., "Teaching Machines to Code: Neural Markup Generation with Visual Attention," arXiv abs/1802.05415 (2018). A prebuilt dataset for OpenAI's image-to-LaTeX task. Includes a total of ~140k formulas and images split into train, validation and test sets. A superset of the im2latex-100K dataset.
Provide a detailed description of the following dataset: I2L-140K
Im2latex-90k
Introduced by Singh, Sumeet S., "Teaching Machines to Code: Neural Markup Generation with Visual Attention," arXiv abs/1802.05415 (2018). A sanitized version of the im2latex-100K dataset (erroneous samples were removed). A prebuilt dataset for OpenAI's image-to-LaTeX task. Includes a total of ~90k formulas and images split into train, validation and test sets. Also see I2L-140K, which is a superset of this dataset.
Provide a detailed description of the following dataset: Im2latex-90k
Interactive Media Experience Click Dataset
The dataset contains summary statistics and engagement metrics captured from users in a live, 'in-the-wild' study of an interactive TV show.
Provide a detailed description of the following dataset: Interactive Media Experience Click Dataset
UAV-based multispectral vineyards
UAS-based multispectral orthomosaics of vineyards from central Portugal:

* 2 distinct vineyards
* multispectral and HD orthomosaics
Provide a detailed description of the following dataset: UAV-based multispectral vineyards
Multispectral and HD vineyard orthomosaics from central Portugal
Multispectral and HD vineyard orthomosaics from central Portugal:

* multispectral and HD orthomosaics
* 2 distinct vineyards
* ground-truth masks for row detection
Provide a detailed description of the following dataset: Multispectral and HD vineyard orthomosaics from central Portugal
AGAR
The Annotated Germs for Automated Recognition (AGAR) dataset is an image database of microbial colonies cultured on agar plates. It contains 18,000 photos of five different microorganisms, taken under diverse lighting conditions with two different cameras. All images are classified as countable, uncountable, or empty, with the countable ones labeled by microbiologists with colony locations and identification of the 5 species (336,442 colonies in total).
Provide a detailed description of the following dataset: AGAR
Kinetics 400
The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human-focused and cover a broad range of classes, including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands.
Provide a detailed description of the following dataset: Kinetics 400
VideoRemoval4K
We provide video sequences with annotated object masks for video inpainting. The resolution is 3840 x 2160.
Provide a detailed description of the following dataset: VideoRemoval4K
I-RAVEN
To fix the defects of the RAVEN dataset, we generate an alternative answer set for each RPM question in RAVEN, forming an improved dataset named Impartial-RAVEN (I-RAVEN for short).
Provide a detailed description of the following dataset: I-RAVEN
MedLEA
The MedLEA package provides morphological and structural features of 471 medicinal plant leaves, along with 1,099 leaf images covering 31 species, with 29-45 images per species.
Provide a detailed description of the following dataset: MedLEA
Swedish Leaf Dataset
A dataset of images containing leaves from 15 tree classes. Image source: [https://www.cvl.isy.liu.se/en/research/datasets/swedish-leaf/](https://www.cvl.isy.liu.se/en/research/datasets/swedish-leaf/)
Provide a detailed description of the following dataset: Swedish Leaf Dataset
Well-being Dataset
The dataset is a private dataset collected for automatic analysis of psychological distress. It contains self-reported distress labels provided by human volunteers. The dataset consists of 30-min interview recordings of participants.
Provide a detailed description of the following dataset: Well-being Dataset
XA Bin-Picking
**XA Bin-Picking** is a point-cloud dataset comprising both simulated and real-world scenes with three industrial parts. The synthesized scenes consist of 1000 training samples; the test samples are real scenes whose ground-truth instance labels were made manually. There are 20 to 30 identical parts randomly piled up in a scene, and each scene contains about 60,000 boundary points. Each point in a scene has an instance annotation. The parts are texture-less and have no discernible color. Both training and test samples contain only the boundary points of the parts.
Provide a detailed description of the following dataset: XA Bin-Picking
German Credit Dataset
Two datasets are provided. The original dataset, in the form provided by Prof. Hofmann, contains categorical/symbolic attributes and is in the file "german.data". For algorithms that need numerical attributes, Strathclyde University produced the file "german.data-numeric". This file has been edited and several indicator variables added to make it suitable for algorithms which cannot cope with categorical variables. Several attributes that are ordered categorical (such as attribute 17) have been coded as integers. This was the form used by StatLog.

This dataset requires use of a cost matrix:

| | Good | Bad |
|---|---|---|
| Good | 0 | 1 |
| Bad | 5 | 0 |

The rows represent the actual classification and the columns the predicted classification. It is worse to class a customer as good when they are bad (5) than it is to class a customer as bad when they are good (1).
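For illustration, here is a minimal sketch of scoring a classifier with this cost matrix; the class encoding (0 = Good, 1 = Bad) and the function name are our own conventions, not part of the dataset:

```python
import numpy as np

# Cost matrix from the description: rows = actual class, columns = predicted.
# Classing a bad customer as good costs 5; the reverse mistake costs 1.
COST = np.array([[0, 1],
                 [5, 0]])

def average_cost(y_true, y_pred):
    """Mean misclassification cost under the dataset's cost matrix."""
    return COST[y_true, y_pred].mean()

# Toy check: one bad-as-good error (cost 5) and one good-as-bad error (cost 1).
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 0, 1])
print(average_cost(y_true, y_pred))  # (0 + 1 + 5 + 0) / 4 = 1.5
```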
Provide a detailed description of the following dataset: German Credit Dataset
Monkey V1 dataset
This dataset is used for neural co-training.

* mtl_monkey_dataset: used for our MTL-Monkey model; involves neural responses that were predicted by a single-task trained model on real monkey V1.
* mtl_oracle_dataset: used for our MTL-Oracle model; involves neural responses that were predicted by our image classification oracle.
* mtl_shuffled_dataset: used for our MTL-Shuffled model; the result of shuffling the mtl_monkey_dataset across images.
Provide a detailed description of the following dataset: Monkey V1 dataset
S&P 500 Intraday Data
##### Technical Information

Dates range from 2017-09-11 to 2018-02-16 and the time interval is 1 minute. This is a MultiIndex CSV file; to load it in pandas use:

`dataset = pd.read_csv('dataset.csv', index_col=0, header=[0, 1]).sort_index(axis=1)`

Stocks that entered or exited the Index during the dataset time range are omitted.

##### Collection & Processing

These are the scripts used for collecting the data, along with utilities to clean and scale the dataset and convert it to a numpy array: https://github.com/nickdl/alpha
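Expanding the loading snippet above into a small sketch of slicing the two-level columns; the level layout (ticker first, price field second) and the names 'AAPL' and 'close' are assumptions about this particular file, not documented facts:

```python
import pandas as pd

# Load the MultiIndex CSV as described above.
dataset = pd.read_csv('dataset.csv', index_col=0, header=[0, 1]).sort_index(axis=1)

# Assuming level 0 is the ticker and level 1 is the price field:
one_stock = dataset['AAPL']                       # every field for one ticker
one_field = dataset.xs('close', axis=1, level=1)  # one field for all tickers
```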
Provide a detailed description of the following dataset: S&P 500 Intraday Data
Crello
The Crello dataset consists of design templates obtained from the online design service crello.com. The dataset contains designs for various display formats, such as social media posts, banner ads, blog headers, or printed posters, all in a vector format. During dataset construction, design templates and associated resources (e.g., linked images) from crello.com were first downloaded. After the initial data acquisition, the data structure was inspected to identify useful vector graphic information in each template. Next, malformed templates and those having more than 50 elements were eliminated, resulting in 23,182 templates. The data was partitioned into 18,768 / 2,315 / 2,278 examples for the train, validation, and test splits.
Provide a detailed description of the following dataset: Crello
World Mortality Dataset
The **World Mortality Dataset** contains weekly, monthly, or quarterly all-cause mortality data from 103 countries and territories. It contains country-level data on all-cause mortality in 2015–2021 collected from various sources.
Provide a detailed description of the following dataset: World Mortality Dataset
HiRID
HiRID is a freely accessible critical care dataset containing data relating to almost 34 thousand patient admissions to the Department of Intensive Care Medicine (ICU) of the Bern University Hospital, Switzerland, an interdisciplinary 60-bed unit admitting >6,500 patients per year. The ICU offers the full range of modern interdisciplinary intensive care medicine for adult patients. The dataset was developed in cooperation between the Swiss Federal Institute of Technology (ETH) Zürich, Switzerland and the ICU. The dataset contains de-identified demographic information and a total of 681 routinely collected physiological variables, diagnostic test results and treatment parameters from almost 34 thousand admissions during the period from January 2008 to June 2016. Data is stored with a uniquely high time resolution of one entry every two minutes.
Provide a detailed description of the following dataset: HiRID
Q-Pain
Q-Pain is a dataset for assessing bias in medical QA in the context of pain management, one of the most challenging forms of clinical decision-making.
Provide a detailed description of the following dataset: Q-Pain
SROIE
SROIE is a dataset of 1000 whole scanned receipt images with annotations for the competition on scanned receipts OCR and key information extraction (SROIE). Image source: [https://arxiv.org/pdf/2103.10213.pdf](https://arxiv.org/pdf/2103.10213.pdf)
Provide a detailed description of the following dataset: SROIE
EPHOIE
EPHOIE is a fully-annotated dataset and the first Chinese benchmark for both text spotting and visual information extraction. It consists of 1,494 images of examination paper heads with complex layouts and backgrounds, including a total of 15,771 Chinese handwritten or printed text instances.
Provide a detailed description of the following dataset: EPHOIE
28 Ghz wireless channel dataset
Our dataset consists of multiple indoor and outdoor experiments with up to a 30 m gNB-UE link. In each experiment, we fixed the location of the gNB and moved the UE in increments of roughly one degree. The accompanying table specifies the direction of user movement with respect to the gNB-UE link, the distance resolution, and the number of user locations for which we conducted channel measurements. The outdoor 30 m data also contains blockage between 3.9 m and 4.8 m. At each location, we scan the transmission beam and collect data for each beam. By doing so, we can get the full OFDM channels for different locations along the moving trajectory with all the beam angles. Moreover, we use 240 kHz subcarrier spacing, which is consistent with the 5G NR numerology at FR2, so the data we collect will be a true reflection of what a 5G UE will see.
Provide a detailed description of the following dataset: 28 Ghz wireless channel dataset
Mechanical MNIST Crack Path
The Mechanical MNIST Crack Path dataset contains Finite Element simulation results from phase-field models of quasi-static brittle fracture in heterogeneous material domains subjected to prescribed loading and boundary conditions. For all samples, the material domain is a square with a side length of $1$. There is an initial crack of fixed length ($0.25$) on the left edge of each domain. The bottom edge of the domain is fixed in $x$ (horizontal) and $y$ (vertical), the right edge of the domain is fixed in $x$ and free in $y$, and the left edge is free in both $x$ and $y$. The top edge is free in $x$, and in $y$ it is displaced such that, at each step, the displacement increases linearly from zero at the top right corner to the maximum displacement on the top left corner. Maximum displacement starts at $0.0$ and increases to $0.02$ by increments of $0.0001$ ($200$ simulation steps in total).

The heterogeneous material distribution is obtained by adding rigid circular inclusions to the domain using the Fashion MNIST bitmaps as the reference locations for the centers of the inclusions. Specifically, each center point location is generated randomly inside a square region defined by the corresponding Fashion MNIST pixel when the pixel has an intensity value higher than $10$. In addition, a minimum center-to-center distance limit of $0.0525$ is applied while generating these center points for each sample. The values of Young's Modulus $(E)$, Fracture Toughness $(G_f)$, and Failure Strength $(f_t)$ near each inclusion are increased with respect to the background domain by a variable rigidity ratio $r$. The background value for $E$ is $210000$, the background value for $G_f$ is $2.7$, and the background value for $f_t$ is $2445.42$. The rigidity ratio throughout the domain depends on position with respect to all inclusion centers, such that the closer a point is to an inclusion center, the higher the rigidity ratio will be. We note that the full algorithm for constructing the heterogeneous material property distribution is included in the simulation scripts shared on GitHub.

The following information is included in our dataset: (1) a rigidity ratio array capturing the heterogeneous material distribution, reported over a uniform $64\times64$ grid; (2) the damage field at the final level of applied displacement, reported over a uniform $256\times256$ grid; and (3) the force-displacement curves for each simulation.

All simulations are conducted with the FEniCS computing platform (https://fenicsproject.org). The code to reproduce these simulations is hosted on GitHub (https://github.com/saeedmhz/phase-field).
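A minimal sketch of the loading schedule and the inclusion-center rule described above; the bitmap-to-domain orientation and the function name are illustrative assumptions, and the authoritative algorithm is in the GitHub simulation scripts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Loading schedule: maximum displacement ramps from 0.0 to 0.02 in
# increments of 0.0001 (200 simulation steps in total).
displacements = np.arange(1, 201) * 0.0001

# Candidate inclusion centers: one random point inside the square region of
# each Fashion MNIST pixel whose intensity exceeds 10 (28x28 bitmap mapped
# onto the unit square; the row/column orientation here is an assumption).
def candidate_centers(bitmap):
    cell = 1.0 / bitmap.shape[0]
    rows, cols = np.nonzero(bitmap > 10)
    jitter = rng.uniform(0.0, cell, size=(rows.size, 2))
    # The 0.0525 minimum center-to-center distance filter is omitted here.
    return np.column_stack([cols, rows]) * cell + jitter
```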
Provide a detailed description of the following dataset: Mechanical MNIST Crack Path
Mechanical MNIST
Each dataset in the Mechanical MNIST collection contains the results of 70,000 (60,000 training examples + 10,000 test examples) finite element simulations of a heterogeneous material subject to large deformation. Mechanical MNIST is generated by first converting the MNIST bitmap images (http://www.pymvpa.org/datadb/mnist.html) to 2D heterogeneous blocks of material. Consistent with the MNIST bitmap ($28 \times 28$ pixels), the material domain is a $28 \times 28$ unit square. All simulations are conducted with the FEniCS computing platform (https://fenicsproject.org). The code to reproduce these simulations is hosted on GitHub (https://github.com/elejeune11/Mechanical-MNIST/tree/master/generate_dataset). The paper "Mechanical MNIST: A benchmark dataset for mechanical metamodels" can be found at https://doi.org/10.1016/j.eml.2020.100659. All code necessary to reproduce the metamodels demonstrated in the manuscript is available on GitHub (https://github.com/elejeune11/Mechanical-MNIST). For questions, please contact Emma Lejeune (elejeune@bu.edu).
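As a hedged illustration of the bitmap-to-material conversion mentioned above; the linear scaling rule and the modulus range below are placeholders of our own, not the values used in the paper (see the GitHub generation scripts for the actual mapping):

```python
import numpy as np

def bitmap_to_modulus(bitmap, e_min=1.0, e_max=100.0):
    """Map a 28x28 MNIST bitmap (uint8) to a per-pixel stiffness field.

    Both the linear scaling and the modulus range are illustrative
    assumptions; the real mapping is defined in the dataset's scripts.
    """
    return e_min + (bitmap.astype(float) / 255.0) * (e_max - e_min)
```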
Provide a detailed description of the following dataset: Mechanical MNIST
CIRR
**Composed Image Retrieval** (or, **Image Retrieval conditioned on Language Feedback**) is a relatively new retrieval task, where an input query consists of an image and a short textual description of how to modify the image. For humans, the advantage of a bi-modal query is clear: some concepts and attributes are more succinctly described visually, others through language. By cross-referencing the two modalities, a reference image can capture the general gist of a scene, while the text can specify finer details. We identify a major challenge of this task as the inherent ambiguity in knowing what information is important (typically one object of interest in the scene) and what can be ignored (e.g., the background and other irrelevant objects). We release the first dataset of open-domain, real-life images with human-generated modification sentences, which supports research on one-shot composed image retrieval, dialogue systems, fine-grained visiolinguistic reasoning, and more.
Provide a detailed description of the following dataset: CIRR
CWRU Bearing Dataset
Data was collected for normal bearings, single-point drive end and fan end defects. Data was collected at 12,000 samples/second and at 48,000 samples/second for drive end bearing experiments. All fan end bearing data was collected at 12,000 samples/second.
Provide a detailed description of the following dataset: CWRU Bearing Dataset
KanHope
KanHope is a code mixed hope speech dataset for equality, diversity, and inclusion in Kannada, an under-resourced Dravidian language. The dataset consists of 6,176 user-generated comments in code mixed Kannada crawled from YouTube and manually labelled as bearing hope speech or not-hope speech.
Provide a detailed description of the following dataset: KanHope
BEOID
The BEOID dataset includes object interactions ranging from preparing a coffee to operating a weight lifting machine and opening a door. The dataset is recorded at six locations: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym, and weight-lifting machine. For the first four locations, sequences from five different operators were recorded (two sequences per operator), and from three operators for the last two locations (three sequences per operator). The wearable gaze tracker hardware (ASL Mobile Eye XG) was used to record the dataset. Synchronized wide-lens video data with calibrated 2D gaze fixations are available. Moreover, we release 3D information using a pre-built point-cloud map and PTAM tracking. Three-dimensional information of the image and the gaze fixations is included.
Provide a detailed description of the following dataset: BEOID
BIDCD
Bosch Industrial Depth Completion Dataset (BIDCD) is an RGBD dataset of static table-top scenes with industrial objects. The data was collected with a RealSense depth camera mounted on a robotic arm, i.e., from multiple Points-of-View (POV), approximately 60 for each scene. We generated depth ground truth with a customized pipeline that removes erroneous depth values, and applied multi-view geometry to fuse the cleaned depth frames and fill in missing information. The fused scene mesh was back-projected to each POV, and finally a bilateral filter was applied to reduce the remaining holes. For each scene we provide RGB, raw depth, and ground-truth depth. We also provide a corresponding file system with 3D information: our fused meshes, camera poses, and camera parameters. A simpler dataset with a Single Item (SI) in each scene is also provided, using fewer POVs, approximately 4 for each scene.
Provide a detailed description of the following dataset: BIDCD
VR traffic traces
The dataset contains traffic traces collected from 3 different VR applications. Researchers can use this dataset to replicate the behavior of real VR traffic directly in their studies, e.g., their simulations. Further information can be found in the repository.
Provide a detailed description of the following dataset: VR traffic traces
The Boston Housing Dataset
This dataset contains information collected by the U.S. Census Service concerning housing in the area of Boston, Mass. It was obtained from the StatLib archive (http://lib.stat.cmu.edu/datasets/boston), and has been used extensively throughout the literature to benchmark algorithms. However, these comparisons were primarily done outside of Delve and are thus somewhat suspect. The dataset is small in size, with only 506 cases.
Provide a detailed description of the following dataset: The Boston Housing Dataset
DeliData
DeliData is the first publicly available dataset containing collaborative conversations on solving a cognitive task, consisting of 500 group dialogues and 14k utterances.
Provide a detailed description of the following dataset: DeliData
IPAC
IPAC (Icelandic Parallel Abstracts Corpus) is a new Icelandic-English parallel corpus composed of abstracts from student theses and dissertations. The texts were collected from the Skemman repository, which keeps records of all theses, dissertations and final projects from students at Icelandic universities. The corpus was aligned based on sentence-level BLEU scores, in both translation directions, from NMT models using Bleualign. The result is a corpus of 64k sentence pairs from over 6,000 parallel abstracts.
Provide a detailed description of the following dataset: IPAC
FakeAVCeleb
FakeAVCeleb is a novel audio-video deepfake dataset that contains not only deepfake videos but also the respective synthesized cloned audio. Image source: [https://arxiv.org/pdf/2108.05080v1.pdf](https://arxiv.org/pdf/2108.05080v1.pdf)
Provide a detailed description of the following dataset: FakeAVCeleb
MuSiQue-Ans
MuSiQue-Ans is a new multihop QA dataset with ~25K 2-4 hop questions using seed questions from 5 existing single-hop datasets.
Provide a detailed description of the following dataset: MuSiQue-Ans
Bambara Language Dataset
A Bambara dialectal dataset dedicated to sentiment analysis, freely available for Natural Language Processing research purposes.
Provide a detailed description of the following dataset: Bambara Language Dataset
TrUMAn
Trope Understanding in Movies and Animations (TrUMAn) is a dataset intending to evaluate and develop learning systems beyond visual signals.
Provide a detailed description of the following dataset: TrUMAn
FoodLogoDet-1500
FoodLogoDet-1500 is a new large-scale publicly available food logo dataset, which has 1,500 categories, about 100,000 images and about 150,000 manually annotated food logo objects.
Provide a detailed description of the following dataset: FoodLogoDet-1500
COMPARE
COMPARE is a taxonomy and a dataset of comparison discussions in peer reviews of research papers in the domain of experimental deep learning.
Provide a detailed description of the following dataset: COMPARE
CirCor DigiScope
CirCor DigiScope is currently the largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215,780 heart sounds have been manually annotated. Each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading and quality.
Provide a detailed description of the following dataset: CirCor DigiScope
Dense Fog
We introduce an object detection dataset in challenging adverse weather conditions covering 12000 samples in real-world driving scenes and 1500 samples in controlled weather conditions within a fog chamber. The dataset includes different weather conditions like fog, snow, and rain and was acquired over more than 10,000 km of driving in northern Europe. In total, 100k objects were labeled with accurate 2D and 3D bounding boxes. The main contributions of this dataset are:

- We provide a proving ground for a broad range of algorithms covering signal enhancement, domain adaptation, object detection, or multi-modal sensor fusion, focusing on the learning of robust redundancies between sensors, especially if they fail asymmetrically in different weather conditions.
- The dataset was created with the initial intention to showcase methods which learn robust redundancies between the sensors and enable raw-data sensor fusion in case of asymmetric sensor failure induced by adverse weather effects.
- In our case we departed from proposal-level fusion and applied an adaptive fusion driven by measurement entropy, enabling detection also in case of unknown adverse weather effects. This method outperforms other reference fusion methods, which even drop below single-image methods.
- Please check out our paper for more information.
Provide a detailed description of the following dataset: Dense Fog
OpenStreetMap Multi-Sensor Scene Classification
A high-resolution multi-sensor remote sensing scene classification dataset, appropriate for training and evaluating image classification models in the remote sensing domain. The dataset consists of 8400 overhead scenes, each covered by Airbus Pléiades, Airbus SPOT, and USDA NAIP imagery. The scenes are classified into 12 OpenStreetMap categories:

* man_made=bridge
* man_made=breakwater
* building=farm
* power=substation
* leisure=stadium
* leisure=golf_course
* waterway=dam
* landuse=quarry
* landuse=farmland
* landuse=forest
* natural=water
* natural=bare_rock
Provide a detailed description of the following dataset: OpenStreetMap Multi-Sensor Scene Classification
ICFG-PEDES
ICFG-PEDES is a large-scale database for Text-to-Image Person Re-identification, i.e., text-based person retrieval. Compared with existing databases, ICFG-PEDES has three key advantages. First, its textual descriptions are identity-centric and fine-grained. Second, the images included in ICFG-PEDES are more challenging, containing more appearance variability due to the presence of complex backgrounds and variable illumination. Third, the scale of ICFG-PEDES is larger.
Provide a detailed description of the following dataset: ICFG-PEDES
Two-probe macaque monkey auditory LFP
Dataset accompanying the paper Klein, N., Siegle, J.H., Teichert, T., Kass, R.E. (2021) "Cross-population coupling of neural activity based on Gaussian process current source densities". Auditory local field potential (LFP) recordings and evoked multi-unit activity (MUA) from two 24-electrode linear probes (V-Probes from Plexon) inserted in primary auditory cortex of a macaque monkey. The probes were arranged parallel to the iso-frequency bands in primary auditory cortex (A1), and had similar tonal response fields with preferred frequencies close to 1000 Hz. The first probe (which we call the lateral probe) was located centrally in A1, while the second probe (which we call the medial probe) was located more medially and closer to the boundary of A1 with the medio-lateral belt. The medial probe had a lower response threshold, shorter MUA latencies, and overall stronger current sinks and sources than the lateral probe. The spacing between electrodes on each probe was 100 microns, so each probe spanned 2,300 microns. The treatment of the animals was in accordance with the guidelines set by the U.S. Department of Health and Human Services (NIH) for the care and use of laboratory animals, and all methods were approved by the Institutional Animal Care and Use Committee at the University of Pittsburgh.
Provide a detailed description of the following dataset: Two-probe macaque monkey auditory LFP
Neuropixels single-mouse LFP data
Single-mouse Neuropixels recordings (spikes and LFPs) in NWB format. Dataset used in the paper "Cross-population coupling of neural activity based on Gaussian process current source densities" by Klein, N., Siegle, J.H., Teichert, T., and Kass, R.E. (preprint: https://arxiv.org/abs/2104.10070). The data is part of the Allen Brain Observatory Neuropixels dataset (©2019 Allen Institute for Brain Science, available from https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels). Six Neuropixels probes were simultaneously inserted through visual cortex, hippocampus, thalamus, and midbrain. On each probe, LFP data was recorded from up to 374 electrode locations in a checkerboard layout spanning two spatial dimensions: four columns spaced 16 microns apart, with 20 micron spacing between rows. LFP data was acquired at 2500 Hz after applying a 1000 Hz low-pass filter. Boundaries between regions were manually identified based on decreases in unit density as well as physiological signatures (such as elevated theta-band activity in the hippocampus). LFP electrodes without region labels were not included in the analysis. Spike trains and downsampled LFP data for this mouse (subject ID: 730760270; session ID: 755434585) can be accessed via the AllenSDK or via the DANDI Archive https://dandiarchive.org/dandiset/000021/draft. The original LFP data used for this analysis is available as part of the Allen Brain Observatory AWS Public Data Set https://registry.opendata.aws/allen-brain-observatory/.
Provide a detailed description of the following dataset: Neuropixels single-mouse LFP data
SPACE
**SPACE** is a simulator for physical interactions and causal learning in 3D environments. The SPACE simulator is used to generate the SPACE dataset, a synthetic video dataset in a 3D environment, to systematically evaluate physics-based models on a range of physical causal reasoning tasks. Inspired by daily object interactions, the SPACE dataset comprises videos depicting three types of physical events: containment, stability and contact.
Provide a detailed description of the following dataset: SPACE
HatemojiCheck
**HatemojiCheck** is a test suite of 3,930 test cases for detecting emoji-based hate, covering seven functionalities of emoji-based hate and six identities.
Provide a detailed description of the following dataset: HatemojiCheck
WikiScenes
The **WikiScenes** dataset consists of paired images and language descriptions capturing world landmarks and cultural sites, with associated 3D models and camera poses. WikiScenes is derived from the massive public catalog of freely-licensed crowdsourced data in the Wikimedia Commons project, which contains a large variety of images with captions and other metadata. The dataset contains two forms of textual descriptions for each image: (1) captions associated with images, describing the image using free-form language, and (2) the WikiCategory hierarchy, obtained from the hierarchy of WikiCategories associated with each image. Overall, WikiScenes contains approximately 63K images with textual descriptions.
Provide a detailed description of the following dataset: WikiScenes
ACS PUMS
**ACS PUMS** stands for American Community Survey (ACS) Public Use Microdata Sample (PUMS) and has been used to construct several tabular datasets for studying fairness in machine learning (a loading sketch follows the list):

- ACSIncome: to predict whether an individual’s income is above $50,000.
- ACSPublicCoverage: to predict whether an individual is covered by public health insurance.
- ACSMobility: to predict whether an individual had the same residential address one year ago.
- ACSEmployment: to predict whether an individual is employed.
- ACSTravelTime: to predict whether an individual has a commute to work that is longer than 20 minutes.
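One common way to load these tasks is the folktables Python package, which implements them on top of the raw ACS PUMS release; a minimal sketch, assuming folktables is installed and its documented helper names are unchanged:

```python
# pip install folktables
from folktables import ACSDataSource, ACSIncome

# Download 2018 1-Year person-level PUMS records for one state.
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# Features, binary label (income above $50,000), and group attribute.
features, labels, group = ACSIncome.df_to_numpy(acs_data)
```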
Provide a detailed description of the following dataset: ACS PUMS
Computer Vision Values Dataset
This is a corpus of about 500 computer vision datasets, from which the authors sampled 114 dataset publications across different vision tasks and coded them for themes through both structured and qualitative content analysis. This work most closely pairs with the following research question: how do dataset developers in CV and NLP research describe and motivate the decisions that go into their creation?
Provide a detailed description of the following dataset: Computer Vision Values Dataset
PADv2
With complex scenes and rich annotations, the **PADv2** dataset can be used as a test bed to benchmark affordance detection methods and may also facilitate downstream vision tasks, such as scene understanding, action recognition, and robot manipulation. It contains 30k diverse images covering 39 affordance categories as well as 103 object categories from different scenes.
Provide a detailed description of the following dataset: PADv2
CDR
The BioCreative V CDR task corpus is manually annotated for chemicals, diseases and chemical-induced disease (CID) relations. It contains the titles and abstracts of 1500 PubMed articles and is split into equally sized train, validation and test sets. It is common to first tune a model on the validation set and then train on the combination of the train and validation sets before evaluating on the test set. It is also common to filter out negative relations whose disease entities are hypernyms of a corresponding true relation's disease entity within the same abstract (see Appendix C of [this paper](https://api.semanticscholar.org/CorpusID:247939302) for details).
Provide a detailed description of the following dataset: CDR
Screen2Words
**Screen2Words** is a large-scale screen summarization dataset annotated by human workers. The dataset contains more than 112k language summaries across 22k unique UI screens. This dataset can be used for mobile user interface summarization, a task in which a model generates succinct language descriptions of mobile screens, conveying the important contents and functionalities of the screen.
Provide a detailed description of the following dataset: Screen2Words
Stereo Waterdrop
**Stereo Waterdrop** is a real-world dataset for research on stereo waterdrop removal. The dataset contains 837 stereo image pairs captured from 129 indoor and outdoor scenes with various waterdrops, disparities, and illumination conditions. We use the ZED 2 stereo camera for data collection.
Provide a detailed description of the following dataset: Stereo Waterdrop
KTH-TIPS2
The KTH-TIPS (Textures under varying Illumination, Pose and Scale) image database was created to extend the CUReT database in two directions: by providing variations in scale as well as pose and illumination, and by imaging other samples of a subset of its materials in different settings. The KTH-TIPS2 database took this a step further by imaging 4 different samples of 11 materials, each under varying pose, illumination and scale. For more information, please see the accompanying documentation; the databases are available for download from the project website.
Provide a detailed description of the following dataset: KTH-TIPS2
VitaminC
The VitaminC dataset contains more than 450,000 claim-evidence pairs for fact verification and factually consistent generation. It is based on over 100,000 revisions to popular Wikipedia pages, plus additional "synthetic" revisions.
Provide a detailed description of the following dataset: VitaminC
SWSR
The Sina Weibo Sexism Review (SWSR) dataset is a dataset to research online sexism in Chinese. The SWSR dataset provides labels at different levels of granularity including (i) sexism or non-sexism, (ii) sexism category and (iii) target type, which can be exploited, among others, for building computational methods to identify and investigate finer-grained gender-related abusive language.
Provide a detailed description of the following dataset: SWSR
iGibson 2.0
**iGibson 2.0** is an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked. Additionally, given a logic state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite instances of tasks with minimal effort from the users. The sampling mechanism allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.
Provide a detailed description of the following dataset: iGibson 2.0
gENder-IT
**gENder-IT** is an English-Italian challenge set focusing on the resolution of natural gender phenomena by providing word-level gender tags on the English source side and multiple gender alternative translations, where needed, on the Italian target side.
Provide a detailed description of the following dataset: gENder-IT
WebFG-496
**WebFG-496** is a dataset for fine-grained recognition that contains 200 subcategories of "Bird" (Web-bird), 100 subcategories of "Aircraft" (Web-aircraft), and 196 subcategories of "Car" (Web-car). It has a total of 53,339 web training images.
Provide a detailed description of the following dataset: WebFG-496
WDC-Dialogue
**WDC-Dialogue** is a dataset built from the Chinese social media to train EVA. Specifically, conversations from various sources are gathered and a rigorous data cleaning pipeline is designed to enforce the quality of WDC-Dialogue. The dataset mainly focuses on three categories of textual interaction data, i.e., repost on social media, comment / reply on various online forums and online question and answer (Q&A) exchanges. Each round of these textual interactions yields a dialogue session via well-designed parsing rules.
Provide a detailed description of the following dataset: WDC-Dialogue
InferWiki
**InferWiki** is a Knowledge Graph Completion (KGC) dataset that improves upon existing benchmarks in inferential ability, assumptions, and patterns. First, each testing sample is predictable with supportive data in the training set. Second, InferWiki initiates the evaluation following the open-world assumption and improves the inferential difficulty of the closed-world assumption, by providing manually annotated negative and unknown triples. Third, the dataset includes various inference patterns (e.g., reasoning path length and types) for comprehensive evaluation.
Provide a detailed description of the following dataset: InferWiki
Marine Debris Turntable
**Marine Debris Turntable** is a dataset for sonar perception.
Provide a detailed description of the following dataset: Marine Debris Turntable
RareDis corpus
The **RareDis** corpus contains annotations of more than 5,000 rare diseases and almost 6,000 clinical manifestations. Moreover, the Inter-Annotator Agreement evaluation shows a relatively high agreement (F1-measure equal to 83.5% under exact match criteria for the entities and 81.3% for the relations). Based on these results, this corpus is of high quality, representing a significant step for the field, since there is a scarcity of available corpora annotated with rare diseases.
Provide a detailed description of the following dataset: RareDis corpus
WikiChurches
WikiChurches is a dataset for architectural style classification, consisting of 9,485 images of church buildings. Both images and style labels were sourced from Wikipedia. The dataset can serve as a benchmark for various research fields, as it combines numerous real-world challenges: fine-grained distinctions between classes based on subtle visual features, a comparatively small sample size, a highly imbalanced class distribution, a high variance of viewpoints, and a hierarchical organization of labels, where only some images are labeled at the most precise level. In addition, we provide 631 bounding box annotations of characteristic visual features for 139 churches from four major categories. These annotations can, for example, be useful for research on fine-grained classification, where additional expert knowledge about distinctive object parts is often available.
Provide a detailed description of the following dataset: WikiChurches
HVIS Dataset
We propose a new benchmark called Human Video Instance Segmentation (HVIS), which focuses on complex real-world scenarios with sufficient human instance masks and identities. Our dataset contains 805 videos with 1447 human instances annotated in detail. It also includes various overlapping scenes, making it the most challenging human-related video dataset.
Provide a detailed description of the following dataset: HVIS Dataset
Raw data for NMR-POISE
The NMR-POISE paper can be found at: Anal. Chem. 2021, 93 (31), 10735–10739 (DOI: 10.1021/acs.analchem.1c01767). The majority of one- and multi-dimensional NMR experiments, indispensable to chemists in many areas of research, are often run with generic or "compromise" parameter values that are not optimised. This is particularly problematic when robust, automated acquisition on a variety of samples is desired. Here we present a Python package, NMR-POISE (Parameter Optimisation by Iterative Spectral Evaluation), with full integration into Bruker’s TopSpin software, that utilises feedback control for on-the-fly, sample-tailored optimisation of NMR experiments. POISE provides a highly extensible and user-friendly framework which allows its core optimisation algorithms to be implemented in a wide variety of scenarios. The data attached herein provide examples of optimisation procedures where POISE can be used to great effect. The raw NMR data is attached here, together with all of the scripts used for processing and plotting this data (which can be used to directly regenerate the figures in the manuscript). The raw NMR data is in the "datasets" directory, and the processing scripts in the "figures" directory. The scripts can be run as long as this directory structure is maintained, but require v0.4.1 of the "penguins" Python package: this can be installed using the command "pip install penguins==0.4.1" (without quotes). Please refer to the Supporting Information of the POISE paper for more details, including a full description of the individual datasets.
Provide a detailed description of the following dataset: Raw data for NMR-POISE
MobIE
**MobIE** is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities. The dataset consists of 3,232 social media texts and traffic reports with 91K tokens, and contains 20.5K annotated entities, 13.1K of which are linked to a knowledge base. A subset of the dataset is human-annotated with seven mobility-related, n-ary relation types, while the remaining documents are annotated using a weakly-supervised labeling approach implemented with the Snorkel framework. The dataset can be used for NER (Named entity recognition), EL (entity linking) and RE (relation extraction), and thus can be used for joint and multi-task learning of these fundamental information extraction tasks.
Provide a detailed description of the following dataset: MobIE
Who’s Waldo
**Who's Waldo** is a dataset of 270K image–caption pairs, depicting interactions of people, that is automatically mined from Wikimedia Commons. It is a benchmark dataset for person-centric visual grounding, the problem of linking between people named in a caption and people pictured in an image.
Provide a detailed description of the following dataset: Who’s Waldo
TUM-VIE
**TUM-VIE** is an event camera dataset for developing 3D perception and navigation algorithms. It contains handheld and head-mounted sequences in indoor and outdoor environments with rapid motion during sports and high dynamic range. TUM-VIE includes challenging sequences where state-of-the-art VIO fails or results in large drift. Hence, it can help to push the boundary of event-based visual-inertial algorithms.
Provide a detailed description of the following dataset: TUM-VIE
AutoChart
**AutoChart** is a dataset for chart-to-text generation, a task that consists of generating analytical descriptions of visual plots.
Provide a detailed description of the following dataset: AutoChart
DAHLIA
The DAHLIA dataset [1] is devoted to human activity recognition, which is a major issue for adapting smart-home services such as user assistance. DAHLIA was realized on the Mobile Mii Platform by CEA LIST, and has been partly supported by the ITEA 3 Emospaces Project (https://itea3.org/project/emospaces.html). Videos were recorded in realistic conditions, with 3 Kinect v2 sensors located as they would be in a real context. The long-range activities were performed in an unconstrained way (participants received only a few instructions), and in a continuous (untrimmed) sequence, resulting in long videos (40 min on average per subject). Contrary to previously published databases, in which labeled actions are very short and have a low semantic level, this new database focuses on high-level semantic activities such as "Preparing lunch" or "House Working". [1] G. Vaquette, A. Orcesi, L. Lucat and C. Achard, "The DAily Home LIfe Activity Dataset: A High Semantic Activity Dataset for Online Recognition," 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 2017, pp. 497-504, doi: 10.1109/FG.2017.67.
Provide a detailed description of the following dataset: DAHLIA
PIDray
**PIDray** is a large-scale dataset which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. The dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes.
Provide a detailed description of the following dataset: PIDray
CallOptionBSM
This dataset collects 88,077 numerical samples of call options on the Shanghai Stock Exchange from 2015-02 to 2020-07. After pre-processing, 83,427 samples remain in the data set. This data set records only the original quotations of call options on the Shanghai Stock Exchange, and does not include derivative indicators published by stock brokerage firms.
Provide a detailed description of the following dataset: CallOptionBSM
Dataset to "Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments"
This is the dataset to "Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments" [In ACM Internet Measurement Conference (IMC ’20)]. It contains our weekly scanning results between 2020-02-09 and 2020-08-31 compiled using our zgrab2 extensions, i.e., it contains an Internet-wide view on OPC UA deployments and their security configurations. To compile the dataset, we anonymized the output of zgrab2, i.e., we removed host and network identifiers from that dataset. More precisely, we mapped all IP addresses, fully qualified hostnames, and autonomous system IDs to numbers, and removed certificates containing any identifiers. See the README file for more information. Using this dataset, we showed that 93% of Internet-facing OPC UA deployments have problematic security configurations, e.g., missing access control (on 24% of hosts), disabled security functionality (24%), or use of deprecated cryptographic primitives (25%). Furthermore, we discovered several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, with the analysis of this dataset we underpinned that secure protocols, in general, are no guarantee of secure deployments: they need to be configured correctly, following regularly updated guidelines that account for basic primitives losing their security promises.
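A minimal sketch of the kind of identifier pseudonymization described above, mapping each distinct IP address, hostname, or AS ID to a number; the actual pipeline is the authors' zgrab2 tooling, and whether the mapping is shared across the weekly scans is an assumption here:

```python
from collections import defaultdict
from itertools import count

# Every distinct identifier gets the next number on first sight, and the
# same number on every later occurrence (consistency is assumed here).
_counter = count()
pseudonyms = defaultdict(lambda: next(_counter))

print(pseudonyms["198.51.100.7"])  # 0
print(pseudonyms["AS64500"])       # 1
print(pseudonyms["198.51.100.7"])  # 0 -> same identifier, same number
```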
Provide a detailed description of the following dataset: Dataset to "Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments"
MHMD
**MHMD** (Modern Historical Movies Dataset) is a dataset for old image colorization, built from historical movies. It consists of 1,353,166 images and 42 labels of eras, nationalities, and garment types for automatic colorization, drawn from 147 historical movies or TV series made in modern times.
Provide a detailed description of the following dataset: MHMD
DensePASS
DensePASS is a novel densely annotated dataset for panoramic segmentation under cross-domain conditions, specifically built to study the pinhole-to-panoramic transfer and accompanied by pinhole camera training examples obtained from Cityscapes. DensePASS covers both labelled and unlabelled 360-degree images, with the labelled data comprising 19 classes which explicitly fit the categories available in the source-domain (i.e. pinhole) data.
Provide a detailed description of the following dataset: DensePASS
STN PLAD
STN PLAD is a high-resolution and real-world image dataset of multiple high-voltage power line components. It has 2,409 annotated objects divided into five classes: transmission tower, insulator, spacer, tower plate, and Stockbridge damper, which vary in size (resolution), orientation, illumination, angulation, and background.

## Properties

- Image size: 5472×3078 or 5472×3648
- Total images: 133
- Total instances: 2409
- Average instances per image: 18.1
- Nº of object classes (different assets): 5

![](https://i.imgur.com/HzdL7bF.png?1)

## Abstract

Many power line companies are using UAVs to perform their inspection processes instead of putting their workers at risk by making them climb high-voltage power line towers, for instance. A crucial task for the inspection is to detect and classify assets in the power transmission lines. However, public data related to power line assets are scarce, preventing a faster evolution of this area. This work proposes the Power Line Assets Dataset, containing high-resolution and real-world images of multiple high-voltage power line components. It has 2,409 annotated objects divided into five classes: transmission tower, insulator, spacer, tower plate, and Stockbridge damper, which vary in size (resolution), orientation, illumination, angulation, and background. This work also presents an evaluation with popular deep object detection methods, showing considerable room for improvement.

## Baseline results

mAP: 89.2%

| Assets | Average Precision |
|--------------------|-----------------|
| Transmission tower | 0.900 |
| Insulator | 0.894 |
| Spacer | 0.856 |
| Tower plate | 0.971 |
| Stockbridge damper | 0.838 |
| **mean** | **0.892** |
Provide a detailed description of the following dataset: STN PLAD
DocBank-TB
This dataset consists of 500 sets of caption, table, and corresponding paper page, processed from [DocBank](docbank).
Provide a detailed description of the following dataset: DocBank-TB
Twitter Sentiment Analysis
This is an **entity-level** Twitter sentiment analysis dataset. For each message, the task is to judge the sentiment of the entire sentence towards a given entity. For example, "A outperforms B" is **positive** for entity **A** but **negative** for entity **B**. The dataset contains ~70K labeled training messages and 1K labeled validation messages. It is available online for free on Kaggle.
Provide a detailed description of the following dataset: Twitter Sentiment Analysis
NASA C-MAPSS
Engine degradation simulation was carried out using C-MAPSS. Four different sets were simulated under different combinations of operational conditions and fault modes. Each simulation records several sensor channels to characterize fault evolution. The data set was provided by the Prognostics CoE at NASA Ames.
Provide a detailed description of the following dataset: NASA C-MAPSS
HiFiMask
**HiFiMask** is a large-scale High-Fidelity Mask dataset, namely CASIA-SURF HiFiMask (briefly HiFiMask). It contains a total of 54,600 videos recorded from 75 subjects with 225 realistic masks, captured by 7 new kinds of sensors.
Provide a detailed description of the following dataset: HiFiMask