Columns: dataset_name (string, 2 to 128 characters), description (string, 1 to 9.7k characters), prompt (string, 59 to 185 characters)
MetaGraspNet 1
There has been increasing interest in smart factories powered by robotic systems to tackle repetitive, laborious tasks. One particularly impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, and pick planning. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a big challenge remains in the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we are inspired by the recent rise of the metaverse concept, which has greatly closed the gap between virtual worlds and the physical world. In particular, metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. We present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types, and is split into 5 difficulty levels to evaluate object detection and segmentation model performance in different grasping scenarios. We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance in a manner that is more appropriate for robotic grasp applications than existing general-purpose performance metrics. This repository contains the first phase of the MetaGraspNet benchmark dataset, which includes detailed object detection, segmentation, and layout annotations, as well as a script for the layout-weighted performance metric (https://github.com/y2863/MetaGraspNet).
Provide a detailed description of the following dataset: MetaGraspNet 1
MetaGraspNet 2
There has been increasing interest in smart factories powered by robotic systems to tackle repetitive, laborious tasks. One particularly impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, and pick planning. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a big challenge remains in the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we are inspired by the recent rise of the metaverse concept, which has greatly closed the gap between virtual worlds and the physical world. In particular, metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. We present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types, and is split into 5 difficulty levels to evaluate object detection and segmentation model performance in different grasping scenarios. We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance in a manner that is more appropriate for robotic grasp applications than existing general-purpose performance metrics. This repository contains the first phase of the MetaGraspNet benchmark dataset, which includes detailed object detection, segmentation, and layout annotations, as well as a script for the layout-weighted performance metric (https://github.com/y2863/MetaGraspNet).
Provide a detailed description of the following dataset: MetaGraspNet 2
MetaGraspNet 3
There has been increasing interest in smart factories powered by robotic systems to tackle repetitive, laborious tasks. One particularly impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, and pick planning. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a big challenge remains in the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we are inspired by the recent rise of the metaverse concept, which has greatly closed the gap between virtual worlds and the physical world. In particular, metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. We present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types, and is split into 5 difficulty levels to evaluate object detection and segmentation model performance in different grasping scenarios. We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance in a manner that is more appropriate for robotic grasp applications than existing general-purpose performance metrics. This repository contains the first phase of the MetaGraspNet benchmark dataset, which includes detailed object detection, segmentation, and layout annotations, as well as a script for the layout-weighted performance metric (https://github.com/y2863/MetaGraspNet).
Provide a detailed description of the following dataset: MetaGraspNet 3
MetaGraspNet 4
There has been increasing interest in smart factories powered by robotic systems to tackle repetitive, laborious tasks. One particularly impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, and pick planning. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a big challenge remains in the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we are inspired by the recent rise of the metaverse concept, which has greatly closed the gap between virtual worlds and the physical world. In particular, metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. We present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types, and is split into 5 difficulty levels to evaluate object detection and segmentation model performance in different grasping scenarios. We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance in a manner that is more appropriate for robotic grasp applications than existing general-purpose performance metrics. This repository contains the first phase of the MetaGraspNet benchmark dataset, which includes detailed object detection, segmentation, and layout annotations, as well as a script for the layout-weighted performance metric (https://github.com/y2863/MetaGraspNet).
Provide a detailed description of the following dataset: MetaGraspNet 4
MetaGraspNet 5
There has been increasing interest in smart factories powered by robotic systems to tackle repetitive, laborious tasks. One particularly impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, and pick planning. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a big challenge remains in the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we are inspired by the recent rise of the metaverse concept, which has greatly closed the gap between virtual worlds and the physical world. In particular, metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. We present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types, and is split into 5 difficulty levels to evaluate object detection and segmentation model performance in different grasping scenarios. We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance in a manner that is more appropriate for robotic grasp applications than existing general-purpose performance metrics. This repository contains the first phase of the MetaGraspNet benchmark dataset, which includes detailed object detection, segmentation, and layout annotations, as well as a script for the layout-weighted performance metric (https://github.com/y2863/MetaGraspNet).
Provide a detailed description of the following dataset: MetaGraspNet 5
Multiview Manipulation Data
Accompanying expert data and trained models for the 2021 IROS paper on Multiview Manipulation.
Provide a detailed description of the following dataset: Multiview Manipulation Data
GMVD
The GMVD dataset consists of synthetic scenes captured using the GTA-V and Unity graphics engines. The dataset covers a variety of scenes, along with different conditions including daytime variations (morning, afternoon, evening, night) and weather variations (sunny, cloudy, rainy, snowy). The purpose of the dataset is twofold. The first is to benchmark the generalization capabilities of multi-view detection algorithms. The second is to serve as a synthetic training source from which trained models can be directly applied to real-world data.
Provide a detailed description of the following dataset: GMVD
NLC2CMD
The NLC2CMD Competition hosted at NeurIPS 2020 aimed to bring the power of natural language processing to the command line. Participants were tasked with building models that can transform descriptions of command line tasks in English to their Bash syntax.
Provide a detailed description of the following dataset: NLC2CMD
2018 n2c2 (Track 2) - Adverse Drug Events and Medication Extraction
Objective: This article summarizes the preparation, organization, evaluation, and results of Track 2 of the 2018 National NLP Clinical Challenges shared task. Track 2 focused on extraction of adverse drug events (ADEs) from clinical records and evaluated 3 tasks: concept extraction, relation classification, and end-to-end systems. We perform an analysis of the results to identify the state of the art in these tasks, learn from it, and build on it.
Materials and Methods: For all tasks, teams were given raw text of narrative discharge summaries, and in all the tasks, participants proposed deep learning–based methods with hand-designed features. In the concept extraction task, participants used sequence labelling models (bidirectional long short-term memory being the most popular), whereas in the relation classification task, they also experimented with instance-based classifiers (namely support vector machines and rules). Ensemble methods were also popular.
Results: A total of 28 teams participated in task 1, with 21 teams in tasks 2 and 3. The best performing systems set a high performance bar with F1 scores of 0.9418 for concept extraction, 0.9630 for relation classification, and 0.8905 for end-to-end. However, the results were much lower for concepts and relations of Reasons and ADEs. These were often missed because local context is insufficient to identify them.
Conclusions: This challenge shows that clinical concept extraction and relation classification systems have a high performance for many concept types, but significant improvement is still required for ADEs and Reasons. Incorporating the larger context or outside knowledge will likely improve the performance of future systems.
Provide a detailed description of the following dataset: 2018 n2c2 (Track 2) - Adverse Drug Events and Medication Extraction
SPARTQA -
We take advantage of the ground truth of NLVR images, design CFGs to generate stories, and use spatial reasoning rules to ask and answer spatial reasoning questions. This automatically generated data is called SpaRTQA. https://aclanthology.org/2021.naacl-main.364/
Provide a detailed description of the following dataset: SPARTQA -
Moon Phases
Dates with Moon phases and the number of days elapsed until the next principal phase (1992/1/4 to 2027/12/20). Incorporate lunar data into your research: the Moon affects multiple physical phenomena on Earth, such as the ocean tides and the behavior of living organisms as well as humans.
Moon Phases data encoding:
- 0 = New Moon; 1 = first day after New Moon; 2 = second day after New Moon; ...
- 10 = First Quarter; 11 = first day after First Quarter; 12 = second day after First Quarter; ...
- 20 = Full Moon; 21 = first day after Full Moon; 22 = second day after Full Moon; ...
- 30 = Third Quarter; 31 = first day after Third Quarter; 32 = second day after Third Quarter; ...
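To make the encoding above concrete, here is a minimal Python sketch that decodes a phase code into a readable label. It assumes each code is ten times the principal-phase index (New Moon = 0, First Quarter = 1, Full Moon = 2, Third Quarter = 3) plus the number of days elapsed since that phase, as the listing above suggests; this reading of the scheme is an assumption, not part of the dataset documentation.

```python
# Minimal sketch: decode a Moon Phases code into a readable label.
# Assumption: code = 10 * principal-phase index + days elapsed since that phase.
PRINCIPAL_PHASES = {0: "New Moon", 1: "First Quarter", 2: "Full Moon", 3: "Third Quarter"}

def decode_phase(code: int) -> str:
    phase_index, days_after = divmod(code, 10)
    name = PRINCIPAL_PHASES[phase_index]
    return name if days_after == 0 else f"{days_after} day(s) after {name}"

if __name__ == "__main__":
    for code in (0, 2, 10, 21, 30):
        print(code, "->", decode_phase(code))
```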
Provide a detailed description of the following dataset: Moon Phases
CLIPS
CLIPS, that is, Corpora e Lessici dell'Italiano Parlato e Scritto (Corpora and Lexicons of Spoken and Written Italian), is one of the eight projects (Project no. 2) of Cluster C18 "LINGUISTICA COMPUTAZIONALE: RICERCHE MONOLINGUI E MULTILINGUI" (Computational Linguistics: Monolingual and Multilingual Research), under Law 488, funded by the Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR).
Provide a detailed description of the following dataset: CLIPS
Phone call network for 2 years in a Euro country
We employ a nationwide phone call dataset from Jan. 2015 to Dec. 2016. The *log* interaction duration and *log* interaction frequency in each phase (intermediate results) are both provided. Currently, we upload the Results folder to Google Drive (https://drive.google.com/drive/folders/1h4rHZvzzQO7niYMelbzToJZernOij1dv?usp=sharing); please download the files from Google Drive for replication purposes. In each file, we list tie ranges and interactions in all phases. For example, in 'Results/Graph_season_TR_Duration.txt', the first eight columns are tie ranges and the last eight columns are *log* interaction durations. Tie range is calculated as the length of the second-shortest path between two nodes. '-1' means that one node of this connection has no interaction with others in this phase. '100' means that there is no second path between the two nodes, indicating that the tie range is infinite. '101' means that the degree of one node is 1, also indicating that the tie range is infinite. Differential privacy is applied to protect the privacy of users: concretely, we add Gaussian noise with μ=0, σ=5 to the *log* interactions. When reproducing the results, please remove all *numpy.log* calls in the code and subtract one σ when calculating the error bars.
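As an illustration only (not the authors' replication code), the following Python sketch shows how one row of 'Results/Graph_season_TR_Duration.txt' could be parsed under the layout stated above: eight tie-range columns followed by eight log-duration columns, with -1, 100, and 101 treated as the special codes described. Whitespace separation is an assumption about the file format.

```python
# Minimal sketch (not the authors' replication code): parse one row of
# 'Results/Graph_season_TR_Duration.txt', assuming 8 whitespace-separated
# tie-range columns followed by 8 log-duration columns.
SPECIAL_TIE_RANGES = {
    -1: "no interaction in this phase",
    100: "no second path (infinite tie range)",
    101: "degree-1 endpoint (infinite tie range)",
}

def parse_row(line: str):
    values = line.split()
    tie_ranges = [int(float(v)) for v in values[:8]]
    log_durations = [float(v) for v in values[8:16]]
    labels = [SPECIAL_TIE_RANGES.get(t, t) for t in tie_ranges]
    return labels, log_durations

if __name__ == "__main__":
    # Illustrative values only, not taken from the dataset.
    example = "3 3 4 -1 100 101 2 2 1.2 1.5 0.9 0.0 2.1 1.8 0.7 1.1"
    print(parse_row(example))
```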
Provide a detailed description of the following dataset: Phone call network for 2 years in a Euro country
RodoSol-ALPR
This dataset, called RodoSol-ALPR dataset, contains 20,000 images captured by static cameras located at pay tolls owned by the *Rodovia do Sol* (RodoSol) concessionaire, which operates 67.5 kilometers of a highway (ES-060) in the Brazilian state of Espírito Santo. There are images of different types of vehicles (e.g., cars, motorcycles, buses and trucks), captured during the day and night, from distinct lanes, on clear and rainy days, and the distance from the vehicle to the camera varies slightly. All images have a resolution of 1,280 × 720 pixels. An important feature of the proposed dataset is that it has images of two different license plate (LP) layouts: Brazilian and Mercosur (to maintain consistency with existing works, we refer to “Brazilian” as the standard used in Brazil before the adoption of the Mercosur standard). Every image has the following information available in a text file: the vehicle’s type (car or motorcycle), the LP’s layout (Brazilian or Mercosul), its text (e.g., ABC-1234), and the position (x, y) of each of its four corners. We labeled the corners instead of just the LP bounding box to enable the training of methods that explore LP rectification, as well as the application of a wider range of data augmentation techniques. Regarding privacy concerns related to our dataset, we remark that in Brazil the LPs are related to the respective vehicles, i.e., no public information is available about the vehicle drivers/owners. Moreover, all human faces (e.g., drivers or RodoSol’s employees) were manually redacted (i.e., blurred) in each image.
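As a hypothetical illustration of working with the per-image annotation files described above, the sketch below assumes a simple "key: value" text layout with the four corners given as space-separated "x,y" pairs; the actual field names and formatting in the released files may differ.

```python
# Hypothetical sketch: read one RodoSol-ALPR annotation text file.
# The real field names and layout may differ; "key: value" lines and a
# "corners" field with four space-separated "x,y" pairs are assumed here.
def load_annotation(path: str) -> dict:
    ann = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            ann[key.strip()] = value.strip()
    if "corners" in ann:
        ann["corners"] = [tuple(map(int, pt.split(","))) for pt in ann["corners"].split()]
    return ann
```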
Provide a detailed description of the following dataset: RodoSol-ALPR
LTFT
Dataset originally conceived for multi-face tracking/detection in highly crowded scenarios. In these scenarios, the face is the only part that can be used to track the individuals. All our videos present novel crowd scenes recorded at near-eye level, where faces are visible enough to be analysed at the microscopic level, while also benefiting from a macroscopic view of the crowd. It includes:
- Face detections of 715 unique subjects along with instructions to download the synchronized video.
- More than 75k face detections annotated.
- A density ranging from 3 to 13 people/frame.
- 6 indoor and 4 outdoor videos. 8/10 videos are totally unconstrained, 2/10 feature 3 re-appearances per subject.
Our dataset may be useful for:
- Face tracking, especially relevant for crowded scenarios (typically from video-surveillance cameras).
- Heavily occluded body tracking (in many videos, only the face is mostly visible).
- Face recognition.
- Face detection for partially occluded faces.
Provide a detailed description of the following dataset: LTFT
IMS Bearing Dataset
Bearing acceleration data from three run-to-failure experiments on a loaded shaft. The data set was provided by the Center for Intelligent Maintenance Systems (IMS), University of Cincinnati.
Provide a detailed description of the following dataset: IMS Bearing Dataset
PRONOSTIA Bearing Dataset
The PRONOSTIA (also called FEMTO) bearing dataset consists of 17 accelerated run-to-failures on a small bearing test rig. Both acceleration and temperature data were collected for each experiment. The dataset was used in the 2012 IEEE Prognostic Challenge. The dataset is from the FEMTO-ST Institute in France.
Provide a detailed description of the following dataset: PRONOSTIA Bearing Dataset
LSA64
The sign database for the Argentinian Sign Language, created with the goal of producing a dictionary for LSA and training an automatic sign recognizer, includes 3200 videos where 10 non-expert subjects executed 5 repetitions of 64 different types of signs. Signs were selected among the most commonly used ones in the LSA lexicon, including both verbs and nouns.
Provide a detailed description of the following dataset: LSA64
Makeup216
Makeup216 contains a wide variety of logos captured from the real world and is among the largest and most complex logo datasets in the field. It comprises 216 logos and 157 brands, including 10,019 images and 37,018 annotated logo objects.
Provide a detailed description of the following dataset: Makeup216
MVHand
MVHand is a new multi-view hand posture dataset designed to obtain complete 3D point clouds of the hand in the real world.
Provide a detailed description of the following dataset: MVHand
Wikidated 1.0
**Wikidated 1.0** is a dataset of Wikidata's full revision history, which encodes changes between Wikidata revisions as sets of deletions and additions of RDF triples. It constitutes one of the first large datasets of an evolving knowledge graph, a recently emerging research subject in the Semantic Web community.
Provide a detailed description of the following dataset: Wikidated 1.0
Drosophila Immunity Time-Course Data
The data used for all results in this paper can be found [here](https://github.com/sara-venkatraman/Bayesian-Gene-Dynamics/tree/master/Data). This directory contains:
* `GeneData.csv`: Contains temporal gene expression measurements for 1735 genes at 17 time points. Measurements are provided as the $\log_2$-fold change from the first time point. Hours corresponding to each time point are defined in the R script [`3_Results.R`](https://github.com/sara-venkatraman/Bayesian-Gene-Dynamics/blob/master/3_Results.R#L23) in our GitHub repository. This dataset is derived from a larger gene expression dataset collected by [Schlamp et al. (2021)](https://www.ncbi.nlm.nih.gov/bioproject/PRJNA641552).
* `PriorMatrix.csv`: A 1735 x 1735 prior adjacency matrix. Each entry is `0`, `1`, or `NA` to indicate that a biological relationship between the corresponding two genes is unlikely, likely, or unknown according to external databases.
Further details about the collection of this data can be found in Section 4.1 and Appendix C of our paper. The R script `3_Results.R` shows how these CSV files are read and used for our analysis.
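For readers who prefer Python to the authors' R scripts, a minimal loading sketch might look like the following. It assumes gene identifiers are stored in the first column of each CSV, which is an assumption rather than documented fact; the authors' own analysis lives in `3_Results.R`.

```python
# Minimal loading sketch in Python (the authors' analysis uses R).
# Assumption: gene identifiers are stored as the first column of each CSV.
import pandas as pd

expr = pd.read_csv("GeneData.csv", index_col=0)      # 1735 genes x 17 time points, log2 fold changes
prior = pd.read_csv("PriorMatrix.csv", index_col=0)  # 1735 x 1735 prior adjacency: 0, 1, or NA

print(expr.shape, prior.shape)
# Fraction of gene pairs whose prior relationship is marked "likely" (entry == 1).
print((prior == 1).to_numpy().mean())
```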
Provide a detailed description of the following dataset: Drosophila Immunity Time-Course Data
VGG-Sound Sync
**VGG-Sound Sync** is an audio-visual synchronisation benchmark based on videos collected from YouTube. VGG-Sound Sync contains over 100k video clips, spanning 160 classes, and can be downloaded [here](https://www.robots.ox.ac.uk/~vgg/research/avs/data/vggsoundsync.csv). Note that only the test clips are included here; please use the training clips in the original [VGG-Sound](https://paperswithcode.com/dataset/vgg-sound) to train your models (the classes are the same as those in the test clips). Each line in the file is defined as: `# YouTube ID, start seconds, label`
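A minimal Python sketch for reading vggsoundsync.csv follows, assuming each data row uses the "YouTube ID, start seconds, label" layout quoted above and that any leading format comment starts with "#"; both assumptions should be checked against the downloaded file.

```python
# Minimal sketch: read vggsoundsync.csv, assuming "YouTube ID, start seconds, label"
# per row and an optional leading comment line starting with '#'.
import csv

def load_clips(path: str):
    clips = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if not row or row[0].lstrip().startswith("#"):
                continue
            youtube_id, start_seconds, label = row[0].strip(), float(row[1]), row[2].strip()
            clips.append((youtube_id, start_seconds, label))
    return clips
```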
Provide a detailed description of the following dataset: VGG-Sound Sync
BRATS21
The RSNA-ASNR-MICCAI BraTS 2021 challenge utilizes multi-institutional pre-operative baseline multi-parametric magnetic resonance imaging (mpMRI) scans, and focuses on the evaluation of state-of-the-art methods for (Task 1) the segmentation of intrinsically heterogeneous brain glioblastoma sub-regions in mpMRI scans. Furthermore, the BraTS 2021 challenge also focuses on the evaluation of (Task 2) classification methods to predict the MGMT promoter methylation status.
Provide a detailed description of the following dataset: BRATS21
MetaVD
MetaVD is a *Meta Video Dataset* for enhancing human action recognition datasets. It provides human-annotated relationship labels between action classes across human action recognition datasets. MetaVD is proposed in the following paper: **Yuya Yoshikawa, Yutaro Shigeto, and Akikazu Takeuchi. "MetaVD: A Meta Video Dataset for enhancing human action recognition datasets." Computer Vision and Image Understanding 212 (2021): 103276. [[link](https://www.sciencedirect.com/science/article/pii/S107731422100120X)]** MetaVD integrates the following datasets: [UCF101](https://www.crcv.ucf.edu/data/UCF101.php), [HMDB51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/), [ActivityNet](http://activity-net.org/), [STAIR Actions](https://actions.stair.center/), [Charades](https://prior.allenai.org/projects/charades), [Kinetics-700](https://deepmind.com/research/open-source/kinetics) This repository does _NOT_ provide videos in the datasets. For information on how to download the videos, please refer to the website of each dataset.
Provide a detailed description of the following dataset: MetaVD
ProSLU
In the paper, to bridge the research gap, we propose a new and important task, Profile-based Spoken Language Understanding (ProSLU), which requires a model to depend not only on the text but also on the given supporting profile information. We further introduce a Chinese human-annotated dataset, with over 5K utterances annotated with intents and slots, together with corresponding supporting profile information. In total, we provide three types of supporting profile information: (1) a Knowledge Graph (KG) consisting of entities with rich attributes, (2) a User Profile (UP) composed of user settings and information, and (3) Context Awareness (CA), i.e., user state and environmental information.
Provide a detailed description of the following dataset: ProSLU
SERV-CT
Endoscopic stereo reconstruction for surgical scenes gives rise to specific problems, including the lack of clear corner features, highly specular surface properties, and the presence of blood and smoke. These issues present difficulties for both stereo reconstruction itself and also for standardised dataset production. We present a stereo-endoscopic reconstruction validation dataset based on cone-beam CT (SERV-CT). Two ex vivo small porcine full torso cadavers were placed within the view of the endoscope with both the endoscope and target anatomy visible in the CT scan. Subsequent orientation of the endoscope was manually aligned to match the stereoscopic view and benchmark disparities, depths and occlusions are calculated. The requirement of a CT scan limited the number of stereo pairs to 8 from each ex vivo sample. For the second sample an RGB surface was acquired to aid alignment of smooth, featureless surfaces. Repeated manual alignments showed an RMS disparity accuracy of around 2 pixels and a depth accuracy of about 2 mm. A simplified reference dataset is provided consisting of endoscope image pairs with corresponding calibration, disparities, depths, and occlusions covering the majority of the endoscopic image and a range of tissue types, including smooth specular surfaces, as well as significant variation of depth. The SERV-CT dataset provides an easy-to-use stereoscopic validation for surgical applications with smooth reference disparities and depths covering the majority of the endoscopic image.
Provide a detailed description of the following dataset: SERV-CT
ASL-Skeleton3D
ASL-Skeleton3D introduces a representation that maps the coordinates of the signers in the ASLLVD dataset into three-dimensional space. This enables a more accurate observation of the body parts and the articulation of the signs, allowing researchers to better understand the language and explore other approaches in the SLR field.
Provide a detailed description of the following dataset: ASL-Skeleton3D
ASL-Phono
The ASL-Phono introduces a novel linguistics-based representation, which describes the signs in the ASLLVD dataset in terms of a set of attributes of the American Sign Language phonology.
Provide a detailed description of the following dataset: ASL-Phono
ASLLVD
Extremely important: The ASLLVD video data are subject to Terms of Use: http://www.bu.edu/asllrp/signbank-terms.pdf. By downloading these video files, you are agreeing to respect these conditions. In particular, NO FURTHER REDISTRIBUTION OF THESE VIDEO FILES is allowed. ---------- The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of >3,300 ASL signs in citation form, each produced by 1-6 native ASL signers, for a total of almost 9,800 tokens. This dataset includes multiple synchronized videos showing the signing from different angles. Linguistic annotations include gloss labels, sign start and end time codes, start and end handshape labels for both hands, morphological and articulatory classifications of sign type. For compound signs, the dataset includes annotations for each morpheme. To facilitate computer vision-based sign language recognition, the dataset also includes numeric ID labels for sign variants, video sequences in uncompressed-raw format, and camera calibration sequences.
Provide a detailed description of the following dataset: ASLLVD
MRSpineSeg Challenge
1. Competition name: The 2nd China Society of Image and Graphics (CSIG) Image and Graphics Technology Challenge, MRSpineSeg Challenge: Automated Multi-class Segmentation of Spinal Structures on Volumetric MR Images.
2. Purpose: Degenerative spine diseases (e.g., lumbar disc herniation, spinal stenosis, etc.) have become important diseases affecting the health and quality of life of the elderly and working people. These degenerative spinal diseases often cause changes in the structural morphology and mechanical systems of the spine, resulting in pain, for example from lumbar disc herniation, reduced disc height, and nerve compression. Magnetic resonance imaging (MRI), as a non-invasive examination method, offers good soft-tissue imaging with no radiation and is a reliable screening method for degenerative spine diseases. In clinical practice, the treatment of degenerative spinal disorders depends largely on physicians' experience and lacks accurate quantitative analysis tools. Automatic 3D segmentation of multi-class spinal structures in MR images is a prerequisite for 3D reconstruction of spinal structures. It can provide quantitative analysis tools for building biomechanical models of the spine, simulating stresses in spinal structures, and assessing the prognosis of different treatment options for degenerative spinal diseases. This competition aims to gather global developers to explore efficient and accurate 3D automatic segmentation of spinal structures in MR images using artificial intelligence technology. The spinal structures to be segmented include 10 vertebrae and 9 intervertebral discs.
3. Organizer: Qianjin Feng, School of Biomedical Engineering, Southern Medical University, Guangdong Key Laboratory of Medical Image Processing, China.
4. Requirements for competition participants: The competition is open to the whole society. Personnel from colleges and universities, scientific research institutions, and enterprises can sign up for the competition. The maximum size of each team is four, and each person can only participate in one team. After team registration, the team information cannot be changed. Note: all personnel who have had access to the competition data are prohibited from participating in the competition; those who have not had access to the competition data of Southern Medical University can also participate. Southern Medical University has the right of final interpretation.
5. Timeline: Registration (March 30, 2021 – May 22, 2021).
6. Citation:
[1] Shumao Pang, Chunlan Pang, Lei Zhao, Yangfan Chen, Zhihai Su, Yujia Zhou, Meiyan Huang, Wei Yang, Hai Lu, Qianjin Feng*. SpineParseNet: Spine Parsing for Volumetric MR Image by a Two-Stage Segmentation Framework with Semantic Image Representation [J]. IEEE Transactions on Medical Imaging, 2021, 40(1): 262-273.
[2] Shumao Pang, Chunlan Pang, Zhihai Su, Liyan Lin, Lei Zhao, Yangfan Chen, Yujia Zhou, Hai Lu, Qianjin Feng*. DGMSNet: Spine Segmentation for MR Image by a Detection-Guided Mixed-supervised Segmentation Network [J]. Medical Image Analysis, 2022, 102261.
Provide a detailed description of the following dataset: MRSpineSeg Challenge
DIDI Dataset
The dataset contains digital ink drawings of diagrams with dynamic drawing information. The dataset aims to foster research in interactive graphical symbolic understanding. The dataset was obtained using a prompted data collection effort.
Provide a detailed description of the following dataset: DIDI Dataset
OULU-NPU
The Oulu-NPU face presentation attack detection database consists of 4950 real access and attack videos. These videos were recorded using the front cameras of six mobile devices (Samsung Galaxy S6 edge, HTC Desire EYE, MEIZU X5, ASUS Zenfone Selfie, Sony XPERIA C5 Ultra Dual and OPPO N3) in three sessions with different illumination conditions and background scenes. The presentation attack types considered in the OULU-NPU database are print and video-replay. The 2D face artefacts were created using two printers and two display devices. The videos of the 55 subjects are divided into three subject-disjoint subsets for training, development and testing. Four test protocols are used to evaluate the generalization capability of face PAD methods across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices and presentation attack instruments (PAI). Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison on the generalization capabilities between new and existing approaches. Image Source: [https://www.researchgate.net/profile/Neil-Robertson/publication/333834759/figure/fig5/AS:897964780302339@1591102895306/Samples-from-the-OULU-NPU-database-From-top-to-bottom-is-the-three-sessions-with_W640.jpg](https://www.researchgate.net/profile/Neil-Robertson/publication/333834759/figure/fig5/AS:897964780302339@1591102895306/Samples-from-the-OULU-NPU-database-From-top-to-bottom-is-the-three-sessions-with_W640.jpg)
Provide a detailed description of the following dataset: OULU-NPU
Wiki-One
This dataset is a Wikipedia dump, split by relations to perform Few-Shot Knowledge Graph Completion.

| Dataset | # Ent | # Rel | # Triplets | Train/Dev/Test |
| --- | --- | --- | --- | --- |
| Wiki-One | 4,838,244 | 822 | 5,829,240 | 133/16/34 |
Provide a detailed description of the following dataset: Wiki-One
FACTIFY
FACTIFY is a dataset on multi-modal fact verification. It contains textual claims, reference textual documents, and associated images. The task is to classify the claims into support, not-enough-evidence, and refute categories with the help of the supporting data. We aim to combat fake news in the social media era by providing this multi-modal dataset. Factify contains 50,000 claims accompanied by 100,000 images, split into training, validation, and test sets.
Provide a detailed description of the following dataset: FACTIFY
DurLAR
DurLAR is a high-fidelity 128-channel 3D LiDAR dataset with panoramic ambient (near infrared) and reflectivity imagery for multi-modal autonomous driving applications. Compared to existing autonomous driving task datasets, DurLAR has the following novel features:
- High vertical resolution **LiDAR** with **128 channels**, which is twice that of any existing dataset, full **360 degree depth**, range accuracy to ±2 cm at 20-50m.
- **Ambient illumination (near infrared)** and **reflectivity panoramic imagery** are made available in the Mono16 format (2048 × 128 resolution), with this being the only dataset to make this provision.
- No rolling shutter effect, as our flash LiDAR captures all 128 channels simultaneously.
- **Ambient illumination data** is recorded via an on-board lux meter, which is again not available in previous datasets.
- High-fidelity **GNSS/INS** available via an onboard OxTS navigation unit operating at 100 Hz and receiving position and timing data from multiple GNSS constellations in addition to GPS.
- KITTI data format adopted as the de facto dataset format such that it can be parsed using both the DurLAR development kit and existing KITTI-compatible tools.
- **Diversity over repeated locations** such that the dataset has been collected under diverse environmental and weather conditions over the same driving route with additional variations in the time of day relative to environmental conditions.

## Sensor placement

- **LiDAR**: [Ouster OS1-128 LiDAR sensor](https://ouster.com/products/os1-lidar-sensor/) with 128 channels vertical resolution
- **Stereo Camera**: [Carnegie Robotics MultiSense S21 stereo camera](https://carnegierobotics.com/products/multisense-s21/) with grayscale, colour, and IR enhanced imagers, 2048x1088 @ 2MP resolution
- **GNSS/INS**: [OxTS RT3000v3](https://www.oxts.com/products/rt3000-v3/) global navigation satellite and inertial navigation system, supporting localization from GPS, GLONASS, BeiDou, Galileo, PPP and SBAS constellations
- **Lux Meter**: [Yocto Light V3](http://www.yoctopuce.com/EN/products/usb-environmental-sensors/yocto-light-v3), a USB ambient light sensor (lux meter), measuring ambient light up to 100,000 lux
Provide a detailed description of the following dataset: DurLAR
BLP
A blackout poetry dataset constructed from publicly available short stories and large poems. The dataset consists of two variants: 8K and 16K examples of passages along with a poem generated from the passage and the indices of the words in the passage from which words in the poem have been selected. The dataset also contains perplexity scores for each of the poems indicating the language quality of the poems. The dataset was constructed synthetically, and hence contains multiple poor poems and frequent grammatical errors. However, it is a great starting point for the task of applying machine learning to blackout poetry generation.
Provide a detailed description of the following dataset: BLP
NICO
The I.I.D. hypothesis between training and testing data is the basis of numerous image classification methods. Such a property can hardly be guaranteed in practice, where Non-IIDness is common, causing unstable performance of these models. In the literature, however, the Non-I.I.D. image classification problem is largely understudied. A key reason is the lack of a well-designed dataset to support related research. In this paper, we construct and release a Non-I.I.D. image dataset called NICO, which uses contexts to create Non-IIDness consciously. Compared to other datasets, extended analyses prove NICO can support various Non-I.I.D. situations with sufficient flexibility. Meanwhile, we propose a baseline model with a ConvNet structure for general Non-I.I.D. image classification, where the distribution of testing data is unknown but different from training data. The experimental results demonstrate that NICO can well support the training of a ConvNet model from scratch, and a batch balancing module can help ConvNets to perform better in Non-I.I.D. settings.
Provide a detailed description of the following dataset: NICO
Atari 100k
Atari Games for only 100k environment steps. (400k frames with frame-skip=4).
Provide a detailed description of the following dataset: Atari 100k
GOTOV
Stylianos Paraschiakos, Beekman M. (Marian), Knobbe A. (Arno), Cachucho R. (Ricardo), Slagboom P. (Eline). Wearable sensor-based data of physical activities and indirect calorimetry for 35 (14 female, 21 male) healthy older individuals (over 60 years old). The data has been collected from different body locations and devices: 3x GENEActiv accelerometers (ankle, wrist, and chest), 1x Equivital (chest) and COSMED (mask and belt on chest). The 35 individuals followed a protocol of 16 activities of daily living for approximately an hour and a half in a semi-lab environment. These include different types or paces of indoor and outdoor activities with low (lying down, sitting), mid (standing, household activities) and high (walking and cycling) levels of intensity. Additionally, some activities can be specified at different granularities. The study took place at LUMC, between February and May 2015.
Provide a detailed description of the following dataset: GOTOV
SaL-Lightning
**SaL-Lightning** is a dataset for research in the field of Search as Learning. It contains detailed recordings from a user study: pre- and post-knowledge assessments of 114 participants, interaction data on real-world search behavior, as well as resource features. This data diversity has the potential to help researchers answer diverse questions tied to the entire online learning framework, from individual psychological aspects, over usability tests and data visualization, to retrieval and ranking issues in the technology that enables this process.
Provide a detailed description of the following dataset: SaL-Lightning
Weibo21
**Weibo21** is a benchmark of fake news dataset for multi-domain fake news detection (MFND) with domain label annotated, which consists of 4,488 fake news and 4,640 real news from 9 different domains.
Provide a detailed description of the following dataset: Weibo21
H2O
The **Human-to-Human-or-Object Interaction Dataset** (**H2O**) is a dataset for Human-Object Interaction (HOI) detection, the task of determining and locating the list of triplets <subject, verb, target> that describe all the simultaneous interactions in an image. H²O is composed of the 10,301 images from the [V-COCO](v-coco) dataset, to which 3,635 images that mostly contain interactions between people are added.
Provide a detailed description of the following dataset: H2O
ERD
**ERD** (Educational Resource Discovery) is a corpus of 39,728 manually labeled web resources and 659 queries from NLP, Computer Vision (CV), and Statistics (STATS) for educational resource discovery.
Provide a detailed description of the following dataset: ERD
LoRa RF
This is a large-scale RF fingerprinting dataset, collected from 25 different LoRa-enabled IoT transmitting devices using USRP B210 receivers. Our dataset consists of a large number of SigMF-compliant binary files representing the I/Q time-domain samples and their corresponding FFT-based files of LoRa transmissions.
Provide a detailed description of the following dataset: LoRa RF
Turath-150K
**Turath-150K** is a database of images of the Arab world that reflect objects, activities, and scenarios commonly encountered there, from Mauritania in the west of Africa to Iraq. More specifically, there exist 3 distinct benchmark databases: Turath-Standard, Turath-Art, and Turath-UNESCO.
Provide a detailed description of the following dataset: Turath-150K
FS2K
**FS2K** is a high-quality dataset for Facial Sketch Synthesis (FSS). It consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should thus facilitate the progress of FSS research.
Provide a detailed description of the following dataset: FS2K
KIND
KIND is an Italian dataset for Named-Entity Recognition. It contains more than one million tokens with the annotation covering three classes: persons, locations, and organizations. Most of the dataset (around 600K tokens) contains manual gold annotations in three different domains: news, literature, and political discourses.
Provide a detailed description of the following dataset: KIND
SFU-HW-Tracks
**SFU-HW-Tracks** is a dataset for Object Tracking on raw video sequences that contains object annotations with unique object identities (IDs) for the High Efficiency Video Coding (HEVC) v1 Common Test Conditions (CTC) sequences. Ground-truth annotations for 13 sequences were prepared and released as the dataset called SFU-HW-Tracks.
Provide a detailed description of the following dataset: SFU-HW-Tracks
YACLC
**YACLC** is a large-scale, multidimensional annotated Chinese learner corpus. To construct the corpus, the authors first obtained a large number of topic-rich texts generated by Chinese as a Foreign Language (CFL) learners. The authors collected and annotated 32,124 sentences written by CFL learners from the lang-8 platform. Each sentence is annotated by 10 annotators. After post-processing, a total of 469,000 revised sentences are obtained.
Provide a detailed description of the following dataset: YACLC
The Benchmark
**The Benchmark** is a collection of datasets for Monocular Height Estimation. It consists of two datasets: GTAH and AHN. **GTAH** (Grand Theft Auto for Height estimation) is a large-scale synthetic dataset which is obtained from the game Grand Theft Auto, under different imaging conditions. GTAH contains 28,627 height maps in total and each with a resolution of 1920×1080. For each height map, there are three corresponding RGB images that are captured under different weather conditions.
Provide a detailed description of the following dataset: The Benchmark
CUGE
**CUGE** is a Chinese Language Understanding and Generation Evaluation benchmark with the following features: (1) Hierarchical benchmark framework, where datasets are principally selected and organized with a language capability-task-dataset hierarchy. (2) Multi-level scoring strategy, where different levels of model performance are provided based on the hierarchical framework. CUGE covers 7 important language capabilities, 17 mainstream NLP tasks and 19 representative datasets. It includes tasks like: word segmentation, part of speech tagging, reading comprehension and document retrieval.
Provide a detailed description of the following dataset: CUGE
N-Omniglot
**N-Omniglot** is a neuromorphic dataset for few-shot learning. It contains 1,623 categories of handwritten characters, with only 20 samples per class.
Provide a detailed description of the following dataset: N-Omniglot
nvBench
**nvBench** is a large-scale NL2VIS (natural language to visualization) benchmark, containing 25,750 (NL, VIS) pairs from 750 tables over 105 domains, synthesized from (NL, SQL) benchmarks to support the cross-domain natural-language-query-to-visualization task.
Provide a detailed description of the following dataset: nvBench
BPOD
**Brown Pedestrian Odometry Dataset** (**BPOD**) is a dataset for benchmarking visual odometry algorithms in head-mounted pedestrian settings. This dataset was captured using synchronized global and rolling shutter stereo cameras in 12 diverse indoor and outdoor locations on Brown University's campus. Compared to existing datasets, BPOD contains more image blur and self-rotation, which are common in pedestrian odometry but rare elsewhere. Ground-truth trajectories are generated from stick-on markers placed along the pedestrian’s path, and the pedestrian's position is documented using a third-person video.
Provide a detailed description of the following dataset: BPOD
HSPACE
**HSPACE** (Human-SPACE) is a large-scale photo-realistic dataset of animated humans placed in complex synthetic indoor and outdoor environments. For all frames the dataset provides 3d pose and shape ground truth, as well as other rich image annotations including human segmentation, body part localisation semantics, and temporal correspondences.
Provide a detailed description of the following dataset: HSPACE
PandaSet
**PandaSet** is a dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing, long-range LiDAR, and 6 cameras. The dataset contains more than 100 scenes, each of which is 8 seconds long, and provides 28 types of labels for object classification and 37 types of annotations for semantic segmentation.
Provide a detailed description of the following dataset: PandaSet
EgoBody
**EgoBody** dataset is a novel large-scale dataset for egocentric 3D human pose, shape and motions under interactions in complex 3D scenes.
Provide a detailed description of the following dataset: EgoBody
RLD
**RLD** (Responsive Listener Dataset) is a conversation video corpus collected from public resources, featuring 67 speakers and 76 listeners with three different attitudes. Through non-verbal signals responding to the speakers' words, intonations, or behaviors in real time, listeners show how they are engaged in the dialogue.
Provide a detailed description of the following dataset: RLD
EMDS-6
In EMDS-6, there are 21 classes of environmental microorganisms (EMs). In each class, there are 40 EM original images and their corresponding binary ground truth images. In the ground truth images, the foreground is white and the background is black.
Provide a detailed description of the following dataset: EMDS-6
Industrial Benchmark Dataset for Customer Escalation Prediction
This is a real-world industrial benchmark dataset from a major medical device manufacturer for the prediction of customer escalations. The dataset contains features derived from IoT (machine log) and enterprise data, including labels for escalation, from a fleet of thousands of customers of high-end medical devices.

The dataset accompanies the publication "System Design for a Data-driven and Explainable Customer Sentiment Monitor" (submitted). We provide an anonymized version of data collected over a period of two years. The dataset should fuel the research and development of new machine learning algorithms to better cope with real-world data challenges, including sparse and noisy labels and concept drifts. Additional challenges are the optimal fusion of enterprise and log-based features for the prediction task. Thereby, the interpretability of designed prediction models should be ensured in order to have practical relevancy.

Supporting software: Kindly use the corresponding GitHub repository (https://github.com/annguy/customer-sentiment-monitor) to design and benchmark your algorithms.

Citation and contact: If you use this dataset please cite the following publication:
@ARTICLE{9520354, author={Nguyen, An and Foerstel, Stefan and Kittler, Thomas and Kurzyukov, Andrey and Schwinn, Leo and Zanca, Dario and Hipp, Tobias and Jun, Sun Da and Schrapp, Michael and Rothgang, Eva and Eskofier, Bjoern}, journal={IEEE Access}, title={System Design for a Data-Driven and Explainable Customer Sentiment Monitor Using IoT and Enterprise Data}, year={2021}, volume={9}, number={}, pages={117140-117152}, doi={10.1109/ACCESS.2021.3106791}}
If you would like to get in touch, please contact an.nguyen@fau.de.
Provide a detailed description of the following dataset: Industrial Benchmark Dataset for Customer Escalation Prediction
CeyMo
CeyMo is a novel benchmark dataset for road marking detection which covers a wide variety of challenging urban, sub-urban and rural road scenarios. The dataset consists of 2887 total images of 1920 × 1080 resolution with 4706 road marking instances belonging to 11 classes. The test set is divided into six categories: normal, crowded, dazzle light, night, rain and shadow.
Provide a detailed description of the following dataset: CeyMo
DSIFN-CD
The dataset is manually collected from Google Earth. It consists of six large bi-temporal high-resolution images covering six cities (i.e., Beijing, Chengdu, Shenzhen, Chongqing, Wuhan, Xian) in China. Five of the large image pairs (i.e., Beijing, Chengdu, Shenzhen, Chongqing, Wuhan) are clipped into 394 sub-image pairs with sizes of 512×512. After data augmentation, a collection of 3,940 bi-temporal image pairs is acquired. The Xian image pair is clipped into 48 image pairs for model testing. There are 3,600 image pairs in the training dataset, 340 image pairs in the validation dataset, and 48 image pairs in the test dataset.
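As an illustrative sketch only (the released dataset already ships the clipped pairs), the tiling described above could be reproduced roughly as follows, assuming both temporal images are loaded as NumPy arrays of equal size:

```python
# Illustrative sketch: clip a large bi-temporal image pair into
# non-overlapping 512 x 512 sub-image pairs (the released dataset is already clipped).
import numpy as np

def clip_pair(img_t1: np.ndarray, img_t2: np.ndarray, tile: int = 512):
    assert img_t1.shape[:2] == img_t2.shape[:2], "bi-temporal images must be co-registered"
    h, w = img_t1.shape[:2]
    pairs = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            pairs.append((img_t1[y:y + tile, x:x + tile],
                          img_t2[y:y + tile, x:x + tile]))
    return pairs
```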
Provide a detailed description of the following dataset: DSIFN-CD
SCROLLS
**SCROLLS** (Standardized CompaRison Over Long Language Sequences) is an NLP benchmark consisting of a suite of tasks that require **reasoning over long texts**. SCROLLS contains summarization, question answering, and natural language inference tasks, covering multiple domains, including literature, science, business, and entertainment. The dataset is made available in a unified text-to-text format, and a live leaderboard is hosted to facilitate research on model architecture and pretraining methods. The **SCROLLS** benchmark contains the datasets [GovReport](govreport), SummScreenFD, [QMSum](qmsum), [QASPER](qasper), [NarrativeQA](NarrativeQA), QuALITY and ContractNLI.
Provide a detailed description of the following dataset: SCROLLS
CrossMoDA
**CrossMoDA** is a large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging.
Provide a detailed description of the following dataset: CrossMoDA
DADA-seg
DADA-seg is a pixel-wise annotated accident dataset, which contains a variety of critical scenarios from traffic accidents. It is used for semantic segmentation.
Provide a detailed description of the following dataset: DADA-seg
MyoPS
**MyoPS** is a dataset for myocardial pathology segmentation combining three-sequence cardiac magnetic resonance (CMR) images, which was first proposed in the MyoPS challenge, in conjunction with MICCAI 2020. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation.
Provide a detailed description of the following dataset: MyoPS
3D-BSLS-6D
The dataset consists of both real captures from Photoneo PhoXi structured light scanner devices, annotated by hand, and synthetic samples produced by a custom generator. In comparison with existing datasets for 6D pose estimation, some notable differences include:
* most of the captured bins are texture-less, made from uniform, single-colored materials,
* all bins are of cuboid shape with different proportions. Compared to objects with complex geometry, bins consist of flat faces with edges, which are not guaranteed to be seen in the capture due to occlusion. Surface models of these bins are not provided, just their approximate bounding boxes,
* the PhoXi scanner provides high-resolution 3D geometry data, but no RGB data, with a rough and noisy gray-scale intensity image being the closest equivalent,
* captures come from different devices with various intrinsic camera parameters. 3D point clouds contain these parameters implicitly, as opposed to RGBD images.
Due to its currently limited size, we recommend cross-validation instead of an explicit train-validation split. We plan to add more samples to the dataset.
NOTE: Annotation files with suffixes _bad, _ish or _catastrophic should be ignored. It was not possible to annotate them correctly with our current toolset.
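To honour the note above about ignoring certain annotation files, a small Python sketch like the following could be used; the `.json` extension and directory layout are assumptions made for illustration, only the suffix rule comes from the dataset description.

```python
# Sketch: list usable annotation files, skipping the _bad / _ish / _catastrophic ones
# mentioned in the NOTE above. The ".json" extension and folder layout are assumptions.
from pathlib import Path

IGNORED_SUFFIXES = ("_bad", "_ish", "_catastrophic")

def usable_annotations(root: str):
    return [p for p in Path(root).rglob("*.json")
            if not any(p.stem.endswith(s) for s in IGNORED_SUFFIXES)]
```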
Provide a detailed description of the following dataset: 3D-BSLS-6D
FR-FS
The FR-FS dataset contains 417 videos collected from the FIV dataset and the PyeongChang 2018 Winter Olympic Games. FR-FS contains the critical movements of the athlete's take-off, rotation, and landing. Among them, 276 are smooth-landing videos and 141 are fall videos. To test the generalization performance of our proposed model, we randomly select 50% of the fall and landing videos as the training set and use the remainder as the testing set.
Provide a detailed description of the following dataset: FR-FS
results-A
The results-A dataset is a dataset consisting of 22 infrared images commonly used for testing performance of Infrared Image Super-Resolution models.
Provide a detailed description of the following dataset: results-A
results-C
The results-C dataset is a dataset consisting of 22 infrared images commonly used for testing performance of Infrared Image Super-Resolution models.
Provide a detailed description of the following dataset: results-C
GrailQA
GrailQA is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntax (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.
Provide a detailed description of the following dataset: GrailQA
EurekaAlert
This dataset contains around 5,000 scholarly articles and their corresponding easy-to-read summaries from the EurekAlert! blog. The dataset can be used for the combined task of summarization and simplification.
Provide a detailed description of the following dataset: EurekaAlert
UAV-VeID
1. Data Collection
We simulate real scenarios as much as possible during UAV video collection. Specifically, UAV videos are collected from different locations with distinct backgrounds and lighting conditions, e.g., highways, urban road intersections, parking lots, etc. For vehicles at parking lots, we adopt various UAV sport modes such as cruising and rotating to record vehicles. This strategy introduces viewpoint and scale changes, as well as partial occlusions, to images of the same vehicle. For moving vehicles, we use two UAVs to simultaneously shoot videos from different viewpoints and heights. This strategy introduces viewpoint, scale, and background changes. The flying height of the UAVs ranges from 15 to 60 meters, leading to different scales of vehicle images. The vertical angle of the UAV camera ranges from 40 to 80 degrees, which leads to different viewpoints of vehicle images. The videos are recorded at 30 frames per second (fps), with resolutions of 2704 × 1520 pixels and 4096 × 2160 pixels, respectively. UAV-VeID is constructed from 80 video sequences selected from the raw UAV videos.
2. Annotation
We annotate vehicles from the collected videos to construct UAV-VeID. In each video clip, 1 video frame is sampled every second to construct a video frame dataset. The dataset annotation is hence conducted based on those sampled video frames. To finish the vehicle annotation, 6 domain experts are involved to manually locate and annotate the identities of vehicles in each video frame. The data annotation procedure takes 1000 man-hours and finally results in a dataset containing 41,917 vehicle bounding boxes of 4,601 vehicles. Each vehicle is annotated with at least two bounding boxes.
3. Dataset partition
The UAV-VeID dataset is split into a training set, a validation set, and a testing set, among which the training set contains 18,709 images with 1,797 IDs, the validation set contains 4,150 images with 596 IDs, and the testing set contains 19,058 images with 2,208 IDs. The validation set is further divided into a query set ("val_q_label.txt", 3,554 images) and a gallery set ("val_g_label.txt", 596 images). The testing set is further divided into a query set ("test_q_label.txt", 16,850 images) and a gallery set ("test_g_label.txt", 2,208 images); a loading sketch is given after this description.
4. Download
Please sign the Agreement (UAV-VeID_AGREEMENT.pdf), thereby agreeing to observe the restrictions listed in that document. After filling it in, please send the electronic version to us. After confirming your information, we will send the download link and password to you via email.
5. Contact
Shangzhi Teng, Email: tengshangzhi@126.com
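As a purely hypothetical sketch of consuming the split files named above (e.g. "val_q_label.txt"), the following Python snippet assumes a simple "image_name vehicle_id" whitespace-separated layout per line; the actual format is not documented here and should be checked against the downloaded files.

```python
# Hypothetical sketch: read a split file such as "val_q_label.txt",
# assuming one "image_name vehicle_id" pair per whitespace-separated line.
def load_split(path: str):
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                entries.append((parts[0], int(parts[1])))
    return entries
```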
Provide a detailed description of the following dataset: UAV-VeID
ITB
**Informative Tracking Benchmark** (**ITB**) is a small but informative tracking benchmark comprising 7% of the 1.2M frames in existing and newly collected datasets, which enables efficient evaluation while ensuring effectiveness. Specifically, the authors designed a quality assessment mechanism to select the most informative sequences from existing benchmarks, taking into account 1) challenge level, 2) discriminative strength, and 3) density of appearance variations. Furthermore, they collected additional sequences to ensure the diversity and balance of tracking scenarios, leading to a total of 20 sequences for each scenario.
Provide a detailed description of the following dataset: ITB
PhysNLU
**PhysNLU** is a collection of 4 core datasets built around sentence classification, ordering, and coherence of physics explanations. Each dataset comprises explanations extracted from Wikipedia, including derivations and mathematical language.
Provide a detailed description of the following dataset: PhysNLU
Incidents1M
**Incidents1M** is a large-scale multi-label dataset for incident detection which contains 977,088 images, with 43 incident and 49 place categories. It is an evolution of the [Incidents](/dataset/incidents) dataset that doubles the dataset size and includes more incident labels.
Provide a detailed description of the following dataset: Incidents1M
CVSS
**CVSS** is a massively multilingual-to-English speech-to-speech translation (S2ST) corpus, covering sentence-level parallel S2ST pairs from 21 languages into English. CVSS is derived from the [Common Voice](common-voice) speech corpus and the [CoVoST](covost) 2 speech-to-text translation (ST) corpus, by synthesizing the translation text from CoVoST 2 into speech using state-of-the-art TTS systems.
Provide a detailed description of the following dataset: CVSS
PerCQA
PerCQA is the first Persian dataset for CQA (Community Question Answering). This dataset contains the questions and answers crawled from the most well-known Persian forum.
Provide a detailed description of the following dataset: PerCQA
ArtImage
**ArtImage** is a synthetic dataset of articulated object models covering 5 categories from PartNet-Mobility, designed for category-level articulated object tasks.
Provide a detailed description of the following dataset: ArtImage
ASCEND
**ASCEND** (A Spontaneous Chinese-English Dataset) is a high-quality Chinese-English code-switching corpus of spontaneous, multi-turn conversational dialogue collected in Hong Kong. ASCEND includes 23 bilingual speakers fluent in both Chinese and English and consists of 10.62 hours of clean speech.
Provide a detailed description of the following dataset: ASCEND
MetaEval
**MetaEval** is a collection of 101 NLP tasks assembled into a single benchmark that can be used for future probing and transfer-learning research.
Provide a detailed description of the following dataset: MetaEval
Learn2Reg
**Learn2Reg** is a dataset for medical image registration. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation.
Provide a detailed description of the following dataset: Learn2Reg
LOOK
**LOOK** is a large-scale dataset for eye contact detection in the wild, which focuses on diverse and unconstrained scenarios for real-world generalization. The dataset targets real-world scenarios for autonomous vehicles, with no control over the environment or the distance of pedestrians.
Provide a detailed description of the following dataset: LOOK
VocBench
**VocBench** is a framework that benchmarks the performance of state-of-the-art neural vocoders. VocBench uses a systematic study to evaluate different neural vocoders in a shared environment that enables a fair comparison between them.
Provide a detailed description of the following dataset: VocBench
ES-ImageNet
**ES-ImageNet** is a large-scale event-stream dataset for SNNs and neuromorphic vision. It consists of about 1.3M samples converted from ILSVRC2012 across 1,000 different categories. ES-ImageNet is dozens of times larger than other existing neuromorphic classification datasets and is generated entirely in software.
Provide a detailed description of the following dataset: ES-ImageNet
CD&S
The Corn Disease and Severity (**CD&S**) dataset consists of 511, 524, and 562 field-acquired raw images corresponding, respectively, to three common foliar corn diseases: Northern Leaf Blight (NLB), Gray Leaf Spot (GLS), and Northern Leaf Spot.
Provide a detailed description of the following dataset: CD&S
NOD
This is a high-quality, large-scale Night Object Detection (NOD) dataset of outdoor images targeting low-light object detection. The dataset contains more than 7K images and 46K annotated objects (with bounding boxes) belonging to the classes person, bicycle, and car. The photos were taken on the streets during evening hours, so all images present low-light conditions of varying severity.
Provide a detailed description of the following dataset: NOD
Curlie
The **Curlie dataset** contains more than 1M websites in 92 languages, with labels collected from Curlie, the largest multilingual crowdsourced Web directory. The dataset covers 14 website categories aligned across languages and is used for language-agnostic website embedding and classification.
Provide a detailed description of the following dataset: Curlie
BNATURE
This is a dataset for Bengali image captioning.
Provide a detailed description of the following dataset: BNATURE
DeepLesion
The National Institutes of Health's Clinical Center has made a large-scale dataset of CT images publicly available to help the scientific community improve the detection accuracy of lesions. While most publicly available medical image datasets have fewer than a thousand lesions, this dataset, named DeepLesion, has over 32,000 annotated lesions (220GB) identified on CT images: 32,735 lesions in 32,120 CT slices from 10,594 studies of 4,427 unique patients. The dataset covers a variety of lesion types, such as lung nodules, liver tumors, and enlarged lymph nodes, and it has the potential to be used in various medical image applications.
Provide a detailed description of the following dataset: DeepLesion
BRACS
The BReAst Carcinoma Subtyping (**BRACS**) dataset is a large cohort of annotated Hematoxylin & Eosin (H&E)-stained images intended to facilitate the characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4,539 Regions of Interest (ROIs) extracted from the WSIs. Each WSI, and its respective ROIs, is annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant, and atypical, which are further subtyped into seven categories.
Provide a detailed description of the following dataset: BRACS
EEGEyeNet
**EEGEyeNet** is a dataset and benchmark with the goal of advancing research at the intersection of brain activities and eye movements. It consists of simultaneous Electroencephalography (EEG) and Eye-tracking (ET) recordings from 356 different subjects collected from three different experimental paradigms.
Provide a detailed description of the following dataset: EEGEyeNet
PaSa
**PaSa** is a dataset for training machine learning algorithms to automate the highlighting of patent paragraphs with semantic annotations. It consists of 150k samples obtained by traversing USPTO patents over a decade.
Provide a detailed description of the following dataset: PaSa
CUB-GHA
**CUB-GHA** is a dataset for fine-grained classification with human attention annotations. It was built by collecting human gaze data for the fine-grained classification dataset CUB, yielding CUB-GHA (Gaze-based Human Attention).
Provide a detailed description of the following dataset: CUB-GHA
UMLS
UMLS is a small knowledge graph benchmark derived from the Unified Medical Language System, in which entities are biomedical concepts and relations are semantic relations between them; it is commonly used to evaluate knowledge graph embedding and link prediction models. Source: [Convolutional 2D Knowledge Graph Embeddings](https://arxiv.org/abs/1707.01476)
Provide a detailed description of the following dataset: UMLS
FFHQ-Text
**FFHQ-Text** is a small-scale face image dataset with large-scale facial attributes, designed for text-to-face generation & manipulation, text-guided facial image manipulation, and other vision-related tasks. This dataset is an extension of the [NVIDIA Flickr-Faces-HQ Dataset (FFHQ)](https://github.com/NVlabs/ffhq-dataset), consisting of the top **760 female FFHQ images**, selected such that each contains only one complete human face.
Provide a detailed description of the following dataset: FFHQ-Text
Semantic Question Similarity in Arabic
[NSURL-2019 Shared Task 8: Semantic Question Similarity in Arabic](https://aclanthology.org/2019.nsurl-1.1.pdf). This dataset contains 11,997 pairs of questions in Modern Standard Arabic (MSA), each assigned a label of 0 for no semantic similarity or 1 otherwise.
Provide a detailed description of the following dataset: Semantic Question Similarity in Arabic
Sepehr_RumTel01
The expansion of social networks has accelerated the transmission of information and news across communities. Over the past few years, the number of users, audiences, and publishers on social networks has also increased dramatically. Among the massive amounts of information and news reported on these networks, we face items that have not been verified, which are called "rumors". Identifying rumors on social networks is carried out through rumor detection approaches, and the massive volume of such news and information makes it necessary to use machine learning techniques. The most important problem with automatic detection approaches is the lack of a database of rumors. For that reason, in this article, a collection of rumors published on the social network Telegram has been gathered. These data come from five Persian-language channels that specifically review this issue. The collected dataset contains 3,283 messages with 2,829 attachments, with a volume of over 1.6 gigabytes. This dataset can also be used for other natural language processing purposes.
Provide a detailed description of the following dataset: Sepehr_RumTel01
BanglaEmotion
**BanglaEmotion** is a manually annotated Bangla emotion corpus that captures the diversity of fine-grained emotion expressions in social-media text. Fine-grained emotion labels are used, namely Sadness, Happiness, Disgust, Surprise, Fear, and Anger, which are, according to Paul Ekman (1999), the six basic emotion categories. For this task, a large amount of raw text data was collected from users' comments on two different Facebook groups (Ekattor TV and Airport Magistrates) and from the public posts of a popular blogger and activist, Dr. Imran H Sarker. These comments are mostly reactions to ongoing socio-political issues and to the economic successes and failures of Bangladesh. A total of 32,923 comments were scraped from the three sources mentioned above. Of these, 6,314 comments were annotated into the six categories. The distribution of the annotated corpus is as follows: sad = 1,341, happy = 1,908, disgust = 703, surprise = 562, fear = 384, angry = 1,416. A balanced set is also provided from the above data, and the dataset is split into training and test sets, with a 5:1 proportion used for training and evaluation purposes. More information on the dataset and the experiments on it can be found in our paper (related links below).
Provide a detailed description of the following dataset: BanglaEmotion