dataset_name | description | prompt |
|---|---|---|
ClonedPerson | The ClonedPerson dataset is a large-scale synthetic person re-identification dataset introduced in the CVPR 2022 paper "Cloning Outfits from Real-World Images to 3D Characters for Generalizable Person Re-Identification". It is generated with MakeHuman and Unity3D. The characters are created with an automatic approach that directly clones whole outfits from real-world person images onto virtual 3D characters, so that each virtual person appears very similar to its real-world counterpart. The dataset contains 887,766 synthesized person images of 5,621 identities. | Provide a detailed description of the following dataset: ClonedPerson |
A-OKVQA | **A-OKVQA** is a crowdsourced visual question answering dataset composed of a diverse set of about 25K questions requiring a broad base of commonsense and world knowledge to answer. | Provide a detailed description of the following dataset: A-OKVQA |
Synthehicle | Synthehicle is a massive CARLA-based synthetic multi-vehicle multi-camera tracking dataset. It includes ground truth for 2D detection and tracking, 3D detection and tracking, depth estimation, and semantic, instance and panoptic segmentation. | Provide a detailed description of the following dataset: Synthehicle |
nuScenes (Cross-City UDA) | A cross-city UDA benchmark built upon nuScenes. | Provide a detailed description of the following dataset: nuScenes (Cross-City UDA) |
Labeled data for citation field extraction | Citations are an important part of scientific papers, and their proper handling is indispensable for the science of science. Citation field extraction is the task of parsing citations: given a citation string, extract the authors, title, venue, DOI, etc. Since citations number in the hundreds of millions, efficient computer-based methods for this task are very important.
The development of machine learning methods for citation field extraction requires ground truth: a large corpus of labeled citations. This dataset provides a very large (41M) corpus of labeled data obtained by the reverse process: we took structured citation lists and used BibTeX to generate labeled citation strings. | Provide a detailed description of the following dataset: Labeled data for citation field extraction |
doges-dogaresse | This is a list of all doges of the Venetian Republic, as well as their wives where there is a record that they existed. Entries include the name, the surname if known, and the dates in office, as well as wedding dates. The data was extracted from Wikipedia, with some errors fixed by checking against other sources. | Provide a detailed description of the following dataset: doges-dogaresse |
Matbench | The Matbench test suite v0.1 contains 13 supervised ML tasks from 10 datasets. Matbench’s data are sourced from various subdisciplines of materials science, such as experimental mechanical properties (alloy strength), computed elastic properties, computed and experimental electronic properties, optical and phonon properties, and thermodynamic stabilities for crystals, 2D materials, and disordered metals. The number of samples in each task ranges from 312 to 132,752, representing both relatively scarce experimental materials properties and comparatively abundant properties such as DFT-GGA formation energies. Each task is a self-contained dataset containing a single material primitive as input (either composition or composition plus crystal structure) and target property as output for each sample. | Provide a detailed description of the following dataset: Matbench |
SKINL2 | The SKINL2 dataset comprises a total of 376 light fields acquired under similar conditions. The images were classified into eight categories, according to the type of skin lesion/ICD code:
- Melanoma / C43
- Melanocytic Nevus / D22
- Basal Cell Carcinoma / D04
- Seborrheic Keratosis / L82
- Hemangioma / D18
- Dermatofibroma / D23
- Psoriasis / L40
- Other
S. M. M. de Faria et al., "Light Field Image Dataset of Skin Lesions," 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 2019, pp. 3905-3908. DOI: 10.1109/EMBC.2019.8856578 | Provide a detailed description of the following dataset: SKINL2 |
SD-198 | The SD-198 dataset contains 198 different diseases, ranging from different types of eczema and acne to various cancerous conditions. There are 6,584 images in total. A subset, SD-128, includes the classes with more than 20 image samples. | Provide a detailed description of the following dataset: SD-198 |
MED-NODE | "Our dataset consists of 70 melanoma and 100 naevus images from the digital image archive of the Department of Dermatology of the University Medical Center Groningen (UMCG) used for the development and testing of the MED-NODE system for skin cancer detection from macroscopic images. The file - complete_mednode_dataset.zip 24KB - contains 170 images (70 melanoma and 100 nevi cases)."
I. Giotis, N. Molders, S. Land, M. Biehl, M.F. Jonkman and N. Petkov: "MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images", Expert Systems with Applications, 42 (2015), 6578-6585 | Provide a detailed description of the following dataset: MED-NODE |
7-point criteria evaluation Database | "We provide a database for evaluating computerized image-based prediction of the 7-point skin lesion malignancy checklist. The dataset includes over 2000 clinical and dermoscopy color images, along with corresponding structured metadata tailored for training and evaluating computer aided diagnosis (CAD) systems. " | Provide a detailed description of the following dataset: 7-point criteria evaluation Database |
NVIDIA Synthetic Head Dataset | This dataset contains 500K high photo-real rendered images of 10 real head models with (yaw, pitch, roll) head pose labels. Each head is rendered with a different pose and environmental lighting. | Provide a detailed description of the following dataset: NVIDIA Synthetic Head Dataset |
MSU Video Upscalers: Quality Enhancement | The dataset aims to find the algorithms that produce the most visually pleasant image possible and generalize well to a broad range of content. It consists of 30 clips and contains 15 2D-animated segments losslessly recorded from various video games and 15 camera-shot segments from high-bitrate YUV444 sources. The complexity of clips varies significantly in terms of spatial and temporal indexes. Multiple bicubic downscaling mixed with sharpening is used to simulate complex real-world camera degradation. The authors used slight compression and YUV420 conversion to simulate a practical use case. 1920×1080 sources were downscaled to 480×270 input. | Provide a detailed description of the following dataset: MSU Video Upscalers: Quality Enhancement |
UMAD | UMAD is a virtual-scene dataset made with AirSim, a simulator built on Unreal Engine. To ensure the simulation data is as close to real as possible, on the one hand we use a realistic city scene model from Kirill Sibiriakov, and on the other hand we collect the vehicle motion data and camera data separately to improve their frequency and quality, following the method in the paper. In comparison to urban datasets created using the oblique aerial photography technique, our dataset has higher fidelity in its texture details. | Provide a detailed description of the following dataset: UMAD |
CodeQueries | CodeQueries Benchmark dataset consists of instances of semantic queries, code context and code spans in the context corresponding to the semantic queries. The dataset can be used in experiments involving semantic query comprehension with an extractive question-answering methodology over code. More details can be found in the [paper](https://arxiv.org/abs/2209.08372). | Provide a detailed description of the following dataset: CodeQueries |
Multidimensional Texture Perception | Texture-based studies and designs have been in focus recently. Whisker-based multidimensional surface texture data is missing in the literature. This data is critical for robotics and machine perception algorithms in the classification and regression of textural surfaces. We present a novel sensor design to acquire multidimensional texture information. The surface texture's roughness and hardness were measured experimentally using sweeping and dabbing. The data is made available to the research community for further advancing texture perception studies. | Provide a detailed description of the following dataset: Multidimensional Texture Perception |
MIDGARD | **MIDGARD** is an open-source simulator for autonomous robot navigation in outdoor unstructured environments.
It is designed to enable the training of autonomous agents (e.g., unmanned ground vehicles) in photorealistic 3D environments, and support the generalization skills of learning-based agents thanks to the variability in training scenarios. | Provide a detailed description of the following dataset: MIDGARD |
STDW | **STDW** is a diverse large-scale dataset for table detection with more than seven thousand samples containing a wide variety of table structures collected from many diverse sources. | Provide a detailed description of the following dataset: STDW |
WildQA | **WildQA** is a video understanding dataset of videos recorded in outdoor settings. The dataset can be used to evaluate models for video question answering. | Provide a detailed description of the following dataset: WildQA |
Baxter-UR5_95-Objects | In this dataset, two robots, Baxter and UR5, perform 8 behaviors (look, grasp, pick, hold, shake, lower, drop, and push) on 95 objects that vary by 5 colors (blue, green, red, white, and yellow), 6 contents (wooden buttons, plastic dice, glass marbles, nuts & bolts, pasta, and rice), and 4 weights (empty, 50g, 100g, and 150g). There are 90 objects with contents (5 colors x 3 weights x 6 contents) and 5 objects without any content that vary only by the 5 colors. Both robots perform 5 trials on each object, resulting in 7,600 interactions (2 robots x 8 behaviors x 95 objects x 5 trials). | Provide a detailed description of the following dataset: Baxter-UR5_95-Objects |
Plittersdorf | A set of 221 stereo videos captured by the SOCRATES stereo camera trap in a wildlife park in Bonn, Germany between February and July of 2022. A subset of frames is labeled with instance annotations in the COCO format. | Provide a detailed description of the following dataset: Plittersdorf |
ImageNet-S | Powered by the ImageNet dataset, unsupervised learning on large-scale data has made significant advances for classification tasks. There are two major challenges to allowing such an attractive learning modality for segmentation tasks: i) a large-scale benchmark for assessing algorithms is missing; ii) unsupervised shape representation learning is difficult. We propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to track the research progress. Based on the ImageNet dataset, we propose the ImageNet-S dataset with 1.2 million training images and 50k high-quality semantic segmentation annotations for evaluation. Our benchmark has a high data diversity and a clear task objective. We also present a simple yet effective baseline method that works surprisingly well for LUSS. In addition, we benchmark related un/weakly/fully supervised methods accordingly, identifying the challenges and possible directions of LUSS. | Provide a detailed description of the following dataset: ImageNet-S |
University of Waterloo skin cancer database | The dataset is maintained by VISION AND IMAGE PROCESSING LAB, University of Waterloo.
The images of the dataset were extracted from the public databases DermIS and DermQuest, along with manual segmentations of the lesions.
The dataset was used in the following journal publication.
[1] Glaister, J., A. Wong, and D. A. Clausi, "Automatic segmentation of skin lesions from dermatological photographs using a joint probabilistic texture distinctiveness approach", IEEE Transactions on Biomedical Engineering
[2] Amelard, R., J. Glaister, A. Wong, and D. A. Clausi, "High-level intuitive features (HLIFs) for intuitive skin lesion description", IEEE Transactions on Biomedical Engineering, vol. 62, issue 3, pp. 820-831, October 2015.
[3] Glaister, J., R. Amelard, A. Wong, and D. A. Clausi, "MSIM: Multi-Stage Illumination Modeling of Dermatological Photographs for Illumination-Corrected Skin Lesion Analysis", IEEE Transactions on Biomedical Engineering, vol. 60, issue 7, pp. 1873 - 1883, November, 2013. | Provide a detailed description of the following dataset: University of Waterloo skin cancer database |
Hazards&Robots | We consider the problem of detecting, in the visual sensing data stream of an autonomous mobile robot, semantic patterns that are unusual (i.e., anomalous) with respect to the robot’s previous experience in similar environments. These anomalies might indicate unforeseen hazards and, in scenarios where failure is costly, can be used to trigger an avoidance behavior. We contribute three novel image-based datasets acquired in robot exploration scenarios, comprising a total of more than 200k labeled frames, spanning various types of anomalies. | Provide a detailed description of the following dataset: Hazards&Robots |
ShapeNet-ViPC | A large-scale dataset for the point cloud completion task on the ShapeNet dataset. | Provide a detailed description of the following dataset: ShapeNet-ViPC |
ScienceQA | **Science Question Answering** (**ScienceQA**) is a new benchmark that consists of 21,208 multimodal multiple choice questions with diverse science topics and annotations of their answers with corresponding lectures and explanations. Out of the questions in **ScienceQA**, 10,332 (48.7%) have an image context, 10,220 (48.2%) have a text context, and 6,532 (30.8%) have both. Most questions are annotated with grounded lectures (83.9%) and detailed explanations (90.5%). The lecture and explanation provide general external knowledge and specific reasons, respectively, for arriving at the correct answer. To the best of our knowledge, **ScienceQA** is the first large-scale multimodal dataset that annotates lectures and explanations for the answers.
**ScienceQA**, in contrast to previous datasets, has richer domain diversity from three subjects: natural science, language science, and social science. Questions in each subject are categorized first by the topic (Biology, Physics, Chemistry, etc.), then by the category (Plants, Cells, Animals, etc.), and finally by the skill (Classify fruits and vegetables as plant parts, Identify countries of Africa, etc.). **ScienceQA** features 26 topics, 127 categories, and 379 skills that cover a wide range of domains. | Provide a detailed description of the following dataset: ScienceQA |
UTSig | UTSig (University of Tehran Persian Signature) dataset is freely available at MLCM lab website: http://mlcm.ut.ac.ir/Datasets.html
UTSig is a Persian offline signature dataset. This rich dataset consists of a significant number of classes and samples, where the aforementioned variables are considered during the signature collection procedure. UTSig provides the research community with the opportunity to train, test, and compare different Persian offline SVSs, and to evaluate different culture-independent classifiers on a rich dataset using its proposed standard experimental setups. | Provide a detailed description of the following dataset: UTSig |
CSL-2022 | We present CSL, a large-scale **C**hinese **S**cientific **L**iterature dataset,
which contains the titles, abstracts, keywords and academic fields of 396,209 papers.
To our knowledge, CSL is the first scientific document dataset in Chinese.
[Paper](https://arxiv.org/abs/2209.05034) | [Code and data](https://github.com/ydli-ai/CSL)
## Dataset
We obtain the paper's meta-information from the
[National Engineering Research Center for Science and Technology Resources Sharing Service (NSTR)](https://nstr.escience.net.cn), dating from 2010 to 2020.
Then, we filter data by the Catalogue of Chinese Core Journals.
According to the Catalogue and collected data, we divide academic fields into 13 first-level categories (e.g., Engineering, Science) and 67 second-level disciplines (e.g., Mechanics, Mathematics).
In total, we collect 396,209 instances for the CSL dataset, represented as tuples <T, A, K, c, d>, where *T* is the title, *A* is the abstract, *K* is a list of keywords, *c* is the category label and *d* is the discipline label.
The paper distribution over categories and examples of disciplines are shown below:
| Category | \#d | len(T) | len(A) | num(K) | \#Samples | Discipline Examples |
|-----------------|-------------:|-------:|-------:|-------:|----------:|---------------------------------------|
| Engineering | 27 | 19.1 | 210.9 | 4.4 | 177,600 | Mechanics, Architecture, Electrical Science |
| Science | 9 | 20.7 | 254.4 | 4.3 | 35,766 | Mathematics, Physics, Astronomy, Geography |
| Agriculture | 7 | 17.1 | 177.1 | 7.1 | 39,560 | Crop Science, Horticulture, Forestry |
| Medicine | 5 | 20.7 | 269.5 | 4.7 | 36,783 | Clinical Medicine, Dental Medicine, Pharmacy |
| Management | 4 | 18.7 | 157.7 | 6.2 | 23,630 | Business Management, Public Administration |
| Jurisprudence | 4 | 18.9 | 174.4 | 6.1 | 21,554 | Legal Science, Political Science, Sociology |
| Pedagogy | 3 | 17.7 | 179.4 | 4.3 | 16,720 | Pedagogy, Psychology, Physical Education |
| Economics | 2 | 19.5 | 177.2 | 4.5 | 11,558 | Theoretical Economics, Applied Economics |
| Literature | 2 | 18.8 | 158.2 | 8.3 | 10,501 | Chinese Literature, Journalism |
| Art | 1 | 17.8 | 170.8 | 5.4 | 5,201 | Art |
| History | 1 | 17.6 | 181.0 | 6.0 | 6,270 | History |
| Strategics | 1 | 17.5 | 169.3 | 4.0 | 3,555 | Military Science |
| Philosophy | 1 | 18.0 | 176.5 | 8.0 | 7,511 | Philosophy |
| All | 67 | | | | 396,209 | |
## Evaluation Tasks
We build a benchmark to facilitate the development of Chinese scientific literature NLP.
It contains diverse tasks, ranging from classification to text generation, representing many practical scenarios.
We randomly select 100k samples and split them into training, validation and test sets with a ratio of 0.8 : 0.1 : 0.1.
This split is shared across different tasks, which allows multitask training and evaluation.
Datasets are presented in text2text format.
#### 1. Text Summarization (Title Prediction)
Predict the paper title from the abstract.
Data examples:
```
{
"prompt": "to title",
"text_a": "多个相邻场景同时进行干涉参数外定标的过程称为联合定标,联合定标能够 \
保证相邻场景的高程衔接性,能够实现无控制点场景的干涉定标.该文提出了 \
一种适用于机载InSAR系统的联合定标算法...",
"text_b": "基于加权最优化模型的机载InSAR联合定标算法"
}
```
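For illustration, here is a minimal sketch (plain Python; the `prompt: input` concatenation scheme below is our assumption for illustration, not a format prescribed by the benchmark) of turning such a record into a source/target pair for a text2text model:
```
# Sketch: map a CSL text2text record to a (source, target) training pair.
# Field names follow the examples in this section; the concatenation
# scheme is an assumption for illustration.
def to_seq2seq_pair(record):
    source = f"{record['prompt']}: {record['text_a']}"
    target = record["text_b"]
    return source, target

example = {
    "prompt": "to category",
    "text_a": "正畸牵引联合牙槽外科矫治上颌尖牙埋伏阻生的临床观察",
    "text_b": "医学",
}
print(to_seq2seq_pair(example))
```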
#### 2. Keyword Generation
Predict a list of keywords from a given paper title and abstract.
Data examples:
```
{
"prompt": "to keywords",
"text_a": "通过对72个圆心角为120°的双跨偏心支承弯箱梁桥模型的计算分析,以梁 \
格系法为基础编制的3D-BSA软件系统为结构计算工具,用统计分析的方法建 \
立双跨偏心支承弯箱梁桥结构反应在使用极限状态及承载能力极限状态下与 \
桥梁跨长... 偏心支承对120°圆心角双跨弯箱梁桥的影响",
"text_b": "曲线桥_箱形梁_偏心支承_设计_经验公式"
}
```
#### 3. Category Classification
Predict the category from the paper title (13 classes).
Data examples:
```
{
"prompt": "to category",
"text_a": "基于模糊C均值聚类的流动单元划分方法——以克拉玛依油田五3中区克下组为例",
"text_b": "工学"
},
{
"prompt": "to category",
"text_a": "正畸牵引联合牙槽外科矫治上颌尖牙埋伏阻生的临床观察",
"text_b": "医学"
}
```
#### 4. Discipline Classification
Predict the discipline from the paper abstract (67 classes).
Data examples:
```
{
"prompt": "to discipline",
"text_a": "某铁矿选矿厂所产铁精矿含硫超过0.3%,而现场为了今后发展的需要,要 \
求将含硫量降到0.1%以下.为此,针对该铁精矿中硫化物主要以磁黄铁矿 \
形式存在、硫化物多与铁矿物连生且氧化程度较高的特点...",
"text_b": "矿业工程"
},
{
"prompt": "to discipline",
"text_a": "为了校正广角镜头的桶形畸变,提出一种新的桶形畸变数字校正方法.它 \
使用点阵样板校正的方法,根据畸变图和理想图中圆点的位置关系,得出 \
畸变图像素在X轴和Y轴方向上的偏移量曲面...",
"text_b": "计算机科学与技术"
}
``` | Provide a detailed description of the following dataset: CSL-2022 |
IHDS | **IHDS** is a nationally representative, multi-topic panel survey of 41,554 households in 1503 villages and 971 urban neighborhoods across India. | Provide a detailed description of the following dataset: IHDS |
Dataset: Impact Events for Structural Health Monitoring of a Plastic Thin Plate | ## Dataset outline
This repository contains a novel time-series dataset for impact detection and localization on a plastic thin plate, aimed at Structural Health Monitoring (SHM) applications, using ceramic piezoelectric transducers (PZTs) connected to an Internet of Things (IoT) device. The dataset was collected from an experimental procedure of low-velocity, low-energy impact events that includes at least 3 repetitions of each unique experiment, with input measurements coming from 4 PZT sensors placed at the corners of the plate. For each repetition and sensor, 5000 values are stored at a 100 kHz sampling rate. The system is excited with a steel ball, and the height from which it is released varies from 10 cm to 20 cm.
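As a quick sanity check on those numbers (a sketch in Python, not project code), 5000 samples at 100 kHz mean that each repetition spans 50 ms per sensor, and the time axis of a channel can be reconstructed as follows:
```
import numpy as np

FS_HZ = 100_000    # stated sampling rate (100 kHz)
N_SAMPLES = 5_000  # values stored per repetition and per PZT sensor

t = np.arange(N_SAMPLES) / FS_HZ  # time axis in seconds: 0 ... 0.04999
print(f"duration per repetition: {1e3 * N_SAMPLES / FS_HZ:.0f} ms")  # 50 ms
```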
To the best of our knowledge, we are the first to publish a public dataset that contains PZT sensor measurements of low-velocity, low-energy impact events on a thin plastic plate. In addition, we also contribute our methodology for data collection using an SHM IoT system with resource constraints (based on an Arduino NANO 33 MCU), as opposed to the majority of the literature, which uses oscilloscopes for data acquisition. This concept of an MCU-based system for data collection in SHM is especially important nowadays, due to the fast rise of extreme-edge and embedded machine learning solutions that enable a variety of real-time data-driven SHM applications. Finally, we wish to highlight that by using this specific Microcontroller Unit (MCU) and these sensors, the proposed implementation aims for an overall low-cost data collection solution. | Provide a detailed description of the following dataset: Dataset: Impact Events for Structural Health Monitoring of a Plastic Thin Plate |
FEIDEGGER | The FEIDEGGER (fashion images and descriptions in German) dataset is a new multi-modal corpus that focuses specifically on the domain of fashion items and their visual descriptions in German. The dataset was created as part of ongoing research at Zalando into text-image multi-modality in the area of fashion.
The dataset itself consists of 8732 high-resolution images, each depicting a dress from the Zalando shop assortment against a white background. For each image we provide five textual annotations in German, each of which was generated by a separate user. The example above shows 2 of the 5 descriptions for a dress (English translations are given for illustration only and are not part of the dataset). | Provide a detailed description of the following dataset: FEIDEGGER |
Mars DTM Estimation | This dataset is useful for research in the field of Mars surface monocular depth estimation.
The dataset is composed of 250k patches, where each patch is a 3-channel 512 x 512 raster. The first two channels are, respectively, the left and right images of the stereo pair, while the third channel is the DTM.
Because DTMs are saved with absolute values, you have to preprocess them if you want to predict relative values.
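A minimal preprocessing sketch (assuming each patch loads as a NumPy array of shape (512, 512, 3) with the DTM in the third channel, as described above; subtracting the per-patch minimum is one reasonable choice, not a prescribed one):
```
import numpy as np

# Sketch: convert the absolute-elevation DTM channel of a (512, 512, 3)
# patch to per-patch relative values by subtracting the patch minimum.
def dtm_to_relative(patch):
    out = patch.astype(np.float32).copy()
    out[..., 2] -= out[..., 2].min()  # elevation relative to patch minimum
    return out
```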
The dataset size is 800 GB. | Provide a detailed description of the following dataset: Mars DTM Estimation |
SDD | **SDD** dataset contains a variety of indoor and outdoor scenes, designed for Image Defocus Deblurring. There are 50 indoor scenes and 65 outdoor scenes in the training set, and 11 indoor scenes and 24 outdoor scenes in the testing set. | Provide a detailed description of the following dataset: SDD |
ArtFID Dataset | The ArtFID dataset contains around 250k labeled artworks. | Provide a detailed description of the following dataset: ArtFID Dataset |
SketchyVR | We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs, covering 1,005 chair shapes with large shape diversity from the ShapeNetCore dataset, collected from 50 participants.
The collected sketches are provided as both obj files and point cloud files, along with corresponding timing information in timestamp.txt. Each sketch obj file contains a collection of strokes, where each stroke consists of several edges.
This dataset targets fine-grained 3D VR sketch to 3D shape retrieval, as studied in the paper, but it can also enable other novel applications, especially those that require fine-grained detail, such as fine-grained 3D shape reconstruction. | Provide a detailed description of the following dataset: SketchyVR |
KArSL | KArSL (**K**FUPM **Ar**abic **S**ign **L**anguage) is an Arabic sign language (ArSL) database collected using Microsoft Kinect V2. The database consists of **502 sign words** constituting the sign words of eleven chapters of ArSL dictionary (Letters, Numbers, Health, Common verbs, Family, Characteristics, Directions and places, Social relationships, In house, Religion, and Jobs and professions). Each sign of the database is performed by three professional signers. The signers involved in this database are all male and their age is between 30 and 40 years. Each signer repeated **each sign 50 times** which resulted in a total of **75,300 samples** of the whole database **(502 x 3 x 50)** as shown in the table below. | Provide a detailed description of the following dataset: KArSL |
GasHisSDB | Four pathologists from Longhua Hospital, Shanghai University of Traditional Chinese Medicine, provided 600 gastric cancer pathology images of size 2048×2048 pixels. These images were scanned using a NewUsbCamera and digitized at ×20 magnification; tissue-level labels were also given by the four experienced pathologists. Based on that, five biomedical researchers from Northeastern University cropped them into 245,196 sub-sized gastric cancer pathology images, and two experienced pathologists from Liaoning Cancer Hospital and Institute performed the calibration. The 245,196 images were split into three sizes (160×160, 120×120, 80×80) for two categories: abnormal and normal. | Provide a detailed description of the following dataset: GasHisSDB |
Thermal Face Database | High-resolution thermal infrared face database with extensive manual annotations, introduced by Kopaczka et al., 2018. It is useful for training algorithms for image processing tasks as well as facial expression recognition. The full database, all annotations and the complete source code are freely available from the authors for research purposes at https://github.com/marcinkopaczka/thermalfaceproject.
Please cite following papers for the dataset:
[1] M. Kopaczka, R. Kolk and D. Merhof, "A fully annotated thermal face database and its application for thermal facial expression recognition," 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2018, pp. 1-6, doi: 10.1109/I2MTC.2018.8409768.
[2] Kopaczka, M., Kolk, R., Schock, J., Burkhard, F., & Merhof, D. (2018). A thermal infrared face database with facial landmarks and emotion labels. IEEE Transactions on Instrumentation and Measurement, 68(5), 1389-1401. | Provide a detailed description of the following dataset: Thermal Face Database |
ZEGGS Dataset | **ZEGGS** dataset contains 67 sequences of monologues performed by a female actor speaking in English and covers 19 different motion styles. | Provide a detailed description of the following dataset: ZEGGS Dataset |
Penguin dataset | The penguin dataset is a collection of images of penguin colonies in Antarctica coming from the larger Penguin Watch project, which was set up with the purpose of monitoring changes in their population. The images are taken by fixed cameras at over 40 different locations, which have been capturing an image per hour for several years. In order to track the colony sizes, the number of penguins in each of the images in the dataset is required.
So far, the penguin count has been done with the help of citizen scientists on the Penguin Watch site by Zooniverse, where interested users can place dots on top of the penguins. Here we release part of this data to the vision community in order to learn from the crowd-sourced dot-annotations to automatically annotate these images.
For more information about the project, please visit Penguin Watch. | Provide a detailed description of the following dataset: Penguin dataset |
NCT-CRC-HE-100K | The NCT-CRC-HE-100K dataset is a set of 100,000 non-overlapping image patches extracted from 86 H&E-stained human cancer tissue slides and normal tissue from the NCT biobank (National Center for Tumor Diseases) and the UMM pathology archive (University Medical Center Mannheim). The companion dataset Colorectal Cancer-Validation-Histology-7K (CRC-VAL-HE-7K) consists of 7,180 images extracted from 50 patients with colorectal adenocarcinoma and was created so as not to overlap with patients in the NCT-CRC-HE-100K dataset. It was created by pathologists manually delineating tissue regions in whole slide images into the following nine tissue classes: Adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR), colorectal adenocarcinoma epithelium (TUM).
Image source: [https://www.cs.unc.edu/~mn/sites/default/files/macenko2009.pdf](https://www.cs.unc.edu/~mn/sites/default/files/macenko2009.pdf) | Provide a detailed description of the following dataset: NCT-CRC-HE-100K |
flower_photos | A large set of images of flowers
Homepage: https://www.tensorflow.org/tutorials/load_data/images
Dataset size: 221.83 MiB | Provide a detailed description of the following dataset: flower_photos |
UHRSD | Recent salient object detection (SOD) methods based on deep neural networks have achieved remarkable performance. However, most existing SOD models designed for low-resolution input perform poorly on high-resolution images due to the contradiction between the sampling depth and the receptive field size. Aiming at resolving this contradiction, we propose a novel one-stage framework called Pyramid Grafting Network (PGNet), using transformer and CNN backbones to extract features from different resolution images independently and then graft the features from the transformer branch to the CNN branch. An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable the CNN branch to combine broken detailed information more holistically, guided by different source features during the decoding process. Moreover, we design an Attention Guided Loss (AGL) to explicitly supervise the attention matrix generated by CMGM to help the network better interact with the attention from different models. We contribute a new Ultra-High-Resolution Saliency Detection dataset, UHRSD, containing 5,920 images at 4K-8K resolutions. To our knowledge, it is the largest dataset in both quantity and resolution for the high-resolution SOD task, and it can be used for training and testing in future research. Sufficient experiments on UHRSD and widely-used SOD datasets demonstrate that our method achieves superior performance compared to the state-of-the-art methods. | Provide a detailed description of the following dataset: UHRSD |
DAVIS-S | To enrich the diversity, we also collect 92 images suitable for saliency detection from DAVIS [27], a densely annotated high-resolution video segmentation dataset. Images in this dataset are precisely annotated and have very high resolutions (i.e., 1920×1080). We ignore the categories of the objects and generate saliency ground truth masks for this dataset. For convenience, the collected dataset is denoted as DAVIS-S. | Provide a detailed description of the following dataset: DAVIS-S |
VR Mocap Dataset for Pose/Orientation Prediction | Data used for the paper Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices.
MMData.zip contains the necessary files to execute the project in Unity. Use it only by following the instructions in the GitHub project.
MMVR_Dataset.zip contains all .bvh files used for training the orientation prediction network. All files are captured with an Xsens Awinda motion capture system while using Virtual Reality. Visit GitHub for more information. | Provide a detailed description of the following dataset: VR Mocap Dataset for Pose/Orientation Prediction |
DexHand | Inspired by OpenAI's dexterous in-hand manipulation work, we collected a synthetic RGB-D dataset of a Shadow Hand robot manipulating a cube towards arbitrary goal configurations. This dataset consists of about 10,000 videos, each including 25 RGB-D frames.
DexHand is challenging as the robot has 24 degrees of freedom, and there can be a significant amount of motion and occlusion between consecutive frames. | Provide a detailed description of the following dataset: DexHand |
Omnipush | Omnipush is a dataset with a high variety of planar pushing behavior. The dataset contains 250 pushes for each of 250 objects, all recorded with RGB-D and high-precision state tracking.
The objects are constructed to explore key factors that affect pushing (the shape of the object and its mass distribution) which have not been broadly explored in previous datasets, and they allow the study of generalization in model learning. | Provide a detailed description of the following dataset: Omnipush |
Gun Violence Corpus | The Gun Violence Corpus (GVC) consists of 241 unique incidents for which we have structured data on a) location, b) time, c) the name, gender and age of the victims and d) the status of the victims after the incident: killed or injured. For these data, 510 news articles were gathered following the 'data to text' approach. The structured data and articles report on a variety of gun violence incidents, such as drive-by shootings, murder-suicides, hunting accidents, involuntary gun discharges, etcetera. The documents have been manually annotated for all mentions that make reference to the gun violence incident at hand. | Provide a detailed description of the following dataset: Gun Violence Corpus |
Sustainable Venture Capital Survey 2022 | To explore the nascent area of sustainable venture capital, a review of related research was conducted and social entrepreneurs & investors interviewed to construct a questionnaire assessing the interests and intentions of current & future ecosystem participants. Analysis of 114 responses received via several sampling methods revealed statistically significant relationships between investing preferences and genders, generations, sophistication, and other variables, all the way down to the level of individual UN Sustainable Development Goals (SDGs).
* the survey data has been deidentified for privacy reasons
* the survey sample may not be suitable for your application
* IBM SPSS Syntax code has been provided on GitHub to run on your own results
* the dataset for the separate database (Crunchbase) analysis is unable to be shared as it has a proprietary license | Provide a detailed description of the following dataset: Sustainable Venture Capital Survey 2022 |
DifferSketching | **DifferSketching** is a dataset of freehand sketches to understand how differently professional and novice users sketch 3D objects. It includes 3,620 freehand multi-view sketches registered with their corresponding 3D objects. To date, the dataset is an order of magnitude larger than the existing datasets. | Provide a detailed description of the following dataset: DifferSketching |
ViPhy | **ViPhy** leverages two datasets: Visual Genome (Krishna et al., 2017), and ADE20K (Zhou et al., 2017). The dense captions in Visual Genome provide a broad coverage of object classes, making it a suitable resource for collecting subtype candidates. For extracting hyponyms from knowledge base, we acquire "is-a" relations from ConceptNet (Speer et al., 2017), and augment the subtype candidate set. We extract spatial relations from ADE20K, as it provides images categorised by scene type – primarily indoor environments with high object density: {bedroom, bathroom, kitchen, living room, office}. | Provide a detailed description of the following dataset: ViPhy |
SPICE | **SPICE** is a collection of quantum mechanical data for training potential functions. The emphasis is particularly on simulating drug-like small molecules interacting with proteins. It is designed to achieve the following goals:
- Cover a wide range of chemical space
- Cover a wide range of conformations
- Include forces as well as energies
- Include a variety of other information
- Use an accurate level of theory
- Be a dynamic, growing dataset
- Be freely available under a non-restrictive licence | Provide a detailed description of the following dataset: SPICE |
WikiDes | WikiDes is a dataset for generating descriptions of Wikidata from Wikipedia paragraphs. | Provide a detailed description of the following dataset: WikiDes |
HUME-VB | The Hume Vocal Burst Database (H-VB) includes all train, validation, and test recordings and corresponding emotion ratings for the train and validation recordings.
This dataset contains 59,201 audio recordings of vocal bursts from 1,702 speakers from 4 cultures (the U.S., South Africa, China, and Venezuela), ranging in age from 20 to 39.5 years old. The duration of data in this version of H-VB is 36 hours (mean: 2.23 sec). The emotion ratings correspond to the ten emotion concepts listed below, with averaged 0-100 intensities for each concept; each sample was rated by an average of 85.2 raters.
Emotion Labels: Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, Surprise. | Provide a detailed description of the following dataset: HUME-VB |
MIMIC II | The data used in this research is a subset of the Multi-parameter Intelligent Monitoring for Critical Care (MIMIC) II database. It contains minute-by-minute time series of Heart Rate (HR), Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP), and Mean Arterial blood Pressure (MAP), arranged into records, each of which corresponds to an adult patient's ICU stay. | Provide a detailed description of the following dataset: MIMIC II |
WebLI | **WebLI** (Web Language Image) is a web-scale multilingual image-text dataset, designed to support Google’s vision-language research, such as the large-scale pre-training for image understanding, image captioning, visual question answering, object detection etc.
The dataset is built from the public web, including image bytes and image-associated texts (alt-text, OCR, page title), covering 109 languages and many other features. The dataset is deduplicated against 68 common vision/vision-language tasks, and has no user or personally identifiable data, with careful RAI considerations.
source: [PaLI: A Jointly-Scaled Multilingual Language-Image Model](https://arxiv.org/abs/2209.06794) | Provide a detailed description of the following dataset: WebLI |
ArNLI | Natural Language Inference processes pairs of sentences to extract their semantic relations.
Sentence pairs are annotated with three classes (Contradiction, Entailment, Neutral):
1- Entailment: A and B intersect (A ∩ B ≠ ∅, A ∪ B ≠ Universe), or one may contain the other ((A ⊂ B) OR (B ⊂ A)).
2- Contradiction: A and B cannot hold together; if A is True then B is False (A ∩ B = ∅, and A ∪ B = Universe).
3- Neutral: A and B have no semantic relations. Each of them is a different set in the universe: there is no intersection, and their union is a subset of the universe, not the whole universe (A ∩ B = ∅ and A ∪ B ≠ Universe). | Provide a detailed description of the following dataset: ArNLI |
Artie Bias Corpus | **Artie Bias Corpus** is an open dataset for detecting demographic bias in speech applications. | Provide a detailed description of the following dataset: Artie Bias Corpus |
EHR Dataset for Patient Treatment Classification | This is an Electronic Health Record dataset collected from a private hospital in Indonesia. It contains patients' laboratory test results, used to determine whether the next patient treatment should be in care or out care. The task embedded in the dataset is classification prediction. | Provide a detailed description of the following dataset: EHR Dataset for Patient Treatment Classification |
Code2Seq (Java) | The three Java corpora used in the code2seq paper: Java-Small, Java-Med, and Java-Large. | Provide a detailed description of the following dataset: Code2Seq (Java) |
CocoChorales | # The CocoChorales Dataset
**CocoChorales** is a dataset consisting of over 1400 hours of audio mixtures containing four-part chorales performed by 13 instruments, all synthesized with realistic-sounding generative models. CocoChorales contains mixes, sources, and MIDI data, as well as annotations for note expression (e.g., per-note volume and vibrato) and synthesis parameters (e.g., multi-f0).
## Dataset
We created CocoChorales using two generative models produced by Magenta: [Coconet](https://magenta.tensorflow.org/coconet) and [MIDI-DDSP](https://magenta.tensorflow.org/midi-ddsp). The dataset was created in two stages. First, we used a trained Coconet model to generate a large set of four-part chorales in the style of J.S. Bach. The output of this first stage is a set of note sequences, stored as MIDI, to which we assign a tempo and add random timing variations to each note (for added realism).
In the second stage, we use MIDI-DDSP to synthesize these MIDI files into audio, resulting in audio clips that sound like the chorales were performed by live musicians. This MIDI-DDSP model was trained on [URMP](https://labsites.rochester.edu/air/projects/URMP.html). We define a set of ensembles that consist of the following instruments, in Soprano, Alto, Tenor, Bass (SATB) order:
- **String Ensemble**: Violin 1, Violin 2, Viola, Cello.
- **Brass Ensemble**: Trumpet, French Horn, Trombone, Tuba.
- **Woodwind Ensemble**: Flute, Oboe, Clarinet, Bassoon.
- **Random Ensemble**: each SATB part is randomly assigned an instrument according to the following:
  - *Soprano*: Violin, Flute, Trumpet, Clarinet, Oboe.
  - *Alto*: Violin, Viola, Flute, Clarinet, Oboe, Saxophone, Trumpet, French Horn.
  - *Tenor*: Viola, Cello, Clarinet, Saxophone, Trombone, French Horn.
  - *Bass*: Cello, Double Bass, Bassoon, Tuba.
Each instrument in the ensemble is synthesized separately, with annotations for the high-level expressions used for each note (e.g., vibrato, note volume, note brightness, etc.; all expressions are shown [here](https://midi-ddsp.github.io/#note_expression_control), with more details in Sections 3.2 and B.3 of the [MIDI-DDSP paper](https://openreview.net/pdf?id=UseMOjWENv)) as well as detailed low-level annotations for synthesis parameters (e.g., f0's, amplitudes of each harmonic, etc.). Because the MIDI-DDSP model skews sharp, we randomly applied pitch augmentation to the f0's (see Figure 2, [here](https://arxiv.org/pdf/2209.14458.pdf)) to counteract this bias. All four audio clips for each instrument in the ensemble are then mixed together to produce an example in the dataset.
Because all of the data in CocoChorales originate from generative models, all of the annotations perfectly correspond to the audio data. All in all, the dataset contains 240,000 examples: 60,000 mixes from each one of the four ensemble types above. Each ensemble has its own train/validation/test split. All of the audio is 16 kHz, 16-bit PCM data. Each example contains:
- A mixture
- Source audio for all four instruments
  - Gain applied to each source
- MIDI with tempo and precise timing
- The name of the ensemble with instrument names
- Note expression annotations for every note:
  - Volume, Volume Fluctuation, Volume Peak Position, Vibrato, Brightness, and Attack Noise used by MIDI-DDSP to synthesize every note (see Sections 3.2 and B.3 of the MIDI-DDSP paper for more details)
- Synthesis parameters for every source (250 Hz):
  - Fundamental frequency (f0), amplitude, amplitude of all harmonics, filtered noise parameters
  - Amount of pitch augmentation applied
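As an illustration only, here is a loading sketch in Python: the directory layout and file names below are hypothetical placeholders, and the actual structure is documented in the data-format page linked in the next section.
```
import wave
from pathlib import Path

def read_pcm16(path):
    # Read one 16 kHz, 16-bit PCM WAV file (the audio format stated above).
    with wave.open(str(path), "rb") as w:
        assert w.getframerate() == 16_000 and w.getsampwidth() == 2
        return w.readframes(w.getnframes())

# Hypothetical example directory; consult data_format.md for real names.
example = Path("cocochorales/train/string_track000001")
mix = read_pcm16(example / "mix.wav")
stems = {p.stem: read_pcm16(p) for p in sorted((example / "stems").glob("*.wav"))}
```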
## Further Details
A detailed view of the contents of the CocoChorales dataset is provided [at this link](https://github.com/lukewys/chamber-ensemble-generator/blob/master/data_format.md).
# Download
For download instructions, please see [this github page](https://github.com/lukewys/chamber-ensemble-generator#dataset-download). The compressed version of the full dataset is 2.9 TB, and the uncompressed version is larger than 4 TB. There is a "tiny" version available for download as well.
MD5 Hashes for all zipped files in the download are provided [here](https://storage.googleapis.com/magentadata/datasets/cocochorales/cocochorales_full_v1_zipped/cocochorales_md5s.txt).
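A small verification sketch in Python (assuming the hash file follows the common `<md5>  <filename>` convention, which we have not verified):
```
import hashlib

def md5sum(path, chunk_bytes=1 << 20):
    # Compute the MD5 hex digest of a (potentially huge) file in chunks.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_bytes), b""):
            h.update(block)
    return h.hexdigest()

# Compare the output against the matching entry in cocochorales_md5s.txt;
# the file name below is a hypothetical placeholder.
print(md5sum("cocochorales_full_v1.tar.bz2"))
```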
# License
The CocoChorales dataset was made by [Yusong Wu](https://lukewys.github.io/) and is available under the [Creative Commons Attribution 4.0 International (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
# How to Cite
If you use CocoChorales in your work, we ask that you cite the following [paper](https://arxiv.org/abs/2209.14458) where it was introduced:
```
Yusong Wu, Josh Gardner, Ethan Manilow, Ian Simon, Curtis Hawthorne, and Jesse Engel.
“The Chamber Ensemble Generator: Limitless High-Quality MIR Data via Generative Modeling.”
arXiv preprint, arXiv:2209.14458, 2022.
```
You can also use the following bibtex entry:
```
@article{wu2022chamber,
title = {The Chamber Ensemble Generator: Limitless High-Quality MIR Data via Generative Modeling},
author = {Wu, Yusong and Gardner, Josh and Manilow, Ethan and Simon, Ian and Hawthorne, Curtis and Engel, Jesse},
journal={arXiv preprint arXiv:2209.14458},
year = {2022},
}
``` | Provide a detailed description of the following dataset: CocoChorales |
Sig53 | A dataset of 53 complex-valued signal modulation classes. | Provide a detailed description of the following dataset: Sig53 |
Argoverse 2 Motion Forecasting | The **Argoverse 2 Motion Forecasting** Dataset is a curated collection of 250,000 scenarios for training and validation. Each scenario is 11 seconds long and contains the 2D, birds-eye-view centroid and heading of each tracked object sampled at 10 Hz.
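As a rough illustration of what one tracked-object observation carries (a hypothetical record type for exposition, not the dataset's actual schema), note that at 10 Hz over 11 seconds each object contributes up to 110 such states per scenario:
```
from dataclasses import dataclass

@dataclass
class ObjectState:
    # One birds-eye-view observation of a tracked object (hypothetical
    # schema for illustration; see the official API for the real one).
    timestep: int       # 0..109 (11 s sampled at 10 Hz)
    x_m: float          # 2D BEV centroid x, metres
    y_m: float          # 2D BEV centroid y, metres
    heading_rad: float  # heading angle, radians
```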
To curate this collection, we sifted through thousands of hours of driving data from our fleet of self-driving test vehicles to find the most challenging segments. We place special emphasis on kinematically and socially unusual behavior, especially when exhibited by actors relevant to the ego-vehicle’s decision-making process. Some examples of interactions captured within our dataset include: buses navigating through multi-lane intersections, vehicles yielding to pedestrians at crosswalks, and cyclists sharing dense city streets.
Spanning 2,000+ km over six geographically diverse cities, Argoverse 2 covers a large geographic area. Argoverse 2 also contains a large object taxonomy with 10 non-overlapping classes that encompass a broad range of actors, both static and dynamic. In comparison to the Argoverse 1 Motion Forecasting Dataset, the scenarios in this dataset are approximately twice as long and more diverse.
Together, these changes incentivize methods that perform well on extended forecast horizons, handle multiple types of dynamic objects, and ensure safety in long tail scenarios. | Provide a detailed description of the following dataset: Argoverse 2 Motion Forecasting |
Extended MP-16 Dataset | To overcome the need for a full installation of a reverse geocoder such as Nominatim, we provide the post-processed output of the reverse geocoding for the MP-16 dataset along with the validation set (YFCC-Val26k) which originally comprised photos and respective GPS coordinates. Both datasets are subsets of the YFCC100M dataset which are crawled from Flickr.
Larson, M., Soleymani, M., Gravier, G., Ionescu, B., & Jones, G. J. (2017). The benchmarking initiative for multimedia evaluation: MediaEval 2016. IEEE MultiMedia, 24(1), 93-96.
Thomee, B., Shamma, D. A., Friedland, G., Elizalde, B., Ni, K., Poland, D., ... & Li, L. J. (2016). YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2), 64-73. | Provide a detailed description of the following dataset: Extended MP-16 Dataset |
Events in Invasion Games Dataset - Handball | This dataset contains the broadcast video streams of handball matches along with synchronized official positional data and human event annotations, comprising 125 min of raw data in total.
### Data Source & Characteristics
- Handball matches from the [Handball-Bundesliga (HBL)](https://www.liquimoly-hbl.de/en/) captured in the 2019/20 season
- Size: 5 matches x 5 sequences x 5 min
- Video:
- unedited broadcast video stream (no cuts, no overlays)
- HD resolution (1280x720px)@30fps
- Positional data:
- official captured by [Kinexon](https://kinexon.com/)
  - manually synchronized to the video streams (offsets and resampling; positional data was originally captured at 20 Hz); see the sketch after the event list below
- Events:
- frame-wise annotations based solely on the video content
- annotations according to the proposed taxonomy
- multiple annotations for two matches (10 sequences) from 3 experts
- hierarchical event format: `<root_event>.<sub_event>.<sub_sub_event>`
- statistics: [event_statistics.ipynb]
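A minimal sketch of the synchronization arithmetic referenced above (the per-sequence offset convention here is our illustration; the dataset's actual synchronization files may encode it differently):
```
def position_index_to_frame(i, offset_s, pos_hz=20.0, fps=30.0):
    # Map positional sample i (captured at pos_hz) to the nearest frame
    # index of the 30 fps broadcast video, given a sequence-specific
    # offset in seconds (hypothetical convention, for illustration).
    t = i / pos_hz + offset_s  # wall-clock time of the positional sample
    return round(t * fps)      # nearest video frame

print(position_index_to_frame(100, offset_s=1.5))  # sample 100 -> frame 195
```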
### License
Position and video data are provided by [Kinexon](https://kinexon.com/) with authorization of the [Handball-Bundesliga (HBL)](https://www.liquimoly-hbl.de/en/).
As *EIGD-H* is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) you must give appropriate credit when using this dataset by
1) naming the *Handball-Bundesliga (HBL)*
2) [citing this publication](#citation) | Provide a detailed description of the following dataset: Events in Invasion Games Dataset - Handball |
Xia and Ding, 2019 | Emotion-cause pair extraction (ECPE) aims to extract the potential pairs of emotions and corresponding causes in a document. This dataset consists of 1,945 Chinese documents from SINA NEWS website. | Provide a detailed description of the following dataset: Xia and Ding, 2019 |
BPAEC | This dataset contains confocal fluorescence microscopy images of nucleus, actin and mitochondria, where each clear image corresponds to 6 out-of-focus images with different degrees of blurring. Images acquired below and above the optimal focal plane are blurry, out-of-focus images. In detail, scans were acquired as z-stacks of 15 layers spanning the depth (8.4 μm) with 0.6 μm between slices; z = 7 is the optimal focal plane, z = 1–6 are below the focal plane, and z = 8–15 are above it. We make layers from z = 4 to z = 10 publicly available; their visual variations in 3-dimensional structure are negligible. In the end, the actin and nucleus datasets each contain 100 in-focus images and 600 out-of-focus images, while the mitochondria dataset contains 97 in-focus images and 582 out-of-focus images. Each image is composed of 1024 × 1024 pixels in 8-bit JPG format. | Provide a detailed description of the following dataset: BPAEC |
Leishmania parasite dataset | This dataset includes sharp-blur pairs of Leishmania images; it is a protozoan parasite microscopy image dataset of Leishmania, obtained from preserved slides stained with Giemsa. The paired blur-sharp images were acquired using a bright-field microscope (Olympus IX53) with 100× magnification oil-immersion objectives. We first capture the sharp images as ground truth, then acquire the corresponding out-of-focus images. The extent and nature of the defocusing are random along the optical axis, so the degree of out-of-focus varies from image to image. This dataset includes 764 in-focus and 764 corresponding out-of-focus images, where each image is composed of 2304 × 1728 pixels in 24-bit JPG format. | Provide a detailed description of the following dataset: Leishmania parasite dataset |
Music4All-Onion | Music4All-Onion is a large-scale, multi-modal music dataset that expands the Music4All dataset by including 26 additional audio, video, and metadata features for 109,269 music pieces, and provides a set of 252,984,396 listening records of 119,140 users, extracted from the online music platform Last.fm. | Provide a detailed description of the following dataset: Music4All-Onion |
CHQ-Summ | Contains 1507 domain-expert annotated consumer health questions and corresponding summaries. The dataset is derived from the community question answering forum and therefore provides a valuable resource for understanding consumer health-related posts on social media. | Provide a detailed description of the following dataset: CHQ-Summ |
MultiDex | We collect a large-scale synthetic grasping dataset for robotic hands with [Differentiable Force Closure (DFC)](https://sites.google.com/view/ral2021-grasp/). It covers 436,000 diverse and stable grasps for 58 household objects from the ContactDB and YCB datasets, across 5 robotic hands: EZGripper, Barrett Hand, Robotiq-3Finger, Allegro Hand and Shadowhand. | Provide a detailed description of the following dataset: MultiDex |
DR.BENCH | **DR.BENCH** is a dataset for developing and evaluating cNLP models with clinical diagnostic reasoning ability. The suite includes six tasks from ten publicly available datasets addressing clinical text understanding, medical knowledge reasoning, and diagnosis generation. | Provide a detailed description of the following dataset: DR.BENCH |
Satimage | The resources for this dataset can be found at https://www.openml.org/d/182
Author: Ashwin Srinivasan, Department of Statistics and Data Modeling, University of Strathclyde
Source: UCI - 1993
Please cite: UCI
The database consists of the multi-spectral values of pixels in 3x3 neighbourhoods in a satellite image, and the classification associated with the central pixel in each neighbourhood. The aim is to predict this classification, given the multi-spectral values. In the sample database, the class of a pixel is coded as a number.
One frame of Landsat MSS imagery consists of four digital images of the same scene in different spectral bands. Two of these are in the visible region (corresponding approximately to the green and red regions of the visible spectrum) and two are in the (near) infra-red. Each pixel is an 8-bit binary word, with 0 corresponding to black and 255 to white. The spatial resolution of a pixel is about 80m x 80m. Each image contains 2340 x 3380 such pixels.
The database is a (tiny) sub-area of a scene, consisting of 82 x 100 pixels. Each line of data corresponds to a 3x3 square neighbourhood of pixels completely contained within the 82x100 sub-area. Each line contains the pixel values in the four spectral bands (converted to ASCII) of each of the 9 pixels in the 3x3 neighbourhood and a number indicating the classification label of the central pixel.
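A minimal parsing sketch (assuming whitespace-separated ASCII values and pixel-major ordering of the 36 attributes, i.e., four band values per pixel; both are assumptions about the exact layout):
```
import numpy as np

def parse_satimage_line(line):
    # Parse one record: 9 pixels x 4 spectral bands, then the class label
    # of the central pixel. Assumes whitespace-separated, pixel-major values.
    values = line.split()
    bands = np.array(values[:36], dtype=np.int16).reshape(3, 3, 4)  # row, col, band
    label = int(values[36])
    return bands, label

# The label refers to the central pixel of the 3x3 neighbourhood: bands[1, 1].
```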
Each pixel is categorized as one of the following classes:
1. red soil
2. cotton crop
3. grey soil
4. damp grey soil
5. soil with vegetation stubble
6. mixture class (all types present)
7. very damp grey soil
NB. There are no examples with class 6 in this dataset.
The data is given in random order and certain lines of data have been removed so you cannot reconstruct the original image from this dataset. | Provide a detailed description of the following dataset: Satimage |
ImDrug | **ImDrug** is a comprehensive benchmark with an open-source Python library which consists of 4 imbalance settings, 11 AI-ready datasets, 54 learning tasks and 16 baseline algorithms tailored for imbalanced learning. It features modularized components including formulation of learning setting and tasks, dataset curation, standardized evaluation, and baseline algorithms. It also provides an accessible and customizable testbed for problems and solutions spanning a broad spectrum of the drug discovery pipeline such as molecular modeling, drug-target interaction and retrosynthesis. | Provide a detailed description of the following dataset: ImDrug |
VILT | **VILT** is a new benchmark collection of tasks and multimodal video content. The video linking collection includes annotations from 10 (recipe) tasks, which the annotators chose from a random subset of the collection of 2,275 high-quality 'Wholefoods' recipes. There are linking annotations for 61 query steps across these tasks which contain cooking techniques, chosen from the 189 total recipe steps. As each method results in approximately 10 videos to annotate, the collection consists of 831 linking judgments. | Provide a detailed description of the following dataset: VILT |
CommitBART | **CommitBART** is a benchmark for researching commit-related tasks such as denoising, cross-modal generation, and contrastive learning. The dataset contains over 7 million commits across 7 programming languages. | Provide a detailed description of the following dataset: CommitBART |
CLEVR-Math | **CLEVR-Math** is a multi-modal dataset consisting of simple math word problems involving addition/subtraction, represented partly by a textual description and partly by an image illustrating the scenario. These word problems require a combination of language, visual, and mathematical reasoning. | Provide a detailed description of the following dataset: CLEVR-Math |
MFRC | **Moral Foundations Reddit Corpus (MFRC)** is a collection of 16,123 Reddit comments that have been curated from 12 distinct subreddits, hand-annotated by at least three trained annotators for 8 categories of moral sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty, Thin Morality, Implicit/Explicit Morality) based on the updated Moral Foundations Theory (MFT) framework. | Provide a detailed description of the following dataset: MFRC |
Ski-Pose PTZ-Camera | This multi-view pan-tilt-zoom (PTZ) camera dataset features competitive alpine skiers performing giant slalom runs. It provides labels for the skiers’ 3D poses in each frame, their projected 2D poses in all 20k images, and accurate per-frame calibration of the PTZ cameras. The dataset was collected by Spörri and colleagues during his Habilitation at the Department of Sport Science and Kinesiology of the University of Salzburg [Spörri16], and was previously used as a reference in different methodological studies [Gilgien13, Gilgien14, Gilgien15, Fasel16, Fasel18, Rhodin18]. Moreover, the dataset is available upon request to interested researchers for further methodologically-oriented research purposes. | Provide a detailed description of the following dataset: Ski-Pose PTZ-Camera |
DrugComb | **DrugComb** is an open-access, community-driven data portal where the results of drug combination screening studies for a large variety of cancer cell lines are accumulated, standardized and harmonized. An actively expanding array of data visualization and computational tools is provided for the analysis of drug combination data. All the data and informatics tools are made freely available to a wider community of cancer researchers. | Provide a detailed description of the following dataset: DrugComb |
MBW - Zoo Dataset | Dataset page: https://github.com/mosamdabhi/MBW-Data
MBW - Zoo is a challenging dataset consisting of image frames of tail-end distribution categories (such as fish, colobus monkeys, chimpanzees, etc.) with their corresponding 2D, 3D, and bounding-box labels generated with minimal human intervention. Prominent use cases of this dataset include sparse 2D and 3D landmark prediction.
The data was collected by two smartphone cameras without any constraints, meaning no guidance or instructions were given as to how the data should be collected. The intention was to mimic data captured casually by anyone holding a smartphone-grade camera. For this reason, the cameras were continuously moving in space, changing their extrinsics with respect to each other while capturing an in-the-wild dynamic scene. This dataset can be used to benchmark robust algorithms in various computer vision tasks. | Provide a detailed description of the following dataset: MBW - Zoo Dataset |
UBFC-rPPG | We introduce here a new database called UBFC-rPPG (stands for Univ. Bourgogne Franche-Comté Remote PhotoPlethysmoGraphy) comprising two datasets that are focused specifically on rPPG analysis. The UBFC-RPPG database was created using a custom C++ application for video acquisition with a simple low cost webcam (Logitech C920 HD Pro) at 30fps with a resolution of 640x480 in uncompressed 8-bit RGB format. A CMS50E transmissive pulse oximeter was used to obtain the ground truth PPG data comprising the PPG waveform as well as the PPG heart rates. During the recording, the subject sits in front of the camera (about 1m away from the camera) with his/her face visible. All experiments are conducted indoors with a varying amount of sunlight and indoor illumination. The link to download the complete video dataset is available on request. A basic Matlab implementation can also be provided to read ground truth data acquired with a pulse oximeter. | Provide a detailed description of the following dataset: UBFC-rPPG |
MMSE-HR | The MMSE-HR benchmark consists of a dataset of 102 videos from 40 subjects recorded at 1040x1392 raw resolution at 25fps. During the recordings, various stimuli such as videos, sounds, and smells are introduced to induce different emotional states in the subjects. The ground truth waveform for MMSE-HR is the blood pressure signal sampled at 1000Hz. The dataset contains a diverse distribution of skin colors in the Fitzpatrick scale (II=8, III=11, IV=17, V+VI=4). | Provide a detailed description of the following dataset: MMSE-HR |
Wisture Dataset | https://ieee-dataport.org/documents/wi-fi-signal-strength-measurements-smartphone-various-hand-gestures | Provide a detailed description of the following dataset: Wisture Dataset |
OQM9HK | This is a large-scale dataset of quantum-mechanically calculated properties (DFT level) of crystalline materials for graph representation learning that contains approximately 900k entries (OQM9HK). This dataset is constructed on the basis of [the Open Quantum Materials Database](https://oqmd.org) (OQMD) v1.5 containing more than one million entries, and is the successor to [the OQMD v1.2 dataset](https://paperswithcode.com/dataset/oqmd-v1-2) containing approximately 600k entries (OQM6HK).
* [Technical Report](https://storage.googleapis.com/rimcs_cgnn/oqm9hk_dataset_Sep_30_2022.pdf)
* [CGNN v1.1](https://github.com/Tony-Y/cgnn/tree/dev_v1.1)
 | Provide a detailed description of the following dataset: OQM9HK |
MINTAKA | **MINTAKA** is a complex, natural, and multilingual dataset designed for experimenting with end-to-end question-answering models. It is composed of 20,000 question-answer pairs collected in English, annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish for a total of 180,000 samples. Mintaka includes 8 types of complex questions, including superlative, intersection, and multi-hop questions, which were naturally elicited from crowd workers. | Provide a detailed description of the following dataset: MINTAKA |
PrOntoQA | **PrOntoQA** is a question-answering dataset which generates examples with chains-of-thought that describe the reasoning required to answer the questions correctly. The sentences in the examples are syntactically simple and amenable to semantic parsing. It can be used to formally analyze the predicted chain-of-thought from large language models such as GPT-3. | Provide a detailed description of the following dataset: PrOntoQA |
Cambridge Landmarks | Cambridge Landmarks is a large-scale outdoor visual relocalisation dataset taken around Cambridge University. It contains the original video, with extracted image frames labelled with their 6-DOF camera pose, and a visual reconstruction of the scene. If you use this data, please cite our paper: Alex Kendall, Matthew Grimes and Roberto Cipolla. "PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization." Proceedings of the International Conference on Computer Vision (ICCV), 2015.
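A minimal sketch for reading the per-frame pose labels, assuming PoseNet-style text files in which (after a short header) each line lists an image path followed by the camera position (x, y, z) and orientation quaternion (w, p, q, r); the file name and header length are assumptions:
```
# Hedged sketch: parse a pose-label file into {image_path: 7-value pose}.
def load_poses(path="dataset_train.txt"):   # file name is an assumption
    poses = {}
    with open(path) as f:
        for line in f.readlines()[3:]:      # assumed: 3 header lines
            fields = line.split()
            if len(fields) == 8:            # image + x y z + w p q r
                poses[fields[0]] = [float(v) for v in fields[1:]]
    return poses
```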
Links to individual scenes (see: https://github.com/GrumpyZhou/visloc-apr/issues/3):
Old Hospital: https://www.repository.cam.ac.uk/handle/1810/251340
Kings College: https://www.repository.cam.ac.uk/handle/1810/251342
St. Marys Church: https://www.repository.cam.ac.uk/handle/1810/251294
Great Court: https://www.repository.cam.ac.uk/handle/1810/251291
Shop Facade: https://www.repository.cam.ac.uk/handle/1810/251336
Street: https://www.repository.cam.ac.uk/handle/1810/251292 | Provide a detailed description of the following dataset: Cambridge Landmarks |
V-MIND | **V-MIND** is an enhanced version of the MIND dataset, augmented with news pictures. | Provide a detailed description of the following dataset: V-MIND |
SMOKE | The SMOKE dataset is a dataset for fog/smoke removal.
It contains 110 self-collected fog/smoke images and their corresponding clean pairs.
A further 12 pairs of fog data are provided for evaluation. | Provide a detailed description of the following dataset: SMOKE |
K-Lane | KAIST-Lane (K-Lane) is the world’s first and largest public urban road and highway lane dataset for Lidar. K-Lane has more than 15K frames and contains annotations of up to six lanes under various road and traffic conditions, e.g., occluded roads with multiple occlusion levels, roads at day and night times, merging (converging and diverging) lanes, and curved lanes. | Provide a detailed description of the following dataset: K-Lane |
K-Radar | KAIST-Radar (K-Radar) is a novel large-scale object detection dataset and benchmark that contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions, together with carefully annotated 3D bounding box labels of objects on the roads. K-Radar includes challenging driving conditions such as adverse weather (fog, rain, and snow) on various road structures (urban, suburban roads, alleyways, and highways). In addition to the 4DRT, we provide auxiliary measurements from carefully calibrated high-resolution Lidars, surround stereo cameras, and RTK-GPS. | Provide a detailed description of the following dataset: K-Radar |
RECON | https://sites.google.com/view/recon-robot/dataset | Provide a detailed description of the following dataset: RECON |
ViHSD | This dataset contains 33,400 annotated comments used for hate speech detection on social network sites.
Labels: CLEAN (non-hate), OFFENSIVE, and HATE | Provide a detailed description of the following dataset: ViHSD |
ViSpamReviews | This dataset is used for spam review detection (opinion spam reviews) on Vietnamese e-commerce websites. | Provide a detailed description of the following dataset: ViSpamReviews |
CICEROv2 | The CICEROv2 dataset can be found in the [data](https://github.com/declare-lab/CICERO/releases/download/v2.0.0/data.zip) directory. Each line of the files is a JSON object representing a single instance. The JSON objects have the following key-value pairs:
| Key | Value |
|:----------:| :-----:|
| ID | Dialogue ID with dataset indicator. |
| Dialogue | Utterances of the dialogue in a list. |
| Target | Target utterance. |
| Question | One of the five questions (inference types). |
| Choices | Five possible answer choices in a list. One of the answers is<br>human written. The other four answers are machine generated<br>and selected through the Adversarial Filtering (AF) algorithm. |
| Human Written Answer | Index of the human written answer in a<br>single element list. Index starts from 0. |
| Correct Answers | List of all correct answers indicated as plausible<br>or speculatively correct by the human annotators.<br>Includes the index of the human written answer. |
---------------------------------------------------------------------------
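Since each line is a standalone JSON object, a minimal loading sketch might look as follows (the file name is an assumption):
```
# Hedged sketch: iterate over CICEROv2 instances in a JSON-lines file.
import json

with open("train.json", encoding="utf-8") as f:  # file name is an assumption
    for line in f:
        instance = json.loads(line)
        dialogue = instance["Dialogue"]          # list of utterances
        question = instance["Question"]          # one of five inference types
        choices = instance["Choices"]            # answer choices
        correct = instance["Correct Answers"]    # indices of plausible answers
```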
An example of the data is shown below.
```
{
    "ID": "daily-dialogue-0404",
    "Dialogue": [
        "A: Dad , why are you taping the windows ?",
        "B: Honey , a typhoon is coming .",
        "A: Really ? Wow , I don't have to go to school tomorrow .",
        "B: Jenny , come and help , we need to prepare more food .",
        "A: OK . Dad ! I'm coming ."
    ],
    "Target": "Jenny , come and help , we need to prepare more food .",
    "Question": "What subsequent event happens or could happen following the target?",
    "Choices": [
        "Jenny and her father stockpile food for the coming days.",
        "The speaker and the listener go outside to purchase more food material for precaution.",
        "Jenny and her father give away all their food.",
        "Jenny and her father eat all the food in their refrigerator."
    ],
    "Correct Answers": [
        0,
        1
    ]
}
``` | Provide a detailed description of the following dataset: CICEROv2 |
ShanghaiTech Campus | The ShanghaiTech Campus dataset has 13 scenes with complex light conditions and camera angles. It contains 130 abnormal events and over 270,000 training frames. Moreover, both the frame-level and pixel-level ground truth of abnormal events are annotated in this dataset. | Provide a detailed description of the following dataset: ShanghaiTech Campus |
FCGEC | * A fine-grained corpus to detect, identify, and correct Chinese grammatical errors.
* Collected mainly from multiple-choice questions in public school Chinese examinations.
* With multiple references.
* Online Evaluation Site for test set: https://codalab.lisn.upsaclay.fr/competitions/8020 | Provide a detailed description of the following dataset: FCGEC |
DigiFace-1M | **DigiFace-1M** is a **synthetic dataset** for face recognition, obtained by rendering digital faces using a computer graphics pipeline. It contains **1.22M images** of **110K unique identities**. The dataset consists of two parts. The **first part** contains **720K images** with **10K identities**. For each identity, 4 different sets of accessories are sampled and 18 images are rendered for each set. The **second part** contains **500K images** with **100K identities**. For each identity, only one set of accessories is sampled and only 5 images are rendered. Following the format of the existing datasets, we provide the aligned crop around the face, resized into $112 \times 112$ resolution.
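The stated totals follow directly from the per-identity rendering scheme; a quick arithmetic check:
```
# Sanity check of the image counts described above (pure arithmetic).
part1 = 10_000 * 4 * 18   # 10K identities x 4 accessory sets x 18 renders
part2 = 100_000 * 5       # 100K identities x 1 accessory set x 5 renders
assert part1 == 720_000 and part2 == 500_000
print(f"{part1 + part2:,} total images")  # 1,220,000
```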
Please visit the website for more details. | Provide a detailed description of the following dataset: DigiFace-1M |
HR-ShanghaiTech | The Human-Related version of the ShanghaiTech Campus dataset was first presented by Morais et al. in the paper "Learning Regularity in Skeleton Trajectories for Anomaly Detection in Videos". | Provide a detailed description of the following dataset: HR-ShanghaiTech |