Columns: dataset_name (string, 2-128 chars), description (string, 1-9.7k chars), prompt (string, 59-185 chars)
Azure Functions Trace 2019
This is a set of files representing part of the workload of Microsoft's Azure Functions offering, collected in July 2019. This dataset is a subset of the data described and analyzed in the USENIX ATC 2020 paper 'Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider'. Functions in Azure Functions are grouped into applications. Included here is only data pertaining to a random sample of Azure Functions applications. The sampling is done per application, so that if there is data about an application in the trace, then all of its functions are included. The sampling rate is small and unspecified, but as the accompanying notebook shows, the distributions in the released trace are a good match to those in the ATC paper. In Azure Functions, applications are the unit of resource allocation. This has a few practical implications: for example, warm-up decisions are made at the application level, and memory allocation is measured per application, not per function. The 'HashOwner' field in these files is used to group applications that belong to the same subscription in Azure. It is included to indicate applications that are possibly related to each other. The dataset comprises this description, an R notebook with plots comparing the released trace with the ATC paper, and the following sets of files: function invocation counts and triggers, function execution time distributions, and application memory allocation distributions.
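As a minimal sketch of how the 'HashOwner' field can group possibly related applications, the snippet below builds an owner-to-applications index from toy rows. Only 'HashOwner' is taken from the description above; the other field names (HashApp, HashFunction, Trigger) and all values are illustrative assumptions, not the trace's documented schema.

```python
from collections import defaultdict

# Toy rows mimicking the hashed-identifier scheme described above; field
# names other than HashOwner are assumptions for illustration only.
rows = [
    {"HashOwner": "o1", "HashApp": "a1", "HashFunction": "f1", "Trigger": "http"},
    {"HashOwner": "o1", "HashApp": "a2", "HashFunction": "f2", "Trigger": "timer"},
    {"HashOwner": "o2", "HashApp": "a3", "HashFunction": "f3", "Trigger": "queue"},
]

# Applications sharing a HashOwner belong to the same Azure subscription,
# so they are possibly related to each other.
apps_per_owner = defaultdict(set)
for row in rows:
    apps_per_owner[row["HashOwner"]].add(row["HashApp"])
```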
Provide a detailed description of the following dataset: Azure Functions Trace 2019
MuCeD
MuCeD is a dataset carefully curated and validated by expert pathologists from the All India Institute of Medical Sciences (AIIMS), Delhi, India. The H&E-stained histopathology images of the human duodenum in MuCeD are captured through an Olympus BX50 microscope at 20x zoom using a DP26 camera, with each image being 1920x2148 in dimension. The dataset has 55 images with bounding boxes for 2,090 IELs and 6,518 ENs, annotated using the LabelMe software and further validated by multiple pathologists. These cells are selected from the epithelial area, a region of interest that has been explicitly segmented by experts. The epithelial area denotes the area of continuous villi and is used for cell detection, whereas the rest of the area is masked out. Further, each image is sliced into 9 sub-images, and each sub-image is re-scaled to 640x640 before being given as input to object detection models. We divide the 55 images into five folds of 11 images each and report 5-fold cross-validation numbers. Of the 44 training images in a given fold, 8 are used for validation and 36 for training. Data is annotated in YOLO format: labels are stored in .txt files in class, x, y, width, height format.
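A minimal sketch of reading one YOLO-format label line as described above (class, then centre x, y, width, height, all normalized to the image size) and converting it to pixel corner coordinates for a 640x640 sub-image. The sample line and helper name are illustrative.

```python
# Convert one YOLO label line into pixel-space corner coordinates.
def yolo_to_pixels(line: str, img_w: int = 640, img_h: int = 640):
    cls, x, y, w, h = line.split()
    x, y = float(x) * img_w, float(y) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # YOLO stores the box centre; convert to top-left / bottom-right corners.
    return int(cls), (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

cls_id, box = yolo_to_pixels("0 0.5 0.5 0.1 0.2")
```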
Provide a detailed description of the following dataset: MuCeD
Deep PCB
### DeepPCB

* [Dataset Link](https://github.com/tangsanli5201/DeepPCB)
* The dataset contains 1,500 image pairs, each of which consists of a defect-free template image and an aligned tested image with annotations, including the positions of the six most common types of PCB defects: open, short, mousebite, spur, pin hole, and spurious copper.

### Dataset Description

#### Image Collection

* All the images in this dataset are obtained from a linear scan CCD at a resolution of around 48 pixels per millimeter.
* The defect-free template images are manually checked and cleaned from images sampled in this manner.
* The original size of the template and tested images is around 16k x 16k pixels.
* They are then cropped into many 640 x 640 sub-images and aligned through template matching techniques.
* Next, a threshold is carefully selected for binarization to avoid illumination disturbance.
* Note that pre-processing algorithms can vary with the specific PCB defect detection algorithm; however, image registration and thresholding are common steps for high-accuracy PCB defect localization and classification.
* An example pair from the DeepPCB dataset is illustrated in the following figure, where the right one is the defect-free template image and the left one is the defective tested image with ground truth annotations.

<!--| An example of the tested image | The corresponding template image | |---|---| | ![tested image](https://github.com/tangsanli5201/DeepPCB/blob/08e98c4db5922613fb97176eb3d6497d48260cb1/fig/test.jpg) | ![template image](https://github.com/tangsanli5201/DeepPCB/blob/08e98c4db5922613fb97176eb3d6497d48260cb1/fig/template.jpg) | -->

#### Image Annotation

We use an axis-aligned bounding box with a class ID for each defect in the tested images. As illustrated above, we annotate six common types of PCB defects: open, short, mousebite, spur, pin hole, and spurious copper.
Since there are only a few defects in each real tested image, we manually augment each tested image with artificial defects according to the PCB defect patterns, which leads to around 3 to 12 defects in each 640 x 640 image. The number of PCB defects is shown in the following figure. We separate 1,000 images as a training set and the remainder as a test set. Each annotated image has an annotation file with the same filename; e.g. **_00041000_test.jpg_**, **_00041000_temp.jpg_**, and **_00041000.txt_** are the tested image, template image, and the corresponding annotation file. Each defect on the tested image is annotated in the format **_x1,y1,x2,y2,type_**, where **_(x1,y1)_** and **_(x2,y2)_** are the top-left and bottom-right corners of the defect's bounding box, and **_type_** is an integer ID that follows the mapping: **_0-background (not used), 1-open, 2-short, 3-mousebite, 4-spur, 5-copper, 6-pin-hole_**.

<!-- <div align=center> <img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/CountPCB.png" width="560"> </div> -->

The annotation tool is now available with the source code in the **_./tools_** directory.

#### Benchmarks

The average precision rate and F-score are used for evaluation. A detection is correct only if the intersection over union (IoU) between the detected bounding box and a ground truth box of the same class is larger than 0.33. The F-score is calculated as F-score = 2PR/(P+R), where P and R are the precision and recall rates. Notice that the F-score is threshold-sensitive, which means you can adjust your score threshold to obtain a better result. Although the F-score is not as fair a criterion as mAP, it is more practical, since a threshold must always be chosen when deploying a model and not all algorithms produce a confidence score for each target. Thus, both the F-score and mAP are considered in the benchmarks.
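The per-defect annotation format above (**_x1,y1,x2,y2,type_** with an integer type ID) can be parsed with a short sketch; the sample line and helper name are illustrative.

```python
# Integer defect IDs as listed in the annotation description above.
DEFECT_TYPES = {1: "open", 2: "short", 3: "mousebite",
                4: "spur", 5: "copper", 6: "pin-hole"}

def parse_annotation(line: str):
    # One comma-separated line: x1,y1,x2,y2,type
    x1, y1, x2, y2, defect_id = (int(v) for v in line.split(","))
    return (x1, y1), (x2, y2), DEFECT_TYPES[defect_id]

top_left, bottom_right, defect = parse_annotation("10,20,60,80,3")
```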
The evaluation scripts for mAP and F-score are borrowed from the [Icdar2015 evaluation scripts](http://rrc.cvc.uab.es/?ch=4&com=mymethods&task=1) with a small modification (you may first need to register an account). Here, we provide the modified evaluation scripts and the ground truth _gt.zip_ file of the test set in the _evaluation/_ directory. You can evaluate your own method by following these instructions:

* Run your algorithm and save the detected result for each image as *image_name.txt*, where *image_name* should be the same as in *gt.zip*. Follow the format of *evaluation/gt.zip*, except that the output description of each defect from your algorithm should be **_x1,y1,x2,y2,confidence,type_**, where **_(x1,y1)_** and **_(x2,y2)_** are the top-left and bottom-right corners of the defect's bounding box, **_confidence_** is a float indicating how confident you are in the detection, and **_type_** is a string that should be one of: **_open, short, mousebite, spur, copper, pin-hole_**. **Notice: fields are separated by commas only, with no spaces.**
* Zip your **_.txt_** files into **_res.zip_** (it should not contain any sub-directories).
* Run the evaluation script: *python script.py -s=res.zip -g=gt.zip*

### Approach

This section, with the source code, will be made public after the acceptance of the paper.

#### Experiment results

Here we show some results of our model based on deep neural networks. Our model achieves **_98.6% mAP, 98.2% F-score @ 62 FPS_**. More statistical analysis will be made public after the acceptance of the paper. The green bounding box is the predicted location of the PCB defect, with the confidence on top of each.
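The matching rule (IoU larger than 0.33 with a same-class ground-truth box) and the F-score formula F = 2PR/(P+R) can be sketched as follows; this is an illustration, not the official evaluation script.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2) with (x1, y1) top-left, (x2, y2) bottom-right.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f_score(precision, recall):
    # F-score = 2PR / (P + R), as defined in the benchmark description.
    return 2 * precision * recall / (precision + recall)

# Two half-overlapping 10x10 boxes have IoU = 50/150 = 1/3 > 0.33.
matched = iou((0, 0, 10, 10), (5, 0, 15, 10)) > 0.33
```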
<!-- Result pair 1: <div align=center><img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_test1.jpg" width="375" style="margin:20"> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_temp1.jpg" width="375" style="margin:20"> </div> Result pair 2: <div align=center><img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_test2.jpg" width="375" style="margin:20"> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_temp2.jpg" width="375" style="margin:20"> </div> Result pair 3: <div align=center><img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_test3.jpg" width="375" style="margin:20"> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_temp3.jpg" width="375" style="margin:20"> </div> Result pair 4: <div align=center><img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_test4.jpg" width="375" style="margin:20"> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img src="https://github.com/tangsanli5201/DeepPCB/blob/master/fig/result/result_temp4.jpg" width="375" style="margin:20"> </div> -->

##### Notification

This work is contributed by the paper **_On-line PCB Defect Detector On A New PCB Defect Dataset_**. You may only use this dataset for research purposes.

Contributor for paper with code:

* [Allena Venkata Sai Abhishek](https://paperswithcode.com/search?q=author%3AAllena+Venkata+Sai+Abhishek)
Provide a detailed description of the following dataset: Deep PCB
Cityscapes 3D
Detecting vehicles and representing their position and orientation in the three dimensional space is a key technology for autonomous driving. Recently, methods for 3D vehicle detection solely based on monocular RGB images gained popularity. In order to facilitate this task as well as to compare and drive state-of-the-art methods, several new datasets and benchmarks have been published. Ground truth annotations of vehicles are usually obtained using lidar point clouds, which often induces errors due to imperfect calibration or synchronization between both sensors. To this end, we propose Cityscapes 3D, extending the original Cityscapes dataset with 3D bounding box annotations for all types of vehicles. In contrast to existing datasets, our 3D annotations were labeled using stereo RGB images only and capture all nine degrees of freedom. This leads to a pixel-accurate reprojection in the RGB image and a higher range of annotations compared to lidar-based approaches. In order to ease multitask learning, we provide a pairing of 2D instance segments with 3D bounding boxes. In addition, we complement the Cityscapes benchmark suite with 3D vehicle detection based on the new annotations as well as metrics presented in this work. Dataset and benchmark are available online.
Provide a detailed description of the following dataset: Cityscapes 3D
Classical conditioning
The paper introduces three benchmarking tasks inspired by animal learning.
Provide a detailed description of the following dataset: Classical conditioning
Customer Support on Twitter
The Customer Support on Twitter dataset is a large, modern corpus of tweets and replies to aid innovation in natural language understanding and conversational models, and for study of modern customer support practices and impact.
Provide a detailed description of the following dataset: Customer Support on Twitter
EgoTV
The **EgoTV** dataset consists of (task description, video) pairs with positive or negative task verification labels. By combining the six sub-tasks (heat, clean, slice, cool, put, pick) with different ordering constraints, there are 82 tasks in EgoTV. Tasks are instantiated with 130 target objects (excluding visual variations in shape, texture, and color) and 24 receptacle objects, totaling 1,038 task-object combinations. These are performed in 30 different kitchen scenes.
Provide a detailed description of the following dataset: EgoTV
Statcan Dialogue Dataset
> **[The StatCan Dialogue Dataset: Retrieving Data Tables through Conversations with Genuine Intents](https://arxiv.org/abs/2304.01412)**
>
> *[Xing Han Lu](https://xinghanlu.com), [Siva Reddy](https://sivareddy.in), [Harm de Vries](https://www.harmdevries.com/)*
>
> EACL 2023

| | | | | |
| :--: | :--: | :--: | :--: | :--: |
| [Code](https://github.com/mcGill-NLP/statcan-dialogue-dataset) | [Huggingface](https://huggingface.co/datasets/McGill-NLP/statcan-dialogue-dataset/) | [Request on Dataverse](https://borealisdata.ca/dataset.xhtml?persistentId=doi:10.5683/SP3/NR0BMY) | [Paper](https://arxiv.org/abs/2304.01412) | [Website](https://mcgill-nlp.github.io/statcan-dialogue-dataset) |

![Banner](https://github.com/McGill-NLP/statcan-dialogue-dataset/raw/main/images/banner.svg)
Provide a detailed description of the following dataset: Statcan Dialogue Dataset
SemanticSTF
**SemanticSTF** is an adverse-weather point cloud dataset that provides dense point-level annotations and enables the study of 3D semantic segmentation (3DSS) under various adverse weather conditions. It contains 2,076 scans captured by a Velodyne HDL64 S3D LiDAR sensor from STF, covering various adverse weather conditions: 694 snowy, 637 dense-foggy, 631 light-foggy, and 114 rainy scans (all rainy LiDAR scans in STF).
Provide a detailed description of the following dataset: SemanticSTF
AQL-22
The Archive Query Log (AQL) is a previously unused, comprehensive query log collected at the Internet Archive over the last 25 years. Its first version includes 356 million queries, 166 million search result pages, and 1.7 billion search results across 550 search providers. Although many query logs have been studied in the literature, the search providers that own them generally do not publish their logs to protect user privacy and vital business data. The AQL is the first publicly available query log that combines size, scope, and diversity, enabling research on new retrieval models and search engine analyses. Provided in a privacy-preserving manner, it promotes open research as well as more transparency and accountability in the search industry.
Provide a detailed description of the following dataset: AQL-22
SportsPose
Accurate 3D human pose estimation is essential for sports analytics, coaching, and injury prevention. However, existing datasets for monocular pose estimation do not adequately capture the challenging and dynamic nature of sports movements. In response, we introduce SportsPose, a large-scale 3D human pose dataset consisting of highly dynamic sports movements. With more than 176,000 3D poses from 24 different subjects performing 5 different sports activities, SportsPose provides a diverse and comprehensive set of 3D poses that reflect the complex and dynamic nature of sports movements. Contrary to other markerless datasets, we have quantitatively evaluated the precision of SportsPose by comparing our poses with a commercial marker-based system, achieving a mean error of 34.5 mm across all evaluation sequences. This is comparable to the error reported on the commonly used 3DPW dataset. We further introduce a new metric, local movement, which describes the movement of the wrist and ankle joints in relation to the body. With this, we show that SportsPose contains more movement than the Human3.6M and 3DPW datasets in these extremum joints, indicating that our movements are more dynamic. The dataset with accompanying code can be downloaded from our website. We hope that SportsPose will allow researchers and practitioners to develop and evaluate more effective models for the analysis of sports performance and injury prevention. With its realistic and diverse dataset, SportsPose provides a valuable resource for advancing the state-of-the-art in pose estimation in sports.
Provide a detailed description of the following dataset: SportsPose
Doctor-patient questions (French)
These are the test and training data used for the experiments presented at BioNLP 2017.

## Licence

The data are intended only for research, educational, and non-commercial purposes.

## How to cite

If you use these data, please cite our contribution to BioNLP 2017 as follows:

[Automatic classification of doctor-patient questions for a virtual patient record query task](http://www.aclweb.org/anthology/W17-2343) Leonardo Campillos-Llanos, Sophie Rosset, Pierre Zweigenbaum. *Proc. of BioNLP 2017*, August 4 2017, Vancouver, Canada, pp. 333-341.

Note that these data were manually collected from books aimed at medical consultation and clinical examination, as well as resources for medical translation. These sources also need to be referenced, as follows:

* Barbara Bates and Lynn S Bickley. 2014. *Guide de l’examen clinique - Nouvelle édition 2014.* Arnette-John Libbey Eurotext.
* Claire Coudé, François-Xavier Coudé, and Kai Kassmann. 2011. *Guide de conversation médicale - français-anglais-allemand.* Lavoisier, Médecine Sciences Publications.
* Owen Epstein, David Perkin, John Cookson, and David P. de Bono. 2015. *Guide pratique de l’examen clinique.* Elsevier Masson, Paris.
* Félicie Pastore. 2015. *How can I help you today? Guide de la consultation médicale et paramédicale en anglais.* Ellipses, Paris.
* [UMVF/Medical English Portal](http://anglaismedical.u-bourgogne.fr/), UFR Médecine de Dijon (Last access: May 2017)
Provide a detailed description of the following dataset: Doctor-patient questions (French)
Extended Agriculture-Vision
The Extended Agriculture-Vision dataset comprises two parts:

1. An improved version of the Agriculture-Vision dataset, including full-field farmland imagery, which encourages the exploration of geo-information on a larger scale.
2. Over three terabytes of high-resolution raw images across the US, aiming to inspire research in self-supervised learning in remote sensing and agriculture.

Extended Agriculture-Vision identified 1,200 fields from the 2019-2020 growing seasons. Each image consists of RGB and near-infrared (NIR) channels with resolutions as high as 10 cm per pixel.
Provide a detailed description of the following dataset: Extended Agriculture-Vision
AeBAD
Unlike previous datasets that focus on detecting the diversity of defect categories (like MVTec AD and VisA), AeBAD is centered on the diversity of domains within the same data category. The aim of AeBAD is to automatically detect abnormalities in the blades of aero-engines, ensuring their stable operation. AeBAD consists of two sub-datasets: the single-blade dataset (AeBAD-S) and the video anomaly detection of blades (AeBAD-V). AeBAD-S comprises images of single blades of different scales, with a primary feature being that the samples are not aligned. Furthermore, there is a domain shift between the distribution of normal samples in the test set and the training set, where the domain shifts are mainly caused by the changes in illumination and view. AeBAD-V, on the other hand, includes videos of blades assembled on the blisks of aero-engines, with the aim of detecting blade anomalies during blisk rotation. A distinctive feature of AeBAD-V is that the shooting view in the test set differs from that in the training set.
Provide a detailed description of the following dataset: AeBAD
Snow100K
The **Snow100K** dataset consists of:

1. 100k synthesized snowy images
2. corresponding snow-free ground truth images
3. snow masks
4. 1,329 realistic snowy images

The images of 2) and 4) were downloaded via the Flickr API and manually divided into snow-free and snowy categories, respectively. In addition, each image is normalized to a size of 640 pixels while retaining its original aspect ratio.
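A sketch of the normalization step mentioned above, assuming "normalized to the size of 640 pixels" means the longer side is scaled to 640 while the aspect ratio is preserved; the helper function is illustrative, not the dataset's actual preprocessing code.

```python
# Scale an image's dimensions so the longer side becomes `target` pixels,
# keeping the original aspect ratio (assumed interpretation).
def normalized_size(width: int, height: int, target: int = 640):
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

new_size = normalized_size(1920, 1080)
```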
Provide a detailed description of the following dataset: Snow100K
SRRS
**SRRS (Snow Removal in Realistic Scenario)** contains 15000 synthesized snow images and 1000 snow images in real scenarios downloaded from the Internet.
Provide a detailed description of the following dataset: SRRS
PINet
We propose a new light field image database called “PINet”, inheriting its hierarchical structure from WordNet. It consists of 7,549 light field images (LIs) captured by a Lytro Illum camera, which is much larger than the existing databases. The images are manually annotated into 178 categories according to WordNet, such as cat, camel, bottle, and fan. The registered depth maps are also provided. Each image is generated by processing the raw LI from the camera with Light Field Toolbox v0.4 for demosaicing and devignetting. PINet is the largest light field image dataset. More details can be found at https://github.com/VincentChandelier/SADN
Provide a detailed description of the following dataset: PINet
Human-Art
Human-Art is a versatile human-centric dataset built to bridge the gap between natural and artificial scenes. It covers 20 high-quality scenarios, including natural and artificial humans in both 2D and 3D representations. It includes 50,000 images containing more than 123,000 human figures across these 20 scenarios, with annotations of human bounding boxes, 21 2D human keypoints, human self-contact keypoints, and description text.
Provide a detailed description of the following dataset: Human-Art
SCB-dataset
**Student Classroom Behavior dataset (SCB-dataset)** reflects real-life scenarios. The dataset includes 11,248 labels and 4,003 images, with a focus on handraising behavior.
Provide a detailed description of the following dataset: SCB-dataset
MoocRadar
**MoocRadar** is a fine-grained, multi-aspect knowledge repository that consists of 2,513 exercises, 5,600 concepts, and 12,715,126 behavioral records from 14,224 students, for improving cognitive student modeling in MOOCs.
Provide a detailed description of the following dataset: MoocRadar
SA-1B
**SA-1B** consists of 11M diverse, high-resolution, licensed, and privacy-protecting images and 1.1B high-quality segmentation masks.
Provide a detailed description of the following dataset: SA-1B
A Large Scale Fish Dataset
This dataset contains 9 different seafood types collected from a supermarket in Izmir, Turkey, for a university-industry collaboration project at Izmir University of Economics; this work was published in ASYU 2020. The dataset includes image samples of gilt-head bream, red sea bream, sea bass, red mullet, horse mackerel, black sea sprat, striped red mullet, trout, and shrimp. If you use this dataset in your work, please consider citing:

```bibtex
@inproceedings{ulucan2020large,
  title={A Large-Scale Dataset for Fish Segmentation and Classification},
  author={Ulucan, Oguzhan and Karakaya, Diclehan and Turkan, Mehmet},
  booktitle={2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages={1--5},
  year={2020},
  organization={IEEE}
}
```

For each class, there are 1000 augmented images and their pair-wise augmented ground truths. Each class can be found in the "Fish_Dataset" file with its ground truth labels. All images for each class are ordered from "00000.png" to "01000.png". For example, if you want to access the ground truth images of the shrimp in the dataset, the path to follow is "Fish->Shrimp->Shrimp GT". This dataset was collected in order to carry out segmentation, feature extraction, and classification tasks and to compare common segmentation, feature extraction, and classification algorithms (semantic segmentation, convolutional neural networks, bag of features). All of the experiment results demonstrate the usability of our dataset for the purposes mentioned above.
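A small sketch of navigating the layout described above ("Fish -> Shrimp -> Shrimp GT" with zero-padded .png filenames). The "Fish_Dataset" root and "{class} GT" folder naming follow the description; the helper function itself is purely illustrative.

```python
import os

# Build the path to a ground-truth mask, following the described layout:
# <root>/<species>/<species> GT/<index>.png with 5-digit zero padding.
def gt_path(root: str, species: str, index: int) -> str:
    return os.path.join(root, species, f"{species} GT", f"{index:05d}.png")

path = gt_path("Fish_Dataset", "Shrimp", 42)
```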
Provide a detailed description of the following dataset: A Large Scale Fish Dataset
Large COVID-19 CT scan slice dataset
"We built a large lung CT scan dataset for COVID-19 by curating data from 7 public datasets listed in the acknowledgements. These datasets have been publicly used in COVID-19 diagnosis literature and proven their efficiency in deep learning applications. Therefore, the merged dataset is expected to improve the generalization ability of deep learning methods by learning from all these resources together. These datasets are made available in different formats. Our goal is to provide a large dataset of COVID-19, Normal, and CAP CT slices together with their corresponding metadata. Some of the datasets consist of categorized CT slices, and some include CT volumes with annotated lesion slices. Therefore, we used the slice-level annotations to extract axial slices from CT volumes. We then converted all the images to 8-bit to have a consistent depth. To ensure the dataset quality, we have removed the closed lung normal slices that do not carry information about inside lung manifestations. Additionally, we did not include images lacking clear class labels or patient information. In total, we have gathered 7,593 COVID-19 images from 466 patients, 6,893 normal images from 604 patients, and 2,618 CAP images from 60 patients. All of our CAP images are from Afshar et al. dataset, in which 25 cases are already annotated. Our radiologist has annotated the remaining 35 CT scan volumes. This is the largest COVID-19 lung CT dataset so far, to the best of our knowledge." - Source: [A Robust Ensemble-Deep Learning Model for COVID-19 Diagnosis based on an Integrated CT Scan Images Database](https://www.researchgate.net/publication/352296409_A_Robust_Ensemble-Deep_Learning_Model_for_COVID-19_Diagnosis_based_on_an_Integrated_CT_Scan_Images_Database) Acknowledgements - J. Zhao, Y. Zhang, X. He, and P. Xie, "COVID-CT-Dataset: a CT scan dataset about COVID-19," arXiv preprint arXiv:2003.13865, 2020. - P. 
Afshar et al., "COVID-CT-MD: COVID-19 Computed Tomography (CT) Scan Dataset Applicable in Machine Learning and Deep Learning," arXiv preprint arXiv:2009.14623, 2020. - J. P. Cohen, P. Morrison, L. Dao, K. Roth, T. Q. Duong, and M. Ghassemi, "Covid-19 image data collection: Prospective predictions are the future," arXiv preprint arXiv:2006.11988, 2020. - S. Morozov et al., "MosMedData: Chest CT Scans With COVID-19 Related Findings Dataset," arXiv preprint arXiv:2005.06465, 2020. - M. Rahimzadeh, A. Attar, and S. M. Sakhaei, "A Fully Automated Deep Learning-based Network For Detecting COVID-19 from a New And Large Lung CT Scan Dataset," medRxiv, 2020. - M. Jun et al., "COVID-19 CT Lung and Infection Segmentation Dataset," Zenodo, Apr, vol. 20, 2020. - "COVID-19." 2020. [Online] http://medicalsegmentation.com/covid19/ [Accessed 23 December, 2020].
Provide a detailed description of the following dataset: Large COVID-19 CT scan slice dataset
Pathfinder-X2
Pathfinder and Pathfinder-X have proven instrumental in training and testing large language models with long-range dependencies. Recently, Meta's Moving Average Equipped Gated Attention model scored 97% on the Pathfinder-X dataset, indicating a need for a larger, more challenging dataset. Whereas Pathfinder-X only went up to 256 x 256 pixel images (a sequence length of 65,536 tokens), Pathfinder-X2 introduces images of 512 x 512 pixels, or 262,144 tokens. Each image is meant to be read as a sequence of pixels. An LLM's task is to segment out the one snake in each image with a circle at its tip. The dataset includes 200,000 images and 200,000 segmentation masks, one for each image.
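The sequence-length arithmetic above can be checked with a short sketch: flattening a 512 x 512 pixel grid row by row yields the quoted 262,144-token sequence. The placeholder pixel values are illustrative.

```python
# Flatten a 512 x 512 placeholder pixel grid into the token sequence a
# model would consume: one token per pixel, read row by row.
width = height = 512
image = [[0] * width for _ in range(height)]  # placeholder pixel grid
sequence = [pixel for row in image for pixel in row]
```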
Provide a detailed description of the following dataset: Pathfinder-X2
PKU (License Plate Detection)
The PKU dataset has almost 4,000 images categorized into five groups (G1-G5) that show different situations. For example, G1 has images of highways during the day with only one car in them. On the other hand, G5 has images of crosswalks during the day or at night with multiple cars and license plates (LPs). It can be used to train and test LP detectors, as the authors labeled the position of each visible LP on each image.
Provide a detailed description of the following dataset: PKU (License Plate Detection)
Word Analogy Bangla
We provide a Mikolov-style word-analogy evaluation set specifically for Bangla, with a sample size of 16678, as well as a translated and curated version of the Mikolov dataset, which contains 10594 samples for cross-lingual research.
Provide a detailed description of the following dataset: Word Analogy Bangla
Bangla Word Analogy
We provide a Mikolov-style word-analogy evaluation set specifically for Bangla, with a sample size of 16678, as well as a translated and curated version of the Mikolov dataset, which contains 10594 samples for cross-lingual research.
Provide a detailed description of the following dataset: Bangla Word Analogy
RoboPianist
**RoboPianist** is a benchmarking suite for high-dimensional control, targeted at testing high spatial and temporal precision, coordination, and planning, all with an underactuated system frequently making-and-breaking contacts. The proposed challenge is mastering the piano through bi-manual dexterity, using a pair of simulated anthropomorphic robot hands. The initial version covers a broad set of 150 variable-difficulty songs.
Provide a detailed description of the following dataset: RoboPianist
WebBrain-Raw
**WebBrain-Raw** is a large-scale dataset built from English Wikipedia articles and their crawlable Wikipedia references. It comprises 153 zipped data chunks in which each line is a Wikipedia page with its reference articles.
Provide a detailed description of the following dataset: WebBrain-Raw
Dress Code
Dress Code is a new dataset for image-based virtual try-on composed of image pairs coming from different catalogs of YOOX NET-A-PORTER. The dataset contains more than 50k high-resolution model-clothing image pairs divided into three different categories (i.e. dresses, upper-body clothes, and lower-body clothes).
Provide a detailed description of the following dataset: Dress Code
IAW Dataset
The IAW dataset contains 420 IKEA furniture pieces from 14 common categories, e.g. sofa, bed, wardrobe, table, etc. Each piece of furniture comes with one or more user instruction manuals, which are first divided into pages and then further divided into independent steps cropped from each page (some pages contain more than one step, and some pages do not contain instructions). There are 8,568 pages and 8,263 steps overall, on average 20.4 pages and 19.7 steps for each piece of furniture. We crawled YouTube to find videos corresponding to these instruction manuals, and as such the conditions in the videos are diverse in many aspects, e.g. duration, resolution, first- or third-person view, camera pose, background environment, number of assemblers, etc. The IAW dataset contains 1,005 raw videos with a total length of around 183 hours. Among them, approximately 114 hours of content are labeled as 15,649 actions matching the corresponding steps in the corresponding manuals.
Provide a detailed description of the following dataset: IAW Dataset
WEAR
WEAR is an outdoor sports dataset for both vision- and inertial-based human activity recognition (HAR). The dataset comprises data from 18 participants performing a total of 18 different workout activities with untrimmed inertial (acceleration) and camera (egocentric video) data recorded at 10 different outside locations. Unlike previous egocentric datasets, WEAR provides a challenging prediction scenario marked by purposely introduced activity variations as well as an overall small information overlap across modalities.
Provide a detailed description of the following dataset: WEAR
HRS-Bench
**HRS-Bench** is a concrete evaluation benchmark for T2I models that is Holistic, Reliable, and Scalable. It measures 13 skills that can be categorized into five major categories: accuracy, robustness, generalization, fairness, and bias. In addition, HRS-Bench covers 50 scenarios, including fashion, animals, transportation, food, and clothes.
Provide a detailed description of the following dataset: HRS-Bench
1QIsaa data collection
This data set was collected for the ERC project "The Hands that Wrote the Bible: Digital Palaeography and Scribal Culture of the Dead Sea Scrolls". PI: Mladen Popović. Grant agreement ID: 640497. Project website: https://cordis.europa.eu/project/id/640497
Provide a detailed description of the following dataset: 1QIsaa data collection
BalitaNLP
A Filipino multi-modal language dataset for image-conditional language generation and text-conditional image generation. Consists of 351,755 Filipino news articles gathered from Filipino news outlets. Each entry contains: * body - Article text * title - Article title * website - Name of the news outlet * category - News category given by the news outlet * date - Date published * author - Article author * url - URL of the article * img_url - URL of the article image * img_path - Filename of the image in the dataset
Provide a detailed description of the following dataset: BalitaNLP
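The per-entry fields listed for BalitaNLP map naturally onto a small parser. A minimal sketch, assuming each article is stored as one JSON object per line; the storage format and the helper name `parse_entry` are assumptions, not part of the official release:

```python
import json

# Fields documented in the BalitaNLP description above.
FIELDS = ["body", "title", "website", "category", "date",
          "author", "url", "img_url", "img_path"]

def parse_entry(line):
    """Parse one article record, keeping only the documented fields."""
    entry = json.loads(line)
    return {k: entry.get(k) for k in FIELDS}
```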
ImageNet-Hard
ImageNet-Hard is a new benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as CLIP-ViT-L/14@336px, struggle to perform well on this dataset, achieving a mere 2.02% accuracy.
Provide a detailed description of the following dataset: ImageNet-Hard
LSSED
LSSED, a challenging large-scale English dataset for speech emotion recognition. It contains 147,025 sentences (206 hours and 25 minutes in total) spoken by 820 people. Each segment is annotated for the presence of 11 emotions (angry, neutral, fear, happy, sad, disappointed, bored, disgusted, excited, surprised, and other).
Provide a detailed description of the following dataset: LSSED
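The LSSED totals imply an average utterance length of roughly five seconds; a quick back-of-the-envelope check:

```python
# 206 hours 25 minutes of speech over 147,025 sentences (figures above).
total_seconds = 206 * 3600 + 25 * 60
avg_seconds = total_seconds / 147_025
print(round(avg_seconds, 2))  # ~5.05 seconds per sentence
```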
PIQ23
Year after year, the demand for ever-better smartphone photos continues to grow, in particular in the domain of portrait photography. Manufacturers thus use perceptual quality criteria throughout the development of smartphone cameras. This costly procedure can be partially replaced by automated learning-based methods for image quality assessment (IQA). Due to its subjective nature, it is necessary to estimate and guarantee the consistency of the IQA process, a characteristic lacking in the mean opinion scores (MOS) widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA) datasets pay little attention to the difficulty of cross-content assessment, which may degrade the quality of annotations. This paper introduces PIQ23, a portrait-specific IQA dataset of 5116 images of 50 predefined scenarios acquired by 100 smartphones, covering a high variety of brands, models, and use cases. The dataset includes individuals of various genders and ethnicities who have given explicit and informed consent for their photographs to be used in public research. It is annotated by pairwise comparisons (PWC) collected from over 30 image quality experts for three image attributes: face detail preservation, face target exposure, and overall image quality. An in-depth statistical analysis of these annotations allows us to evaluate their consistency over PIQ23. Finally, we show through an extensive comparison with existing baselines that semantic information (image context) can be used to improve IQA predictions. The dataset along with the proposed statistical analysis and BIQA algorithms are available: https://github.com/DXOMARKResearch/PIQ2023
Provide a detailed description of the following dataset: PIQ23
FewDR
**FewDR** is a dataset for Few-shot dense retrieval (DR). FewDR aims to effectively generalize to novel search scenarios by learning a few samples. Specifically, FewDR employs class-wise sampling to establish a standardized "few-shot" setting with finely-defined classes, reducing variability in multiple sampling rounds.
Provide a detailed description of the following dataset: FewDR
InterHuman
**InterHuman** is a multimodal dataset of diverse two-person interactions. It consists of about 107M frames, with accurate skeletal motions and 16,756 natural language descriptions.
Provide a detailed description of the following dataset: InterHuman
DALES
We present the Dayton Annotated LiDAR Earth Scan (DALES) data set, a new large-scale aerial LiDAR data set with over a half-billion hand-labeled points spanning 10 square kilometers of area and eight object categories. Large annotated point cloud data sets have become the standard for evaluating deep learning methods. However, most of the existing data sets focus on data collected from a mobile or terrestrial scanner with few focusing on aerial data. Point cloud data collected from an Aerial Laser Scanner (ALS) presents a new set of challenges and applications in areas such as 3D urban modeling and large-scale surveillance. DALES is the most extensive publicly available ALS data set with over 400 times the number of points and six times the resolution of other currently available annotated aerial point cloud data sets. This data set gives a critical number of expert verified hand-labeled points for the evaluation of new 3D deep learning algorithms, helping to expand the focus of current algorithms to aerial data. We describe the nature of our data, annotation workflow, and provide a benchmark of current state-of-the-art algorithm performance on the DALES data set.
Provide a detailed description of the following dataset: DALES
RoboBEV
RoboBEV is a robustness evaluation benchmark tailored for camera-based bird's eye view (BEV) perception under natural data corruptions and domain shift. It includes eight distinct corruption types: Bright, Dark, Fog, Snow, Motion Blur, Color Quant, Camera Crash, and Frame Lost.
Provide a detailed description of the following dataset: RoboBEV
Databricks Dolly 15k
**Databricks Dolly 15k** is a dataset containing 15,000 high-quality human-generated prompt / response pairs specifically designed for instruction tuning large language models. It is authored by more than 5,000 Databricks employees during March and April of 2023. The training records are natural, expressive and designed to represent a wide range of the behaviors, from brainstorming and content generation to information extraction and summarization.
Provide a detailed description of the following dataset: Databricks Dolly 15k
Amateur Drawings
**Amateur Drawings** is a dataset collected via the public demo of Animated Drawings, containing over 178,000 amateur drawings and corresponding user-accepted character bounding boxes, segmentation masks, and joint location annotations.
Provide a detailed description of the following dataset: Amateur Drawings
Youtube INRIA Instructional
We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after the other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks (how to: change a car tire, perform CardioPulmonary Resuscitation (CPR), jump cars, repot a plant, and make coffee) that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos. This video presents our results of automatically discovering the scenario for the two following tasks: changing a tire and performing CardioPulmonary Resuscitation (CPR). At the bottom of the videos, there are three bars. The first one corresponds to our ground truth annotation. The second one corresponds to our time interval prediction in video. Finally, the third one corresponds to the constraints that we obtain from the text domain. On the right, there is a list of labels. They correspond to the labels recovered by our NLP method in an unsupervised manner.
Provide a detailed description of the following dataset: Youtube INRIA Instructional
Five-Billion-Pixels
The Five-Billion-Pixels dataset contains more than 5 billion labeled pixels from 150 high-resolution Gaofen-2 (4 m) satellite images, annotated in a 24-category system covering artificial-constructed, agricultural, and natural classes. It possesses the advantages of rich categories, large coverage, wide distribution, and high spatial resolution, which reflect the distributions of real-world ground objects well and can benefit different land-cover-related studies.
Provide a detailed description of the following dataset: Five-Billion-Pixels
FollowMe Vehicle Behaviour Prediction Dataset
This dataset is the result of a study created to assess drivers' behavior when following a lead vehicle. The driving simulator study used a simulated suburban environment for collecting driver behavior data while following a lead vehicle driving through various unsignalized intersections. The driving environment had two lanes in each direction and a dedicated left-turn lane for the intersection. The experiment was deployed on a miniSim Driving Simulator. We programmed the lead vehicle to randomly turn left, turn right, or go straight through the intersections. In total we had 2 (traffic density) × 2 (speed level) × 3 (maneuver) = 12 scenarios for each participant to be tested on. We split the data into train, validation, and test sets. The setup for the task is to observe 1 second of trajectories and predict the next 3, 5, and 8 seconds.
Provide a detailed description of the following dataset: FollowMe Vehicle Behaviour Prediction Dataset
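The 2 × 2 × 3 = 12 scenario grid described for FollowMe can be enumerated directly; the factor-level names below are illustrative, since the text only specifies the factor counts:

```python
from itertools import product

densities = ["low_density", "high_density"]   # 2 traffic densities
speeds = ["low_speed", "high_speed"]          # 2 speed levels
maneuvers = ["left", "right", "straight"]     # 3 lead-vehicle maneuvers

scenarios = list(product(densities, speeds, maneuvers))
print(len(scenarios))  # 12
```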
BURST
BURST is a benchmark suite built upon TAO that requires tracking and segmenting multiple objects from camera video. The benchmark contains 6 different sub-tasks divided into 2 groups that all share the same data for training/validation/testing. ##### Class-guided 1. **Common:** Track and segment all objects belonging to a set of 78 common classes (based on the COCO class set) 2. **Long-tail**: Track and segment all objects belonging to an extended set of 482 object classes (based on the LVIS class set) 3. **Open-world**: Methods are only allowed to use the annotations of the 78 common classes during training, but during inference they are expected to track and segment all 482 object classes (class label predictions are not required) ##### Exemplar-guided 4. **Mask**: Track and segment all objects in the video for which the first-frame object masks are given. This task is identical to Video Object Segmentation (VOS). 5. **Box**: Track and segment all objects in the video for which the first-frame object bounding-boxes are given. 6. **Point**: Track and segment all objects in the video for which we are only given the (x,y) point coordinates of the mask centroid in the first-frame in which the objects appear. An illustration of the task hierarchy is given [here](https://github.com/Ali2500/BURST-benchmark/blob/main/.images/task_taxonomy.PNG) and a detailed explanation is given in Sec. 5 of the dataset paper
Provide a detailed description of the following dataset: BURST
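In BURST's exemplar-guided Point task, the prompt is the (x, y) centroid of the object's mask in its first frame. A minimal pure-Python sketch of that computation; the nested-list mask representation is an assumption:

```python
def mask_centroid(mask):
    """Centroid (x, y) of a binary mask given as rows of 0/1 values."""
    xs = ys = n = 0
    for y, row in enumerate(mask):
        for x, val in enumerate(row):
            if val:
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n)

print(mask_centroid([[0, 1], [0, 1]]))  # (1.0, 0.5)
```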
LaSCo
Large Scale Composed Image Retrieval (LaSCo) is a new dataset for Composed Image Retrieval (CoIR), 10 times larger than current ones.
Provide a detailed description of the following dataset: LaSCo
3D Ken Burns
This dataset accompanies our paper on synthesizing the 3D Ken Burns effect from a single image. It consists of 134,041 captures from 32 virtual environments, where each capture consists of 4 views. Each view contains color, depth, and normal maps at a resolution of 512x512 pixels.
Provide a detailed description of the following dataset: 3D Ken Burns
LLCM
LLCM (Low-Light Cross-Modality) dataset is constructed to facilitate the study of low-light cross-modality person Re-ID task. It contains 46,767 person images of 1,064 identities, and each identity is captured by at least one RGB camera and one IR camera. The LLCM dataset is divided into a training set and a testing set at a ratio about 2:1. The training set contains 30,921 bounding boxes of 713 identities (16,946 bounding boxes are from the VIS modality and 13,975 bounding boxes are from the IR modality), and the testing set contains 13,909 bounding boxes of 351 identities (8,680 bounding boxes are from the VIS modality and 7,166 bounding boxes are from the IR modality).
Provide a detailed description of the following dataset: LLCM
L1BSR
The Sentinel-2 satellite carries 12 CMOS detectors for the VNIR bands, with adjacent detectors having overlapping fields of view that result in overlapping regions in Level-1B (L1B) images. This dataset includes 3740 pairs of overlapping image crops extracted from two L1B products. Each crop has a height of around 400 pixels and a variable width that depends on the overlap width between detectors for RGBN bands, typically around 120-200 pixels. In addition to detector parallax, there is also cross-band parallax for each detector, resulting in shifts between bands. Pre-registration is performed for both cross-band and cross-detector parallax, with a precision of up to a few pixels (typically less than 10 pixels).
Provide a detailed description of the following dataset: L1BSR
PGDataset
PGDataset (Profile Generation Dataset) is a dataset created for the PGTask (Profile Generation Task), where the goal is to extract/generate a profile sentence given a dialogue utterance.
Provide a detailed description of the following dataset: PGDataset
RADIOML 2018.01A
**RADIOML 2018.01A** is a dataset which includes both synthetic simulated channel effects of 24 digital and analog modulation types which has been validated.
Provide a detailed description of the following dataset: RADIOML 2018.01A
MMC4
**Multimodal C4 (MMC4)** is an augmentation of the popular text-only c4 corpus with images interleaved. The corpus contains 103M documents containing 585M images interleaved with 43B English tokens.
Provide a detailed description of the following dataset: MMC4
MILAN Sky Survey
During the MILAN research project (MachIne Learning for AstroNomy), the research team uses Stellina observation stations to collect raw images of deep sky objects. Thus, the research team built a dataset that represents what can be obtained during classical Electronically Assisted Astronomy sessions in the Luxembourg Greater Region. The dataset is composed of ZIP files -- and each zip file contains raw images in FITS format (Flexible Image Transport System): data comes directly from the Sony IMX178 sensor of the Stellina observation station (no debayerisation and no post processing). Funding: This work was funded by the Luxembourg National Research Fund (FNR -- https://www.fnr.lu/) . More information about VAONIS instruments: https://vaonis.com
Provide a detailed description of the following dataset: MILAN Sky Survey
LayoutBench
LayoutBench is a diagnostic benchmark that examines 4 spatial control skills (number, position, size, shape), where each skill consists of 2 OOD layout splits, i.e., in total 8 tasks = 4 skills x 2 splits. To disentangle spatial control from other aspects of image generation, such as generating diverse objects, LayoutBench keeps the object configurations of CLEVR, and changes the spatial layouts.
Provide a detailed description of the following dataset: LayoutBench
wildlight
Multi-view image dataset of seven objects under indoor lighting, for the purpose of multi-view 3D reconstruction and inverse rendering. Around half images are taken under indoor environment lighting only, and the other half are also under a flashlight co-located with camera centre. The co-located flashlight images are for material/BSDF reconstruction. Four out of seven objects are synthesised in blender and has geometry ground truth. The other three are real-world objects captured by an iPhone and do not have ground truth.
Provide a detailed description of the following dataset: wildlight
XWikiRef
We provide a new data set, XWikiRef, for the task of cross-lingual multi-document summarization. This task aims at generating Wikipedia-style text in low-resource languages by taking reference text as input. Overall, the data set contains 8 different languages: Bengali (bn), English (en), Hindi (hi), Marathi (mr), Malayalam (ml), Odia (or), Punjabi (pa), and Tamil (ta). It also contains 5 domains: books, films, politicians, sportsmen, and writers. ## Data Format The dataset is publicly available [here](https://github.com/DhavalTaunk08/XWikiGen). Each directory contains a language-specific data subset with 1 JSON file per domain. In each file, each line denotes one article. It contains the following set of keys: - Article title - Sections - section title 1 - section text 1 - list of reference texts 1 - ..... - ..... - ..... - section title n - section text n - list of reference texts n
Provide a detailed description of the following dataset: XWikiRef
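A minimal sketch of walking one XWikiRef domain file in the layout described above, assuming one JSON article per line; the exact key names ("title", "sections") are assumptions:

```python
import json

def iter_sections(path):
    """Yield (article title, section dict) pairs from one domain file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            article = json.loads(line)
            for section in article.get("sections", []):
                yield article.get("title"), section
```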
SuHiFiMask
**SuHiFiMask (Surveillance High-Fidelity Mask)** extends FAS to real surveillance scenes rather than mimicking low-resolution images and surveillance environments. It contains 10,195 videos from 101 subjects of different age groups, which are collected by 7 mainstream surveillance cameras.
Provide a detailed description of the following dataset: SuHiFiMask
LongForm
**LongForm** dataset is created by leveraging English corpus examples with augmented instructions. It contains a diverse set of human-written documents from existing corpora such as C4 and Wikipedia, with instructions generated for the given documents via LLMs. The examples generated from raw text corpora via LLMs include structured corpus examples, as well as various NLP task examples such as email writing, grammar error correction, story/poem generation, and text summarization.
Provide a detailed description of the following dataset: LongForm
YIM Dataset
An instance segmentation dataset of yeast cells in microstructures. The dataset includes 493 densely annotated microscopy images. For more information see the paper "An Instance Segmentation Dataset of Yeast Cells in Microstructures".
Provide a detailed description of the following dataset: YIM Dataset
Trajectory calibration experiments
Data and experiments for motion-based extrinsic calibration using [trajectory_calibration]. The data was generated for evaluating hand-eye calibration algorithms in extrinsic sensor-to-sensor calibration. The repository is organised as follows: ``` ├── calib │ └── *.csv # The output calibration values ├── kitti_tests │ └── ... # The trajectories generated from KITTI data ├── matlab │ └── ... # Matlab files to generate simulation data ├── simulation_tests │ └── ... # The generated simulation data ├── *.py # Scripts to run the calibration experiments └── README.md ``` If you use the data in an academic context, please cite: @article{valimaki2023, author = {Välimäki, Tuomas and Garigipati, Bharath and Ghabcheloo, Reza}, title = {Motion-Based Extrinsic Sensor-to-Sensor Calibration: Effect of Reference Frame Selection for New and Existing Methods}, journal = {Sensors}, volume = {23}, year = {2023}, number = {7}, pages = {3740}, url = {https://www.mdpi.com/1424-8220/23/7/3740}, issn = {1424-8220}, doi = {10.3390/s23073740} } [trajectory_calibration]: https://github.com/tau-alma/trajectory_calibration
Provide a detailed description of the following dataset: Trajectory calibration experiments
Brazilian E-Commerce Public Dataset by Olist
See https://www.kaggle.com/datasets/olistbr/brazilian-ecommerce .
Provide a detailed description of the following dataset: Brazilian E-Commerce Public Dataset by Olist
SentNoB
Social Media User Sentiment Analysis Dataset. Each user comment is labeled as either positive (1), negative (2), or neutral (0).
Provide a detailed description of the following dataset: SentNoB
iiwa Robotic Arm Reconstruction Dataset
Please see our website and code repository for a detailed description.
Provide a detailed description of the following dataset: iiwa Robotic Arm Reconstruction Dataset
MasakhaNEWS
**MasakhaNEWS** is a benchmark dataset for news topic classification covering 16 languages widely spoken in Africa.
Provide a detailed description of the following dataset: MasakhaNEWS
CKBP v2
**CKBP v2** is a new CSKB Population benchmark. It addresses two problems of earlier benchmarks by using experts instead of crowd-sourced annotators and by adding diversified adversarial samples to make the evaluation set more representative.
Provide a detailed description of the following dataset: CKBP v2
FLAIR (French Land cover from Aerospace ImageRy)
The French National Institute of Geographical and Forest Information (IGN) has the mission to document and measure land-cover on French territory and provides referential geographical datasets, including high-resolution aerial images and topographic maps. The monitoring of land-cover plays a crucial role in land management and planning initiatives, which can have significant socio-economic and environmental impact. Together with remote sensing technologies, artificial intelligence (AI) promises to become a powerful tool in determining land-cover and its evolution. IGN is currently exploring the potential of AI in the production of high-resolution land cover maps. Notably, deep learning methods are employed to obtain a semantic segmentation of aerial images. However, territories as large as France imply heterogeneous contexts: variations in landscapes and image acquisition make it challenging to provide uniform, reliable and accurate results across all of France. The FLAIR-one dataset presented is part of the dataset currently used at IGN to establish the French national reference land cover map "Occupation du sol à grande échelle" (OCS-GE). It covers 810 km² and has 13 semantic classes.
Provide a detailed description of the following dataset: FLAIR (French Land cover from Aerospace ImageRy)
MLRegTest
MLRegTest is a benchmark for sequence classification, containing training, development, and test sets from 1,800 regular languages. Regular languages are formal languages, which are sets of sequences definable with certain kinds of formal grammars, including regular expressions, finite-state acceptors, and monadic second-order logic with either the successor or precedence relation in the model signature for words. This benchmark was designed to help identify those factors, specifically the kinds of long-distance dependencies, that can make it difficult for ML systems to generalize successfully in learning patterns over sequences. MLRegTest organizes its languages according to their logical complexity (monadic second-order, first-order, propositional, or monomial expressions) and the kind of logical literals (string, tier-string, subsequence, or combinations thereof). The logical complexity and choice of literal provides a systematic way to understand different kinds of long-distance dependencies in regular languages, and therefore to understand the capabilities of different ML systems to learn such long-distance dependencies. The authors think it will be an important milestone if other researchers are able to find an ML system that succeeds across the board on MLRegTest.
Provide a detailed description of the following dataset: MLRegTest
EmoNoBa
A dataset for detecting multi-labeled emotions across 6 emotion categories, namely Love, Joy, Surprise, Anger, Sadness, and Fear.
Provide a detailed description of the following dataset: EmoNoBa
Wikipedia Math Essentials
Contains Wikipedia pages about popular mathematics topics; edges describe the links from one page to another. Features describe the number of daily visits between March 2019 and March 2021.
Provide a detailed description of the following dataset: Wikipedia Math Essentials
MIMIC-IV ICD-9
MIMIC-IV ICD-9 contains 209,326 discharge summaries—free-text medical documents—annotated with ICD-9 diagnosis and procedure codes. It contains data for patients admitted to the Beth Israel Deaconess Medical Center emergency department or ICU between 2008-2019. All codes with fewer than ten examples have been removed, and the train-val-test split was created using multi-label stratified sampling. The dataset is described further in [Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study](https://arxiv.org/abs/2304.10909), and the code to use the dataset is found [here](https://github.com/JoakimEdin/medical-coding-reproducibility). The dataset is intended for medical code prediction and was created using [MIMIC-IV v2.2](https://physionet.org/content/mimiciv/2.2/) and [MIMIC-IV-NOTE v2.2](https://physionet.org/content/mimic-iv-note/2.2/). Using the two datasets requires a license obtained in [Physionet](https://physionet.org/register/); this can take a couple of days.
Provide a detailed description of the following dataset: MIMIC-IV ICD-9
MIMIC-IV ICD-10
MIMIC-IV ICD-10 contains 122,279 discharge summaries—free-text medical documents—annotated with ICD-10 diagnosis and procedure codes. It contains data for patients admitted to the Beth Israel Deaconess Medical Center emergency department or ICU between 2008-2019. All codes with fewer than ten examples have been removed, and the train-val-test split was created using multi-label stratified sampling. The dataset is described further in [Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study](https://arxiv.org/abs/2304.10909), and the code to use the dataset is found [here](https://github.com/JoakimEdin/medical-coding-reproducibility). The dataset is intended for medical code prediction and was created using [MIMIC-IV v2.2](https://physionet.org/content/mimiciv/2.2/) and [MIMIC-IV-NOTE v2.2](https://physionet.org/content/mimic-iv-note/2.2/). Using the two datasets requires a license obtained in [Physionet](https://physionet.org/register/); this can take a couple of days.
Provide a detailed description of the following dataset: MIMIC-IV ICD-10
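Both MIMIC-IV splits above drop ICD codes with fewer than ten examples; a minimal sketch of such a filter (the list-of-code-lists record format and the helper name are assumptions):

```python
from collections import Counter

def drop_rare_codes(code_lists, min_count=10):
    """Remove every code occurring fewer than min_count times overall."""
    counts = Counter(c for codes in code_lists for c in codes)
    keep = {c for c, n in counts.items() if n >= min_count}
    return [[c for c in codes if c in keep] for codes in code_lists]
```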
FireRisk
In this work, we propose a novel remote sensing dataset, FireRisk, consisting of 7 fire risk classes with a total of 91,872 labelled images for fire risk assessment. This remote sensing dataset is labelled with the fire risk classes supplied by the Wildfire Hazard Potential (WHP) raster dataset, and remote sensing images are collected using the National Agriculture Imagery Program (NAIP), a high-resolution remote sensing imagery program. On FireRisk, we present benchmark performance for supervised and self-supervised representations, with Masked Autoencoders (MAE) pre-trained on ImageNet1k achieving the highest classification accuracy, 65.29%.
Provide a detailed description of the following dataset: FireRisk
WikiDetox
An annotated dataset of 1M crowd-sourced annotations covering 100k talk page diffs (with 10 judgements per diff) for personal attacks, aggression, and toxicity.
Provide a detailed description of the following dataset: WikiDetox
KnowledJe
We introduce KnowledJe, an English-language knowledge graph of antisemitic history and language from the 20th century to the present. Structured as a JSON file, KnowledJe currently contains 618 entries, which consist of 210 event names, 137 place names, 95 person names, 80 dates (years), 38 publication names, 27 organization names, and 1 product name. Each entry is associated with its own dictionary, which contains descriptions, locations, authors, and dates as applicable. We obtain the entries through four Wikipedia articles: “Timeline of antisemitism in the 20th century,” “Timeline of antisemitism in the 21st century,” the “Jews” section of “List of religious slurs,” and “Timeline of the Holocaust.” To obtain descriptions for each applicable key, we used the following general rules: 1. If the concept associated with the key is a slur, the description is the entry in the “Meaning, origin, and notes” column of the “List of religious slurs” article. 2. Otherwise, if the concept associated with the key has its own Wikipedia page and that Wikipedia page has a table of contents, the description is the body of text above the table of contents. If the page exists but does not have a table of contents, the description is the first paragraph of the text on the page. 3. Otherwise, the description is the paragraph given directly under the listing of the year of the event in the Wikipedia article in which the concept was first found. We edit descriptions to remove non-Latin characters and citations. For concepts with multiple names, we create separate keys for each name. Potential use cases: enhancing hate speech detection algorithms for antisemitism, extracting knowledge of historical antisemitism. See paper (https://arxiv.org/abs/2304.11223) for examples and further description. See GitHub repo (https://github.com/enscma2/knowledje) for data files.
Provide a detailed description of the following dataset: KnowledJe
EchoKG
Echo Corpus (Arviv et al, 2021) infused with information from KnowledJe (Halevy, 2023). Algorithm detailed in Algorithm 1 in Section 3.2 of Halevy (2023) (https://arxiv.org/pdf/2304.11223.pdf). 4,630 total tweet samples, 380 labeled as antisemitic hate speech. Files included in https://github.com/enscma2/knowledje and described in its README.md. Potential use cases: detection of antisemitic hate speech.
Provide a detailed description of the following dataset: EchoKG
Echo Corpus
A large dataset of over 18,000,000 English tweets posted by ∼7K echo users was constructed in the following manner: 1. **Base Corpus** We have obtained access to a random sample of 10% of all public tweets posted in May and June 2016 – the peak use of the echo. 2. **Raw Echo Corpus** Searching the base corpus, we extracted all tweets containing the echo symbol, resulting in 803,539 tweets posted by 418,624 users. Filtering out non-English tweets and users who used the echo fewer than three times, we were left with ∼7K users. 3. **Echo Corpus** We used the Twitter API to obtain the most recent tweets (up to 3.2K) of each of the users remaining in the English list. This process resulted in ∼18M tweets posted by 7,073 users. Some of the accounts we found using the echo were already suspended or deleted at the time of collection, thus their tweets were not retrievable. Relevant footnotes: - The echo is found in tweets written in multiple languages, particularly in East-Asian languages, whose user base is known for heavy use of ASCII art and kaomoji (McCulloch 2019). - The data was collected in December 2016, amidst reports on the trending ‘echo’. Description taken from paper: Arviv, E., Hanouna, S., & Tsur, O. (2020). It's a Thin Line Between Love and Hate: Using the Echo in Modeling Dynamics of Racist Online Communities. ArXiv, abs/2012.01133.
Provide a detailed description of the following dataset: Echo Corpus
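Step 2 of the Echo Corpus pipeline (keeping only users who used the echo at least three times) can be sketched as follows. Representing tweets as (user, text) pairs and matching the echo via its opening triple parenthesis are assumptions of this sketch:

```python
from collections import Counter

ECHO = "((("  # opening of the triple-parenthesis "echo" symbol

def frequent_echo_users(tweets, min_uses=3):
    """Return users whose tweets contain the echo at least min_uses times."""
    counts = Counter(user for user, text in tweets if ECHO in text)
    return {u for u, n in counts.items() if n >= min_uses}
```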
MJFF Levodopa Response Study
The data generated from this study are grouped into 3 main types: (1) participant demographic and clinical data, (2) sensor data from the different devices, as well as clinical scores and metadata related to the tasks performed, and (3) participant diaries collected during the in-clinic and at-home phases of the study. Throughout the data tables, timestamps are provided as UNIX epoch/POSIX time.
Provide a detailed description of the following dataset: MJFF Levodopa Response Study
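Since all timestamps in the study tables are UNIX epoch/POSIX time, converting them for analysis is straightforward; a minimal sketch:

```python
from datetime import datetime, timezone

def epoch_to_utc(ts):
    """Convert a UNIX epoch timestamp (seconds) to an aware UTC datetime."""
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(epoch_to_utc(0).isoformat())  # 1970-01-01T00:00:00+00:00
```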
Comment quality assessment papers
A list of all proceedings retrieved from the two-stage keyword (key first, key second in the CSV file) filtering approach, and the list of all papers evaluated and reviewed by four authors to identify the relevant papers.
Provide a detailed description of the following dataset: Comment quality assessment papers
Code comments in Java, Python, and Pharo
It contains the dataset of class comments extracted from various projects in three programming languages: Java, Pharo, and Python.
Provide a detailed description of the following dataset: Code comments in Java, Python, and Pharo
Dataset for analyzing the impact of gamification in software testing education
Data collected for the controlled experiment performed to analyze whether gamification can help in software testing education. The results are reported in "Can gamification help in software testing education? Findings from an empirical study" (DOI: 10.1016/j.jss.2023.111647).
Provide a detailed description of the following dataset: Dataset for analyzing the impact of gamification in software testing education
CDS2K
**CDS2K** is a benchmark for concealed scene understanding (CSU), a computer vision topic aiming to perceive objects with camouflaged properties. It is a concealed defect segmentation dataset drawn from five well-known defect segmentation databases, and contains five sub-databases: MVTecAD, NEU, CrackForest, KolektorSDD, and MagneticTile. The defective regions are highlighted with red rectangles.
Provide a detailed description of the following dataset: CDS2K
UCF101-DS
Existing benchmark datasets for real-world distribution shifts are generally synthetically generated via augmentations that simulate real-world shifts such as weather and camera rotation. The UCF101-DS dataset consists of real-world distribution shifts from user-generated videos without synthetic augmentation. It has videos for 47 UCF-101 classes with 63 different distribution shifts that can be grouped into 15 categories: a total of 536 unique videos split into 4,708 clips, each 7 to 10 seconds long.
Provide a detailed description of the following dataset: UCF101-DS
LibriS2S
LibriS2S is a Speech-to-Speech Translation (S2ST) dataset built upon existing resources. The dataset provides English-German speech and text quadruplets totaling just over 50 hours for both languages.
Provide a detailed description of the following dataset: LibriS2S
MMCU
We propose a test to measure the multitask accuracy of large Chinese language models. We constructed a large-scale, multi-task test consisting of single and multiple-choice questions from various branches of knowledge. The test encompasses the fields of medicine, law, psychology, and education, with medicine divided into 15 sub-tasks and education into 8 sub-tasks. The questions in the dataset were manually collected by professionals from freely available online resources, including university medical examinations, national unified legal professional qualification examinations, psychological counselor exams, graduate entrance examinations for psychology majors, and the Chinese National College Entrance Examination. In total, we collected 11,900 questions, which we divided into a few-shot development set and a test set. The few-shot development set contains 5 questions per topic, amounting to 55 questions in total. The test set comprises 11,845 questions.
Provide a detailed description of the following dataset: MMCU
New Plant Diseases Dataset
This dataset is recreated using offline augmentation from the original dataset, which can be found in this GitHub repo. It consists of about 87K RGB images of healthy and diseased crop leaves categorized into 38 different classes. The total dataset is divided into an 80/20 ratio of training and validation sets, preserving the directory structure. A new directory containing 33 test images was created later for prediction purposes.
Provide a detailed description of the following dataset: New Plant Diseases Dataset
vReLoc
A total of 18 sequences of various lengths were collected. Since the Velodyne LiDAR, RealSense camera, and Vicon motion tracker system run at different frequencies, we synchronized these systems so that the image and LiDAR scan at each timestamp have the same 6-DoF pose. In the static scenario, there are no moving objects in the scene; in the other scenarios, there are people randomly walking in the scene. Sequences 01-10 come from the static environment, sequences 11-15 are the one-person moving scenario, and sequences 16-18 are the two-person moving scenario.
Provide a detailed description of the following dataset: vReLoc
DBP1M FR-EN
A large-scale cross-lingual dataset for entity alignment
Provide a detailed description of the following dataset: DBP1M FR-EN
IMUPoser
The **IMUPoser** Dataset is a dataset for estimating body pose using IMUs already in devices that many users own -- namely smartphones, smartwatches, and earbuds.
Provide a detailed description of the following dataset: IMUPoser
EasyPortrait
We introduce a large-scale image dataset **EasyPortrait** for portrait segmentation and face parsing. The proposed dataset can be used in several tasks, such as background removal in conference applications, teeth whitening, face skin enhancement, red eye removal or eye colorization, and so on. The EasyPortrait dataset size is about **26GB**, and it contains **20,000** RGB images with high-quality annotated masks. This dataset is divided into training, validation, and test sets by hashed subject *user_id*. The training set includes 14,000 images, the validation set includes 2,000 images, and the test set includes 4,000 images. Training images were received from 5,947 unique users, while validation images came from 860 users and test images from 1,570. On average, each EasyPortrait image has **254 polygon points**, from which it can be concluded that the annotation is of high quality. Segmentation masks were created from polygons for each annotation. Annotations are presented as 2D arrays, images in `*.png` format with several classes: | Index | Class | |------:|:-----------| | 0 | BACKGROUND | | 1 | PERSON | | 2 | SKIN | | 3 | LEFT_BROW | | 4 | RIGHT_BROW | | 5 | LEFT_EYE | | 6 | RIGHT_EYE | | 7 | LIPS | | 8 | TEETH | Also, we provide some additional meta-information for the dataset in the `annotations/meta.zip` file: | | attachment_id | user_id | data_hash | width | height | brightness | train | test | valid | |---:|:--------------|:--------|:----------|------:|-------:|-----------:|:------|:------|:------| | 0 | de81cc1c-... | 1b... | e8f... | 1440 | 1920 | 136 | True | False | False | | 1 | 3c0cec5a-... | 64... | df5... | 1440 | 1920 | 148 | False | False | True | | 2 | d17ca986-... | cf... | a69... | 1920 | 1080 | 140 | False | True | False | where: - `attachment_id` - image file name without extension - `user_id` - unique anonymized user ID - `data_hash` - image hash using perceptual hashing - `width` - image width - `height` - image height - `brightness` - image brightness - `train`, `test`, `valid` are the binary columns for the train / test / val subsets respectively
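Given the class-index encoding of the `*.png` masks, per-class pixel statistics can be tallied with a few lines of stdlib Python. This is a minimal sketch of ours, not official EasyPortrait tooling; decoding the PNG itself (e.g. with Pillow) is assumed to happen upstream.

```python
# Illustrative helper: tally pixels per EasyPortrait class, given a decoded
# mask as a 2D iterable of integer class indices (0..8, per the table above).
from collections import Counter

CLASSES = ["BACKGROUND", "PERSON", "SKIN", "LEFT_BROW", "RIGHT_BROW",
           "LEFT_EYE", "RIGHT_EYE", "LIPS", "TEETH"]

def class_pixel_counts(mask):
    """mask: 2D iterable of class indices; returns {class name: pixel count}."""
    tally = Counter(v for row in mask for v in row)
    return {name: tally.get(i, 0) for i, name in enumerate(CLASSES)}
```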
Provide a detailed description of the following dataset: EasyPortrait
SIMARA
# Description We propose a new database for information extraction from historical handwritten documents. The corpus includes 5,393 finding aids from six different series, dating from the 18th-20th centuries. Finding aids are handwritten documents that contain metadata describing older archives. They are stored in the National Archives of France and are used by archivists to identify and find archival documents. Each document is annotated at page level and contains seven fields to retrieve. The localization of each field is not available, so this dataset encourages research on segmentation-free systems for information extraction. The dataset is available at https://zenodo.org/record/7868059 ## Details for each series and entity type | Series | Train | Validation | Test | Total (%) | | ------------------- | ----- | ---------- | ---- | --------: | | E series | 322 | 64 | 79 | 8.6 | | L series | 38 | 8 | 4 | 0.9 | | M series | 128 | 21 | 27 | 3.3 | | X1a series | 2209 | 491 | 469 | 58.8 | | Y series | 940 | 205 | 196 | 24.9 | | Douët s'Arcq series | 141 | 22 | 29 | 3.5 | | Total | 3778 | 811 | 804 | 100 | | Entities | Train | Validation | Test | Total (%) | | -------------- | ----- | ---------- | ----- | --------: | | date | 8406 | 1814 | 1799 | 10.4 | | title | 35531 | 7495 | 8173 | 44.5 | | serie | 3168 | 664 | 676 | 3.9 | | analysis | 25988 | 5130 | 5602 | 31.9 | | volume_number | 3913 | 808 | 813 | 4.8 | | article_number | 3181 | 665 | 678 | 3.9 | | arrangement | 644 | 122 | 153 | 0.8 | | Total | 80831 | 16698 | 17894 | 100 | ## Data encoding Transcriptions with entities are encoded in the `labels.json` JSON file. Special tokens are used to represent named entities. Please note that there are only opening NER tokens: each entity spans all words until the next entity starts. | Entities | Special token | Symbol unicode | | -------------- | ------------- | -------- | | date | ⓓ | `\u24d3` | | title | ⓘ | `\u24d8` | | serie | ⓢ | `\u24e2` | | analysis | ⓒ | `\u24d2` | | volume_number | ⓟ | `\u24df` | | article_number | ⓐ | `\u24d0` | | arrangement | ⓥ | `\u24e5` | # Cite us! The dataset is presented in detail in the following article: ```bib @article{simara2023, author = {Solène Tarride and Mélodie Boillet and Jean-François Moufflet and Christopher Kermorvant}, title = {SIMARA: a database for key-value information extraction from full-page handwritten documents}, year = {2023}, journal={Proceedings of the 17th International Conference on Document Analysis and Recognition}, } ```
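Since there are only opening tokens, decoding a labelled transcription reduces to splitting on the circled-letter characters from the table above. A minimal sketch (the `parse_entities` helper is ours, not part of the official release):

```python
# Decode a SIMARA-style labelled string into (entity, text) pairs: each special
# token opens an entity that runs until the next token appears.
ENTITY_TOKENS = {
    "\u24d3": "date",
    "\u24d8": "title",
    "\u24e2": "serie",
    "\u24d2": "analysis",
    "\u24df": "volume_number",
    "\u24d0": "article_number",
    "\u24e5": "arrangement",
}

def parse_entities(transcription):
    """Split a labelled transcription into (entity, text) pairs."""
    entities = []
    current_label, buf = None, []
    for ch in transcription:
        if ch in ENTITY_TOKENS:
            if current_label is not None:
                entities.append((current_label, "".join(buf).strip()))
            current_label, buf = ENTITY_TOKENS[ch], []
        elif current_label is not None:
            buf.append(ch)
    if current_label is not None:
        entities.append((current_label, "".join(buf).strip()))
    return entities
```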
Provide a detailed description of the following dataset: SIMARA
A Software Maintainability Dataset
This dataset contains 304 manual evaluations of class-level software maintainability, drawn from 5 open-source projects: [ArgoUML](https://github.com/argouml-tigris-org/argouml), [Art of Illusion](http://www.artofillusion.org/), [Diary Management](https://sourceforge.net/projects/diarymanagement/), [JUnit 4](https://junit.org/junit4/), [JSweet](http://www.jsweet.org/). Each Java class is labelled along 5 axes: readability, understandability, complexity, modularity, and overall maintainability. Each Java class was assessed by several experts, independently of its relation to other classes. It can be used to develop and evaluate automated quality prediction tools.
Provide a detailed description of the following dataset: A Software Maintainability Dataset
Mip-NeRF RGB-D
4 different synthetic datasets generated by Blender
Provide a detailed description of the following dataset: Mip-NeRF RGB-D
MIMIC-IV-ICD9-top50
The MIMIC-IV-ICD9 dataset, featuring the top 50 most frequently occurring labels.
Provide a detailed description of the following dataset: MIMIC-IV-ICD9-top50
MIMIC-IV-ICD10-top50
The MIMIC-IV-ICD10 dataset, featuring the top 50 most frequently occurring labels.
Provide a detailed description of the following dataset: MIMIC-IV-ICD10-top50
MIMIC-IV-ICD-10-full
The MIMIC-IV-ICD10 dataset, including all occurring labels.
Provide a detailed description of the following dataset: MIMIC-IV-ICD-10-full
MIMIC-IV-ICD9-full
The MIMIC-IV-ICD9 dataset, including all occurring labels.
Provide a detailed description of the following dataset: MIMIC-IV-ICD9-full
Expi
The Extreme Pose Interaction (ExPI) Dataset is a new person-interaction dataset of Lindy Hop dancing actions. In Lindy Hop, the two dancers are called the leader and the follower. The authors recorded two couples of dancers in a multi-camera setup also equipped with a motion-capture system. 16 different actions are performed in the ExPI dataset, some by both couples of dancers and some by only one of the couples. Each action was repeated five times to account for variability. More precisely, for each recorded sequence, ExPI provides: (i) multi-view videos at 25FPS from all the cameras in the recording setup; (ii) mocap data (3D positions of 18 joints for each person) at 25FPS, synchronized with the videos; (iii) camera calibration information; and (iv) 3D shapes as textured meshes for each frame. Overall, the dataset contains 115 sequences, with 30k visual frames for each viewpoint and 60k annotated 3D instances.
Provide a detailed description of the following dataset: Expi