Columns: dataset_name (string, 2-128 chars), description (string, 1-9.7k chars), prompt (string, 59-185 chars)
BCI Competition IV: ECoG to Finger Movements
##### Prediction of Finger Flexion (BCI Competition IV Brain-Computer Interface Data Competition) The goal of this dataset is to predict the flexion of individual fingers from signals recorded from the surface of the brain (electrocorticography, ECoG). This dataset contains brain signals from three subjects, as well as the time courses of the flexion of each of five fingers. The task in this competition is to use the provided flexion information to predict finger flexion for a provided test set. The performance of the predictor is evaluated by calculating the average correlation coefficient r between actual and predicted finger flexion. ECoG data during individual flexions of the five fingers; movements acquired with a data glove. [48-64 ECoG channels (0.15-200 Hz), 1000 Hz sampling rate, 3 subjects]
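For reference, the competition metric is straightforward to reproduce. A minimal sketch, assuming the flexion traces are stored as time-by-finger NumPy arrays (the on-disk layout is not specified here):

```python
import numpy as np

def average_correlation(actual, predicted):
    """Mean Pearson r across fingers; rows are time samples, columns are the five fingers."""
    rs = [np.corrcoef(actual[:, i], predicted[:, i])[0, 1] for i in range(actual.shape[1])]
    return float(np.mean(rs))
```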
Provide a detailed description of the following dataset: BCI Competition IV: ECoG to Finger Movements
Stanford ECoG library: ECoG to Finger Movements
Electrophysiological data from implanted electrodes in the human brain are rare, and therefore scientific access to them has remained somewhat exclusive. Here we present a freely available curated library of implanted electrocorticographic (ECoG) data and analyses for 16 benchmark behavioral experiments, with 204 individual datasets from 34 patients recorded with the same amplifiers (at the same sampling rate and filter settings). In every case, electrode positions have been carefully registered to brain anatomy. A large set of fully commented analysis scripts to interpret these data using modern techniques is embedded in the library alongside the data. All data, anatomic correlations, and analysis files (MATLAB code) are in a common, intuitive file structure at https://searchworks.stanford.edu/view/zk881ps0522. The library may be used as course material or serve as a starter package for researchers early in their careers, or for established groups, to modify the analyses and re-apply them in new settings. Patients were cued with a word displayed on a bedside monitor to move individual fingers repetitively (contralateral to the electrode array) during 2 s cue periods while finger position was recorded with a dataglove. filename - fingerflex.zip
Provide a detailed description of the following dataset: Stanford ECoG library: ECoG to Finger Movements
HuTu 80
The image set contains 180 high-resolution color microscopic images of human duodenum adenocarcinoma HuTu 80 cell populations obtained in an in vitro scratch assay (for the details of the experimental protocol, we refer to (Liang et al., 2007)). Briefly, cells were seeded in 12-well culture plates ($20 \times 10^3$ cells per well) and grown to form a monolayer with 85\% or more confluency. Then the cell monolayer was scraped in a straight line using a pipette tip ($200 \mu L$). The debris was removed by washing with a growth medium and the medium in the wells was replaced. The scratch areas were marked to obtain the same field during image acquisition. Images of the scratches were captured immediately following the scratch formation, as well as after 24, 48 and 72 h of cultivation. Images were obtained with the Zeiss Axio Observer 1.0 microscope (Carl Zeiss AG, Oberkochen, Germany) at 400x magnification. All images have been manually annotated by domain experts as part of the original experimental [study](https://link.springer.com/article/10.1134/S0026893320010173). Here we use these manual annotations as a reference ("ground truth"). To improve the reproducibility of our analysis, we made the corresponding images and their manual annotations fully available at [https://gitlab.com/digiratory/biomedimaging/bcanalyzer](https://bit.ly/3hlvli9).
Provide a detailed description of the following dataset: HuTu 80
CWL EEG/fMRI Dataset
EEG/fMRI data from 8 subjects performing a simple eyes-open/eyes-closed task are provided on this webpage. The EEG/fMRI data comprise six files per subject, crossing two basic factors: recording with the helium pump on vs. off, and recording during MRI scanning vs. without MRI scanning. In addition, 'outside' EEG data are provided, recorded both before and after the MRI session. There are 30 EEG channels, 1 EOG channel, 1 ECG channel, as well as 6 CWL signals.
Provide a detailed description of the following dataset: CWL EEG/fMRI Dataset
SFpark
The San Francisco Municipal Transportation Agency (SFMTA) website provides data collected during the SFpark pilot project. On-street occupancy rate data contain per-block hourly occupancy rates and meter prices for seven parking districts.
Provide a detailed description of the following dataset: SFpark
JRDB-Pose
**JRDB-Pose** is a large-scale dataset and benchmark for multi-person pose estimation and tracking using videos captured from a social navigation robot. The dataset contains challenging scenes with crowded indoor and outdoor locations and a diverse range of scales and occlusion types. It provides human pose annotations with per-keypoint occlusion labels and track IDs consistent across the scene. These annotations include 600,000 human body pose annotations and 600,000 head bounding box annotations.
Provide a detailed description of the following dataset: JRDB-Pose
MGSM
Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems. The same 250 problems from GSM8K were each translated by human annotators into 10 languages. GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
Provide a detailed description of the following dataset: MGSM
Demosthenes
A corpus for argument mining in legal documents, composed of 40 decisions of the Court of Justice of the European Union on matters of fiscal state aid.
Provide a detailed description of the following dataset: Demosthenes
FrenchMedMCQA
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for the medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task, in order to report on current performance and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. The corpus, models and tools are available online.
Provide a detailed description of the following dataset: FrenchMedMCQA
High-cardinality Geometrically Shaped Constellation for the AWGN channel and optical fibre channel
Optimised constellations for the paper "High-Cardinality Geometrical Constellation Shaping for the Nonlinear Fibre Channel". Each file is a constellation optimised for the SNR in dB mentioned in the filename, containing the coordinates of the constellation points as comma-separated values. Each column represents a dimension and each row is a separate constellation point. The bit labels for the generalised mutual information (GMI) are implied and follow natural mapping: the first row is 0,...,0,0; the second 0,...,0,1; the third 0,...,1,0; the fourth 0,...,1,1; and so on up to the last row, 1,...,1,1. The file named gmi.txt contains the GMI for the resulting constellations. [source](https://doi.org/10.5522/04/20223963.v1)
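A minimal sketch of loading one constellation file and deriving the implied natural-mapping bit labels; the filename below is hypothetical, since the actual files are named after the SNR they were optimised for:

```python
import numpy as np

# Hypothetical filename; real files encode the optimisation SNR in their names.
points = np.loadtxt("constellation_18dB.txt", delimiter=",")  # rows: points, columns: dimensions

m = int(np.log2(len(points)))  # bits per symbol
# Natural mapping: row i carries the m-bit binary expansion of i
# (first row 0,...,0,0; last row 1,...,1,1).
labels = [format(i, f"0{m}b") for i in range(len(points))]
```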
Provide a detailed description of the following dataset: High-cardinality Geometrically Shaped Constellation for the AWGN channel and optical fibre channel
CovidET
Crises such as the COVID-19 pandemic continuously threaten our world and emotionally affect billions of people worldwide in distinct ways. Understanding the triggers leading to people's emotions is of crucial importance. Social media posts can be a good source of such analysis, yet these texts tend to be charged with multiple emotions, with triggers scattering across multiple sentences. This paper takes a novel angle, namely, emotion detection and trigger summarization, aiming to both detect perceived emotions in text, and summarize events that trigger each emotion. To support this goal, we introduce CovidET (Emotions and their Triggers during Covid-19), a dataset of ~1,900 English Reddit posts related to COVID-19, which contains manual annotations of perceived emotions and abstractive summaries of their triggers described in the post. We develop strong baselines to jointly detect emotions and summarize emotion triggers. Our analyses show that CovidET presents new challenges in emotion-specific summarization, as well as multi-emotion detection in long social media posts.
Provide a detailed description of the following dataset: CovidET
CrossRE
**CrossRE** is a cross-domain benchmark for Relation Extraction (RE), which comprises six distinct text domains and includes multi-label annotations. The dataset also includes meta-data collected during annotation, such as explanations and flags for difficult instances.
Provide a detailed description of the following dataset: CrossRE
ALTO
**ALTO** is a vision-focused dataset for the development and benchmarking of Visual Place Recognition and Localization methods for Unmanned Aerial Vehicles. The dataset is composed of two long (approximately 150 km and 260 km) trajectories flown by a helicopter over Ohio and Pennsylvania, and it includes high-precision GPS-INS ground truth location data, high-precision accelerometer readings, laser altimeter readings, and RGB downward-facing camera imagery. The dataset also comes with reference imagery over the flight paths, which makes it suitable for VPR benchmarking and other tasks common in Localization, such as image registration and visual odometry.
Provide a detailed description of the following dataset: ALTO
PAXRay
Projection of the RibFrac CT dataset onto a 2D plane to imitate X-ray data, for a total of 880 images with multi-label segmentation masks. The dataset contains 92 fine-grained individual labels of anatomical structures which, when super-classes are included, lead to a total of 166 labels in both lateral and frontal views.
Provide a detailed description of the following dataset: PAXRay
DiSCQ
**DiSCQ** is a newly curated question dataset composed of 2,000+ questions paired with the snippets of text (triggers) that prompted each question. The questions are generated by medical experts from 100+ MIMIC-III discharge summaries. This dataset is released to facilitate further research into realistic clinical Question Answering (QA) and Question Generation (QG).
Provide a detailed description of the following dataset: DiSCQ
Avalon
**Avalon** is a benchmark for generalization in Reinforcement Learning (RL). The benchmark consists of a set of tasks in which embodied agents in highly diverse procedural 3D worlds must survive by navigating terrain, hunting or gathering food, and avoiding hazards. Avalon is unique among existing RL benchmarks in that the reward function, world dynamics, and action space are the same for every task, with tasks differentiated solely by altering the environment; its 20 tasks, ranging in complexity from eat and throw to hunt and navigate, each create worlds in which the agent must perform specific skills in order to survive. This benchmark setup enables investigations of generalization within tasks, between tasks, and to compositional tasks that require combining skills learned from previous tasks.
Provide a detailed description of the following dataset: Avalon
PoseScript
**PoseScript** is a dataset that pairs a few thousand 3D human poses from AMASS with rich human-annotated descriptions of the body parts and their spatial relationships. This dataset is designed for the retrieval of relevant poses from large-scale datasets and synthetic pose generation, both based on a textual pose description.
Provide a detailed description of the following dataset: PoseScript
Perception Test
Perception Test is a benchmark designed to evaluate the perception and reasoning skills of multimodal models. It introduces real-world videos designed to show perceptually interesting situations and defines multiple tasks that require understanding of memory, abstract patterns, physics, and semantics, across visual, audio, and text modalities. The benchmark consists of 11.6k videos, 23 s average length, filmed by around 100 participants worldwide. The videos are densely annotated with six types of labels: object and point tracks, temporal action and sound segments, multiple-choice video question-answers and grounded video question-answers. The benchmark probes pre-trained models for their transfer capabilities, in zero-shot/few-shot or fine-tuning regimes.
Provide a detailed description of the following dataset: Perception Test
TFW: Annotated Thermal Faces in the Wild Dataset
Face detection and subsequent localization of facial landmarks are the primary steps in many face applications. Numerous algorithms and benchmark datasets have been introduced to develop robust models for the visible domain. However, varying conditions of illumination still pose challenging problems. In this regard, thermal cameras are employed to address this problem, because they operate on longer wavelengths. However, thermal face and facial landmark detection in the wild is an open research problem because most of the existing thermal datasets were collected in controlled environments. In addition, many of them were not annotated with face bounding boxes and facial landmarks. In this work, we present a thermal face dataset with manually labeled bounding boxes and facial landmarks to address these problems. The dataset contains 9,982 images of 147 subjects collected under controlled and uncontrolled conditions. As a baseline, we trained the YOLOv5 object detection model and its adaptation for face detection, YOLO5Face, on our dataset. In addition to our test set, we evaluated the models on the external RWTH-Aachen thermal face dataset to show the efficacy of our dataset. We have made the dataset, source code, and pre-trained models publicly available at https://github.com/IS2AI/TFW to bolster research in thermal face analysis.
Provide a detailed description of the following dataset: TFW: Annotated Thermal Faces in the Wild Dataset
SF-TL54: A Thermal Facial Landmark Dataset with Visual Pairs
Facial landmark detection is a cornerstone in many facial analysis tasks such as face recognition, drowsiness detection, and facial expression recognition. Numerous methodologies have been introduced to achieve accurate and efficient facial landmark localization in visual images. However, only a few works address facial landmark detection in thermal images. The main challenge is the limited number of annotated datasets. In this work, we present a thermal face dataset with annotated face bounding boxes and facial landmarks. The dataset contains 2,556 thermal images of 142 individuals, where each thermal image is paired with the corresponding visual image. To the best of our knowledge, our dataset is the largest in terms of the number of individuals. In addition, our dataset can be employed for tasks such as thermal-to-visual image translation, thermal-visual face recognition, and others. We trained two models for the facial landmark detection task to show the efficacy of our dataset. The first model is a classic machine learning model based on an ensemble of regression trees. The second model is a deep learning model based on the U-net architecture. The dataset, annotations, source code, and pre-trained models are publicly available to advance research in thermal face analysis.
Provide a detailed description of the following dataset: SF-TL54: A Thermal Facial Landmark Dataset with Visual Pairs
MovieCLIP
MovieCLIP is a movie-centric taxonomy of 179 scene labels derived from movie scripts and auxiliary web-based video datasets designed for visual scene recognition.
Provide a detailed description of the following dataset: MovieCLIP
XiaChuFang Recipe Corpus
The XiaChuFang Recipe Corpus contains recipes from 下厨房 (XiaChuFang), a popular Chinese recipe-sharing website. The full recipe corpus contains 1,520,327 Chinese recipes. Among them, 1,242,206 recipes belong to 30,060 dishes; a dish has 41.3 recipes on average.
Provide a detailed description of the following dataset: XiaChuFang Recipe Corpus
Motion Policy Networks
This dataset contains a large set (~3.2 million) of high-quality expert trajectories generated by a geometrically consistent hybrid planner in a wide variety of environments (~575,000 environments). We created this dataset to explore the capabilities of neural networks to learn complex robotic motion, mimicking a traditional planner. For more information on how to use this data, please refer to the GitHub repository for this project: https://github.com/nvlabs/motion-policy-networks
Provide a detailed description of the following dataset: Motion Policy Networks
TEACh
Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions, and using conversation to resolve ambiguity and recover from mistakes. To study this, we introduce TEACh, a dataset of over 3,000 human--human, interactive dialogues to complete household tasks in simulation. A Commander with access to oracle information about a task communicates in natural language with a Follower. The Follower navigates through and interacts with the environment to complete tasks varying in complexity from "Make Coffee" to "Prepare Breakfast", asking questions and getting additional information from the Commander. We propose three benchmarks using TEACh to study embodied intelligence challenges, and we evaluate initial models' abilities in dialogue understanding, language grounding, and task execution.
Provide a detailed description of the following dataset: TEACh
SDN
Situated Dialogue Navigation (SDN) is a navigation benchmark of 183 trials with a total of 8415 utterances, around 18.7 hours of control streams, and 2.9 hours of trimmed audio. SDN is developed to evaluate the agent's ability to predict dialogue moves from humans as well as generate its own dialogue moves and physical navigation actions.
Provide a detailed description of the following dataset: SDN
Breaking Bad
**Breaking Bad** is a large-scale dataset of fractured objects. The dataset contains around 10k meshes from PartNet and Thingi10K. For each mesh, 20 fracture modes are pre-computed and 80 fractures are then simulated from them, resulting in a total of 1M breakdown patterns. This dataset serves as a benchmark that enables the study of fractured object reassembly and presents new challenges for geometric shape understanding.
Provide a detailed description of the following dataset: Breaking Bad
RTI Rwanda Drone Crop Types
RTI International (RTI) generated 2,611 labeled point locations representing 19 different land cover types, clustered in 5 distinct agroecological zones within Rwanda. These land cover types were reduced to three crop types (Banana, Maize, and Legume), two additional non-crop land cover types (Forest and Structure), and a catch-all Other land cover type to provide training/evaluation data for a crop classification model. Each point is attributed with its latitude and longitude, the land cover type, and the degree of confidence the labeler had when classifying the point location. For each location there are also three corresponding image chips (4.5 m x 4.5 m in size) with the point id as part of the image name. Each image contains a P1, P2, or P3 designation in the name, indicating the time period. P1 corresponds to December 2018, P2 corresponds to January 2019, and P3 corresponds to February 2019. These data were used in the development of research documented in greater detail in “Deep Neural Networks and Transfer Learning for Food Crop Identification in UAV Images” (Chew et al., 2020).
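A small sketch of recovering the acquisition month from a chip filename's P1/P2/P3 designation; the example filename is hypothetical, since only the point id and period tag are guaranteed by the description above:

```python
import re

PERIODS = {"P1": "2018-12", "P2": "2019-01", "P3": "2019-02"}

def chip_period(filename):
    """Map an image-chip filename to its acquisition month via the P1/P2/P3 tag."""
    match = re.search(r"P[123]", filename)
    return PERIODS[match.group()] if match else None

print(chip_period("1042_P2.tif"))  # -> "2019-01" (hypothetical chip name)
```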
Provide a detailed description of the following dataset: RTI Rwanda Drone Crop Types
Wikipedia Knowledge Graph dataset
Wikipedia is the largest and most read free online encyclopedia currently in existence. As such, Wikipedia offers a large amount of data on all its contents and the interactions around them, as well as different types of open data sources. This makes Wikipedia a unique data source that can be analyzed with quantitative data science techniques. However, the enormous amount of data makes it difficult to get an overview, and many of the analytical possibilities that Wikipedia offers remain unknown. To reduce the complexity of identifying and collecting data on Wikipedia and to expand its analytical potential, we collected data from various sources, processed them, and generated a dedicated Wikipedia Knowledge Graph aimed at facilitating the analysis and contextualization of the activity and relations of Wikipedia pages, in this case limited to the English edition. We share this Knowledge Graph dataset openly, aiming to be useful to a wide range of researchers, such as informetricians, sociologists or data scientists. There are a total of 9 files, all of them in tsv format, built under a relational structure. The main one, which acts as the core of the dataset, is the page file; after it there are 4 files with different entities related to the Wikipedia pages (the category, url, pub and page_property files) and 4 other files that act as "intermediate tables", making it possible to connect the pages both with the latter and between pages (the page_category, page_url, page_pub and page_link files).
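A minimal pandas sketch of joining the relational tsv files; the key column names here are assumptions, as the dataset's own documentation defines the actual schema:

```python
import pandas as pd

pages = pd.read_csv("page.tsv", sep="\t")
categories = pd.read_csv("category.tsv", sep="\t")
page_category = pd.read_csv("page_category.tsv", sep="\t")  # intermediate table

# Hypothetical key names; check the released files for the real column schema.
pages_with_categories = (page_category
                         .merge(pages, on="page_id")
                         .merge(categories, on="category_id"))
```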
Provide a detailed description of the following dataset: Wikipedia Knowledge Graph dataset
MSU Video Frame Interpolation
This is a dataset for the video frame interpolation task. The dataset contains 1920×1080 videos at 240 FPS for footage captured with an iPhone 11, and at 120 FPS for gaming content captured with OBS.
Provide a detailed description of the following dataset: MSU Video Frame Interpolation
Cross-institution Male Pelvic Structures
The data set includes 589 T2-weighted images acquired from the same number of patients, collected by seven studies: INDEX, the SmartTarget Biopsy Trial, PICTURE, TCIA Prostate3T, Promise12, TCIA ProstateDx (Diagnosis) and the Prostate MR Image Database. Further details are reported in the respective study references. These images were divided into seven subsets based on the acquiring institution. The cross-institution imaging protocols span multiple scanners (two manufacturers with mixed 1.5 T and 3 T field strengths), varying fields-of-view and anisotropic voxels, with in-plane voxel dimensions ranging between 0.3 and 1.0 mm and out-of-plane spacing between 1.8 and 5.4 mm. For each image, eight anatomical structures of planning interest were labelled: bladder, bone, neurovascular bundle, obturator internus, rectum, seminal vesicle, transition zone and peripheral zone. All segmentations were manually annotated by eight biomedical imaging researchers with experience ranging from 2 to 10 years in the annotation of medical image data, each annotating a mixed-institution subset using institution-stratified sampling. Each annotation has been reviewed at least once. All images and labels can be found in data.zip, while the indexing from image to trial and to institution is provided in trial.txt and institution.txt, respectively. If you find this labelled data set useful for your research, please consider acknowledging the work: Li, Y., et al. "Prototypical few-shot segmentation for cross-institution male pelvic structures with spatial registration." arXiv preprint arXiv:2209.05160 (2022).
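A minimal sketch of reading the two index files, assuming one image-to-label pair per line (the exact file layout is not specified in the description above):

```python
def load_index(path):
    """Parse an index file, assuming one whitespace-separated 'image label' pair per line."""
    mapping = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                mapping[parts[0]] = parts[1]
    return mapping

trials = load_index("trial.txt")              # image -> source study
institutions = load_index("institution.txt")  # image -> acquiring institution
```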
Provide a detailed description of the following dataset: Cross-institution Male Pelvic Structures
The Reddit Climate Change Dataset
The Reddit Climate Change Dataset is a dataset of 620K Reddit posts and 4.6M comments - all mentions of the terms "climate" and "change" until 2022-09-01 across the entire Reddit social network. Both were procured with [SocialGrep's export feature](https://socialgrep.com/exports?utm_source=paperswithcode&utm_medium=link&utm_campaign=theredditclimatechangedataset) and released as part of SocialGrep [Reddit datasets](https://socialgrep.com/datasets?utm_source=paperswithcode&utm_medium=link&utm_campaign=theredditclimatechangedataset). The posts are labeled with their subreddit, title, creation date, domain, selftext, and score. The comments are labeled with their subreddit, body, creation date, sentiment (calculated for you using a VADER pipeline), and score.
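The comment sentiment ships precomputed; to recompute or extend it, a standard VADER call via the vaderSentiment package looks like the sketch below, though the exact pipeline SocialGrep used is not documented here:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
# 'compound' is the normalized overall sentiment score in [-1, 1].
score = analyzer.polarity_scores("Climate change is accelerating faster than expected.")["compound"]
```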
Provide a detailed description of the following dataset: The Reddit Climate Change Dataset
BioNLI
**BioNLI** is a dataset in biomedical natural language inference. This dataset contains abstracts from biomedical literature and mechanistic premises generated with nine different strategies.
Provide a detailed description of the following dataset: BioNLI
VizWiz Answer Grounding
Source: [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Grounding_Answers_for_Visual_Questions_Asked_by_Visually_Impaired_People_CVPR_2022_paper.pdf) Visual Question Answering (VQA) is the task of returning the answer to a question about an image. While most VQA services only return a natural language answer, we believe it is also valuable for a VQA service to return the region in the image used to arrive at the answer. We call this task of locating the relevant visual evidence answer grounding. We publicly share the VizWiz-VQA-Grounding dataset, the first dataset that visually grounds answers to visual questions asked by people with visual impairments, to encourage community progress in developing algorithmic frameworks. Numerous applications would be possible if answer groundings were provided in response to visual questions. First, they enable assessment of whether a VQA model reasons based on the correct visual evidence. This is valuable as an explanation as well as to support developers in debugging models. Second, answer groundings enable segmenting the relevant content from the background. This is a valuable precursor for obfuscating the background to preserve privacy, given that photographers can inadvertently capture private information in the background of their images. Third, users could more quickly find the desired information if a service instead magnified the relevant visual evidence. This is valuable in part because answers from VQA services can be insufficient, including because humans suffer from "reporting bias", meaning they describe what they find interesting without understanding what a person/population is seeking.
Provide a detailed description of the following dataset: VizWiz Answer Grounding
DiffusionDB
**DiffusionDB** is a large-scale text-to-image prompt dataset. It contains 2 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
Provide a detailed description of the following dataset: DiffusionDB
RGZ EMU: Semantic Taxonomy
The data used in "Radio Galaxy Zoo EMU: Towards a Semantic Radio Galaxy Morphology Taxonomy" (Bowles et al., submitted) and "A New Task: Deriving Semantic Class Targets for the Physical Sciences" (Bowles et al. 2022: https://arxiv.org/abs/2210.14760), accepted at the Fifth Workshop on Machine Learning and the Physical Sciences, Neural Information Processing Systems 2022. The data consist of images of galaxies and plain-English annotations of the features of the radio galaxies, as well as expert classifications using pre-existing scientific classes. Additionally, the data contain checkpoints of the data at various stages throughout the processing, initially completed with https://github.com/mb010/Text2Tag.
Provide a detailed description of the following dataset: RGZ EMU: Semantic Taxonomy
DeepSportRadar-v1
**DeepSportradar** is a suite of computer vision tasks, datasets and benchmarks for automated sport understanding. DeepSportradar currently supports four challenging tasks related to basketball: ball 3D localization, camera calibration, player instance segmentation and player re-identification. For each of the four tasks, a detailed description of the dataset, objective, performance metrics, and the proposed baseline method are provided.
Provide a detailed description of the following dataset: DeepSportRadar-v1
InfantBooks
A dataset of books for very young children.
Provide a detailed description of the following dataset: InfantBooks
Commonsense LAMA probes
Probes to evaluate commonsense in language models.
Provide a detailed description of the following dataset: Commonsense LAMA probes
UML Classes With Specs
# Repository for UML-English data This repository contains the data used for "Extraction of UML Class Diagrams from Natural Language Specification" (Yang et al. 2022) ## Getting the dataset To get the entire dataset, you must download the release containing `dataset.tar.gz`. ## Structure of the dataset * `dataset.tar.gz`: archive that contains all the files * `fragments.csv`: file that lists UML fragments and their characteristics * `labels.csv`: file that contains the labels received in the crowdsourcing effort * `models.csv`: file that lists UML class diagrams and their characteristics * `zoo/`: folder that contains all the UML data itself, such as pictures and UML encodings ## Making use of the dataset Unzip the tarball first. ### Opening the image of a certain UML model Open `models.csv` to read the list of available models. Copy its name and search in the `zoo/` folder for `.png` files starting with that name. For example, the ACME model has an image in the `zoo/` folder called `ACME.png`. ```bash ls zoo/ACME.png code zoo/ACME.png # any other image visualizer ``` ### Opening the image of a certain fragment Fragment files are named in the following pattern. Class fragments: ``` (ModelName)_(class)(number).png ``` Relationship fragments: ``` (ModelName)_(rel)(number).png ``` Similarly, you can visualize them. ```bash code zoo/CFG_class0.png ``` ### Finding the image of a fragment starting from a label 1. Browse through `labels.csv` and find the line that has the label of interest. 2. Every label has a `fragment_id`, which can be indexed in `fragments.csv`. Find the ID for the label of interest. 3. Inside `fragments.csv`, search for the line where the column value of `unique_id` equals `fragment_id` from Step 2. 4. Proceed like in the previous [section](#opening-the-image-of-a-certain-fragment)
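A minimal pandas sketch of the label-to-fragment lookup described in the last section; the column names `fragment_id` and `unique_id` come from the README itself, while everything else is an assumption about the CSV layout:

```python
import pandas as pd

labels = pd.read_csv("labels.csv")        # crowdsourced labels, one per row
fragments = pd.read_csv("fragments.csv")  # UML fragments and their characteristics

# Join each label to its fragment via fragment_id == unique_id (per the README steps).
merged = labels.merge(fragments, left_on="fragment_id", right_on="unique_id")
```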
Provide a detailed description of the following dataset: UML Classes With Specs
CBIS-DDSM
The CBIS-DDSM (Curated Breast Imaging Subset of DDSM) is an updated and standardized version of the Digital Database for Screening Mammography (DDSM). The DDSM is a database of 2,620 scanned film mammography studies. It contains normal, benign, and malignant cases with verified pathology information. The scale of the database along with ground truth validation makes the DDSM a useful tool in the development and testing of decision support systems. The CBIS-DDSM collection includes a subset of the DDSM data selected and curated by a trained mammographer. The images have been decompressed and converted to DICOM format. Updated ROI segmentations and bounding boxes, and pathologic diagnoses for the training data are also included. A manuscript describing how to use this dataset in detail is available at https://www.nature.com/articles/sdata2017177. Published research results from work in developing decision support systems in mammography are difficult to replicate due to the lack of a standard evaluation data set; most computer-aided diagnosis (CADx) and detection (CADe) algorithms for breast cancer in mammography are evaluated on private data sets or on unspecified subsets of public databases. Few well-curated public datasets have been provided for the mammography community. These include the DDSM, the Mammographic Imaging Analysis Society (MIAS) database, and the Image Retrieval in Medical Applications (IRMA) project. Although these public data sets are useful, they are limited in terms of data set size and accessibility. For example, most researchers using the DDSM do not leverage all its images for a variety of historical reasons. When the database was released in 1997, computational resources to process hundreds or thousands of images were not widely available. Additionally, the DDSM images are saved in non-standard compression files that require the use of decompression code that has not been updated or maintained for modern computers. Finally, the ROI annotations for the abnormalities in the DDSM were provided to indicate a general position of lesions, but not a precise segmentation of them. Therefore, many researchers must implement segmentation algorithms for accurate feature extraction. This causes an inability to directly compare the performance of methods or to replicate prior results. The CBIS-DDSM collection addresses that challenge by publicly releasing a curated and standardized version of the DDSM for evaluation of future CADx and CADe systems (sometimes referred to generally as CAD) research in mammography. Please note that the image data for this collection is structured such that each participant has multiple patient IDs. For example, participant 00038 has 10 separate patient IDs which provide information about the scans within the IDs (e.g. Calc-Test_P_00038_LEFT_CC, Calc-Test_P_00038_RIGHT_CC_1). This makes it appear as though there are 6,671 patients according to the DICOM metadata, but there are only 1,566 actual participants in the cohort.
Provide a detailed description of the following dataset: CBIS-DDSM
Articulation GAN: Unsupervised modeling of articulatory learning
Checkpoints, generated EMA representations, audio outputs, and annotations for paper titled "Articulation GAN: Unsupervised modeling of articulatory learning"
Provide a detailed description of the following dataset: Articulation GAN: Unsupervised modeling of articulatory learning
Panoramic Video Panoptic Segmentation Dataset
**Panoramic Video Panoptic Segmentation Dataset** is a large-scale dataset that offers high-quality panoptic segmentation labels for autonomous driving. The dataset has labels for 28 semantic categories and 2,860 temporal sequences that were captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images.
Provide a detailed description of the following dataset: Panoramic Video Panoptic Segmentation Dataset
CS1QA
**CS1QA** is a dataset for code-based question answering in the programming education domain. It consists of 9,237 question-answer pairs gathered from chat logs in an introductory programming class using Python, and 17,698 unannotated chat data with code.
Provide a detailed description of the following dataset: CS1QA
Housekeep
**Housekeep** is a benchmark to evaluate commonsense reasoning in the home for embodied AI. In Housekeep, an embodied agent must tidy a house by rearranging misplaced objects, without explicit instructions specifying which objects need to be rearranged. The dataset captures where humans typically place objects in tidy and untidy houses, comprising 1,799 objects, 268 object categories, 585 placements, and 105 rooms.
Provide a detailed description of the following dataset: Housekeep
CFC
**Caltech Fish Counting Dataset** (**CFC**) is a large-scale dataset for detecting, tracking, and counting fish in sonar videos. This dataset contains over 1,500 videos sourced from seven different sonar cameras.
Provide a detailed description of the following dataset: CFC
QAMPARI
**QAMPARI** is an ODQA benchmark, where question answers are lists of entities, spread across many paragraphs. It was created by (a) generating questions with multiple answers from Wikipedia's knowledge graph and tables, (b) automatically pairing answers with supporting evidence in Wikipedia paragraphs, and (c) manually paraphrasing questions and validating each answer.
Provide a detailed description of the following dataset: QAMPARI
The Stack
**The Stack** contains over 3TB of permissively-licensed source code files covering 30 programming languages crawled from GitHub. The dataset was created as part of the BigCode Project, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs).
Provide a detailed description of the following dataset: The Stack
4D Temperature Monitoring
This Kaggle repository is still under construction (as of October 2022); more updates and improvements are coming shortly. This dataset contains 250 examples of temperature field simulations in a shallow aquifer. Temperature logs are an important tool in the geothermal industry. Temperature measurements from boreholes are used for exploration, system design, and monitoring. The number of observations, however, is not always sufficient to fully determine the temperature field or explore the entire parameter space of interest. Drilling in the best locations is still difficult and expensive. It is therefore critical to optimize the number and location of boreholes. Due to its higher spatial resolution and lower cost, four-dimensional (4D) temperature field monitoring via time-lapse Electrical Resistivity Tomography (ERT) has been investigated as a potential alternative.
Provide a detailed description of the following dataset: 4D Temperature Monitoring
bFFHQ
The gender-biased FFHQ dataset (bFFHQ) has age as the target label and gender as a correlated bias; the images come from the FFHQ dataset. The training data is dominated by young women (i.e., aged 10-29) and old men (i.e., aged 40-59).
Provide a detailed description of the following dataset: bFFHQ
Vehicle Claims
The code to create the dataset is available [here](https://github.com/ajaychawda58/UADAD/blob/main/Code/Notebooks/create_dataset.ipynb). The dataset used in the paper is available on [github](https://github.com/ajaychawda58/UADAD/tree/main/data/vehicle_claims).
- `Maker` - *Categorical* - The brand of the vehicle.
- `GenModel` - *Categorical* - The model of the vehicle.
- `Color` - *Categorical* - Colour of the vehicle.
- `Reg_Year` - *Categorical* - Year of registration.
- `Body_Type` - *Categorical* - E.g. SUV, convertible.
- `Runned_Miles` - *Numerical* - Distance covered by the vehicle.
- `Engin_Size` - *Categorical* - Size of the engine.
- `GearBox` - *Categorical* - Automatic, manual.
- `FuelType` - *Categorical* - Petrol, diesel.
- `Price` - *Numerical* - Price of the vehicle.
- `Seat_num` - *Numerical* - Number of seats.
- `Door_num` - *Numerical* - Number of doors.
- `issue` - *Categorical* - Type of damage.
- `issue_id` - *Categorical* - Specific damage.
- `repair_complexity` - *Categorical* - Difficulty of repairing the vehicle.
- `repair_hours` - *Numerical* - Time required to finish the job.
- `repair_cost` - *Numerical* - Cost of repair.
Other attributes are not used for evaluation in this work. `breakdown_date` and `repair_date` were added with the idea of inserting anomalies based on the number of days required to repair the vehicle.
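A minimal sketch of loading the table and declaring the column types listed above; the filename is hypothetical (see the linked repository for the actual files):

```python
import pandas as pd

df = pd.read_csv("vehicle_claims.csv")  # hypothetical filename

categorical = ["Maker", "GenModel", "Color", "Reg_Year", "Body_Type", "Engin_Size",
               "GearBox", "FuelType", "issue", "issue_id", "repair_complexity"]
numerical = ["Runned_Miles", "Price", "Seat_num", "Door_num", "repair_hours", "repair_cost"]

df[categorical] = df[categorical].astype("category")
df[numerical] = df[numerical].apply(pd.to_numeric, errors="coerce")
```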
Provide a detailed description of the following dataset: Vehicle Claims
S-TEST
S-TEST is a benchmark for measuring the specificity of the language of pre-trained language models.
Provide a detailed description of the following dataset: S-TEST
Open Relation Modeling
Given two entities, the task is to generate a coherent sentence describing the relation between them. E.g., (data mining, database) => data mining is a process of extracting and discovering patterns in large data sets, involving methods at the intersection of machine learning, statistics, and database systems.
Provide a detailed description of the following dataset: Open Relation Modeling
LM Email Address Leakage
Are Large Pre-Trained Language Models Leaking Your Personal Information? We analyze whether Pre-Trained Language Models (PLMs) are prone to leaking personal information. Specifically, we query PLMs for email addresses with contexts of the email address or prompts containing the owner's name.
Provide a detailed description of the following dataset: LM Email Address Leakage
ENTIGEN
**ENTIGEN** is a benchmark dataset to evaluate the change in image generations conditional on ethical interventions across three social axes -- gender, skin color, and culture. It contains 246 prompts based on an attribute set containing diverse professions, objects, and cultural scenarios.
Provide a detailed description of the following dataset: ENTIGEN
arXivEdits
**arXivEdits** is an annotated corpus of 751 full papers from arXiv with gold sentence alignment across their multiple revised versions, as well as fine-grained span-level edits and their underlying intentions for 1,000 sentence pairs. This dataset is designed for studying the human revision process in the scientific writing domain.
Provide a detailed description of the following dataset: arXivEdits
CodeSyntax
**CodeSyntax** is a large-scale dataset of programs annotated with the syntactic relationships in their corresponding abstract syntax trees. It contains 18,701 code samples annotated with 1,342,050 relation edges in 43 relation types for Python, and 13,711 code samples annotated with 864,411 relation edges in 39 relation types for Java. It is designed to evaluate the performance of language models on code syntax understanding.
Provide a detailed description of the following dataset: CodeSyntax
Towards a Data-Driven Requirements Engineering Approach: Automatic Analysis of User Reviews
6,000 French user reviews from three applications on Google Play (Garmin Connect, Huawei Health, Samsung Health) were labelled manually. We selected four labels: rating, bug report, feature request and user experience.
* **Ratings** are simple texts which express the overall evaluation of the app, including praise, criticism, or dissuasion.
* **Bug reports** describe the problems that users have met while using the app, such as loss of data, crashes of the app, connection errors, etc.
* **Feature requests** reflect users' demands for new functions, new content, new interfaces, etc.
* In **user experience** reviews, users describe their experience with the functionality of the app and how certain functions are helpful.
Each review belongs to one or more categories; the following table reports the label counts per app.

| App | Total | Rating | Bug report | Feature request | User experience |
| -------------- | ----- | ------ | ---------- | --------------- | --------------- |
| Garmin Connect | 2000 | 1260 | 757 | 170 | 493 |
| Huawei Health | 2000 | 1068 | 819 | 384 | 289 |
| Samsung Health | 2000 | 1324 | 491 | 486 | 349 |
Provide a detailed description of the following dataset: Towards a Data-Driven Requirements Engineering Approach: Automatic Analysis of User Reviews
TLMSDD
none
Provide a detailed description of the following dataset: TLMSDD
PDEBench - Benchmark for Scientific Machine Learning
**PDEBench** provides a diverse and comprehensive set of benchmarks for scientific machine learning, including challenging and realistic physical problems. The repository consists of the code used to generate the datasets, to upload and download the datasets from the data repository, and to train and evaluate different machine learning models as baselines. PDEBench features a much wider range of PDEs than existing benchmarks and includes realistic and difficult problems (both forward and inverse), larger ready-to-use datasets comprising various initial and boundary conditions, and PDE parameters. Moreover, PDEBench was created with extensible source code, and we invite active participation to improve and extend the benchmark.
Provide a detailed description of the following dataset: PDEBench - Benchmark for Scientific Machine Learning
SDOML
A machine-learning data set prepared from NASA Solar Dynamics Observatory mission data.
* It contains data from 2010 to 2018 from the AIA and HMI instruments
* Multi-wavelength full-disk images of the solar corona
* About 7 TB in total
Provide a detailed description of the following dataset: SDOML
RaVAEn_21
Annotated Earth Observation dataset of extreme events
Provide a detailed description of the following dataset: RaVAEn_21
GDSCv2
We have characterised 1,000 human cancer cell lines and screened them with hundreds of compounds. On this website, you will find drug response data and genomic markers of sensitivity. The Genomics of Drug Sensitivity in Cancer Project - http://www.cancerrxgene.org/ - was part of a Wellcome Trust funded collaboration between the Cancer Genome Project at the Wellcome Sanger Institute (UK) and the Center for Molecular Therapeutics, Massachusetts General Hospital Cancer Center (USA). This collaboration integrated the expertise at both sites toward the goal of identifying cancer biomarkers that can be used to identify genetically defined subsets of patients most likely to respond to cancer therapies. We screened >1000 genetically characterised human cancer cell lines with a wide range of anti-cancer therapeutics. These compounds included cytotoxic chemotherapeutics as well as targeted therapeutics from commercial sources, academic collaborators, and from the biotech and pharmaceutical industries. The sensitivity patterns of the cell lines were correlated with extensive genomic and expression data to identify genetic features that are predictive of sensitivity. This large collection of cell lines enabled us to capture much of the genomic heterogeneity that underlies human cancer, and which appears to play a critical role in determining the variable response of patients to treatment with specific agents. Our drug sensitivity data and genetic correlations are freely available through our website as a resource to the academic and medical communities. Defaults used: Screening Set: GDSC2; Select tissue type: Pan-Cancer; Select mutation type: Copy number alteration.
Provide a detailed description of the following dataset: GDSCv2
Tabula Sapiens
Human single-cell atlas.
Provide a detailed description of the following dataset: Tabula Sapiens
Covid Assessment Centre Line Listing
A dataset consisting of the demographics, triage category, symptoms, and comorbidities of COVID-19 patients. The dataset can be used to study the predictive factors for determining whether a COVID-19 patient requires direct admission to hospital.
Provide a detailed description of the following dataset: Covid Assessment Centre Line Listing
Unpaired haze images
Unpaired dataset: we built this dataset ourselves; all images are real haze images collected from websites. 10,000 images: Address: Baidu cloud disk, extraction code: zvh6. 1,000 images: Address: Baidu cloud disk, extraction code: 47v9. Paired dataset: we added haze to the images ourselves according to the image depth. Address: Baidu cloud disk, extraction code: 63xf.
Provide a detailed description of the following dataset: Unpaired haze images
Lila
**Lila** is a unified mathematical reasoning benchmark consisting of 23 diverse tasks along four dimensions: (i) mathematical abilities, e.g., arithmetic, calculus; (ii) language format, e.g., question-answering, fill-in-the-blanks; (iii) language diversity, e.g., no language, simple language; (iv) external knowledge, e.g., commonsense, physics. The benchmark is constructed by extending 20 existing datasets, collecting task instructions and solutions in the form of Python programs, thereby obtaining explainable solutions in addition to the correct answer.
Provide a detailed description of the following dataset: Lila
QTautobase
Equilibrium structures of the Tautobase (reference) dataset, optimized at the levels of theory of popular quantum chemical databases (QM9, PC9 and ANI-E). The structures were generated from the SMILES strings of the original publication and then optimized using Gaussian09. For simplicity, structures are divided into types 'A' and 'B'. The database consists of 1,257 pairs (2,514 molecules) for each database evaluated. The files are in .xyz format.
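A minimal parser for the standard .xyz layout (atom count, comment line, then one 'symbol x y z' row per atom), assuming the files follow that convention:

```python
def read_xyz(path):
    """Parse a standard .xyz file into (comment, [(symbol, x, y, z), ...])."""
    with open(path) as f:
        n_atoms = int(f.readline())
        comment = f.readline().strip()
        atoms = []
        for _ in range(n_atoms):
            symbol, x, y, z = f.readline().split()[:4]
            atoms.append((symbol, float(x), float(y), float(z)))
    return comment, atoms
```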
Provide a detailed description of the following dataset: QTautobase
Meta-Album
Meta Album is a meta-dataset created for few-shot learning, meta-learning, continual learning and so on. Meta Album consists of 40 datasets from 10 unique domains. Datasets are arranged in sets (10 datasets, one dataset from each domain). It is a continuously growing meta-dataset. We repurposed datasets that were generously made available by original creators. All datasets are free for use for academic purposes, provided that proper credits are given. For your convenience, you may cite our paper, which references all original creators. Meta-Album is released under a CC BY-NC 4.0 license permitting non-commercial use for research purposes, provided that you cite us. Additionally, redistributed datasets have their own license. The recommended use of Meta-Album is to conduct fundamental research on machine learning algorithms and conduct benchmarks, particularly in: few-shot learning, meta-learning, continual learning, transfer learning, and image classification.
Provide a detailed description of the following dataset: Meta-Album
Parasitic Egg Detection and Classification in Microscopic Images
Parasitic infections have been recognized as one of the most significant causes of illness by the WHO. Most infected persons shed cysts or eggs in their living environment and unwittingly cause transmission of parasites to other individuals. Diagnosis of intestinal parasites is usually based on direct examination in the laboratory, whose capacity is obviously limited. Aiming to automate routine fecal examination for parasitic diseases, this challenge gathers experts in the field to develop robust automated methods to detect and classify eggs of parasitic worms in a variety of microscopic images. Participants will work with a large-scale dataset containing 11 types of parasitic eggs from fecal smear samples. These are the main interest because they cause major diseases and illness in developing countries. We are open to any techniques used for parasitic egg recognition, ranging from conventional approaches based on statistical models to deep learning techniques. Finally, the organizers expect new collaborations to come out of the challenge.
Provide a detailed description of the following dataset: Parasitic Egg Detection and Classification in Microscopic Images
TUT Urban Acoustic Scenes 2018
The dataset for this task is the TUT Urban Acoustic Scenes 2018 dataset, consisting of recordings from various acoustic scenes. The dataset was recorded in six large European cities, in different locations for each scene class. For each recording location there are 5-6 minutes of audio. The original recordings were split into segments with a length of 10 seconds that are provided in individual files. Available information about the recordings includes the acoustic scene class, the city, and the recording location.
Provide a detailed description of the following dataset: TUT Urban Acoustic Scenes 2018
TAU Audio-Visual Urban Scenes 2021
The dataset for this task is TAU Audio-Visual Urban Scenes 2021. The dataset contains synchronized audio and video recordings from 12 European cities in 10 different scenes.
Provide a detailed description of the following dataset: TAU Audio-Visual Urban Scenes 2021
IGC
111
Provide a detailed description of the following dataset: IGC
Financial Language Understanding Evaluation
**Financial Language Understanding Evaluation** is an open-source comprehensive suite of benchmarks for the financial domain. It contains benchmarks across 5 NLP tasks in financial domain as well as common benchmarks used in the previous research. The tasks are financial sentiment analysis, news headline classification, named entity recognition, structure boundary detection and question answering.
Provide a detailed description of the following dataset: Financial Language Understanding Evaluation
CausalBench
**CausalBench** is a comprehensive benchmark suite for evaluating network inference methods on large-scale perturbational single-cell gene expression data. CausalBench introduces several biologically meaningful performance metrics and operates on two large, curated and openly available benchmark data sets for evaluating methods on the inference of gene regulatory networks from single-cell data generated under perturbations. The datasets consist of over 200,000 training samples under interventions.
Provide a detailed description of the following dataset: CausalBench
ACES
**ACES** is a dataset consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. It can be used to evaluate a wide range of Machine Translation metrics.
Provide a detailed description of the following dataset: ACES
Florence 4D
**Florence 4D** is a dataset that consists of dynamic sequences of 3D face models, where a combination of synthetic and real identities exhibits an unprecedented variety of 4D facial expressions, with variations that include the classical neutral-apex transition but generalize to expression-to-expression transitions. It is designed for research in 4D facial analysis, with a particular focus on dynamic expressions.
Provide a detailed description of the following dataset: Florence 4D
bipedal-skills
The bipedal skills benchmark is a suite of reinforcement learning environments implemented for the MuJoCo physics simulator. It aims to provide a set of tasks that demand a variety of motor skills beyond locomotion, and is intended for evaluating skill discovery and hierarchical learning methods. The majority of tasks exhibit a sparse reward structure.
Provide a detailed description of the following dataset: bipedal-skills
E2E Refined
**E2E Refined** is a dataset for sentence classification. It consists of 40,560 examples for training, 4,489 for validation, and 4,555 for test. It is a refined version of the well-known MR-to-text E2E dataset in which many deletion/insertion/substitution errors have been fixed.
Provide a detailed description of the following dataset: E2E Refined
Social Network Study
The SNS data (Valente et al., 2013) come from a four-wave survey conducted in Los Angeles County, United States, featuring a sample of 1,795 high-school students. The survey collected information about high-school students between grades 10 and 12, a majority of whom self-identified as Hispanic. The collected information includes socio-economic status, demographics, social networks, and substance use (consumption of alcohol, tobacco, and marijuana).
Provide a detailed description of the following dataset: Social Network Study
YCB-Slide
The YCB-Slide dataset comprises [DIGIT](https://digit.ml/) sliding interactions on [YCB](https://www.ycbbenchmarks.com) objects. We envision this can contribute towards efforts in tactile localization, mapping, object understanding, and learning dynamics models. We provide access to DIGIT images, sensor poses, RGB video feed, ground-truth mesh models, and ground-truth heightmaps + contact masks (simulation only). This dataset is supplementary to the [MidasTouch paper](https://openreview.net/forum?id=JWROnOf4w-K), a [CoRL 2022](https://corl2022.org/) submission.
Provide a detailed description of the following dataset: YCB-Slide
Jonathan Benchimol
Source: [Text mining methodologies with R: An application to central bank texts](https://doi.org/10.1016/j.mlwa.2022.100286)
Provide a detailed description of the following dataset: Jonathan Benchimol
KaggleDBQA
KaggleDBQA is a challenging cross-domain and complex evaluation dataset of real Web databases, with domain-specific data types, original formatting, and unrestricted questions. It expands upon contemporary cross-domain text-to-SQL datasets in three key aspects: (1) Its databases are pulled from real-world data sources and not normalized. (2) Its questions are authored in environments that mimic natural question answering. (3) It also provides database documentation that contains rich in-domain knowledge.
Provide a detailed description of the following dataset: KaggleDBQA
STAR: A Benchmark for Situated Reasoning in Real-World Videos
Reasoning in the real world is not divorced from situations. How to capture the present knowledge from surrounding situations and perform reasoning accordingly is crucial and challenging for machine intelligence. This paper introduces a new benchmark that evaluates situated reasoning ability via situation abstraction and logic-grounded question answering for real-world videos, called Situated Reasoning in Real-World Videos (STAR). This benchmark is built upon real-world videos associated with human actions or interactions, which are naturally dynamic, compositional, and logical. The dataset includes four types of questions: interaction, sequence, prediction, and feasibility. We represent the situations in real-world videos by hyper-graphs connecting extracted atomic entities and relations (e.g., actions, persons, objects, and relationships). Besides visual perception, situated reasoning also requires structured situation comprehension and logical reasoning. Questions and answers are procedurally generated. The answering logic of each question is represented by a functional program based on a situation hyper-graph. We compare various existing video reasoning models and find that they all struggle on this challenging situated reasoning task. We further propose a diagnostic neuro-symbolic model that can disentangle visual perception, situation abstraction, language understanding, and functional reasoning to understand the challenges of this benchmark.
Provide a detailed description of the following dataset: STAR: A Benchmark for Situated Reasoning in Real-World Videos
LED Array Microscopy Frog Blood Dataset
Images collected on an LED array microscope (also known as a Fourier ptychographic microscope) over 172 fields-of-view of frog blood smears. Two of the fields-of-view (example_000000 and example_000001) have 85 intensity images under single-LED illumination, and all fields-of-view have 8 intensity images, 4 taken with uniformly random patterns and 4 taken with pseudo-Dirichlet random patterns.
Provide a detailed description of the following dataset: LED Array Microscopy Frog Blood Dataset
Cornell (60%/20%/20% random splits)
Node classification on Cornell with 60%/20%/20% random splits for training/validation/test.
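The same 60%/20%/20% protocol recurs in several entries below; a generic sketch of drawing such node masks (illustrative only, not the benchmark's published split code):

```python
import numpy as np

def random_split(num_nodes, train=0.6, val=0.2, seed=0):
    """Return (train, val, test) node-index arrays for a 60%/20%/20% random split."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train, n_val = int(train * num_nodes), int(val * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```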
Provide a detailed description of the following dataset: Cornell (60%/20%/20% random splits)
Film (60%/20%/20% random splits)
Node classification on Film with 60%/20%/20% random splits for training/validation/test.
Provide a detailed description of the following dataset: Film (60%/20%/20% random splits)
Squirrel (60%/20%/20% random splits)
Node classification on Squirrel with 60%/20%/20% random splits for training/validation/test.
Provide a detailed description of the following dataset: Squirrel (60%/20%/20% random splits)
PubMed (60%/20%/20% random splits)
Node classification on PubMed with 60%/20%/20% random splits for training/validation/test.
Provide a detailed description of the following dataset: PubMed (60%/20%/20% random splits)
BAFMD
**BAFMD** contains images posted on Twitter during the pandemic from around the world, with more images from underrepresented race and age groups to mitigate dataset bias in the face mask detection task.
Provide a detailed description of the following dataset: BAFMD
Virtual-Pedcross-4667
**Virtual-PedCross-4667** is a dataset for pedestrian crossing prediction. It consists of 4,667 video sequences: 2,862 pedestrian crossing sequences and 1,804 not-crossing sequences. In total, 745k video frames with a resolution of 1280×720 are provided.
Provide a detailed description of the following dataset: Virtual-Pedcross-4667
Title2Event
**Title2Event** is a large-scale sentence-level dataset for benchmarking Open Event Extraction without restricting event types. Title2Event contains more than 42,000 news titles in 34 topics collected from Chinese web pages.
Provide a detailed description of the following dataset: Title2Event
ELPV
The dataset contains 2,624 samples of $300\times300$ pixel 8-bit grayscale images of functional and defective solar cells with varying degrees of degradation, extracted from 44 different solar modules. The defects in the annotated images are either of intrinsic or extrinsic type and are known to reduce the power efficiency of solar modules. All images are normalized with respect to size and perspective. Additionally, any distortion induced by the camera lens used to capture the EL images was eliminated prior to solar cell extraction.
Provide a detailed description of the following dataset: ELPV
TripClick
TripClick is a large-scale dataset of click logs in the health domain, obtained from user interactions with the Trip Database health web search engine. It provides:
* approximately 5.2 million user interactions
* an IR evaluation benchmark
* training data for deep learning IR models
Provide a detailed description of the following dataset: TripClick
Demonstration and Experience Replays
This is the data for the pre-generated demonstration and experience replays of the proposed Deep-GRAIL algorithm. You are welcome to generate your own replays based on the problems at hand.
Provide a detailed description of the following dataset: Demonstration and Experience Replays
Wisconsin (60%/20%/20% random splits)
Node classification on Wisconsin with 60%/20%/20% random splits for training/validation/test.
Provide a detailed description of the following dataset: Wisconsin (60%/20%/20% random splits)
Texas (60%/20%/20% random splits)
Node classification on Texas with 60%/20%/20% random splits for training/validation/test.
Provide a detailed description of the following dataset: Texas (60%/20%/20% random splits)
Chameleon (60%/20%/20% random splits)
Node classification on Chameleon with 60%/20%/20% random splits for training/validation/test.
Provide a detailed description of the following dataset: Chameleon (60%/20%/20% random splits)
Deezer-Europe
Node classification on Deezer Europe with 50%/25%/25% random splits for training/validation/test.
Provide a detailed description of the following dataset: Deezer-Europe
TransProteus
The dataset contains procedurally generated images of transparent vessels containing liquids and objects. The data for each image include segmentation maps, 3D depth maps, and normal maps of the liquid or object inside the transparent vessel, as well as of the vessel itself. The properties of the materials inside the containers are also given (color/transparency/roughness/metalness). In addition, a natural-image benchmark for the 3D/depth estimation of objects inside transparent containers is supplied, and 3D models of the objects (glTF) are also provided.
Provide a detailed description of the following dataset: TransProteus
PAGE
**PAGE** contains 98,525 games played by 2,007 professional players and spans over 70 years. The dataset includes rich AI analysis results for each move.
Provide a detailed description of the following dataset: PAGE