Columns: dataset_name (string, 2–128 chars), description (string, 1–9.7k chars), prompt (string, 59–185 chars)
Flame emission spectrum data
Spectrum data on CH4/Air flame emission, captured with 200 ms and 2 s exposures
Provide a detailed description of the following dataset: Flame emission spectrum data
Occ3D
**Occ3D** is a dataset for 3D occupancy prediction, which aims to estimate the detailed occupancy and semantics of objects from multi-view images. To facilitate this task, a label generation pipeline is developed that produces dense, visibility-aware labels for a given scene. This pipeline includes point cloud aggregation, point labeling, and occlusion handling.
Provide a detailed description of the following dataset: Occ3D
ViMQ
**ViMQ** is a Vietnamese dataset of medical questions from patients with sentence-level and entity-level annotations for the Intent Classification and Named Entity Recognition tasks. It contains Vietnamese medical questions crawled from the online consultation section between patients and doctors at www.vinmec.com, the website of a Vietnamese general hospital. Each consultation consists of a question regarding a specific health issue of a patient and a detailed response provided by a clinical expert. The dataset covers health issues in a wide range of categories including common illnesses, cardiology, hematology, cancer, pediatrics, etc. We removed sections where users ask about information about the hospital and selected 9,000 questions for the dataset.
Provide a detailed description of the following dataset: ViMQ
LIS
To reveal and systematically investigate the effectiveness of the proposed method in the real world, a real low-light image dataset for instance segmentation is urgently needed. Since no suitable dataset exists, we collect and annotate a Low-light Instance Segmentation (LIS) dataset using a Canon EOS 5D Mark IV camera. It exhibits the following characteristics. Paired samples: in the LIS dataset, we provide images in both sRGB-JPEG (typical camera output) and RAW formats; each format consists of paired short-exposure low-light and corresponding long-exposure normal-light images. We term these four types of images sRGB-dark, sRGB-normal, RAW-dark, and RAW-normal. To ensure they are pixel-wise aligned, we mount the camera on a sturdy tripod and avoid vibrations by remote control via a mobile app. Diverse scenes: the LIS dataset consists of 2230 image pairs collected in various scenes, both indoor and outdoor. To increase the diversity of low-light conditions, we use a series of ISO levels (e.g., 800, 1600, 3200, 6400) to take long-exposure reference images, and we deliberately decrease the exposure time by a series of low-light factors (e.g., 10, 20, 30, 40, 50, 100) to take short-exposure images that simulate very low-light conditions. Instance-level pixel-wise labels: for each pair of images, we provide precise instance-level pixel-wise labels annotated by professional annotators, yielding 10504 labeled instances of the 8 most common object classes in daily life (bicycle, car, motorcycle, bus, bottle, chair, dining table, tv).
Provide a detailed description of the following dataset: LIS
Industry Biscuit (Cookie) dataset
The Industrial Biscuits (Cookie) dataset is our internal dataset designed for the anomaly detection task, which captures Tarallini biscuits. It contains 1225 samples in four classes with the following structure:

* No defect (474 captures)
* Defect: not complete (465 captures)
* Defect: strange object (158 captures)
* Defect: color defect (128 captures)

All classes can be seen in fig. 1. To augment the dataset, we rotated each sample by 90° three times, so the augmented dataset contains 4900 captures in total. Augmented images were cropped to the size of their bounding boxes; the source samples remain uncropped in the original dataset.
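The rotation augmentation described above can be sketched as follows. This is a minimal illustration (not the authors' code), assuming each capture is simply rotated in 90° increments so that 1225 source images yield 1225 × 4 = 4900 captures:

```python
import numpy as np

# Sketch of the 90-degree augmentation: the original image plus
# three rotated copies give four captures per source sample.
def augment_rotations(image):
    return [np.rot90(image, k) for k in range(4)]  # 0, 90, 180, 270 degrees

views = augment_rotations(np.zeros((64, 48)))
```

Note that the 90° and 270° views swap height and width, which is consistent with cropping the augmented images to their bounding boxes afterwards.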
Provide a detailed description of the following dataset: Industry Biscuit (Cookie) dataset
VSPW
A Large-scale Dataset for Video Scene Parsing in the Wild
Provide a detailed description of the following dataset: VSPW
VIPSeg
A large-scale VIdeo Panoptic Segmentation dataset
Provide a detailed description of the following dataset: VIPSeg
bSDD
The buildingSMART Data Dictionary (bSDD) is an online service that hosts classifications and their properties, allowed values, units, and translations. The bSDD allows linking between all the content inside the database. It provides a standardized workflow to guarantee data quality and information consistency. BIM modelers use the bSDD to have easy and efficient access to all kinds of standards to enrich their models. BIM managers use the bSDD to check BIM data for validity. Advanced users use the contents of the bSDD to check compliance, automatically find manufacturers' products, extend IFC, create Information Delivery Specifications (IDS), and much more. Besides national classification systems (Uniclass, Minnd, etc.) and application-specific standards (ETIM, UniversalTypes, IfcAirport, etc.), project-specific, national, and company-specific standards can be stored in bSDD as well. The internal structure can facilitate ISO 12006-3, ISO 23386, and Linked Data publications.
Provide a detailed description of the following dataset: bSDD
LLaVA_Instruct_150K
LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data. It is constructed for visual instruction tuning and for building large multimodal models toward GPT-4 vision/language capability. Based on the COCO dataset, we interact with language-only GPT-4 and collect 158K unique language-image instruction-following samples in total, including 58K in conversations, 23K in detailed description, and 77K in complex reasoning. Please check out "LLaVA-Instruct-150K" on [HuggingFace Dataset].

| Data file name | File Size | Sample Size |
| --- | --- | ---: |
| conversation_58k.json | 126 MB | 58K |
| detail_23k.json | 20.5 MB | 23K |
| complex_reasoning_77k.json | 79.6 MB | 77K |
Provide a detailed description of the following dataset: LLaVA_Instruct_150K
ChatLog
**ChatLog** is a coarse-to-fine temporal dataset consisting of two parts that update monthly and daily: 1. **ChatLog-Monthly** is a dataset of 38,730 question-answer pairs collected every month, including questions from both reasoning and classification tasks. 2. **ChatLog-Daily**, on the other hand, consists of ChatGPT's responses to 1000 identical long-form generation questions every day.
Provide a detailed description of the following dataset: ChatLog
ResQ
ReSQ is a real-world Spatial Question Answering dataset with human-generated questions built on an existing corpus with SpRL annotations. This dataset can be used to evaluate spatial language processing models in realistic situations.
Provide a detailed description of the following dataset: ResQ
Sound-based drone fault classification using multitask learning
arXiv: https://arxiv.org/abs/2304.11708 Accepted at the 29th International Congress on Sound and Vibration (ICSV29). Drones have been used for various purposes including military applications, aerial photography, and pesticide spraying. However, drones are vulnerable to external disturbances, and malfunctions in propellers and motors can easily occur. To improve the safety of drone operations, mechanical faults should be detected early and in real time. In this paper, we propose a sound-based deep neural network (DNN) fault classifier and a drone sound dataset. The dataset was constructed by collecting the operating sounds of drones from microphones mounted on three different drones in an anechoic chamber. The dataset includes various operating conditions of drones, such as flight directions (front, back, right, left, clockwise, counter-clockwise) and faults on propellers and motors. The drone sounds were then mixed with noises recorded in five different spots on the university campus, with a signal-to-noise ratio (SNR) varying from 10 dB to 15 dB. Using the acquired dataset, we train a DNN classifier, 1DCNN-ResNet, that classifies the types of mechanical faults and their locations from short-time input waveforms. We employ multitask learning (MTL) and incorporate the direction classification task as an auxiliary task to make the classifier learn more general audio features. The test over unseen data reveals that the proposed multitask model can successfully classify faults in drones and outperforms single-task models even with less training data.

Please reorganize the file directory as below:

drone
├── A
├── B
└── C

Each drone type (A, B, and C) has 54000*2 files (*2 denotes the stereo channel; mic1 and mic2 can be found in subdirectories). They are divided into train, valid, and test splits by a 6:2:2 ratio. Each file is labeled with its recording information as follows:

{model_type}_{maneuvering_direction}_{fault}_{drone_file_index}_{background}_{background_file_index}_{SNR}

- model_type: A, B, C
- maneuvering_direction: F (Front), B (Back), R (Right), L (Left), C (Clockwise), CC (Counter-clockwise)
- fault: N (Normal), MF1~4 (Motor Failure), PC1~4 (Propeller Cut); 1~4 denotes each motor/propeller of the quadcopter

Dataset available under the "Homepage" link below.
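The label fields encoded in each filename can be recovered with a small helper. This is a hypothetical sketch (not part of the dataset release), assuming fields are underscore-separated in the order given above; the example tokens for the index, background, and SNR parts are invented for illustration:

```python
# Hypothetical parser for the filename scheme described above.
def parse_drone_filename(name):
    fields = ("model_type", "maneuvering_direction", "fault",
              "drone_file_index", "background",
              "background_file_index", "snr")
    parts = name.rsplit(".", 1)[0].split("_")  # drop extension, split fields
    if len(parts) != len(fields):
        raise ValueError(f"unexpected filename: {name}")
    return dict(zip(fields, parts))

# Example with invented index/background/SNR tokens:
info = parse_drone_filename("A_F_MF1_00001_campus1_003_10dB.wav")
```

Note this simple split assumes no individual field contains an underscore, which holds for the field values listed above.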
Provide a detailed description of the following dataset: Sound-based drone fault classification using multitask learning
Reverb-WSJ0
Noiseless reverberant dataset generated from the public WSJ0 corpus, with room impulse responses simulated using the PyRoomAcoustics library. Used in: - Speech Enhancement and Dereverberation with Diffusion-based Generative Models, Richter et al., arXiv 2022 - StoRM: A Stochastic Regeneration Model for Speech Enhancement and Dereverberation, Lemercier et al., arXiv 2022 - Analysing Discriminative versus Diffusion-based Generative Models for Speech Restoration, Lemercier et al., ICASSP 2023
Provide a detailed description of the following dataset: Reverb-WSJ0
MC1296
A dataset for pointer meter reading
Provide a detailed description of the following dataset: MC1296
Evosuite SF110 Benchmark
The SF100 corpus of classes is a statistically representative sample of 100 Java projects from SourceForge, which is a popular open source repository (more than 300,000 projects with more than two million registered users). Because SourceForge is home to many old and stale projects, we have extended SF100 with the 10 most popular projects, resulting in a revised corpus of classes, SF110.
Provide a detailed description of the following dataset: Evosuite SF110 Benchmark
ContactArt
**ContactArt** is a dataset for learning hand-object interaction priors for hand and articulated object pose estimation. The dataset is created using visual teleoperation, where the human operator can directly play within a physical simulator to manipulate the articulated objects. All the object models are from the PartNet dataset for the convenience of scaling up. ContactArt provides accurate annotations, rich hand-object interaction, and contact information.
Provide a detailed description of the following dataset: ContactArt
Pick-a-Pic
The **Pick-a-Pic** dataset was created by logging user interactions with the Pick-a-Pic web application for text-to-image generation. Overall, the Pick-a-Pic dataset contains over 500,000 examples and 35,000 distinct prompts. Each example contains a prompt, two generated images, and a label indicating which image is preferred, or a tie when neither image is significantly preferred over the other.
Provide a detailed description of the following dataset: Pick-a-Pic
Stain Transfer in Histopathology
The dataset contains 256×256 tiles extracted from Whole Slide Images (WSI) of mouse liver tissue stained with H&E and Masson's Trichrome. WSIs were acquired with a Zeiss AxioScan scanner with a 20× objective at a resolution of 0.221 µm/pixel and subsequently subsampled by a factor of 1:2, which resulted in a 0.442 µm/pixel resolution.
Provide a detailed description of the following dataset: Stain Transfer in Histopathology
Stain Transfer
The dataset contains 256×256 tiles extracted from Whole Slide Images (WSI) of mouse liver tissue stained with H&E and Masson's Trichrome. WSIs were acquired with a Zeiss AxioScan scanner with a 20× objective at a resolution of 0.221 µm/pixel and subsequently subsampled by a factor of 1:2, which resulted in a 0.442 µm/pixel resolution. The dataset can be used for training and evaluation of image-to-image translation methods for stain transfer in histopathology.
Provide a detailed description of the following dataset: Stain Transfer
WebUI
The WebUI dataset contains 400K web UIs captured over a period of 3 months at a crawling cost of about $500. We grouped web pages together by their domain name, then generated training (70%), validation (10%), and testing (20%) splits. This ensured that similar pages from the same website appear in the same split. We created four versions of the training dataset. Three of these were generated by randomly sampling a subset of the training split: Web-7k, Web-70k, and Web-350k. We chose 70k as a baseline size, since it is approximately the size of existing UI datasets. We also generated an additional split (Web-7k-Resampled) to provide a small, higher-quality split for experimentation. Web-7k-Resampled was generated using a class-balancing sampling technique, and we removed screens with possible visual defects (e.g., very small, occluded, or invisible elements). The validation and test splits were always kept the same.
Provide a detailed description of the following dataset: WebUI
Zenseact Open Dataset
The Zenseact Open Dataset (ZOD) is a large-scale and diverse multi-modal autonomous driving (AD) dataset, created by researchers at Zenseact. It was collected over a 2-year period in 14 different European countries, using a fleet of vehicles equipped with a full sensor suite. The dataset consists of three subsets: Frames, Sequences, and Drives, designed to encompass both data diversity and support for spatiotemporal learning, sensor fusion, localization, and mapping. Frames consist of 100k curated camera images with two seconds of other supporting sensor data, while the 1473 Sequences and 29 Drives include the entire sensor suite for 20 seconds and a few minutes, respectively. ZOD is released under the permissive CC BY-SA 4.0 license, allowing for both commercial and non-commercial use. For more information about the license, see here.
Provide a detailed description of the following dataset: Zenseact Open Dataset
ChCatExt
ChCatExt is composed of BidAnn (bid announcements), FinAnn (financial announcements), and CreRat (credit rating reports). It is designed for reconstructing catalog trees from documents.
Provide a detailed description of the following dataset: ChCatExt
PGPS9K
A new large-scale plane geometry problem solving dataset called PGPS9K, labeled with both fine-grained diagram annotations and interpretable solution programs.
Provide a detailed description of the following dataset: PGPS9K
Indigo Mobile
**Indigo Mobile** is a public dataset of copy detection patterns (CDP) based on DataMatrix modulation.
Provide a detailed description of the following dataset: Indigo Mobile
Documentary sources of case studies on the issues a data protection officer faces on a daily basis
The dataset is composed of 95 unique document texts spanning the period 2005-2022. It makes available a corpus of documentary sources useful for outlining case studies related to scenarios in which a DPO operates in the performance of daily activities. Ciclosi, Francesco, & Massacci, Fabio. (2023). Documentary sources of case studies on the issues a data protection officer faces on a daily basis [Data set]. In IEEE Security & Privacy (Vol. 21, Number 01, pp. 66–77). Zenodo. https://doi.org/10.5281/zenodo.7879104
Provide a detailed description of the following dataset: Documentary sources of case studies on the issues a data protection officer faces on a daily basis
SAMRS
**SAMRS** is a remote sensing segmentation dataset which provides object category, location, and instance information that can be used for semantic segmentation, instance segmentation, and object detection, either individually or in combination.
Provide a detailed description of the following dataset: SAMRS
DP0E
**DP0E** is a public dataset of anti-counterfeiting printable graphical codes (PGC) based on DataMatrix modulation.
Provide a detailed description of the following dataset: DP0E
Simple Liquid-Argon Track Samples (SLATS)
**SLATS** is a dataset which covers two data domains. Each domain is populated by a variant of a LArTPC detector simulation used in the ProtoDUNE-SP experiment. The two domains differ in one feature—the detector response function. The real domain is generated with a 2D response, and the fake domain is generated with a quasi-1D response. The dataset can be used to train unpaired image translation algorithms.
Provide a detailed description of the following dataset: Simple Liquid-Argon Track Samples (SLATS)
Dynamic Replica
**Dynamic Replica** is a synthetic dataset of stereo videos featuring humans and animals in virtual environments. It is a benchmark for dynamic disparity/depth estimation and 3D reconstruction consisting of 145,200 stereo frames (524 videos). The dataset contains annotations for left and right views that include camera intrinsics and extrinsics, image depth, instance segmentation masks, binary foreground/background segmentation masks, optical flow, and long-range pixel trajectories.
Provide a detailed description of the following dataset: Dynamic Replica
Multimedia Goal-oriented Generative Script Learning Dataset
[Multimedia Goal-oriented Generative Script Learning Dataset](https://drive.google.com/file/d/1lSo-Kr4edNas0_uTl1SvDnEGuPYl0Or9/view?usp=sharing) This link contains a dataset consisting of multimedia steps for two categories: gardening and crafts. The dataset consists of a total of 79,089 multimedia steps across 5,652 tasks. The dataset is split into three sets: training, development, and testing. The gardening category has 20,258 training tasks, 2,428 development tasks, and 2,684 testing tasks. The crafts category has 32,082 training tasks, 4,064 development tasks, and 3,937 testing tasks. Each task is associated with a set of multimedia steps, which include corresponding step images related to the task. The `*_data` folder contains the full dataset, which will be released after the paper is published. Each `*_data` folder includes three files: `train.json`, `valid.json`, and `test.json`. These files are used for training, validation, and testing respectively. Each file is a JSON file that contains multiple lines. Each line represents an instance and follows the schema described below: ```python { "title": # goal of activity "method": # subgoal of activity "steps": # list of step text "captions": # list of corresponding captions of step "target": # next step text "img": # last step image id "target_img": # next step image id "retrieve": # 20 retrieved historical relevant steps "retrieve_neg": # list of retrieved top-20 most similar steps with respect to the last step. They will serve as retrieve-negatives } ``` The `img` subfolder in the `*_data` folder contains all images and the corresponding wikihow task json file for the gardening and crafts datasets.
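A split file can be loaded with a few lines of Python. This is a minimal sketch (not the authors' code), assuming each of `train.json`, `valid.json`, and `test.json` stores one JSON instance per line as described above:

```python
import json

# Load a JSON-lines split file into a list of instance dicts,
# each with keys like "title", "steps", "captions", and "target".
def load_split(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Each returned dict then exposes the schema fields shown above, e.g. `instance["target"]` for the next-step text.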
Provide a detailed description of the following dataset: Multimedia Goal-oriented Generative Script Learning Dataset
HPO
The Human Phenotype Ontology (HPO) graph is a standardized vocabulary of human phenotypic abnormalities and their relationships. It represents these abnormalities as nodes in a graph, with edges indicating relationships such as subtypes or overlapping features. The HPO graph is organized in a hierarchical structure, with more general terms at the top and more specific terms at the bottom. The ontology provides a framework for the annotation of human genetic variations, aiding in the diagnosis of rare genetic disorders and the identification of potential therapeutic targets.
Provide a detailed description of the following dataset: HPO
Car crash dataset RUSSIA 2022-2023
Car crash dataset RUSSIA 2022-2023 is a large driving video dataset that contains over 500 high-resolution videos of various driving scenarios. The dataset was created to aid the development and testing of autonomous driving systems and other related technologies. It includes videos from Russia, captured across a diverse set of locations, weather conditions, and lighting conditions, each video lasting about 10 seconds. The videos are annotated with bounding boxes around objects such as different types of cars, pedestrians, and cyclists, as well as traffic signs and traffic lights. Additionally, the dataset includes metadata for each video. Car crash dataset RUSSIA 2022-2023 is considered one of the few datasets from Russia on this topic. Created by 7 students from Moscow, MIEM HSE. First version published on 4 May 2023.
Provide a detailed description of the following dataset: Car crash dataset RUSSIA 2022-2023
FilmSet
A large film style dataset
Provide a detailed description of the following dataset: FilmSet
VISO
This dataset is a large-scale dataset for moving object detection and tracking in satellite videos, which consists of 40 satellite videos captured by Jilin-1 satellite platforms. Each image has a resolution of 12000x5000 and contains a large number of objects at different scales. Four common types of vehicles, including plane, car, ship, and train, are manually labeled. A total of 853,911 instances are labeled with axis-aligned bounding boxes. https://paperswithcode.com/paper/detecting-and-tracking-small-and-dense-moving
Provide a detailed description of the following dataset: VISO
PerSeg
PerSeg is a dataset for personalized segmentation. The raw images are collected from the training data of subject-driven diffusion models: DreamBooth, Textual Inversion, and Custom Diffusion. PerSeg contains 40 objects of various categories in total, including daily necessities, animals, and buildings. Contextualized in different poses or scenes, each object is associated with 5–7 images and our annotated masks.
Provide a detailed description of the following dataset: PerSeg
NLI4CT
The **NLI4CT** dataset consists of 2,400 annotated statements with accompanying labels, CTRs, and evidence, split into 1,700 training, 500 test, and 200 development instances. The two labels and four CTR section prompts are equally distributed across the dataset and its splits.
Provide a detailed description of the following dataset: NLI4CT
MIMIC-IT
**MultI-Modal In-Context Instruction Tuning (MIMIC-IT)** is a dataset for instruction tuning of multi-modal models, motivated by the Flamingo model's upstream interleaved-format pretraining dataset. Each data sample consists of a queried image-instruction-answer triplet, with the instruction-answer tailored to the image, and a context. The context contains a series of image-instruction-answer triplets that contextually correlate with the queried triplet, emulating the relationship between the context and the queried image-text pair found in the MMC4 dataset.
Provide a detailed description of the following dataset: MIMIC-IT
Bistatic MIMO Radar Sensing of Specularly Reflecting Surfaces for Wireless Power Transfer
The measurement data <b>VNA_20220722_232002_XETS_reduced.mat</b> includes a data matrix $\mathbf{R}$ acquired with a synthetic aperture measurement testbed described in [2] and [3]. Measured were $N_f=1000$ frequency steps in a band from $3$ to $10$ GHz of the scattering parameter $S_{21}$ between a synthetic $51$-element ULA with antenna positions saved in the file <b>ULA.mat</b> and a synthetic $(13\times 13)$-URA with antenna positions saved in the file <b>URA.mat</b>. The file <b>XETSantennaCharacterization.mat</b> holds antenna gains of an XETS antenna [4] characterized in an anechoic chamber. XETS antennas were used on both the ULA (oriented towards the negative $x$-direction) and the URA (oriented towards the positive $x$-direction).<br> These data have been used to perform ultra-wideband (UWB) bistatic radar imaging and wireless power transfer (WPT). Our implementation is provided in <b>MAIN_wall_detection.mat</b>. Potential uses: spherical wavefront beamforming, wireless power transfer, communication, environment learning, imaging, positioning, channel modeling
Provide a detailed description of the following dataset: Bistatic MIMO Radar Sensing of Specularly Reflecting Surfaces for Wireless Power Transfer
WikiWeb2M
**Wikipedia Webpage 2M (WikiWeb2M)** is a multimodal open source dataset consisting of over 2 million English Wikipedia articles. It is created by rescraping the ∼2M English articles in WIT. Each webpage sample includes the page URL and title; section titles, text, and indices; and images and their captions.
Provide a detailed description of the following dataset: WikiWeb2M
ParsVQA-Caps
Despite recent advances in vision-and-language tasks, most progress is still focused on resource-rich languages such as English. Furthermore, widespread vision-and-language datasets directly adopt images representative of American or European cultures resulting in bias. Hence we introduce ParsVQA-Caps, the first benchmark in Persian for Visual Question Answering and Image Captioning tasks. We utilize two ways to collect datasets for each task, human-based and template-based for VQA and human-based and web-based for image captioning. The image captioning dataset consists of over 7.5k images and about 9k captions. The VQA dataset consists of almost 11k images and 28.5k question and answer pairs with short and long answers usable for both classification and generation VQA. source: [ParsVQA-Caps: A Benchmark for Visual Question Answering and Image Captioning in Persian](https://www.winlp.org/wp-content/uploads/2022/11/68_Paper.pdf)
Provide a detailed description of the following dataset: ParsVQA-Caps
CoScript
**CoScript** is a constrained language planning dataset, which consists of 55,000 scripts.
Provide a detailed description of the following dataset: CoScript
SecurityEval
Automated source code generation is currently a popular machine learning-based task. It can be helpful for software developers to write functionally correct code from a given context. However, just like human developers, a code generation model can produce vulnerable code, which the developers can mistakenly use. For this reason, evaluating the security of a code generation model is a must. In this paper, we describe SecurityEval, an evaluation dataset to fulfill this purpose. It contains 130 samples for 75 vulnerability types, which are mapped to the Common Weakness Enumeration (CWE). We also demonstrate using our dataset to evaluate one open-source (i.e., InCoder) and one closed-source code generation model (i.e., GitHub Copilot).
Provide a detailed description of the following dataset: SecurityEval
SimpleQuestionsWikiData
SimpleQuestionsWikidata maps [SimpleQuestions](https://research.fb.com/downloads/babi/) to Wikidata. It was proposed in the paper [Question Answering Benchmarks for Wikidata](https://ceur-ws.org/Vol-1963/paper555.pdf) by Diefenbach et al.
Provide a detailed description of the following dataset: SimpleQuestionsWikiData
QDAT Quran Recitation
The QDAT dataset contains 1500 WAV files along with a CSV (Excel) file. The CSV file contains links to the WAV files together with other features: Age, Gender, and the correctness of the three recitation rules; the final target indicates the correctness of the whole recitation.
Provide a detailed description of the following dataset: QDAT Quran Recitation
GeoGLUE
**GeoGLUE** is a GeoGraphic Language Understanding Evaluation benchmark consisting of six geographic text-related tasks, including geographic textual similarity on recall, geotagged geographic element tagging, geographic composition analysis, geographic where-what cut, and geographic entity alignment. All tasks' datasets are collected from openly released resources.
Provide a detailed description of the following dataset: GeoGLUE
AfriQA
**AfriQA** is a cross-lingual QA dataset with a focus on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African languages, where relevant passages are retrieved in a high-resource language spoken in the corresponding region and answers are translated into the source language. The dataset enables the development of more equitable QA technology.
Provide a detailed description of the following dataset: AfriQA
STAR: Situated Reasoning
Reasoning in the real world is not divorced from situations. A key challenge is to capture the present knowledge from surrounding situations and reason accordingly. STAR is a novel benchmark for Situated Reasoning, which provides challenging question-answering tasks, symbolic situation descriptions and logic-grounded diagnosis via real-world video situations.
Provide a detailed description of the following dataset: STAR: Situated Reasoning
LIMUC
The LIMUC dataset is the largest publicly available labeled ulcerative colitis dataset, comprising 11,276 images from 564 patients and 1,043 colonoscopy procedures. Three experienced gastroenterologists were involved in the annotation process, and all images are labeled according to the Mayo endoscopic score (MES).
Provide a detailed description of the following dataset: LIMUC
V-D4RL
V-D4RL provides pixel-based analogues of the popular D4RL benchmarking tasks, derived from the dm_control suite, along with natural extensions of two state-of-the-art online pixel-based continuous control algorithms, DrQ-v2 and DreamerV2, to the offline setting.
Provide a detailed description of the following dataset: V-D4RL
NeuralRGBD
RGB-D dataset of synthetic indoor scenes with color, noisy depth map, etc.
Provide a detailed description of the following dataset: NeuralRGBD
WebCPM
**WebCPM** is a Chinese LFQA dataset. It contains 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions.
Provide a detailed description of the following dataset: WebCPM
MVSep
**MVSep** is a synthetic dataset for the vocal separation task, created by combining random vocal and instrumental samples publicly available on the internet. The sourced samples were separated into two sets (vocal-only and instrumental-only) and then randomly mixed together. The mixtures may not always sound like a real melody, but they allow for testing audio separation methods. The synthetic MVSep dataset consists of 100 tracks, each with a duration of exactly one minute and a sample rate of 44.1 kHz.
Provide a detailed description of the following dataset: MVSep
Meta Omnium
**Meta Omnium** is a dataset-of-datasets spanning multiple vision tasks including recognition, keypoint localization, semantic segmentation and regression. Meta Omnium enables meta-learning researchers to evaluate model generalization to a much wider array of tasks than previously possible, and provides a single framework for evaluating meta-learners across a wide suite of vision applications in a consistent manner.
Provide a detailed description of the following dataset: Meta Omnium
A View From Somewhere (AVFS)
A View From Somewhere (AVFS)—a dataset of 638,180 face similarity judgments over 4,921 faces. Each judgment corresponds to the odd-one-out (i.e., least similar) face in a triplet of faces and is accompanied by both the identifier and demographic attributes of the annotator who made the judgment.
Provide a detailed description of the following dataset: A View From Somewhere (AVFS)
CREMP
**CREMP** is a resource generated for the rapid development and evaluation of machine learning models for macrocyclic peptides. CREMP contains 36,198 unique macrocyclic peptides and their high-quality structural ensembles generated using the Conformer-Rotamer Ensemble Sampling Tool (CREST).
Provide a detailed description of the following dataset: CREMP
SYNTH-PEDES
**SYNTH-PEDES** is by far the largest person dataset with image-text pairs to date, containing 312,321 identities, 4,791,711 images, and 12,138,157 textual descriptions.
Provide a detailed description of the following dataset: SYNTH-PEDES
titanic5 Dataset
titanic5 Dataset Created by David Beltran del Rio March 2016. Notes This is the final (for now) version of my update to the Titanic data. I think it’s finally ready for publishing if you’d like. What I did was to strip all the passenger and crew data from the Encyclopedia Titanica (ET) web pages (excluding channel crossing passengers), create a unique ID for each passenger and crew member (Name_ID), then (painstakingly and hopefully 100% correctly) match to your earlier titanic3 dataset, in order to compare the two and to get your sibsp and parch variables. Since the ET is updated occasionally the work put into the ID and matching can be reused and refined later. I did eventually hear back from the ET people, they are willing to make the underlying database available in the future, I have not yet taken them up on it. The two datasets line up nicely, most of the differences in the newer titanic5 dataset are in the age variable, as I had mentioned before - the new set has less missing ages - 51 missing (vs 263) out of 1309. I am in the process of refining my analysis of the data as well, based on your comments below and your Regression Modeling Strategies example. titanic3_wID data can be matched to titanic5 using the Name_ID variable. Tab titanic5 Metadata has the variable descriptions and allowable values for Class and Class/Dept. A note about the ages - instead of using the add 0.5 trick to indicate estimated birth day / date I have a flag that indicates how the “final” age (Age_F) was arrived at. It’s the Age_F_Code variable - the allowable values are in the Titanic5_metadata tab in the attached excel. The reason for this is that I already had some fractional ages for infants where I had age in months instead of years and I wanted to avoid confusion for 6 month old infants, although I don’t think there are any in the data! Also, I was thinking to make fractional ages or age in days for all passengers for whom I have DoB, but I have not yet done so. 
Here’s what the tabs are: Titanic5_all - all (mostly cleaned) Titanic passenger and crew records Titanic5_work - working dataset, crew removed, unnecessary variables removed - this is the one I import into SAS / R to work on Titanic5_metadata - Variable descriptions and allowable values titanic3_wID - Original Titanic3 dataset with Name_ID added for merging to Titanic5 I have a csv, R dataset, and SAS dataset, but the variable names are an older version, so I won’t send those along for now to avoid confusion. If it helps send my contact info along to your student in case any questions arise. Gmail address probably best, on weekends for sure: davebdr@gmail.com The tabs in titanic5.xls are Titanic5_all Titanic5_passenger (the one to be used for analysis) Titanic5_metadata (used during analysis file creation) Titanic3_wID
Provide a detailed description of the following dataset: titanic5 Dataset
MNAD
# About the MNAD Dataset The MNAD corpus is a collection of over **1 million Moroccan news articles** written in modern Arabic. These news articles have been gathered from 11 prominent electronic news sources. The dataset is made available to the academic community for research purposes, such as data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), and other non-commercial activities. ## Dataset Fields - Title: The title of the article - Body: The body of the article - Category: The category of the article - Source: The electronic newspaper source of the article ## About Version 1 of the Dataset (MNAD.v1) Version 1 of the dataset comprises 418,563 articles classified into 19 categories. The data was collected from well-known electronic news sources, namely Akhbarona.ma, Hespress.ma, Hibapress.com, and Le360.com. The articles were stored in four separate CSV files, each corresponding to the news website source. Each CSV file contains three fields: Title, Body, and Category of the news article. The dataset is rich in Arabic vocabulary, with approximately 906,125 unique words. It has been utilized as a benchmark in the research paper: ```"A Moroccan News Articles Dataset (MNAD) For Arabic Text Categorization". In 2021 International Conference on Decision Aid Sciences and Application (DASA).``` This dataset is available for download from the following sources: - Kaggle Datasets : [MNADv1](https://www.kaggle.com/datasets/jmourad100/mnad-moroccan-news-articles-dataset) - Huggingface Datasets: [MNADv1](https://huggingface.co/datasets/J-Mourad/MNAD.v1) ## About Version 2 of the Dataset (MNAD.v2) Version 2 of the MNAD dataset includes an additional 653,901 articles, bringing the total number of articles to over 1 million (1,069,489), classified into the same 19 categories as in version 1. 
The new documents were collected from seven additional prominent Moroccan news websites, namely al3omk.com, medi1news.com, alayam24.com, anfaspress.com, alyaoum24.com, barlamane.com, and SnrtNews.com. The newly collected articles have been merged with the articles from the previous version into a single CSV file named ```MNADv2.csv```. This file includes an additional column called "Source" to indicate the source of each news article. Furthermore, MNAD.v2 incorporates improved pre-processing techniques and data cleaning methods. These enhancements involve removing duplicates, eliminating multiple spaces, discarding rows with NaN values, replacing new lines with "\n", excluding very long and very short articles, and removing non-Arabic articles. These additions and improvements aim to enhance the usability and value of the MNAD dataset for researchers and practitioners in the field of Arabic text analysis. This dataset is available for download from the following sources: - Kaggle Datasets : [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2) - Huggingface Datasets: [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2) ## Citation If you use our data, please cite the following paper: ```bibtex @inproceedings{MNAD2021, author = {Mourad Jbene and Smail Tigani and Rachid Saadane and Abdellah Chehri}, title = {A Moroccan News Articles Dataset ({MNAD}) For Arabic Text Categorization}, year = {2021}, publisher = {{IEEE}}, booktitle = {2021 International Conference on Decision Aid Sciences and Application ({DASA})}, doi = {10.1109/dasa53625.2021.9682402}, url = {https://doi.org/10.1109/dasa53625.2021.9682402}, } ```
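Given the four fields above, a minimal loading sketch is possible with the standard library (this assumes the merged `MNADv2.csv` file with columns Title, Body, Category, and Source; the function name is ours, not part of the dataset release):

```python
import csv
from collections import Counter

# Hypothetical sketch: count articles per category in MNAD.v2,
# assuming a CSV with the four columns described above
# (Title, Body, Category, Source).
def category_counts(csv_path):
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["Category"]] += 1
    return counts
```

Such per-category counts are a common first step before building stratified train/test splits for text categorization.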
Provide a detailed description of the following dataset: MNAD
Dynamical Systems
Trajectories of 3 dynamical systems: - Pendulum - Lotka-Volterra - 3-body system Code to re-create the datasets is provided in the repo, in the `data_generation` folder
Provide a detailed description of the following dataset: Dynamical Systems
Webis-TLDR-17 Corpus
This corpus contains preprocessed posts from the Reddit dataset, suitable for abstractive summarization using deep learning. The format is a JSON file where each line is a JSON object representing a post. The schema of each post is shown below: - author: string (nullable = true) - body: string (nullable = true) - normalizedBody: string (nullable = true) - content: string (nullable = true) - content_len: long (nullable = true) - summary: string (nullable = true) - summary_len: long (nullable = true) - id: string (nullable = true) - subreddit: string (nullable = true) - subreddit_id: string (nullable = true) - title: string (nullable = true) Specifically, the content and summary fields can be directly used as inputs to a deep learning model (e.g., a sequence-to-sequence model). The dataset consists of 3,848,330 posts with an average length of 270 words for content, and 28 words for the summary. The dataset is a combination of both the Submissions and Comments merged on the common schema. As a result, most of the comments which do not belong to any submission have null as their title. Note: This corpus does not contain a separate test set. Thus it is up to the users to divide the corpus into appropriate training, validation and test sets.
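Because each line of the corpus file is a standalone JSON object with the schema above, (content, summary) training pairs can be extracted in a few lines of Python. This is a sketch under stated assumptions: the path and function name are placeholders, and posts where either field is null are skipped, since the schema marks every field as nullable.

```python
import json

# Minimal sketch: read a JSON-lines corpus and keep only the
# (content, summary) pairs needed for abstractive summarization,
# skipping posts where either field is missing or null.
def load_pairs(path):
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            post = json.loads(line)
            content, summary = post.get("content"), post.get("summary")
            if content and summary:
                pairs.append((content, summary))
    return pairs
```

Since no official test split exists, the resulting list would still need to be partitioned into training, validation, and test sets by the user.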
Provide a detailed description of the following dataset: Webis-TLDR-17 Corpus
CreditRisk
Dataset containing credit scores and loan repayment rate (90-day default rate) for individuals, separated by race (White, Black, Hispanic, Asian).
Provide a detailed description of the following dataset: CreditRisk
COMPAS
Dataset used by ProPublica to assess and analyse the fairness of the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software. compas.db - a sqlite3 database containing criminal history, jail and prison time, demographics and COMPAS risk scores for defendants from Broward County.
Provide a detailed description of the following dataset: COMPAS
COVIDx CXR-3
COVIDx CXR-3 is an open access benchmark dataset that we generated, comprising 30,882 CXR images across 17,026 patient cases. Images may be added over time to improve the dataset. This dataset is being used to train and validate our models for COVID-19 detection from CXR images. Useful dataset code and manipulation tools are available in the COVID-Net repository. This dataset may also be constructed from the individual sources by following the instructions here.
Provide a detailed description of the following dataset: COVIDx CXR-3
Pothole Mix
This dataset for the semantic segmentation of potholes and cracks on the road surface was assembled from 5 other datasets already publicly available, plus a very small addition of segmented images on our part. To speed up the labeling operations, we started working with depth cameras to try to automate, to some extent, this extremely time-consuming phase. The main dataset is composed of 4340 (image, mask) pairs at different resolutions divided into training/validation/test sets with a proportion of 3340/496/504 images equal to 77/11/12 percent. This is the dataset used in the SHREC2022 competition and it is the dataset that allowed us to train the neural networks for semantic segmentation capable of obtaining the nice images and videos that you have probably already seen. Alongside the main dataset we also release a set of RGB-D videos consisting of 797 RGB clips and as many clips with their disparity maps, captured with the excellent OAK-D cameras we won for being finalists at the OpenCV AI Competition 2021. In an effort to achieve (semi-)automatic labeling for these clips, we postprocessed the disparity maps using classic CV algorithms and managed to obtain 359 binary mask clips. Obviously these masks are not perfect (they cannot be by definition, otherwise the problem of automatic road damage detection would not exist), but nonetheless we believe they are an excellent starting point to create, for example, new data augmentations (creating potholes on "intact road images" belonging to other standard road datasets) or to be used as textures in the creation of 3D scenes from which to extract large amounts of images/masks for the training of neural networks. 
You can have a preview of what you will find in these clips by watching this video showing the overlay of images and binary masks: http://deeplearning.ge.imati.cnr.it/genova-5G/video/pothole-mix-videos/pothole-mix-rgb-d-overlay-videos-concat.html Please take a look at the readme file inside the main dataset zipfile to have some more details about the single sub-datasets and their sources.
Provide a detailed description of the following dataset: Pothole Mix
pinkeggs
We introduce a novel dataset consisting of images depicting pink eggs that have been identified as Pomacea canaliculata eggs, accompanied by corresponding bounding box annotations. The purpose of this dataset is to aid researchers in the analysis of the spread of Pomacea canaliculata species by utilizing deep learning techniques, as well as supporting other investigative pursuits that require visual data pertaining to the eggs of Pomacea canaliculata. It is worth noting, however, that the identity of the eggs in question is not definitively established, as other species within the same taxonomic family have been observed to lay similar-looking eggs in regions of the Americas. Therefore, a crucial prerequisite to any decision regarding the elimination of these eggs would be to establish with certainty whether they are exclusively attributable to invasive Pomacea canaliculata or if other species are also involved. The dataset is available at https://www.kaggle.com/datasets/deeshenzhen/pinkeggs
Provide a detailed description of the following dataset: pinkeggs
DOTA 2.0
In the past decade, object detection has achieved significant progress in natural images but not in aerial images, due to the massive variations in the scale and orientation of objects caused by the bird’s-eye view of aerial images. More importantly, the lack of large-scale benchmarks has become a major obstacle to the development of object detection in aerial images (ODAI). In this paper, we present a large-scale Dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI. The proposed DOTA dataset contains 1,793,658 object instances of 18 categories of oriented-bounding-box annotations collected from 11,268 aerial images. Based on this large-scale and well-annotated dataset, we build baselines covering 10 state-of-the-art algorithms with over 70 configurations, where the speed and accuracy performances of each model have been evaluated. Furthermore, we provide a code library for ODAI and build a website for evaluating different algorithms. Previous challenges run on DOTA have attracted more than 1300 teams worldwide. We believe that the expanded large-scale DOTA dataset, the extensive baselines, the code library and the challenges can facilitate the designs of robust algorithms and reproducible research on the problem of object detection in aerial images.
Provide a detailed description of the following dataset: DOTA 2.0
Tinto
The increasing use of deep learning techniques has reduced interpretation time and, ideally, reduced interpreter bias by automatically deriving geological maps from digital outcrop models. However, accurate validation of these automated mapping approaches is a significant challenge due to the subjective nature of geological mapping and the difficulty in collecting quantitative validation data. Additionally, many state-of-the-art deep learning methods are limited to 2D image data, which is insufficient for 3D digital outcrops, such as hyperclouds. To address these challenges, we present Tinto, a multi-sensor benchmark digital outcrop dataset designed to facilitate the development and validation of deep learning approaches for geological mapping, especially for non-structured 3D data like point clouds. Tinto comprises two complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain), with spectral attributes and ground-truth data, and 2) a synthetic twin that uses latent features in the original datasets to reconstruct realistic spectral data (including sensor noise and processing artifacts) from the ground-truth. The point cloud is dense and contains 3,242,964 labeled points. We used these datasets to explore the abilities of different deep learning approaches for automated geological mapping. By making Tinto publicly available, we hope to foster the development and adaptation of new deep learning tools for 3D applications in Earth sciences. The 3D visualization of the Tinto point clouds on Potree can be accessed through this link: https://www.hzdr.de/FWG/FWGE/Hyperclouds/Tinto.html.
Provide a detailed description of the following dataset: Tinto
CWD30
CWD30 comprises over 219,770 high-resolution images of 20 weed species and 10 crop species, encompassing various growth stages, multiple viewing angles, and environmental conditions. The images were collected from diverse agricultural fields across different geographic locations and seasons, ensuring a representative dataset.
Provide a detailed description of the following dataset: CWD30
HAC
**HAC** is a dataset for learning and benchmarking arbitrary Hybrid Adverse Conditions restoration. HAC contains 31 scenarios composed of arbitrary combinations of five common weather conditions, with a total of 316K adverse-weather/clean pairs.
Provide a detailed description of the following dataset: HAC
M3KE
**M3KE** is a Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark, which is developed to measure knowledge acquired by Chinese large language models by testing their multitask accuracy in zero- and few-shot settings. We have collected 20,477 questions from 71 tasks. Our selection covers all major levels of the Chinese education system, ranging from primary school to college, as well as a wide variety of subjects, including humanities, history, politics, law, education, psychology, science, technology, art and religion. All questions are multiple-choice questions with four options, hence guaranteeing a standardized and unified assessment process.
Provide a detailed description of the following dataset: M3KE
Morphological Classification of Galaxies
Dataset can be used by anyone interested in performing morphological classification of galaxies. The dataset was originally provided by Kaggle user Jay Lin (https://www.kaggle.com/jay1985) four years ago. It was used in the conference paper "Morphological Classification of Galaxies Using SpinalNet".
Provide a detailed description of the following dataset: Morphological Classification of Galaxies
ATMs fault prediction
The collected dataset consists of multivariate time series (MTS) data belonging to several banking ATMs, along with the annotations that the operators made when they performed a maintenance task on any of the machines. Each sample is an MTS with 144 points, associated with all the 10-minute time windows of that day, and 38 dimensions. Each dimension is related to a command type and response type, where the value of each point in the time series represents the number of occurrences of the associated command and response since the last failure event, i.e., within the current failure cycle. Therefore, the time series value is accumulated until the next cycle begins. Additionally, some extra information is included for each sample, such as the failure cycle and the machine identifier, which can be used to create data partitions without mixing different machines. The dataset and the labels assigned to each sample were used in the original work to perform a binary classification problem addressed by ML techniques. The goal of the problem was to predict whether a failure will occur within the next 7 days, using only the information from the current day (accumulated since the last error), which is based on an event log. Potential use cases of the dataset: • multivariate time series classification/regression/forecasting methodologies; • feature learning / feature extraction approaches; • predictive maintenance tasks: failure classification, failure prediction, anomaly detection.
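As a rough illustration of the sample layout described above (144 ten-minute windows by 38 command/response dimensions), the following sketch builds one day's sample as an array; all variable names here are our own, not the dataset's:

```python
import numpy as np

# Illustrative sketch (names are our own): one sample is a day of
# 144 ten-minute windows x 38 command/response dimensions, where each
# counter accumulates occurrences since the last failure event.
n_windows, n_dims = 144, 38
sample = np.zeros((n_windows, n_dims), dtype=np.int64)

# Record one occurrence of dimension 5 in window 10; within a failure
# cycle the count is carried forward to all later windows of the day.
sample[10:, 5] += 1
```

A binary label per sample (failure within the next 7 days or not) would then turn a collection of such arrays into the classification problem described in the original work.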
Provide a detailed description of the following dataset: ATMs fault prediction
SWS
**Smart Word Suggestions (SWS)** is a task and benchmark. The task involves identifying words or phrases that require improvement and providing substitution suggestions. The benchmark includes human-labeled data for testing, a large distantly supervised dataset for training, and a framework for evaluation. The test data includes 1,000 sentences written by English learners, accompanied by over 16,000 substitution suggestions annotated by 10 native speakers. The training dataset comprises over 3.7 million sentences and 12.7 million suggestions generated through rules.
Provide a detailed description of the following dataset: SWS
SpeechInstruct
**SpeechInstruct** is a large-scale cross-modal speech instruction dataset. It contains 37,969 quadruplets composed of speech instructions, text instructions, text responses, and speech responses.
Provide a detailed description of the following dataset: SpeechInstruct
Multi-CrossRE
**Multi-CrossRE** is the broadest multilingual dataset for Relation Extraction (RE), including 26 languages in addition to English and covering six text domains. It is a machine-translated version of CrossRE, with a sub-portion including more than 200 sentences in seven diverse languages checked by native speakers.
Provide a detailed description of the following dataset: Multi-CrossRE
Simulated wind farm graph dataset
## FLORIS farm dataset A dataset for graph neural network modeling of wind farms. The current version of the dataset contains two farms, with very different geometry but similar inter-turbine statistics. The wind farms were simulated with the steady-state wake model [FLORIS](https://github.com/NREL/floris).
Provide a detailed description of the following dataset: Simulated wind farm graph dataset
VizWiz-Classification
Our goal is to improve upon the status quo for designing image classification models trained in one domain that perform well on images from another domain. Complementing existing work in robustness testing, we introduce the first test dataset for this purpose which comes from an authentic use case where photographers wanted to learn about the content in their images. We built a new test set using 8,900 images taken by people who are blind for which we collected metadata to indicate the presence versus absence of 200 ImageNet object categories. We call this dataset VizWiz-Classification.
Provide a detailed description of the following dataset: VizWiz-Classification
Honeycombs in Concrete
The directory HiCIS contains two datasets for instance segmentation of honeycombs in concrete in COCO format. One dataset originates from images scraped from the internet, and the other is provided by Metis Systems AG. The directory HiCC/web contains the dataset using the images from the internet, and HiCC/metis contains the dataset using the images provided by Metis Systems AG as part of the research project Smart Design and Construction (SDaC).
Provide a detailed description of the following dataset: Honeycombs in Concrete
DHB Dataset
Dynamic Human Bodies dataset (DHB), containing 10 point cloud sequences from the MITAMA dataset and 4 from the 8IVFB dataset. The sequences in DHB record 3D human motions with large and non-rigid deformations in the real world. The overall dataset contains more than 3,000 point cloud frames, and each frame has 1,024 points.
Provide a detailed description of the following dataset: DHB Dataset
NL-Drive
A challenging multi-frame interpolation dataset for autonomous driving scenarios. Based on the principle of hard-sample selection and the diversity of scenarios, the NL-Drive dataset contains point cloud sequences with large nonlinear movements from three public large-scale autonomous driving datasets: KITTI, Argoverse and Nuscenes. The overall dataset contains more than 20,000 LiDAR point cloud frames, and the frame rate of the point cloud sequences is 10 Hz. The NL-Drive dataset is split into training, validation and test sets in the ratio of 14:3:3. For the point cloud interpolation task, the input point cloud frames are selected at a given interval of frames, and the remaining point clouds serve as the ground truth for the interpolated frames. In particular, when there are 3 interpolation frames to predict between the middle two input frames, each sample of the NL-Drive dataset consists of 4 point cloud frames at 2.5 Hz.
Provide a detailed description of the following dataset: NL-Drive
naab
# naab: A ready-to-use plug-and-play corpus for Farsi The biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word NAAB, which means pure and high grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use preprocessor that can be employed by those who want to make a customized corpus.
Provide a detailed description of the following dataset: naab
TFVulFix
**TFVulFix** is a dataset containing commits from TensorFlow, which is a well-known deep learning library. It contains 290 vulnerability fixing and 1,535 non-vulnerability-fixing commits. In this dataset, no commit is explicitly linked to an issue.
Provide a detailed description of the following dataset: TFVulFix
default of credit card clients Data Set
This research aimed at the case of customers' default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification - credible or not credible clients. Because the real probability of default is unknown, this study presented the novel Sorting Smoothing Method to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by the artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. Therefore, among the six data mining techniques, the artificial neural network is the only one that can accurately estimate the real probability of default.
Provide a detailed description of the following dataset: default of credit card clients Data Set
AmaSum
AmaSum is the largest abstractive opinion summarization dataset, consisting of more than 33,000 human-written summaries for Amazon products. Each summary is paired, on average, with more than 320 customer reviews. Summaries consist of verdicts, pros, and cons.
Provide a detailed description of the following dataset: AmaSum
SPACE (Opinion Summarization)
SPACE is a large-scale opinion summarization benchmark for the evaluation of unsupervised summarizers. SPACE is built on TripAdvisor hotel reviews and includes a training set of approximately 1.1 million reviews for over 11 thousand hotels. For evaluation, we created a collection of human-written, abstractive opinion summaries for 50 hotels, including high-level general summaries and aspect summaries for six popular aspects: building, cleanliness, food, location, rooms, and service. Every summary is based on 100 input reviews, an order of magnitude increase compared to existing corpora. In total, SPACE contains 1,050 gold standard summaries. You can view the full instructions for our multi-stage annotation procedure here.
Provide a detailed description of the following dataset: SPACE (Opinion Summarization)
ActionBench
**ActionBench** contains two carefully designed probing tasks: Action Antonym and Video Reversal, which target the multimodal alignment capabilities and temporal understanding skills of the model, respectively. Action knowledge involves the understanding of textual, visual, and temporal aspects of actions. The benchmark is constructed by leveraging two existing open-domain video-language datasets, Ego4D and Something-Something v2 (SSv2), which provide fine-grained action annotations for each video clip.
Provide a detailed description of the following dataset: ActionBench
ChatGPT Advice Responses
*Taking Advice from ChatGPT* is a laboratory study of how student participants incorporate advice generated by ChatGPT. In a survey conducted through the Experimental Social Science Laboratory, 118 students answered 2,828 questions on topics from the MMLU benchmark. The rich dataset includes questions/choices, advice characteristics, participant answers, and participant background. It can be used to explore algorithm aversion, advice-taking, ChatGPT usage, and more.
Provide a detailed description of the following dataset: ChatGPT Advice Responses
MultiTACRED
MultiTACRED is a multilingual version of the large-scale [TAC Relation Extraction Dataset](https://nlp.stanford.edu/projects/tacred). It covers 12 typologically diverse languages from 9 language families, and was created by the [Speech & Language Technology group of DFKI](https://www.dfki.de/slt) by machine-translating the instances of the original TACRED dataset and automatically projecting their entity annotations. For details of the original TACRED's data collection and annotation process, see the [Stanford paper](https://aclanthology.org/D17-1004/). Translations are syntactically validated by checking the correctness of the XML tag markup. Any translations with an invalid tag structure, e.g. missing or invalid head or tail tag pairs, are discarded (on average, 2.3% of the instances). Languages covered are: Arabic, Chinese, Finnish, French, German, Hindi, Hungarian, Japanese, Polish, Russian, Spanish, Turkish. Intended use is supervised relation classification. Audience - researchers. Please see [our ACL paper](https://arxiv.org/abs/2305.04582) for full details.
Provide a detailed description of the following dataset: MultiTACRED
SIDAR
**SIDAR** is a dataset designed to be a training and evaluation set for a multitude of tasks involving image alignment and artifact removal, such as deep homography estimation, dense image matching, 2D bundle adjustment, inpainting, shadow removal, denoising, content retrieval, and background subtraction.
Provide a detailed description of the following dataset: SIDAR
ExplainCPE
This is a medical multiple-choice dataset with explanations which can be used to interpret the answer. The data comes from Chinese Pharmacist Examination. Each piece of data has a question, five options, a gold_answer and a gold_explanation.
Provide a detailed description of the following dataset: ExplainCPE
ENRICH
A new synthetic, multi-purpose dataset - called ENRICH - for testing photogrammetric and computer vision algorithms. Compared to existing datasets, ENRICH offers higher resolution images also rendered with different lighting conditions, camera orientation, scales, and field of view. Specifically, ENRICH is composed of three sub-datasets: ENRICH-Aerial, ENRICH-Square, and ENRICH-Statue, each exhibiting different characteristics. The proposed dataset is useful for several photogrammetry and computer vision-related tasks, such as the evaluation of hand-crafted and deep learning-based local features, effects of ground control points (GCPs) configuration on the 3D accuracy, and monocular depth estimation. Each zip file in the root is relative to a specific dataset: - ENRICH-Aerial, is an aerial image block of the city of Launceston, Australia. The acquisition is performed by simulating a typical oblique aerial camera with five views (nadir and four oblique views). - ENRICH-Square, is a ground-level dataset of a square captured by four cameras, each one moving on a different path with different focal length, orientation, and lighting conditions. - ENRICH-Statue, is a ground-level dataset portraying a statue (placed in the center of the ENRICH-Square scene), acquired using four cameras moving on different paths with different focal lengths, orientations, and lighting conditions. Be sure to check the README file in the dataset root for information on folder structure and file contents. Please refer to the related paper (https://doi.org/10.1016/j.isprsjprs.2023.03.002) for information about the generation method and the purpose of ENRICH.
Provide a detailed description of the following dataset: ENRICH
MSRA CN NER
Simplified Chinese dataset for NER in The Third International Chinese Language Processing Bakeoff (2006), provided by Microsoft Research Asia (MSRA).
Provide a detailed description of the following dataset: MSRA CN NER
data_qe
This file contains the data and code for the publication "The Federal Reserve's Response to the Global Financial Crisis and Its Long-Term Impact: An Interrupted Time-Series Natural Experimental Analysis" by A. C. Kamkoum, 2023.
Provide a detailed description of the following dataset: data_qe
HWR200
A new open-access dataset of handwritten text images in Russian.
Provide a detailed description of the following dataset: HWR200
X-Wines
X-Wines is a consistent wine dataset containing 100,646 instances and 21 million real evaluations carried out by users. Data were collected on the open Web in 2022 and pre-processed for wider free use. The evaluations are 1–5-scale ratings submitted over a period of 10 years (2012–2021) for wines produced in 62 different countries.
Provide a detailed description of the following dataset: X-Wines
MSP-IMPROV
We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.
Provide a detailed description of the following dataset: MSP-IMPROV
PMC-VQA
**PMC-VQA** is a large-scale medical visual question-answering dataset that contains 227k VQA pairs over 149k images covering various modalities and diseases. The question-answer pairs are generated from PMC-OA.
Provide a detailed description of the following dataset: PMC-VQA
OntoEvent
OntoEvent is a new event detection (ED) dataset annotated with event correlations. It contains 13 supertypes with 100 subtypes, derived from 4,115 documents with 60,546 event instances.
Provide a detailed description of the following dataset: OntoEvent
ChaBuD
The dataset comprises patches of size 512x512 pixels collected from the Sentinel-2 L2A satellite mission. All reported forest fires are located in California. For each area of interest, two images are provided: a pre-fire acquisition and a post-fire acquisition. Each image is composed of 12 different channels, collecting information from the visible spectrum, infrared, and ultra-blue. The dataset is split into:
- train set
- validation set
- hidden test set, for which ground truth labels are not disclosed.
The hdf5 file is structured in this way:
```
root
|
|- uuid_0: {"post_fire", "pre_fire", "mask"}
|
|- uuid_1: {"post_fire", "pre_fire", "mask"}
|
...
```
Each uuid has an associated attribute called fold. The dataset can be downloaded from [here](https://hf.co/datasets/chabud-team/chabud-ecml-pkdd2023), where you can also find a script that can be used to load the data. Additional optional data are available [here](https://huggingface.co/datasets/chabud-team/chabud-extra). If you need more data, contact us.
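A layout like the one above can be traversed with `h5py`. The following sketch builds a tiny file that mimics the structure (8x8 patches instead of the real 512x512 ones) and splits the uuids by their `fold` attribute; the file name, fold values, and dtypes here are illustrative assumptions, not part of the official release.

```python
# Sketch of reading a ChaBuD-style HDF5 layout with h5py (h5py and numpy assumed available).
# Patch size is shrunk to 8x8 for the demo; real patches are 512x512 with 12 channels.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "chabud_demo.hdf5")

# Build a miniature file mimicking root/uuid_*/{post_fire, pre_fire, mask}.
with h5py.File(path, "w") as f:
    for i, fold in enumerate([0, 1]):  # fold values are illustrative
        grp = f.create_group(f"uuid_{i}")
        grp.attrs["fold"] = fold
        grp.create_dataset("pre_fire", data=np.zeros((8, 8, 12), dtype=np.uint16))
        grp.create_dataset("post_fire", data=np.zeros((8, 8, 12), dtype=np.uint16))
        grp.create_dataset("mask", data=np.zeros((8, 8), dtype=np.uint8))

# Traverse the file and group uuids by their `fold` attribute.
train, val = [], []
with h5py.File(path, "r") as f:
    for uuid in f:  # iterates the top-level groups
        sample = f[uuid]
        target = train if sample.attrs["fold"] == 0 else val
        target.append(uuid)
        post = sample["post_fire"][...]  # load the post-fire channels as a numpy array
        assert post.shape == (8, 8, 12)

print(train, val)
```

The same traversal applies unchanged to the real file: only the patch shapes and the number of uuids differ.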
Provide a detailed description of the following dataset: ChaBuD
Baidu PersonaChat
Baidu PersonaChat is a personalization dataset collected and open-sourced by Baidu; it is similar to ConvAI2, but in Chinese.
Provide a detailed description of the following dataset: Baidu PersonaChat