Columns: dataset_name (string, 2–128 chars); description (string, 1–9.7k chars); prompt (string, 59–185 chars)
Graph dataset MCF-7
Dataset introduced by Xifeng Yan et al. in: SIGMOD '08: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, June 2008, pages 433–444, https://doi.org/10.1145/1376616.1376662. The dataset is now hosted by TUD. It records the activity of small molecules against breast cancer tumor cells.
Provide a detailed description of the following dataset: Graph dataset MCF-7
IRV2V
To facilitate research on asynchrony in collaborative perception, we simulate the first collaborative perception dataset with different temporal asynchronies based on CARLA, named IRregular V2V (IRV2V). We set 100 ms as the ideal sampling interval and simulate various real-world asynchronies from two main aspects: i) considering that agents are not synchronized with a unified global clock, we uniformly sample a time shift $\delta_s\sim \mathcal{U}(-50,50)\,\text{ms}$ for each agent in the same scene, and ii) considering the trigger noise of the sensors, we uniformly sample a time turbulence $\delta_d\sim \mathcal{U}(-10,10)\,\text{ms}$ for each sampling timestamp. The final asynchronous time interval between adjacent timestamps is the sum of the time shift and the time turbulence. In experiments, we also subsample frame intervals to achieve large-scale and diverse asynchrony. Each scene includes between 2 and 5 collaborative agents. Each agent is equipped with 4 cameras at a resolution of 600 $\times$ 800 and a 32-channel LiDAR. The detection range is 281.6 m $\times$ 80 m. This results in 34K images and 8.5K LiDAR sweeps. See the Appendix for more details.
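A minimal NumPy sketch of one reading of the sampling scheme above; the function name and the way the shift and turbulence are applied to an ideal 100 ms grid are illustrative assumptions, not code from the IRV2V release.

```python
import numpy as np

def simulate_async_timestamps(num_agents=3, num_frames=10, interval_ms=100.0, seed=0):
    """Sketch of the asynchrony model: each agent gets a clock shift
    delta_s ~ U(-50, 50) ms shared by all its timestamps, and every sampling
    timestamp additionally receives a trigger turbulence delta_d ~ U(-10, 10) ms."""
    rng = np.random.default_rng(seed)
    # Ideal synchronous sampling grid (100 ms interval by default).
    ideal = np.arange(num_frames) * interval_ms                     # (num_frames,)
    # Per-agent clock shift, identical for all timestamps of that agent.
    delta_s = rng.uniform(-50.0, 50.0, size=(num_agents, 1))        # (num_agents, 1)
    # Per-timestamp trigger noise.
    delta_d = rng.uniform(-10.0, 10.0, size=(num_agents, num_frames))
    return ideal + delta_s + delta_d                                 # (num_agents, num_frames)

timestamps = simulate_async_timestamps()
print(np.diff(timestamps, axis=1))  # asynchronous intervals between adjacent frames
```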
Provide a detailed description of the following dataset: IRV2V
PJM(AEP)
PJM Hourly Energy Consumption Data. PJM Interconnection LLC (PJM) is a regional transmission organization (RTO) in the United States. It is part of the Eastern Interconnection grid and operates an electric transmission system serving all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia. The hourly power consumption data comes from PJM's website and is given in megawatts (MW). The regions have changed over the years, so data may only be available for certain dates per region.
Provide a detailed description of the following dataset: PJM(AEP)
MMface4D
MMFace4D is a large-scale multi-modal 4D (3D sequence) face dataset consisting of 431 identities, 35,904 sequences, and 3.9 million frames. MMFace4D has three appealing characteristics: 1) highly diversified subjects and corpus, 2) synchronized audio and 3D mesh sequences with high-resolution face details, and 3) low storage cost thanks to a new efficient compression algorithm for 3D mesh sequences.
Provide a detailed description of the following dataset: MMface4D
PIE-Bench
PIE-Bench comprises 700 images featuring 10 distinct editing types. Images are evenly distributed in natural and artificial scenes (e.g., paintings) among four categories: animal, human, indoor, and outdoor. Each image in PIE-Bench includes five annotations: source image prompt, target image prompt, editing instruction, main editing body, and the editing mask. Notably, the editing mask annotation (indicating the anticipated editing region) is crucial in accurate metrics computations as we expect the editing to only occur within a designated area.
Provide a detailed description of the following dataset: PIE-Bench
IMCPT-SparseGM-50
IMCPT-SparseGM dataset is a new visual graph matching benchmark addressing partial matching and graphs with larger sizes, based on the novel stereo benchmark [Image Matching Challenge PhotoTourism (IMC-PT) 2020](https://www.cs.ubc.ca/research/image-matching-challenge/2020/). This dataset is released in the CVPR 2023 paper [*Deep Learning of Partial Graph Matching via Differentiable Top-K*](https://openreview.net/forum?id=4OoXQPGd1s).

| **# images** | **# classes** | **avg # nodes** | **avg # edges** | **# universe** | **partial rate** |
| ------------ | ------------- | --------------- | --------------- | -------------- | ---------------- |
| 25765        | 16            | 21.36           | 54.71           | 50             | 57.3%            |
Provide a detailed description of the following dataset: IMCPT-SparseGM-50
MM-Vet
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
Provide a detailed description of the following dataset: MM-Vet
IMCPT-SparseGM-100
IMCPT-SparseGM dataset is a new visual graph matching benchmark addressing partial matching and graphs with larger sizes, based on the novel stereo benchmark [Image Matching Challenge PhotoTourism (IMC-PT) 2020](https://www.cs.ubc.ca/research/image-matching-challenge/2020/). This dataset is released in the CVPR 2023 paper [*Deep Learning of Partial Graph Matching via Differentiable Top-K*](https://openreview.net/forum?id=4OoXQPGd1s).

| **# images** | **# classes** | **avg # nodes** | **avg # edges** | **# universe** | **partial rate** |
| ------------ | ------------- | --------------- | --------------- | -------------- | ---------------- |
| 25765        | 16            | 44.48           | 123.99          | 100            | 55.5%            |
Provide a detailed description of the following dataset: IMCPT-SparseGM-100
MCCSD
This MCCSD dataset is the first large-scale Mandarin Chinese Cued Speech dataset. It covers 23 major categories of scenarios (e.g., communication, transportation and shopping) and 72 subcategories of scenarios (e.g., meeting, dating and introduction). It was recorded by four skilled native Mandarin Chinese Cued Speech cuers using portable mobile-phone cameras. The Cued Speech videos are recorded at 30 fps with a 1280×720 resolution. We provide the raw Cued Speech videos, a text file (with 1,000 sentences) and the corresponding annotations, which contain two kinds of data annotation: continuous video annotations made with ELAN, and discrete audio annotations made with Praat.
Provide a detailed description of the following dataset: MCCSD
LDCT-and-Projection-data
LDCT-and-Projection-data: a medical denoising dataset.
Provide a detailed description of the following dataset: LDCT-and-Projection-data
Celeb-HQ Facial Identity Recognition Dataset
Celeb-HQ Facial Identity Recognition Dataset
* This dataset is curated for the facial identity classification task.
* There are 307 identities (celebrities).
* Each identity has 15 or more images.
* The dataset contains 5,478 images.
* There are 4,263 training images.
* There are 1,215 test images.
Provide a detailed description of the following dataset: Celeb-HQ Facial Identity Recognition Dataset
Celeb-HQ Face Gender Recognition Dataset
Celeb-HQ Face Gender Recognition Dataset
* This dataset is curated for the face gender classification task.
* The dataset contains 30,000 images.
* There are 23,999 train images.
* There are 6,001 test images.
* The whole face images are divided into two classes.
* There are 11,057 male images.
* There are 18,943 female images.
Provide a detailed description of the following dataset: Celeb-HQ Face Gender Recognition Dataset
LSA-T
LSA-T is the first continuous Argentinian Sign Language (LSA) dataset. It contains 14,880 sentence-level videos of LSA extracted from the [CN Sordos YouTube channel](https://www.youtube.com/c/CNSORDOSARGENTINA), with labels and keypoint annotations for each signer. Videos are in 30 FPS full HD (1920×1080).
* [Download link](https://app.seni.ar/datasets/lsat.7z) (45 GB compressed)
* [Visualization notebook](https://colab.research.google.com/drive/1kj5ztYw_57fi6wo2dpL18UkBR9ciV6ki)
* [Presentation paper](https://arxiv.org/pdf/2211.15481.pdf) (preprint PDF)
Provide a detailed description of the following dataset: LSA-T
LSA16
This database contains images of 16 handshapes of the Argentinian Sign Language (LSA), each performed 5 times by 10 different subjects, for a total of 800 images. The subjects wore colored hand gloves and dark clothes.

Recording: The dataset was recorded in an indoor environment, with artificial lighting. Subjects wore dark clothes and performed the handshapes standing, with a white wall as a background. To simplify the problem of hand segmentation, subjects wore fluorescent-colored gloves. These substantially simplify the problem of recognizing the position of the hand and performing its segmentation, and remove all issues associated with skin color variations, while fully retaining the difficulty of recognizing the handshape. The subjects performed the same handshape with both hands. Each handshape was executed imposing few constraints on the subjects, to increase diversity and realism in the database. All subjects were non-signers and right-handed; they were taught how to perform the handshape during the shooting session by being shown an image of the handshape as performed by one of the authors, and practiced each handshape a few times before recording. We employed a generic webcam for the recording, with a resolution of 640 by 480.
Provide a detailed description of the following dataset: LSA16
MulRan
MulRan is a dataset for place recognition and SLAM. The sequences were recorded in urban areas and contain sensor data from a car equipped with a 3D LiDAR (OS1-64) and a rotating radar (Navtech CIR204-H). In each sequence, the car revisits places several times.
Provide a detailed description of the following dataset: MulRan
RWTH-PHOENIX Handshapes dev set
We manually labelled 3,359 images from the RWTH-PHOENIX-Weather 2014 development set. Some of the 45 encountered pose-independent hand shape classes are depicted in Figure 1. They show the large intra-class variance and the strong similarity between several classes. The hand shapes occur with different frequencies in the data. The distribution of counts per class can be seen in Figure 2, showing that the top 14 hand shapes account for 90% of the annotated samples. For our work on hand shape recognition we follow the hand shape taxonomy by the Danish sign language lexicon team (Jette H. Kristoffersen and Thomas Troelsgård, Center for Tegnsprog, Denmark, http://www.tegnsprog.dk), which amounts to over 60 different hand shapes, often with very subtle differences such as a flexed versus straight thumb. The employed classes are shown in Table 1.
Provide a detailed description of the following dataset: RWTH-PHOENIX Handshapes dev set
XImageNet-12
XIMAGENET-12 enlarges the dataset to study how image backgrounds affect computer vision models, covering the following topics: blurred background, segmented background, AI-generated background, bias of tools during annotation, color in background, dependent factors in background, latent-space distance of the foreground, and random background with a real environment. We introduce XIMAGENET-12, an explainable benchmark dataset with over 200K images and 15,600 manual semantic annotations. Covering 12 categories from ImageNet to represent objects commonly encountered in practical life, it simulates six diverse scenarios, including overexposure, blurring, and color changes. Our research builds upon the foundation laid by "Noise or Signal: The Role of Image Backgrounds in Object Recognition" (Xiao et al., ICLR 2022) and "Explainable AI: Object Recognition With Help From Background" (Qiang et al., ICLR Workshop 2022), which reinforced the notion that models trained solely on backgrounds can still achieve substantial accuracy. One noteworthy discovery highlighted in their studies is that more accurate models tend to rely less on backgrounds.
Provide a detailed description of the following dataset: XImageNet-12
RealCQA
RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic
Provide a detailed description of the following dataset: RealCQA
mBBC dataset
To construct our multilingual dataset - mBBC - we gathered news articles from various BBC news websites in 43 different languages. This selection was based on the fact that BBC broadcasts news in these 43 languages, providing global coverage across continents and spanning a diverse range of language families, scripts, resource levels, and word orders, ensuring a comprehensive representation of linguistic diversity. We collected data from various language families such as Indo-European, Sino-Tibetan, Niger-Congo, Austronesian, Dravidian, and more, encompassing several scripts like Latin, Cyrillic, Arabic, Devanagari, Chinese characters, and others. This extensive representation facilitates a comprehensive evaluation of multilingual language models across different linguistic contexts. Moreover, the dataset includes both high-resource languages like English, Spanish, and French, benefiting from extensive linguistic resources, as well as low-resource languages such as Somali, Burmese, and Nepali, with limited resources or smaller speaker populations. Including languages with varying resource levels enables us to assess the adaptability and effectiveness of multilingual language models across diverse linguistic settings. To ensure an unbiased and robust analysis, our dataset consists of news articles with a minimum text length of 500 characters, sourced from reputable sources in 2023, ensuring that even the newest LLMs studied have not seen the data during training.
Provide a detailed description of the following dataset: mBBC dataset
CodeInstruct
InstructCoder is the first dataset designed to adapt LLMs for general code editing. It consists of over 100k instruction-input-output triplets and covers multiple distinct code editing scenarios, generated by ChatGPT. LLaMA-33B finetuned on InstructCoder performs on par with ChatGPT on a real-world test set derived from GitHub commits.
Provide a detailed description of the following dataset: CodeInstruct
ChessReD
The Chess Recognition Dataset (ChessReD) comprises a diverse collection of images of chess formations captured using smartphone cameras; a sensor choice made to ensure real-world applicability. The dataset is accompanied by detailed annotations providing information about the chess-piece formation in each image. Therefore, the number of annotations for each image depends on the number of chess pieces depicted in it. There are 12 category ids in total (i.e., 6 piece types per colour) and the chessboard coordinates are given as algebraic notation strings (e.g., "a8").

**Dataset specifications** The dataset consists of 100 chess games, each with an arbitrary number of moves and therefore images, amounting to a total of 10,800 collected images. It was split into training, validation, and test sets following a 60/20/20 split, which led to a total of 6,479 training images, 2,192 validation images, and 2,129 test images. Since two consecutive images of a chess game differ only by one move, the split was performed at game level to ensure that very similar images would not end up in different sets. The split was also stratified over the three distinct smartphone cameras (Apple iPhone 12, Huawei P40 Pro, Samsung Galaxy S8) that were used to capture the images. The three smartphone cameras introduced variations to the dataset based on the distinct characteristics of their sensors.
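Since the chessboard coordinates are given as algebraic-notation strings, a small helper like the following (hypothetical, not part of the ChessReD tooling; the index convention is a choice made for this sketch) can map them to zero-indexed board coordinates.

```python
def square_to_index(square):
    """Convert an algebraic-notation square such as 'a8' to zero-indexed
    (file, rank) coordinates, with 'a1' -> (0, 0) and 'h8' -> (7, 7)."""
    file_char, rank_char = square[0].lower(), square[1]
    file_idx = ord(file_char) - ord("a")   # 'a'..'h' -> 0..7
    rank_idx = int(rank_char) - 1          # '1'..'8' -> 0..7
    if not (0 <= file_idx <= 7 and 0 <= rank_idx <= 7):
        raise ValueError(f"not a chessboard square: {square!r}")
    return file_idx, rank_idx

print(square_to_index("a8"))  # (0, 7)
```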
Provide a detailed description of the following dataset: ChessReD
ChessReD2K
The Chess Recognition Dataset 2K (ChessReD2K) comprises a diverse collection of images of chess formations captured using smartphone cameras; a sensor choice made to ensure real-world applicability. The dataset is accompanied by detailed annotations providing information about the chess-piece formation in each image, bounding boxes, and chessboard corner annotations. The number of annotations for each image depends on the number of chess pieces depicted in it. There are 12 category ids in total (i.e., 6 piece types per colour) and the chessboard coordinates are given as algebraic notation strings (e.g., "a8"). The corners are annotated based on their location on the chessboard (e.g., "bottom-left") with respect to the white player's view. This discrimination between the different types of corners provides information about the orientation of the chessboard that can be leveraged to determine the image's perspective and viewing angle.

**Dataset specifications** The dataset consists of 20 chess games (selected from the [ChessReD dataset](https://paperswithcode.com/dataset/chessred)), each with an arbitrary number of moves and therefore images, amounting to a total of 2,078 images. A 70/15/15 split stratified over the smartphone cameras was followed, which led to a total of 14 training games (1,442 images), 3 validation games (330 images), and 3 test games (306 images) being annotated. The split was also stratified over the three distinct smartphone cameras (Apple iPhone 12, Huawei P40 Pro, Samsung Galaxy S8) that were used to capture the images.
Provide a detailed description of the following dataset: ChessReD2K
WU-Minn HCP Data - 1200 Subjects
This HCP data release includes high-resolution 3T MR scans from young healthy adult twins and non-twin siblings (ages 22-35) using four imaging modalities: structural images (T1w and T2w), resting-state fMRI (rfMRI), task-fMRI (tfMRI), and high angular resolution diffusion imaging (dMRI). Behavioral and other individual subject measure data (both NIH Toolbox and non-Toolbox measures) is available on all subjects. MEG data and 7T MR data is available for a subset of subjects (twin pairs). The Open Access Dataset includes imaging data and most behavioral data. To protect subject privacy, some of the data (e.g., which subjects are twins) are part of a Restricted Access dataset.
Provide a detailed description of the following dataset: WU-Minn HCP Data - 1200 Subjects
AjwaOrMedjool
The dataset contains three subsets: 1) a dataset containing hand-crafted features to classify two types of organic dates (Ajwa or Medjool); 2) a dataset containing tabular data with features created automatically using deep learning to classify the two organic date types (Ajwa or Medjool); 3) a dataset of images of Ajwa and Medjool dates. This study is considered the first work in Arabic using shallow machine learning and deep learning to create accurate models for classifying organic Saudi dates, which would enable scholars, researchers, and developers to create machine learning applications for classifying Saudi dates in various forms such as websites, mobile apps, microcontrollers, tiny machine learning, and Internet of Things applications. Please cite the following paper: Bati GF. Ajwa or Medjool: a binary balanced dataset to teach machine learning. Journal of Information Studies & Technology 2023:2.12. https://doi.org/10.5339/jist.2023.12 Ajwa or Medjool is a two-class balanced dataset for classifying organic Saudi dates, consisting of three subsets: the first contains tabular data with hand-crafted features for classifying the organic dates (Ajwa or Medjool); the second contains tabular data with features generated automatically using deep learning for classifying the organic dates (Ajwa or Medjool); and the third collects images of Ajwa and Medjool dates. It is also the first work in Arabic to use classical machine learning and deep learning to build high-performing models for classifying organic Saudi dates without programming, enabling students, researchers, and developers to build machine learning applications for classifying Saudi dates in various forms, such as websites, mobile apps, microcontrollers, Internet of Things applications, and tiny machine learning. Please cite the following paper when using the dataset: Bati GF. Ajwa or Medjool: a binary balanced dataset to teach machine learning. Journal of Information Studies & Technology 2023:2.12. https://doi.org/10.5339/jist.2023.12 Arabic-language videos explaining the dataset: https://youtu.be/bPYHOYo4_Tw?feature=shared&t=1418 https://youtu.be/ADOuweANc5I?feature=shared&t=5775 https://youtu.be/PThKbc1kTSM?feature=shared&t=3253
Provide a detailed description of the following dataset: AjwaOrMedjool
PDFVQA
PDFVQA: A New Dataset for Real-World VQA on PDF Documents
Provide a detailed description of the following dataset: PDFVQA
PixelRec
an image cover dataset in short video recommendation
Provide a detailed description of the following dataset: PixelRec
Creative Visual Storytelling Anthology
The Creative Visual Storytelling Anthology is a collection of 100 author responses to an improved creative visual storytelling exercise over a sequence of three images. Each item contains four facet entries, corresponding to Entity, Scene, Narrative, and Title. The Creative Visual Storytelling Anthology was collected on Amazon Mechanical Turk. Five different authors performed the task for 20 different Flickr and Search-and-Rescue image sets (a sequence of 3 images) for a total of 100 items in the anthology. There are 300 unique Entity and Scene entries (single-image facets completed for each image), 200 unique Narrative entries (multi-image facets performed twice with two and then three images), and 100 unique Title entries (multi-image facets completed for three images). Thus, with each one assigned a title, there are 100 unique stories in the anthology altogether. One set of images used in collecting the anthology originated from Flickr, under Creative Commons licenses. We chose a subset of Huang et al.'s VIST dataset and downselected their image sequences from five to three images to scaffold the Aristotelian dramatic structure. We do not release the Flickr images in order to track the provenance of the images. The Flickr images' authors, copyright information, and usage are documented in the Flickr imageset license spreadsheet. The second source of images came from a Search and Rescue (SAR) scenario. We selected three images in order from experimental runs of a human-robot collaboration task, and similar sequential images were excluded for the sake of diversity. The SAR images can be obtained through a private data sharing agreement until the time of their public release in a separate SAR-focused dataset. Please email the second author of the CHI paper for details.
Provide a detailed description of the following dataset: Creative Visual Storytelling Anthology
SingFake
The rise of singing voice synthesis presents critical challenges to artists and industry stakeholders over unauthorized voice usage. Unlike synthesized speech, synthesized singing voices are typically released in songs containing strong background music that may hide synthesis artifacts. Additionally, singing voices present different acoustic and linguistic characteristics from speech utterances. These unique properties make singing voice deepfake detection a relevant but significantly different problem from synthetic speech detection. In this work, we propose the singing voice deepfake detection task. We first present SingFake, the first curated in-the-wild dataset consisting of 28.93 hours of bonafide and 29.40 hours of deepfake song clips in five languages from 40 singers. We provide a train/val/test split where the test sets include various scenarios. We then use SingFake to evaluate four state-of-the-art speech countermeasure systems trained on speech utterances. We find these systems lag significantly behind their performance on speech test data. When trained on SingFake, either using separated vocal tracks or song mixtures, these systems show substantial improvement. However, our evaluations also identify challenges associated with unseen singers, communication codecs, languages, and musical contexts, calling for dedicated research into singing voice deepfake detection. The SingFake dataset and related resources are available online. https://singfake.org/
Provide a detailed description of the following dataset: SingFake
Cards
Cards is a dataset of playing card images, consisting of 8,029 images with two clusterings, i.e., ranks (Ace, King, Queen, etc.) and suits (clubs, diamonds, hearts, spades).
Provide a detailed description of the following dataset: Cards
DSEC
DSEC is a stereo camera dataset in driving scenarios that contains data from two monochrome event cameras and two global shutter color cameras in favorable and challenging illumination conditions. In addition, we collect Lidar data and RTK GPS measurements, both hardware synchronized with all camera data. One of the distinctive features of this dataset is the inclusion of VGA-resolution event cameras. Event cameras have received increasing attention for their high temporal resolution and high dynamic range performance. However, due to their novelty, event camera datasets in driving scenarios are rare. This work presents the first high-resolution, large-scale stereo dataset with event cameras.
Provide a detailed description of the following dataset: DSEC
ExLPose
We study human pose estimation in extremely low-light images. This task is challenging due to the difficulty of collecting real low-light images with accurate labels, and severely corrupted inputs that degrade prediction quality significantly. To address the first issue, we develop a dedicated camera system and build a new dataset of real low-light images with accurate pose labels. Thanks to our camera system, each low-light image in our dataset is coupled with an aligned well-lit image, which enables accurate pose labeling and is used as privileged information during training. We also propose a new model and a new training strategy that fully exploit the privileged information to learn representations insensitive to lighting conditions. Our method demonstrates outstanding performance on real extremely low-light images, and extensive analyses validate that both our model and our dataset contribute to the success.
Provide a detailed description of the following dataset: ExLPose
Vibrating Plates
We present a structured benchmark dataset for a representative vibroacoustic problem: predicting the frequency response of vibrating plates. The vibrating plates benchmark dataset consists of 12,000 varied plate designs in total and accompanying vibration patterns obtained when the plates are excited by a harmonic force. These vibration patterns give the vibration velocity at every location of the plate orthogonal to its surface. The plate designs incorporate randomly placed beadings, i.e., indentations in the plate surface. The beadings stiffen the plates and completely change the resulting vibration patterns. Additionally, the size, thickness and damping loss factor of the plates are varied. The dataset is intended to further the development of surrogate modeling methods for partial differential equations in the field of vibroacoustics. It contains two settings, G-5000 and V-5000. Both incorporate the beading pattern variation, but only V-5000 additionally includes the variation of plate size, thickness and damping loss factor.
Provide a detailed description of the following dataset: Vibrating Plates
QASiNa
The Question Answering Sirah Nabawiyah (QASiNa) dataset is a reading comprehension dataset consisting of QA pairs drawn from Sirah Nabawiyah literature in the Indonesian language.
Provide a detailed description of the following dataset: QASiNa
BLEFF
Synthetic (Blender) dataset of forward-facing scenes, used to evaluate NVS quality and camera parameter accuracy.
Provide a detailed description of the following dataset: BLEFF
XImageNet
We introduce XIMAGENET-12, an explainable benchmark dataset with over 200K images and 15,600 manual semantic annotations. It covers 12 categories from ImageNet, representing objects commonly encountered in practical life, and simulates six diverse scenarios, including overexposure, blurring, and color changes.
Provide a detailed description of the following dataset: XImageNet
iFF
Real-world dataset of forward-facing scenes with different camera intrinsic parameters.
Provide a detailed description of the following dataset: iFF
WinSyn
75k photos of windows + 21k synthetic renders of building windows.
Provide a detailed description of the following dataset: WinSyn
Dataset of Paper Corpus
Overview of the scoping review paper corpus, sorted by their different intent types, categories, and subcategories. Note: the 77 papers may include multiple unique intents (172 in total) and can therefore appear in multiple categories and subcategories.
Provide a detailed description of the following dataset: Dataset of Paper Corpus
PAD Dataset
The Multi-pose Anomaly Detection (MAD) dataset represents the first attempt to evaluate the performance of pose-agnostic anomaly detection. The MAD dataset contains 4,000+ high-resolution multi-pose-view RGB images with camera/pose information of 20 shape-complex LEGO animal toys for training, as well as 7,000+ simulated and real-world RGB images (without camera/pose information) with pixel-precise ground-truth annotations for three types of anomalies in the test sets. Note that MAD has been further divided into MAD-Sim and MAD-Real for simulation-to-reality studies to bridge the gap between academic research and the demands of industrial manufacturing.
Provide a detailed description of the following dataset: PAD Dataset
CIC-DDoS2019
This is an academic intrusion detection dataset. All the credit goes to the original authors: Dr. Iman Sharafaldin, Dr. Saqib Hakak, Dr. Arash Habibi Lashkari and Dr. Ali Ghorbani. Please cite their original paper. The dataset offers an extended set of Distributed Denial of Service attacks, most of which employ some form of amplification through reflection. The dataset shares its feature set with the other CIC NIDS datasets: IDS2017, IDS2018 and DoS2017.
Provide a detailed description of the following dataset: CIC-DDoS2019
Banking_CG
The dataset identifies the shortcomings of existing benchmarks in evaluating the problem of compositional generalization, which underscores the need for the development of datasets tailored to assess compositional generalization in open intent detection tasks.
Provide a detailed description of the following dataset: Banking_CG
OOS_CG
The dataset identifies the shortcomings of existing benchmarks in evaluating the problem of compositional generalization, which underscores the need for the development of datasets tailored to assess compositional generalization in open intent detection tasks.
Provide a detailed description of the following dataset: OOS_CG
StackOverflow_CG
The dataset identifies the shortcomings of existing benchmarks in evaluating the problem of compositional generalization, which underscores the need for the development of datasets tailored to assess compositional generalization in open intent detection tasks.
Provide a detailed description of the following dataset: StackOverflow_CG
SWE-bench
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
Provide a detailed description of the following dataset: SWE-bench
T$^3$Bench
T$^3$Bench is the first comprehensive text-to-3D benchmark containing diverse text prompts of three increasing complexity levels that are specially designed for 3D generation (300 prompts in total). To assess both the subjective quality and the text alignment, we propose two automatic metrics based on multi-view images produced by the 3D contents. The quality metric combines multi-view text-image scores and regional convolution to detect quality and view inconsistency. The alignment metric uses multi-view captioning and Large Language Model (LLM) evaluation to measure text-3D consistency.
Provide a detailed description of the following dataset: T$^3$Bench
LM-KBC 2023
A diverse set of 21 relations, each covering a different set of subject-entities and a complete list of ground truth object-entities per subject-relation-pair. The total number of object-entities varies for a given subject-relation pair. This dataset can be used to evaluate knowledge extraction systems.
Provide a detailed description of the following dataset: LM-KBC 2023
NewsEdits
News article revision histories provide clues to narrative and factual evolution in news articles. To facilitate analysis of this evolution, we present the first publicly available dataset of news revision histories, NewsEdits. Our dataset is large-scale and multilingual; it contains 1.2 million articles with 4.6 million versions from over 22 English- and French-language newspaper sources based in three countries, spanning 15 years of coverage (2006-2021). We define article-level edit actions: Addition, Deletion, Edit and Refactor, and develop a high-accuracy extraction algorithm to identify these actions. To underscore the factual nature of many edit actions, we conduct analyses showing that added and deleted sentences are more likely to contain updating events, main content and quotes than unchanged sentences. Finally, to explore whether edit actions are predictable, we introduce three novel tasks aimed at predicting actions performed during version updates. We show that these tasks are possible for expert humans but are challenging for large NLP models. We hope this can spur research in narrative framing and help provide predictive tools for journalists chasing breaking news.
Provide a detailed description of the following dataset: NewsEdits
SPKL
The SPKL dataset contains 1203 images of parking lots divided into 11 categories regarding vision conditions (including the 'winter' category absent in other datasets at the time of publishing).
**Parking lot annotations**: lists of parking lot coordinates (4 points per lot)
**Vision categories**: sunny, overcast, rainy, winter, fog, glare, night, infrared, occlusion (car), occlusion (tree), distortion
**Labels**: binary (occupied/non-occupied)
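A minimal sketch of how a 4-point lot annotation could be rasterised into a per-lot mask; only the "4 points per lot" structure comes from the description above, while the corner values, image size, and loading format are assumptions for illustration.

```python
import numpy as np
import cv2  # OpenCV, used here only for polygon rasterisation

# Hypothetical annotation: one parking lot described by its 4 corner points
# (x, y) in pixel coordinates; the exact file format of the SPKL release may differ.
lot_corners = np.array([[120, 340], [210, 335], [215, 420], [118, 428]], dtype=np.int32)

image_height, image_width = 720, 1280           # assumed image size for this sketch
mask = np.zeros((image_height, image_width), dtype=np.uint8)
cv2.fillPoly(mask, [lot_corners], 1)             # 1 inside the lot polygon, 0 elsewhere

print("lot area in pixels:", int(mask.sum()))
```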
Provide a detailed description of the following dataset: SPKL
POIE
The Products for OCR and Information Extraction (POIE) dataset is derived from camera images of various products in the real world. The images are carefully selected and manually annotated. Our labeling team consists of 8 experienced labelers. We first crop the nutrition tables from product images and adopt multiple commercial OCR engines (Azure and Baidu OCR) for pre-labeling. We then use LabelMe to manually check the location and transcription annotations of every text box, as well as the entity values for all the text in the images, and repair the OCR errors found. After discarding low-quality and blurred images, we obtain 3,000 images with 111,155 text instances. (From https://github.com/jfkuang/cfam.)
Provide a detailed description of the following dataset: POIE
ESP
ESP dataset (Evaluation for Styled Prompt dataset) is a benchmark for zero-shot domain-conditional caption generation. ESP is a new dataset focusing on providing multiple styled text targets for the same image. It comprises 4.8k captions from 1k images in the COCO Captions test set. We collect five text domains with everyday usage: blog, social media, instruction, story, and news.
Provide a detailed description of the following dataset: ESP
AndroDrift
Dataset for the paper entitled "Efficient Concept Drift Handling for Batch Android Malware Detection Models". It contains 100 monthly goodware and malware samples between January 2012 and December 2019. The training set consists of samples from the full year 2012, whereas the remaining data is used for evaluation purposes on a quarterly basis.
Provide a detailed description of the following dataset: AndroDrift
Appdroid
Dataset used for the paper entitled "Towards a Fair Comparison and Realistic Evaluation Framework of Android Malware Detectors based on Static Analysis and Machine Learning".

## Description
The dataset consists of 100 monthly samples of each class (malware, goodware and greyware) during the period from January 2012 to December 2019. We resorted to the VTD values of apps for labeling. In particular, we used VTD ≥ 7 to label malware, VTD = 0 for goodware, and apps with a 1 ≤ VTD ≤ 6 rating were labeled as greyware. In total, our dataset consists of 28,800 app samples. The directory "dataset" contains a file with the SHA hashes of the APKs that comprise each of the three classes (goodware, malware and greyware). All these APKs were originally downloaded from AndroZoo. To download the APKs in our dataset, you can use the AZ tool.

## Authors and acknowledgment
If you use this dataset, please cite:
```
@article{molinacoronado2022towards,
  title = {Towards a Fair Comparison and Realistic Evaluation Framework of Android Malware Detectors based on Static Analysis and Machine Learning},
  author = {Borja Molina-Coronado and Usue Mori and Alexander Mendiburu and Jose Miguel-Alonso},
  journal = {Computers & Security},
  pages = {102996},
  year = {2022},
  issn = {0167-4048},
  doi = {https://doi.org/10.1016/j.cose.2022.102996},
  url = {https://www.sciencedirect.com/science/article/pii/S0167404822003881}
}
```

## License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. For more information check the link below: http://creativecommons.org/licenses/by-nc/4.0/
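A minimal sketch of the VTD-based labelling rule stated above; the function name is hypothetical and not part of the Appdroid release.

```python
def label_from_vtd(vtd):
    """Apply the VTD thresholds described above:
    VTD >= 7 -> malware, VTD == 0 -> goodware, 1 <= VTD <= 6 -> greyware."""
    if vtd < 0:
        raise ValueError("VTD must be non-negative")
    if vtd == 0:
        return "goodware"
    if vtd >= 7:
        return "malware"
    return "greyware"

print([label_from_vtd(v) for v in (0, 3, 12)])  # ['goodware', 'greyware', 'malware']
```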
Provide a detailed description of the following dataset: Appdroid
RT-Percept Sun Temple
Pre-rendered dataset used in Training and Predicting Visual Error for Real-Time Applications for the Sun Temple scene. Generated using the RT-Percept renderer and the RT-Percept scenes.
Provide a detailed description of the following dataset: RT-Percept Sun Temple
RT-Percept Lumberyard Bistro
Pre-rendered dataset used in Training and Predicting Visual Error for Real-Time Applications for the Lumberyard Bistro scenes. Generated using the RT-Percept renderer and the RT-Percept scenes.
Provide a detailed description of the following dataset: RT-Percept Lumberyard Bistro
RT-Percept Emerald Square
Pre-rendered dataset used in Training and Predicting Visual Error for Real-Time Applications for the Emerald Square scenes. Generated using the RT-Percept renderer and the RT-Percept scenes.
Provide a detailed description of the following dataset: RT-Percept Emerald Square
RT-Percept Sibenik Cathedral
Pre-rendered dataset used in Training and Predicting Visual Error for Real-Time Applications for the Sibenik Cathedral scene. Generated using the RT-Percept renderer and the RT-Percept scenes.
Provide a detailed description of the following dataset: RT-Percept Sibenik Cathedral
MathVista
**MathVista** is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of **three newly created datasets, IQTest, FunctionQA, and PaperQA**, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates **9 MathQA datasets** and **19 VQA datasets** from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. In total, **MathVista** includes **6,141 examples** collected from **31 different datasets**.
- Project: [https://mathvista.github.io/](https://mathvista.github.io/)
- Visualization: [https://mathvista.github.io/#visualization](https://mathvista.github.io/#visualization)
- Leaderboard: [https://mathvista.github.io/#leaderboard](https://mathvista.github.io/#leaderboard)
- Paper: [https://arxiv.org/abs/2310.02255](https://arxiv.org/abs/2310.02255)
- Data: [https://huggingface.co/datasets/AI4Math/MathVista](https://huggingface.co/datasets/AI4Math/MathVista)
- Code: [https://github.com/lupantech/MathVista](https://github.com/lupantech/MathVista)
Provide a detailed description of the following dataset: MathVista
PaviaATN
The PaviaATN data consists of 62 4-channel fluorescence microscopy images of size 2720 × 2720. The four channels, in order, label Nuclei (first two), Actin and Tubulin. It was imaged in the [Synthetic Physiology Laboratory](https://www.syntheticphysiologylab.com/) of the University of Pavia and introduced in [μSplit: image decomposition for fluorescence microscopy](https://arxiv.org/abs/2211.12872), published at ICCV 2023.

Detailed Description
The PaviaATN dataset comprises static lambda-stacks from a human keratinocyte cell line (HaCaT) expressing GFP-tubulin, RFP-LifeAct, and a customized version of the cell cycle indicator FastFUCCI that uses various combinations of a yellow fluorescent protein (YFP, mTurquoise2) and a far-red fluorescent protein (iRFP, miRFP670) to indicate multiple phases of the cell cycle. When a cell is in the G1 phase, increasing intensities of YFP fluorescence are detected in the nucleus. As a cell moves from G1 to S phase (G1/S), both YFP and iRFP fluorescence are detected in the nucleus of the cell. Finally, only iRFP fluorescence is detected in the nucleus during the S-G2-M phase. At the onset of the G1 phase, the nucleus shows no visible fluorescence intensity. In [μSplit: image decomposition for fluorescence microscopy](https://arxiv.org/abs/2211.12872), this dataset was used to resolve overlapping structures. All images were acquired on a Nikon Ti2 microscope (100x silicon oil objective) equipped with an Okolab environmental control chamber and a Crest V3 spinning disk confocal in widefield mode.
Provide a detailed description of the following dataset: PaviaATN
GroOT
One of the recent trends in vision problems is to use natural language captions to describe the objects of interest. This approach can overcome some limitations of traditional methods that rely on bounding boxes or category annotations. This paper introduces a novel paradigm for Multiple Object Tracking called Type-to-Track, which allows users to track objects in videos by typing natural language descriptions. We present a new dataset for that Grounded Multiple Object Tracking task, called GroOT, that contains videos with various types of objects and their corresponding textual captions of 256K words describing their appearance and action in detail. To cover a diverse range of scenes, GroOT was created using official videos and bounding box annotations from the MOT17, TAO and MOT20 datasets.
Provide a detailed description of the following dataset: GroOT
Laser Data
This dataset contains two types of audio recordings. The first set consists of the MEMS microphone's response to acoustic activities (e.g., 19 participants reading provided text in front of the Google Home smart assistant). The second set consists of the MEMS microphone's response to photo-acoustic activities (a laser modulated with the audio recordings of the 19 participants and fired at the MEMS microphone of the Google Home smart assistant). A total of 19 students (10 male and 9 female) were enrolled for data collection. All participants were asked to read the following 5 sentences into the microphone: "Hey Google, Open the garage door"; "Hey Google, Close the garage door"; "Hey Google, Turn the light on"; "Hey Google, Turn the light off"; "Hey Google, What is the weather today?". Each audio sample was then injected into the microphone through a laser, and the response of the microphone was recorded. This method produced a total dataset of 95 acoustic- and 95 laser-induced audio recordings.
Provide a detailed description of the following dataset: Laser Data
CommercialAdsDataset
A large commercial ads dataset that includes 480K labeled query-ad pairs with structured information (image, title, seller, description, and so on).
Provide a detailed description of the following dataset: CommercialAdsDataset
SSCBench
SSCBench establishes a large-scale SSC benchmark in street views that facilitates the training of robust and generalizable SSC models. Overall, SSCBench consists of three subsets, comprising 38,562 frames for training, 15,798 frames for validation, and 12,553 frames for testing, amounting to 66,913 frames in total.
Provide a detailed description of the following dataset: SSCBench
MSU Video Saliency Prediction
The dataset presents an open set of high-resolution test clips with different types of content: movie fragments, sport streams, and live-caption clips. The clips have a resolution of 1920×1080 and durations from 13 to 38 seconds. Reliable fixation data were collected from 50 observers (19–24 y.o.) using a 500 Hz SMI iViewX Hi-Speed 1250 eye tracker. Cross-fades were used between clips, ensuring the independence of the recorded fixations across different clips. The final ground-truth saliency map was estimated as a Gaussian mixture with centers at the fixation points. A standard deviation of 120 was chosen for the Gaussians (this value matches 8 angular degrees, which is known to be the sector of sharp vision).
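A minimal NumPy sketch of the ground-truth construction described above: a mixture of isotropic Gaussians centred at the fixation points with a standard deviation of 120 pixels. The frame size, fixation list, and normalisation to [0, 1] are assumptions for illustration, not part of the dataset's release.

```python
import numpy as np

def saliency_map(fixations, height=1080, width=1920, sigma=120.0):
    """Accumulate one Gaussian per fixation point (x, y) and normalise."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    sal = np.zeros((height, width), dtype=np.float32)
    for fx, fy in fixations:
        sal += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma ** 2))
    return sal / sal.max() if sal.max() > 0 else sal

example = saliency_map([(960, 540), (400, 300)])  # two hypothetical fixation points
print(example.shape, float(example.max()))
```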
Provide a detailed description of the following dataset: MSU Video Saliency Prediction
3DYoga90
3DYoga90 is organized within a three-level label hierarchy. It stands out as one of the most comprehensive open datasets, featuring the largest collection of RGB videos and 3D skeleton sequences among publicly available resources.
Provide a detailed description of the following dataset: 3DYoga90
JoinGym
All possible intermediate result cardinalities for 3300 queries on IMDb.
Provide a detailed description of the following dataset: JoinGym
EMBED
EMBED contains 364,000 screening and diagnostic mammographic exams for 110,000 patients from four hospitals over an 8-year period. The EMBED AWS Open Data release represents 20% of the dataset divided into two equal cohorts at the patient level. This release of the dataset includes 2D and C-view images. Digital breast tomosynthesis, ultrasound, and MRI exams will be added at a later date.
Provide a detailed description of the following dataset: EMBED
Bongard-OpenWorld
Bongard-OpenWorld is a new benchmark for evaluating real-world few-shot reasoning for machine vision. We hope it can help us better understand the limitations of current visual intelligence and facilitate future research on visual agents with stronger few-shot visual reasoning capabilities.
Provide a detailed description of the following dataset: Bongard-OpenWorld
RainDS
We managed to collect a real-world rain dataset, named RainDS, including numerous image pairs under various lighting conditions and in different scenes. Each set contains four images: a rain streak image, a raindrop image, and an image including both types of rain, as well as their rain-free counterparts.
Provide a detailed description of the following dataset: RainDS
GPlay:SAppKG
The dataset is drawn from Google Play Store applications and contains apps from different Google Play Store categories.
Provide a detailed description of the following dataset: GPlay:SAppKG
Noise-SF
Based on RADDLE and SNIPS, we construct Noise-SF, which includes two different perturbation settings. For the single-perturbation setting, we include five types of noisy utterances (character-level: **Typos**, word-level: **Speech**, and sentence-level: **Simplification**, **Verbose**, and **Paraphrase**) from RADDLE. For the mixed-perturbation setting, we utilize TextFlint to introduce a character-level perturbation (**EntTypos**), a word-level perturbation (**Subword**), and a sentence-level perturbation (**AppendIrr**) and combine them to obtain a mixed-perturbations dataset.
Provide a detailed description of the following dataset: Noise-SF
NurViD
We propose NurViD, a large video dataset with expert-level annotation for nursing procedure activity understanding. NurViD consists of over 1.5k videos totaling 144 hours, making it approximately four times longer than the existing largest nursing activity datasets. Notably, it encompasses 51 distinct nursing procedures and 177 action steps, providing a much more comprehensive coverage compared to existing datasets that primarily focus on limited procedures. To evaluate the efficacy of current deep learning methods on nursing activity understanding, we establish three benchmarks on NurViD: procedure recognition on untrimmed videos, procedure and action recognition on trimmed videos, and action detection.
Provide a detailed description of the following dataset: NurViD
FDCompCN
A new fraud detection dataset FDCompCN for detecting financial statement fraud of companies in China. We construct a multi-relation graph based on the supplier, customer, shareholder, and financial information disclosed in the financial statements of Chinese companies. These data are obtained from the China Stock Market and Accounting Research (CSMAR) database. We select samples between 2020 and 2023, including 5,317 publicly listed Chinese companies traded on the Shanghai, Shenzhen, and Beijing Stock Exchanges.
Provide a detailed description of the following dataset: FDCompCN
Big-Five Backstage
The dataset consists of 3,265 text samples corresponding to the concatenation of lines spoken by fictional characters. Texts are extracted from 400 theatre plays written by 132 different authors. Overall, it contains 3,419,136 words in total, with a mean of 1,047.2 words per character. Each text entry has binary labels representing the gender of a character (male or female) and their five personality traits (Extraversion, Agreeableness, Openness, Neuroticism, Conscientiousness). The auxiliary part of the dataset includes author-level labels reflecting their gender, country of origin, and years of life.
Provide a detailed description of the following dataset: Big-Five Backstage
ELAI-Dust Storm
### **Context**
As mentioned in the [reference paper](https://ieeexplore.ieee.org/abstract/document/9905145): *Dust storms are considered a severe meteorological disaster, especially in arid and semi-arid regions, which is characterized by dust aerosol-filled air and strong winds across an extensive area. Every year, a large number of aerosols are released from dust storms into the atmosphere, manipulating a deleterious impact both on the environment and human lives. Even if an increasing emphasis is being placed on dust storms due to the rapid change in global climate in the last fifty years by utilizing the measurements from the moderate-resolution imaging spectroradiometer (MODIS), the possibility of utilizing MODIS true-color composite images for the task has not been sufficiently discussed yet.*

This data publication contains MODIS true-color dust images which were collected through an extensive visual inspection procedure to test the above hypothesis. This dataset includes a subset of the full dataset of RGB images, each with visually recognizable dust storm incidents at high latitudes, temporally ranging from 2003 to 2019, over land as well as ocean throughout the world. All RGB images are manually annotated for dust storm detection using the [CVAT](https://cvat.org/) tool such that the dust-susceptible pixel area in the image is masked with (255, 255, 255) in RGB space (white) and the non-susceptible pixel area is masked with (0, 0, 0) in RGB space (black).

### **Inspiration**
- Could MODIS true-colour satellite images be utilized for detecting dust storms with higher accuracy and segmentation capability?
- What is the role of accurate detection of boundaries in dust storm detection?
- Are machine learning models capable of building a tight correlation between nearby pixels to detect the presence of dust?
- Is it important to build an open dataset for dust storm detection using satellite true-colour images?

### **Content**
This dataset contains 160 satellite true-colour images and their corresponding ground-truth label bitmaps, organized in two folders: images and annotations. The associated notebook simply presents the image data visualization, statistical data augmentation and a U-Net-based model to detect dust storms in a semantic segmentation fashion.

### **Acknowledgements and Citation**
The dataset of true-colour dust images, consisting of airborne dust and weaker dust traces, was collected using the [MODIS database](https://modis.gsfc.nasa.gov/) through an extensive visual inspection procedure. The dataset can be used without additional permissions or fees. If you use these data in a publication, presentation, or other research product please use the following citation:

N. Bandara, "Ensemble deep learning for automated dust storm detection using satellite images," in 2022 International Research Conference on Smart Computing and Systems Engineering (SCSE), vol. 5. IEEE, 2022, pp. 178–183.

For interested researchers, please note that the paper is openly accessible in the [conference proceedings](http://repository.kln.ac.lk/bitstream/handle/123456789/25425/SCSE%202022%2027.pdf?sequence=1) and/or [here](https://www.researchgate.net/publication/364155069_Ensemble_Deep_Learning_for_Automated_Dust_Storm_Detection_Using_Satellite_Images).

### **Research Ideas**
- Would the latest state-of-the-art segmentation models increase the performance of detecting dust storms in satellite true-colour images?
- Few-shot learning for dust storm segmentation and related self-supervised learning techniques
- What is the role of ensemble learning in improving model performance?
- What are the optimum methods for data augmentation for increasing model performance?
- How can this dataset be combined with other datasets of satellite true-colour images for detecting dust storms?
- Methods to combine spatial and temporal information with respect to automated dust detection using satellite images and ground climate data

### **License**
As described [here](https://creativecommons.org/licenses/by-sa/4.0/):
```
You are free to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially.
This license is acceptable for Free Cultural Works.
The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
```
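A minimal sketch of converting one ground-truth bitmap into a binary mask, following the (255, 255, 255) / (0, 0, 0) convention described above; the file path is a placeholder, not a real file name from the release.

```python
import numpy as np
from PIL import Image

def load_binary_dust_mask(path):
    """Load an annotation bitmap and convert it to a {0, 1} mask:
    (255, 255, 255) pixels (dust-susceptible) -> 1, (0, 0, 0) pixels -> 0."""
    rgb = np.array(Image.open(path).convert("RGB"))
    return (rgb == 255).all(axis=-1).astype(np.uint8)

# 'annotations/example.png' is a placeholder path for illustration only.
# mask = load_binary_dust_mask("annotations/example.png")
# print(mask.shape, mask.mean())  # fraction of dust-labelled pixels
```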
Provide a detailed description of the following dataset: ELAI-Dust Storm
AIDA/testc
AIDA/testc is a new challenging test set for entity linking systems containing 131 Reuters news articles published between December 5th and 7th, 2020. It links the named entity mentions in this test set to their corresponding Wikipedia pages, using the same linking procedure employed in the original AIDA CoNLL-YAGO dataset. AIDA/testc has 1,160 unique Wikipedia identifiers, spanning over 3,777 mentions and encompassing a total of 46,456 words.
Provide a detailed description of the following dataset: AIDA/testc
Replication Package for: Benchmarking scalability of stream processing frameworks deployed as microservices in the cloud
This is our replication package for our study on benchmarking scalability of stream processing frameworks deployed as microservices in the cloud. All scalability experiments were performed with the scalability benchmarking framework Theodolite at Kiel University's Software Performance Engineering Lab (SPEL) or on Google Cloud. With this replication package, we provide:
* benchmark execution files in executions,
* our benchmark (raw) results in results,
* and analysis scripts for our results in analysis.
Provide a detailed description of the following dataset: Replication Package for: Benchmarking scalability of stream processing frameworks deployed as microservices in the cloud
MuLMS-AZ
The Multi-Layer Materials Science Argumentative Zoning (MuLMS-AZ) corpus consists of 50 documents (licensed CC BY) from the materials science domain, spanning across the following 7 sub-areas: "Electrolysis", "Graphene", "Polymer Electrolyte Fuel Cell (PEMFC)", "Solid Oxide Fuel Cell (SOFC)", "Polymers", "Semiconductors" and "Steel". There are annotations on sentence-level and token-level for several NLP tasks, including Argumentative Zoning (AZ). Every sentence in the dataset is labelled with one or multiple argumentative zones. The dataset can be used to train classifiers and text mining systems on argumentative zoning in the materials science domain.
Provide a detailed description of the following dataset: MuLMS-AZ
SOFC-Exp
The SOFC-Exp corpus contains 45 scientific publications about solid oxide fuel cells (SOFCs), published between 2013 and 2019 as open-access articles all with a CC-BY license. The dataset was manually annotated by domain experts.
Provide a detailed description of the following dataset: SOFC-Exp
Analysing state-backed propaganda websites: a new dataset and linguistic study
This paper analyses two hitherto unstudied sites sharing state-backed disinformation, Reliable Recent News (rrn.world) and WarOnFakes (waronfakes.com), which publish content in Arabic, Chinese, English, French, German, and Spanish.
Provide a detailed description of the following dataset: Analysing state-backed propaganda websites: a new dataset and linguistic study
CovidET-Appraisals
CovidET-Appraisals is the most comprehensive dataset to date that assesses 24 cognitive appraisal dimensions of emotions, each with a natural language rationale, across 241 Reddit posts. CovidET-Appraisals presents an ideal testbed to evaluate the ability of large language models — which excel at a wide range of NLP tasks — to automatically assess and explain cognitive appraisals.
Provide a detailed description of the following dataset: CovidET-Appraisals
Blizzard TTS French Corpus for 2023 Challenge
The dataset contains 50 hours of high-quality speech samples from a native speaker and 2 more hours of lower-quality recordings from a different speaker.
Provide a detailed description of the following dataset: Blizzard TTS French Corpus for 2023 Challenge
PRO-teXt
PRO-teXt is an extension of PROXD with the inclusion of text prompts to synthesize objects. There are 180/20 interactions for training/testing in PRO-teXt. Each interaction involves a linguistic command corresponding to an existing room arrangement.
Provide a detailed description of the following dataset: PRO-teXt
Synthetic non-linear boundary control problems dataset
Generated using the script below: https://github.com/zenineasa/MasterThesis/blob/main/Code/dataGenerator.py
Provide a detailed description of the following dataset: Synthetic non-linear boundary control problems dataset
Multi-Labelled SMILES Odors dataset
This is a multi-labelled SMILES odor dataset with 138 odor descriptors. It was created for replicating the paper: [A principal odor map unifies diverse tasks in olfactory perception](https://www.science.org/doi/full/10.1126/science.ade4401). The complete replication of the paper (dataset curation + model) can be found in the [OpenPOM](https://github.com/BioMachineLearning/openpom) GitHub repository. The dataset contains 4,983 molecules, each described by multiple odor labels (e.g. creamy, grassy), and was made by combining the [GoodScents](http://www.thegoodscentscompany.com/) and [Leffingwell PMP 2001](https://zenodo.org/record/4085098#.YqoYk8jMIUE) datasets, each of which contains odorant molecules and their corresponding odor descriptors.
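A minimal loading sketch for a dataset of this shape, assuming a hypothetical CSV with a `smiles` column and a `descriptors` column holding semicolon-separated odor labels; the curated file in the OpenPOM repository may use a different layout (e.g. one binary column per descriptor), so treat the file and column names here as placeholders:

```python
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical file and column names; adjust to the actual curated CSV layout.
df = pd.read_csv("curated_odor_dataset.csv")

# e.g. "creamy;grassy" -> ["creamy", "grassy"]
df["descriptors"] = df["descriptors"].str.split(";")

mlb = MultiLabelBinarizer()
labels = mlb.fit_transform(df["descriptors"])   # multi-hot matrix, one column per odor descriptor
smiles = df["smiles"].tolist()

print(len(smiles), labels.shape, len(mlb.classes_))
```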
Provide a detailed description of the following dataset: Multi-Labelled SMILES Odors dataset
GlotSparse
Collection of news websites in low-resource languages.
Provide a detailed description of the following dataset: GlotSparse
GlotStoryBook
StoryBooks for 174 unique languages.
Provide a detailed description of the following dataset: GlotStoryBook
udhr-lid
Clean version of UDHR (Universal Declaration of Human Rights), at the long sentence level.
Provide a detailed description of the following dataset: udhr-lid
AugMod
Context A radio signal consists of two channels, channel I (for 'In phase') and channel Q (for 'Quadrature'), and can be treated as a stream of complex numbers. It may convey information by coding it as a sequence of symbols sampled from a finite set of complex numbers called a "modulation". There exist several standard modulations, such as (non-exhaustive list): BPSK, QAM, QPSK of order N, PSK of order N… In general, the modulation is not directly observable from a signal. The goal of this dataset is to detect the underlying modulation of a radio signal which may have suffered various alterations during its transmission. This task is of interest, for instance, for sensing the electromagnetic environment in the cognitive radio paradigm. This dataset is made available in the context of the paper: T. Courtat and H. du Mas des Bourboux, "A light neural network for modulation detection under impairments," 2021 International Symposium on Networks, Computers and Communications (ISNCC), 2021, pp. 1-7, doi: 10.1109/ISNCC52172.2021.9615851. Please visit https://github.com/ThalesGroup/pythagore-mod-reco for the libraries to read the data and train neural networks on this dataset. Content The given dataset: is provided as an hdf5 file; is composed of 7 classes: BPSK, PSK8, QAM16, QAM32, QAM64, QAM8, QPSK; spans 5 bins in signal-to-noise ratio: 0, 10, 20, 30, 40; consists of 174,720 examples, each 1024 samples long with both I and Q. Two notebooks allow you to: visualize the data (plot-one-sample) and train a classifier (training-example).
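A minimal inspection sketch for the hdf5 file, assuming hypothetical key names (`signals`, `labels`, `snr`); the actual layout and official reader utilities are in the pythagore-mod-reco repository linked above, so list the keys first before relying on any of the names below:

```python
import h5py
import numpy as np

# Key names are assumptions for illustration; inspect the file and see
# https://github.com/ThalesGroup/pythagore-mod-reco for the official readers.
with h5py.File("augmod.hdf5", "r") as f:
    print(list(f.keys()))                  # discover the actual layout
    signals = np.asarray(f["signals"])     # expected ~ (174720, 1024, 2) I/Q samples
    labels = np.asarray(f["labels"])       # one of the 7 modulation classes per example
    snr = np.asarray(f["snr"])             # SNR bin (0, 10, 20, 30, 40) per example

print(signals.shape, labels.shape, snr.shape)
```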
Provide a detailed description of the following dataset: AugMod
PETA-Protein
PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications
Provide a detailed description of the following dataset: PETA-Protein
WEATHub
WEATHub is a dataset covering 24 languages. It contains words organized into groups of (target1, target2, attribute1, attribute2) to measure the association target1:target2 :: attribute1:attribute2. For example, target1 can be insects and target2 can be flowers, and we might be trying to measure whether insects or flowers are found pleasant or unpleasant. Word associations are quantified using the WEAT metric in our paper, which calculates an effect size (Cohen's d) and also provides a p-value (to measure the statistical significance of the results). In our paper, we use word embeddings from language models to perform these tests and understand biased associations in language models across different languages.
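For illustration, a minimal sketch of the standard WEAT effect size (Cohen's d) computed over embedding vectors for two target sets and two attribute sets; this is a generic implementation of the published metric, not necessarily the exact code used with WEATHub:

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity of word vector w to attribute set A minus to attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size over target sets X, Y and attribute sets A, B
    s_x = np.array([association(x, A, B) for x in X])
    s_y = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([s_x, s_y])
    return (s_x.mean() - s_y.mean()) / pooled.std(ddof=1)

# Toy usage with random 50-dimensional "embeddings" (8 words per set)
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))
```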
Provide a detailed description of the following dataset: WEATHub
QUILT-1M
Recent accelerations in multi-modal applications have been made possible with the plethora of image and text data available online. However, the scarcity of similar data in the medical field, specifically in histopathology, has halted similar progress. To enable similar representation learning for histopathology, we turn to YouTube, an untapped resource of videos, offering 1,087 hours of valuable educational histopathology videos from expert clinicians. From YouTube, we curate Quilt: a large-scale vision-language dataset consisting of 768,826 image and text pairs. Quilt was automatically curated using a mixture of models, including large language models, handcrafted algorithms, human knowledge databases, and automatic speech recognition. In comparison, the most comprehensive datasets curated for histopathology amass only around 200K samples. We combine Quilt with datasets from other sources, including Twitter, research papers, and the internet in general, to create an even larger dataset: Quilt-1M, with 1M paired image-text samples, marking it as the largest vision-language histopathology dataset to date. We demonstrate the value of Quilt-1M by fine-tuning a pre-trained CLIP model. Our model outperforms state-of-the-art models on both zero-shot and linear probing tasks for classifying new pathology images across 13 diverse patch-level datasets of 8 different sub-pathologies and cross-modal retrieval tasks.
Provide a detailed description of the following dataset: QUILT-1M
AdaptiX
GitHub repository for "AdaptiX – A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics", which is used in several shared control applications
Provide a detailed description of the following dataset: AdaptiX
CheXphoto
CheXphoto is a competition for x-ray interpretation based on a new dataset of naturally and synthetically perturbed chest x-rays hosted by Stanford and VinBrain. Chest radiography is the most common imaging examination globally and is critical for screening, diagnosis, and management of many life-threatening diseases. Most chest x-ray algorithms have been developed and validated on digital x-rays, while the vast majority of developing regions use films. An appealing solution for scaled deployment is to leverage the ubiquity of smartphones for automated interpretation of film through cellphone photography. Automated interpretation of photos of chest x-rays at the same high level of performance as with digital chest x-rays is challenging because photographs of x-rays introduce visual artifacts not commonly found in digital x-rays. To encourage high model performance for this application, we developed CheXphoto, a dataset of photos of chest x-rays and synthetic transformations designed to mimic the effects of photography. With the launch of the CheXphoto competition, we are pleased to announce the release of an additional set of x-ray film images provided by VinBrain, a subsidiary of Vingroup. Please see Validation and Test Sets for details.
Provide a detailed description of the following dataset: CheXphoto
Yahoo S5
Automatic anomaly detection is critical in today's world, where the sheer volume of data makes it impossible to tag outliers manually. The goal of this dataset is to benchmark your anomaly detection algorithm. The dataset consists of real and synthetic time series with tagged anomaly points. It tests the detection accuracy of various anomaly types, including outliers and change points. The synthetic dataset consists of time series with varying trend, noise, and seasonality. The real dataset consists of time series representing the metrics of various Yahoo services.
Provide a detailed description of the following dataset: Yahoo S5
InBreast
Rationale and objectives: Computer-aided detection and diagnosis (CAD) systems have been developed in the past two decades to assist radiologists in the detection and diagnosis of lesions seen on breast imaging exams, thus providing a second opinion. Mammographic databases play an important role in the development of algorithms aiming at the detection and diagnosis of mammary lesions. However, available databases often do not take into consideration all the requirements needed for research and study purposes. This article aims to present and detail a new mammographic database. Materials and methods: Images were acquired at a breast center located in a university hospital (Centro Hospitalar de S. João [CHSJ], Breast Centre, Porto) with the permission of the Portuguese National Committee of Data Protection and the Hospital's Ethics Committee. MammoNovation Siemens full-field digital mammography, with a solid-state detector of amorphous selenium, was used. Results: The new database, INbreast, has a total of 115 cases (410 images), of which 90 cases are from women with both breasts affected (four images per case) and 25 cases are from mastectomy patients (two images per case). Several types of lesions (masses, calcifications, asymmetries, and distortions) were included. Accurate contours made by specialists are also provided in XML format. Conclusion: The strength of the presented database, INbreast, relies on the fact that it was built with full-field digital mammograms (as opposed to digitized mammograms), it presents a wide variability of cases, and it is made publicly available together with precise annotations. We believe that this database can be a reference for future works centered on or related to breast cancer imaging.
Provide a detailed description of the following dataset: InBreast
CodRED
CodRED is the first human-annotated cross-document relation extraction (RE) dataset, aiming to test RE systems' ability to acquire knowledge in the wild. CodRED has the following features: * it requires natural language understanding at different granularities, including coarse-grained document retrieval as well as fine-grained cross-document multi-hop reasoning; * it contains 30,504 relational facts associated with 210,812 reasoning text paths, covers a broad range of balanced relations, and includes long documents on diverse topics; * it provides strong supervision about the reasoning text paths for predicting the relation, to help guide RE systems to perform meaningful and interpretable reasoning; * it contains adversarially-created hard NA instances to prevent RE models from predicting relations by inferring from entity names instead of text information.
Provide a detailed description of the following dataset: CodRED
HallusionBench
Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvements in image reasoning tasks. This was shown by the recently released GPT-4V(ision), LLaVA-1.5, etc. However, the strong language prior in these SOTA VLMs can be a double-edged sword: they may ignore the image context and rely solely on the (even contradictory) language prior for reasoning. In contrast, the vision modules in VLMs are weaker than LLMs and may result in misleading visual representations, which are then translated into confident mistakes by LLMs. To study these two types of VLM mistakes, i.e., language hallucination and visual illusion, we curated HallusionBench, an image-context reasoning benchmark that is still challenging even for GPT-4V and LLaVA-1.5. We provide a detailed analysis of examples in HallusionBench, which offers novel insights into the illusions and hallucinations of VLMs and how to improve them in the future.
Provide a detailed description of the following dataset: HallusionBench
SCARED
A sub-challenge of the Endoscopic Vision Challenge.
Provide a detailed description of the following dataset: SCARED
E-IC
This dataset, adapted from COCO Caption, is designed for the *Image Caption* task and evaluates multimodal model editing in terms of reliability, stability and generality. You can download the dataset from [here](https://drive.google.com/drive/folders/1jBdTJxUb9wEeHnvG-RY8dv5_I4QlDpUS?usp=drive_link)
Provide a detailed description of the following dataset: E-IC
E-VQA
This dataset, adapted from VQAv2, is designed for the *Visual Question Answering* task and evaluates multimodal model editing in terms of reliability, stability and generality. You can download the dataset from [here](https://drive.google.com/drive/folders/1jBdTJxUb9wEeHnvG-RY8dv5_I4QlDpUS?usp=drive_link)
Provide a detailed description of the following dataset: E-VQA