Columns: dataset_name (string, 2-128 characters), description (string, 1-9.7k characters), prompt (string, 59-185 characters)
Chest x-ray landmark dataset
A set of landmark annotations for the JSRT, Montgomery, and Shenzhen datasets and a subset of the PadChest dataset.
Provide a detailed description of the following dataset: Chest x-ray landmark dataset
Montgomery County X-ray Set
X-ray images in this data set have been acquired from the tuberculosis control program of the Department of Health and Human Services of Montgomery County, MD, USA. This set contains 138 posterior-anterior x-rays, of which 80 x-rays are normal and 58 x-rays are abnormal with manifestations of tuberculosis. All images are de-identified and available in DICOM format. The set covers a wide range of abnormalities, including effusions and miliary patterns. The data set includes radiology readings available as text files and a summary of its content.
Provide a detailed description of the following dataset: Montgomery County X-ray Set
Shenzhen Hospital X-ray Set
X-ray images in this data set have been collected by Shenzhen No.3 Hospital in Shenzhen, Guangdong province, China. The x-rays were acquired as part of routine care at Shenzhen Hospital. The set contains images in JPEG format. There are 326 normal x-rays and 336 abnormal x-rays showing various manifestations of tuberculosis.
Provide a detailed description of the following dataset: Shenzhen Hospital X-ray Set
ARQMath2 - Task 1
The goal of ARQMath is to advance techniques for mathematical information retrieval, in particular, retrieving answers to mathematical questions (Task 1), and formula retrieval (Task 2). Using the question posts from Math Stack Exchange, participating systems are given a question or a formula from a question and asked to return a ranked list of either potential answers to the question or potentially useful formulae (in the case of a formula query). Relevance is determined by the expected utility of each returned item. These tasks allow participating teams to explore leveraging math notation together with text to improve the quality of retrieval results.
Provide a detailed description of the following dataset: ARQMath2 - Task 1
ACDC (Adverse Conditions Dataset with Correspondences)
We introduce ACDC, the Adverse Conditions Dataset with Correspondences for training and testing semantic segmentation methods on adverse visual conditions. It comprises a large set of 4006 images which are evenly distributed between fog, nighttime, rain, and snow. Each adverse-condition image comes with a high-quality fine pixel-level semantic annotation, a corresponding image of the same scene taken under normal conditions, and a binary mask that distinguishes between intra-image regions of clear and uncertain semantic content. ACDC supports two tasks: (1) standard semantic segmentation and (2) uncertainty-aware semantic segmentation.
Provide a detailed description of the following dataset: ACDC (Adverse Conditions Dataset with Correspondences)
RR
The Review-Rebuttal (RR) dataset is introduced to facilitate the study of argument pair extraction in the peer review and rebuttal domain.
Provide a detailed description of the following dataset: RR
XYSquares
Synthetic dataset intended for benchmarking disentanglement frameworks. XYSquares is adversarial in nature; the distance between any two observations in the dataset is constant when measured using a pixel-wise distance function. It is usually impossible for VAE frameworks that use pixel-wise losses to disentangle this dataset. The dataset is constructed from 3 non-overlapping red, green and blue squares that are each $8\times8$ pixels in size. Each square can move along the $x$ and $y$ coordinates of an $8\times8$ grid. The resulting images are $64\times64$ pixels in size. With this construction the dataset has 6 ground-truth factors (the $x$ and $y$ position of each square), each taking 8 possible values, for a total of $8^6 = 262144$ observations.
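A minimal sketch of this construction, assuming the three squares sit on an $8\times8$ grid of $8\times8$-pixel cells; the factor ordering and the example positions are illustrative, not taken from an official implementation:

```python
import numpy as np

def render_xysquares(positions, grid=8, square=8):
    """Render one XYSquares observation.

    positions: six factor values in [0, grid), read as
    (x_red, y_red, x_green, y_green, x_blue, y_blue).
    Returns a (grid*square, grid*square, 3) uint8 image, i.e. 64x64x3.
    """
    img = np.zeros((grid * square, grid * square, 3), dtype=np.uint8)
    for channel, (x, y) in enumerate(zip(positions[0::2], positions[1::2])):
        img[y * square:(y + 1) * square, x * square:(x + 1) * square, channel] = 255
    return img

# One of the 8**6 = 262144 possible observations.
obs = render_xysquares((0, 3, 7, 1, 4, 4))
```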
Provide a detailed description of the following dataset: XYSquares
OpenLane
**OpenLane** is the first real-world 3D lane dataset and the largest-scale one to date. The dataset collects valuable content from the public perception dataset [Waymo Open Dataset](/dataset/waymo-open-dataset) and provides lane and closest-in-path object (CIPO) annotations for 1,000 segments. In short, OpenLane contains 200K frames and over 880K carefully annotated lanes. The OpenLane Dataset is publicly released to aid the research community in making advancements in 3D perception and autonomous driving technology.
Provide a detailed description of the following dataset: OpenLane
3D Cars
Car CAD models from "3D Object Detection and Viewpoint Estimation with a Deformable 3D Cuboid Model" were used to generate the dataset. For each of the 199 car models, the authors generated $64\times64$ color renderings from 24 rotation angles, each offset by 15 degrees, as well as from 4 different camera elevations.
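Assuming every model is rendered under every combination of the 24 rotation angles and 4 camera elevations, the dataset size follows directly as $199 \times 24 \times 4 = 19{,}104$ images.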
Provide a detailed description of the following dataset: 3D Cars
Physical Audiovisual CommonSense
**PACS** (**Physical Audiovisual CommonSense**) is the first audiovisual benchmark annotated for physical commonsense attributes. PACS contains a total of 13,400 question-answer pairs, involving 1,377 unique physical commonsense questions and 1,526 videos. The dataset provides new opportunities to advance the research field of physical reasoning by bringing audio as a core component of this multimodal problem.
Provide a detailed description of the following dataset: Physical Audiovisual CommonSense
EgoMon
The EgoMon Gaze & Video Dataset is an egocentric (first-person) dataset consisting of 7 videos of roughly 30 minutes each. It includes: the 7 videos with the gaze information plotted on them; the same videos without the gaze overlay; a total of roughly 13,428 images corresponding to one frame per second of these videos; and 7 text files with the gaze data extracted from each video.
Provide a detailed description of the following dataset: EgoMon
ChangeIt
The ChangeIt dataset contains more than 2,600 hours of video of state-changing actions; it was published at CVPR 2022.
Provide a detailed description of the following dataset: ChangeIt
SynLiDAR
SynLiDAR is a large-scale synthetic LiDAR sequential point cloud dataset with point-wise annotations. It comprises 13 sequences of LiDAR point clouds with around 20k scans (over 19 billion points and 32 semantic classes), collected from virtual urban cities, suburban towns, neighborhoods, and harbors.
Provide a detailed description of the following dataset: SynLiDAR
CUHK-SYSU-TBPS
CUHK-SYSU-TBPS is a dataset for the text-based person search task.
Provide a detailed description of the following dataset: CUHK-SYSU-TBPS
PRW-TBPS
PRW-TBPS is a dataset for the text-based person search task.
Provide a detailed description of the following dataset: PRW-TBPS
VinDr-CXR
**VinDr-CXR** is an open large-scale dataset of chest X-rays with radiologists' annotations. It is built from more than 100,000 raw images in DICOM format that were retrospectively collected from Hospital 108 and the Hanoi Medical University Hospital, two of the largest hospitals in Vietnam. The published dataset consists of 18,000 postero-anterior (PA) view CXR scans that come with both the localization of critical findings and the classification of common thoracic diseases. These images were annotated by a group of 17 radiologists with at least 8 years of experience for the presence of 22 critical findings (local labels) and 6 diagnoses (global labels); each finding is localized with a bounding box. The local and global labels correspond to the “Findings” and “Impressions” sections, respectively, of a standard radiology report. The dataset is divided into two parts: a training set of 15,000 scans and a test set of 3,000 scans. Each image in the training set was independently labeled by 3 radiologists, while the annotation of each image in the test set was treated even more carefully and obtained from the consensus of 5 radiologists. Description adapted from [here](https://vindr.ai/datasets/cxr).
Provide a detailed description of the following dataset: VinDr-CXR
VinDr-PCXR
**VinDr-PCXR** is an open, large-scale pediatric chest X-ray dataset for interpretation of common thoracic diseases in children. The dataset contains 9,125 CXR scans retrospectively collected from a major pediatric hospital in Vietnam between 2020 and 2021. Each scan was manually annotated by a pediatric radiologist who has more than ten years of experience. The dataset was labeled for the presence of 36 critical findings and 15 diseases. It aims to aid research in the detection of multiple findings and diseases.
Provide a detailed description of the following dataset: VinDr-PCXR
TimberSeg 1.0
The **TimberSeg 1.0** dataset is composed of 220 images showing wood logs in various environments and conditions in Canada. The images are densely annotated with segmentation masks for each log instance, as well as the corresponding bounding box and class label. This dataset aims to enable autonomous forestry forwarders; it therefore contains nearly 2,500 instances of wood logs seen from an operator's point of view. Images were taken in the forest, near the roadside, in lumberyards and above timber-filled trailers. The logs were annotated from a grasping perspective, meaning that only the logs that lie on top of the piles and are accessible are segmented.
Provide a detailed description of the following dataset: TimberSeg 1.0
CrossLoc Benchmark Datasets
To study data-scarcity mitigation for learning-based visual localization methods via sim-to-real transfer, we curate and present the CrossLoc benchmark datasets, a collection of multimodal aerial sim-to-real data for flights above natural and urban terrain. Unlike previous computer vision datasets that focus on localization in a single domain (mostly real RGB images), the provided benchmark datasets include various multimodal synthetic cues paired with all real photos. Complementary to the paired real and synthetic data, we offer rich synthetic data that efficiently fills the flight-envelope volume in the vicinity of the real data. The synthetic data rendering was achieved using the proposed data generation workflow TOPO-DataGen. The provided CrossLoc datasets were used as an initial benchmark to showcase the use of synthetic data to assist visual localization in the real world with limited real data. Please refer to our main paper at https://arxiv.org/abs/2112.09081 and our code at https://github.com/TOPO-EPFL/CrossLoc for details.
Provide a detailed description of the following dataset: CrossLoc Benchmark Datasets
Daily load patterns
This data set provides fine-granular statistics on trading traffic generated by six global exchanges over the course of two days in February 2019 for a set of representative feeds and recorded by the systems of vwd Vereinigte Wirtschaftsdienste GmbH (now known as Infront Financial Technology GmbH). Please note that these numbers represent only limited market segments of the actual exchange and the measured feeds might provide different products and instrument types. The exchanges are identified as AU = Sydney, FFM = Frankfurt am Main (GER), HK = Hong Kong (CN), Q = NASDAQ (USA), TK = Tokyo (JPN), UK = London (UK). Please see the Zenodo page https://doi.org/10.5281/zenodo.6381970 for details on syntax etc.
Provide a detailed description of the following dataset: Daily load patterns
SerialTrack Particle Image Dataset
This dataset accompanies the linked SerialTrack paper and provides test case data (2D/3D, varying particle density) across a range of synthetic and experimental imaging modalities. Included test cases can be used for further code development, validation of and comparisons for existing particle tracking codes, and/or evaluating and learning to use our SerialTrack code on known data.
Provide a detailed description of the following dataset: SerialTrack Particle Image Dataset
BrWac2Wiki
This is a dataset for multi-document summarization in Portuguese, which means that it pairs multiple documents (input) with human-written summaries (output). In particular, each entry consists of multiple related texts from Brazilian websites about a subject, and the summary is the Portuguese Wikipedia lead section on the same subject (the lead being the first section, i.e., the summary, of any Wikipedia article). Input texts were extracted from the BrWac corpus, and the outputs from Brazilian Portuguese Wikipedia dumps. BrWac2Wiki contains 114,652 (documents, wikipedia) pairs, so it is suitable for training and validating AI models for multi-document summarization in Portuguese. More information can be found in the paper "PLSUM: Generating PT-BR Wikipedia by Summarizing Websites", by André Seidel Oliveira and Anna Helena Reali Costa, presented at ENIAC 2021. Our work is inspired by WikiSum, a similar dataset for the English language.
Provide a detailed description of the following dataset: BrWac2Wiki
Hello Watt
Hello Watt collects power usage data at a resolution of 30 minutes. To develop and test our disaggregation methods, we consider a subsample consisting of the power consumption of 5k households with off-peak pricing contracts over one month. In addition to the type of their water heating, some users also provide metadata such as home surface area and number of inhabitants.
Provide a detailed description of the following dataset: Hello Watt
STEM-ECR
### Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources
The STEM ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises annotations for scientific entities in scientific abstracts drawn from 10 disciplines in Science, Technology, Engineering, and Medicine. The annotated entities are further grounded to Wikipedia and Wiktionary. The dataset is organized in the following folders:
- Scientific Entity Annotations: contains annotations for Process, Material, Method, and Data scientific entities in the STEM dataset.
- Scientific Entity Resolution: annotations for the STEM dataset scientific entities with Entity Linking (EL) annotations to Wikipedia and Word Sense Disambiguation (WSD) annotations to Wiktionary.
Provide a detailed description of the following dataset: STEM-ECR
endless forams
This dataset was built from a subset of foraminifer samples from the Yale Peabody Museum (YPM) Coretop Collection and the Natural History Museum, London (NHM) Henry A. Buckley Collection.
Provide a detailed description of the following dataset: endless forams
CronQuestions
CronQuestions, the Temporal KGQA dataset, consists of two parts: a KG with temporal annotations, and a set of natural language questions requiring temporal reasoning.
Provide a detailed description of the following dataset: CronQuestions
i2b2 De-identification Dataset
This dataset contains 1304 de-identified longitudinal medical records describing 296 patients.
Provide a detailed description of the following dataset: i2b2 De-identification Dataset
MATRES
This is the Multi-Axis Temporal RElations for Start-points (MATRES) dataset.
Provide a detailed description of the following dataset: MATRES
DuLeMon
DuLeMon is a large-scale Chinese Long-term Memory Conversation dataset, which simulates long-term memory conversations and focuses on the ability to actively construct and utilize the user's and the bot's persona in a long-term interaction. DuLeMon contains about 27.5k human-human conversations, 449k utterances, and 12k persona grounding sentences. This corpus can be used to explore Long-term Memory Conversation, Personalized Dialogue, and Persona Extraction / Matching / Retrieval.
Provide a detailed description of the following dataset: DuLeMon
BigDetection
**BigDetection** is a new large-scale benchmark for building more general and powerful object detection systems. It leverages the training data from existing datasets ([LVIS](/dataset/lvis), [OpenImages](/dataset/openimages-v6) and [Object365](/dataset/objects365)) with carefully designed principles, curating a larger dataset for improved detector pre-training. The BigDetection dataset has 600 object categories and contains 3.4M training images with 36M object bounding boxes.
Provide a detailed description of the following dataset: BigDetection
MIMIC-IV-ED
MIMIC-IV-ED is a large, freely available database of emergency department (ED) admissions at the Beth Israel Deaconess Medical Center between 2011 and 2019. As of MIMIC-IV-ED v1.0, the database contains 448,972 ED stays. Vital signs, triage information, medication reconciliation, medication administration, and discharge diagnoses are available. All data are deidentified to comply with the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor provision. MIMIC-IV-ED is intended to support a diverse range of education initiatives and research studies.
Provide a detailed description of the following dataset: MIMIC-IV-ED
GD-NLI
This is a set of *debiased* Natural Language Inference (NLI) datasets produced by the paper [Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets](https://arxiv.org/abs/2203.12942). The datasets are constructed by augmenting SNLI or MNLI with data samples that are *generated to mitigate the spurious correlations* in the original datasets. Please visit [this repository](https://github.com/jimmycode/gen-debiased-nli) for more details. Citation:
```
@inproceedings{gen-debiased-nli-2022,
  title = "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets",
  author = "Wu, Yuxiang and Gardner, Matt and Stenetorp, Pontus and Dasigi, Pradeep",
  booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
  month = may,
  year = "2022",
  publisher = "Association for Computational Linguistics",
}
```
Provide a detailed description of the following dataset: GD-NLI
MIMIC-IV
Retrospectively collected medical data has the opportunity to improve patient care through knowledge discovery and algorithm development. Broad reuse of medical data is desirable for the greatest public good, but data sharing must be done in a manner which protects patient privacy. The Medical Information Mart for Intensive Care (MIMIC)-III database provided critical care data for over 40,000 patients admitted to intensive care units at the Beth Israel Deaconess Medical Center (BIDMC). Importantly, MIMIC-III was deidentified, and patient identifiers were removed according to the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor provision. MIMIC-III has been integral in driving large amounts of research in clinical informatics, epidemiology, and machine learning. Here we present MIMIC-IV, an update to MIMIC-III, which incorporates contemporary data and improves on numerous aspects of MIMIC-III. MIMIC-IV adopts a modular approach to data organization, highlighting data provenance and facilitating both individual and combined use of disparate data sources. MIMIC-IV is intended to carry on the success of MIMIC-III and support a broad set of applications within healthcare.
Provide a detailed description of the following dataset: MIMIC-IV
Dataset of Distribution Transformers at Cauca Department (Colombia)
The dataset contains 16,000 electric power distribution transformers from the Cauca Department (Colombia), distributed in rural and urban areas of 42 municipalities. The information covers the years 2019 and 2020 and has 6 categorical variables and 5 continuous variables. The categorical variables are: location, self-protected, removable connector, criticality according to ceraunic level, client type, and installation type. The continuous variables are: transformer power, burn rate, number of users, unsupplied electricity, and secondary line length.
Provide a detailed description of the following dataset: Dataset of Distribution Transformers at Cauca Department (Colombia)
V3C1
The dataset has been designed to represent true web videos in the wild, with good visual quality and diverse content characteristics, and serves as the evaluation basis for the Video Browser Showdown 2019-2021 and the TREC Video Retrieval (TRECVID) Ad-Hoc Video Search tasks 2019-2021. The dataset comes with a shot segmentation (around 1 million shots) for which we analyze content specifics and statistics. Our research shows that the content of V3C1 is very diverse, has no predominant characteristics, and exhibits low self-similarity. It is therefore very well suited for video retrieval evaluations as well as for participants of TRECVID AVS or the VBS.
Provide a detailed description of the following dataset: V3C1
IACC.3
The IACC.3 dataset is approximately 4,600 Internet Archive videos (144 GB, 600 hours) with Creative Commons licenses in MPEG-4/H.264 format, with durations ranging from 6.5 min to 9.5 min and a mean duration of almost 7.8 min. Most videos have some donor-provided metadata available, e.g., title, keywords, and description.
Provide a detailed description of the following dataset: IACC.3
Rope3D
**Roadside Perception 3D** (**Rope3D**) is a dataset for autonomous driving and monocular 3D object detection task consisting of 50k images and over 1.5M 3D objects in various scenes, which are captured under different settings including various cameras with ambiguous mounting positions, camera specifications, viewpoints, and different environmental conditions.
Provide a detailed description of the following dataset: Rope3D
ATLANTIS
**ATLANTIS** is a benchmark for semantic segmentation of waterbody images. This dataset covers a wide range of natural waterbodies such as seas, lakes and rivers, and man-made (artificial) water-related structures such as dams, reservoirs, canals, and piers. ATLANTIS includes 5,195 pixel-wise annotated images split into 3,364 training, 535 validation, and 1,296 testing images. In addition to 35 waterbody labels, this dataset covers 21 general labels such as person, car, road and building.
Provide a detailed description of the following dataset: ATLANTIS
UDE-Office-Home
Unlike the domain adaptation setting, which uses all labeled source and unlabeled target domain examples for training, here the domain examples are divided into two disjoint parts: training and test. UDE-Office-Home is built from Office-Home, so the performance of domain-adapted or domain-expanded models can be evaluated on both the source and target domains.
Provide a detailed description of the following dataset: UDE-Office-Home
UDE-DomainNet
Unlike the domain adaptation setting, which uses all labeled source and unlabeled target domain examples for training, here the domain examples are divided into two disjoint parts: training and test. UDE-DomainNet is built from DomainNet, so the performance of domain-adapted or domain-expanded models can be evaluated on both the source and target domains.
Provide a detailed description of the following dataset: UDE-DomainNet
PlotQA
PlotQA is a VQA dataset with 28.9 million question-answer pairs grounded over 224,377 plots on data from real-world sources, with questions based on crowd-sourced question templates. Existing synthetic datasets (FigureQA, DVQA) for reasoning over plots do not contain variability in data labels, real-valued data, or complex reasoning questions. Consequently, models proposed for these datasets do not fully address the challenge of reasoning over plots. In particular, they assume that the answer comes either from a small fixed-size vocabulary or from a bounding box within the image. However, in practice this is an unrealistic assumption because many questions require reasoning and thus have real-valued answers which appear neither in a small fixed-size vocabulary nor in the image. In this work, we aim to bridge this gap between existing datasets and real-world plots by introducing PlotQA. Notably, 80.76% of the questions in PlotQA have out-of-vocabulary (OOV) answers, i.e., answers that do not appear in a fixed vocabulary.
Provide a detailed description of the following dataset: PlotQA
IAM Dataset
We introduce a large and comprehensive dataset to facilitate the study of several essential argument mining (AM) tasks for debating systems. In our work, we first review the existing subtasks (claim extraction, stance classification, evidence extraction), and then propose two integrated argument mining tasks: claim extraction with stance classification (CESC) and claim-evidence pair extraction (CEPE).
Provide a detailed description of the following dataset: IAM Dataset
AWS Documentation
We present the AWS documentation corpus, an open-book QA dataset which contains 25,175 documents along with 100 matched questions and answers. These questions are inspired by the authors' interactions with real AWS customers and the questions they asked about AWS services. The data was anonymized and aggregated. All questions in the dataset have a valid, factual and unambiguous answer within the accompanying documents; we deliberately avoided questions that are ambiguous, incomprehensible, opinion-seeking, or not clearly a request for factual information. All questions, answers and accompanying documents in the dataset were annotated by the authors. There are two types of answers: text and yes-no-none (YNN) answers. Text answers range from a few words to a full paragraph, sourced from a continuous block of words in a document or from different locations within the same document. Every question in the dataset has a matched text answer. Yes-no-none (YNN) answers can be yes, no, or none depending on the type of question. For example, the question “Can I stop a DB instance that has a read replica?” has a clear yes or no answer, but the question “What is the maximum number of rows in a dataset in Amazon Forecast?” is not a yes or no question and therefore has “None” as the YNN answer. 23 questions have ‘Yes’ YNN answers, 10 questions have ‘No’ YNN answers and 67 questions have ‘None’ YNN answers.
Provide a detailed description of the following dataset: AWS Documentation
TEM nanowire morphologies for classification and segmentation
TEM image dataset containing four nanowire morphologies of bio-derived protein nanowires and synthetic peptide nanowires. The peptide/protein nanowires used in this study were synthesized and imaged by Brian Montz in Prof. Todd Emrick's research group at the Department of Polymer Science and Engineering, University of Massachusetts Amherst. We acknowledge financial support from the U.S. National Science Foundation, Grants NSF DMREF #1921839 and DMREF #1921871. Nanowires were classified into one of four morphologies: bundle, singular, dispersed or network. Each morphology contains 100 images (jpg files). For the dispersed and network morphologies, because these two morphologies are harder to visually distinguish, we have created manual segmentation labels of the nanowires (included in these two morphology folders as png files). Percolation analysis was performed on these manually segmented nanowires to provide a quantitative metric on whether the nanowires form a network in the image. encoders_trained_with_optimized_hyperparameter.zip contains 4 sets of encoders trained with either SimCLR or Barlow-Twins self-supervised methods on either generic TEM images or generic everyday photographic images (each with 5 replicates with different random seeds) with optimized hyperparameters. seg_mask_5_resolutions.zip contains ground-truth 2D binary encodings of segmented nanowires at 5 resolutions.
Provide a detailed description of the following dataset: TEM nanowire morphologies for classification and segmentation
Battery test data - fast formation study
Forty prismatic lithium-ion pouch cells were built at the University of Michigan Battery Laboratory. The cells have a nominal capacity of 2.36 Ah and comprise an NCM111 cathode and graphite anode. Cells were formed using two different formation protocols: "fast formation" and "baseline formation". After formation, cells were put under cycle life testing at room temperature and 45°C. Cells were cycled until the discharge capacities dropped below 50% of the initial capacities. Data was collected by the cycler equipment (Maccor) during both the formation process and the cycling test. Data was processed in the Voltaiq software and subsequently exported as .csv files.
Provide a detailed description of the following dataset: Battery test data - fast formation study
TRECVID-AVS16 (IACC.3)
Internet Archive videos (IACC.3) under Creative Commons licenses. The test video collection for TRECVID-AVS 2016 through 2018 contains 335,944 web video clips (600 hours).
Provide a detailed description of the following dataset: TRECVID-AVS16 (IACC.3)
TRECVID-AVS17 (IACC.3)
Internet Archive videos (IACC.3) under Creative Commons licenses. The test video collection for TRECVID-AVS 2016 through 2018 contains 335,944 web video clips (600 hours).
Provide a detailed description of the following dataset: TRECVID-AVS17 (IACC.3)
TRECVID-AVS18 (IACC.3)
Internet Archive videos (IACC.3) under Creative Commons licenses. The test video collection for TRECVID-AVS 2016 through 2018 contains 335,944 web video clips (600 hours).
Provide a detailed description of the following dataset: TRECVID-AVS18 (IACC.3)
TRECVID-AVS19 (V3C1)
The dataset has been designed to represent true web videos in the wild, with good visual quality and diverse content characteristics. It is the test video collection for TRECVID-AVS 2019 through 2021 and contains 1,082,649 web video clips, with even more diverse content, no predominant characteristics, and low self-similarity.
Provide a detailed description of the following dataset: TRECVID-AVS19 (V3C1)
TRECVID-AVS20 (V3C1)
The dataset has been designed to represent true web videos in the wild, with good visual quality and diverse content characteristics. It is the test video collection for TRECVID-AVS 2019 through 2021 and contains 1,082,649 web video clips, with even more diverse content, no predominant characteristics, and low self-similarity.
Provide a detailed description of the following dataset: TRECVID-AVS20 (V3C1)
TRECVID-AVS21 (V3C1)
The dataset has been designed to represent true web videos in the wild, with good visual quality and diverse content characteristics. It is the test video collection for TRECVID-AVS 2019 through 2021 and contains 1,082,649 web video clips, with even more diverse content, no predominant characteristics, and low self-similarity.
Provide a detailed description of the following dataset: TRECVID-AVS21 (V3C1)
RealMCVSR
Our RealMCVSR dataset provides real-world HD video triplets concurrently recorded by Apple iPhone 12 Pro Max equipped with triple cameras having fixed focal lengths: ultra-wide (30mm), wide-angle (59mm), and telephoto (147mm). To concurrently record video triplets, we built an iOS app that provides full control over exposure parameters (i.e., shutter speed and ISO) of the cameras. For recording each scene, we set the cameras in the auto-exposure mode, where the shutter speeds of the three cameras are synced to avoid varying motion blur across a video triplet. ISOs are adjusted accordingly for each camera to pick up the same exposure. Each video is saved in the MOV format using HEVC/H.265 encoding with the HD resolution (1080 x 1920). The dataset contains triplets of 161 video clips with 23,107 frames in total. The video triplets are split into training, validation, and testing sets, each of which has 137, 8, and 16 triplets with 19,426, 1,141, and 2,540 frames, respectively.
Provide a detailed description of the following dataset: RealMCVSR
ActorShift
**ActorShift** is a dataset where the domain shift comes from the change in actor species: we use humans in the source domain and animals in the target domain. This causes large variances in the appearance and motion of activities. For the corresponding dataset we select 1,305 videos of 7 human activity classes from Kinetics-700 as the source domain: sleeping, watching tv, eating, drinking, swimming, running and opening a door. For the target domain we collect 200 videos from YouTube of animals performing the same activities. We divide them into 35 videos for training (5 per class) and 165 for evaluation. The target domain data is scarce, meaning there is the additional challenge of adapting to the target domain with few unlabeled examples.
Provide a detailed description of the following dataset: ActorShift
Niramai Oncho Dataset
Onchocerciasis is causing blindness in over half a million people in the world today. Drug development for the disease is crippled because there is no way of measuring the effectiveness of a drug without an invasive procedure. Drug efficacy measurement through assessment of the viability of onchocerca worms requires the patients to undergo nodulectomy, which is an invasive, expensive, time-consuming, skill-dependent, infrastructure-dependent and lengthy process. The motivation of the Niramai Oncho Dataset is to develop algorithms that can detect Onchocerca worms non-invasively using thermal imaging. The dataset consists of both thermal images and videos captured for each nodule site during imaging of a participant. Histopathological confirmation of the presence of a female adult worm is used to obtain the ground truth. In total, the dataset consists of 125 participants' data with 192 palpable nodules. Out of the 192 palpable nodules, 101 correspond to live female nodules and the remaining 91 correspond to dead nodules.
Provide a detailed description of the following dataset: Niramai Oncho Dataset
SEL
The semantic line (SEL) dataset contains 1,750 outdoor images in total, which are split into 1,575 training and 175 testing images. Each semantic line is annotated by the coordinates of the two end-points on an image boundary. If an image has a single dominant line, it is set as the ground truth primary semantic line. If an image has multiple semantic lines, the line with the best rank by human annotators is set as the ground-truth primary line, and the others as additional ground-truth semantic lines. In SEL, 61% of the images contain multiple semantic lines.
Provide a detailed description of the following dataset: SEL
Persian Preschool Cognition Speech
Data collection was conducted by asking some adults from social media and some students from an elementary school to participate in our experiment. Table 1 shows the number of samples gathered for recognizing each color. Because two words are used for black in Persian, there are more black samples. In addition, because color recognition is a rapid automatized naming (RAN) task, sequences of data were also gathered; Table 2 gives the number of sequence samples for the colors. For the meaningless words, an average of 12 recordings was gathered for each word (there are 40 meaningless words in this task).
Provide a detailed description of the following dataset: Persian Preschool Cognition Speech
MedMCQA
**MedMCQA** is a large-scale Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, with an average question token length of 12.77 and high topical diversity.
Provide a detailed description of the following dataset: MedMCQA
InstaOrder
InstaOrder can be used to understand the geometrical relationships of instances in an image. The dataset consists of 2.9M annotations of geometric orderings for class-labeled instances in 101K natural scenes. The scenes were annotated by 3,659 crowd-workers regarding (1) occlusion order that identifies occluder/occludee and (2) depth order that describes ordinal relations that consider relative distance from the camera.
Provide a detailed description of the following dataset: InstaOrder
GDA
The gene-disease associations corpus contains 30,192 titles and abstracts from PubMed articles that have been automatically labelled for genes, diseases and gene-disease associations via distant supervision. The test set comprises 1,000 of these examples. It is common to hold out a random 20% of the examples in the train set as a validation set.
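A minimal sketch of this conventional hold-out split, assuming the training examples are already loaded into a Python list (loading and field names are out of scope here):

```python
import random

def split_train_val(train_examples, val_fraction=0.2, seed=42):
    """Hold out a random fraction of the training examples as a validation set."""
    examples = list(train_examples)
    random.Random(seed).shuffle(examples)
    n_val = int(len(examples) * val_fraction)
    return examples[n_val:], examples[:n_val]  # (train, validation)
```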
Provide a detailed description of the following dataset: GDA
Relative Human
Relative Human (RH) contains multi-person in-the-wild RGB images with rich human annotations, including: depth layers (the relative depth relationship/ordering between all people in the image); age group classification (adults, teenagers, kids, babies); and others (gender, bounding box, 2D pose).
Provide a detailed description of the following dataset: Relative Human
SpokenSTS
Spoken versions of the Semantic Textual Similarity (STS) dataset for testing semantic sentence-level embeddings. It contains thousands of sentence pairs annotated by humans for semantic similarity. The spoken sentences can be used in sentence embedding models to test whether your model learns to capture sentence semantics. All sentences are available in 6 synthetic WaveNet voices, and a subset (5%) in 4 real voices recorded in a sound-attenuated booth. Code to train a visually grounded spoken sentence embedding model and evaluation code is available at https://github.com/DannyMerkx/speech2image/tree/Interspeech21
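A minimal sketch of the standard STS-style evaluation implied above, assuming you already have one embedding per spoken sentence and one human similarity rating per pair (the embedding model itself is out of scope here):

```python
import numpy as np
from scipy.stats import spearmanr

def sts_score(emb_a, emb_b, human_scores):
    """Correlate cosine similarities of sentence-pair embeddings with human ratings.

    emb_a, emb_b: arrays of shape (n_pairs, dim); human_scores: length n_pairs.
    Returns the Spearman rank correlation, the usual STS metric.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)
    return spearmanr(cosine, human_scores).correlation
```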
Provide a detailed description of the following dataset: SpokenSTS
NAS Dataset for DIP
Dataset for our CVPR paper: "ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image Prior".
Provide a detailed description of the following dataset: NAS Dataset for DIP
CNN Filter DB
A database of over 1.4 billion 3x3 convolution filters extracted from hundreds of diverse CNN models with relevant meta information.
Provide a detailed description of the following dataset: CNN Filter DB
Heavy Snowfall
We introduce an object detection dataset in challenging adverse weather conditions covering 12,000 samples in real-world driving scenes and 1,500 samples in controlled weather conditions within a fog chamber. The dataset includes different weather conditions like fog, snow, and rain and was acquired over more than 10,000 km of driving in northern Europe. In total, 100k objects were labeled with accurate 2D and 3D bounding boxes. The main contributions of this dataset are:
- We provide a proving ground for a broad range of algorithms covering signal enhancement, domain adaptation, object detection, or multi-modal sensor fusion, focusing on the learning of robust redundancies between sensors, especially if they fail asymmetrically in different weather conditions.
- The dataset was created with the initial intention to showcase methods which learn robust redundancies between the sensors and enable raw-data sensor fusion in case of asymmetric sensor failure induced by adverse weather effects.
- In our case we departed from proposal-level fusion and applied an adaptive fusion driven by measurement entropy, enabling detection even under unknown adverse weather effects. This method outperforms other reference fusion methods, which can even drop below single-image methods.
- Please check out our paper for more information.
Provide a detailed description of the following dataset: Heavy Snowfall
Light Snowfall
We introduce an object detection dataset in challenging adverse weather conditions covering 12,000 samples in real-world driving scenes and 1,500 samples in controlled weather conditions within a fog chamber. The dataset includes different weather conditions like fog, snow, and rain and was acquired over more than 10,000 km of driving in northern Europe. In total, 100k objects were labeled with accurate 2D and 3D bounding boxes. The main contributions of this dataset are:
- We provide a proving ground for a broad range of algorithms covering signal enhancement, domain adaptation, object detection, or multi-modal sensor fusion, focusing on the learning of robust redundancies between sensors, especially if they fail asymmetrically in different weather conditions.
- The dataset was created with the initial intention to showcase methods which learn robust redundancies between the sensors and enable raw-data sensor fusion in case of asymmetric sensor failure induced by adverse weather effects.
- In our case we departed from proposal-level fusion and applied an adaptive fusion driven by measurement entropy, enabling detection even under unknown adverse weather effects. This method outperforms other reference fusion methods, which can even drop below single-image methods.
- Please check out our paper for more information.
Provide a detailed description of the following dataset: Light Snowfall
Clear Weather
We introduce an object detection dataset in challenging adverse weather conditions covering 12,000 samples in real-world driving scenes and 1,500 samples in controlled weather conditions within a fog chamber. The dataset includes different weather conditions like fog, snow, and rain and was acquired over more than 10,000 km of driving in northern Europe. In total, 100k objects were labeled with accurate 2D and 3D bounding boxes. The main contributions of this dataset are:
- We provide a proving ground for a broad range of algorithms covering signal enhancement, domain adaptation, object detection, or multi-modal sensor fusion, focusing on the learning of robust redundancies between sensors, especially if they fail asymmetrically in different weather conditions.
- The dataset was created with the initial intention to showcase methods which learn robust redundancies between the sensors and enable raw-data sensor fusion in case of asymmetric sensor failure induced by adverse weather effects.
- In our case we departed from proposal-level fusion and applied an adaptive fusion driven by measurement entropy, enabling detection even under unknown adverse weather effects. This method outperforms other reference fusion methods, which can even drop below single-image methods.
- Please check out our paper for more information.
Provide a detailed description of the following dataset: Clear Weather
CORD
OCR is inevitably linked to NLP since its final output is text. Advances in document intelligence are driving the need for a unified technology that integrates OCR with various NLP tasks, especially semantic parsing. Since OCR and semantic parsing have been studied as separate tasks so far, the datasets for each task on its own are rich, while those for integrated post-OCR parsing tasks are relatively insufficient. In this study, we publish a consolidated dataset for receipt parsing as a first step towards post-OCR parsing tasks. The dataset consists of thousands of Indonesian receipts, which contain images and box/text annotations for OCR, and multi-level semantic labels for parsing. The proposed dataset can be used to address various OCR and parsing tasks.
Provide a detailed description of the following dataset: CORD
Soil and Plant X-ray CT data with semantic annotations
Leaves from genetically unique Juglans regia plants were scanned using X-ray micro-computed tomography (microCT) on the X-ray μCT beamline (8.3.2) at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA, USA. Soil samples were collected in Fall of 2017 from the riparian oak forest located at the Russell Ranch Sustainable Agricultural Institute at the University of California Davis. The soil was sieved through a 2 mm mesh and was air dried before imaging. A single soil aggregate was scanned at 23 keV using the 10x objective lens with a pixel resolution of 650 nanometers on beamline 8.3.2 at the ALS. Additionally, a drought-stressed almond flower bud (Prunus dulcis) from a plant housed at the University of California, Davis, was scanned using a 4x lens with a pixel resolution of 1.72 µm on beamline 8.3.2 at the ALS.

Raw tomographic image data was reconstructed using TomoPy. Reconstructions were converted to 8-bit tif or png format using ImageJ or the PIL package in Python before further processing. Images were annotated using Intel's Computer Vision Annotation Tool (CVAT) and ImageJ. Both CVAT and ImageJ are free to use and open source. Leaf images were annotated following Théroux-Rancourt et al. (2020). Specifically, hand labeling was done directly in ImageJ by drawing around each tissue, with 5 images annotated per leaf. Care was taken to cover a range of anatomical variation to help improve the generalizability of the models to other leaves. All slices were labeled by Dr. Mina Momayyezi and Fiona Duong.

To annotate the flower bud and soil aggregate, images were imported into CVAT. The exterior border of the bud (i.e. bud scales) and flower were annotated in CVAT and exported as masks. Similarly, the exterior of the soil aggregate and particulate organic matter identified by eye were annotated in CVAT and exported as masks. To annotate air spaces in both the bud and soil aggregate, images were imported into ImageJ. A gaussian blur was applied to the image to decrease noise and then the air space was segmented using thresholding. After applying the threshold, the selected air space region was converted to a binary image with white representing the air space and black representing everything else. This binary image was overlaid upon the original image and the air space within the flower bud and aggregate was selected using the "free hand" tool. Air space outside of the region of interest for both image sets was eliminated. The quality of the air space annotation was then visually inspected for accuracy against the underlying original image; incomplete annotations were corrected using the brush or pencil tool to paint missing air space white and incorrectly identified air space black. Once the annotation was satisfactorily corrected, the binary image of the air space was saved. Finally, the annotations of the bud and flower or aggregate and organic matter were opened in ImageJ and the associated air space mask was overlaid on top of them, forming a three-layer mask suitable for training the fully convolutional network. All labeling of the soil aggregate and soil aggregate images was done by Dr. Devin Rippner.

These images and annotations are for training deep learning models to identify different constituents in leaves, almond buds, and soil aggregates.

Limitations: For the walnut leaves, some tissues (stomata, etc.) are not labeled and only represent a small portion of a full leaf. Similarly, both the almond bud and the aggregate represent just one single sample of each. The bud tissues are only divided up into bud scales, flower, and air space; many other tissues remain unlabeled. For the soil aggregate, the annotated labels were done by eye with no actual chemical information, so particulate organic matter identification may be incorrect.
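A minimal sketch of the blur-and-threshold air-space segmentation step described above, assuming a reconstructed grayscale slice on disk (the filter size and Otsu threshold are illustrative choices; the original work did this interactively in ImageJ):

```python
import numpy as np
from skimage import io, filters

def segment_air_space(slice_path, sigma=2.0):
    """Approximate the air-space mask for one reconstructed CT slice.

    Applies a Gaussian blur to suppress noise, then thresholds the blurred
    image to produce a binary mask: True where air space, False elsewhere.
    Assumes air space appears darker than the surrounding material.
    """
    img = io.imread(slice_path, as_gray=True).astype(np.float32)
    blurred = filters.gaussian(img, sigma=sigma)
    threshold = filters.threshold_otsu(blurred)
    return blurred < threshold
```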
Provide a detailed description of the following dataset: Soil and Plant X-ray CT data with semantic annotations
NILoc
IMU and WiFi data, along with aligned visual-SLAM ground-truth locations, from a smartphone carried during natural human motion.
Provide a detailed description of the following dataset: NILoc
OPD Dataset
The link includes both our OPDSynth and OPDReal datasets. For OPDSynth, we select objects with openable parts from PartNet-Mobility, an existing dataset of articulated 3D models. For OPDReal, we reconstruct 3D polygonal meshes for articulated objects in real indoor environments and annotate their parts and articulation information.
Provide a detailed description of the following dataset: OPD Dataset
NKL
NKL (short for NanKai Lines) is a dataset for semantic line detection. Semantic lines are meaningful line structures that outline the conceptual structure of natural images. The NKL dataset contains 5,000 images of various scenes. Each of these images is annotated by multiple skilled human annotators. The dataset is split into training and validation subsets. There are 4,000 images in the training set and 1,000 in the validation set.
Provide a detailed description of the following dataset: NKL
MedQA-USMLE
Multiple choice question answering based on the United States Medical License Exams (USMLE). The dataset is collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively.
Provide a detailed description of the following dataset: MedQA-USMLE
CurveLanes
CurveLanes is a new benchmark lane detection dataset with 150K lane images for difficult scenarios such as curves and multiple lanes in traffic lane detection. It was collected in real urban and highway scenarios in multiple cities in China. It is the largest lane detection dataset so far and establishes a more challenging benchmark for the community. We separate the 150K images into three parts: train: 100K, val: 20K, and test: 30K. The resolution of most images in this dataset is 2650×1440. For each image, we manually annotate all lanes in the image with natural cubic splines. All images are carefully selected so that most of them contain at least one curve lane. More difficult scenarios such as S-curves, Y-lanes, night scenes and multi-lane roads (more than 4 lane lines) can be found in this dataset.
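A minimal sketch of representing one lane annotation as a natural cubic spline, as described above, assuming the annotation is an ordered list of (x, y) keypoints along the lane (the keypoint values here are illustrative, not taken from the dataset):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical keypoints clicked along one lane, ordered bottom-to-top of the image.
ys = np.array([1440, 1200, 960, 720, 480], dtype=float)
xs = np.array([1300, 1280, 1230, 1150, 1040], dtype=float)

# Natural cubic spline x = f(y): second derivative is zero at both endpoints.
lane = CubicSpline(ys[::-1], xs[::-1], bc_type="natural")

# Densely sample the spline to draw or rasterize the lane.
y_dense = np.linspace(ys.min(), ys.max(), 200)
x_dense = lane(y_dense)
```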
Provide a detailed description of the following dataset: CurveLanes
HC-STVG1
The newly proposed HC-STVG task aims to localize the target person spatio-temporally in an untrimmed video. For this task, we collect a new benchmark dataset with spatio-temporal annotations of the target persons in complex multi-person scenes, together with full interaction and rich action information.
Provide a detailed description of the following dataset: HC-STVG1
HC-STVG2
We have added data and cleaned the labels in HC-STVG to build HC-STVG 2.0. While the original database contained 5,660 videos, the new database has been re-annotated and modified and now contains over 16,000 videos for this challenge.
Provide a detailed description of the following dataset: HC-STVG2
FairytaleQA
**FairytaleQA** is a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Annotated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly story narratives, covering seven types of narrative elements or relations. It can support narrative Question Generation (QG) and Narrative Question Answering (QA) tasks.
Provide a detailed description of the following dataset: FairytaleQA
NTIC Screening Dataset
In the last two years, millions of lives have been lost due to COVID-19. Despite a year of vaccination programmes, hospitalization rates and deaths are still high due to the new variants of COVID-19. Stringent guidelines and COVID-19 screening measures such as temperature checks and mask checks at all public places are helping reduce the spread of COVID-19. Visual inspections to enforce these screening measures can be taxing and erroneous; automated inspection ensures effective and accurate screening. For automated screening, thermal-based screening is effective as it is illumination independent and can work even under no lighting conditions. The NTIC screening dataset consists of thermal images of persons walking into public premises like offices, malls and railway stations. The ground truth consists of annotations of human faces and whether they are wearing masks or not. Broadly, the dataset is divided into 3 sub-datasets: (1) Thermal Surveillance Dataset: 902 thermal images with 1,354 people wearing masks and 213 people without masks; (2) Augmented Surveillance Dataset: 543 images with 434 people wearing masks and 109 people without masks; (3) Lighting Dataset: 420 thermal images and their corresponding visual images.
Provide a detailed description of the following dataset: NTIC Screening Dataset
LARQS
Word embedding is a modern distributed word representation approach widely used in many natural language processing tasks. Converting the vocabulary in a legal document into a word embedding model facilitates subjecting legal documents to machine learning, deep learning, and other algorithms, and subsequently performing downstream natural language processing tasks such as document classification, contract review, and machine translation. The most common and practical approach to accuracy evaluation of a word embedding model uses a benchmark set with linguistic rules or relationships between words to perform analogy reasoning via algebraic calculation. This paper proposes establishing a Legal Analogical Reasoning Questions Set (LARQS) of 1,134 questions from the 2,388 Chinese Codex corpus using five kinds of legal relations, which are then used to evaluate the accuracy of Chinese word embedding models. Moreover, we found that legal relations might be ubiquitous in word embedding models.
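A minimal sketch of the algebraic analogy evaluation described above, assuming word vectors are available as a dict mapping token to NumPy array (the vocabulary and the analogy question used here are illustrative placeholders, not actual LARQS entries):

```python
import numpy as np

def answer_analogy(vectors, a, b, c, exclude_query=True):
    """Solve 'a is to b as c is to ?' by vector arithmetic (b - a + c),
    returning the vocabulary word whose embedding is most cosine-similar."""
    target = vectors[b] - vectors[a] + vectors[c]
    target = target / np.linalg.norm(target)
    best_word, best_score = None, -np.inf
    for word, vec in vectors.items():
        if exclude_query and word in (a, b, c):
            continue
        score = float(np.dot(vec, target) / np.linalg.norm(vec))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# An analogy question is counted correct when the returned word matches the
# expected fourth term from the benchmark set.
```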
Provide a detailed description of the following dataset: LARQS
NightCity
The largest real-world night-time semantic segmentation dataset with pixel-level labels.
Provide a detailed description of the following dataset: NightCity
ShipsEar
This contribution presents a database of underwater sounds produced by vessels of various types. Besides the sound recordings, the database contains details of the conditions under which each recording was obtained: type of vessel, location of the recording equipment, weather conditions, etc. For its realization, a methodology for recording sounds and gathering additional information has been established, which will facilitate its use by the research community and the expansion of the database in the future. The sounds are recorded in shallow waters and in real conditions; therefore, the recordings contain both natural and anthropogenic environmental noise. It aims to provide a database of real sounds that researchers can use, for example, to train boat detectors and classifiers usable in monitoring maritime traffic.
Provide a detailed description of the following dataset: ShipsEar
TrojVQA
A collection of 840 pretrained VQA models which may be regular “clean” models or malicious “backdoored” models which have been trained to include a secret backdoor trigger and behavior. This collection includes models with traditional single-key backdoors as well as Dual-Key Multimodal Backdoors. For more information, see our work “Dual-Key Multimodal Backdoors for Visual Question Answering" (https://arxiv.org/abs/2112.07668). This dataset is inspired by and modeled after those created by TrojAI (https://arxiv.org/abs/2003.07233). It is intended to enable the development of defensive algorithms to detect and/or purify backdoored VQA models.
Provide a detailed description of the following dataset: TrojVQA
SEN
SEN is a novel publicly available human-labelled dataset for training and testing machine learning algorithms for the problem of entity-level sentiment analysis of political news headlines. The dataset consists of 3,819 human-labelled political news headlines from several major online media outlets in English and Polish.
Provide a detailed description of the following dataset: SEN
SeaDronesSee
SeaDronesSee is a large-scale data set aimed at helping develop systems for Search and Rescue (SAR) using Unmanned Aerial Vehicles (UAVs) in maritime scenarios. Building highly complex autonomous UAV systems that aid in SAR missions requires robust computer vision algorithms to detect and track objects or persons of interest. This data set provides three tracks: object detection, single-object tracking and multi-object tracking. Each track consists of its own data set and leaderboard. Object Detection: 5,630 training images, 859 validation images, 1,796 test images. Single-Object Tracking: 58 training video clips, 70 validation video clips and 80 test video clips. Multi-Object Tracking: 22 video clips with 54,105 frames. Additionally, we provide multi-spectral footage: Multi-Spectral Object Detection: 246 training images, 61 validation images, 125 test images. We will continue to update this data set to make it more versatile and reflect real-world requirements in dynamic situations.
Provide a detailed description of the following dataset: SeaDronesSee
DGTA-VisDrone
Object detection data set created with the DeepGTAV engine, which is based on the video game GTAV. It is one of the three data sets proposed in the paper. This data set is modeled after the VisDrone data set, with almost the same classes.
Provide a detailed description of the following dataset: DGTA-VisDrone
DGTA-SeaDronesSee
Object detection data set created with the DeepGTAV engine, which is based on the video game GTAV. It is one of the three data sets proposed in the paper. This data set is modeled after the SeaDronesSee data set, with almost the same classes.
Provide a detailed description of the following dataset: DGTA-SeaDronesSee
DGTA-Cattle
An object detection data set created with the DeepGTAV engine, which is based on the video game GTAV. It is one of the three data sets proposed in the paper. The data set is modeled after the Cattle data set and uses almost the same classes.
Provide a detailed description of the following dataset: DGTA-Cattle
Cattle
The Cattle data set was introduced in a separate paper. We (not the original authors) created the train-val-test split.
Provide a detailed description of the following dataset: Cattle
Pistachio Image Dataset
Citation Request: 1. OZKAN IA., KOKLU M. and SARACOGLU R. (2021). Classification of Pistachio Species Using Improved K-NN Classifier. Progress in Nutrition, Vol. 23, N. 2. DOI:10.23751/pn.v23i2.9686. (Open Access) https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/9686/9178 ABSTRACT: In order to preserve the economic value of pistachio nuts, which have an important place in the agricultural economy, the efficiency of post-harvest industrial processes is very important. To achieve this efficiency, new methods and technologies are needed for the separation and classification of pistachios. Different pistachio species address different markets, which increases the need for classification of pistachio species. This study aims to develop a classification model, different from traditional separation methods, based on image processing and artificial intelligence capable of providing the required classification. A computer vision system has been developed to distinguish two species of pistachios with different characteristics that address different market types. 2,148 sample images of these two kinds of pistachios were taken with a high-resolution camera. Image processing techniques, segmentation and feature extraction were applied to the obtained images of the pistachio samples. A pistachio dataset with sixteen attributes was created. An advanced classifier based on the k-NN method, a simple and successful classifier, and principal component analysis was designed on the obtained dataset. In this study, a multi-level system including feature extraction, dimensionality reduction and dimension weighting stages is proposed. Experimental results showed that the proposed approach achieved a classification success of 94.18%. The presented high-performance classification model meets an important need for the separation of pistachio species and increases the economic value of the species. In addition, the developed model is important in terms of its applicability to similar studies. Keywords: Classification, Image processing, k nearest neighbor classifier, Pistachio species 2. SINGH D, TASPINAR YS, KURSUN R, CINAR I, KOKLU M, OZKAN IA, LEE H-N. (2022). Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models. Electronics, 11 (7), 981. https://doi.org/10.3390/electronics11070981. (Open Access) ABSTRACT: Pistachio is a shelled fruit from the Anacardiaceae family. The homeland of the pistachio is the Middle East. Kirmizi and Siirt pistachios are the major types grown and exported in Turkey. Since the prices, tastes, and nutritional values of these types differ, the type of pistachio becomes important when it comes to trade. This study aims to identify these two types of pistachios, which are frequently grown in Turkey, by classifying them via convolutional neural networks. Within the scope of the study, images of the Kirmizi and Siirt pistachio types were obtained through a computer vision system. The dataset includes a total of 2,148 images: 1,232 of the Kirmizi type and 916 of the Siirt type. Three different convolutional neural network models were used to classify these images. The models were trained using the transfer learning method, with AlexNet and the pre-trained models VGG16 and VGG19. The dataset is divided into 80% training and 20% test.
As a result of the performed classifications, the success rates obtained with the AlexNet, VGG16, and VGG19 models are 94.42%, 98.84%, and 98.14%, respectively. The models' performances were evaluated through sensitivity, specificity, precision, and F-1 score metrics. In addition, ROC curves and AUC values were used in the performance evaluation. The highest classification success was achieved with the VGG16 model. The obtained results reveal that these methods can be used successfully to determine pistachio types. Keywords: pistachio; genetic varieties; machine learning; deep learning; food recognition
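As an illustration of the classical pipeline described in the first abstract, the sketch below trains a k-NN classifier on top of a PCA projection of the sixteen morphological attributes; the CSV file name, the "Class" column name and the hyperparameters are assumptions for illustration, not values fixed by the dataset.

```python
# Minimal sketch of a PCA + k-NN pipeline over the 16-attribute pistachio table.
# File name, "Class" column name and hyperparameters are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("Pistachio_16_Features_Dataset.csv")  # assumed file name
X = df.drop(columns=["Class"])   # 16 numeric shape/size attributes
y = df["Class"]                  # the two pistachio species labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize, reduce dimensionality, then classify with k-NN
model = make_pipeline(StandardScaler(), PCA(n_components=8),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```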
Provide a detailed description of the following dataset: Pistachio Image Dataset
SLNET
**SLNET** is a collection of third-party Simulink models. It is curated by mining open-source repositories (GitHub and MATLAB Central) using SLNET-Miner (https://github.com/50417/SLNet_Miner).
Provide a detailed description of the following dataset: SLNET
SUN-SEG-Easy (Unseen)
The SUN-SEG dataset is a high-quality per-frame annotated VPS dataset, which includes 158,690 frames from the famous SUN dataset. It extends the labels with diverse types, i.e., object mask, boundary, scribble, polygon, and visual attribute. It also includes the pathological information from the original SUN dataset, such as pathological classification labels, location information, and shape information. Notably, the original SUN dataset has 113 colonoscopy videos, including 100 positive cases with 49,136 polyp frames and 13 negative cases with 109,554 non-polyp frames. These videos are manually trimmed into 378 positive and 728 negative short clips while maintaining their intrinsic consecutive relationship. This pre-processing ensures each clip lasts around 3~11 s at a real-time frame rate (i.e., 30 fps), which increases the fault-tolerant margin for various algorithms and devices. To this end, the re-organized SUN-SEG contains 1,106 short video clips with 158,690 video frames in total, offering a solid foundation for building a representative benchmark. As such, it yields the final version of the SUN-SEG dataset, which includes 49,136 polyp frames (i.e., the positive part) and 109,554 non-polyp frames (i.e., the negative part) taken from 285 and 728 different colonoscopy video clips, respectively, as well as the corresponding annotations.
Provide a detailed description of the following dataset: SUN-SEG-Easy (Unseen)
Example EPCIS Event Chain
This is an example data set for a hypothetical electronic products supply network. The supply network consists of six actors. A simple electronic product is assembled by Manufacturer C with components from Supplier A and Supplier B. The product is sold to consumers by Retailer D or Retailer E. Finally, at the end of life, the product is returned to be recycled by Reseller F. This dataset may be used as a (semi) realistic and non-trivial chain (actually DAG) of business events, expressed in the GS1 EPCIS standard, to demonstrate tracking in e.g. provenance or recall use cases.
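To make the event format concrete, the sketch below builds one hypothetical EPCIS 2.0 ObjectEvent as a plain Python dictionary and serializes it to JSON; the EPC, location identifiers and business step are illustrative assumptions, not values taken from the actual data set.

```python
# Minimal sketch of a single EPCIS 2.0 ObjectEvent, serialized as JSON.
# All identifiers (EPC, locations, business step) are made up for illustration
# and do not come from the example data set itself.
import json

object_event = {
    "type": "ObjectEvent",
    "eventTime": "2023-01-15T10:30:00.000Z",
    "eventTimeZoneOffset": "+01:00",
    "epcList": ["urn:epc:id:sgtin:4012345.011111.987"],  # the tracked product
    "action": "OBSERVE",
    "bizStep": "shipping",                # e.g. Manufacturer C ships to Retailer D
    "disposition": "in_transit",
    "readPoint": {"id": "urn:epc:id:sgln:4012345.00001.0"},
}

print(json.dumps(object_event, indent=2))
```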
Provide a detailed description of the following dataset: Example EPCIS Event Chain
P-DukeMTMC-reID
P-DukeMTMC-reID is a modified version of the DukeMTMC-reID dataset. There are 12,927 images (665 identities) in the training set, 2,163 images (634 identities) in the query set, and 9,053 images in the gallery set.
Provide a detailed description of the following dataset: P-DukeMTMC-reID
Occluded-DukeMTMC
Occluded-DukeMTMC contains 15,618 training images, 17,661 gallery images, and 2,210 occluded query images. It is used to evaluate methods for the occluded person re-identification problem; in the original work it demonstrates the superiority of the proposed method, which does not require any manual cropping as pre-processing.
Provide a detailed description of the following dataset: Occluded-DukeMTMC
SUN-SEG-Hard (Unseen)
The SUN-SEG dataset is a high-quality per-frame annotated VPS dataset, which includes 158,690 frames from the famous SUN dataset. It extends the labels with diverse types, i.e., object mask, boundary, scribble, polygon, and visual attribute. It also includes the pathological information from the original SUN dataset, such as pathological classification labels, location information, and shape information. Notably, the original SUN dataset has 113 colonoscopy videos, including 100 positive cases with 49,136 polyp frames and 13 negative cases with 109,554 non-polyp frames. These videos are manually trimmed into 378 positive and 728 negative short clips while maintaining their intrinsic consecutive relationship. This pre-processing ensures each clip lasts around 3~11 s at a real-time frame rate (i.e., 30 fps), which increases the fault-tolerant margin for various algorithms and devices. To this end, the re-organized SUN-SEG contains 1,106 short video clips with 158,690 video frames in total, offering a solid foundation for building a representative benchmark. As such, it yields the final version of the SUN-SEG dataset, which includes 49,136 polyp frames (i.e., the positive part) and 109,554 non-polyp frames (i.e., the negative part) taken from 285 and 728 different colonoscopy video clips, respectively, as well as the corresponding annotations.
Provide a detailed description of the following dataset: SUN-SEG-Hard (Unseen)
FoCus
We introduce a new dataset, called FoCus, that supports knowledge-grounded answers reflecting the user's persona. One situation in which people need different types of knowledge, depending on their preferences, is when they travel around the world.
Provide a detailed description of the following dataset: FoCus
CICERO
**CICERO** contains 53,000 inferences for five commonsense dimensions -- cause, subsequent event, prerequisite, motivation, and emotional reaction -- collected from 5,600 dialogues. It involves two challenging tasks, generative inference and multi-choice alternative selection, for state-of-the-art NLP models to solve. Download the dataset using [this link](https://github.com/declare-lab/CICERO/releases/download/v1.0.0/data.zip).
Provide a detailed description of the following dataset: CICERO
FEMNIST
See paper: Caldas, Sebastian, et al. "Leaf: A benchmark for federated settings." arXiv preprint arXiv:1812.01097 (2018).
Provide a detailed description of the following dataset: FEMNIST
L3DAS22
# L3DAS22: MACHINE LEARNING FOR 3D AUDIO SIGNAL PROCESSING This dataset supports the L3DAS22 IEEE ICASSP Grand Challenge. The challenge is supported by a [Python API](https://github.com/l3das/L3DAS22) that facilitates the dataset download and preprocessing, the training and evaluation of the baseline models and the results submission. ## Scope of the Challenge The [L3DAS22 Challenge](https://www.l3das.com/icassp2022/index.html) aims at encouraging and fostering research on machine learning for 3D audio signal processing. 3D audio has gained increasing interest in the machine learning community in recent years. The range of applications is incredibly wide, extending from virtual and real conferencing to autonomous driving, surveillance and many more. In these contexts, a fundamental procedure is to properly identify the nature of events present in a soundscape, their spatial position and, eventually, to remove unwanted noises that can interfere with the useful signal. To this end, the L3DAS22 Challenge presents two tasks: 3D Speech Enhancement and 3D Sound Event Localization and Detection, both relying on first-order Ambisonics recordings in reverberant office environments. Each task involves 2 separate tracks: 1-mic and 2-mic recordings, respectively containing sounds acquired by one first-order Ambisonics microphone and by an array of two such microphones. The use of two Ambisonics microphones represents one of the main novelties of the L3DAS22 Challenge. We expect higher accuracy/reconstruction quality when taking advantage of the dual spatial perspective of the two microphones. Moreover, we are very interested in identifying other possible advantages of this configuration over standard Ambisonics formats. Interactive demos of our baseline models are available on [Replicate](https://replicate.ai/l3das/l3das22_challenge). The top 5 ranked teams can submit a regular paper according to the ICASSP guidelines. Prizes will be awarded to the challenge winners thanks to the support of Kuaishou Technology. ## Tasks The tasks we propose are: * **3D Speech Enhancement** The objective of this task is the enhancement of speech signals immersed in the spatial sound field of a reverberant office environment. Here the models are expected to extract the monophonic voice signal from the 3D mixture containing various background noises. The evaluation metric for this task is a combination of short-time objective intelligibility (STOI) and word error rate (WER). * **3D Sound Event Localization and Detection** The aim of this task is to detect the temporal activities of a known set of sound event classes and, in particular, to further locate them in space. Here the models must predict a list of the active sound events and their respective locations at regular intervals of 100 milliseconds. Performance on this task is evaluated according to the location-sensitive detection error, which joins the localization and detection error metrics. ## Dataset Info The L3DAS22 datasets contain multiple-source and multiple-perspective B-format Ambisonics audio recordings. We sampled the acoustic field of a large office room, placing two first-order Ambisonics microphones in the center of the room and moving a speaker reproducing the analytic signal in 252 fixed spatial positions. Relying on the collected Ambisonics impulse responses (IRs), we augmented existing clean monophonic datasets to obtain synthetic tridimensional sound sources by convolving the original sounds with our IRs.
We extracted speech signals from the Librispeech dataset and office-like background noises from the FSD50K dataset. We aimed at creating plausible and varied 3D scenarios to reflect possible real-life situations in which speech and disparate types of background noise coexist in the same 3D reverberant environment. We provide normalized raw waveforms as predictor data, while the target data varies according to the task. The dataset is divided into two main sections, respectively dedicated to the two challenge tasks. The first section is optimized for 3D Speech Enhancement and contains more than 60,000 virtual 3D audio environments with durations of up to 12 seconds. In each sample, a spoken voice is always present alongside other office-like background noises. As target data for this section we provide the clean monophonic voice signals. For each subset we also provide a csv file in which we annotated the coordinates and spatial distance of the IR convolved with the target voice signal for each datapoint. This may be useful to estimate the delay caused by the virtual time-of-flight of the target voice signal and to perform a sample-level alignment of the input and ground-truth signals. The other section, instead, is dedicated to the 3D Sound Event Localization and Detection task and contains 900 30-second-long audio files. Each data point contains a simulated 3D office audio environment in which up to 3 simultaneous acoustic events may be active at the same time. In this section, the samples are not forced to contain a spoken voice. As target data for this section we provide a list of the onset and offset time stamps, the typology class, and the spatial coordinates of each individual sound event present in the data points. We split both dataset sections into a training set and a development set, taking care to create similar distributions. The training set of the SE section is divided into two partitions, train360 and train100, which contain speech samples (up to 12 seconds long) extracted from the corresponding partitions of Librispeech. The train360 partition is split into 2 zip files for a more convenient download. All sets of the SELD section are divided into OV1, OV2 and OV3. These partitions refer to the maximum number of possibly overlapping sounds, which are 1, 2 or 3, respectively. ## L3DAS22 Challenge Supporting API The [GitHub supporting API](https://github.com/l3das/L3DAS22) is aimed at downloading the dataset, pre-processing the sound files and the metadata, training and evaluating the baseline models, and validating the final results. We provide easy-to-use instructions to reproduce the results included in our paper. Moreover, we extensively commented our code for easy customization. For further information please refer to the [challenge website](https://www.l3das.com/icassp2022/index.html) and to the [challenge documentation](https://www.l3das.com/assets/file/L3DAS22_ICASSP_documentation.pdf).
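As a sketch of how the Speech Enhancement metric described above might be computed, the snippet below combines STOI and WER; the exact combination is assumed here to be the average of STOI and (1 - WER), and the pystoi and jiwer packages stand in for whatever implementation the official API actually uses.

```python
# Hedged sketch of the 3D Speech Enhancement metric: a combination of STOI and WER.
# Assumption: the combined score is the mean of STOI and (1 - WER); the official
# L3DAS22 API may compute it differently, so treat this only as an illustration.
import numpy as np
from pystoi import stoi          # short-time objective intelligibility
from jiwer import wer            # word error rate

def se_metric(clean_wave, enhanced_wave, sample_rate, reference_text, transcribed_text):
    """Combine intelligibility and transcription quality into one score in [0, 1]."""
    stoi_score = stoi(clean_wave, enhanced_wave, sample_rate, extended=False)
    wer_score = wer(reference_text, transcribed_text)          # lower is better
    return (stoi_score + (1.0 - min(wer_score, 1.0))) / 2.0    # higher is better

# Toy usage with random signals and a perfect transcription (illustrative only)
fs = 16000
clean = np.random.randn(fs * 3)
enhanced = clean + 0.05 * np.random.randn(fs * 3)
print(se_metric(clean, enhanced, fs, "hello world", "hello world"))
```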
Provide a detailed description of the following dataset: L3DAS22
MS-FIMU
Open Dataset: Mobility Scenario FIMU * An open, multidimensional (6 categorical attributes), synthetic dataset of fake virtual humans generated by an optimization approach applied to a real-life, call-detail-records-based, anonymized database. * The original database is from a mobile network operator in France (Orange), which collects statistics on the frequency of users per day and per union of days by analyzing mobile phone data (i.e., call detail records). The period of the analysis is 7 days, from 2017-05-31 to 2017-06-06. * This dataset can be used for classification tasks and for evaluating (locally) differentially private mechanisms on multidimensional data.
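Since the dataset is meant for evaluating locally differentially private mechanisms, the sketch below applies k-ary randomized response to one categorical attribute; the attribute domain and the epsilon value are illustrative assumptions, and the dataset itself does not prescribe any particular mechanism.

```python
# Hedged sketch: k-ary randomized response (a basic local DP mechanism) applied to
# one categorical attribute. Domain values and epsilon are made up for illustration.
import math
import random

def k_rr(true_value, domain, epsilon):
    """Report the true category with probability e^eps / (e^eps + k - 1),
    otherwise report a uniformly random other category."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_value
    return random.choice([v for v in domain if v != true_value])

# Toy usage on a hypothetical categorical attribute (e.g., an age-group column)
domain = ["0-17", "18-29", "30-49", "50-64", "65+"]
noisy = [k_rr(v, domain, epsilon=1.0) for v in ["18-29", "30-49", "30-49", "65+"]]
print(noisy)
```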
Provide a detailed description of the following dataset: MS-FIMU
RoomEnv-v0
# The Room environment - v0 [There is a newer version, v1](../README.md) We have released a challenging [OpenAI Gym](https://www.gymlibrary.dev/) compatible environment. The best strategy for this environment is to have both episodic and semantic memory systems. See the [paper](https://arxiv.org/abs/2204.01611) for more information. ## Prerequisites 1. A unix or unix-like x86 machine 1. python 3.8 or higher. 1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system python. 1. This env is added to the PyPI server. Just run: `pip install room-env` ## Data collection Data is collected by querying ConceptNet APIs. For simplicity, we only collect triples whose format is (`head`, `AtLocation`, `tail`). Here `head` is one of the 80 MS COCO dataset categories. This was kept in mind so that later on we can use images as well. If you want to collect the data manually, run the command below: ``` python collect_data.py ``` ## How does this environment work? The OpenAI-Gym-compatible Room environment is one big room with _N_<sub>_people_</sub> people who can freely move around. Each of them selects one object, among _N_<sub>_objects_</sub>, and places it in one of the _N_<sub>_locations_</sub> locations. _N_<sub>_agents_</sub> agent(s) are also in this room. They can only observe one human placing an object, one at a time; **x**<sup>(_t_)</sup>. At the same time, they are given one question about the location of an object; **q**<sup>(_t_)</sup>. **x**<sup>(_t_)</sup> is given as a quadruple, (**h**<sup>(_t_)</sup>,**r**<sup>(_t_)</sup>,**t**<sup>(_t_)</sup>,_t_). For example, `<James’s laptop, AtLocation, James’s desk, 42>` accounts for an observation where an agent sees James placing his laptop on his desk at *t* = 42. **q**<sup>(_t_)</sup> is given as a double, (**h**,**r**). For example, `<Karen’s cat, AtLocation>` asks where Karen’s cat is located. If the agent answers the question correctly, it gets a reward of +1, and if not, it gets 0. The reason why the observations and questions are given in an RDF-triple-like format is twofold. First, this structured format is easily readable and writable by both humans and machines. Second, we can use existing knowledge graphs, such as ConceptNet. To simplify the environment, the agents themselves are not actually moving, but the room is continuously changing. There are several random factors in this environment to be considered: 1. With the chance of _p_<sub>commonsense</sub>, a human places an object in a commonsense location (e.g., a laptop on a desk). The commonsense knowledge we use is from ConceptNet. With the chance of 1 − *p*<sub>_commonsense_</sub>, an object is placed at a non-commonsense random location (e.g., a laptop on a tree). 1. With the chance of _p_<sub>_new_\__location_</sub>, a human changes the location of his/her object. 1. With the chance of _p_<sub>_new_\__object_</sub>, a human changes his/her object to another one. 1. With the chance of _p_<sub>_switch_\__person_</sub>, two people switch their locations. This is done to mimic an agent moving around the room. All four probabilities parameterize Bernoulli distributions. Consider the case where there is only one agent. Then this is a POMDP, where _S_<sub>_t_</sub> = (**x**<sup>(_t_)</sup>, **q**<sup>(_t_)</sup>), _A_<sub>_t_</sub> = (do something with **x**<sup>(_t_)</sup>, answer **q**<sup>(_t_)</sup>), and _R_<sub>_t_</sub> ∈ *{0, 1}*. Currently no RL agent has been trained for this environment.
We only have some heuristics. Take a look at the paper for more details. ## RoomEnv-v0
```python
import gym
import room_env

env = gym.make("RoomEnv-v0")
(observation, question), info = env.reset()
rewards = 0
while True:
    (observation, question), reward, done, truncated, info = env.step("This is my answer!")
    rewards += reward
    if done:
        break
print(rewards)
```
Every time an agent takes an action, the environment will give you an observation and a question to answer. You can try directly answering the question, as in `env.step("This is my answer!")`, but a better strategy is to keep the observations in memory systems and take advantage of both the current observation and the history stored in those memory systems. Take a look at [this repo](https://github.com/tae898/explicit-memory) for an actual interaction with this environment to learn a policy. ## Contributing Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. 1. Fork the Project 1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`) 1. Run `make test && make style && make quality` in the root repo directory to ensure code quality. 1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) 1. Push to the Branch (`git push origin feature/AmazingFeature`) 1. Open a Pull Request ## [Cite our paper](https://arxiv.org/abs/2204.01611) ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.01611, doi = {10.48550/ARXIV.2204.01611}, url = {https://arxiv.org/abs/2204.01611}, author = {Kim, Taewoon and Cochez, Michael and Francois-Lavet, Vincent and Neerincx, Mark and Vossen, Piek}, keywords = {Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {A Machine With Human-Like Memory Systems}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ## Cite our code [![DOI](https://zenodo.org/badge/477781069.svg)](https://zenodo.org/badge/latestdoi/477781069) ## Authors - [Taewoon Kim](https://taewoon.kim/) - [Michael Cochez](https://www.cochez.nl/) - [Vincent Francois-Lavet](http://vincent.francois-l.be/) - [Mark Neerincx](https://ocw.tudelft.nl/teachers/m_a_neerincx/) - [Piek Vossen](https://vossen.info/) ## License [MIT](https://choosealicense.com/licenses/mit/)
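As a sketch of the kind of memory-based heuristic mentioned above, the snippet below keeps every observed quadruple in a list and answers each question with the tail of the most recent matching observation; the parsing of observations into (head, relation, tail, timestamp) tuples and of questions into (head, relation) pairs is an assumption about the API, so check the repository for the exact formats.

```python
# Hedged sketch of a simple memory-based heuristic for RoomEnv-v0.
# Assumption: observations arrive as (head, relation, tail, timestamp) tuples and
# questions as (head, relation) pairs; the actual env API may differ slightly.
import gym
import room_env

def answer_from_memory(memory, question):
    head, relation = question
    # Search from the most recent observation backwards for a matching fact.
    for obs_head, obs_relation, obs_tail, _ in reversed(memory):
        if obs_head == head and obs_relation == relation:
            return obs_tail
    return "I don't know"

env = gym.make("RoomEnv-v0")
(observation, question), info = env.reset()
memory, rewards = [], 0
while True:
    memory.append(observation)                       # remember what we just saw
    answer = answer_from_memory(memory, question)    # answer the current question
    (observation, question), reward, done, truncated, info = env.step(answer)
    rewards += reward
    if done:
        break
print(rewards)
```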
Provide a detailed description of the following dataset: RoomEnv-v0