dataset_name: string, 2-128 characters
description: string, 1-9.7k characters
prompt: string, 59-185 characters
Indian Number Plates Dataset
### **This dataset is collected by DataCluster Labs. To download the full dataset or to submit a request for your new data collection needs, please drop a mail to: [sales@datacluster.ai](mailto:sales@datacluster.ai)**

This dataset is an extremely challenging set of over 20,000 original number plate images captured and crowdsourced from over 700 urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.

### **Dataset Features**
- Dataset size : 20,000+ images
- Captured by : over 4,000 crowdsource contributors
- Resolution : 100% of images are HD and above (1920x1080 and above)
- Location : captured across 700+ cities and villages in India
- Diversity : various lighting conditions like day and night, varied distances, viewpoints, etc.
- Device used : captured using mobile phones in 2020-2022
- Usage : number plate detection, ANPR, number plate recognition, self-driving systems, etc.

### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record

*To download full datasets or to submit a request for your dataset needs, please drop a mail to sales@datacluster.ai. Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.*
Provide a detailed description of the following dataset: Indian Number Plates Dataset
Vehicle Dataset | Indian Vehicle Dataset
### **This dataset is collected by DataCluster Labs. To download the full dataset or to submit a request for your new data collection needs, please drop a mail to: [sales@datacluster.ai](mailto:sales@datacluster.ai)**

This dataset is an extremely challenging set of over 50,000 original vehicle images captured and crowdsourced from over 1,000 urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.

### **Dataset Features**
- Dataset size : 50,000+ images
- Captured by : over 1,000 crowdsource contributors
- Resolution : 100% of images are HD and above (1920x1080 and above)
- Location : captured across 1,000+ cities in India
- Diversity : various lighting conditions like day and night, varied distances, viewpoints, etc.
- Device used : captured using mobile phones in 2020-2022
- Usage : vehicle detection, automobile detection, construction vehicle detection, self-driving systems, etc.

### **Vehicle Classes**
- Indian Auto
- Indian Truck
- Bus
- Truck
- Tempo Traveller
- Tractor
- Car
- Two Wheelers

### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record

*To download full datasets or to submit a request for your dataset needs, please drop a mail to sales@datacluster.ai. Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.*
Provide a detailed description of the following dataset: Vehicle Dataset | Indian Vehicle Dataset
Twitter MediaEval
The task addresses the problem of the appearance and propagation of posts that share misleading multimedia content (images or video). In the context of the task, different types of misleading use are considered:
1. Reposting of real multimedia, such as real photos from the past re-posted as being associated with a current event;
2. Digitally manipulated multimedia;
3. Synthetic multimedia, such as artworks or snapshots presented as real imagery.
Provide a detailed description of the following dataset: Twitter MediaEval
GWA
GWA is a large-scale audio dataset of over 2 million synthetic room impulse responses (IRs) and their corresponding detailed geometric and simulation configurations. Our dataset samples acoustic environments from over 6.8K high-quality, diverse, and professionally designed houses represented as semantically labeled 3D meshes.
Provide a detailed description of the following dataset: GWA
PGDP5K
PGDP5K is a dataset of 5,000 diagram samples composed of 16 shapes, covering 5 positional relations, 22 symbol types, and 6 text types. The diagrams are labeled with fine-grained annotations at the primitive level, including primitive classes, locations, and relationships. Of the images, 1,813 non-duplicated ones are selected from the Geometry3K dataset, and the other 3,187 are collected from three popular grade 6-12 textbooks on mathematics curriculum websites by taking screenshots from the PDF books.
Provide a detailed description of the following dataset: PGDP5K
ADHD-200
Attention Deficit Hyperactivity Disorder (ADHD) affects at least 5-10% of school-age children and is associated with substantial lifelong impairment, with annual direct costs exceeding $36 billion in the US. Despite a voluminous empirical literature, the scientific community remains without a comprehensive model of the pathophysiology of ADHD. Further, the clinical community remains without objective biological tools capable of informing the diagnosis of ADHD for an individual or guiding clinicians in their decision-making regarding treatment. The ADHD-200 Sample is a grassroots initiative dedicated to accelerating the scientific community's understanding of the neural basis of ADHD through the implementation of open data-sharing and discovery-based science. Towards this goal, we are pleased to announce the unrestricted public release of 776 resting-state fMRI and anatomical datasets aggregated across 8 independent imaging sites, 491 of which were obtained from typically developing individuals and 285 from children and adolescents with ADHD (ages: 7-21 years old). Accompanying phenotypic information includes: diagnostic status, dimensional ADHD symptom measures, age, sex, intelligence quotient (IQ), and lifetime medication status. Preliminary quality control assessments (usable vs. questionable) based upon visual timeseries inspection are included for all resting-state fMRI scans. In accordance with HIPAA guidelines and 1000 Functional Connectomes Project protocols, all datasets are anonymous, with no protected health information included. http://fcon_1000.projects.nitrc.org/indi/adhd200/
Provide a detailed description of the following dataset: ADHD-200
BurstSR
BurstSR is a dataset consisting of smartphone burst sequences paired with corresponding high-resolution DSLR ground-truth images.
Provide a detailed description of the following dataset: BurstSR
WikiBanEvasion
A dataset comprising 8,551 ban evasion pairs on Wikipedia, where each pair consists of a parent account and its child account. We adopt a strategy to ensure that there is a 1:1 mapping between parent and child accounts. For each of the accounts in these ban evasion pairs, we provide the following data:
- Wikipedia username, creation date, ban date, and other account-level metadata
- Corresponding edit information in the form of revision IDs, pages edited, added text, deleted text, edit comments, and timestamps
Provide a detailed description of the following dataset: WikiBanEvasion
Penn94
Node classification on Penn94
Provide a detailed description of the following dataset: Penn94
genius
Node classification on genius
Provide a detailed description of the following dataset: genius
twitch-gamers
Node classification on twitch-gamers
Provide a detailed description of the following dataset: twitch-gamers
NICO++
The goal of the NICO Challenge is to facilitate OOD (out-of-distribution) generalization in visual recognition by promoting research on intrinsic learning mechanisms with native invariance and generalization ability. The training data is a mixture of several observed contexts, while the test data is composed of unseen contexts. Participants are tasked with developing algorithms that are reliable across different contexts (domains) to improve the generalization ability of models.
Provide a detailed description of the following dataset: NICO++
MFQE v2
A dataset for compressed video quality enhancement.
Provide a detailed description of the following dataset: MFQE v2
Water Footprint Recommender System Data
It contains data from two different sources: Food.com, a well-known American recipe site, and Planeat, an Italian site that lets you plan recipes so as to reduce food waste. The dataset is divided into two parts: embeddings, which can be used directly to reproduce the work and receive suggestions, and raw data, which must first be processed into embeddings.
Provide a detailed description of the following dataset: Water Footprint Recommender System Data
Korpus Malti
General Corpora for the Maltese Language.
Provide a detailed description of the following dataset: Korpus Malti
TBBR
The dataset of Thermal Bridges on Building Rooftops (TBBR dataset) consists of annotated, combined RGB and thermal drone images with a height map. All images were converted to a uniform format of 3000×4000 pixels, aligned, and cropped to 2400×3400 to remove empty borders. The raw images for our dataset were recorded with a normal (RGB) and a FLIR-XT2 (thermal) camera on a DJI M600 drone. They show six large building blocks of around 20 buildings per block, recorded in the centre of the German city of Karlsruhe, east of the market square. Because of the high overlap rate of the images, each building is recorded on average about 20 times, from different angles in different images. All images were recorded during a drone flight on March 19, 2019 from 7 a.m. to 8 a.m. At this time, temperatures were between 3.78°C and 4.97°C and humidity between 80% and 98%. There was no rain on the day of the flight, but 2.3 mm/m² of rain fell in the 48 hours beforehand. For recording the thermographic images an emissivity of 1.0 was set. The global radiation during this period was between 38.59 W/m² and 120.86 W/m². No direct sunlight is visually apparent in any of the recordings.

The dataset contains 924 images with a total of 6,930 annotations of thermal bridges on rooftops, split into train and test subsets with 722 images (5,614 annotations) and 202 images (1,313 annotations), respectively. The annotations only include thermal bridges that are visually identifiable by the human eye. Because of the aforementioned image overlap, each thermal bridge is annotated multiple times from different angles. For the annotation of the thermal images, the image processing program VGG Image Annotator (version 2.0.10) from the Visual Geometry Group was used. The thermal bridge annotations are outlined with polygon shapes. These polygon lines were placed as close as possible to, but outside of, the area of significant temperature increase. If a detected thermal bridge was partially covered by another building component located in the foreground, the thermal bridge was also marked across the covering, provided the covering was minor. Adjacent thermal bridges that affect different rooftop components were annotated separately. For example, a window with poor insulation of the window reveal, located in the area of a poorly insulated roof, is annotated individually. There is no overlap between annotated areas. While each image contains annotations, images may also include thermal bridges that are not annotated.

**Usage:** Each compressed archive file represents one of the six building blocks. For the related publication, the final block (Flug1_105Media) was used as a hold-out test sample. The archives contain NumPy files (one per image) of shape (2400, 3200, 5), where the final dimension is the channel in the format [B, G, R, Thermal, Height]. Archives were compressed using Zstandard compression. They can be decompressed in a terminal by running e.g.

```
tar -I zstd -xvf Flug1_105Media.tar.zst
```

This will decompress into the file structure:

```
images/
└── Flug1_105Media/
    └── DJI_0004_R.npy
    └── DJI_0006_R.npy
    └── ...
```

Corresponding annotations are provided in the COCO JSON format. There is one file for training (Flug1_100Media - Flug1_104Media blocks) and one for test (Flug1_105Media block). They contain a single class (thermal bridge) and expect the folder structure shown below. Note: the annotation files contain relative paths to the NumPy files; in case of problems, please convert them to absolute paths (i.e. insert the containing directory before each file path in the JSON annotation files). We recommend the following folder structure for reproduction of our work with Detectron2:

```
├── train/
│   ├── Flug1_100-104Media_coco.json
│   └── images/
│       ├── Flug1_100Media/
│       │   ├── DJI_XXXX_R.npy
│       │   └── ...
│       ├── ...
│       └── Flug1_104Media/
│           ├── DJI_XXXX_R.npy
│           └── ...
└── test/
    ├── Flug1_105Media_coco.json
    └── images/
        └── Flug1_105Media/
            ├── DJI_XXXX_R.npy
            └── ...
```
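A minimal sketch of reading one decompressed sample with NumPy, following the shape and channel layout described above (the file name is one of the examples listed; dtype is whatever is stored on disk):

```python
import numpy as np

# One decompressed TBBR sample: shape (2400, 3200, 5),
# channel order [B, G, R, Thermal, Height].
sample = np.load("test/images/Flug1_105Media/DJI_0004_R.npy")

bgr = sample[..., :3]     # colour image
thermal = sample[..., 3]  # thermal channel
height = sample[..., 4]   # height map

print(bgr.shape, float(thermal.min()), float(thermal.max()))
```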
Provide a detailed description of the following dataset: TBBR
DWS
Temp
Provide a detailed description of the following dataset: DWS
OAGT
OAGT is a paper topic dataset consisting of 6,942,930 records which comprise various scientific publication attributes like abstracts, titles, keywords, publication years, venues, etc. The last two fields of each record are the topic id, from a taxonomy of 27 topics created from the entire collection, and the 20 most significant topic words. Each dataset record (sample) is stored as a JSON line in the text file.
Provide a detailed description of the following dataset: OAGT
AIC
A large-scale dataset named AIC (AI Challenger) with three sub-datasets: human keypoint detection (HKD), large-scale attribute dataset (LAD), and image Chinese captioning (ICC).
Provide a detailed description of the following dataset: AIC
ExtMarker
Three-dimensional positions of external markers placed on the chest and abdomen of healthy individuals breathing during intervals from 73 s to 222 s. The markers move because of respiratory motion, and their positions are sampled at approximately 10 Hz. The markers are metallic objects used during external beam radiotherapy to track and predict the motion of tumors due to breathing for accurate dose delivery. The same data was used and described in detail in the following article: Krilavicius, Tomas, et al. "Predicting Respiratory Motion for Real-Time Tumour Tracking in Radiotherapy." arXiv:1508.00749 [physics], Aug. 2015, http://arxiv.org/abs/1508.00749.
Provide a detailed description of the following dataset: ExtMarker
CzechNewsDatasetForSTS
The data originate from the journalistic domain in the Czech language. We describe the process of collecting and annotating the data in detail. The dataset contains 138,556 human annotations divided into train and test sets. In total, 485 journalism students participated in the creation process. To increase the reliability of the test set, we compute each annotation as an average of 9 individual annotations. We evaluate the quality of the dataset by measuring inter- and intra-annotator agreement. Besides agreement numbers, we provide detailed statistics of the collected dataset. We conclude our paper with a baseline experiment on building a system for predicting the semantic similarity of sentences. Thanks to the massive number of training annotations (116,956), the model can perform significantly better than an average annotator (Pearson correlation coefficient of 0.92 versus 0.86). See https://arxiv.org/abs/2108.08708
Provide a detailed description of the following dataset: CzechNewsDatasetForSTS
FaceVerse
The FaceVerse High-Quality 3D Face Dataset contains 2,688 high-quality head scans (21 expressions from 128 identities) captured by a dense DSLR rig. For each scan, we provide the 3D model (.obj), the corresponding texture map (.jpeg), and the FaceVerse fitted model (.ply) with the same topology.
Provide a detailed description of the following dataset: FaceVerse
EmoDB Dataset
The EMODB database is a freely available German emotional speech database, created by the Institute of Communication Science, Technical University of Berlin, Germany. Ten professional speakers (five males and five females) participated in the data recording. The database contains a total of 535 utterances covering seven emotions: 1) anger; 2) boredom; 3) anxiety; 4) happiness; 5) sadness; 6) disgust; and 7) neutral. The data was recorded at a 48-kHz sampling rate and then down-sampled to 16 kHz.
Provide a detailed description of the following dataset: EmoDB Dataset
Construction Vehicle Image Dataset |Trucks|Tractor etc.
This dataset is an extremely challenging set of over 20,000 original construction vehicle images captured and crowdsourced from over 600 urban and rural areas, where each image is manually reviewed and verified by computer vision professionals at Datacluster Labs.

- Dataset Features
  - Dataset size : 20,000+ images
  - Captured by : over 1,000 crowdsource contributors
  - Resolution : 100% of the images are HD and above (1920x1080 and above)
  - Location : captured across 600+ cities in India
  - Diversity : various lighting conditions like day and night, varied distances, viewpoints, etc.
  - Device used : captured using mobile phones in 2020-2022
  - Usage : construction site object detection, workplace safety monitoring, self-driving systems, etc.
- Available Annotation formats
  - COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please drop a mail to sales@datacluster.ai. Visit www.datacluster.ai to know more.**

This dataset is collected by DataCluster Labs. To download the full dataset or to submit a request for your new data collection needs, please drop a mail to: sales@datacluster.ai
Provide a detailed description of the following dataset: Construction Vehicle Image Dataset |Trucks|Tractor etc.
SWAT A7
11 days of continuous operation: 7 days under normal operation and 4 days with attack scenarios.
+ Collected network traffic and all the values obtained from all 51 sensors and actuators
+ Data labelled according to normal and abnormal behaviours
+ Attack scenarios: derived through the attack models developed by our research team. The attack model considers the intent space of a CPS. 41 attacks were launched during the 4 days and are described in the PDF.
Provide a detailed description of the following dataset: SWAT A7
DigiLeTs
A dataset with 23,870 digital trajectories (i.e. time series) of handwritten lower- and uppercase Latin letters and Arabic numerals (a-z, A-Z, 0-9), generated by 77 experts using a Wacom Pen Tablet. An expert is considered a proficient user of the recorded symbols, in this case adult native German speakers. DigiLeTs was created to extend the [Omniglot dataset](https://github.com/brendenlake/omniglot) and contains five variants per character per subject to allow the quantification of intra-subject variability and to assess and account for individual writing styles. The determination and imitation of subject-dependent writing styles is introduced as a new task in [this paper](link-to-paper). For more information about the dataset, please refer to the repository (Homepage button below).
Provide a detailed description of the following dataset: DigiLeTs
Bosch CNC Machining Dataset
The dataset provided is a collection of real-world industrial vibration data collected from a brownfield CNC milling machine. The acceleration has been measured using a tri-axial accelerometer (Bosch CISS sensor) mounted inside the machine. The X-, Y-, and Z-axes of the accelerometer have been recorded at a sampling rate of 2 kHz. Normal as well as anomalous data have been collected over 4 different timeframes, each lasting 5 months, between February 2019 and August 2021, and labelled accordingly. The dataset can be used to investigate the scalability of models and to research process variations, as the impact of anomalies differs. In total, there is data from three different CNC milling machines, each executing 15 processes. For a detailed description of the data and experimental set-up, please refer to the paper: https://doi.org/10.1016/j.procir.2022.04.022
Provide a detailed description of the following dataset: Bosch CNC Machining Dataset
Telegraphic Summaries
# README

Created by Malireddy Chanakya & Srivenkata N Mounika Somisetty & Malireddy Chaitanya

## The dataset contains:
- 200 short stories
- 200 corresponding telegraphic summaries
- 50 selected abstractive summaries
- 50 selected extractive summaries each by SMMRY and RESOOMER
- 45 MCQ questions for 15 stories (3 questions/story)
- index.txt

### How to Use?

index.txt contains a table listing out each:
1. story's id
2. name
3. author
4. word count
5. length of the telegraphic summary
6. length of the abstractive summary

Corresponding to each story, an <id>.txt file is present in the stories directory, the telegraphic directory, and possibly in the abstractive directory (see the loading sketch after this README).

### Guidelines followed for telegraphic summaries:
1. A segment is defined as a continuous span of words in the source, chosen as a part of the summary.
2. A word should not be fragmented. E.g., if the word "breaking" appears in the source, the entire word should be part of the segment, not a fragment like "break".
3. Each segment should be relevant to the plot, try to advance the story, and have some continuity with the preceding and the following segments.
4. Each segment extracted from a dialogue should be enclosed in quotes.
5. Each segment extracted from parentheses should be enclosed in parentheses.
6. Segments should be arranged in the same order as they appear in the story.
7. The summary should be minimal. If multiple segments mean the same thing, pick the shortest. Adjectives, adverbs, and modifiers are not to be included if they are not relevant to the plot. Extraneous facts and long descriptions are to be ignored.
8. When the segments are read in sequence, the plot should be apparent and unambiguous.

### Guidelines followed for abstractive summarization:
1. Summaries should be written from a third-party perspective. E.g., "This story is about a girl..."
2. Summaries should only discuss the plot and try to avoid inferences and opinions not immediately apparent from the story.
3. Summaries should maintain the same order of events as they occur in the source text.
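A minimal loading sketch for the layout described in this README; the dataset root and the column separator of index.txt are assumptions, not part of the original distribution notes:

```python
from pathlib import Path

root = Path("telegraphic_summaries")  # hypothetical dataset root

# index.txt lists, per story: id, name, author, word count, and the lengths
# of the telegraphic and abstractive summaries (separator assumed tab here).
for line in (root / "index.txt").read_text().splitlines():
    story_id = line.split("\t")[0]
    story = (root / "stories" / f"{story_id}.txt").read_text()
    telegraphic = (root / "telegraphic" / f"{story_id}.txt").read_text()
    abstractive_file = root / "abstractive" / f"{story_id}.txt"
    if abstractive_file.exists():  # only 50 stories have an abstractive summary
        abstractive = abstractive_file.read_text()
```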
Provide a detailed description of the following dataset: Telegraphic Summaries
Visual Domain Decathlon
The goal of this challenge is to solve simultaneously ten image classification problems representative of very different visual domains. The data for each domain is obtained from the following image classification benchmarks:
- ImageNet
- CIFAR-100
- Aircraft
- Daimler pedestrian classification
- Describable textures
- German traffic signs
- Omniglot
- SVHN
- UCF101 Dynamic Images
- VGG-Flowers

The union of the images from the ten datasets is split into training, validation, and test subsets. Different domains contain different image categories as well as a different number of images. The task is to train the best possible classifier to address all ten classification tasks using the training and validation subsets, apply the classifier to the test set, and send us the resulting annotation file for assessment. The winner will be determined based on a weighted average of the classification performance on each domain, using the scoring scheme described below. At test time, your model is allowed to know the ground-truth domain of each test image (ImageNet, CIFAR-100, ...) but, of course, not its category.
Provide a detailed description of the following dataset: Visual Domain Decathlon
METR-LA Point Missing
The original dataset from [Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting](https://arxiv.org/abs/1707.01926) contains traffic readings collected from 207 loop detectors on highways in Los Angeles County, aggregated in 5-minute intervals over four months between March 2012 and June 2012. The __Point missing__ setting, introduced in [Filling the G_ap_s: Multivariate Time Series Imputation by Graph Neural Networks](https://arxiv.org/abs/2108.00298v3), is a variant for imputation in which 25% of the data are masked out uniformly at random. Results on this dataset are assumed to be obtained __in-sample__, meaning that the test interval is also used for training, excluding the data used for evaluation.
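A minimal sketch of the masking protocol as described; the array shape and seed are illustrative, not the benchmark's actual code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative readings tensor: T timesteps x 207 sensors.
readings = rng.random((34272, 207))

# Point missing: hide 25% of the entries uniformly at random.
eval_mask = rng.random(readings.shape) < 0.25
observed = np.where(eval_mask, np.nan, readings)

# Imputation models train on `observed` and are scored only on the
# entries hidden by `eval_mask` (in-sample evaluation).
```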
Provide a detailed description of the following dataset: METR-LA Point Missing
PEMS-BAY Point Missing
The original dataset from [Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting](https://arxiv.org/abs/1707.01926) contains 6 months of traffic readings, from 01/01/2017 to 05/31/2017, collected every 5 minutes by 325 traffic sensors in the San Francisco Bay Area. The measurements are provided by the California Transportation Agencies (CalTrans) Performance Measurement System (PeMS). The __Point missing__ setting, introduced in [Filling the G_ap_s: Multivariate Time Series Imputation by Graph Neural Networks](https://arxiv.org/abs/2108.00298v3), is a variant for imputation in which 25% of the data are masked out uniformly at random. Results on this dataset are assumed to be obtained __in-sample__, meaning that the test interval is also used for training, excluding the data used for evaluation.
Provide a detailed description of the following dataset: PEMS-BAY Point Missing
CPED
We construct a dataset named CPED from 40 Chinese TV shows. CPED consists of multi-source knowledge related to empathy and personal characteristics. This knowledge covers 13 emotions, gender, Big Five personality traits, 19 dialogue acts, and other knowledge.
* We build a multi-turn Chinese Personalized and Emotional Dialogue dataset called CPED. To the best of our knowledge, CPED is the first Chinese personalized and emotional dialogue dataset. CPED contains 12K dialogues and 133K utterances with multi-modal context. Therefore, it can be used in both complicated dialogue understanding and human-like conversation generation.
* CPED has been annotated with 3 character attributes (name, gender, age), Big Five personality traits, 2 types of dynamic emotional information (sentiment and emotion), and DAs. The personality traits and emotions can be used as prior external knowledge for open-domain conversation generation, giving the conversation system a good command of personification capabilities.
* We propose three tasks for CPED: personality recognition in conversations (**PRC**), emotion recognition in conversations (**ERC**), and personalized and emotional conversation (**PEC**). A set of experiments verifies the importance of using personalities and emotions as prior external knowledge for conversation generation.
Provide a detailed description of the following dataset: CPED
Bongard-HOI
Bongard-HOI tests to what extent your few-shot visual learner can quickly induce the true HOI concept from a handful of images and perform reasoning with it. Further, the learner is also expected to transfer the learned few-shot skills to novel HOI concepts compositionally.
Provide a detailed description of the following dataset: Bongard-HOI
Heroes Corpus
Each episode directory contains word-level and segment-level information for the whole episode, as well as parallel samples extracted under the segments_eng and segments_spa subdirectories. Each sample is stored as a WAV audio file, a text file, and a CSV file containing word timing information and word-level paralinguistic and prosodic features. This dataset contains short audio and text excerpts from the TV series "Heroes" (Copyright Universal Media Studios (2006-2007, 2007-2008, 2008-2009)). It is compiled and used only for research purposes. Creation of this dataset was partially financed by the UPF DTIC-Maria de Maeztu Strategic Program. This dataset was created with automated tools; there might be errors due to the automated process. Description from: [https://repositori.upf.edu/handle/10230/35572](https://repositori.upf.edu/handle/10230/35572)
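A minimal sketch of reading one sample's three files, assuming they share a common file stem; the episode and sample names here are hypothetical:

```python
import csv
import wave
from pathlib import Path

stem = Path("episode_01/segments_eng/sample_0001")  # hypothetical paths

with wave.open(str(stem.with_suffix(".wav")), "rb") as w:
    duration_s = w.getnframes() / w.getframerate()

transcript = stem.with_suffix(".txt").read_text()

with open(stem.with_suffix(".csv"), newline="") as f:
    words = list(csv.DictReader(f))  # word timings + prosodic features

print(duration_s, transcript[:40], len(words))
```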
Provide a detailed description of the following dataset: Heroes Corpus
Congolese Swahili – French parallel text corpora
French sentences are sourced from the Tatoeba repository and then translated into Congolese Swahili.
Provide a detailed description of the following dataset: Congolese Swahili – French parallel text corpora
language-modeling-recommendation
This is the BIG-bench version of our language-based movie recommendation dataset: https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/movie_recommendation. GPT-2 has a 48.8% accuracy; chance is 25%.
Provide a detailed description of the following dataset: language-modeling-recommendation
Binomial and toric ideal data
This data set consists of randomly generated binomial and toric ideals. It was used for predicting a certain complexity measure of Buchberger's algorithm for toric and binomial ideals in a small number of variables.
Provide a detailed description of the following dataset: Binomial and toric ideal data
GEN1 Detection
Prophesee's GEN1 Automotive Detection Dataset is the largest event-based dataset to date. The dataset was recorded using a PROPHESEE GEN1 sensor with a resolution of 304x240 pixels, mounted on a car dashboard. The labels were obtained manually using the gray-level estimation feature of the ATIS camera. It contains 39 hours of open road and various driving scenarios, ranging from urban, highway, and suburb to countryside scenes. Manual bounding box annotations are available for the two classes present: pedestrians and cars (trucks and buses are not labelled).
Provide a detailed description of the following dataset: GEN1 Detection
BreastDICOM4
Several *datasets* are fostering innovation in higher-level functions for everyone, everywhere. By providing this repository, we hope to encourage the research community to focus on hard problems. In this repository, we present our medical imaging [DICOM](https://en.wikipedia.org/wiki/DICOM) files of patients from our [User Tests and Analysis 4 (UTA4)](https://github.com/MIMBCD-UI/meta/wiki/User-Research#test-4-single-modality-vs-multi-modality-) study. Here, we provide a *dataset* of the medical images used during the [UTA4](https://github.com/MIMBCD-UI/meta/wiki/User-Research#test-4-single-modality-vs-multi-modality-) tasks. This repository and the respective *dataset* should be paired with the [`dataset-uta4-rates`](https://github.com/MIMBCD-UI/dataset-uta4-rates) repository *dataset*. Work and results are published at a top [Human-Computer Interaction (HCI)](https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction) conference named [AVI 2020](https://dl.acm.org/conference/avi) ([page](https://sites.google.com/unisa.it/avi2020)). Results were analyzed and interpreted in our [Statistical Analysis](https://mimbcd-ui.github.io/statistical-analysis/) charts. The user tests were conducted in clinical institutions, where clinicians diagnosed several patients for a **Single-Modality** *vs* **Multi-Modality** comparison. For example, in these tests, we used both the [`prototype-single-modality`](https://github.com/mida-project/prototype-single-modality) and [`prototype-multi-modality`](https://github.com/mida-project/prototype-multi-modality) repositories for the comparison. Likewise, the hereby *dataset* represents pieces of information from both the [BreastScreening](https://BreastScreening.github.io) and [MIDA](https://mida-project.github.io) projects. These are research projects that deal with the use of a recently proposed technique in the literature: [Deep Convolutional Neural Networks (CNNs)](https://en.wikipedia.org/wiki/Convolutional_neural_network). From a developed User Interface (UI) and *framework*, these deep networks will incorporate [several datasets](https://github.com/MIMBCD-UI/meta/wiki/Datasets) in different modes. For more information about the available *datasets*, please follow the [Datasets](https://github.com/MIMBCD-UI/meta/wiki/Datasets) page on the [Wiki](https://github.com/MIMBCD-UI/meta/wiki) of the [`meta`](https://github.com/MIMBCD-UI/meta) information repository. Last but not least, you can find further information on the [Wiki](https://github.com/MIMBCD-UI/dataset-uta4-dicom/wiki) in this repository. We also have several demos on our [YouTube Channel](https://www.youtube.com/channel/UCPz4aTIVHekHXTxHTUOLmXw), please follow us.
Provide a detailed description of the following dataset: BreastDICOM4
PSB2
See the accompanying paper: https://arxiv.org/abs/2106.06086
Provide a detailed description of the following dataset: PSB2
BreastRates4
Several *datasets* are fostering innovation in higher-level functions for everyone, everywhere. By providing this repository, we hope to encourage the research community to focus on hard problems. In this repository, we present our severity *rates* ([BIRADS](https://en.wikipedia.org/wiki/BI-RADS)) of clinicians while diagnosing several patients from our [User Tests and Analysis 4 (UTA4)](https://github.com/MIMBCD-UI/meta/wiki/User-Research#test-4-single-modality-vs-multi-modality-) study. Here, we provide a *dataset* for the measurements of severity *rates* ([BIRADS](https://en.wikipedia.org/wiki/BI-RADS)) concerning the patient diagnostic. Work and results are published at a top [Human-Computer Interaction (HCI)](https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction) conference named [AVI 2020](https://dl.acm.org/conference/avi) ([page](https://sites.google.com/unisa.it/avi2020)). Results were analyzed and interpreted in our [Statistical Analysis](https://mimbcd-ui.github.io/statistical-analysis/) charts. The user tests were conducted in clinical institutions, where clinicians diagnosed several patients for a **Single-Modality** *vs* **Multi-Modality** comparison. For example, in these tests, we used both the [`prototype-single-modality`](https://github.com/mida-project/prototype-single-modality) and [`prototype-multi-modality`](https://github.com/mida-project/prototype-multi-modality) repositories for the comparison. Likewise, the hereby *dataset* represents pieces of information from both the [BreastScreening](https://BreastScreening.github.io) and [MIDA](https://mida-project.github.io) projects. These are research projects that deal with the use of a recently proposed technique in the literature: [Deep Convolutional Neural Networks (CNNs)](https://en.wikipedia.org/wiki/Convolutional_neural_network). From a developed User Interface (UI) and *framework*, these deep networks will incorporate [several datasets](https://github.com/MIMBCD-UI/meta/wiki/Datasets) in different modes. For more information about the available *datasets*, please follow the [Datasets](https://github.com/MIMBCD-UI/meta/wiki/Datasets) page on the [Wiki](https://github.com/MIMBCD-UI/meta/wiki) of the [`meta`](https://github.com/MIMBCD-UI/meta) information repository. Last but not least, you can find further information on the [Wiki](https://github.com/MIMBCD-UI/dataset-uta4-rates/wiki) in this repository. We also have several demos on our [YouTube Channel](https://www.youtube.com/channel/UCPz4aTIVHekHXTxHTUOLmXw), please follow us.
Provide a detailed description of the following dataset: BreastRates4
BreastClassifications4
Several *datasets* are fostering innovation in higher-level functions for everyone, everywhere. By providing this repository, we hope to encourage the research community to focus on hard problems. In this repository, we present the real severity ([BIRADS](https://en.wikipedia.org/wiki/BI-RADS)) and pathology (post-report) *classifications* provided by the Radiologist Director of the Radiology Department of [Hospital Fernando Fonseca](https://hff.min-saude.pt/) while diagnosing several patients (see [`dataset-uta4-dicom`](https://github.com/MIMBCD-UI/dataset-uta4-dicom)) from our [User Tests and Analysis 4 (UTA4)](https://github.com/MIMBCD-UI/meta/wiki/User-Research#test-4-single-modality-vs-multi-modality-) study. Here, we provide a *dataset* for the measurements of both severity ([BIRADS](https://en.wikipedia.org/wiki/BI-RADS)) and pathology *classifications* concerning the patient diagnostic. Work and results are published at a top [Human-Computer Interaction (HCI)](https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction) conference named [AVI 2020](https://dl.acm.org/conference/avi) ([page](https://sites.google.com/unisa.it/avi2020)). Results were analyzed and interpreted in our [Statistical Analysis](https://mimbcd-ui.github.io/statistical-analysis/) charts. The user tests were conducted in clinical institutions, where clinicians diagnosed several patients for a **Single-Modality** *vs* **Multi-Modality** comparison. For example, in these tests, we used both the [`prototype-single-modality`](https://github.com/mida-project/prototype-single-modality) and [`prototype-multi-modality`](https://github.com/mida-project/prototype-multi-modality) repositories for the comparison. Likewise, the hereby *dataset* represents pieces of information from both the [BreastScreening](https://BreastScreening.github.io) and [MIDA](https://mida-project.github.io) projects. These are research projects that deal with the use of a recently proposed technique in the literature: [Deep Convolutional Neural Networks (CNNs)](https://en.wikipedia.org/wiki/Convolutional_neural_network). From a developed User Interface (UI) and *framework*, these deep networks will incorporate [several datasets](https://github.com/MIMBCD-UI/meta/wiki/Datasets) in different modes. For more information about the available *datasets*, please follow the [Datasets](https://github.com/MIMBCD-UI/meta/wiki/Datasets) page on the [Wiki](https://github.com/MIMBCD-UI/meta/wiki) of the [`meta`](https://github.com/MIMBCD-UI/meta) information repository. Last but not least, you can find further information on the [Wiki](https://github.com/MIMBCD-UI/dataset-uta4-rates/wiki) in this repository. We also have several demos on our [YouTube Channel](https://www.youtube.com/channel/UCPz4aTIVHekHXTxHTUOLmXw), please follow us.
Provide a detailed description of the following dataset: BreastClassifications4
CIFAR100-LT
The long-tailed version of CIFAR-100.
Provide a detailed description of the following dataset: CIFAR100-LT
MathMLben
MathMLben is a benchmark for evaluating tools for mathematical format conversion (LaTeX ↔ MathML ↔ CAS). It comprises semantically annotated and linked formulae extracted from the NTCIR 11/12 arXiv and Wikipedia tasks/datasets and the NIST Digital Library of Mathematical Functions (DLMF), with annotations made using the AnnoMathTeX formula and identifier name recommender system (https://annomathtex.wmflabs.org).
Provide a detailed description of the following dataset: MathMLben
Riposte!
From the [Riposte! A Large Corpus of Counter-Arguments](https://arxiv.org/abs/1910.03246) abstract:

> Constructive feedback is an effective method for improving critical thinking skills. Counter-arguments (CAs), one form of constructive feedback, have been proven to be useful for critical thinking skills. However, little work has been done for constructing a large-scale corpus of them which can drive research on automatic generation of CAs for fallacious micro-level arguments (i.e. a single claim and premise pair). In this work, we cast providing constructive feedback as a natural language processing task and create Riposte!, a corpus of CAs, towards this goal. Produced by crowdworkers, Riposte! contains over 18k CAs. We instruct workers to first identify common fallacy types and produce a CA which identifies the fallacy. We analyze how workers create CAs and construct a baseline model based on our analysis.

Some notes:
- The main files of the corpus are `train.csv`, `dev.csv`, and `test.csv`, in the `topic` and `no_topic` directories.
- The content of these files is the annotations done by each of the crowdsourcing workers, as is.
- The `carg` column is the counter-argument produced by the annotators against the claim and premise after selecting the fallacy. The annotators would first see the `claim` and `premise`, select whether the fallacy exists or not, and fill in the slots to produce the `carg`.
- A multi-label representation of the corpus has been added to the `sampled` directory.

The rest is the original, unmodified corpus provided by the authors of the paper.
Provide a detailed description of the following dataset: Riposte!
RESPIRATORY AND DRUG ACTUATION DATASET
Asthma is a common, usually long-term respiratory disease with a negative impact on society and the economy worldwide. Treatment involves using medical devices (inhalers) that distribute medication to the airways, and its efficiency depends on the precision of the inhalation technique. Health monitoring systems equipped with sensors and embedded with sound signal detection enable the recognition of drug actuation and could be powerful tools for reliable audio content analysis. The RDA Suite includes a set of tools for audio processing, feature extraction, and classification, and is provided along with a dataset consisting of respiratory and drug actuation sounds. The classification models in RDA are implemented based on conventional and advanced machine learning and deep network architectures. This study provides a comparative evaluation of the implemented approaches, examines potential improvements, and discusses challenges and future tendencies. The central aim of this research is to identify associations between high-level classification labels and low-level features extracted from audio clips of different semantic activities. We investigate the clinical applicability of different audio-based signal processing methods for assessing medication adherence.

The dataset consists of recordings acquired in an acoustically controlled setting, free of ambient indoor environment noise, at the University of Patras. Three subjects, who were familiar with the inhaler technique, participated in the study. The participants were instructed to use the inhaler as typically performed in a clinical procedure. For each and every participant, informed consent was obtained. During breathing and drug actuation, the audio signals were acquired by a microphone attached to the inhalation device, communicating with a mobile phone via Bluetooth. The addition of the adherence monitoring device did not impact the normal functioning of the inhaler, which had a full placebo canister. In total, 370 audio files were recorded, each with a different duration, containing an entire inhaler use case, with respiratory flow ranging over 180-240 L/min. Each audio recording was sampled at an 8 kHz sampling frequency, as a mono-channel WAV file, at 8-bit depth. The audio recordings were segmented and annotated by a human specialist into inhaler actuation, exhalation, inhalation, and environmental noise. The obtained segments (of non-mixed states) were of variable length and, for some methods, were further segmented into frames of fixed length for the purposes of feature extraction. The constructed database overall consisted of 193 drug actuation segments, 319 inhalation and 620 exhalation segments, and 505 noise segments, ready to be used for audio sound recognition using different sets of features.
Provide a detailed description of the following dataset: RESPIRATORY AND DRUG ACTUATION DATASET
BankNote-Net
Millions of people around the world have low or no vision. Assistive software applications have been developed for a variety of day-to-day tasks, including currency recognition. To aid with this task, we present BankNote-Net, an open dataset for assistive currency recognition. The dataset consists of a total of 24,816 embeddings of banknote images captured in a variety of assistive scenarios, spanning 17 currencies and 112 denominations. These compliant embeddings were learned using supervised contrastive learning and a MobileNetV2 architecture, and they can be used to train and test specialized downstream models for any currency, including those not covered by our dataset or for which only a few real images per denomination are available (few-shot learning). We deploy a variation of this model for public use in the latest version of the Seeing AI app developed by Microsoft, which has over 100,000 monthly active users.
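A minimal sketch of the intended downstream use, training a lightweight classifier on the released embeddings; the file names and array layout are assumptions, not the dataset's documented format:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical arrays: X holds pre-computed banknote embeddings,
# y holds denomination labels (a few rows per class for few-shot).
X = np.load("banknote_embeddings.npy")  # shape (n_samples, embed_dim)
y = np.load("banknote_labels.npy")      # shape (n_samples,)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))  # training accuracy, as a quick sanity check
```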
Provide a detailed description of the following dataset: BankNote-Net
Billboard in Japanese Streetscapes
Annotated and original images of billboards in Japanese streetscapes.
Provide a detailed description of the following dataset: Billboard in Japanese Streetscapes
Fongbe Speech Dataset
This dataset was created for a Fongbe automatic speech recognition task and contains about 3,979 recordings of 13 participants reading a text written in Fongbe, one sentence at a time. Fongbe is a vernacular language spoken mainly in Benin by more than 50% of the population, and a little in Togo and Nigeria. It is under-resourced because it lacks linguistic resources (speech corpora and text data), and very few websites provide textual data. In this dataset, each example contains the audio file and the associated text. The audio is high quality (16-bit, 16 kHz), recorded using an Android app that we built for this purpose. The dataset is multi-speaker, containing recordings from 13 volunteers (male and female).
Provide a detailed description of the following dataset: Fongbe Speech Dataset
FIJO
This dataset was collected as part of the multidisciplinary project Femmes face aux dΓ©fis de la transformation numΓ©rique : une Γ©tude de cas dans le secteur des assurances (Women Facing the Challenges of Digital Transformation: A Case Study in the Insurance Sector) at UniversitΓ© Laval, funded by the Future Skills Centre. It includes job offers, in French, from insurance companies between 2009 and 2020.
Provide a detailed description of the following dataset: FIJO
FeedbackQA
[📄 Read](https://arxiv.org/abs/2204.03025)<br>
[💾 Code](https://github.com/McGill-NLP/feedbackqa)<br>
[🔗 Webpage](https://mcgill-nlp.github.io/feedbackqa/)<br>
[💻 Demo](http://206.12.100.48:8080/)<br>
[🤗 Huggingface Dataset](https://huggingface.co/datasets/McGill-NLP/feedbackQA)<br>
[💬 Discussions](https://github.com/McGill-NLP/feedbackqa/discussions)

# Overview
Users interact with QA systems and leave feedback. In this project, we investigate methods of improving QA systems further post-deployment based on user interactions.

# Dataset
We collect a retrieval-based QA dataset, FeedbackQA, which contains interactive feedback from users. We collect this dataset by deploying a base QA system to crowdworkers who then engage with the system and provide feedback on the quality of its answers. The feedback contains both structured ratings and unstructured natural language explanations. Check the dataset explorer at the bottom for some real examples.

# Methods
We propose a method to improve the RQA model with the feedback data, training a reranker to select an answer candidate as well as generate the explanation. We find that this approach not only increases the accuracy of the deployed model but also that of other, stronger models for which feedback data was not collected. Moreover, our human evaluation results show that both human-written and model-generated explanations help users to make informed and accurate decisions about whether to accept an answer. Read our paper for more details, and play with our demo for an intuitive understanding of what we have done.
Provide a detailed description of the following dataset: FeedbackQA
SEN12MS-CR-TS
**SEN12MS-CR-TS** is a multi-modal and multi-temporal data set for cloud removal. It contains time series of paired and co-registered Sentinel-1 as well as cloudy and cloud-free Sentinel-2 data from the European Space Agency's Copernicus mission. Each time series contains 30 cloudy and clear observations regularly sampled throughout the year 2018. Our multi-temporal data set is readily pre-processed and backward-compatible with [SEN12MS-CR](https://paperswithcode.com/dataset/sen12ms-cr).
Provide a detailed description of the following dataset: SEN12MS-CR-TS
Human Palm and Gloves Dataset | Human Body Parts Dataset
The dataset consists of images of human palms captured using a mobile phone. The images have been taken in real-world scenarios, like holding objects or performing simple gestures. The dataset has a wide variety of variations, like illumination, distances, etc. It consists of images of 3 main gestures: frontal open palm, back open palm, and fist with the wrist. It also has a lot of images with people wearing gloves.

**Dataset Features:**
- Captured by 4,000+ unique users
- Covers a wide variety of palm images in indoor and outdoor scenes
- Images of palms with gloves
- Captured by males and females
- Distributed age groups, like teenagers, adults, and the elderly
- Captured using mobile phones
- Highly diverse
- Various lighting conditions, like day, night, indoor, and outdoor
- Outdoor scenes with a variety of viewpoints

**Annotations Format:**
- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC, and YOLO formats

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**
Provide a detailed description of the following dataset: Human Palm and Gloves Dataset | Human Body Parts Dataset
Indian Number Plates Dataset | Vehicle Number Plates | English OCR Detection
This dataset is an extremely challenging set of over 20,000 original number plate images captured and crowdsourced from over 700 urban and rural areas, where each image is manually reviewed and verified by computer vision professionals at [Datacluster Labs](https://www.datacluster.ai/).

**Dataset Features**
- Dataset size : 20,000+ images
- Captured by : over 4,000 crowdsource contributors
- Resolution : 100% of images are HD and above (1920x1080 and above)
- Location : captured across 700+ cities and villages in India
- Diversity : various lighting conditions like day and night, varied distances, viewpoints, etc.
- Device used : captured using mobile phones in 2020-2022
- Usage : number plate detection, ANPR, number plate recognition, self-driving systems, etc.

**Available Annotation formats**
- COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**
Provide a detailed description of the following dataset: Indian Number Plates Dataset | Vehicle Number Plates | English OCR Detection
Bottles and Cups Dataset | Household Objects
This dataset consists of images of bottles and cups.

### **Introduction**
The dataset consists of images of bottles and cups captured using mobile phones in real-world scenarios. Images were captured under a wide variety of indoor lighting conditions. This dataset can be used for the detection of a wide variety of bottles and cups made of a variety of materials, from a lot of different viewpoints, locations, orientations, etc.

### **Dataset Features**
- Captured by 3,000+ unique users
- Captured using mobile phones
- A variety of different bottle and cup materials
- HD resolution
- Highly diverse
- Various lighting conditions
- Indoor scenes

### **Annotation Details**
- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC, and YOLO formats

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Bottles and Cups Dataset | Household Objects
Transparent Object Images | Indoor Object Dataset
This dataset is an extremely challenging set of over 3,000 original images of transparent objects, such as glasses and mirrors, captured and crowdsourced from over 500 urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.

### **Dataset Features**
- Dataset size : 3,000+ images
- Captured by : over 500 crowdsource contributors
- Resolution : 99% of images are HD and above (1920x1080 and above)
- Location : captured across 600+ cities in India
- Diversity : diversity in object type, lighting, camera type, etc.
- Device used : captured using mobile phones in 2020-2022
- Usage : glass detection, mirror detection, transparent cup detection, home automation, etc.

### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Transparent Object Images | Indoor Object Dataset
Stairs Image Dataset | Parts of House | Indoor
This dataset is an extremely challenging set of over 3,000 original stair images captured and crowdsourced from over 500 urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.

### **Dataset Features**
- Dataset size : 3,000+ images
- Captured by : over 500 crowdsource contributors
- Resolution : 100% of images are HD and above (1920x1080 and above)
- Location : captured across 500+ cities in India
- Diversity : various lighting conditions like day and night, varied distances, viewpoints, etc.
- Device used : captured using mobile phones in 2020-2022
- Usage : stair detection, stair edge detection, computer vision, etc.

### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Stairs Image Dataset | Parts of House | Indoor
PMData
The PMData dataset aims to combine traditional lifelogging with sports activity logging.
Provide a detailed description of the following dataset: PMData
OrdinalDataset
It includes 10 data sets, each provided both as a raw data set and as an encoded data set, where the encoding is produced by the BERT-Sort Encoder initialized with an MLM. In each data set folder, there are the original files and the data sets encoded with 4 different MLMs. For instance, bank/bank.csv is the original file for the raw data set, and bank/bank.csv_bs__roberta.csv is the raw data set encoded with the BERT-Sort Encoder initialized with the RoBERTa MLM. Both raw and encoded data sets have been used to evaluate the proposed approach on 5 AutoML platforms.
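A minimal loading sketch for the raw/encoded file pair named above; paths are relative to the dataset root, and the shape check rests on an assumption (that the encoding preserves rows and columns), not a documented guarantee:

```python
import pandas as pd

# Raw data set and its BERT-Sort encoding (RoBERTa-initialized MLM),
# following the naming convention described above.
raw = pd.read_csv("bank/bank.csv")
encoded = pd.read_csv("bank/bank.csv_bs__roberta.csv")

# Assumption: the encoder replaces ordinal values in place,
# so both frames should have matching shapes.
print(raw.shape, encoded.shape)
```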
Provide a detailed description of the following dataset: OrdinalDataset
BRCA-M2C
Dataset for multi-class cell classification in breast cancer H&E images using dot annotations. The labelled cell classes are lymphocytes, tumor or epithelial cells, and stromal cells.
Provide a detailed description of the following dataset: BRCA-M2C
PubTabNet
PubTabNet is a large dataset for image-based table recognition, containing 568k+ images of tabular data annotated with the corresponding HTML representation of the tables. The table images are extracted from the scientific publications included in the PubMed Central Open Access Subset (commercial use collection). Table regions are identified by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper "Image-based table recognition: data, model, and evaluation".
Provide a detailed description of the following dataset: PubTabNet
Oximeter Image Dataset | Medical Device Reading
This dataset is an extremely challenging set of over 2,000 original oximeter images captured and crowdsourced from over 300 urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.

### **Dataset Features**
- Dataset size : 2,000+ images
- Captured by : over 300 crowdsource contributors
- Resolution : 100% of images are HD and above (1920x1080 and above)
- Location : captured across 50+ cities in India
- Diversity : various lighting conditions like day and night, varied distances, viewpoints, etc.
- Device used : captured using mobile phones in 2020-2021
- Usage : oximeter reading detection, medical devices, healthcare system detection, etc.

### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Oximeter Image Dataset | Medical Device Reading
ICM
ICM is curated for the image-text matching task. Each image has a corresponding caption text, which describes the image in detail. We first use CTR to select the most relevant pairs. Then, human annotators perform a second round of manual correction, obtaining 400,000 image-text pairs, including 200,000 positive cases and 200,000 negative cases. We keep the ratio of positive and negative pairs consistent across the train/val/test sets.
Provide a detailed description of the following dataset: ICM
IQM
IQM is curated for the image-text matching task. Each image has a corresponding search query. We first use CTR to select the most relevant pairs. In this dataset, we randomly select image-query pairs from the candidate set after the cleaning process, obtaining 400,000 image-text pairs, including 200,000 positive cases and 200,000 negative cases. We keep the ratio of positive and negative pairs consistent across the train/val/test sets.
Provide a detailed description of the following dataset: IQM
SWORD
The new dataset contains around 1,500 training videos and 290 test videos, with 50 frames per video on average. The dataset was obtained by processing manually captured video sequences of static real-life urban scenes. Its main property is the abundance of close objects and, consequently, a larger prevalence of occlusions: according to the introduced heuristic, the mean area of occluded image parts in SWORD is approximately five times larger than in RealEstate10k (14% vs 3%, respectively). This motivates the collection and use of SWORD and explains why it allows training more powerful models despite its smaller size.
Provide a detailed description of the following dataset: SWORD
PCQM4Mv2-LSC
PCQM4Mv2 is a quantum chemistry dataset originally curated under the PubChemQC project. Based on PubChemQC, we define a meaningful ML task of predicting the DFT-calculated HOMO-LUMO energy gap of molecules given their 2D molecular graphs. The HOMO-LUMO gap is one of the most practically relevant quantum chemical properties of molecules, since it is related to reactivity, photoexcitation, and charge transport. Moreover, predicting the quantum chemical property from 2D molecular graphs alone, without their 3D equilibrium structures, is also practically favorable: obtaining 3D equilibrium structures requires DFT-based geometry optimization, which is expensive on its own.
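The dataset ships with the OGB-LSC package; a minimal loading sketch follows (the class name and split keys reflect the OGB-LSC documentation, so check them against your installed `ogb` version).

```python
from ogb.lsc import PCQM4Mv2Dataset

# only_smiles=True returns (SMILES string, HOMO-LUMO gap) pairs without
# running the graph featurization.
dataset = PCQM4Mv2Dataset(root="dataset/", only_smiles=True)
split = dataset.get_idx_split()  # 'train', 'valid', 'test-dev', 'test-challenge'

smiles, gap = dataset[split["train"][0]]
print(smiles, gap)  # 2D molecular graph source and DFT-calculated target (eV)
```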
Provide a detailed description of the following dataset: PCQM4Mv2-LSC
MUSES
MUSES is a large-scale dataset for temporal event (action) localization. It focuses on the temporal localization of multi-shot events, which are captured with multiple shots. Such events often appear in edited videos, such as TV shows and movies. What’s included in MUSES:
* 3,697 videos of TV and movie dramas
* 716 hours of duration
* 25 event categories
* 652k shots
* 31,477 annotated event instances
Provide a detailed description of the following dataset: MUSES
ICR
In this dataset, we collect 200,000 image-text pairs. Each image has a corresponding caption text, which describes the image in detail. ICR covers two subtasks: image-to-text retrieval and text-to-image retrieval.
Provide a detailed description of the following dataset: ICR
IQR
IQR is proposed for the image-text retrieval task. We use 200,000 queries and the corresponding images as the annotated image-query pairs.
Provide a detailed description of the following dataset: IQR
Flickr30k-CNA
The earlier Flickr30k-CN translated the training and validation sets of Flickr30k using machine translation and manually translated the test set. We checked the machine-translated results and found two kinds of problems: (1) some sentences have language problems and translation errors; (2) some sentences have poor semantics. In addition, the difference in translation methods between the training set and the test set prevents models from achieving accurate performance. We therefore gathered 6 professional English-Chinese linguists to meticulously re-translate all data of Flickr30k and double-check each sentence.
Provide a detailed description of the following dataset: Flickr30k-CNA
OUMVLP
The OU-ISIR Gait Database, Multi-View Large Population Dataset (OU-MVLP) is meant to aid research efforts in the general area of developing, testing and evaluating algorithms for cross-view gait recognition. The Institute of Scientific and Industrial Research (ISIR), Osaka University (OU) has copyright in the collection of gait video and associated data and serves as a distributor of the OU-ISIR Gait Database. The data was collected in conjunction with an experience-based long-run exhibition of video-based gait analysis at a science museum. Approved informed consent was obtained from all the subjects in this dataset. The dataset consists of 10,307 subjects (5,114 males and 5,193 females with various ages, ranging from 2 to 87 years) from 14 view angles, ranging over 0°-90° and 180°-270°. Gait images of 1,280 x 980 pixels at 25 fps are captured by seven network cameras (Cam1-7) placed at intervals of 15-deg azimuth angles along a quarter of a circle whose center coincides with the center of the walking course. Its radius is approximately 8 m and its height is approximately 5 m.
Provide a detailed description of the following dataset: OUMVLP
Mars Sample Localization
It contains grayscale mono and stereo images (NavCam and LocCam) from laboratory tests performed by a prototype rover on a Martian-like testbed. The dataset can be used for artificial sample-tube detection and pose estimation. It also contains synthetic color images of the sample tube in a Martian scenario created with Unreal Engine.
Provide a detailed description of the following dataset: Mars Sample Localization
MUStARD++
**MUStARD++** is a multimodal sarcasm detection dataset (MUStARD) pre-annotated with 9 emotions. It can be used for the task of detecting the emotion in a sarcastic statement.
Provide a detailed description of the following dataset: MUStARD++
Data for paper "Zone extrapolations in parametric timed automata"
Contains the current version of IMITATOR, all models, and the necessary scripts to reproduce all experiments on the benchmark set.
Provide a detailed description of the following dataset: Data for paper "Zone extrapolations in parametric timed automata"
Bike and Car Odometer Dataset ! Speedometer OCR
This dataset consists of odometer or speedometer images of bike and car vehicles.

### **Introduction**

This dataset can be used to detect or recognize odometer readings of vehicles. Moreover, it can be used to classify the make of cars and bikes. The use cases lie in the domains of insurance, repair, and OCR.

### **Dataset Features**

- Captured by 4000+ unique users
- Rich in diversity
- Mobile phone view point
- Various lighting conditions
- Digital and Analog Categories
- Vehicle Model Types

### Available Annotation formats

- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC and YOLO formats

*To download full datasets or to submit a request for your dataset needs, please ping us on **sales@datacluster.ai***. Visit www.datacluster.in to know more.

**Note**: All the images are manually verified and are contributed by the large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Bike and Car Odometer Dataset ! Speedometer OCR
Mobile Phone Dataset | Smartphone & Feature Phone
### **This dataset is collected by DataCluster Labs, India. To download full dataset or to submit a request for your new data collection needs, please drop a mail to: [sales@datacluster.ai](mailto:sales@datacluster.ai)**

This dataset is an extremely challenging set of over 3000+ original Mobile Phone images captured and crowdsourced from over 1000+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.

### **Dataset Features**

- Dataset size : 3000+
- Captured by : Over 1000+ crowdsource contributors
- Resolution : 99% of images HD and above (1920x1080 and above)
- Location : Captured in 600+ cities across India
- Diversity : Various lighting conditions like day, night, varied distances, view points etc.
- Device used : Captured using mobile phones in 2020-2021
- Applications : Mobile phone detection, cracked screen detection, etc.

### Available Annotation formats

COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Mobile Phone Dataset | Smartphone & Feature Phone
Real world moire pattern classification
This dataset consists of two categories: real-world images, and spoof images (captured from screens) which show moire patterns.

### **Introduction**

This dataset can be used to classify real-world images versus images captured from laptops, mobile phones, or TVs. The spoof images show the moire pattern, which can help detect fake or spoofed identities. This dataset can be used to enhance vision-based security algorithms.

### **Dataset Features**

- Captured by 3000+ unique users
- Rich in diversity
- Mobile phone view point
- Various lighting conditions
- Indoor and Outdoor scene

### Available Annotation formats

- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC and YOLO formats

*To download full datasets or to submit a request for your dataset needs, please ping us on **sales@datacluster.ai***. Visit www.datacluster.in to know more.

**Note**: All the images are manually verified and are contributed by the large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Real world moire pattern classification
Indian Food Image Dataset
This dataset is an extremely challenging set of over 5000+ original Indian food images captured and crowdsourced from over 800+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.

### **Dataset Features**

- Dataset size : 5000+
- Captured by : Over 800+ crowdsource contributors
- Resolution : 99% of images HD and above (1920x1080 and above)
- Location : Captured in 800+ cities across India
- Diversity : Various lighting conditions like day, night, dim light, varied distances, view points etc.
- Device used : Captured using mobile phones in 2020-2021
- Usage : Indian food classification, dish classification, food plate detection, etc.

### Available Annotation formats

COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Indian Food Image Dataset
Suitcase/Luggage Dataset Indoor Object Image
This dataset is an extremely challenging set of over 7000+ original Suitcase/Luggage images captured and crowdsourced from over 800+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.

### **Dataset Features**

- Dataset size : 6000+
- Captured by : Over 1000+ crowdsource contributors
- Resolution : 99% of images HD and above (1920x1080 and above)
- Location : Captured in 800+ cities across India
- Diversity : Various lighting conditions like day, night, varied distances, view points etc.
- Device used : Captured using mobile phones in 2021-2022
- Usage : Luggage detection, suitcase detection, etc.

### Available Annotation formats

COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Suitcase/Luggage Dataset Indoor Object Image
Capture-24
This dataset contains Axivity AX3 wrist-worn activity tracker data that were collected from 151 participants in 2014-2016 around the Oxfordshire area. Participants were asked to wear the device in daily living for a period of roughly 24 hours, amounting to a total of almost 4,000 hours. Vicon Autograph wearable cameras and Whitehall II sleep diaries were used to obtain the ground truth activities performed during the period (e.g. sitting watching TV, walking the dog, washing dishes, sleeping), resulting in more than 2,500 hours of labelled data. Accompanying code to analyse this data is available at https://github.com/activityMonitoring/capture24. The following papers describe the data collection protocol in full: i.) Gershuny J, Harms T, Doherty A, Thomas E, Milton K, Kelly P, Foster C (2020) Testing self-report time-use diaries against objective instruments in real time. Sociological Methodology doi: 10.1177/0081175019884591; ii.) Willetts M, Hollowell S, Aslett L, Holmes C, Doherty A. (2018) Statistical machine learning of sleep and physical activity phenotypes from sensor data in 96,220 UK Biobank participants. Scientific Reports. 8(1):7961. Regarding Data Protection, the Clinical Data Set will not include any direct subject identifiers. However, it is possible that the Data Set may contain certain information that could be used in combination with other information to identify a specific individual, such as a combination of activities specific to that individual ("Personal Data"). Accordingly, in the conduct of the Analysis, users will comply with all applicable laws and regulations relating to information privacy. Further, the user agrees to preserve the confidentiality of, and not attempt to identify, individuals in the Data Set.
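As a starting point, here is a hedged sketch of slicing the tri-axial accelerometer stream into fixed-length windows, assuming the 100 Hz sampling rate and 30 s epochs used in the accompanying repository; the per-participant CSV layout (time, x, y, z columns) is an assumption to verify against the release.

```python
import numpy as np
import pandas as pd

# Assumed layout: one gzipped CSV per participant with time, x, y, z columns.
df = pd.read_csv("P001.csv.gz", index_col="time", parse_dates=["time"])
xyz = df[["x", "y", "z"]].to_numpy()

fs, epoch_s = 100, 30  # assumed sampling rate (Hz) and window length (s)
n = (len(xyz) // (fs * epoch_s)) * fs * epoch_s
windows = xyz[:n].reshape(-1, fs * epoch_s, 3)  # (num_windows, 3000, 3)

# Toy per-window feature: mean vector magnitude minus 1 g of gravity.
vm = np.linalg.norm(windows, axis=2).mean(axis=1) - 1.0
print(windows.shape, vm[:5])
```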
Provide a detailed description of the following dataset: Capture-24
satp-zsm-stage1
This is the replication data for the paper "Crossing the Linguistic Causeway: A Binational Approach for Translating Soundscape Attributes to Bahasa Melayu". It contains survey responses to a quantitative evaluation survey adopted from [Watcharasupat et al., 2022](https://doi.org/10.48550/arXiv.2203.12245).
Provide a detailed description of the following dataset: satp-zsm-stage1
GlassTemp
The GlassTemp dataset is collected from [Polyinfo](https://polymer.nims.go.jp/en/). It uses monomers as polymer graphs to predict the glass transition temperature, i.e., the temperature range over which a polymer transitions between a hard, glassy state and a soft, rubbery one.
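To make the "monomers as polymer graphs" representation concrete, here is a hedged RDKit sketch that turns a monomer SMILES string into node and edge lists; the example SMILES and feature choices are illustrative assumptions, not the dataset's official featurization.

```python
from rdkit import Chem

def monomer_to_graph(smiles: str):
    """Convert a monomer SMILES into (atom list, bond list)."""
    mol = Chem.MolFromSmiles(smiles)
    nodes = [atom.GetAtomicNum() for atom in mol.GetAtoms()]
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
    return nodes, edges

# Styrene-like repeat unit as a toy input; a regression model would map such
# graphs to the glass transition temperature label.
nodes, edges = monomer_to_graph("C=Cc1ccccc1")
print(len(nodes), "atoms,", len(edges), "bonds")
```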
Provide a detailed description of the following dataset: GlassTemp
MeltingTemp
The MeltingTemp dataset is collected from [Polyinfo](https://polymer.nims.go.jp/en/). It uses monomers as polymer graphs to predict the property of polymer melting temperature.
Provide a detailed description of the following dataset: MeltingTemp
PolyDensity
The PolyDensity dataset is collected from [Polyinfo](https://polymer.nims.go.jp/en/). It uses monomers as polymer graphs to predict polymer density.
Provide a detailed description of the following dataset: PolyDensity
$O_2$Perm
The $O_2$Perm dataset is created from the [Membrane Society of Australasia portal](https://membrane-australasia.org/msa-activities/polymer-gas-separation-membrane-database/). It uses monomers as polymer graphs to predict oxygen permeability. Its limited size (595 polymers) brings great challenges to property prediction.
Provide a detailed description of the following dataset: $O_2$Perm
Replication Data for: The elastic origins of tail asymmetry
Dataset and Stata code for replicating Tables 1 and 3 and Figures 1-4.
Provide a detailed description of the following dataset: Replication Data for: The elastic origins of tail asymmetry
Motion Blurred and Defocused Dataset
This dataset consists of blurred, noisy, and defocused images.

### **Introduction**

The dataset consists of blurred images captured using mobile phones in real-world scenarios. Images were captured under a wide variety of lighting conditions and weather, indoors and outdoors. This dataset can be used for image denoising, deblurring, and noise removal algorithms. It can also serve as a robust test set for denoising algorithms.

### **Dataset Features**

- Captured by 3000+ unique users
- Rich in diversity
- Mobile phone view point
- HD Resolution
- Various lighting conditions
- Indoor and Outdoor scene

### Available Annotation formats

- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC and YOLO formats

*To download full datasets or to submit a request for your dataset needs, please ping us on **sales@datacluster.ai***. Visit www.datacluster.in to know more.

**Note**: All the images are manually verified and are contributed by the large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Motion Blurred and Defocused Dataset
Masks Dataset | Unattended Mask Images
This dataset is an extremely challenging set of over 7000+ original Masks images captured and crowdsourced from over 1200+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.

### **Dataset Features**

- Dataset size : 7000+
- Captured by : Over 1200+ crowdsource contributors
- Resolution : 99% of images HD and above (1920x1080 and above)
- Location : Captured in 900+ cities across India
- Diversity : Various lighting conditions like day, night, varied distances, view points etc.
- Device used : Captured using mobile phones in 2020-2021
- Usage : Mask detection, mask segregation, trash mask detection, etc.

### Available Annotation formats

COCO, YOLO, PASCAL-VOC, Tf-Record

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Masks Dataset | Unattended Mask Images
Italian Crime News
The dataset contains the main components of the news articles published online by the newspaper [Gazzetta di Modena](https://gazzettadimodena.gelocal.it/modena): url of the web page, title, sub-title, text, date of publication, and the crime category assigned to each news article by the author. The news articles are written in Italian and describe 11 types of crime events that occurred in the province of Modena between the end of 2011 and 2021. Moreover, the dataset includes data derived from the abovementioned components through Natural Language Processing techniques, such as the place of the crime event (municipality, area, address and GPS coordinates), the date of occurrence, and the type of crime events described in the news article, obtained by automatic categorization of the text. Finally, news articles describing the same crime events (duplicates) are detected by calculating document similarity. We are now working on applying question answering to extract the 5W+1H and plan to extend the current dataset with the obtained data. Other researchers can employ the dataset to apply other algorithms for text categorization and duplicate detection and compare their results with the benchmark. The dataset can be useful for several purposes, e.g., geo-localization of events, text summarization, crime analysis, crime prediction, community detection, and topic modeling.
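The duplicate detection step could, for instance, look like the hedged TF-IDF plus cosine similarity sketch below; this is a generic illustration, not the authors' exact pipeline, and the article texts are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [  # placeholder texts standing in for real news articles
    "Furto in abitazione nella notte a Modena",
    "Notte di furti in abitazione a Modena",
    "Incidente stradale sulla via Emilia",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(articles))

# Pairs above an (assumed) similarity threshold are candidate duplicates.
threshold = 0.5
pairs = [(i, j) for i in range(len(articles))
         for j in range(i + 1, len(articles)) if sim[i, j] > threshold]
print(pairs)
```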
Provide a detailed description of the following dataset: Italian Crime News
Parkinson Speech Dataset with Multiple Types of Sound Recordings Data Set
The PD database consists of training and test files. The training data belong to 20 PWP (6 female, 14 male) and 20 healthy individuals (10 female, 10 male) who appealed at the Department of Neurology in the Cerrahpasa Faculty of Medicine, Istanbul University. From all subjects, multiple types of sound recordings (26 voice samples including sustained vowels, numbers, words and short sentences) are taken. A group of 26 linear and time-frequency based features are extracted from each voice sample. The UPDRS (Unified Parkinson's Disease Rating Scale) score of each patient, determined by an expert physician, is also available in this dataset; therefore, this dataset can also be used for regression. After collecting the training dataset, which consists of multiple types of sound recordings, and performing our experiments, in line with the obtained findings we continued collecting an independent test set from PWP via the same physician's examination process under the same conditions. During the collection of this dataset, 28 PD patients were asked to say only the sustained vowels 'a' and 'o' three times each, which makes a total of 168 recordings. The same 26 features are extracted from the voice samples of this dataset, which can be used as an independent test set to validate the results obtained on the training set.
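For the regression use mentioned above, a hedged scikit-learn sketch follows; the column positions (subject id first, then 26 features, then UPDRS) are assumptions about the file layout and should be checked against the release.

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

df = pd.read_csv("train_data.txt", header=None)
X = df.iloc[:, 1:27]  # assumed: 26 acoustic features per voice sample
y = df.iloc[:, 27]    # assumed: UPDRS score column

# Cross-validated ridge regression as a simple baseline.
print(cross_val_score(Ridge(alpha=1.0), X, y, cv=5,
                      scoring="neg_mean_absolute_error"))
```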
Provide a detailed description of the following dataset: Parkinson Speech Dataset with Multiple Types of Sound Recordings Data Set
Memento10k
Memorability dataset with 10,000 3-second videos. Each video has upwards of 90 human annotations, and the split-half consistency of this dataset is 0.73 (best in class for video memorability datasets).
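Split-half consistency is typically computed by randomly splitting each video's annotations in half, averaging each half, and correlating the two resulting score vectors; the hedged sketch below illustrates this on synthetic data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
annotations = rng.random((100, 90))  # synthetic: 100 videos x 90 annotations each

perm = rng.permutation(annotations.shape[1])
half_a = annotations[:, perm[:45]].mean(axis=1)  # mean score from one half
half_b = annotations[:, perm[45:]].mean(axis=1)  # mean score from the other half
rho, _ = spearmanr(half_a, half_b)
print(f"split-half consistency: {rho:.2f}")
```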
Provide a detailed description of the following dataset: Memento10k
The manifest and store data of 870,515 Android mobile applications
The dataset was collected using a crawler that gathers data from the Google Play store, including each application's metadata and APK files. The manifest files were extracted from the APK files and then processed to extract the features. The data set is composed of 870,515 records/apps, and for each app we produced 48 features. The data set was used to build and test two bagging (bootstrap aggregating) ensembles of multiple XGBoost machine learning classifiers. The data were collected between April 2017 and November 2018. We then checked the status of these applications on three different occasions: December 2018, February 2019, and May-June 2019. (2022-06-03)
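A hedged sketch of the bagging idea follows: each ensemble member is trained on a bootstrap resample of the 48-feature matrix and the predicted probabilities are averaged. The data here are synthetic stand-ins, and the hyperparameters are illustrative, not the authors' settings.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 48))    # 48 features per app, as in the dataset
y = rng.integers(0, 2, 1000)  # placeholder binary labels

members = []
for seed in range(5):
    idx = rng.integers(0, len(X), len(X))  # bootstrap resample with replacement
    clf = XGBClassifier(n_estimators=100, random_state=seed)
    clf.fit(X[idx], y[idx])
    members.append(clf)

# Aggregate by averaging member probabilities.
proba = np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
print((proba > 0.5).mean())
```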
Provide a detailed description of the following dataset: The manifest and store data of 870,515 Android mobile applications
CellTypeGraph Benchmark
Classifying all cells in an organ is a relevant and difficult problem in plant developmental biology. We here abstract the problem into a new benchmark for node classification in a geo-referenced graph. Solving it requires learning the spatial layout of the organ, including symmetries. To allow convenient testing of new geometric learning methods, the benchmark of Arabidopsis thaliana ovules is made available as a PyTorch data loader, along with a large number of precomputed features.
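A model for this benchmark is an ordinary node classifier over the precomputed features; the hedged PyTorch Geometric sketch below shows the shape of such a model (the layer choice is illustrative, and the benchmark's own data loader is assumed to supply `x`, `edge_index`, and the cell-type labels).

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class CellTypeClassifier(torch.nn.Module):
    """Two-layer GCN mapping per-cell features to cell-type logits."""

    def __init__(self, in_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # one logit vector per cell/node
```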
Provide a detailed description of the following dataset: CellTypeGraph Benchmark
2BallEllsbergData
Two Ball Ellsberg Paradox: Representative US Data. You can find all the Stata code for data analysis, including commented lines, explanations and step-by-step implementation of ORIV.
Provide a detailed description of the following dataset: 2BallEllsbergData
ProsocialDialog
Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them. To address this issue, we introduce **ProsocialDialog**, the first large-scale multi-turn dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). **ProsocialDialog** consists of 58K dialogues between a speaker showing potentially unsafe behavior and a speaker giving constructive feedback for more socially acceptable behavior. Specifically, it contains a rich suite of:
* 331K utterances
* 160K Rules-of-thumb (RoTs)
* 497K dialogue safety labels accompanied by free-form rationales
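A hedged loading sketch via the Hugging Face Hub is below; the dataset id `allenai/prosocial-dialog` and the field names are assumptions to verify on the Hub page.

```python
from datasets import load_dataset

ds = load_dataset("allenai/prosocial-dialog", split="train")
example = ds[0]
print(example["context"])       # potentially unsafe utterance
print(example["response"])      # constructive, prosocial feedback
print(example["rots"])          # grounding rules-of-thumb
print(example["safety_label"])  # dialogue safety label
```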
Provide a detailed description of the following dataset: ProsocialDialog
BinaryCorp
BinaryCorp is built for binary similarity detection based on the ArchLinux official repositories and the Arch User Repository. BinaryCorp contains tens of thousands of software packages, including editors, instant messengers, HTTP servers, web browsers, compilers, graphics libraries, cryptographic libraries, etc. The binary code similarity task requires a large amount of labeled data, so we use the build infrastructure provided by ArchLinux to construct our dataset with different optimization levels (e.g., O0, O1, O2, O3, Os). Get BinaryCorp [here](https://cloud.vul337.team:8443/s/cxnH8DfZTADLKCs). For more details, please check our official repo: https://github.com/vul337/jTrans
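The labeling idea (the same source compiled at different optimization levels yields semantically equivalent but syntactically different binaries) can be illustrated with the hedged toy sketch below, which calls gcc and objdump directly rather than the ArchLinux build infrastructure the authors used.

```python
import pathlib
import subprocess
import tempfile

src = "int add(int a, int b) { return a + b; }\n"
with tempfile.TemporaryDirectory() as d:
    c_file = pathlib.Path(d, "f.c")
    c_file.write_text(src)
    for opt in ["O0", "O1", "O2", "O3", "Os"]:
        obj = pathlib.Path(d, f"f_{opt}.o")
        subprocess.run(["gcc", f"-{opt}", "-c", str(c_file), "-o", str(obj)],
                       check=True)
        asm = subprocess.run(["objdump", "-d", str(obj)],
                             capture_output=True, text=True).stdout
        print(opt, len(asm.splitlines()))  # instruction mix differs per level
```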
Provide a detailed description of the following dataset: BinaryCorp
Chilean Waiting List
The Chilean Waiting List corpus comprises de-identified referrals from the waiting list in Chilean public hospitals. A subset of 10,000 referrals (including medical and dental notes) was manually annotated with ten entity types of clinical relevance, keeping 1,000 annotations for a future shared task. A trained medical doctor or dentist annotated these referrals and then, together with three other researchers, consolidated each of the annotations. The annotated corpus has more than 48% of entities embedded in other entities or containing another. This corpus can be a useful resource for building new models for Nested Named Entity Recognition (NER). This work constitutes the first annotated corpus using clinical narratives from Chile and one of the few in Spanish. Hugging Face datasets: https://huggingface.co/plncmm. After predicting over each entity type, merge the predictions to obtain your final micro F1-score; this allows a fair comparison with current state-of-the-art models.
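Merging per-entity-type predictions into a micro F1 amounts to summing true positives, false positives, and false negatives across all types before computing precision and recall, as in the sketch below (the counts are made-up placeholders).

```python
def micro_f1(counts):
    """counts: {entity_type: (tp, fp, fn)} summed over the evaluation set."""
    tp = sum(c[0] for c in counts.values())
    fp = sum(c[1] for c in counts.values())
    fn = sum(c[2] for c in counts.values())
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Placeholder counts for three of the ten entity types.
print(micro_f1({"Disease": (90, 10, 15),
                "Medication": (40, 5, 8),
                "Procedure": (70, 12, 9)}))
```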
Provide a detailed description of the following dataset: Chilean Waiting List
Indian Signboard Image Dataset | Text in Image
### **Introduction**

The dataset consists of images of Indian traffic signs for classification and detection. The images have been taken in varied weather conditions in daylight, evening, and night. The dataset has a wide variety of variations in illumination, distances, view points, etc. This dataset represents a very challenging set of unstructured images of Indian traffic signboards.

### **Dataset Features**

- Captured by 2000+ unique users
- Covers a wide variety of Indian traffic signs
- Captured in 20+ cities across India
- Captured using mobile phones
- Highly diverse
- Various lighting conditions like day and night
- Outdoor scenes with a variety of view points

### Available Annotation formats

- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC and YOLO formats

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Indian Signboard Image Dataset | Text in Image
Human Wrist Image Dataset | Human Body Parts
This dataset consists of images of wrists (with different kinds of bands on them).

### **Introduction**

The dataset consists of images of wrists captured using mobile phones in real-world scenarios. Images were captured under a wide variety of lighting conditions and weather, indoors and outdoors. This dataset can be used for Augmented Reality, Mixed Reality, rakhi detection, wrist-watch detection, hand-band detection, etc.

### **Dataset Features**

- Captured by 3000+ unique users
- Rich in diversity
- Mobile phone view point
- Various items on the wrist
- Consists of male and female wrists
- HD Resolution
- Various lighting conditions
- Indoor and Outdoor scene

### Available Annotation formats

- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC and YOLO formats

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Human Wrist Image Dataset | Human Body Parts
Cracked Mobile Screen Dataset
This dataset consists of images of cracked screens, such as cracked mobile screens.

### **Introduction**

The dataset consists of images of cracked screens and covers different types of damage to mobile phone screens. The images were captured under a variety of illumination, distances, viewpoints, etc. This dataset represents a very challenging set for cracked screen detection and recognition. It can be used for insurance purposes, AR, and to study the variety of cracks that happen on mobile phones.

### **Dataset Features**

- Approx. 7000+ unique images
- Captured by 5000+ unique users
- Captured using mobile phones
- Various lighting conditions like daylight, night
- Outdoor scene with a variety of illumination and viewpoints

### Available Annotation formats

- Classification and detection annotations available
- Multiple category annotations possible
- COCO, PASCAL VOC and YOLO formats

To get more images, visit: https://github.com/datacluster-labs/Datacluster-Datasets/tree/gh-pages/Cracked%20Screen%20Image%20Dataset

**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai/) to know more.**

**Note**: All the images are manually captured and verified by a large contributor base on the DataCluster platform.
Provide a detailed description of the following dataset: Cracked Mobile Screen Dataset