Columns: dataset_name (string, 2–128 characters), description (string, 1–9.7k characters), prompt (string, 59–185 characters)
High-Resolution Stereo Scans of 100 Sorghum Panicles
This dataset contains stereo images, depth data, camera position/orientation, and camera information for 100 sorghum panicles (the seed-bearing head of the sorghum stalk), as well as semantic segmentation labels for a subset of the data. The 100 sampled sorghum stalks are drawn from 10 different species, in groups of 10. For image capture, an illumination-invariant flash camera developed at CMU was swept around each panicle with a UR5 robotic arm, and approximately 150 image pairs were captured per panicle, giving a full 3D view of each stalk.
Provide a detailed description of the following dataset: High-Resolution Stereo Scans of 100 Sorghum Panicles
MSU FR VQA Database
The dataset was created for the video quality assessment problem. It was formed from 36 clips from Vimeo, selected from 18,000+ open-source clips with high bitrate (CC BY or CC0 licenses). The clips include videos recorded by both professionals and amateurs. Almost half of the videos contain scene changes and high dynamism, and the ratio of synthetic to natural lighting is approximately 1 to 3. Content types: nature, sport, close-ups of humans, gameplay, music videos, water streams or steam, CGI. Effects and distortions: shaking, slow motion, grain/noise, too-dark/too-bright regions, macro shooting, captions (text), extraneous objects on or close to the camera lens. Resolution: 1920x1080, as the most popular modern video resolution (more in the future). Format: yuv420p. FPS: 24, 25, 30, 39, 50, 60. Video duration: mainly 10 seconds. Such content diversity helps simulate near-realistic conditions. Videos for the benchmark were selected using clustering on spatio-temporal complexity to obtain a representative distribution. For compression, 40 codecs of 10 compression standards (H.264, AV1, H.265, VVC, etc.) were used. Each video was compressed at 3 target bitrates (1,000 Kbps, 2,000 Kbps, and 4,000 Kbps) and with different real-life encoding modes: constant quality (CRF) and variable bitrate (VBR). This bitrate range simplifies the subjective comparison procedure, since video quality is harder to distinguish visually at higher bitrates. The subjective assessment involved pairwise comparisons using the crowdsourcing service Subjectify.us. To increase the relevance of the results, each pair of videos received at least 10 responses from participants. In total, 766,362 valid answers were collected from more than 10,800 unique participants.
Provide a detailed description of the following dataset: MSU FR VQA Database
FHRMA dataset for FS detection
FHRMA is an open-source project for Fetal Heart Rate Morphological Analysis containing Matlab source code and datasets. As a sub-project, it includes a deep learning method and dataset for automatic identification of the maternal heart rate (MHR) and, more generally, false signals (FSs) on fetal heart rate (FHR) recordings. The challenge particularly concerns the FHR signal recorded with Doppler sensors, on which MHR interference and other FSs are especially common, but the dataset also includes FHR recorded with scalp ECG. The training and validation dataset contains 1030 expert-annotated periods (mean duration: 36 min) from 635 recordings. Labels annotate each time sample as either 1: false signal; 0: true signal; or -1: do not know / irrelevant. Test datasets are also available, but the test annotations are not open access; researchers who want to evaluate their models must send their results for evaluation, and a competition has been opened for this purpose. As a baseline method and to help get started, a GRU-based model coded in Python/TensorFlow is available. More details on the dataset, the problem description, and the open-source method are available in [Use of Deep Learning to Detect the Maternal Heart Rate and False Signals on Fetal Heart Rate Recordings](https://www.mdpi.com/2079-6374/12/9/691).
Provide a detailed description of the following dataset: FHRMA dataset for FS detection
Dataset for Post-OCR text correction in Sanskrit
This dataset, designed for post-OCR text correction in Sanskrit, contains around 218K sentences with 1.5 million words drawn from 30 different books.
Provide a detailed description of the following dataset: Dataset for Post-OCR text correction in Sanskrit
INDRA
INDRA is a dataset of videos of Indian roads captured from the pedestrian point of view. It contains 104 videos comprising 26k 1080p frames, each annotated with a binary road-crossing safety label and vehicle bounding boxes.
Provide a detailed description of the following dataset: INDRA
FIB
Factual Inconsistency Benchmark (**FIB**) is a benchmark that focuses on the task of summarization. Specifically, the benchmark compares the scores an LLM assigns to a factually consistent versus a factually inconsistent summary of an input news article. For the factually consistent summaries, human-written reference summaries are manually verified as factually consistent.
Provide a detailed description of the following dataset: FIB
MM-Locate-News
**MM-Locate-News** is a dataset for location estimation of news. It consists of 6395 news articles covering 237 cities and 152 countries across all continents, as well as multiple domains such as health, environment, and politics. The dataset is collected in a weakly supervised manner, and multiple data cleaning steps are applied to remove articles with potentially inaccurate geolocation information. The dataset addresses drawbacks of other datasets such as BreakingNews, as it considers the multimodal content of news when labeling the corresponding location.
Provide a detailed description of the following dataset: MM-Locate-News
Nlakh
**Nlakh** is a dataset for Musical Instrument Retrieval. It is a combination of the NSynth dataset, which provides a large number of instruments, and the Lakh dataset, which provides multi-track MIDI data.
Provide a detailed description of the following dataset: Nlakh
EmoPars
**EmoPars** is a dataset of 30,000 Persian Tweets labeled with Ekman’s six basic emotions (Anger, Fear, Happiness, Sadness, Hatred, and Wonder). This is the first publicly available emotion dataset in the Persian language.
Provide a detailed description of the following dataset: EmoPars
ArmanEmo
**ArmanEmo** is a human-labeled emotion dataset of more than 7000 Persian sentences labeled for seven categories. The dataset has been collected from different resources, including Twitter, Instagram, and Digikala (an Iranian e-commerce company) comments. Labels are based on Ekman's six basic emotions (Anger, Fear, Happiness, Hatred, Sadness, Wonder) and another category (Other) to consider any other emotion not included in Ekman's model.
Provide a detailed description of the following dataset: ArmanEmo
YM2413-MDB
**YM2413-MDB** is an 80s FM video game music dataset with multi-label emotion annotations. It includes 669 audio and MIDI files of music from Sega and MSX PC games in the 80s using YM2413, a programmable sound generator based on FM. The collected game music is arranged with a subset of 15 monophonic instruments and one drum instrument.
Provide a detailed description of the following dataset: YM2413-MDB
SSL4EO-S12
**SSL4EO-S12** is a large-scale, global, multimodal, and multi-seasonal corpus of satellite imagery from the ESA Sentinel-1 & -2 satellite missions.
Provide a detailed description of the following dataset: SSL4EO-S12
PaintNet
**PaintNet** is a dataset for learning robotic spray painting of free-form 3D objects. PaintNet includes more than 800 object meshes and the associated painting strokes collected in a real industrial setting.
Provide a detailed description of the following dataset: PaintNet
Open Images V7
Open Images is a computer vision dataset covering ~9 million images with labels spanning thousands of object categories. A subset of 1.9M images includes diverse annotation types: 15,851,536 boxes on 600 classes; 2,785,498 instance segmentations on 350 classes; 3,284,280 relationship annotations on 1,466 relationships; 675,155 localized narratives (synchronized voice, mouse trace, and text caption); 66,391,027 point-level annotations on 5,827 classes; and 61,404,966 image-level labels on 20,638 classes. Images are under a CC BY 2.0 license, annotations under a CC BY 4.0 license.
Provide a detailed description of the following dataset: Open Images V7
ec-darkpattern
**ec-darkpattern** is a dataset for dark pattern detection, released together with baseline detection performance using state-of-the-art machine learning methods. The dark pattern texts were obtained from Mathur et al.'s 2019 study [11kScale], which provides 1,818 dark pattern texts from shopping sites. Negative samples, i.e., non-dark-pattern texts, were collected by retrieving texts from the same websites as Mathur et al.'s dataset.
Provide a detailed description of the following dataset: ec-darkpattern
IGLU
IGLU is a dataset designed for interactive grounded language understanding. It has a total of 8,136 single-turn data pairs of instructions and actions. Every sample is randomly initialized with a pre-built structure from previously collected multi-turn interaction data.
Provide a detailed description of the following dataset: IGLU
kaggle stroke Prediction competition
A Kaggle competition on stroke prediction; the dataset is heavily imbalanced.
Provide a detailed description of the following dataset: kaggle stroke Prediction competition
Flare7K
Flare7K is the first nighttime flare removal dataset, generated based on observations and statistics of real-world nighttime lens flares. It offers 5,000 scattering flare images and 2,000 reflective flare images, consisting of 25 types of scattering flares and 10 types of reflective flares. The 7,000 flare patterns can be randomly added to flare-free images, forming flare-corrupted and flare-free image pairs.
Provide a detailed description of the following dataset: Flare7K
NLPeer
**NLPeer** is a multidomain corpus of more than 5k papers and 11k review reports from five different venues. In addition to new data comprising paper drafts, camera-ready versions and peer reviews from the NLP community, it provides a unified data representation and augments previous peer review datasets with parsed, structured paper representations, rich metadata and versioning information.
Provide a detailed description of the following dataset: NLPeer
CSCD-IME
The Chinese Spelling Correction Dataset for errors generated by pinyin IME (CSCD-IME) contains 40,000 annotated sentences from real posts of official media accounts on Sina Weibo. It is designed for detecting and correcting spelling mistakes in Chinese texts.
Provide a detailed description of the following dataset: CSCD-IME
Virtuoso Strings
**Virtuoso Strings** is a dataset for soft onset detection for string instruments. It consists of over 144 recordings of professional performances of an excerpt from Haydn's string quartet Op. 74 No. 1 Finale, each with corresponding per-instrument onset annotations.
Provide a detailed description of the following dataset: Virtuoso Strings
Cinescale
We provide a database containing shot scale annotations (i.e., the apparent distance of the camera from the subject of a filmed scene) for more than 792,000 image frames. Frames belong to 124 full movies from the entire filmographies of 6 important directors: Martin Scorsese, Jean-Luc Godard, Béla Tarr, Federico Fellini, Michelangelo Antonioni, and Ingmar Bergman. Each frame, extracted from the videos at 1 frame per second, is annotated with one of the following shot scale categories: Extreme Close Up (ECU), Close Up (CU), Medium Close Up (MCU), Medium Shot (MS), Medium Long Shot (MLS), Long Shot (LS), Extreme Long Shot (ELS), Foreground Shot (FS), and Insert Shot (IS). Two independent coders annotated all frames from the 124 movies, while a third checked their coding and made decisions in cases of disagreement. The CineScale database enables AI-driven interpretation of shot scale data and opens up a large set of research activities related to the automatic visual analysis of cinematic material, such as the automatic recognition of a director's style or the unfolding of the relationship between shot scale and viewers' emotional experience. For these purposes, we also provide the model and the code for building a Convolutional Neural Network (CNN) architecture for automated shot scale recognition. All this material is provided through the project website, where video frames can also be requested from the authors for research purposes under fair use.
Provide a detailed description of the following dataset: Cinescale
BANKEX
Contains stock market closing prices of ten financial institutions, in Indian Rupees (INR). Daily samples were retrieved between 12 July 2005 and 3 November 2017; each time series contains 3,032 samples.
Provide a detailed description of the following dataset: BANKEX
CIPM evaluation data
The main goal of Continuous Integration of Performance Models (CIPM) is to enable accurate architecture-based performance prediction at each point of the software development life cycle. For this goal, the CIPM approach continuously updates the architecture-level performance models of a software system according to changes observed at development time and at operation time. The linked webpage provides data on experiments with CIPM published in several papers.
Provide a detailed description of the following dataset: CIPM evaluation data
Activities
Contains ten synthetic time series with five days of high activity and two days of low activity. Each series has 3584 samples.
Provide a detailed description of the following dataset: Activities
Comet
**Comet** is a dataset which contains 11.5k user-assistant dialogs (totalling 103k utterances), grounded in simulated personal memory graphs.
Provide a detailed description of the following dataset: Comet
GF-PA66 3D XCT (latest)
Stack of 2D gray images of glass fiber-reinforced polyamide 66 (GF-PA66) 3D X-ray Computed Tomography (XCT) specimen. Usage: 2D/3D image segmentation Format: HDF5 Libraries to read HDF5 files: 1) silx: [https://github.com/silx-kit/silx](https://github.com/silx-kit/silx) 2) h5py: [https://www.h5py.org](https://www.h5py.org) 3) pymicro: [https://github.com/heprom/pymicro](https://github.com/heprom/pymicro) Trained models to segment this dataset: [https://doi.org/10.5281/zenodo.4601560](https://doi.org/10.5281/zenodo.4601560) Please cite us as ``` @ARTICLE{10.3389/fmats.2021.761229, AUTHOR={Bertoldo, João P. C. and Decencière, Etienne and Ryckelynck, David and Proudhon, Henry}, TITLE={A Modular U-Net for Automated Segmentation of X-Ray Tomography Images in Composite Materials}, JOURNAL={Frontiers in Materials}, VOLUME={8}, YEAR={2021}, URL={https://www.frontiersin.org/article/10.3389/fmats.2021.761229}, DOI={10.3389/fmats.2021.761229}, ISSN={2296-8016}, } ```
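The description above lists silx, h5py and pymicro as libraries for reading the HDF5 files. As a minimal sketch (the file name and the internal dataset key below are placeholders, to be checked against the actual file layout), the gray-image stack could be inspected and loaded with h5py as follows:

```python
# Minimal sketch, not the official loader: open one of the GF-PA66 HDF5 files with h5py.
# "gf-pa66.h5" and the "data" key are placeholders; list the file's contents first.
import h5py
import numpy as np

with h5py.File("gf-pa66.h5", "r") as f:
    f.visit(print)                    # print every group/dataset name stored in the file
    volume = np.asarray(f["data"])    # read the 2D gray-image stack as a 3D NumPy array

print(volume.shape, volume.dtype)     # e.g. (n_slices, height, width)
```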
Provide a detailed description of the following dataset: GF-PA66 3D XCT (latest)
IEEE CIS Fraud Detection
The Vesta dataset was released for use in the IEEE CIS Fraud Detection competition. It contains 590,540 card transactions, 20,663 of which are fraudulent (3.5%). Each transaction has 431 features (400 numerical, 31 categorical), along with the relative timestamp and a label of whether it was fraudulent or legitimate. For anonymization purposes, the names of the identity features have been masked, along with the names of the extra features engineered by Vesta.
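Given the strong class imbalance (3.5% fraud), a common first step is to check the positive rate and derive a class weight. The sketch below assumes the Kaggle file and column names (`train_transaction.csv`, `isFraud`); adjust them to the files you actually have.

```python
# Minimal sketch under assumed Kaggle file/column names; not part of the official release.
import pandas as pd

train = pd.read_csv("train_transaction.csv")
fraud_rate = train["isFraud"].mean()            # ~0.035 according to the description
print(f"fraud rate: {fraud_rate:.3%}")

# One common remedy for the imbalance: up-weight the positive class,
# e.g. scale_pos_weight = negatives / positives for gradient-boosted trees.
print(f"suggested scale_pos_weight: {(1 - fraud_rate) / fraud_rate:.1f}")
```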
Provide a detailed description of the following dataset: IEEE CIS Fraud Detection
Tamil Memes
Social media are interactive platforms that facilitate the creation and sharing of information, ideas and other forms of expression among people. This exchange is not free from offensive, trolling or malicious content targeting users or communities. One way of trolling is by making memes, which in most cases combine an image with a concept or catchphrase. The challenge of dealing with memes is that they are region-specific and their meaning is often obscured in humour or sarcasm. To facilitate the computational modelling of trolling in memes for Indian languages, we created a meme dataset for Tamil (TamilMemes). We annotated and released the dataset containing suspected troll and not-troll memes. In this paper, we use an image classification approach to address the difficulties involved in the classification of troll memes with existing methods. We found that the identification of a troll meme with such an image classifier is not feasible, which is corroborated by precision, recall and F1-score.
Provide a detailed description of the following dataset: Tamil Memes
Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS – Web Scans
This dataset contains the main data set of our FOCI 2020 paper "Padding Ain’t Enough: Assessing the Privacy Guarantees of Encrypted DNS". https://www.usenix.org/conference/foci20/presentation/bushart You can find the source code for this project on GitHub: https://github.com/jonasbb/padding-aint-enough When using this software or our dataset, please cite our FOCI 20 paper. ``` @inproceedings {PaddingAintEnough, author = {Jonas Bushart and Christian Rossow}, booktitle = {10th {USENIX} Workshop on Free and Open Communications on the Internet ({FOCI} 20)}, month = aug, publisher = {{USENIX} Association}, title = {Padding Ain{\textquoteright}t Enough: Assessing the Privacy Guarantees of Encrypted {DNS}}, year = {2020}, } ```
Provide a detailed description of the following dataset: Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS – Web Scans
Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS – Subpage-Agnostic Domain Classification Firefox
This dataset contains one part for the "Subpage-Agnostic Domain Classification" section of our FOCI 2020 paper "Padding Ain’t Enough: Assessing the Privacy Guarantees of Encrypted DNS". https://www.usenix.org/conference/foci20/presentation/bushart You can find the source code for this project on GitHub: https://github.com/jonasbb/padding-aint-enough When using this software or our dataset, please cite our FOCI 20 paper. ``` @inproceedings {PaddingAintEnough, author = {Jonas Bushart and Christian Rossow}, booktitle = {10th {USENIX} Workshop on Free and Open Communications on the Internet ({FOCI} 20)}, month = aug, publisher = {{USENIX} Association}, title = {Padding Ain{\textquoteright}t Enough: Assessing the Privacy Guarantees of Encrypted {DNS}}, year = {2020}, } ```
Provide a detailed description of the following dataset: Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS – Subpage-Agnostic Domain Classification Firefox
Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS – Subpage-Agnostic Domain Classification Tor Browser
This dataset contains the second part of the "Subpage-Agnostic Domain Classification" section of our FOCI 2020 paper "Padding Ain’t Enough: Assessing the Privacy Guarantees of Encrypted DNS". https://www.usenix.org/conference/foci20/presentation/bushart You can find the source code for this project on GitHub: https://github.com/jonasbb/padding-aint-enough When using this software or our dataset, please cite our FOCI 20 paper. ``` @inproceedings {PaddingAintEnough, author = {Jonas Bushart and Christian Rossow}, booktitle = {10th {USENIX} Workshop on Free and Open Communications on the Internet ({FOCI} 20)}, month = aug, publisher = {{USENIX} Association}, title = {Padding Ain{\textquoteright}t Enough: Assessing the Privacy Guarantees of Encrypted {DNS}}, year = {2020}, } ```
Provide a detailed description of the following dataset: Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS – Subpage-Agnostic Domain Classification Tor Browser
ComMU
ComMU has 11,144 MIDI samples consisting of short note sequences created by professional composers, each with 12 corresponding metadata attributes. This dataset is designed for a new task, combinatorial music generation, which generates diverse and high-quality music from metadata alone using an auto-regressive language model.
Provide a detailed description of the following dataset: ComMU
Datasets for automatic acoustic identification of insects (Orthoptera and Cicadidae)
This dataset contains recordings of 32 sound-producing insect species, with a total of 335 files and a length of 57 minutes. The dataset was compiled for training neural networks to automatically identify insect species while comparing adaptive, waveform-based frontends to conventional mel-spectrogram frontends for audio feature extraction. This work will be submitted for publication, and the dataset can be used to replicate the results as well as for other purposes. The scripts for audio processing and the machine learning implementations will be published on GitHub. The recordings are split into two datasets. Roughly half of the recordings (147) are of nine species belonging to the order Orthoptera; these recordings stem from a dataset originally compiled by Baudewijn Odé (unpublished). The remaining recordings (188) are of 23 species in the family Cicadidae, selected from the Global Cicada Sound Collection hosted on Bioacoustica (doi.org/10.1093/database/bav054), including recordings published in doi.org/10.3897/BDJ.3.e5792 and doi.org/10.11646/zootaxa.4340.1. Many recordings from this collection included speech annotations at the beginning, therefore the last ten seconds of audio were extracted and used in this dataset. All files were manually inspected, and files with strong noise interference or with sounds of multiple species were removed. Per species, the number of files ranges from four to 22 and the length from 40 seconds to almost nine minutes of audio material. Individual files range in length from less than one second to several minutes. All original files were available with sample rates of at least 44.1 kHz but were resampled to 44.1 kHz mono WAV files for consistency. The annotation files contain information for each recording, including the file name, species name and identifier, as well as the data subset it was included in for training the neural network (training, test, validation).
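For illustration, a single recording could be loaded in the same 44.1 kHz mono format described above (the file name is a placeholder, and librosa is just one of several libraries that can do this):

```python
# Illustrative sketch: load one recording as 44.1 kHz mono, matching the dataset's preprocessing.
# "orthoptera_example.wav" is a placeholder file name.
import librosa

waveform, sr = librosa.load("orthoptera_example.wav", sr=44100, mono=True)
print(f"{len(waveform) / sr:.1f} s of audio at {sr} Hz")
```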
Provide a detailed description of the following dataset: Datasets for automatic acoustic identification of insects (Orthoptera and Cicadidae)
FDH
The Flickr Diverse Humans (FDH) dataset consists of 1.53M images of human figures from the YFCC100M dataset. Each image is annotated with keypoints, pixel-to-vertex correspondences (from CSE) and a segmentation mask.
Provide a detailed description of the following dataset: FDH
SummZoo
**SummZoo** is a benchmark consisting of 8 diverse summarization tasks with multiple sets of few-shot samples for each task, covering both monologue and dialogue domains.
Provide a detailed description of the following dataset: SummZoo
IRIS Multiple Instance Learning Dataset
This dataset contains the data for the paper 'Using Multiple Instance Learning for Explainable Solar Flare Prediction'. It consists of 10,000 spectrographs (bags) recorded by NASA's IRIS satellite with associated class labels AR (non-flaring active region) and PF (pre-flare active region). Each spectrograph consists of several hundred individual spectral pixels (instances). The dataset is intended to explore the use of multiple instance learning for the prediction of solar flares. Even though class labels are only known at the level of the full spectrograph ('bag level'), multiple instance learning allows learning the association of individual spectra with each of the classes ('instance level'). More information is provided in the paper and on the Zenodo page.
Provide a detailed description of the following dataset: IRIS Multiple Instance Learning Dataset
DyML-Vehicle
DyML-Vehicle merges two vehicle re-ID datasets, PKU VehicleID [1] and VERI-Wild [2]. Since these two datasets only have annotations on the identity (fine) level, we manually annotate each image with a “model” label (e.g., Toyota Camry, Honda Accord, Audi A4) and a “body type” label (e.g., car, SUV, microbus, pickup). Moreover, we label all taxi images as a novel testing class under the coarse level. [1] Hongye Liu, Yonghong Tian, Yaowei Wang, Lu Pang, and Tiejun Huang. Deep relative distance learning: Tell the difference between similar vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2167–2175, 2016. [2] Y. Lou, Y. Bai, J. Liu, S. Wang, and L. Duan. VERI-Wild: A large dataset and a new method for vehicle re-identification in the wild. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3230–3238, 2019.
Provide a detailed description of the following dataset: DyML-Vehicle
IDK-MRC
**IDK-MRC** is an Indonesian Machine Reading Comprehension (MRC) dataset consisting of more than 10K questions in total, including over 5K unanswerable questions with diverse question types.
Provide a detailed description of the following dataset: IDK-MRC
DyML-Animal
DyML-Animal is based on animal images selected from ImageNet-5K [1]. Its semantic scales follow biological taxonomy (i.e., class, order, family, genus, species). Specifically, there are 611 “species” for the fine level, 47 categories corresponding to “order”, “family” or “genus” for the middle level, and 5 “classes” for the coarse level. We note that some animals present a contradiction between visual perception and biological taxonomy; e.g., whales are mammals but actually look more similar to fish. Annotating the whale images as mammals would cause confusion for visual recognition, so we carefully checked for potential contradictions and intentionally left out those animals. [1] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
Provide a detailed description of the following dataset: DyML-Animal
DyML-Product
DyML-Product is derived from iMaterialist-2019, a hierarchical online product dataset. The original iMaterialist-2019 offers up to 4 levels of hierarchical annotations. We remove the coarsest level and maintain 3 levels for DyML-Product. https://github.com/MalongTech/imaterialist-product-2019
Provide a detailed description of the following dataset: DyML-Product
aiMotive Dataset
aiMotive dataset is a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain and is annotated with 3D bounding boxes with consistent identifiers across frames.
Provide a detailed description of the following dataset: aiMotive Dataset
RoMQA
**RoMQA** is a benchmark for robust, multi-evidence, and multi-answer question answering (QA). RoMQA contains clusters of questions that are derived from related constraints mined from the Wikidata knowledge graph. The dataset evaluates robustness of QA models to varying constraints by measuring worst-case performance within each question cluster.
Provide a detailed description of the following dataset: RoMQA
CCSE
**Chinese Character Stroke Extraction (CCSE)** is a benchmark containing two large-scale datasets: Kaiti CCSE (CCSE-Kai) and Handwritten CCSE (CCSE-HW). It is designed for stroke extraction problems.
Provide a detailed description of the following dataset: CCSE
Kor-Learner
**Kor-Learner** is a Korean grammatical error correction (GEC) dataset made from the NIKL learner corpus, which contains essays written by Korean learners and grammatical error correction annotations by their tutors in a morpheme-level XML file format. It contains more than 28K sentence pairs.
Provide a detailed description of the following dataset: Kor-Learner
Kor-Native
**Kor-Native** is a Korean grammatical error correction (GEC) dataset collected from two sources. Grammatically correct sentences were read aloud using the Google Text-to-Speech (TTS) system, and members of the general public were asked to transcribe the dictated sentences. It contains more than 17K sentence pairs.
Provide a detailed description of the following dataset: Kor-Native
Kor-Lang8
**Kor-Lang8** is a Korean grammatical error correction (GEC) dataset extracted from the NAIST Lang-8 Learner Corpora by the language label. It contains more than 109K sentence pairs.
Provide a detailed description of the following dataset: Kor-Lang8
pmuBAGE
**pmuBAGE** (the Benchmarking Assortment of Generated PMU Events) is a dataset that consists of almost 1000 instances of labeled event data to encourage benchmark evaluations on phasor measurement unit (PMU) data analytics. PMU data are challenging to obtain, especially those covering event periods. Nevertheless, power system problems have recently seen phenomenal advancements via data-driven machine learning solutions. A highly accessible standard benchmarking dataset would enable a drastic acceleration of the development of successful machine learning techniques in this field.
Provide a detailed description of the following dataset: pmuBAGE
Vident-lab
**Vident-lab** is a dataset of dental videos with multi-task labels to facilitate further research in relevant video processing applications. Each data point constitutes a low-quality frame, its high-quality counterpart, a teeth segmentation mask, and an inter-frame homography matrix; the homography warps the current frame to the previous frame with respect to the teeth. The dataset has training, validation, and test sets of 300, 29, and 80 videos, respectively.
Provide a detailed description of the following dataset: Vident-lab
Demetr
**Demetr** is a diagnostic dataset with 31K English examples (translated from 10 source languages) for evaluating the sensitivity of MT evaluation metrics to 35 different linguistic perturbations spanning semantic, syntactic, and morphological error categories.
Provide a detailed description of the following dataset: Demetr
EUR-Lex-Sum
**EUR-Lex-Sum** is a dataset for cross-lingual summarization. It is based on manually curated document summaries of legal acts from the European Union law platform. Documents and their respective summaries exist as crosslingual paragraph-aligned data in several of the 24 official European languages, enabling access to various cross-lingual and lower-resourced summarization setups. The dataset contains up to 1,500 document/summary pairs per language, including a subset of 375 cross-lingually aligned legal acts with texts available in all 24 languages.
Provide a detailed description of the following dataset: EUR-Lex-Sum
ExPUNations
**ExPUNations** is a humor dataset with extensive and fine-grained annotations specifically for puns. The dataset is designed for two new tasks: explanation generation to aid pun classification, and keyword-conditioned pun generation.
Provide a detailed description of the following dataset: ExPUNations
POPGym
POPGym is designed to benchmark memory in deep reinforcement learning. It contains a set of *environments* and a collection of *memory model baselines*. The environments are all Partially Observable Markov Decision Process (POMDP) environments following the [OpenAI Gym](https://github.com/openai/gym) interface. Our environments follow a few basic tenets: 1. **Painless Setup** - `popgym` environments require only `gym`, `numpy`, and `mazelib` as dependencies; 2. **Laptop-Sized Tasks** - most tasks can be solved in less than a day on a CPU; 3. **True Generalization** - all environments are heavily randomized. The paper uses 15M environment steps for each trial.
Provide a detailed description of the following dataset: POPGym
Haydn Annotation Dataset
The Haydn Annotation Dataset consists of note onset annotations from 24 experiment participants with varying musical experience. The annotation experiments use recordings from the ARME Virtuoso Strings Dataset.
Provide a detailed description of the following dataset: Haydn Annotation Dataset
UJ-CS/Math/Phy
Definitions of jargon/terms in computer science, mathematics, and physics
Provide a detailed description of the following dataset: UJ-CS/Math/Phy
APIDIS
# Data Approximately 2 hours of video were captured from 7 viewpoints during a professional basketball game. **Raw data** The cameras were recording at almost 22 fps on average, with a resolution of 1600x1200 pixels. The video files are available in their native format, i.e. one motion JPEG file (~300 MB) per minute per camera. **Pseudo-synchronised video** A pseudo-synchronised dataset is provided by resampling the raw videos at 25 fps (using the closest available frame) with a resolution of 800x600 pixels. The video files are available as one MPEG-4 file (between 28 MB and 56 MB) per minute per camera. For optimal timestamp accuracy, the original dataset should be preferred. # Annotations **Timestamps** All timestamps are expressed in seconds since Epoch when provided as integers. When provided in a human-readable format, e.g. in filenames, they follow the <a href="http://en.wikipedia.org/wiki/ISO_8601">ISO 8601</a> date/time syntax. **Calibration** All cameras are <a href="http://www.arecontvision.com/">Arecont Vision</a> AV2100M IP cameras (<a href="http://www.arecontvision.com/resources.php?pid=106">datasheet</a>). The fish-eye lenses used for the top-view cameras are <a href="http://www.fujinon.com">Fujinon</a> FE185C086HA-1 lenses (<a href="http://www.fujifilmusa.com/products/optical_devices/security/fish-eye/5-mp/fe185c086ha-1/">datasheet</a>). Each camera was individually calibrated into a common world coordinate system. **Additional annotations** Several manually annotated objects are provided (see NEM'08 summit paper): basketball events such as ball possession periods, throws, and violations, for the whole basketball game; and ball, player and referee positions, for one minute of the game. Annotated events and salient objects are recorded into two kinds of XML files whose syntax is described in the XML Schema Definition (xsd) files `apidis-annotation-ver23.xsd` and `apidis-salientobj-ver1.xsd`. A simplified structural diagram of event XML files is illustrated in `event-xml-simple.png`. The tags for describing the detected objects and their properties are illustrated in `salient-obj-xml.png`. The ball was annotated in the 2D images on every camera where it was visible; an approximate 3D localization is inferred for the pseudo-synchronized dataset. # Terms of use This dataset is available for non-commercial research in video signal processing only. We kindly ask you to mention the APIDIS project when using this dataset (in publications, video demonstrations, ...). # Acknowledgements We would like to thank Jean-François Prior (<a href="http://www.belfiusnamurcapitale.be/">Dexia Namur</a> basketball team), Philippe Delmulle (<a href="http://www.dbcwaregem.be/">Declercq Stortbeton Waregem</a> basketball team) and the city of <a href="http://www.namur.be/">Namur</a> for their authorisations and technical help in collecting this dataset.
Provide a detailed description of the following dataset: APIDIS
modified_shemo
A modification of the ShEMO dataset made with the help of an Automatic Speech Recognition (ASR) system. The ShEMO dataset contains 3000 audio files along with 3000 text files holding the ground-truth transcription of each sentence. The text file of the sentence related to a given audio file can be found through the file names: the audio and the text file of an utterance should have the same name. However, out of 3000 files, only 2838 have matching names. On further investigation, we found that some of these text files have wrong names and refer to the wrong audio file. We fixed these errors and inconsistencies in the ShEMO dataset by using an Automatic Speech Recognition (ASR) system.
Provide a detailed description of the following dataset: modified_shemo
PKG sample
A random sample from the PubMed Knowledge Graph.
Provide a detailed description of the following dataset: PKG sample
Brain Tumor Dataset
This brain tumor dataset contains 3064 T1-weighted contrast-enhanced images with three kinds of brain tumor. Detailed information on the dataset can be found in the readme file.
Provide a detailed description of the following dataset: Brain Tumor Dataset
SeaTurtleID
**SeaTurtleID** is a public large-scale, long-span dataset with sea turtle photographs captured in the wild. The dataset is suitable for benchmarking re-identification methods and evaluating several other computer vision tasks. It consists of 7774 high-resolution photographs of 400 unique individuals collected within 12 years in 1081 encounters. Each photograph is accompanied by rich metadata, e.g., identity label, head segmentation mask, and encounter timestamp.
Provide a detailed description of the following dataset: SeaTurtleID
LVOS
**LVOS** is a dataset for long-term video object segmentation (VOS). It consists of 220 videos with a total duration of 421 minutes. The videos in our LVOS last 1.59 minutes on average, which is 20 times longer than videos in existing VOS datasets. Each video includes various attributes, especially challenges deriving from the wild, such as long-term reappearing and cross-temporal similar objects.
Provide a detailed description of the following dataset: LVOS
Brazilian Protest
**Brazilian Protest** is a dataset for event filtering that focuses on protests in multimodal social media data, with most of the text in Portuguese. The dataset contains 4.5 million tweets, of which 155 thousand are associated with a URL to an uncurated article and 370 thousand have associated media content (including the media of the uncurated articles).
Provide a detailed description of the following dataset: Brazilian Protest
Foot3D
A dataset of high resolution, textured scans of articulated left feet, useful for 3D shape representation learning.
Provide a detailed description of the following dataset: Foot3D
KITTI-6DoF
**KITTI-6DoF** is a dataset that contains annotations for the 6DoF estimation task for 5 object categories on 7,481 frames.
Provide a detailed description of the following dataset: KITTI-6DoF
STIR
While convolutions are known to be invariant to (discrete) translations, scaling continues to be a challenge and most image recognition networks are not invariant to it. To explore these effects, we have created the Scaled and Translated Image Recognition (STIR) dataset. This dataset contains objects of size $s \in [17, 64]$, each randomly placed in a $64 \times 64$ pixel image.
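As an illustration of the setup (not the dataset's actual generation code), a $64 \times 64$ sample with a randomly scaled and translated object could be produced as below; the template object is a stand-in:

```python
# Illustrative sketch only: place a stand-in object at a random scale s in [17, 64]
# and a random position inside a 64x64 canvas, mimicking the STIR construction.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
template = Image.fromarray((rng.random((32, 32)) * 255).astype(np.uint8))  # stand-in object

s = int(rng.integers(17, 65))                                 # object size in pixels, 17..64
obj = template.resize((s, s))
x, y = (int(v) for v in rng.integers(0, 64 - s + 1, size=2))  # top-left corner so the object fits
canvas = Image.new("L", (64, 64), color=0)
canvas.paste(obj, (x, y))
sample = np.asarray(canvas)                                   # one 64x64 image, analogous to a STIR sample
```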
Provide a detailed description of the following dataset: STIR
Lyra Dataset
**Lyra** is a dataset of 1570 traditional and folk Greek music pieces that includes audio and video (timestamps and links to YouTube videos), along with annotations that describe aspects of particular interest for this dataset, including instrumentation, geographic information and labels of genre and subgenre, among others.
Provide a detailed description of the following dataset: Lyra Dataset
KITTI360-EX
**KITTI360-EX** is a dataset for outer- and inner FoV expansion. It contains 76k pinhole images as well as 76k spherical images and is used for beyond-FoV estimation.
Provide a detailed description of the following dataset: KITTI360-EX
MOET
**MOET** is a dataset consisting of gaze data from participants tracking specific objects in crowded real-world videos, annotated with labels and bounding boxes, for training and evaluating attention decoding algorithms.
Provide a detailed description of the following dataset: MOET
CORSMAL
**CORSMAL** is a dataset for estimating the position and orientation in 3D (or 6D pose) of an object from a single view. The dataset consists of 138,240 images of rendered hands and forearms holding 48 synthetic objects, split into 3 grasp categories over 30 real backgrounds.
Provide a detailed description of the following dataset: CORSMAL
The QUAERO French Medical Corpus
A vast amount of information in the biomedical domain is available as natural language free text. An increasing number of documents in the field are written in languages other than English. Therefore, it is essential to develop resources, methods and tools that address Natural Language Processing in the variety of languages used by the biomedical community. In this paper, we report on the development of an extensive corpus of biomedical documents in French annotated at the entity and concept level. Three text genres are covered, comprising a total of 103,056 words. Ten entity categories corresponding to UMLS Semantic Groups were annotated, using automatic pre-annotations validated by trained human annotators. The pre-annotation method was found helpful for entities and achieved above 0.83 precision for all text genres. Overall, a total of 26,409 entity annotations were mapped to 5,797 unique UMLS concepts.
Provide a detailed description of the following dataset: The QUAERO French Medical Corpus
HyperRED
HyperRED is a dataset for the new task of hyper-relational extraction, which extracts relation triplets together with qualifier information such as time, quantity or location. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967). HyperRED contains 44k sentences with 62 relation types and 44 qualifier types.
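To make the structure concrete, a hyper-relational fact like the example above could be represented as follows (an illustrative Python representation only; the actual HyperRED release may use a different schema):

```python
# Illustrative representation of one hyper-relational fact; not the official HyperRED schema.
example = {
    "triplet": ("Leonard Parker", "Educated At", "Harvard University"),
    "qualifiers": [("End Time", "1967")],
}
print(example)
```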
Provide a detailed description of the following dataset: HyperRED
SemEval-2018 Task 1
POST-COMPETITION: The official competition is now over, but you are welcome to develop and test new solutions on this website. All data with gold labels (training, development, and test) are available here. The test data in this archive do not include the instances from the Equity Evaluation Corpus (EEC) used for bias evaluation. The EEC corpus is available here.
Provide a detailed description of the following dataset: SemEval-2018 Task 1
LEAFTOP
Nouns extracted automatically from Bible translations across 1580 languages.
Provide a detailed description of the following dataset: LEAFTOP
SAF
This dataset can be found on HuggingFace: https://huggingface.co/datasets/Short-Answer-Feedback/saf_communication_networks_english https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german
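Both subsets are hosted on the Hugging Face Hub, so they can be loaded with the `datasets` library; a minimal sketch (inspect the returned object for the actual splits and columns):

```python
# Minimal sketch: load one SAF subset from the Hugging Face Hub with the `datasets` library.
from datasets import load_dataset

saf = load_dataset("Short-Answer-Feedback/saf_micro_job_german")
print(saf)   # shows the available splits and their columns
```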
Provide a detailed description of the following dataset: SAF
SYNS-Patches
**SYNS-Patches** is a subset of the SYNS dataset. The original SYNS is composed of aligned image and LiDAR panoramas from 92 different scenes belonging to a wide variety of environments, such as agriculture, natural (e.g. forests and fields), residential, industrial and indoor. SYNS-Patches is the subset of patches from each scene extracted at eye level at 20-degree intervals of a full horizontal rotation. This results in 18 images per scene and a total dataset size of 1656.
Provide a detailed description of the following dataset: SYNS-Patches
PromptSpeech
**PromptSpeech** is a dataset that consists of speech and the corresponding prompts. We synthesize speech with 5 different style factors (gender, pitch, speaking speed, volume, and emotion) from a commercial TTS API. The emotion factor has 5 categories and the gender factor has 2 categories.
Provide a detailed description of the following dataset: PromptSpeech
CVGL
**CVGL Camera Calibration Dataset** consists of 49 camera configurations, with Town 1 having 25 configurations and Town 2 having 24 configurations. The parameters modified for generating the configurations include fov, x, y, z, pitch, yaw, and roll, where fov is the field of view, (x, y, z) is the translation and (pitch, yaw, roll) is the rotation between the cameras. The total number of image pairs is 79,320, out of which 18,083 belong to Town 1 and 61,237 belong to Town 2; the difference in the number of images is due to the length of the tracks.
Provide a detailed description of the following dataset: CVGL
NCTE Transcripts
**NCTE Transcripts** consists of 1,660 45-60 minute long 4th and 5th grade elementary mathematics observations collected by the National Center for Teacher Effectiveness (NCTE) between 2010-2013. The anonymized transcripts represent data from 317 teachers across 4 school districts that serve largely historically marginalized students. The transcripts come with rich metadata, including turn-level annotations for dialogic discourse moves, classroom observation scores, demographic information, survey responses and student test scores.
Provide a detailed description of the following dataset: NCTE Transcripts
ReplicaGrasp
The **ReplicaGrasp** dataset is created by spawning objects from GRAB into the ReplicaCAD scenes, simulated in random positions and orientations using the Habitat simulator. We capture 4,800 instances, with 50 different objects spawned in one of 48 receptacles in both upright and randomly fallen orientations.
Provide a detailed description of the following dataset: ReplicaGrasp
DS-1000
**DS-1000** is a code generation benchmark with a thousand data science questions spanning seven Python libraries that (1) reflects diverse, realistic, and practical use cases, (2) has a reliable metric, (3) defends against memorization by perturbing questions.
Provide a detailed description of the following dataset: DS-1000
ESB
**ESB** is a benchmark for evaluating the performance of a single automatic speech recognition (ASR) system across a broad set of speech datasets. It comprises eight English speech recognition datasets, capturing a broad range of domains, acoustic conditions, speaker styles, and transcription requirements.
Provide a detailed description of the following dataset: ESB
Spiced
**Spiced** is a paraphrase dataset of scientific findings annotated for degree of information change. Spiced contains 6,000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers.
Provide a detailed description of the following dataset: Spiced
MO7 Dataset
The MO7 dataset consists of 50,000 images with over 900 unique objects and over 18 classes. The dataset was collected in Missouri, Kansas, and Washington, 3 states known for their rural environments and challenging rural road conditions. The data covers over 400 miles recorded across the 3 states combined. Our objective was to choose routes that are challenging in nature and underrepresented in other available datasets; thus, we focused on unmarked roads, curvy roads, hills, and unpaved gravel roads, in addition to the availability of objects that are specific to rural areas such as agricultural machinery, small construction machinery used around farms, and farm animals.
Provide a detailed description of the following dataset: MO7 Dataset
SuperMat
A growing number of papers are published in the area of superconducting materials science. However, novel text and data mining (TDM) processes are still needed to efficiently access and exploit this accumulated knowledge, paving the way towards data-driven materials design. Herein, we present SuperMat (Superconductor Materials), an annotated corpus of linked data derived from scientific publications on superconductors, which comprises 142 articles, 16052 entities, and 1398 links that are characterised into six categories: the names, classes, and properties of materials; links to their respective superconducting critical temperature (Tc); and parametric conditions such as applied pressure or measurement methods. The construction of SuperMat resulted from a fruitful collaboration between computer scientists and material scientists, and its high quality is ensured through validation by domain experts. The quality of the annotation guidelines was ensured by satisfactory Inter Annotator Agreement (IAA) between the annotators and the domain experts.
Provide a detailed description of the following dataset: SuperMat
IMaSC
IMaSC is a Malayalam text and speech corpus made available by ICFOSS for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
Provide a detailed description of the following dataset: IMaSC
PPMR Dataset
This dataset contains 23 patients in total. Within each patient's MRI, not all images show evidence of PMG, and some slices are normal. Although the ratio between controls and patients is 3:1, the ratio between normal slices and anomaly slices is around 5:1. Each patient's MRI includes around 150 slices on average.
Provide a detailed description of the following dataset: PPMR Dataset
ODDS
Outliers or anomalies are instances that do not conform to the norm of a dataset. Outlier detection is an important data mining problem that has been researched within diverse research areas and application domains such as intrusion detection, fraud detection, unusual event detection, and disease condition detection. The exact notion of an outlier differs between application domains; hence, applying a technique developed for one domain to another is not straightforward. Moreover, labeled data for training/validation of outlier detection methods is scarce, and noise contained in data often resembles outliers, making them difficult to distinguish. Because of these challenges, outlier detection is not an easy problem to solve. Furthermore, research on outlier detection has been held back by the lack of good benchmark datasets with ground truths: existing benchmarks are typically either proprietary or very artificial, and existing real-world outlier/anomaly detection datasets lack ground truth. In ODDS, we openly provide access to a large collection of outlier detection datasets with ground truth (where available). Our focus is to provide datasets from different domains and present them under a single platform for the research community. As such, we arrange the datasets based on their types into different tables in the ODDS library. The ODDS library has been actively developed since summer 2016 and is growing as a result of our research pursuits in outlier/anomaly mining and to help the corresponding research community. Researchers are welcome to share their datasets with us to include in the ODDS library by emailing srayana@cs.stonybrook.edu.
Provide a detailed description of the following dataset: ODDS
N-ImageNet
The N-ImageNet dataset is an event-camera counterpart for the ImageNet dataset. The dataset is obtained by moving an event camera around a monitor displaying images from ImageNet. N-ImageNet contains approximately 1,300k training samples and 50k validation samples. In addition, the dataset also contains variants of the validation dataset recorded under a wide range of lighting or camera trajectories. Additional details about the dataset are explained in the paper available through this [link](https://arxiv.org/abs/2112.01041). Please cite this paper if you make use of the dataset.
Provide a detailed description of the following dataset: N-ImageNet
ProNCI
**ProNCI** consists of 22.5K proper noun compounds along with their free-form semantic interpretations. ProNCI is 60 times larger than prior noun compound datasets and also includes non-compositional examples.
Provide a detailed description of the following dataset: ProNCI
CUP
**CUP** (Context-sitUated Pun) is a dataset containing 4.5k tuples of context words and pun pairs, each labelled with whether they are compatible for composing a pun.
Provide a detailed description of the following dataset: CUP
Greek Parliament Proceedings
**Greek Parliament Proceedings** is a curated dataset of the Greek Parliament Proceedings that extends chronologically from 1989 up to 2020. It consists of more than 1 million speeches with extensive metadata, extracted from 5,355 parliamentary record files.
Provide a detailed description of the following dataset: Greek Parliament Proceedings
McQueen
**McQueen** dataset contains 15k visual conversations and over 80k queries where each one is associated with a fully-specified rewrite version. In addition, for entities appearing in the rewrite, the corresponding image box annotation is provided.
Provide a detailed description of the following dataset: McQueen
RuCoLA
The **Russian Corpus of Linguistic Acceptability (RuCoLA)** is built from the ground up under the well-established binary linguistic acceptability (LA) approach. RuCoLA consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models.
Provide a detailed description of the following dataset: RuCoLA
ComFact
ComFact is a benchmark for commonsense fact linking, where models are given contexts and trained to identify situationally relevant commonsense knowledge from KGs. The benchmark contains ~293k in-context relevance annotations for commonsense triplets across four stylistically diverse dialogue and storytelling datasets.
Provide a detailed description of the following dataset: ComFact
CoreSearch
**CoreSearch** is a dataset for Cross-Document Event Coreference Search. It consists of two separate passage collections: (1) a collection of passages containing manually annotated coreferring event mentions, and (2) an annotated collection of distractor passages.
Provide a detailed description of the following dataset: CoreSearch
DIOR-RSVG
**DIOR-RSVG** is a large-scale benchmark dataset for visual grounding in remote sensing (RSVG), which aims to localize objects referred to by natural language in remote sensing (RS) images. The dataset includes image/expression/box triplets for training and evaluating visual grounding models.
Provide a detailed description of the following dataset: DIOR-RSVG
VD-Ref
**VD-Ref** is a dataset with ground-truth mappings from both noun phrases and pronouns to image regions. It contains 10k complete dialogue sets from the VisDialog dataset and uses the StanfordCoreNLP tool to tokenize the sentences, making it suitable for the subsequent human annotation.
Provide a detailed description of the following dataset: VD-Ref
K-MHaS: Korean Multi-label Hate Speech Dataset
Korean Multi-label Hate Speech Dataset. We introduce K-MHaS, a new multi-label dataset for hate speech detection that effectively handles Korean language patterns. It consists of 109,692 utterances from Korean online news comments, labeled with 8 fine-grained hate speech classes, and was collected between January 2018 and June 2020. The dataset supports (a) binary classification (Hate Speech or Not Hate Speech) and (b) multi-label classification with 1 (one) to 4 (four) labels per utterance, using the fine-grained classes Politics, Origin, Physical, Age, Gender, Religion, Race, and Profanity. For the fine-grained classification, the Hate Speech class from the binary setting is broken down into these eight classes, each associated with a hate speech category.
Provide a detailed description of the following dataset: K-MHaS: Korean Multi-label Hate Speech Dataset
LTCC
LTCC contains 17,119 person images of 152 identities, and each identity is captured by at least two cameras. The dataset can be divided into two subsets: one cloth-change set where 91 persons appear with 416 different sets of outfits in 14,783 images, and one cloth-consistent subset containing the remaining 61 identities with 2,336 images without outfit changes. On average, there are 5 different clothes for each cloth-changing person, with the number of outfit changes ranging from 2 to 14.
Provide a detailed description of the following dataset: LTCC
PRCC
This dataset consists of 33,698 images from 221 identities. Each person in Cameras A and B wears the same clothes, but the images are captured in different rooms. For Camera C, the person wears different clothes, and the images are captured on a different day.
Provide a detailed description of the following dataset: PRCC