| dataset_name | description | prompt |
|---|---|---|
VISEM-Tracking | **VISEM-Tracking** is a dataset consisting of 20 thirty-second video recordings of spermatozoa with manually annotated bounding-box coordinates and a set of sperm characteristics analyzed by domain experts. It is an extension of the previously published VISEM dataset. In addition to the annotated data, unlabeled video clips are provided for easy access and analysis of the data. | Provide a detailed description of the following dataset: VISEM-Tracking |
OCR-IDL | The OCR-IDL dataset comprises the OCR annotations for a subset of 26M pages of the large-scale IDL document library. These annotations have a monetary value of over $20,000 and are made publicly available with the aim of advancing the Document Intelligence research field. Our motivation is two-fold: first, by making these annotations public, we aim to level the playing field between research groups and companies that have large private datasets to pre-train on. Second, we use a commercial OCR engine to obtain high-quality annotations, reducing the noise that OCR introduces into pretraining and downstream tasks. | Provide a detailed description of the following dataset: OCR-IDL |
ZeroKBC | **ZeroKBC** is a comprehensive benchmark that covers all scenarios of the zero-shot Knowledge Base Completion (KBC) task. It has 3 zero-shot scenarios with 8 fine-grained settings. | Provide a detailed description of the following dataset: ZeroKBC |
Fallout New Vegas Dialog | **Fallout New Vegas Dialog** is a multilingual sentiment-annotated dialog dataset from Fallout New Vegas. The game developers pre-annotated every line of dialog in the game with one of 8 different sentiments: anger, disgust, fear, happy, neutral, pained, sad, and surprised, and the lines have been translated into 5 different languages: English, Spanish, German, French, and Italian. | Provide a detailed description of the following dataset: Fallout New Vegas Dialog |
NarraSum | **NarraSum** is a large-scale narrative summarization dataset. It contains 122K narrative documents, which are collected from plot descriptions of movies and TV episodes with diverse genres, and their corresponding abstractive summaries. | Provide a detailed description of the following dataset: NarraSum |
STVD-PVCD | STVD is the largest public dataset for the PVCD task. It comprises about 83 thousand videos with a total duration of more than 10 thousand hours, including more than 420 thousand video copy pairs. It offers different test sets for fine-grained performance characterization (frame degradation, global transformation, video speed changes, etc.), with frame-level annotations for real-time detection and video alignment. Baseline comparisons are reported, showing room for improvement. More information about the STVD dataset can be found in the publications [1, 2].
[1] V.H. Le, M. Delalandre and D. Conte. A large-Scale TV Dataset for partial video copy detection. International Conference on Image Analysis and Processing (ICIAP), Lecture Notes in Computer Science (LNCS), vol 13233, pp. 388-399, 2022. [http://mathieu.delalandre.free.fr/publications/ICIAP2022.pdf](http://mathieu.delalandre.free.fr/publications/ICIAP2022.pdf)
[2] V.H. Le, M. Delalandre and D. Conte. Une large base de données pour la détection de segments de vidéos TV. Journées Francophones des Jeunes Chercheurs en Vision par Ordinateur (ORASIS), 2021. [http://mathieu.delalandre.free.fr/publications/ORASIS2021.pdf](http://mathieu.delalandre.free.fr/publications/ORASIS2021.pdf) | Provide a detailed description of the following dataset: STVD-PVCD |
STVD-FC | STVD-FC is the largest public dataset for political content analysis and fact-checking tasks. It consists of more than 1,200 fact-checked claims that have been scraped from a fact-checking service, with associated metadata. For the video counterpart, the dataset contains nearly 6,730 TV programs, with a total duration of 6,540 hours, together with metadata. These programs were collected during the 2022 French presidential election with a dedicated workstation and protocol. The dataset is delivered in several parts, with proper indexes, to make the 2 TB of data accessible. More information about the STVD-FC dataset can be found in the publication [1].
[1] F. Rayar, M. Delalandre and V.H. Le. A large-scale TV video and metadata database for French political content analysis and fact-checking. Conference on Content-Based Multimedia Indexing (CBMI), pp. 181–185, 2022. Source: [http://mathieu.delalandre.free.fr/publications/CBMI2022.pdf](http://mathieu.delalandre.free.fr/publications/CBMI2022.pdf) | Provide a detailed description of the following dataset: STVD-FC |
RGBD1K | **RGBD1K** is a benchmark for RGB-D Object Tracking which contains 1050 sequences with about 2.5M frames in total. | Provide a detailed description of the following dataset: RGBD1K |
Lindenthal Camera Traps | This data set contains 775 video sequences, captured in the wildlife park Lindenthal (Cologne, Germany) as part of the AMMOD project, using an Intel RealSense D435 stereo camera. In addition to color and infrared images, the D435 is able to infer the distance (or “depth”) to objects in the scene using stereo vision. Observed animals include various birds (during the day) and mammals such as deer, goats, sheep, donkeys, and foxes (primarily at night). A subset of 412 images is annotated with a total of 1038 individual animal annotations, including instance masks, bounding boxes, class labels, and corresponding track IDs to identify the same individual over the entire video. | Provide a detailed description of the following dataset: Lindenthal Camera Traps |
Morphosyntactic-analysis-dataset | This dataset is for evaluation of morphosyntactic analyzers. | Provide a detailed description of the following dataset: Morphosyntactic-analysis-dataset |
Wireless AI Research Dataset | Wireless AI Research Dataset is a flexible and easy-to-use dataset with realistic environments designed for various wireless AI tasks. It supports sensing tasks such as localization and environment reconstruction, MIMO tasks such as reflection systems and beamforming, and PHY tasks such as CSI feedback and channel estimation. | Provide a detailed description of the following dataset: Wireless AI Research Dataset |
Imbalanced-MiniKinetics200 | **Imbalanced-MiniKinetics200** was proposed in "Minority-Oriented Vicinity Expansion with Attentive Aggregation for Video Long-Tailed Recognition" to evaluate varying scenarios of video long-tailed recognition. Similar to CIFAR-10/100-LT, it utilizes an imbalance factor to construct long-tailed variants of the MiniKinetics200 dataset, a 200-category subset of Kinetics400.
Both the raw frames and extracted features with ResNet50/101 are provided. | Provide a detailed description of the following dataset: Imbalanced-MiniKinetics200 |
DialogCC | **DialogCC** is a large-scale multi-modal dialogue dataset, which covers diverse real-world topics and various images per dialogue. It contains 651k unique images and is designed for image and text retrieval tasks. | Provide a detailed description of the following dataset: DialogCC |
3D FRONT HUMAN | **3D FRONT HUMAN** is a dataset that extends the large-scale synthetic scene dataset 3D-FRONT. Specifically, it populates the 3D scenes with humans: non-contact humans (sequences of walking motion and standing humans) as well as contact humans (sitting, touching, and lying humans). 3D FRONT HUMAN contains four room types: 1) 5689 bedrooms, 2) 2987 living rooms, 3) 2549 dining rooms and 4) 679 libraries. We use 21 object categories for the bedrooms, 24 for the living and dining rooms, and 25 for the libraries. | Provide a detailed description of the following dataset: 3D FRONT HUMAN |
Tragic Talkers | **Tragic Talkers** is an audio-visual dataset consisting of excerpts from the "Romeo and Juliet" drama captured with microphone arrays and multiple co-located cameras for light-field video. Tragic Talkers provides ideal content for object-based media (OBM) production. It is designed to cover various conventional talking scenarios, such as monologues, two-people conversations, and interactions with considerable movement and occlusion, yielding 30 sequences captured from a total of 22 different points of view and two 16-element microphone arrays. | Provide a detailed description of the following dataset: Tragic Talkers |
Anatomy of Video Editing (AVE) | Machine learning is transforming the video editing industry. Recent advances in computer vision have leveled up video editing tasks such as intelligent reframing, rotoscoping, color grading, or applying digital makeup. However, most of the solutions have focused on video manipulation and VFX. This work introduces the Anatomy of Video Editing, a dataset and benchmark, to foster research in AI-assisted video editing. Our benchmark suite focuses on video editing tasks, beyond visual effects, such as automatic footage organization and assisted video assembly. To enable research on these fronts, we annotate more than 1.5M tags, with concepts relevant to cinematography, from 196,176 shots sampled from movie scenes. We establish competitive baseline methods and detailed analyses for each of the tasks. We hope our work sparks innovative research towards underexplored areas of AI-assisted video editing. | Provide a detailed description of the following dataset: Anatomy of Video Editing (AVE) |
RadQA | RadQA is a radiology question answering dataset with 3074 questions posed against radiology reports and annotated with their corresponding answer spans (resulting in a total of 6148 question-answer evidence pairs) by physicians. The questions are manually created using the clinical referral section of the reports that take into account the actual information needs of ordering physicians and eliminate bias from seeing the answer context (and, further, organically create unanswerable questions). The answer spans are marked within the Findings and Impressions sections of a report. The dataset aims to satisfy the complex clinical requirements by including complete (yet concise) answer phrases (which are not just entities) that can span multiple lines. | Provide a detailed description of the following dataset: RadQA |
HM3DSem | The Habitat-Matterport 3D Semantics Dataset (HM3DSem) is the largest-ever dataset of 3D real-world and indoor spaces with densely annotated semantics that is available to the academic community. HM3DSem v0.2 consists of 142,646 object instance annotations across 216 3D-spaces from HM3D and 3,100 rooms within those spaces. The HM3D scenes are annotated with the 142,646 raw object names, which are mapped to 40 Matterport categories. On average, each scene in HM3DSem v0.2 consists of 661 objects from 106 categories. This dataset is the result of 14,200+ hours of human effort for annotation and verification by 20+ annotators.
HM3DSem v0.2 is free and available for academic, non-commercial research. Researchers can use it with FAIR’s Habitat simulator to train embodied agents, such as home robots and AI assistants, at scale for semantic navigation tasks. HM3DSem v0.1 was also the basis of the recently concluded Habitat 2022 ObjectNav challenge. Please see our arXiv report for more details. | Provide a detailed description of the following dataset: HM3DSem |
HengamCorpus | HengamCorpus is a Persian corpus with temporal tags (BIO standard tagging scheme). This dataset was generated by applying HengamTagger (https://github.com/kargaranamir/parstdex) to a large number of sentences. There are two types of Persian text datasets included in these collections: formal ones (Persian Wikipedia and Hamshahri Corpus) and informal ones (Twitter and HelloKish). In the creation of HengamCorpus, to maximize the diversity of patterns for training and evaluation, samples were drawn uniformly from sets of sentences with a unique “temporal pattern profile”, i.e., the presence/absence vector of different temporal patterns within the sentence. | Provide a detailed description of the following dataset: HengamCorpus |
SWIMSEG | The SWIMSEG dataset contains 1013 images of sky/cloud patches, along with their corresponding binary segmentation maps. The ground truth annotation was done in consultation with experts from Singapore Meteorological Services. All images were captured in Singapore using WAHRSIS, a calibrated ground-based whole sky imager, over a period of 22 months from October 2013 to July 2015. Each patch covers about 60-70 degrees of the sky with a resolution of 600x600 pixels. | Provide a detailed description of the following dataset: SWIMSEG |
SWINSEG | The SWINSEG dataset contains 115 nighttime images of sky/cloud patches along with their corresponding binary ground truth maps. The ground truth annotation was done in consultation with experts from Singapore Meteorological Services. All images were captured in Singapore using WAHRSIS, a calibrated ground-based whole sky imager, over a period of 12 months from January to December 2016. All image patches are 500x500 pixels in size, and were selected considering several factors such as time of the image capture, cloud coverage, and seasonal variations. | Provide a detailed description of the following dataset: SWINSEG |
SWINySEG | The SWINySEG dataset contains 6768 daytime and nighttime images of sky/cloud patches along with their corresponding binary ground truth maps. The images in the SWINySEG dataset are taken from two of our earlier sky/cloud image segmentation datasets, SWIMSEG and SWINSEG. All images were captured in Singapore using WAHRSIS, a calibrated ground-based whole sky imager, over a period of 12 months from January to December 2016. The ground truth annotation was done in consultation with experts from Singapore Meteorological Services. | Provide a detailed description of the following dataset: SWINySEG |
MixedWM38 | The MixedWM38 (WaferMap) dataset has more than 38,000 wafer maps, including 1 normal pattern, 8 single-defect patterns, and 29 mixed-defect patterns, for a total of 38 patterns. | Provide a detailed description of the following dataset: MixedWM38 |
DeepHS Fruit v2 | The data set covers recordings of ripening fruit with labels from destructive measurements (fruit flesh firmness, sugar content and overall ripeness). The labels are provided in three categories (firmness, sweetness and overall ripeness).
Four measurement series were performed. Besides 1018 labeled recordings, the data set contains 4671 recordings without ripeness label.
The data set contains recordings of:
Avocados, Kiwis, Persimmons, Papayas, and Mangoes
Three different hyperspectral cameras were used:
Specim FX 10, INNO-SPEC Redeye 1.7, Corning microHSI 410 Vis-NIR Hyperspectral Sensor | Provide a detailed description of the following dataset: DeepHS Fruit v2 |
ImageNet-W | ImageNet-W(atermark) is a test set to evaluate models’ reliance on the newly found watermark shortcut in ImageNet, which is used to predict the *carton* class. ImageNet-W is created by overlaying transparent watermarks on the ImageNet validation set. Two metrics are used to evaluate watermark shortcut reliance: (1) IN-W Gap: the top-1 accuracy drop from ImageNet to ImageNet-W, (2) Carton Gap: carton class accuracy increase from ImageNet to ImageNet-W. Combining ImageNet-W with previous out-of-distribution variants of ImageNet (e.g., Stylized ImageNet, ImageNet-R, ImageNet-9) forms a comprehensive suite of multi-shortcut evaluation on ImageNet. | Provide a detailed description of the following dataset: ImageNet-W |
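The two shortcut metrics above are plain accuracy differences. A minimal sketch of how they might be computed once a model has been evaluated on both test sets; the function name, argument names, and example numbers are ours, not from the ImageNet-W release:

```python
def imagenet_w_gaps(top1_in, top1_inw, carton_acc_in, carton_acc_inw):
    """Watermark-shortcut metrics as defined above (all inputs in %)."""
    in_w_gap = top1_in - top1_inw                # top-1 accuracy drop, IN -> IN-W
    carton_gap = carton_acc_inw - carton_acc_in  # accuracy gain on the carton class
    return in_w_gap, carton_gap

# Hypothetical numbers: a model losing 10 points overall while gaining
# 26 points on "carton" scores (10.0, 26.0); larger gaps mean stronger reliance.
print(imagenet_w_gaps(76.1, 66.1, 40.0, 66.0))
```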
UrbanCars | UrbanCars facilitates multi-shortcut learning under the controlled setting with two shortcuts—background and co-occurring object. The task is classifying the car body type into two categories: *urban* car and *country* car. The dataset contains three splits: training, validation, and testing. In the training set, two shortcuts spuriously correlate with the car body type. Both validation and testing sets are balanced, i.e., no spurious correlations. The validation set is used for model selection, and the testing set evaluates the mitigation of two shortcuts. | Provide a detailed description of the following dataset: UrbanCars |
REAP | **REAP** is a digital benchmark that allows the user to evaluate patch attacks on real images under real-world conditions. Built on top of the Mapillary Vistas dataset, the benchmark contains over 14,000 traffic signs. Each sign is augmented with a pair of geometric and lighting transformations, which can be used to apply a digitally generated patch realistically onto the sign. | Provide a detailed description of the following dataset: REAP |
BeautyFace | **BeautyFace** is a dataset containing 3,000 high-quality face images at a higher resolution of 512×512, covering more recent makeup styles and more diverse face poses, backgrounds, expressions, races, and illumination. Each face has an annotated parsing map. | Provide a detailed description of the following dataset: BeautyFace |
Accidental Turntables | **Accidental Turntables** contains a challenging set of 41,212 images of cars with cluttered backgrounds, motion blur, and illumination changes, serving as a benchmark for 3D pose estimation. | Provide a detailed description of the following dataset: Accidental Turntables |
ScanEnts3D | **Scan Entities in 3D** (**ScanEnts3D**) is a large-scale dataset which provides explicit correspondences between 369k objects across 84k natural referential sentences, covering 705 real-world scenes. | Provide a detailed description of the following dataset: ScanEnts3D |
VOST | **VOST** consists of more than 700 high-resolution videos, captured in diverse environments, which are 20 seconds long on average and densely labeled with instance masks. A careful, multi-step approach is adopted to ensure that these videos focus on complex transformations, capturing their full temporal extent. | Provide a detailed description of the following dataset: VOST |
MAPS-KB | **MAPS-KB** is a million-scale probabilistic simile knowledge base, covering 4.3 million triplets over 0.4 million terms from 70 GB of corpora. It is designed for the tasks of simile detection and component extraction. | Provide a detailed description of the following dataset: MAPS-KB |
FLAG3D | **FLAG3D** is a large-scale 3D fitness activity dataset with language instructions, containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human pose captured from an advanced MoCap system to handle complex activities and large movements, 2) detailed and professional language instructions describing how to perform a specific activity, 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. | Provide a detailed description of the following dataset: FLAG3D |
Selection from FFHQ & StyleGAN2:FFHQ (used in "Testing Human Ability To Detect Deepfake Images of Human Faces" study) | This dataset is the image stimulus pool of 50 deepfake and 50 real images, used for the experiment in the study titled "Testing Human Ability To Detect Deepfake Images of Human Faces".
Images were obtained through random selection from the Flickr Faces High Quality dataset (https://paperswithcode.com/dataset/ffhq) and likewise from output of the StyleGAN2 algorithm (https://paperswithcode.com/method/stylegan2) as trained on the FFHQ dataset.
Also included are the 20 deepfake images, similarly obtained (from StyleGAN2:FFHQ), which were used in the familiarization intervention in the study; and the 20 deepfake images (different images but similarly obtained, albeit with an element of curation to select for images with the specific "tell-tale signs" / "visible rendering artefacts") which were used in the advice intervention in the study.
The 50 real and 50 deepfake images that were used as test stimuli in the experiment are in /real and /fake respectively; the 20 familiarization images are in /familiarization; and the 20 images used in the advice intervention are in /advice. | Provide a detailed description of the following dataset: Selection from FFHQ & StyleGAN2:FFHQ (used in "Testing Human Ability To Detect Deepfake Images of Human Faces" study) |
MOPRD | **MOPRD** is a multidisciplinary open peer review dataset consisting of paper metadata, multiple-version manuscripts, review comments, meta-reviews, authors' rebuttal letters, and editorial decisions from 6578 papers. | Provide a detailed description of the following dataset: MOPRD |
VASR | **Visual Analogies of Situation Recognition (VASR)** is a dataset for visual analogical mapping, adapting the classical word-analogy task to the visual domain. It contains 196K object transitions and 385K activity transitions. Experiments demonstrate that state-of-the-art models do well when distractors are chosen randomly (~86%), but struggle with carefully chosen distractors (~53%, compared to 90% human accuracy). | Provide a detailed description of the following dataset: VASR |
Lombardia Sentinel-2 Image Time Series for Crop Mapping | Usually, the information on the crop types grown in a given territory is annual: we only know the main crop grown over a year, and we do not know the sequence of crops that may have followed one another during the year, nor when a particular crop is sown and when it is harvested.
The main objective of this dataset is to create the basis for experimenting with suitable solutions that give a reliable answer to the above questions, or to propose models capable of producing dynamic segmentation maps that show when a crop begins to grow and when it is harvested, and consequently whether more than one crop has been grown in a territory within a year.
In this dataset, we have 20 coverage classes as ground-truth values provided by Regione Lombardia.
The mapping of the class labels used (see file lombardia-classes/classes25pc.txt) brings together some classes and provides the time intervals within which that category grows.
The last two columns of the following table are respectively the date (month-day) of the start and end of the interval in which the class is visible during the construction of our dataset. | Provide a detailed description of the following dataset: Lombardia Sentinel-2 Image Time Series for Crop Mapping |
Text2shape | A large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. | Provide a detailed description of the following dataset: Text2shape |
ShapeGlot | **ShapeGlot** is a dataset introduced in the paper "ShapeGlot: Learning Language for Shape Differentiation", containing referential utterances that distinguish between similar 3D shapes. | Provide a detailed description of the following dataset: ShapeGlot |
TbV Dataset | The TbV dataset is a large-scale dataset created to allow the community to improve the state of the art in machine learning tasks related to mapping that are vital for self-driving.
- Over 1000 scenarios ("logs") captured by a fleet of autonomous vehicles.
- 200 logs include real-world lane geometry or crosswalk changes, where an HD map has become stale.
- Each log represents a continuous observation of a scene around a self-driving vehicle.
- On average, each scenario is 54 seconds in duration. Each scenario has an HD map representing lane boundaries, crosswalks, drivable area, and a raster map of ground height at 0.3 meter resolution.
- Captured across 4 seasons in six diverse cities (Austin, TX; Detroit, MI; Miami, FL; Palo Alto, CA; Pittsburgh, PA; and Washington, D.C.).
- Includes 559.4K LiDAR Sweeps.
- Includes 7.8M Images.
- 15.5 hours of driving data.
- 180 miles of driving (by the ego-vehicle). | Provide a detailed description of the following dataset: TbV Dataset |
BRACE | BRACE is a dataset for audio-conditioned dance motion synthesis challenging common assumptions for this task:
- strong music-dance correlation
- controlled motion data
- simple poses and movements
To address these issues:
- We focus on breakdancing which features acrobatic moves, tangled postures and weaker dance-music correlation.
- We adopt a hybrid labelling pipeline leveraging estimation models as well as manual annotations to obtain good quality keypoint sequences at a reduced cost.
- Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
- BRACE is also useful to fine-tune pose-estimation models thanks to its high quality keypoint annotations for complicated and uncommon poses. | Provide a detailed description of the following dataset: BRACE |
THU-FVFDT | **THU-FVFDT** is a dataset containing raw finger vein and finger dorsal texture images of 220 different subjects. Images were captured in two different sessions separated by an interval of a few dozen seconds. One session is for training and the other for testing. Four finger vein images and four finger dorsal texture images were captured simultaneously in each session. We offer only one of the four images, as there is almost no difference between them. The size of raw images is 720×576 pixels. | Provide a detailed description of the following dataset: THU-FVFDT |
2D_NACA_RANS | Dataset of low fidelity resolutions of the RANS equations over airfoils. | Provide a detailed description of the following dataset: 2D_NACA_RANS |
V3C | The Vimeo Creative Commons Collection, in short V3C, is a collection of 28,450 videos (with an overall length of about 3,800 hours) published under a Creative Commons license on Vimeo. V3C comes with a shot segmentation for each video, together with the resulting keyframes in original as well as reduced resolution, and additional metadata. It is intended to be used from 2019 onward in the international large-scale TREC Video Retrieval Evaluation campaign (TRECVid). | Provide a detailed description of the following dataset: V3C |
MapAI Dataset | # MapAI: Precision in Building Segmentation Dataset
The dataset comprises 7500 training images and 1500 validation images from Denmark. The test dataset is split into two tasks: the first task (1368 images) is to segment buildings using only aerial images, while the second task (978 images) allows the use of both aerial images and lidar data. All data samples have a resolution of 500x500. The aerial images are RGB images, while the lidar data are rasterized. The ground truth masks have two classes: building and background. | Provide a detailed description of the following dataset: MapAI Dataset |
MiST | **MiST** (Modals In Scientific Text) is a dataset containing 3737 modal instances in five scientific domains annotated for their semantic, pragmatic, or rhetorical function. | Provide a detailed description of the following dataset: MiST |
BGVP | BG Vulnerable Pedestrian (BGVP) is a dataset intended to help train well-rounded models and thus encourage research that increases the efficacy of vulnerable pedestrian detection. The dataset contains 2,000 images with 5,932 bounding box instances from four categories, i.e., Children Without Disability, Elderly without Disability, With Disability, and Non-Vulnerable. | Provide a detailed description of the following dataset: BGVP |
OIVIO | It consists of 36 sequences, recorded in mines, tunnels, and other dark environments, totaling more than 145 minutes of stereo camera video and IMU data. In each sequence, the scene is illuminated by an onboard light of approximately 1350, 4500, or 9000 lumens. We accommodate both direct and indirect VIO methods by providing the geometric and photometric camera calibrations. The full dataset, including sensor data, calibration sequences, and evaluation scripts, is available for download. | Provide a detailed description of the following dataset: OIVIO |
UMA-VI Dataset | The dataset contains 32 sequences for the evaluation of VI motion estimation methods, totalling ∼80 min of data. The dataset covers challenging conditions (mainly illumination changes and low-textured environments) in different degrees and a wide range of scenarios (including corridors, parking lots, classrooms, halls, etc.) from two different buildings at the University of Malaga. In general, we provide at least two different sequences within the same scenario, with different illumination conditions or following different trajectories. All sequences were recorded with our VI sensor handheld, except a few that were recorded while mounted in a car. | Provide a detailed description of the following dataset: UMA-VI Dataset |
AIROGS | The Rotterdam EyePACS AIROGS dataset (in full, i.e., including train and test) contains 113,893 color fundus images from 60,357 subjects and approximately 500 different sites with a heterogeneous ethnicity. | Provide a detailed description of the following dataset: AIROGS |
CRCDX | Histological images of colorectal cancer, derived from the TCGA database. | Provide a detailed description of the following dataset: CRCDX |
FreCDo | **FreCDo** is a corpus for French dialect identification comprising 413,522 French text samples collected from public news websites in Belgium, Canada, France and Switzerland. | Provide a detailed description of the following dataset: FreCDo |
Objaverse | **Objaverse** is a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present day 3D repositories in terms of scale, number of categories, and in the visual diversity of instances within a category. | Provide a detailed description of the following dataset: Objaverse |
multiRAW | To encourage reproducible research, a labeled MultiRAW dataset containing >7k RAW images acquired using multiple camera sensors is made publicly accessible for RAW-domain processing.
It provides:
* RAW images from four cameras
* corresponding RGB images
* detection labels
* segmentation labels | Provide a detailed description of the following dataset: multiRAW |
ROSCOE | **ROSCOE** is a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics. | Provide a detailed description of the following dataset: ROSCOE |
Robust Summarization Evaluation Benchmark | **Robust Summarization Evaluation Benchmark** is a large human evaluation dataset consisting of over 22k summary-level annotations over state-of-the-art systems on three datasets. | Provide a detailed description of the following dataset: Robust Summarization Evaluation Benchmark |
SMACv2 | **SMACv2** (StarCraft Multi-Agent Challenge v2) is a new version of the benchmark where scenarios are procedurally generated and require agents to generalise to previously unseen settings (from the same distribution) during evaluation. | Provide a detailed description of the following dataset: SMACv2 |
TBBR Raw | This dataset contains the raw images for the Thermal Bridges on Building Rooftops (TBBR) dataset.
It comprises 5696 drone images (2848 RGB and 2848 thermal) of building rooftops, recorded with a normal (RGB) camera and a FLIR-XT2 (thermal) camera on a DJI M600 drone. They show six large building blocks of around 20 buildings per block, recorded in the city centre of the German city of Karlsruhe, east of the market square. Because of the high overlap rate of the images, each building is on average recorded about 20 times, from different angles in different images.
All images were recorded during a drone flight on March 19, 2019 from 7 a.m. to 8 a.m. At that time, temperatures were between 3.78 °C and 4.97 °C, and humidity between 80% and 98%. There was no rain on the day of the flight, but 2.3 mm/m² of rain had fallen in the preceding 48 hours. For recording the thermographic images, an emissivity of 1.0 was set. The global radiation during this period was between 38.59 W/m² and 120.86 W/m². No direct sunlight is visible on any of the recordings. | Provide a detailed description of the following dataset: TBBR Raw |
MuReD Dataset | Early detection of retinal diseases is one of the most important means of preventing partial or permanent blindness in patients. One of the major stumbling blocks for manual retinal examination is the lack of a sufficient number of qualified medical personnel per capita to diagnose diseases. Computer-aided diagnosis (CAD) systems have proven to be very effective in helping physicians reduce the time taken to make a diagnosis and minimize variability in image interpretation. Still, they are not flexible enough to accommodate the simultaneous presence of multiple retinal diseases, which is a common situation in real-world applications. In the past years, a few datasets that focus on the classification of numerous retinal pathologies present at the same time, i.e., multi-label classification, have been proposed, but they all share some problems, such as a narrow range of pathologies to classify, a high level of class imbalance, a low number of samples for the underrepresented labels, and no assurance of image quality, among others. All these problems hinder the performance of any model trained on these datasets, leading to poor robustness, lack of generalization, and reduced trustability of its predictions.
To address these problems, we constructed the Multi-Label Retinal Diseases (MuReD) dataset, using images collected from three different state-of-the-art sources, i.e., ARIA, STARE, and RFMiD datasets, and performing a sequence of post-processing steps to ensure the quality of the images, a wide range of diseases to classify, and a sufficient number of samples per disease label.
The MuReD dataset consists of 2208 images with 20 different labels, with varying image quality and resolution, while ensuring a minimal degree of quality in the data and a sufficient number of samples per label. To the best of our knowledge, the MuReD dataset is the only publicly available dataset that applies a sequence of post-processing steps to ensure the quality of the images, the variety of pathologies, and the number of samples per label, resulting in increased data quality and a significant reduction of the class imbalance present in the publicly available datasets.
It is envisaged that the MuReD dataset will enable the creation of more robust, general, and trustable models for the automatic detection and classification of retinal diseases. | Provide a detailed description of the following dataset: MuReD Dataset |
FETA Car-Manuals | The **FETA** benchmark focuses on text-to-image and image-to-text retrieval in public car manuals and sales catalogue brochures. The FETA Car-Manuals dataset consists of a total of 349 PDF documents from 5 car manufacturers, namely Nissan, Toyota, Mazda, Renault, and Chevrolet. | Provide a detailed description of the following dataset: FETA Car-Manuals |
FETA IKEA | The **FETA** benchmark focuses on text-to-image and image-to-text retrieval in public documents such as car manuals and sales catalogue brochures. The FETA IKEA dataset contains 26 documents with 7366 pages in total, approximately 9574 images, and 23927 texts automatically extracted from those pages. | Provide a detailed description of the following dataset: FETA IKEA |
Verifee | **Verifee** is a dataset of news articles with fine-grained trustworthiness annotations. It contains over 10,000 unique articles from almost 60 Czech online news sources. These are categorized into one of the 4 classes across the credibility spectrum we propose, ranging from entirely trustworthy articles all the way to the manipulative ones. | Provide a detailed description of the following dataset: Verifee |
Werewolf Among Us | **Werewolf Among Us** is a multimodal dataset for modeling persuasion behaviors. It contains 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. | Provide a detailed description of the following dataset: Werewolf Among Us |
PulseImpute | **PulseImpute** is a benchmark for Pulsative Physiological Signal Imputation which includes realistic mHealth missingness models, an extensive set of baselines, and clinically-relevant downstream tasks. It contains 440,953 100 Hz 5-minute ECG waveforms from 32,930 patients. | Provide a detailed description of the following dataset: PulseImpute |
Oxford Ontology Library | The ontology files, readme and statistical information can be found and browsed in the [ontology library](http://krr-nas.cs.ox.ac.uk/ontologies/lib). Because many of the ontologies make use of imports, we have "localised" the ontologies by parsing them, resolving and parsing all imports, merging the main and imported ontologies together, and re-serialising the ontology, all using the OWL API. The original main ontology and the imported ontologies are saved in the "sources/" directory.
Each localised ontology is assigned a unique ID and is accompanied by a hard link. All the hard links are stored [here](http://krr-nas.cs.ox.ac.uk/ontologies/UID). The [readme](http://krr-nas.cs.ox.ac.uk/ontologies/readme.htm) contains some information on all the localised ontologies: their IDs, ontology file directories, and statistics. | Provide a detailed description of the following dataset: Oxford Ontology Library |
PSI-AVA | **PSI-AVA** is a dataset designed for holistic surgical scene understanding. It contains approximately 20.45 hours of the surgical procedure performed by three expert surgeons and annotations for both long-term (Phase and Step recognition) and short-term reasoning (Instrument detection and novel Atomic Action recognition) in robot-assisted radical prostatectomy videos. | Provide a detailed description of the following dataset: PSI-AVA |
CA4P-483 | **CA4P-483** is a dataset designed to facilitate the sequence labeling tasks and regulation compliance identification between privacy policies and software. It contains 483 Chinese Android application privacy policies, over 11K sentences, and 52K fine-grained annotations. | Provide a detailed description of the following dataset: CA4P-483 |
Hansel | Hansel is a human-annotated Chinese entity linking (EL) dataset, focusing on tail entities and emerging entities:
- The test set contains few-shot (FS) and zero-shot (ZS) slices, has 10K examples, and uses Wikidata as the corresponding knowledge base; it is useful for testing Chinese/multilingual EL systems' generalization ability to tail and emerging entities.
- The training and validation sets are from Wikipedia hyperlinks, useful for large-scale pretraining of Chinese EL systems. | Provide a detailed description of the following dataset: Hansel |
OASum | **OASum** is a large-scale open-domain aspect-based summarization dataset which contains more than 3.7 million instances with around 1 million different aspects on 2 million Wikipedia pages. | Provide a detailed description of the following dataset: OASum |
TAS-NIR | **TAS-NIR** is a VIS+NIR dataset of semantically annotated images in unstructured outdoor environments. It consists of 209 VIS+NIR image pairs with a fine-grained semantic segmentation. | Provide a detailed description of the following dataset: TAS-NIR |
E-NER | **E-NER** is a publicly available legal Named Entity Recognition (NER) data set. It contains 52 filings from the US SEC EDGAR database. The named entity tags are hand annotated. | Provide a detailed description of the following dataset: E-NER |
SimpEvalASSET | **SimpEvalASSET** is a dataset for training learnable metrics using modern language models. It comprises 12K human ratings on 2.4K simplifications from 24 systems, and SIMPEVAL_2022, a challenging simplification benchmark consisting of over 1K human ratings of 360 simplifications, including generations from GPT-3.5. | Provide a detailed description of the following dataset: SimpEvalASSET |
ChartQA | Charts are very popular for analyzing data. When exploring charts, people often ask a variety of complex reasoning questions that involve several logical and arithmetic operations. They also commonly refer to visual features of a chart in their questions. However, most existing datasets do not focus on such complex reasoning questions as their questions are template-based and answers come from a fixed vocabulary. In this work, we present a large-scale benchmark covering 9.6K human-written questions as well as 23.1K questions generated from human-written chart summaries. To address the unique challenges in our benchmark involving visual and logical reasoning over charts, we present two transformer-based models that combine visual features and the data table of the chart in a unified way to answer questions. While our models achieve the state-of-the-art results on the previous datasets as well as on our benchmark, the evaluation also reveals several challenges in answering complex reasoning questions. | Provide a detailed description of the following dataset: ChartQA |
FedTADBench | **FedTADBench** is a federated time series anomaly detection benchmark. It covers 5 time series anomaly detection algorithms, 4 federated learning frameworks, and 3 time series anomaly detection datasets. | Provide a detailed description of the following dataset: FedTADBench |
CAP-DATA | **CAP-DATA** is a large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames together with labeled fact-effect-reason-introspection description and temporal accident frame label. It can support many useful tasks for accident inference, such as accident detection and prediction (AccidentDet/Pre), causal inference of accident (Accident-Causal), accident classification (Accident-Cla), text-video based accident retrieval (Accident-Retri), and question answering in an accident (Accident-QA) of the driving scene. | Provide a detailed description of the following dataset: CAP-DATA |
JEMMA | **JEMMA** (an Extensible Java Dataset for ML4Code Applications) is a large-scale dataset targeted at ML4Code. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. | Provide a detailed description of the following dataset: JEMMA |
MPV | Consists of 37,723/14,360 person/clothes images, with a resolution of 256x192. Each person has different poses. We split them into train/test sets of 52,236/10,544 three-tuples, respectively. You can download the dataset at [MPV (Google Drive)](https://drive.google.com/file/d/1Vmc-n8I4jh3wppSbaisoeiZDtoWEUY27/view?usp=sharing) | Provide a detailed description of the following dataset: MPV |
SPARF | **SPARF** is a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of ~17 million images rendered from nearly 40,000 shapes at high resolution (400×400 pixels). | Provide a detailed description of the following dataset: SPARF |
Cards Against Humanity | A dataset of games played in the card game "Cards Against Humanity" (CAH), by human players, derived from the online CAH labs.
Each round includes the cards presented to users: a "black" prompt with a blank or question and 10 "white" punchlines as possible responses, together with which punchline was picked by the player in each round, along with text and metadata.
An example prompt is “TSA guidelines now prohibit ___ on airplanes”. Candidate punchlines are “Goblins”, “BATMAN!!!”, “Poor people”, and “The right amount of cocaine”. Importantly, many cards are offensive or politically incorrect.
Used to explore human humor preferences.
Available upon request from CAH labs: mail@cardsagainsthumanity.com
Train/test splits and data processing available in the paper/code: "Cards Against AI: Predicting Humor in a Fill-in-the-blank Party Game": https://github.com/ddofer/CAH | Provide a detailed description of the following dataset: Cards Against Humanity |
Cityscapes-DVPS | Cityscapes-DVPS is derived from Cityscapes-VPS by adding re-computed depth maps from the Cityscapes dataset. Cityscapes-DVPS is distributed under a Creative Commons Attribution-NonCommercial-ShareAlike license. | Provide a detailed description of the following dataset: Cityscapes-DVPS |
SemKITTI-DVPS | SemKITTI-DVPS is derived from the SemanticKITTI dataset, which is based on the odometry dataset of the KITTI Vision benchmark. SemanticKITTI provides perspective images and panoptic-labeled 3D point clouds. To convert it for DVPS, we project the 3D point clouds onto the image plane and name the derived dataset SemKITTI-DVPS. SemKITTI-DVPS is distributed under a Creative Commons Attribution-NonCommercial-ShareAlike license. | Provide a detailed description of the following dataset: SemKITTI-DVPS |
Berlin V2X | The Berlin V2X dataset offers high-resolution GPS-located wireless measurements across diverse urban environments in the city of Berlin for both cellular and sidelink radio access technologies, acquired with up to 4 cars over 3 days. The data thus enables a variety of different ML studies towards vehicle-to-anything (V2X) communication.
The data includes information on
* physical layer parameters (such as signal strength and signal quality)
* cellular radio resource management like cell identity, carrier aggregation and assigned resource blocks
* wireless Quality of Service (QoS) like delay and throughput (for cellular) or packet error rate (for sidelink)
* positioning information.
The datasets are labelled and pre-filtered for fast onboarding and applicability. The measurement methodology is geared towards applications of Machine Learning (ML) to tasks such as QoS prediction, transfer learning, proactive radio resource allocation, or link selection, among others. | Provide a detailed description of the following dataset: Berlin V2X |
lilGym | **lilGym** is a benchmark for language-conditioned reinforcement learning in visual environments, based on 2,661 highly-compositional human-written natural language statements grounded in an interactive visual environment. Each statement is paired with multiple start states and reward functions to form thousands of distinct Markov Decision Processes of varying difficulty. | Provide a detailed description of the following dataset: lilGym |
Autonomous-driving Streaming Perception Benchmark | The **Autonomous-driving StreAming Perception** (ASAP) benchmark evaluates the online performance of vision-centric perception in autonomous driving. It extends the 2Hz annotated nuScenes dataset by generating high-frame-rate labels for the 12Hz raw images. | Provide a detailed description of the following dataset: Autonomous-driving Streaming Perception Benchmark |
NusaCrowd | **NusaCrowd** is a collaborative initiative to collect and unite existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, the authors have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. | Provide a detailed description of the following dataset: NusaCrowd |
PopQA | **PopQA** is an open-domain QA dataset with 14k QA pairs with fine-grained Wikidata entity ID, Wikipedia page views, and relationship type information. | Provide a detailed description of the following dataset: PopQA |
RegDB-C* | RegDB-C* is an evaluation set that consists of algorithmically generated corruptions applied to the RegDB test set, to both the visible and the thermal data. In comparison with the RegDB-C dataset proposed by Chen et al. in the "Benchmarks for Corruption Invariant Person Re-identification" paper, our dataset is used in a multimodal manner and does not consider visible-data corruptions only. The corruptions used are globally the same. Noise: Gaussian, shot, impulse, and speckle; blur: defocus, frosted glass, motion, zoom, and Gaussian; weather: snow, frost, fog, brightness, spatter, and rain; digital: contrast, elastic, pixel, JPEG compression, and saturate. However, the corruptions are adapted to respect the thermal modality encoding, and brightness is not used to corrupt the thermal data. Five severity levels are considered per corruption. | Provide a detailed description of the following dataset: RegDB-C* |
SYSU-MM01-C* | SYSU-MM01-C* is an evaluation set that consists of algorithmically generated corruptions applied to the SYSU-MM01 test set, to both the visible and the thermal data. In comparison with the SYSU-MM01-C dataset proposed by Chen et al. in the "Benchmarks for Corruption Invariant Person Re-identification" paper, our dataset is used in a multimodal manner and does not consider visible-data corruptions only. The corruptions used are globally the same. Noise: Gaussian, shot, impulse, and speckle; blur: defocus, frosted glass, motion, zoom, and Gaussian; weather: snow, frost, fog, brightness, spatter, and rain; digital: contrast, elastic, pixel, JPEG compression, and saturate. However, the corruptions are adapted to respect the thermal modality encoding, and brightness is not used to corrupt the thermal data. Five severity levels are considered per corruption. | Provide a detailed description of the following dataset: SYSU-MM01-C* |
ThermalWORLD-C* | ThermalWORLD-C* is an evaluation set that consists of algorithmically generated corruptions applied to the ThermalWORLD test set, to both the visible and the thermal data. In comparison with the corruption approach proposed by Chen et al. in the "Benchmarks for Corruption Invariant Person Re-identification" paper, our dataset is used in a multimodal manner and does not consider visible-data corruptions only. The corruptions used are globally the same. Noise: Gaussian, shot, impulse, and speckle; blur: defocus, frosted glass, motion, zoom, and Gaussian; weather: snow, frost, fog, brightness, spatter, and rain; digital: contrast, elastic, pixel, JPEG compression, and saturate. However, the corruptions are adapted to respect the thermal modality encoding, and brightness is not used to corrupt the thermal data. Five severity levels are considered per corruption. | Provide a detailed description of the following dataset: ThermalWORLD-C* |
Synthetic Federated Quantum Sensing Dataset | This is the first federated quantum dataset in the literature. | Provide a detailed description of the following dataset: Synthetic Federated Quantum Sensing Dataset |
MMBody | The MMBody dataset provides human body data with motion capture, GT mesh, Kinect RGBD, and millimeter wave sensor data. See [homepage](https://chen3110.github.io/mmbody/index.html) for more details.
To download the dataset, please send us an e-mail (anjunchen@zju.edu.cn) including contact details (title, full name, organization, and country) and the purpose for downloading the dataset. Important note for students and post-docs: we hope to know the contact details of your academic supervisor. By sending the e-mail you accept the following terms and conditions. | Provide a detailed description of the following dataset: MMBody |
Indian Party Symbol Dataset | There was no predefined dataset of party symbols to be used as a benchmark. We curated a dataset from various national and regional websites owned by the ECI. The dataset consists of symbols (image files) of 49 National and State registered parties approved by the ECI. For each image of the original party symbol, 18 different distortions and transformations were created as variations to the training data. Each image is of the dimension 180 x 180. The final labeled dataset consists of 931 images of party symbols with their corresponding party names as the labels. | Provide a detailed description of the following dataset: Indian Party Symbol Dataset |
UCR Anomaly Archive | The UCR Anomaly Archive is a collection of 250 univariate time series collected in human medicine, biology, meteorology and industry. The collected time series contain a few natural anomalies, though the majority of the anomalies are artificial. The dataset was first used in an anomaly detection contest preceding the ACM SIGKDD conference in 2021.
Each of the time series contains exactly one, occasionally subtle anomaly after a given time stamp. The data before that timestamp can be considered normal.
The time series collected in the UCR Anomaly Archive can be categorized into 12 types originating from the four domains human medicine, meteorology, biology and industry. The distribution across the domains is highly imbalanced, with around 64% of the time series being collected in human medicine applications, 22% in biology, 9% in industry, and 5% being air temperature measurements. The time series within a single type (e.g. ECG) are not completely unique, but differ in terms of injected anomalies or a modification of the original time series through added Gaussian noise and wandering baselines.
The downloadable archive contains, among other supplemental material, a set of slides explaining the injected anomalies with examples. | Provide a detailed description of the following dataset: UCR Anomaly Archive |
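Because every series in the archive is guaranteed to be anomaly-free before its given timestamp, a common way to consume it is to split each series at that cutoff into a normal training portion and a test portion containing the single anomaly. A minimal sketch of that convention in Python; the file name and cutoff value below are placeholders, not taken from the archive:

```python
import numpy as np

def split_ucr_series(values: np.ndarray, train_end: int):
    """Split one UCR Anomaly Archive series at the given cutoff index.

    Everything before `train_end` can be treated as normal training data;
    the single (possibly subtle) anomaly lies somewhere in the remainder.
    """
    return values[:train_end], values[train_end:]

series = np.loadtxt("some_ucr_series.txt")  # placeholder file name
train, test = split_ucr_series(series, train_end=2500)  # placeholder cutoff
```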
CropAndWeed Dataset | The CropAndWeed dataset is focused on the fine-grained identification of 74 relevant crop and weed species with a strong emphasis on data variability. Annotations of labeled bounding boxes, semantic masks and stem positions are provided for about 112k instances in more than 8k high-resolution images of both real-world agricultural sites and specifically cultivated outdoor plots of rare weed types. Additionally, each sample is enriched with meta-annotations regarding environmental conditions. | Provide a detailed description of the following dataset: CropAndWeed Dataset |
ADVETA | **ADVErsarial Table perturbAtion (ADVETA)** is a robustness evaluation benchmark featuring natural and realistic ATPs. It is based on three mainstream Text-to-SQL datasets, Spider, WikiSQL and WTQ. | Provide a detailed description of the following dataset: ADVETA |
SODA | **SODA** is a high-quality social dialogue dataset. In contrast to most existing crowdsourced, small-scale dialogue corpora, SODA distills 1.5M socially-grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x). | Provide a detailed description of the following dataset: SODA |
Naamapadam | **Naamapadam** is a Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. For 9 of the 11 languages, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization). The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian-language sentence. | Provide a detailed description of the following dataset: Naamapadam |
pursuitMW | Multi-agent pursuit in matrix world (pursuitMW) is a partially observable Markov game (POMG) between a swarm of pursuers and a swarm of evaders. Algorithms can be developed for the pursuers, evaders, or both of them. | Provide a detailed description of the following dataset: pursuitMW |
HNEI diagnosis dataset | This dataset contains more than 700,000 unique voltage vs. capacity curves for training Artificial Intelligence (AI) systems for lithium-ion battery diagnosis and prognosis. It was calculated using the mechanistic modeling approach. See "Big data training data for artificial intelligence-based Li-ion diagnosis and prognosis" (Journal of Power Sources, Volume 479, 15 December 2020, 228806) and "Analysis of Synthetic Voltage vs. Capacity Datasets for Big Data Li-ion Diagnosis and Prognosis" (Energies, under review) for more details.
This dataset was compiled with a resolution of 0.01 for the triplets and C/25 charges. This accounts for more than 5,000 different paths. Each path was simulated with at most 0.85% increases for each degradation mode. | Provide a detailed description of the following dataset: HNEI diagnosis dataset |
CHAIRS dataset | **CHAIRS** is a large-scale motion-captured f-AHOI dataset, consisting of 17.3 hours of versatile interactions between 46 participants and 81 articulated and rigid sittable objects. CHAIRS provides 3D meshes of both humans and articulated objects during the entire interactive process, as well as realistic and physically plausible full-body interactions. | Provide a detailed description of the following dataset: CHAIRS dataset |
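Every record above follows the same three-field schema (`dataset_name`, `description`, `prompt`). A minimal sketch of iterating over such records with the Hugging Face `datasets` library, assuming the collection is hosted on the Hub; the repository id below is a placeholder, not the real identifier:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub identifier.
ds = load_dataset("someuser/dataset-descriptions", split="train")

for row in ds.select(range(3)):  # peek at the first three records
    print(row["dataset_name"])   # e.g. "VISEM-Tracking"
    print(row["prompt"])         # "Provide a detailed description of ..."
    print(row["description"][:100], "...")
```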