dataset_name | description | prompt |
|---|---|---|
Argoverse 2 | Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license. | Provide a detailed description of the following dataset: Argoverse 2 |
MiSCS | Microscopy images of shrub cross sections for instance segmentation of tree rings.
Tree rings are used in dendroecology to reconstruct past climate. Shrubs are of special importance for climate reconstruction in the Arctic, as they are the only plants with tree rings that can grow there. From a computer vision point of view, the task of detecting shrub tree rings in microscopy images is a special case of the instance segmentation problem, with several unique challenges such as the concentric ring shape of the objects.
This dataset provides 213 high-resolution microscopy images split into 3 subsets according to species for further advancements in this area. | Provide a detailed description of the following dataset: MiSCS |
ChatGPT Paraphrases | This is a dataset of paraphrases created by ChatGPT.
**We used this prompt to generate paraphrases:**
Generate 5 similar paraphrases for this question, show it like a numbered list without commentaries: *{text}*
This dataset is based on questions from the [Quora Question Pairs](https://www.kaggle.com/competitions/quora-question-pairs) competition, texts from [SQuAD 2.0](https://huggingface.co/datasets/squad_v2), and the [CNN news dataset](https://huggingface.co/datasets/cnn_dailymail).
We generated 5 paraphrases for each sample, so the dataset has about 350k rows in total. Each row yields 30 ordered pairs, because the original text plus its 5 paraphrases form 6 texts and 6 × 5 = 30. Across the whole dataset this gives roughly 10.5 million bidirectional training pairs (30 × 350,000), or 6 × 5 × 350,000 / 2 = 5.25 million unique pairs, as sketched below.
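A minimal sketch of this pair expansion (assuming each row exposes the `text` and `paraphrases` columns listed in the structure below; the example values are hypothetical):
```python
from itertools import combinations, permutations

def row_to_pairs(row, directed=True):
    """Expand one dataset row into paraphrase training pairs.

    Assumes a row with a `text` string and a `paraphrases` list of 5 strings.
    """
    texts = [row["text"]] + list(row["paraphrases"])   # 6 texts per row
    if directed:
        return list(permutations(texts, 2))             # 6 * 5 = 30 ordered pairs
    return list(combinations(texts, 2))                 # 15 unique pairs

row = {"text": "How do planes fly?",
       "paraphrases": ["What makes planes fly?", "How can airplanes fly?",
                       "Why are planes able to fly?", "How does a plane stay in the air?",
                       "What allows a plane to fly?"]}
print(len(row_to_pairs(row)))                  # 30 -> ~10.5M pairs over 350k rows
print(len(row_to_pairs(row, directed=False)))  # 15 -> ~5.25M unique pairs
```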
**We used:**
- 231927 questions from the Quora dataset
- 92005 texts from the Squad 2.0 dataset
- 29110 texts from the CNN news dataset
**Structure of the dataset:**
- text column - an original sentence or question from the datasets
- paraphrases - a list of 5 paraphrases
- category - question / sentence
- source - quora / squad_2 / cnn_news | Provide a detailed description of the following dataset: ChatGPT Paraphrases |
BUAA-MIHR dataset | The BUAA-MIHR dataset is a remote photoplethysmography (rPPG) dataset for evaluating rPPG pipelines under multi-illumination conditions. We recruited 15 healthy subjects (12 male, 3 female, 18 to 30 years old) for this experiment, and a total of 165 video sequences were recorded under various illuminations. The experiments were conducted in a darkroom in order to isolate the recordings from ambient light. | Provide a detailed description of the following dataset: BUAA-MIHR dataset |
AISIA-VN-Review-S | For the AISIA-VN-Review-S and AISIA-VN-Review-F datasets, we first collected 450K customer review comments from various e-commerce websites. We then manually labeled each review as either positive or negative, resulting in 358,743 positive reviews and 100,699 negative reviews. We named this full collection of sentiment-labeled reviews collected by AISIA the full version (AISIA-VN-Review-F). However, in this work we are interested in improving the model's performance when the training data are limited; thus, we only consider a subset of up to 25K training reviews and evaluate the model on another 170K reviews. We refer to this subset of the full dataset as AISIA-VN-Review-S. It is important to emphasize that our team spent a lot of time and effort manually classifying each review into positive or negative sentiment. | Provide a detailed description of the following dataset: AISIA-VN-Review-S |
6IMPOSE | The dataset includes synthetic data generated by rendering the 3D meshes of LM objects and several household objects in Blender for training 6D pose estimation algorithms. The whole dataset contains synthetic data for 18 objects (13 from LM and 5 household objects), with 20,000 data samples per object. Each data sample includes an RGB image in .png format and a depth image in .exr format, along with mask labels in .png format and ground-truth pose labels saved in .json files. Apart from the training data, the 3D meshes of the objects and the pre-trained models of the 6D pose estimation algorithm are also included. The whole dataset takes approximately 1 TB of storage. | Provide a detailed description of the following dataset: 6IMPOSE |
LEA-GCN-dataset | The datasets of "Towards Lightweight Cross-domain Sequential Recommendation via External Attention-enhanced Graph Convolution Network" (DASFAA 2023) | Provide a detailed description of the following dataset: LEA-GCN-dataset |
OntoLAMA | Instructions: <https://krr-oxford.github.io/DeepOnto/ontolama/>.
Huggingface: <https://huggingface.co/datasets/krr-oxford/OntoLAMA>.
Zenodo: <https://doi.org/10.5281/zenodo.6480540>. | Provide a detailed description of the following dataset: OntoLAMA |
Bio-ML | The Bio-ML dataset provides five ontology pairs for both equivalence and subsumption ontology matching.
See detailed instructions at: [https://krr-oxford.github.io/DeepOnto/bio-ml](https://krr-oxford.github.io/DeepOnto/bio-ml)
See the OAEI Bio-ML track at: [https://www.cs.ox.ac.uk/isg/projects/ConCur/oaei/](https://www.cs.ox.ac.uk/isg/projects/ConCur/oaei/)
See our resource paper at: [https://arxiv.org/abs/2205.03447](https://arxiv.org/abs/2205.03447) (accepted at ISWC-2022 and nominated as the best resource paper candidate) | Provide a detailed description of the following dataset: Bio-ML |
GIRT-Data | GIRT-Data is the first and largest dataset of issue report templates (IRTs) in both YAML and Markdown format. This dataset and its corresponding open-source crawler tool are intended to support research in this area and to encourage more developers to use IRTs in their repositories. The stable version of the dataset contains 1,084,300 repositories, 50,032 of which support IRTs. | Provide a detailed description of the following dataset: GIRT-Data |
SynthBRSet | 3D computer graphics is leveraged to generate a large and diverse dataset for training bike-rotation estimators for bike parking assessment. By using 3D graphics software (Blender), the algorithm can accurately annotate the rotations of bikes with respect to the parking spot area along two axes (y and z), which is crucial for training models for visual object-to-spot rotation estimation. Additionally, the ease of building the algorithm in Python made the generated dataset diverse, with a wide range of variations in parking spaces, lighting conditions, backgrounds, material textures, and colors, as well as objects and camera angles, to improve the generalization of the trained model. Overall, the use of 3D computer graphics allows for the efficient and precise generation of visual data for this task, as well as for many potential tasks in computer vision. | Provide a detailed description of the following dataset: SynthBRSet |
MFNet | The first RGB-Thermal urban scene image dataset with pixel-level annotation. We published this new RGB-Thermal semantic segmentation dataset in support of further development of autonomous vehicles in the future. This dataset contains 1569 images (820 taken at daytime and 749 taken at nighttime). Eight classes of obstacles commonly encountered during driving (car, person, bike, curve, car stop, guardrail, color cone, and bump) are labeled in this dataset. | Provide a detailed description of the following dataset: MFNet |
Chicago Face Database (CFD) | "The Chicago Face Database was developed at the University of Chicago by Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink. The CFD is intended for use in scientific research. It provides high-resolution, standardized photographs of male and female faces of varying ethnicity between the ages of 17-65. Extensive norming data are available for each individual model. These data include both physical attributes (e.g., face size) as well as subjective ratings by independent judges (e.g., attractiveness).
Detailed information about the construction of the database and the available norming data can be found in Ma, Correll, & Wittenbrink (2015)." | Provide a detailed description of the following dataset: Chicago Face Database (CFD) |
ViNLI | A large-scale and high-quality corpus is necessary for studies on NLI for Vietnamese, which can be considered a low-resource language. In this paper, we introduce ViNLI (Vietnamese Natural Language Inference), an open-domain and high-quality corpus for evaluating Vietnamese NLI models, which is created and evaluated with a strict process of quality control. ViNLI comprises over 30,000 human-annotated premise-hypothesis sentence pairs extracted from more than 800 online news articles on 13 distinct topics. | Provide a detailed description of the following dataset: ViNLI |
OpenLane-V2 val | **OpenLane-V2** is the world's first perception and reasoning benchmark for scene structure in autonomous driving. The primary task of the dataset is scene structure perception and reasoning, which requires the model to recognize the dynamic drivable states of lanes in the surrounding environment. The challenge of this dataset includes not only detecting lane centerlines and traffic elements but also recognizing the attributes of traffic elements and the topology relationships among detected objects.
The <a href="https://github.com/OpenDriveLab/OpenLane-V2#task">OLS</a> score is defined to measure model performance. | Provide a detailed description of the following dataset: OpenLane-V2 val |
METABRIC | https://ega-archive.org/studies/EGAS00000000083 | Provide a detailed description of the following dataset: METABRIC |
SLOPER4D | **SLOPER4D** is a novel scene-aware dataset collected in large urban environments to facilitate research on global human pose estimation (GHPE) with human-scene interaction in the wild. It consists of 15 sequences of human motions, each of which has a trajectory length of more than 200 meters (up to 1,300 meters) and covers an area of more than 2,000 m² (up to 13,000 m²), including more than 100K LiDAR frames, 300K video frames, and 500K IMU-based motion frames. With SLOPER4D, we provide a detailed and thorough analysis of two critical tasks, camera-based 3D HPE and LiDAR-based 3D HPE in urban environments, and benchmark a new task, GHPE. | Provide a detailed description of the following dataset: SLOPER4D |
YTD-18M | YTD-18M is a large-scale corpus of 18M video-based dialogues, constructed from web videos: crucial to the data collection pipeline is a pretrained language model that converts error-prone automatic transcripts to a cleaner dialogue format while maintaining meaning. | Provide a detailed description of the following dataset: YTD-18M |
Overall-Driving-Behavior-Recognition-By-Smartphone | Monitoring and evaluating driving behavior is the main goal of this paper, which encouraged us to develop a new system based on the Inertial Measurement Unit (IMU) sensors of smartphones. In this system, a hybrid of Discrete Wavelet Transformation (DWT) and Adaptive Neuro-Fuzzy Inference System (ANFIS) is used to recognize overall driving behaviors. The behaviors are classified into safe, semi-aggressive, and aggressive classes, matched against Driver Anger Scale (DAS) self-reported questionnaire results. The proposed system extracts four features from IMU sensors in the form of time series. They are decomposed by DWT into two levels, and their energies are sent to six ANFISs. Each ANFIS models a different perception of driving behavior under uncertain knowledge and returns the similarity or dissimilarity between driving behaviors. The results of these six ANFISs are combined by three different decision-fusion approaches. Results show that Coiflet-2 is the most suitable mother wavelet for driving behavior analysis. In addition, the proposed system recognizes overall driving behavior patterns with 92% accuracy without the need to evaluate maneuvers one by one. We show that without longitudinal acceleration data the driver behavior cannot be recognized successfully, while the results are not substantially degraded when the gyroscope is not available. | Provide a detailed description of the following dataset: Overall-Driving-Behavior-Recognition-By-Smartphone |
Microscopy Images of Drosophila Wing | The Microscopy Images of the Drosophila Wing dataset is divided into two folders, Tumor and No Tumor. The Tumor folder has images of different stages of cancer, including both early and late stages. The images were organized so that the Tumor folder contains images that already have a tumor or will develop cancer in the next few days, whereas the No Tumor folder contains images with no sign of cancer or with a tiny tumor percentage that will be suppressed the following day. | Provide a detailed description of the following dataset: Microscopy Images of Drosophila Wing |
16s rDNA sequencing of feces from C9orf72 loss of function mice | In one round of sequencing, 5 fecal pellets from 2 pro-inflammatory environments (Harvard BRI/Johns Hopkins) and 2 pro-survival environments (Broad Institute/Jackson Labs) were sequenced at the 16S rDNA locus. In a second round of sequencing, 9 fecal pellets from Harvard BRI, 9 fecal pellets from Broad Institute, 6 fecal pellets from Harvard BRI mice transplanted with Harvard BRI feces, and 6 pellets from Harvard BRI mice transplanted with Broad feces were sequenced at the 16S rDNA locus. | Provide a detailed description of the following dataset: 16s rDNA sequencing of feces from C9orf72 loss of function mice |
CTCyclistDetectionDataset | Over 20,000 annotated synthetic images and web-scraped images of bicyclists with bounding box annotations in Pascal VOC format. | Provide a detailed description of the following dataset: CTCyclistDetectionDataset |
nuScenes LiDAR only | Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online. | Provide a detailed description of the following dataset: nuScenes LiDAR only |
GAS | **GAS (Grasp Area Segmentation)** dataset consists of 10089 RGB images of cluttered scenes grouped into 1121 grasp-area segmentation tasks. For each RGB image we provide a binary segmentation map with the graspable and non-graspable regions for every object in the scene. The dataset can be used for meta-training part-based grasp area estimation networks.
For creating the GAS dataset we use the RGB images and corresponding ground truth segmentation masks from the GraspNet 1-Billion dataset. | Provide a detailed description of the following dataset: GAS |
CIMAT-Cyclist | This dataset provides a benchmark for cyclist orientation detection, "CIMAT-Cyclist", with bounding-box-based labels in eight different classes depending on the orientation. It contains 11,103 images, of which 6,605 were collected from approximately 450 videos and images taken at sports events and on the streets of the state of Zacatecas, Mexico, while 4,498 additional images were obtained from the web, from pages such as pixabay, pexels, and freephotos, among others.
"CIMAT-Cyclist" provides 20,229 instances over the 11,103 cyclist images, where 80% of the images were split into the training set and 20% into the test set.
Cyclists are divided into 8 classes according to orientation: CyclistN, CyclistNE, cyclistE, cyclistSE, cyclistS, cyclistSW, cyclistW and cyclistNW. | Provide a detailed description of the following dataset: CIMAT-Cyclist |
Caselaw4 | __Caselaw4__ is a dataset of 350k common law judicial decisions from the [U.S. Caselaw Access Project](https://case.law/), of which 250k have been automatically annotated with binary outcome labels of _AFFIRM_ and _REVERSE_.
The court case reports used in the dataset are from New Mexico, North Carolina, Illinois, and Arkansas Courts of Appeal. These Courts hear appeals exclusively from lower courts within their respective states, on matters of domestic state law, and the data for these jurisdictions are freely available.
Since each case in Caselaw4 appeals some lower court ruling, the possible outcomes of each case are as follows:
- the previous ruling is kept as is (_AFFIRM_);
- the previous ruling is changed/annulled (_REVERSE_);
- some parts of the previous ruling are kept and some are changed (_MIXED_);
- the appeal is dismissed (a type of _AFFIRM_).
The data in Caselaw4 are stored in JSON format. In addition to the original metadata about the case name, date, court, judges, cases cited etc., we (a) automatically annotated a subset of 250k cases with the _AFFIRM_ or _REVERSE_ outcome label (with weighted average precision of 95.45%), and (b) manually annotated 500 cases from the New Mexico Court of Appeals with the _AFFIRM_, _REVERSE_, or _MIXED_ outcome label as well as with the outcome sentences. | Provide a detailed description of the following dataset: Caselaw4 |
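A minimal sketch of how Caselaw4's JSON records might be consumed (the field names `outcome`, `name`, and `decision_date`, and the JSON-lines layout, are illustrative assumptions, not the dataset's documented schema):
```python
import json
from collections import Counter

# Count outcome labels across the automatically annotated subset,
# assuming one JSON object per line.
counts = Counter()
with open("caselaw4_annotated.jsonl", encoding="utf-8") as f:
    for line in f:
        case = json.loads(line)
        counts[case["outcome"]] += 1      # e.g. "AFFIRM" or "REVERSE"
        # case["name"], case["decision_date"], etc. would carry the original metadata

print(counts.most_common())
```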
BFN | This is a database of backdoored neural networks intended for face recognition. The networks are of the FaceNet architecture and are trained on Casia-WebFace, with and without additional samples (which are the source of the backdoor). More information regarding backdoors and the project within which this fits can be found in the public release of the source code: https://gitlab.idiap.ch/bob/bob.paper.backdoored_facenets.biosig2022.
There are two sets of backdoored networks. A first one with backdoors with varying triggers (the triggers dataset) and a second one with backdoors with varying trigger placement strategies (the locations dataset). A third set of networks is also provided, just regular networks without any backdoor, referred to as the clean dataset. Configuration yaml files are provided to replicate the backdoored networks using the repository content linked above, in addition to pickle files containing validation scores on all validation datasets.
The purpose of this dataset is to allow for the evaluation of backdoored-network detection work on face recognition networks. The backdoored networks here are trained from a clean checkpoint and finetuned on poisoned data. The characteristics of each network are provided in the yaml file colocated with the checkpoint. Multiple triggers (organic and synthetic) are explored, in addition to multiple placement strategies (systematically random, static, in context, etc.). The finetuning is done exclusively on the layers reported in the yaml file. | Provide a detailed description of the following dataset: BFN |
DarkTrack2021 | **DarkTrack2021** is a challenging nighttime UAV tracking benchmark, which contains 110 challenging sequences with over 100K frames in total. | Provide a detailed description of the following dataset: DarkTrack2021 |
RarePlanes | RarePlanes is a unique open-source machine learning dataset from CosmiQ Works and AI.Reverie that incorporates both real and synthetically generated satellite imagery. The RarePlanes dataset specifically focuses on the value of AI.Reverie synthetic data to aid computer vision algorithms in their ability to automatically detect aircraft and their attributes in satellite imagery. Although other synthetic/real combination datasets exist, RarePlanes is the largest openly-available very-high resolution dataset built to test the value of synthetic data from an overhead perspective. Previous research has shown that synthetic data can reduce the amount of real training data needed and potentially improve performance for many tasks in the computer vision domain. The real portion of the dataset consists of 253 Maxar WorldView-3 satellite scenes spanning 112 locations and 2,142 km^2 with 14,700 hand-annotated aircraft. The accompanying synthetic dataset is generated via AI.Reverie’s novel simulation platform and features 50,000 synthetic satellite images with ~630,000 aircraft annotations. Both the real and synthetically generated aircraft feature 10 fine grain attributes including: aircraft length, wingspan, wing-shape, wing-position, wingspan class, propulsion, number of engines, number of vertical-stabilizers, presence of canards, and aircraft role. Finally, we conduct extensive experiments to evaluate the real and synthetic datasets and compare performances. By doing so, we show the value of synthetic data for the task of detecting and classifying aircraft from an overhead perspective. | Provide a detailed description of the following dataset: RarePlanes |
ICVL-HSI | ICVL is a hyperspectral image dataset introduced in "Sparse Recovery of Hyperspectral Signal from Natural RGB Images".
The database images were acquired using a Specim PS Kappa DX4 hyperspectral camera and a rotary stage for spatial scanning. At this time it contains 200 images and will continue to grow progressively.
Images were collected at 1392×1300 spatial resolution over 519 spectral bands (400-1,000nm at roughly 1.25nm increments). The .raw files contain raw out-of-camera data in ENVI format and .hdr files contain the headers required to decode them. For your convenience, .mat files are provided, downsampled to 31 spectral channels from 400nm to 700nm at 10nm increments.
The original dataset only contains clean images. For hyperspectral image denoising benchmarks, the testing datasets come from "3D Quasi-Recurrent Neural Network for Hyperspectral Image Denoising" | Provide a detailed description of the following dataset: ICVL-HSI |
MVK | The dataset contains single-shot videos taken from moving cameras in underwater environments. The first shard of the new Marine Video Kit dataset is presented to serve video retrieval and other computer vision challenges. In addition to basic meta-data statistics, we present several insights based on low-level features as well as semantic annotations of selected keyframes.
It contains 1,379 videos ranging in length from 2 s to 4.95 min, with mean and median durations of 29.9 s and 25.4 s, respectively.
We captured data from 11 different regions and countries between 2011 and 2022. | Provide a detailed description of the following dataset: MVK |
DocRED-FE | DocRED-FE is DocRED augmented with fine-grained entity types. | Provide a detailed description of the following dataset: DocRED-FE |
IBL-NeRF | IBL-NeRF Dataset.
Contains multi-view images with their intrinsic components. | Provide a detailed description of the following dataset: IBL-NeRF |
Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 | The **BIWI 3D** corpus comprises a total of 1109 sentences uttered by 14 native English speakers (6 males and 8 females). A real-time 3D scanner and a professional microphone were used to capture the facial movements and the speech of the speakers. The dense dynamic face scans were acquired at 25 frames per second, and the RMS error of the 3D reconstruction is about 0.5 mm. In order to ease automatic speech segmentation, we carried out the recordings in an anechoic room with walls covered by sound-wave-absorbing materials.
Each sentence was recorded twice:
- First, the speaker read the sentence from text, with a neutral expression.
- Then, the speaker watched a clip extracted from a feature film where the sentence is acted by professional actors and the context is highly emotional. After rating the emotions induced by the video, the speaker repeated the sentence. | Provide a detailed description of the following dataset: Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 |
Brightfield vs Fluorescent Staining Dataset | Differential fluorescent staining is an effective tool widely adopted for the visualization, segmentation, and quantification of cells and cellular substructures as part of standard microscopic imaging protocols. The incompatibility of staining agents with viable cells represents a major and often unavoidable limitation on its applicability in live experiments, requiring the extraction of samples at different stages of an experiment and increasing laboratory costs. Accordingly, the development of computerized image analysis methodology capable of segmenting and quantifying cells and cellular substructures from plain monochromatic images obtained by light microscopy, without the help of any physical markup techniques, is of considerable interest. The enclosed set contains microscopic images of human colon adenocarcinoma Caco-2 cells obtained under various imaging conditions with different fractions of viable vs non-viable cells. Each field of view is provided in a three-fold representation, including a phase-contrast microscopy image and two differential fluorescent microscopy images with specific markup of viable and non-viable cells, respectively, produced using two different staining schemes, making it a prominent test bed for the validation of image analysis methods. | Provide a detailed description of the following dataset: Brightfield vs Fluorescent Staining Dataset |
OmniBlender | Synthetic omnidirectional multi-view image dataset.
Photo-realistic rendered images with Cycles engine. | Provide a detailed description of the following dataset: OmniBlender |
Ricoh360 | Real-world omnidirectional multi-view image dataset. | Provide a detailed description of the following dataset: Ricoh360 |
Burned Area Delineation from Satellite Imagery | The dataset contains 73 satellite images of different forests damaged by wildfires across Europe with a resolution of up to 10m per pixel. Data were collected from the Sentinel-2 L2A satellite mission and the target labels were generated from the Copernicus Emergency Management Service (EMS) annotations, with five different severity levels, ranging from undamaged to completely destroyed. | Provide a detailed description of the following dataset: Burned Area Delineation from Satellite Imagery |
ShapeIt | The ShapeIt dataset introduced by Alper et al. (2023) consists of 109 nouns and noun phrases along with the basic shape normally associated with that item, chosen from the set {circle, rectangle, triangle}. | Provide a detailed description of the following dataset: ShapeIt |
Video Call MOS Set | The dataset contains 10 reference videos and 1467 degraded videos. The videos were transmitted via Microsoft Teams calls in 83 different network conditions and contain various typical videoconferencing impairments. It also includes P.910 Crowd subjective video MOS ratings (see paper for more info). | Provide a detailed description of the following dataset: Video Call MOS Set |
PAIR-LRT-Human Dataset | **PAIR-LRT-Human Dataset** contains pairs of thermal and RGB images captured using a FLIR Lepton3.5 thermal sensor and a Raspberry Pi camera v2, respectively. The dataset includes a total of 33,228 image pairs captured under different environmental conditions, with one human occupant in a standing position and an upper-body pose. The clothing of the occupant is in one of three colors. The images have a heatmap resolution of 16 × 12 and an RGB resolution of 128 × 96, and the field of view for each image is 71° × 57°. The dataset includes two persons. | Provide a detailed description of the following dataset: PAIR-LRT-Human Dataset |
Mocheg | A large-scale dataset that consists of 21,184 claims, where each claim is assigned a truthfulness label and ruling statement, with 58,523 pieces of evidence in the form of text and images. It supports the end-to-end multimodal fact-checking and explanation generation, where the input is a claim and a large collection of web sources, including articles, images, videos, and tweets, and the goal is to assess the truthfulness of the claim by retrieving relevant evidence and predicting a truthfulness label (i.e., support, refute and not enough information), and generate a rationalization statement to explain the reasoning and ruling process. | Provide a detailed description of the following dataset: Mocheg |
Guzheng_Tech99 | Instrument playing technique (IPT) is a key element of musical presentation.
Guzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs often overlap, and mixed IPTs that can be decomposed into multiple independent IPTs are frequently used. Most existing work on IPT detection uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.
The dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. Each note is labeled with its onset, offset, pitch, and one of seven playing techniques (vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.
More details about the code and datasets can be found at https://lidcc.github.io/GuzhengTech99/
potential use cases: instrument playing technique detection, Guzheng transcription, multi-pitch estimation, note tracking, sound event detection… | Provide a detailed description of the following dataset: Guzheng_Tech99 |
ESP Dataset | The ESP dataset (Evaluation for Styled Prompt dataset) is a new benchmark for zero-shot domain-conditional caption generation. The dataset aims to evaluate the capability to generate diverse domain-specific language conditioned on the same image. It comprises 4.8k captions for 1k images from the COCO Captions test set. We collected captions in five everyday text domains (blog, social media, instruction, story, and news) using Amazon MTurk. | Provide a detailed description of the following dataset: ESP Dataset |
CVACT | The CVACT dataset is a street-to-aerial view matching benchmark collected in Canberra, Australia. The task helps to determine localization without GPS coordinates for the street-view images. Google Street View panoramas are used as ground images, and the matching aerial images are also taken from the Google Maps API. The dataset comprises 35,532 image pairs for training and 8,884 image pairs for evaluation, and recall is the primary metric for evaluation (see the sketch below). To further test generalization in comparison to the CVUSA dataset, CVACT features 92,802 test images. | Provide a detailed description of the following dataset: CVACT |
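A minimal sketch of the recall@K retrieval metric commonly used for such cross-view matching benchmarks (the similarity matrix below is random placeholder data, not CVACT scores):
```python
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int = 1) -> float:
    """similarity[i, j]: score between street image i and aerial image j.
    The matching aerial image for street image i is assumed to be index i."""
    ranks = (-similarity).argsort(axis=1)   # best-scoring aerial candidates first
    hits = (ranks[:, :k] == np.arange(len(similarity))[:, None]).any(axis=1)
    return float(hits.mean())

sim = np.random.rand(100, 100)              # placeholder similarity scores
print(recall_at_k(sim, k=1), recall_at_k(sim, k=5))
```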
VIGOR | Similar to CVUSA and CVACT, the VIGOR dataset contains satellite and street imagery, with the goal of matching them to find the location of the street imagery. For this purpose, data from 4 major American cities were used, namely San Francisco, New York, Seattle, and Chicago. Unlike the previous datasets, there are two settings: the SAME-area setting, where images of all cities are available in the training and validation splits, and the CROSS-area setting, where training is done on two cities (New York, Seattle) and evaluation is done on Chicago and San Francisco. In addition, the dataset contains semi-positive images which are very close to an actual ground-truth image and thus serve as a distraction for the matching task. In total, the dataset consists of 90,618 satellite images and 105,214 street images. | Provide a detailed description of the following dataset: VIGOR |
CIFAKE: Real and AI-Generated Synthetic Images | The quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness.
CIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect when an image is real or has been generated by AI?
## Dataset details
The dataset contains two classes - REAL and FAKE.
For REAL, we collected the images from Krizhevsky & Hinton's CIFAR-10 dataset.
For the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4.
There are 100,000 images for training (50k per class) and 20,000 for testing (10k per class).
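A minimal sketch of a real-vs-fake baseline classifier on this data (not the authors' pipeline; the train/ and test/ directory layout with REAL and FAKE subfolders is an assumption for illustration):
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.ToTensor()])            # CIFAR-sized 32x32 RGB images
train_set = datasets.ImageFolder("cifake/train", transform=tfm)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = nn.Sequential(                                        # tiny CNN baseline
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 2),                   # 2 classes: FAKE, REAL
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in train_loader:                           # one pass for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```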
## References
If you use this dataset, you must cite the following sources
Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.
Bird, J.J., Lotfi, A. (2023). CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126.
Real images are from Krizhevsky & Hinton (2009), fake images are from Bird & Lotfi (2023). The Bird & Lotfi study is a preprint currently available on ArXiv and this description will be updated when the paper is published.
## License
This dataset is published under the same MIT license as CIFAR-10:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | Provide a detailed description of the following dataset: CIFAKE: Real and AI-Generated Synthetic Images |
WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images | WHOOPS! is a dataset and benchmark for visual commonsense. The dataset is comprised of purposefully commonsense-defying images created by designers using publicly-available image generation tools like Midjourney. It contains images that defy commonsense for a wide range of reasons, including deviations from expected social norms and everyday knowledge. | Provide a detailed description of the following dataset: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images |
MBPP | The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. | Provide a detailed description of the following dataset: MBPP |
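A minimal sketch of MBPP-style evaluation: run a candidate solution and its assert-based test cases in a shared namespace. The field names (`text`, `code`, `test_list`) follow common releases of the benchmark but are treated as assumptions here, and the example record is hypothetical:
```python
def passes_tests(problem: dict, candidate_code: str) -> bool:
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)        # define the candidate function(s)
        for test in problem["test_list"]:      # e.g. "assert add(2, 3) == 5"
            exec(test, namespace)
        return True
    except Exception:
        return False

problem = {"text": "Write a function to add two numbers.",
           "code": "def add(a, b):\n    return a + b",
           "test_list": ["assert add(2, 3) == 5", "assert add(-1, 1) == 0"]}
print(passes_tests(problem, problem["code"]))  # True for the reference solution
```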
OPRA | The OPRA Dataset was introduced in Demo2Vec: Reasoning Object Affordances From Online Videos (CVPR'18) for reasoning about object affordances from online demonstration videos. It contains 11,505 demonstration clips and 2,512 object images scraped from 6 popular YouTube product review channels, along with the corresponding affordance annotations. More details can be found on our project page: https://sites.google.com/view/demo2vec/. | Provide a detailed description of the following dataset: OPRA |
EPIC-Hotspot | From Grounded Human-Object Interaction Hotspots from Video (ICCV'19): We collect annotations for interaction keypoints on EPIC Kitchens in order to quantitatively evaluate our method in parallel to the OPRA dataset (where annotations are available). We note that these annotations are collected purely for evaluation, and are not used for training our model. We select the 20 most frequent verbs, and select 31 nouns that afford these interactions. | Provide a detailed description of the following dataset: EPIC-Hotspot |
Autoregressive Paraphrase Dataset (ARPD) | For more details see https://huggingface.co/datasets/jpwahle/autoregressive-paraphrase-dataset | Provide a detailed description of the following dataset: Autoregressive Paraphrase Dataset (ARPD) |
Ranking social media news feed | A dataset consisting of 46 recipient users and 26,180 tweets. The dataset includes the users' news feeds and 13 features that may influence the relevance of the tweets. | Provide a detailed description of the following dataset: Ranking social media news feed |
N5k360 | We applied our framework, dubbed "PreNeRF 360", to enable the use of the Nutrition5k dataset in NeRF, and introduce an updated version of this dataset, known as the N5k360 dataset. | Provide a detailed description of the following dataset: N5k360 |
NIH-CXR-LT | NIH-CXR-LT. NIH ChestXRay14 contains over 100,000 chest X-rays labeled with 14 pathologies, plus a “No Findings” class. We construct a single-label, long-tailed version of the NIH ChestXRay14 dataset by introducing five new disease findings. The resulting NIH-CXR-LT dataset has 20 classes, including 7 head classes, 10 medium classes, and 3 tail classes. NIH-CXR-LT contains 88,637 images labeled with one of 19 thorax diseases, with 68,058 training and 20,279 test images. The validation and balanced test sets contain 15 and 30 images per class, respectively. | Provide a detailed description of the following dataset: NIH-CXR-LT |
MIMIC-CXR-LT | MIMIC-CXR-LT. We construct a single-label, long-tailed version of MIMIC-CXR in a similar manner. MIMIC-CXR is a multi-label classification dataset with over 200,000 chest X-rays labeled with 13 pathologies and a “No Findings” class. The resulting MIMIC-CXR-LT dataset contains 19 classes, of which 10 are head classes, 6 are medium classes, and 3 are tail classes. MIMIC-CXR-LT contains 111,792 images labeled with one of 18 diseases, with 87,493 training images and 23,550 test set images. The validation and balanced test sets contain 15 and 30 images per class, respectively. | Provide a detailed description of the following dataset: MIMIC-CXR-LT |
GATITOS | The GATITOS (Google's Additional Translations Into Tail-languages: Often Short) dataset is a high-quality, multi-way parallel dataset of tokens and short phrases, intended for training and improving machine translation models. This dataset consists of 4,000 English segments (4,500 tokens) that have been translated into each of 26 low-resource languages, as well as three higher-resource pivot languages (es, fr, hi). All translations were made directly from English, with the exception of Aymara, which was translated from Spanish. | Provide a detailed description of the following dataset: GATITOS |
ALPIX-VSR | We collected a new real-world dataset, called ALPIX-VSR, using an ALPIX-Eiger event camera. The camera outputs well-aligned RGB frames and events. The RGB frames have a resolution of 3264 × 2448 and are generated by a carefully designed image signal processor (ISP) from RAW data with the Quad Bayer pattern, and the events have a resolution of 1632 × 1224.
https://vlis2022.github.io/cvpr23/egvsr | Provide a detailed description of the following dataset: ALPIX-VSR |
EHE | Human Action Evaluation (HAE) has rarely been applied to real-world disease monitoring; the EHE dataset aims to gather sample data to validate effective HAE methods that could then be expanded to a larger validation scale. EHE consists of several actions from the morning exercises that patients complete daily in the elderly home. The EHE dataset contains 869 action repetitions performed by 25 older people. Six exercises were collected for the EHE dataset via Kinect v2. | Provide a detailed description of the following dataset: EHE |
BEAR | **BEAR (Benchmark on video Action Recognition)** is a collection of 18 video datasets grouped into 5 categories (anomaly, gesture, daily, sports, and instructional), which covers a diverse set of real-world applications. | Provide a detailed description of the following dataset: BEAR |
VR-Folding | **VR-Folding** contains garment meshes of 4 categories from CLOTH3D dataset, namely Shirt, Pants, Top and Skirt. For flattening task, there are 5871 videos which contain 585K frames in total. For folding task, there are 3896 videos which contain 204K frames in total. The data for each frame include multi-view RGB-D images, object masks, full garment meshes, and hand poses. | Provide a detailed description of the following dataset: VR-Folding |
MMHS150k | Existing hate speech datasets contain only textual data. We create a new manually annotated multimodal hate speech dataset formed by 150,000 tweets, each one of them containing text and an image. We call the dataset MMHS150K. | Provide a detailed description of the following dataset: MMHS150k |
MSLS | The largest and most diverse dataset for lifelong place recognition from image sequences in urban and suburban settings. | Provide a detailed description of the following dataset: MSLS |
ARKitTrack | **ARKitTrack** is a new RGB-D tracking dataset for both static and dynamic scenes captured by consumer-grade LiDAR scanners equipped on Apple's iPhone and iPad. ARKitTrack contains 300 RGBD sequences, 455 targets, and 229.7K video frames in total. This dataset has 123.9K pixel-level target masks along with the bounding box annotations and frame-level attributes. | Provide a detailed description of the following dataset: ARKitTrack |
HAMMER | The **HAMMER** dataset contains 13 scenes. Each scene has two setups, with and without objects (with: the scene includes several objects with various surface materials; without: the scene contains only the background), and each scene has two camera trajectories. Each trajectory is composed of roughly 300 frames, which adds up to about 16K frames in total (13 × 2 × 2 × 300). Each trajectory contains corresponding images from each camera: d435 (stereo), l515 (LiDAR, D-ToF), polarization (RGBP, RGB with polarization), and tof (I-ToF). Each camera folder contains its intrinsics file and its own recorded images, together with rendered depth GT / instance GT and the camera pose. All cameras are fully synchronized via the robotic arm's data acquisition setup. | Provide a detailed description of the following dataset: HAMMER |
CelebV-Text | **CelebV-Text** comprises 70,000 in-the-wild face video clips with diverse visual content, each paired with 20 texts generated using the proposed semi-automatic text generation strategy. The provided texts describe both static and dynamic attributes precisely. | Provide a detailed description of the following dataset: CelebV-Text |
CIRCO | **CIRCO (Composed Image Retrieval on Common Objects in context)** is an open-domain benchmarking dataset for Composed Image Retrieval (CIR) based on real-world images from COCO 2017 unlabeled set. It is the first CIR dataset with multiple ground truths and aims to address the problem of false negatives in existing datasets. CIRCO comprises a total of 1020 queries, randomly divided into 220 and 800 for the validation and test set, respectively, with an average of 4.53 ground truths per query. | Provide a detailed description of the following dataset: CIRCO |
IHDP | The Infant Health and Development Program (IHDP) is a randomized controlled study designed to evaluate the effect of home visits from specialist doctors on the cognitive test scores of premature infants. The dataset was first used for benchmarking treatment effect estimation algorithms in Hill [35], where selection bias is induced by removing a non-random subset of the treated individuals to create an observational dataset, and the outcomes are generated using the original covariates and treatments. It contains 747 subjects and 25 variables. | Provide a detailed description of the following dataset: IHDP |
Jobs | The Jobs dataset by LaLonde [36] is a widely used benchmark in the causal inference community, where the treatment is job training and the outcomes are income and employment status after training. The dataset includes 8 covariates such as age, education, and previous earnings. Our goal is to predict unemployment, using the feature set of Dehejia and Wahba [37]. Following Shalit et al. [8], we combined the LaLonde experimental sample (297 treated, 425 control) with the PSID comparison group (2490 control). | Provide a detailed description of the following dataset: Jobs |
Hi4D | **Hi4D** contains 4D textured scans of 20 subject pairs, 100 sequences, and a total of more than 11K frames. Hi4D contains rich interaction-centric annotations in 2D and 3D alongside accurately registered parametric body models. | Provide a detailed description of the following dataset: Hi4D |
AGIQA-1K | AI Generated Content (AIGC) refers to any form of content, such as text, images, audio, or video, that is created with the help of artificial intelligence technology. With the flourishing development of deep learning, the efficiency of AIGC generation has increased, and AI-Generated Image (AGI) is becoming more prevalent in areas such as culture, entertainment, education, social media, etc.
Unlike Natural Scene Images (NSIs) captured from natural scenes, AGIs are directly generated by AI models. Thus, AGIs have some unique quality characteristics, and viewers tend to evaluate the quality of AGIs from aspects different from those of NSIs.
Therefore, we propose the first perceptual AGI Quality Assessment (AGIQA-1K) database, which provides 1,080 AGIs along with quality labels, including technical issues, AI artifacts, unnaturalness, discrepancy, and aesthetics as major evaluation aspects. | Provide a detailed description of the following dataset: AGIQA-1K |
OLD French Coronavirus Screening Data | The RT-PCR screening tests used, whose results are reported in SI-DEP, made it possible to suspect the presence of the variant of concern (VOC) Alpha (20I/501Y.V1) and, indistinguishably, of the VOC Beta (20H/501Y.V2) or Gamma (20J/501Y.V3). This screening strategy targeting the Alpha, Beta, and Gamma VOCs is no longer suited to the increasing diversity of emerging SARS-CoV-2 variants. Since 05/31/2021, the screening strategy has evolved to search for certain mutations of interest that can be found in different variants. It therefore no longer makes it possible to assign an infection to a specific variant, but it makes it possible to follow, over time and across the territory, the proportion of infections due to a virus carrying these mutations.
For the first phase of deployment, the E484K, E484Q, and L452R mutations were selected because they are potentially linked to immune escape and/or increased transmissibility and are found in the majority of VOCs to date. Because these PCR results are returned more quickly than sequencing results, specific measures can be implemented as soon as cases carrying mutations of interest are detected, in order to slow down their dissemination (reinforcement of contact tracing, screening, or vaccination campaigns).
The emergence of the B.1.1.529 variant, known as Omicron, implies an evolution of the current doctrine with regard to the mutations sought during screening, since this variant presents none of the 3 mutations cited above. Between 29/11 and 19/12, three targets were sought by the new screening strategy: the 69/70 deletion and the N501Y and K417N mutations, carried by the Omicron variant. Since 12/20, the 69/70 deletion and the K417N, S371L-S373P, and Q493R substitutions are searched for, and the E484Q and N501Y mutations are no longer searched for. The presence of the 69/70 deletion or of the K417N, S371L-S373P, or Q493R substitutions must still be interpreted with caution; however, the new, more specific screening strategy currently being deployed will make it possible to suspect the Omicron variant more precisely.
REPORTING OF RESULTS IN SI-DEP
The screening results are entered by following a nomenclature in the form of a succession of alphabetical characters representing the mutation(s) sought and numbers representing the result. Since 29/11/2021, a new variable "D" has been created in order to report results on mutations DEL69/70 or N501Y or K417N. Retrospective tests have been carried out on samples since 01/11/2021 to search for these three mutations. Then, since 12/20/2021, variable “D” reports results on mutations DEL69/70 and/or K417N, and/or S371L-S373P and/or Q493R.
Codes and associated mutations of interest:
- A = E484K
- C = L452R
- D = DEL69/70, K417N, S371L-S373P and/or Q493R
Result values:
- 1 = Presence of the mutation sought
- 0 = Absence of the mutation sought
- 8 = Uninterpretable
- 9 = Not sought
For mutations associated with code D:
A "D1" result means that at least one of the mutations associated with this code is positive; A "D0" result means that at least one of the mutations associated with this code is negative, and that none are positive
ANALYSIS OF DATA TRANSMITTED IN SI-DEP
The data refer to all positive tests carried out (RT-PCR + antigen tests), and not to the number of people tested or positive. Indeed, the definition of persons used for calculating the indicators retains only one test per person in a given period, whereas that person may have been tested several times (an antigen test followed by a PCR, for example). This methodology could result in not counting a test that provides information on the presence of a mutation, and therefore in underestimating their representation. All of the positive tests in the SI-DEP database are the subject of a search for information on one or more mutations, with a cleaning procedure applied to strict duplicates, based on a key corresponding to:
- the person's pseudonym (an individual anonymity code based on the person's identifying traits)
- the date and time of the sample collection
- the type of analysis
DATA AVAILABLE
The data is available at department, region and France level.
Available variables:
- nb_pos: Number of positive tests
- nb_crib: Number of tests screened
- tx_crib: Rate of tests screened
- nb_A0: Number of positive tests for which the search for mutation A is negative
- nb_A1: Number of positive tests for which the search for mutation A is positive
- tx_A1: Proportion of presence of mutation A
- nb_C0: Number of positive tests for which the search for mutation C is negative
- nb_C1: Number of positive tests for which the search for mutation C is positive
- tx_C1: Proportion of presence of mutation C
- nb_D0: Number of positive tests for which one of the following mutations: DEL69/70, K417N, S371L-S373P or Q493R is absent
- nb_D1: Number of positive tests for which one of the following mutations: DEL69/70, K417N, S371L-S373P or Q493R is present
- tx_D1: Proportion of tests with the presence of one or more of the following mutations: DEL69/70, K417N, S371L-S373P or Q493R
- nb_A0C0: Number of tests for which mutations A and C are absent
- nb_A01C01: Number of tests for which mutations A and C are searched for and interpretable
- tx_A0C0: Proportion of tests with absence of the two mutations A and C
The complete methodological note is available. | Provide a detailed description of the following dataset: OLD French Coronavirus Screening Data |
Fraunhofer Portugal AICOS EDoF Dataset | The Fraunhofer Portugal AICOS EDoF Dataset was produced within the TAMI project and is composed of images of microscopic fields of view (FOV) of Liquid-based Cervical Cytology (LBC) samples. A total of 15 LBC samples were supplied by the Pathology Services from Hospital Fernando Fonseca and the Portuguese Oncology Institute of Porto. For each LBC sample, a set of images were obtained using a version of µSmartScope [1,2] prototype adapted to the cervical cytology use case [3,4].
µSmartScope is a portable 3D-printed prototype based on a smartphone developed by Fraunhofer AICOS that allows the fully automatic acquisition of microscopic images. The acquisition is done using the automated focus approach described in [1], where for each FOV, all images in the precise phase of the approach are stored as well as an indication of the image with the best focus metric (standard deviation of the Tenenbaum gradient [5]).
The dataset is divided into two partitions: the first partition contains the raw data with 5 images per stack without any pre-processing; the second partition contains the pre-processed data according to the best workflow proposed in [6], and also the respective EDoF generated using Complex Wavelets method for the fusion of the microscopy Images. The images from the second partition were pre-processed using chromatic, static, and elastic alignment; they can be used to fully replicate the work in [6].
The size of every image present in this database is 960x720 pixels. For this work, 144 EDoF images generated with 5 aligned images per stack are used, where the central image is the one with the best focus metric (C). Figure 1 shows an example of a stack and the respective EDoF.
If you find this dataset useful for your research, please cite as: T. Albuquerque, L. Rosado, R. Cruz, M. J. M. Vasconcelos, T. Oliveira J. S. Cardoso, Rethinking Low-Cost Microscopy Workflow: Image Enhancement using Deep Based Extended Depth of Field Methods, In Intelligent Systems with Applications (Elsevier), 2023. https://doi.org/10.1016/j.iswa.2022.200170. | Provide a detailed description of the following dataset: Fraunhofer Portugal AICOS EDoF Dataset |
NYCBike1 | Bike flow data of New York City with grid 16x8. | Provide a detailed description of the following dataset: NYCBike1 |
NYCBike2 | Bike flow data of New York City. | Provide a detailed description of the following dataset: NYCBike2 |
NYCTaxi | Taxi flow data of New York City with grid 20x10. | Provide a detailed description of the following dataset: NYCTaxi |
MELON | 1. A unique dataset comprising multimodal creative and designed documents containing images with corresponding captions, paired with music based on around 50 mood/themes.
2. Motivation: To enhance user experience, to increase accessibility to a wider community, and to motivate research in the cross-modal retrieval field.
3. Use case: Music Retrieval for designed documents, Music augmentation for multimodal designed documents, Design image retrieval for music based on mood/themes. | Provide a detailed description of the following dataset: MELON |
ARMBench | **ARMBench** is a large-scale, object-centric benchmark dataset for robotic manipulation in the context of a warehouse. ARMBench contains images, videos, and metadata that corresponds to 235K+ pick-and-place activities on 190K+ unique objects. The data is captured at different stages of manipulation, i.e., pre-pick, during transfer, and after placement. | Provide a detailed description of the following dataset: ARMBench |
WikiTableSet | WikiTableSet is a large publicly available image-based table recognition dataset in three languages built from Wikipedia.
WikiTableSet contains nearly 4 million English table images, 590K Japanese table images, and 640K French table images, each with its corresponding HTML representation and cell bounding boxes.
We build a Wikipedia table extractor [WTabHTML](https://github.com/phucty/wtabhtml) and use this to extract tables (in HTML code format) from the 2022-03-01 dump of Wikipedia. In this study, we select Wikipedia tables from three representative languages, i.e., English, Japanese, and French; however, the dataset could be extended to around 300 languages with 17M tables using our table extractor.
We then normalize the HTML tables following the PubTabNet format (separating table headers and table data, and removing CSS and style tags). Finally, we use Chrome and Selenium to render table images from the table HTML code.
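As a rough illustration of that last rendering step, the sketch below renders one (hypothetical) table HTML file to a PNG of the `<table>` element with headless Chrome via Selenium; it is not the authors' pipeline and the paths are assumptions.

```python
# Sketch only: render one table HTML file to a PNG screenshot of the <table>
# element using headless Chrome. Paths are hypothetical; requires selenium >= 4
# and a matching ChromeDriver.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
try:
    driver.get("file:///data/tables/table_0001.html")  # hypothetical input file
    driver.find_element(By.TAG_NAME, "table").screenshot("table_0001.png")
finally:
    driver.quit()
```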
This dataset provides a standard benchmark for studying table recognition algorithms in different languages or even multilingual table recognition algorithms.
You can click [here](https://arxiv.org/pdf/2303.07641.pdf) for more details about this dataset. | Provide a detailed description of the following dataset: WikiTableSet |
SemanticKITTI-C | #### 🤖 Robo3D - The SemanticKITTI-C Benchmark
SemanticKITTI-C is an evaluation benchmark aimed at robust and reliable 3D semantic segmentation in autonomous driving. With it, we probe the robustness of 3D segmentors under out-of-distribution (OoD) scenarios against corruptions that occur in real-world environments. Specifically, we consider natural corruptions that occur in the following cases (a toy corruption sketch follows the list):
- Adverse weather conditions, such as fog, wet ground, and snow;
- External disturbances that are caused by motion blur or result in LiDAR beam missing;
- Internal sensor failure, including crosstalk, possible incomplete echo, and cross-sensor scenarios.
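As a toy illustration of the "LiDAR beam missing" style of corruption, the sketch below randomly drops points from a scan. This is not the benchmark's actual corruption-generation pipeline, and the file layout is only assumed to follow the SemanticKITTI convention.

```python
# Toy sketch: random LiDAR point dropout, loosely mimicking "beam missing".
# NOT the benchmark's corruption pipeline; for illustration only.
import numpy as np

def drop_points(points: np.ndarray, drop_ratio: float = 0.5, seed: int = 0) -> np.ndarray:
    """Randomly keep (1 - drop_ratio) of the points in an (N, 4) x/y/z/intensity array."""
    rng = np.random.default_rng(seed)
    keep = rng.random(points.shape[0]) >= drop_ratio
    return points[keep]

# Usage (assumed SemanticKITTI-style .bin scan; path is hypothetical):
# scan = np.fromfile("sequences/00/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)
# corrupted = drop_points(scan, drop_ratio=0.5)
```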
SemanticKITTI-C is part of the [Robo3D](https://arxiv.org/abs/2303.17597) benchmark. Visit our homepage to explore more details. | Provide a detailed description of the following dataset: SemanticKITTI-C |
KITTI-C | #### 🤖 Robo3D - The KITTI-C Benchmark
KITTI-C is an evaluation benchmark aimed at robust and reliable 3D object detection in autonomous driving. With it, we probe the robustness of 3D detectors under out-of-distribution (OoD) scenarios against corruptions that occur in real-world environments. Specifically, we consider natural corruptions that occur in the following cases:
- Adverse weather conditions, such as fog, wet ground, and snow;
- External disturbances that are caused by motion blur or result in LiDAR beam missing;
- Internal sensor failure, including crosstalk, possible incomplete echo, and cross-sensor scenarios.
KITTI-C is part of the [Robo3D](https://arxiv.org/abs/2303.17597) benchmark. Visit our homepage to explore more details. | Provide a detailed description of the following dataset: KITTI-C |
nuScenes-C | #### 🤖 Robo3D - The nuScenes-C Benchmark
nuScenes-C is an evaluation benchmark aimed at robust and reliable 3D perception in autonomous driving. With it, we probe the robustness of 3D detectors and segmentors under out-of-distribution (OoD) scenarios against corruptions that occur in real-world environments. Specifically, we consider natural corruptions that occur in the following cases:
- Adverse weather conditions, such as fog, wet ground, and snow;
- External disturbances that are caused by motion blur or result in LiDAR beam missing;
- Internal sensor failure, including crosstalk, possible incomplete echo, and cross-sensor scenarios.
nuScenes-C is part of the [Robo3D](https://arxiv.org/abs/2303.17597) benchmark. Visit our homepage to explore more details. | Provide a detailed description of the following dataset: nuScenes-C
WOD-C | #### 🤖 Robo3D - The WOD-C Benchmark
WOD-C is an evaluation benchmark aimed at robust and reliable 3D perception in autonomous driving. With it, we probe the robustness of 3D detectors and segmentors under out-of-distribution (OoD) scenarios against corruptions that occur in real-world environments. Specifically, we consider natural corruptions that occur in the following cases:
- Adverse weather conditions, such as fog, wet ground, and snow;
- External disturbances that are caused by motion blur or result in LiDAR beam missing;
- Internal sensor failure, including crosstalk, possible incomplete echo, and cross-sensor scenarios.
WOD-C is part of the [Robo3D](https://arxiv.org/abs/2303.17597) benchmark. Visit our homepage to explore more details. | Provide a detailed description of the following dataset: WOD-C |
CAIS | We collect utterances from Chinese Artificial Intelligence Speakers (CAIS) and annotate them with slot tags and intent labels. The training, validation, and test sets are split by the distribution of intents; detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, the intent labels are biased toward the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 scheme used in ATIS, since previous studies have highlighted meaningful improvements with this scheme (Ratinov and Roth, 2009) in the sequence labeling field. | Provide a detailed description of the following dataset: CAIS
HumanEval-X | HumanEval-X is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation. | Provide a detailed description of the following dataset: HumanEval-X |
HiREST | The **HiREST (HIerarchical REtrieval and STep-captioning)** dataset is a benchmark that covers hierarchical information retrieval and visual/textual stepwise summarization from an instructional video corpus. It consists of 3.4K text-video pairs from a video dataset, where 1.1K videos have annotations of moment spans relevant to the text query and a breakdown of each moment into key instruction steps with captions and timestamps (totaling 8.6K step captions). The dataset supports four tasks: video retrieval, moment retrieval, and the two novel tasks of moment segmentation and step captioning. | Provide a detailed description of the following dataset: HiREST
OpinionQA | **OpinionQA** is a dataset for evaluating the alignment of LM opinions with those of 60 US demographic groups over topics ranging from abortion to automation. | Provide a detailed description of the following dataset: OpinionQA |
MP-DocVQA | The dataset is aimed at Visual Question Answering on multi-page industry-scanned documents. The questions and answers are reused from the Single Page DocVQA (SP-DocVQA) dataset. The images correspond to those in the original dataset, extended with the preceding and following pages, up to a limit of 20 pages per document. | Provide a detailed description of the following dataset: MP-DocVQA
S&P 500 Pair Trading | A pool of real stocks from the S&P 500 covering the most recent 21 years, from 01/02/2000 to 12/31/2020.
We filter out stocks that have missing data throughout the whole period, resulting in 150 stocks with 5,284 trading days. | Provide a detailed description of the following dataset: S&P 500 Pair Trading
CSI 300 Pair Trading | A daily emerging stock market dataset (Chinese CSI 300 dataset) including 300 stocks and 5,088 time steps from the CSMAR database. We construct our stock dataset using a pool of stocks from the CSI 300 index for the last 21 years, from 01/02/2000 to 12/31/2020. Instead of all stocks in the market, we select the stocks that used to belong to the major market index CSI 300, and filter out stocks that have missing price data over the period.
For each trading day, we use the fundamental price features as the stock features, including the open price, close price, and volume. Additionally, we normalize price features such as the open and close prices with a logarithmic transform, as sketched below.
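A minimal sketch of this log transform, with assumed column names and file layout (not taken from the dataset):

```python
# Sketch only: log-normalize the open/close price columns of one stock's
# daily DataFrame. Column names and file path are assumptions.
import numpy as np
import pandas as pd

def log_normalize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for col in ("open", "close"):   # volume is left unchanged here
        out[col] = np.log(out[col])
    return out

# df = pd.read_csv("csi300_000001.csv", index_col="date")  # hypothetical file
# df = log_normalize(df)
```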
The dataset randomly splits stocks into five non-overlapping sub-datasets. For each subset, the first 90% of trading days are used as training data, the following 5% as validation data, and the remaining 5% as test data. | Provide a detailed description of the following dataset: CSI 300 Pair Trading
NBA player performance prediction dataset | The dataset covers the 2022-23 NBA regular season (2022-10-18 to 2023-01-20), which contains 691 games across 92 game days. There are 582 active players among the 30 teams. Besides 7 basic statistics, we collected 3 tracking statistics and 3 advanced statistics. We use tracking statistics to more accurately reflect players' movements on the court, and advanced statistics to more properly represent a player's effectiveness and contribution to the game. Together, these two types of data give us a better understanding of factors that are not visible on the scoreboard. | Provide a detailed description of the following dataset: NBA player performance prediction dataset
pm2.5 dataset | pm2.5 time series data | Provide a detailed description of the following dataset: pm2.5 dataset |
U2OS | The archive contains original images from U2OS cells stained with Hoechst 33342 as PNG files. It also contains images (as Photoshop and GIMP files) showing hand-segmentation of the Hoechst images into regions containing single nuclei. | Provide a detailed description of the following dataset: U2OS |
NIH3T3 | The archive contains original images from NIH3T3 cells stained with Hoechst 33342 as PNG files. It also contains images (as Photoshop and GIMP files) showing hand-segmentation of the Hoechst images into regions containing single nuclei. | Provide a detailed description of the following dataset: NIH3T3 |
PMC-OA | **PMC-OA** is a large-scale dataset that contains 1.65M image-text pairs. Figures and captions are sourced from PubMed Central: 2,478,267 available papers are covered and 12,211,907 figure-caption pairs are extracted. | Provide a detailed description of the following dataset: PMC-OA
EdAcc | **The Edinburgh International Accents of English Corpus (EdAcc)** is a new automatic speech recognition (ASR) dataset composed of 40 hours of English dyadic conversations between speakers with a diverse set of accents. EdAcc includes a wide range of first and second-language varieties of English and a linguistic background profile of each speaker. | Provide a detailed description of the following dataset: EdAcc |
CIRCLE | **CIRCLE** is a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes, paired with ego-centric information of the environment represented in various forms, such as RGBD videos. | Provide a detailed description of the following dataset: CIRCLE |
ConductorMotion100 | We construct a large-scale conducting motion dataset, named ConductorMotion100, by deploying pose estimation on conductor view videos of concert performance recordings collected from online video platforms. The construction of ConductorMotion100 removes the need for expensive motion-capture equipment and makes full use of massive online video resources. As a result, the scale of ConductorMotion100 has reached an unprecedented length of 100 hours. | Provide a detailed description of the following dataset: ConductorMotion100 |
UnAV-100 | Existing audio-visual event localization (AVE) handles manually trimmed videos with only a single instance in each of them. However, this setting is unrealistic as natural videos often contain numerous audio-visual events with different categories. To better adapt to real-life applications, we focus on the task of dense-localizing audio-visual events, which aims to jointly localize and recognize all audio-visual events occurring in an untrimmed video. To tackle this problem, we introduce the first Untrimmed Audio-Visual (UnAV-100) dataset, which contains 10K untrimmed videos with over 30K audio-visual events covering 100 event categories. Each video has 2.8 audio-visual events on average, and the events are usually related to each other and might co-occur as in real-life scenes. We believe our UnAV-100, with its realistic complexity, can promote the exploration on comprehensive audio-visual video understanding. | Provide a detailed description of the following dataset: UnAV-100 |
xCodeEval | xCodeEval is one of the largest **executable** multilingual multitask benchmarks consisting of 17 programming languages with execution-level parallelism. It features a total of seven tasks involving code understanding, generation, translation, and retrieval, and **it employs an execution-based evaluation** instead of traditional lexical approaches. It also provides a test-case-based multilingual code execution engine, [ExecEval](https://github.com/ntunlp/ExecEval) that supports all the programming languages in xCodeEval. | Provide a detailed description of the following dataset: xCodeEval |