Argoverse-HD
[Argoverse-HD](https://www.cs.cmu.edu/~mengtial/proj/streaming/) is a dataset built for streaming object detection, which encompasses real-time object detection, video object detection, tracking, and short-term forecasting. It contains the video data from [Argoverse 1.1](https://www.argoverse.org/av1.html) with our own MS COCO-style bounding box annotations with track IDs. The annotations are backward-compatible with COCO, so one can directly evaluate COCO pre-trained models on this dataset to estimate the efficiency or the cross-dataset generalization capability of the models. The dataset contains **high-quality and temporally-dense annotations for high-resolution videos** (1920 x 1200 @ 30 FPS). Overall, there are 70,000 image frames and 1.3 million bounding boxes. Argoverse-HD is the dataset used in the [Streaming Perception Challenge](https://eval.ai/web/challenges/challenge-page/800/overview), which includes two tracks:

- **Detection-only (real-time object detection)**. In this track, participants develop single-frame object detectors as they would for the COCO and LVIS challenges. The crucial distinction is that the evaluation scores latency through streaming accuracy.
- **Full-stack**. In this track, the method is unrestricted; however, tracking and forecasting will most likely be used to compensate for the latency of the detectors.

By default, all submissions measure their latency on a V100 GPU with the [official toolkit](https://github.com/karthiksharma98/sap-starterkit).
Provide a detailed description of the following dataset: Argoverse-HD
TimeHetNet
This meta-dataset is composed of previously published datasets and includes a script to read and sample small tasks of specified sizes and lengths. The datasets included here are from:

- PeekDB (https://github.com/RafaelDrumond/PeekDB)
- Informer datasets (https://github.com/zhouhaoyi/Informer2020)
- Monash (https://zenodo.org/communities/forecasting)
- UEA (http://www.timeseriesclassification.com/Downloads/Archives/Multivariate2018_arff.zip)
- CNC (https://www.kaggle.com/datasets/shasun/tool-wear-detection-in-cnc-mill/download)
- MINING (https://www.kaggle.com/datasets/edumagalhaes/quality-prediction-in-a-mining-process/download)
- Plant_Monitoring (https://www.kaggle.com/datasets/inIT-OWL/production-plant-data-for-condition-monitoring/download)

Licenses differ for each individual dataset.
Provide a detailed description of the following dataset: TimeHetNet
PeekDB
Dataset from "PEEK - An LSTM Recurrent Network for Motion Classification from Sparse Data".
Provide a detailed description of the following dataset: PeekDB
Monash
Time Series Forecasting Repository containing datasets of related time series for global forecasting.
Provide a detailed description of the following dataset: Monash
Councils in Action
Using Council Data Project infrastructures (https://councildataproject.org), we assemble longitudinal municipal council meeting transcript data. This initial release of the Councils in Action dataset includes over 350 meetings of the city councils of Seattle, Washington, and Portland, Oregon, and the county council of King County, Washington. See cdp-data (https://councildataproject.org/cdp-data) for more details on programmatic access.
Provide a detailed description of the following dataset: Councils in Action
MTic
Periodic tic sounds (T0 = 1 s) sampled at 16 kHz, each with a duration of nearly 10 s.
Provide a detailed description of the following dataset: MTic
STEW
This dataset consists of raw EEG data from 48 subjects who participated in a multitasking workload experiment utilizing the SIMKAP multitasking test. The subjects' brain activity at rest was also recorded before the test and is included as well. The Emotiv EPOC device, with a sampling frequency of 128 Hz and 14 channels, was used to obtain the data, with 2.5 minutes of EEG recording for each case. Subjects were also asked to rate their perceived mental workload after each stage on a scale of 1 to 9; the ratings are provided in a separate file.
Provide a detailed description of the following dataset: STEW
Age and Gender
EEG signals were recorded from 60 users aged between 6 and 55 years: 25 female and 35 male. In general, the participants were either school children or drawn from a socioeconomic cross-section of the population, with no medical history. The EEG recordings were acquired from all 14 electrodes at a sampling rate of 128 Hz. During recording, the participants were asked to sit comfortably in a chair, relaxed and with a clear mind.
Provide a detailed description of the following dataset: Age and Gender
Replication Data for: "Empirical Analysis of EIP-1559: Transaction Fees, Waiting Time, and Consensus Security"
Transaction fee mechanism (TFM) is an essential component of a blockchain protocol. However, a systematic evaluation of the real-world impact of TFMs is still absent. Using rich data from the Ethereum blockchain, mempool, and exchanges, we study the effect of EIP-1559, one of the first deployed TFMs that depart from the traditional first-price auction paradigm. We conduct a rigorous and comprehensive empirical study to examine its causal effect on blockchain transaction fee dynamics, transaction waiting time, and consensus security. Our results show that EIP-1559 improves the user experience by making fee estimation easier, mitigating the intra-block difference in gas price paid, and reducing users' waiting times. However, EIP-1559 has only a small effect on gas fee levels and consensus security. In addition, we find that when Ether's price is more volatile, waiting times are significantly higher. We also verify that a larger block size increases the presence of siblings. These findings suggest new directions for improving TFMs. Paper on arXiv: https://arxiv.org/abs/2201.05574 Code and scripts on GitHub: https://github.com/SciEcon/EIP1559
Provide a detailed description of the following dataset: Replication Data for: "Empirical Analysis of EIP-1559: Transaction Fees, Waiting Time, and Consensus Security"
Replication Data for: "Deciphering Bitcoin Blockchain Data by Cohort Analysis" Version 3.1
Bitcoin is a peer-to-peer electronic payment system that has gained popularity rapidly in recent years. Usually, we need to query the complete history of Bitcoin blockchain data to acquire variables of economic meaning. This is increasingly difficult now, with over 1.6 billion historical transactions on the Bitcoin blockchain. It is thus important to query Bitcoin transaction data in a way that is more efficient and provides economic insights. We apply cohort analysis, interpreting Bitcoin blockchain data using methods developed for population data in social science. Specifically, we query and process the Bitcoin transaction input and output data within each daily cohort. With this, we then create datasets and visualizations for some key indicators of Bitcoin transactions, including the daily lifespan distributions of accumulated spent transaction output (STXO) and the daily age distributions of accumulated unspent transaction output (UTXO). We provide a computationally feasible approach to characterize Bitcoin transactions, which paves the way for future studies of economic behaviors in the emerging market of Bitcoin. GitHub: https://github.com/SciEcon/UTXO arXiv: https://arxiv.org/abs/2103.0017 Nature Research: https://www.nature.com/articles/s41597-022-01254-0 Nature PDF: https://rdcu.be/cKRkg
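To make the daily-cohort idea concrete, here is a toy sketch of computing STXO lifespans per daily spend cohort and UTXO ages at a snapshot date. The column schema (`created`, `spent`) and the in-memory DataFrame are illustrative assumptions, not the released dataset format:

```python
import pandas as pd

# Toy table: one row per transaction output (schema is hypothetical).
outputs = pd.DataFrame({
    "created": pd.to_datetime(["2021-01-01", "2021-01-01", "2021-01-02"]),
    "spent":   pd.to_datetime(["2021-01-03", pd.NaT, "2021-01-03"]),
})

# STXO lifespan: for each daily cohort of spends, the age distribution
# of the outputs consumed that day.
stxo = outputs.dropna(subset=["spent"]).copy()
stxo["lifespan_days"] = (stxo["spent"] - stxo["created"]).dt.days
print(stxo.groupby(stxo["spent"].dt.date)["lifespan_days"].describe())

# UTXO age: outputs still unspent as of a snapshot date.
snapshot = pd.Timestamp("2021-01-04")
utxo = outputs[outputs["spent"].isna()].copy()
utxo["age_days"] = (snapshot - utxo["created"]).dt.days
print(utxo["age_days"].value_counts())
```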
Provide a detailed description of the following dataset: Replication Data for: "Deciphering Bitcoin Blockchain Data by Cohort Analysis" Version 3.1
ReferIt3D
ReferIt3D provides two large-scale and complementary visio-linguistic datasets: i) Sr3D, which contains 83.5K template-based utterances leveraging spatial relations among fine-grained object classes to localize a referred object in a scene, and ii) Nr3D which contains 41.5K natural, free-form, utterances collected by deploying a 2-player object reference game in 3D scenes. This dataset can be used for 3D visual grounding and 3D dense captioning tasks.
Provide a detailed description of the following dataset: ReferIt3D
Southern California Seismic Network Data
These files are supplementary material for “Generalized Seismic Phase Detection with Deep Learning” by Ross et al. (2018), BSSA (doi.org/10.1785/0120180080). The models were trained using keras and TensorFlow, and can be used with these libraries. The training dataset contains 4.5 million seismograms evenly split between P-waves, S-waves, and pre-event noise classes. We encourage the use of this hdf5 dataset for training deep learning models, and hope that it and the model architecture in the paper can serve as a benchmark for future studies. For additional information please contact Zachary Ross (zross@caltech.edu).
Provide a detailed description of the following dataset: Southern California Seismic Network Data
Wireless-Intelligence
Wireless-Intelligence is a database website for AI-based wireless communication research, in which each dataset consists of hundreds of thousands of channel samples in different forms. The data is available for free to researchers for non-commercial use.

## What is Wireless-Intelligence?

Wireless-Intelligence is a free and open database website for AI-based wireless communication researchers. In Wireless-Intelligence, we aim to provide high-quality datasets for multiple research areas in AI-based wireless communications, including channel state information (CSI) feedback, channel estimation, positioning, and more datasets in future updates. For each dataset, hundreds of thousands of labeled samples are provided for training and testing in different forms. We hope that Wireless-Intelligence can help researchers in the future.

## Why Wireless-Intelligence?

AI-based wireless communications have recently attracted a lot of attention from both academia and industry due to their great potential for improving the performance of wireless communication systems. However, the lack of abundant high-quality wireless datasets restricts the development of AI-based wireless communications. That is why we established the Wireless-Intelligence website. Here, we provide plenty of labeled wireless communication datasets for different research areas for free, so that researchers can conduct more effective studies and produce comparable results.
Provide a detailed description of the following dataset: Wireless-Intelligence
SUES-200
A cross-view image dataset across drone and satellite views, covering multiple heights and multiple scenes.
Provide a detailed description of the following dataset: SUES-200
USC-GRAD-STDdb
USC-GRAD-STDdb comprises 115 video segments containing more than 25,000 annotated frames at HD 720p resolution (≈1280x720), with small objects of interest ranging from 16 (≈4x4) to 256 (≈16x16) pixels in area. Video length ranges from 150 to 500 frames. The size of every object is determined through its bounding box, so good annotation is of utmost importance for reliable performance metrics; naturally, the smaller the object, the harder the annotation. The annotation has been carried out with the ViTBAT tool, adjusting the boxes as closely as possible to the objects of interest in each video frame. In total, more than 56,000 ground-truth labels have been generated.
Provide a detailed description of the following dataset: USC-GRAD-STDdb
Synthetic Object Preference Adaptation Data
This dataset involves a 2D or 3D agent moving from a start to a goal pose while interacting with nearby objects. These objects can influence the position of the agent via attraction or repulsion forces, as well as influence its orientation via attraction to an object's orientation. This dataset can be used to pre-train general policy behavior, which can later be fine-tuned quickly for a person's specific preferences. Example use cases include:

- self-driving cars maintaining distance from other cars
- robot pick-and-place tasks with intermediate subtasks (e.g., scanning factory items before dropping them off)

Overall, pre-training initial policy behavior to be fine-tuned later is a powerful paradigm and is arguably essential for robots to handle changing environments and user preferences. This contrasts with the paradigm of training on massive amounts of data and remaining fixed at test time, hoping that generalization alone will help the agent handle new scenarios.
Provide a detailed description of the following dataset: Synthetic Object Preference Adaptation Data
SurveyBank
SurveyBank includes 9,321 high-quality survey papers in the domain of computer science.
Provide a detailed description of the following dataset: SurveyBank
Brightkite
Brightkite was once a location-based social networking service provider where users shared their locations by checking in. The friendship network was collected using their public API and consists of 58,228 nodes and 214,078 edges. The network was originally directed, but the collectors constructed an undirected edge whenever a friendship existed in both directions. The collectors also gathered a total of 4,491,143 check-ins of these users over the period Apr. 2008 - Oct. 2010.
Provide a detailed description of the following dataset: Brightkite
Assembly101
Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants work without fixed instructions, and the sequences feature rich and natural variations in action ordering, mistakes, and corrections. Assembly101 is the first multi-view action dataset, with simultaneous static (8) and egocentric (4) recordings. Sequences are annotated with more than 100K coarse and 1M fine-grained action segments, and 18M 3D hand poses. We benchmark on three action understanding tasks: recognition, anticipation and temporal segmentation. Additionally, we propose a novel task of detecting mistakes. The unique recording format and rich set of annotations allow us to investigate generalization to new toys, cross-view transfer, long-tailed distributions, and pose vs. appearance. We envision that Assembly101 will serve as a new challenge to investigate various activity understanding problems. Image Source: [https://assembly-101.github.io/](https://assembly-101.github.io/)
Provide a detailed description of the following dataset: Assembly101
GBCU
GBCU is the first public dataset for Gallbladder Cancer identification from Ultrasound images. GBCU contains a total of 1255 (432 normal, 558 benign, and 265 malignant) annotated abdominal Ultrasound images collected from 218 patients. Of the 218 patients, 71, 100, and 47 were from the normal, benign, and malignant classes, respectively. The sizes of the training and testing sets are 1133 and 122, respectively. To ensure generalization to unseen patients, all images of any particular patient were placed either in the train or the test split. We acquired data samples from patients referred to PGIMER, Chandigarh (a referral hospital in Northern India) for abdominal ultrasound examinations of suspected Gallbladder pathologies. The study was approved by the Ethics Committee of PGIMER, Chandigarh. We obtained informed written consent from the patients at the time of recruitment, and protect their privacy by fully anonymizing the data. Grayscale B-mode static images, including both sagittal and axial sections, were recorded by radiologists for each patient using a Logiq S8 machine. Each image is labeled as one of the three classes - normal, benign, or malignant. The ground-truth labels were biopsy-proven to ensure correctness. Additionally, bounding-box annotations for abnormal pathologies (e.g. stone, benign mural thickening, or malignancy) and the GB are provided. The GBCU dataset is suitable for both image classification and object detection tasks. Apart from gallbladder cancer, the dataset can also be used for the detection of several other pathologies.
Provide a detailed description of the following dataset: GBCU
SIDD-Image
This is the first image-based network intrusion detection dataset. This large-scale dataset includes network-traffic protocol-communication images from 15 observation locations in different countries in Asia. The dataset is used to identify two different types of anomalies from benign network traffic. Each 48 × 48 image encodes multi-protocol communications within 128 seconds. The SIDD dataset can be applied to a broad range of tasks, such as machine learning-based network intrusion detection, non-IID federated learning, and so forth.
Provide a detailed description of the following dataset: SIDD-Image
VideoCC3M
We propose a new, scalable video-mining pipeline which transfers captioning supervision from image datasets to video and audio. We use this pipeline to mine paired video and captions, using the [Conceptual Captions 3M](https://paperswithcode.com/dataset/conceptual-captions) image dataset as a seed dataset. Our resulting dataset, VideoCC3M, consists of millions of weakly paired clips with text captions and will be released publicly. The core idea of our mining pipeline is to start with an image captioning dataset and, for each image-caption pair, find frames in videos similar to the image. We then extract short video clips around the matching frames and transfer the caption to those clips. See the paper for the steps in detail. We ran our mining pipeline with the image captioning dataset [Conceptual Captions 3M](https://paperswithcode.com/dataset/conceptual-captions) (CC3M). We only use the images in the dataset which are still publicly available online, which gives us 1.25M image-caption pairs. We apply our pipeline to online videos. We filter videos for view count > 1000, length < 20 minutes, uploaded within the last 10 years but at least 90 days ago, and filter using content-appropriateness signals to get 150M videos. This gives us 10.3M clip-text pairs with 6.3M video clips (17.5K hours of video in total) and 970K unique captions. We call the resulting dataset VideoCC3M.
Provide a detailed description of the following dataset: VideoCC3M
Cyclone Data
Archive of global tropical cyclone tracks from 1980 to May 2019.
Provide a detailed description of the following dataset: Cyclone Data
Ocean Drifters
From Schaub, Michael T., et al. "Random walks on simplicial complexes and the normalized Hodge 1-Laplacian." SIAM Review 62.2 (2020): 353-391. This dataset comes from the Global Ocean Drifter Program, available at the AOML/NOAA Drifter Data Assembly. While the entire dataset spans several decades of measurements, Schaub et al. focused on data from Jan 2011 to June 2018 and limited themselves to buoys that were active for at least 3 months within that time period. They built trajectories by considering the location of every buoy every 12 hours. As buoys may fail to record a position, there are trajectories with missing data; in these cases, they split the trajectories into multiple contiguous trajectories. For the analysis, they examined trajectories around Madagascar with latitude y_{lat} ∈ [−30, −10] and longitude x_{long} ∈ [39, 55], resulting in 400 total trajectories. To construct the simplicial complex, they first transformed the data into Euclidean coordinates via an area-preserving (Lambert) projection. They then discretized Euclidean space using a hexagonal grid, with the width of each hexagon equal to 1.66° (latitude). Each hexagon corresponds to a node, and an edge is added between two such nodes if there is a nonzero net flow between adjacent hexagons. They considered all triangles (3-cliques) in this graph to be faces of the simplicial complex. The Hodge 1-Laplacian L_1 of the resulting complex has a two-dimensional harmonic space; each dimension of this space corresponds to an "obstacle" of the flow. Finally, each trajectory was discretized by rounding its positional coordinates to the nearest hexagon and considering the resulting sequence of edges traversed in the complex.
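To make the harmonic-space construction concrete, here is a minimal sketch (not the authors' code) on a toy complex with one hollow triangle: it builds the boundary matrices B_1 and B_2 and computes the kernel dimension of the unnormalized Hodge 1-Laplacian L_1 = B_1^T B_1 + B_2 B_2^T. The paper itself uses a normalized variant; around Madagascar the harmonic dimension is 2, here it is 1.

```python
import numpy as np

# Toy complex: nodes 0-3, hollow triangle (0,1,2), filled triangle (1,2,3).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(1, 2, 3)]  # only one 2-simplex; (0,1,2) stays hollow

# B1: node-to-edge incidence matrix.
B1 = np.zeros((4, len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j], B1[v, j] = -1.0, 1.0

# B2: edge-to-triangle incidence matrix with consistent orientation,
# boundary of [a,b,c] = [b,c] - [a,c] + [a,b].
B2 = np.zeros((len(edges), len(triangles)))
edge_idx = {e: j for j, e in enumerate(edges)}
for k, (a, b, c) in enumerate(triangles):
    B2[edge_idx[(a, b)], k] = 1.0
    B2[edge_idx[(b, c)], k] = 1.0
    B2[edge_idx[(a, c)], k] = -1.0

# Unnormalized Hodge 1-Laplacian.
L1 = B1.T @ B1 + B2 @ B2.T

# Harmonic space = kernel of L1; its dimension counts flow "obstacles".
eigvals, _ = np.linalg.eigh(L1)
print(int(np.sum(np.isclose(eigvals, 0.0))))  # -> 1 (the hollow triangle)
```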
Provide a detailed description of the following dataset: Ocean Drifters
BCI
The evaluation of human epidermal growth factor receptor 2 (HER2) expression is essential to formulate a precise treatment for breast cancer. The routine evaluation of HER2 is conducted with immunohistochemical techniques (IHC), which are very expensive. Therefore, we propose a breast cancer immunohistochemical (BCI) benchmark attempting to synthesize IHC data directly from the paired hematoxylin and eosin (HE) stained images. The dataset contains 4870 registered image pairs, covering a variety of HER2 expression levels (0, 1+, 2+, 3+).
Provide a detailed description of the following dataset: BCI
SDF Shader Dataset
This dataset contains 63 signed distance function shaders collected mostly from Shadertoy. Along with the shader source files, the dataset also provides point clouds of signed distance function samples in different distributions, available as a standalone zip file of `.npz` files: https://drive.google.com/file/d/1StTkilQSk83lj60VaqcMHh3GT73CSIKT/view
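A minimal sketch of inspecting one of the `.npz` point-cloud files; the filename is a placeholder and the stored array names are not documented here, so the sketch enumerates `npz.files` rather than assuming specific keys:

```python
import numpy as np

# "example_shader.npz" is a placeholder for a file from the zip above.
npz = np.load("example_shader.npz")
print(npz.files)               # names of the arrays actually stored
first = npz[npz.files[0]]      # e.g. an (N, 3) array of sample positions
print(first.shape, first.dtype)
```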
Provide a detailed description of the following dataset: SDF Shader Dataset
TEMPO
TEMPOral reasoning in video and language (TEMPO) is a dataset that consists of two parts: a dataset with real videos and template sentences (TEMPO - Template Language) which allows for controlled studies on temporal language, and a human language dataset which consists of temporal sentences annotated by humans (TEMPO - Human Language).
Provide a detailed description of the following dataset: TEMPO
rc_49
Includes several sets of synthetic stereo images labelled with grasp rectangles representing parallel-jaw grasps (Cornell-like format). The set was introduced in the ICAR paper "Automatic generation of realistic training data for learning parallel-jaw grasping from synthetic stereo images"; please refer to it if you are using the data.
Provide a detailed description of the following dataset: rc_49
PLOD-unfiltered
PLOD: An Abbreviation Detection Dataset. This is the unfiltered version of the PLOD dataset, published at LREC 2022. The dataset can help build sequence labelling models for the task of abbreviation detection.
Provide a detailed description of the following dataset: PLOD-unfiltered
KAIST VIO Dataset
This is a dataset for testing the robustness of various VO/VIO methods, acquired on a real UAV.
Provide a detailed description of the following dataset: KAIST VIO Dataset
GRIT
The General Robust Image Task (GRIT) Benchmark is an evaluation-only benchmark for evaluating the performance and robustness of vision systems across multiple image prediction tasks, concepts, and data sources. GRIT hopes to encourage our research community to pursue the following research directions:

1. **General purpose vision models** - GRIT facilitates the evaluation of unified and general-purpose vision models that demonstrate a wide range of skills across a diverse set of concepts.
2. **Robust specialized models** - GRIT simplifies and unifies quantification of misinformation, calibration, and generalization under distribution shifts due to novel concepts, novel data sources, or image distortions for 7 standard vision and vision-language tasks.
3. **Efficient learning** - GRIT includes a `restricted` and an `unrestricted` track. The `restricted` track constrains the allowed training data to a selected but rich set of data sources that allows more scientific and meaningful comparison between models. This is meant to encourage resource-constrained researchers to participate in the GRIT challenge and to spur interest in efficient learning methods, as opposed to the dominant paradigm of training larger models on ever-increasing amounts of training data. The `unrestricted` track allows much more flexibility in training data selection to test the capability of vision models trained with massive data and compute.
Provide a detailed description of the following dataset: GRIT
DoPose
DoPose (Dortmund 6D Pose dataset) is a dataset of highly cluttered and closely stacked objects. The dataset is saved in the BOP format. The dataset includes RGB images, depth images, 6D poses of objects, segmentation masks (full and visible), COCO JSON annotations, camera transformations, and 3D models of all objects. The dataset contains 2 different types of scenes (table and bin), and each scene is captured from different view angles. The bin data contains 183 scenes with 2150 image views; of those 183 scenes, 35 contain 2 views, 20 contain 3 views, and 128 contain 16 views. The table data contains 118 scenes with 1175 image views; of those 118 scenes, 20 contain 3 views, 50 contain 6 views, and 48 contain 17 views. In total, the data contains 301 scenes and 3325 view images. Most of the scenes contain mixed objects. The dataset contains 19 objects in total.
Provide a detailed description of the following dataset: DoPose
Abstractive Text Summarization from Il Post
IlPost dataset, containing news articles taken from IlPost. There are two features:

* source: Input news article.
* target: Summary of the article.
Provide a detailed description of the following dataset: Abstractive Text Summarization from Il Post
Abstractive Text Summarization from Fanpage
Fanpage dataset, containing news articles taken from Fanpage. There are two features:

* source: Input news article.
* target: Summary of the article.
Provide a detailed description of the following dataset: Abstractive Text Summarization from Fanpage
MLSum-it
The MLSum-it dataset is the translated version (via Helsinki-NLP/opus-mt-es-it) of the Spanish portion of MLSum, containing news articles taken from BBC/mundo. There are two features:

* source: Input news article.
* target: Summary of the article.
Provide a detailed description of the following dataset: MLSum-it
Electromagnetic Calorimeter Shower Images
Each HDF5 file has the following structure:

`energy Dataset {100000, 1}`
`layer_0 Dataset {100000, 3, 96}`
`layer_1 Dataset {100000, 12, 12}`
`layer_2 Dataset {100000, 12, 6}`
`overflow Dataset {100000, 3}`

In practice, each file is a collection of 100,000 calorimeter showers corresponding to the particle specified in the file name (eplus = positrons, gamma = photons, piplus = charged pions). The calorimeter we built is segmented longitudinally into three layers with different depths and granularities. In units of mm, the three layers have the following (eta, phi, z) dimensions: Layer 0: (5, 160, 90) | Layer 1: (40, 40, 347) | Layer 2: (80, 40, 43). In the HDF5 files, the `energy` entry specifies the true energy of the incoming particle in units of GeV. `layer_0`, `layer_1`, and `layer_2` represent the energy deposited in each layer of the calorimeter in an image data format. Given the segmentation of each calorimeter layer, these images have dimensions 3x96 (in layer 0), 12x12 (in layer 1), and 12x6 (in layer 2). The `overflow` entry contains the amount of energy that was deposited outside of the calorimeter section we are considering.
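A minimal sketch of reading one of these files with `h5py`, following the structure above; the filename is an assumption, so substitute whichever eplus/gamma/piplus file you downloaded:

```python
import h5py

# "gamma.hdf5" is a placeholder for the downloaded photon-shower file.
with h5py.File("gamma.hdf5", "r") as f:
    energy = f["energy"][:]      # (100000, 1) true particle energy in GeV
    layer0 = f["layer_0"][:]     # (100000, 3, 96) energy deposits, layer 0
    layer1 = f["layer_1"][:]     # (100000, 12, 12) layer 1
    layer2 = f["layer_2"][:]     # (100000, 12, 6)  layer 2
    overflow = f["overflow"][:]  # (100000, 3) energy outside this section

# Total deposited energy of the first shower, summed over the three layers.
total = layer0[0].sum() + layer1[0].sum() + layer2[0].sum()
print(energy[0, 0], total)
```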
Provide a detailed description of the following dataset: Electromagnetic Calorimeter Shower Images
RETOUCH
The goal of the challenge is to compare automated algorithms that are able to detect and segment various types of fluids on a common dataset of optical coherence tomography (OCT) volumes representing different retinal diseases, acquired with devices from different manufacturers. We made available a dataset of OCT volumes containing a wide variety of retinal fluid lesions with accompanying reference annotations. We invite the medical imaging community to participate by developing and testing existing and novel automated retinal OCT segmentation methods.
Provide a detailed description of the following dataset: RETOUCH
NExT-QA
**NExT-QA** is a VideoQA benchmark targeting the explanation of video contents. It challenges QA models to reason about the causal and temporal actions and understand the rich object interactions in daily activities. It supports both multi-choice and open-ended QA tasks. The videos are untrimmed and the questions usually invoke local video contents for answers.
Provide a detailed description of the following dataset: NExT-QA
MMChat
- A large-scale Chinese multi-modal dialogue corpus (120.84K dialogues and 198.82K images).
- MMChat contains image-grounded dialogues collected from real conversations on social media.
- We manually annotate 100K dialogues from MMChat with the dialogue quality and whether the dialogues are related to the given image.
- We provide the rule-filtered raw dialogues that were used to create MMChat (Rule Filtered Raw MMChat). It contains 4.257M dialogue sessions and 4.874M images.
- We provide a version of MMChat that is filtered based on LCCC (LCCC Filtered MMChat). This version contains much cleaner dialogues (492.6K dialogue sessions and 1.066M images).
Provide a detailed description of the following dataset: MMChat
MassiveText
**MassiveText** is a collection of large English-language text datasets from multiple sources: web pages, books, news articles, and code. The data pipeline includes text quality filtering, removal of repetitious text, deduplication of similar documents, and removal of documents with significant test-set overlap. MassiveText contains 2.35 billion documents or about 10.5 TB of text. Usage: [Gopher](https://paperswithcode.com/paper/scaling-language-models-methods-analysis-1) is trained on 300B tokens (12.8% of the tokens in the dataset), so the authors sub-sample from MassiveText with sampling proportions specified per subset (books, news, etc.). These sampling proportions are tuned to maximize downstream performance. The largest sampling subset is the curated web-text corpus MassiveWeb, which is found to improve downstream performance relative to existing web-text datasets such as C4 (Raffel et al., 2020). Find Datasheets in the [Gopher paper](https://paperswithcode.com/paper/scaling-language-models-methods-analysis-1).
Provide a detailed description of the following dataset: MassiveText
Avicenna: Deductive Commonsense Reasoning
A syllogism is a common form of deductive reasoning that requires precisely two premises and one conclusion. The Avicenna corpus is a benchmark for syllogistic NLI and syllogistic NLG: - syllogistic NLI: Identifying the possibility of inferring between pairs of inputted sentences. - syllogistic NLG: Generating a conclusion sentence for two sentences with a syllogistic relation.
Provide a detailed description of the following dataset: Avicenna: Deductive Commonsense Reasoning
PLAD
PLAD is a dataset in which sparse depth is provided by line-based visual SLAM, used to verify StructMDC.
Provide a detailed description of the following dataset: PLAD
CER Smart Metering Project - Electricity Customer Behaviour Trial
The CER initiated the Smart Metering Project in 2007 with the purpose of undertaking trials to assess the performance of Smart Meters, their impact on consumers’ energy consumption and the economic case for a wider national rollout. It is a collaborative energy industry-wide project managed by the CER and actively involving energy industry participants including the Sustainable Energy Authority of Ireland (SEAI), the Department of Communications, Energy and Natural Resources (DCENR), ESB Networks, Bord Gáis Networks, Electric Ireland, Bord Gáis Energy and other energy suppliers.
Provide a detailed description of the following dataset: CER Smart Metering Project - Electricity Customer Behaviour Trial
Unitail
The United Retail Datasets (Unitail) is a large-scale benchmark of basic visual tasks on products that challenges algorithms for detecting, reading, and matching. It offers Unitail-Det, with 1.8M quadrilateral-shaped instances annotated, and Unitail-OCR, containing 1454 product categories, 30k text regions, and 21k transcriptions, to enable robust reading on products and motivate enhanced product matching.
Provide a detailed description of the following dataset: Unitail
HFFD
We build a hybrid fake face (HFF) dataset, which contains eight types of face images. For real face images, three types of face images are randomly selected from three open datasets: low-resolution face images from CelebA, high-resolution face images from CelebA-HQ, and face video frames from FaceForensics. Thus, real face images under internet scenarios are simulated as realistically as possible. Then, some of the most representative face manipulation techniques, including PGGAN and StyleGAN for identity manipulation, Face2Face and Glow for face expression manipulation, and StarGAN for face attribute transfer, are selected to produce fake face images. The HFF dataset is a large fake face dataset containing more than 155k face images.
Provide a detailed description of the following dataset: HFFD
MIMI dataset
Nowadays, new branches of research are proposing the use of non-traditional data sources for the study of migration trends, in order to find an original methodology to answer open questions about cross-border human mobility. The Multi-aspect Integrated Migration Indicators (MIMI) dataset is a new dataset to be exploited in migration studies as a concrete example of this new approach. It combines official data about bidirectional human migration (traditional flow and stock data) with multidisciplinary variables and original indicators, including economic, demographic, cultural and geographic indicators, together with the Facebook Social Connectedness Index (SCI). It results from the process of gathering, embedding and integrating traditional and novel variables, yielding a new multidisciplinary dataset that could significantly contribute to nowcasting/forecasting bilateral migration trends and migration drivers. Thanks to this variety of knowledge, experts from several research fields (demographers, sociologists, economists) could exploit MIMI to investigate the trends in the various indicators and the relationships among them. Moreover, it would be possible to develop complex models based on these data, able to assess human migration by evaluating related interdisciplinary drivers, as well as models able to nowcast and predict traditional migration indicators from original variables, such as the strength of social connectivity. Here, the SCI could play an important role: it measures the relative probability that two individuals across two countries are friends with each other on Facebook, and therefore it could be employed as a proxy of social connections across borders, to be studied as a possible driver of migration. All in all, the motivation for building and releasing the MIMI dataset lies in the need for new perspectives, methods and analyses that can no longer avoid taking into account a variety of new factors. The heterogeneous and multidimensional sets of data in MIMI offer an all-encompassing overview of the characteristics of human migration, enabling a better understanding and an original exploration of the relationship between migration and non-traditional data sources.
Provide a detailed description of the following dataset: MIMI dataset
HiNER-original
This release provides a significantly sized, standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 11 tags.
Provide a detailed description of the following dataset: HiNER-original
HiNER-collapsed
This release provides a significantly sized, standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 3 collapsed tags (PER, LOC, ORG).
Provide a detailed description of the following dataset: HiNER-collapsed
PLOD-filtered
PLOD: An Abbreviation Detection Dataset. This is the filtered version of the PLOD dataset, published at LREC 2022. The dataset can help build sequence labelling models for the task of abbreviation detection.
Provide a detailed description of the following dataset: PLOD-filtered
Fig-QA
**Fig-QA** consists of 10256 examples of human-written creative metaphors that are paired as a Winograd schema. It can be used to evaluate the commonsense reasoning of models. The metaphors themselves can also be used as training data for other tasks, such as metaphor detection or generation. Image source: [https://github.com/nightingal3/Fig-QA](https://github.com/nightingal3/Fig-QA)
Provide a detailed description of the following dataset: Fig-QA
Czech Subjectivity Dataset
A Czech subjectivity dataset of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper for a description: https://arxiv.org/abs/2204.13915
Provide a detailed description of the following dataset: Czech Subjectivity Dataset
Twitter-COMMs
Detecting out-of-context media, such as "mis-captioned" images on Twitter, is a relevant problem, especially in domains of high public significance. Twitter-COMMs is a large-scale multimodal dataset with 884k tweets relevant to the topics of Climate Change, COVID-19, and Military Vehicles. This dataset can be used to develop methods to detect misinformation on social media platforms related to these three topics.
Provide a detailed description of the following dataset: Twitter-COMMs
https://osf.io/73c4q/
Briganti et al. 2018
Provide a detailed description of the following dataset: https://osf.io/73c4q/
https://osf.io/mj5wa/
Armour et al. 2017
Provide a detailed description of the following dataset: https://osf.io/mj5wa/
Streetscore
Paper abstract: Social science literature has shown a strong connection between the visual appearance of a city's neighborhoods and the behavior and health of its citizens. Yet, this research is limited by the lack of methods that can be used to quantify the appearance of streetscapes across cities or at high enough spatial resolutions. In this paper, we describe 'Streetscore', a scene understanding algorithm that predicts the perceived safety of a streetscape, using training data from an online survey with contributions from more than 7000 participants. We first study the predictive power of commonly used image features using support vector regression, finding that Geometric Texton and Color Histograms along with GIST are the best performers when it comes to predicting the perceived safety of a streetscape. Using Streetscore, we create high resolution maps of perceived safety for 21 cities in the Northeast and Midwest of the United States at a resolution of 200 images/square mile, scoring ∼1 million images from Google Streetview. These datasets should be useful for urban planners, economists and social scientists looking to explain the social and economic consequences of urban perception.
Provide a detailed description of the following dataset: Streetscore
HowMany-QA
HowMany-QA is an object counting dataset. It is taken from the counting-specific union of VQA 2.0 (Goyal et al., 2017) and Visual Genome QA (Krishna et al., 2016).
Provide a detailed description of the following dataset: HowMany-QA
TorWIC
TorWIC is the dataset discussed in POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes. The purpose of this dataset is to evaluate map-maintenance capabilities in a warehouse environment undergoing incremental changes. The dataset was collected in a Clearpath Robotics facility.
Provide a detailed description of the following dataset: TorWIC
Identity Access Management dataset
We release 280 synthetic IAM graphs generated using IAM graphs of commercial companies. Specifically, we vary the number of nodes but keep graph density as is, i.e. in the range of 0.259 ± 0.198 (avg ± std). To generate a synthetic graph, we first sample the numbers of users and datastores from uniform distributions over the intervals [10, 150] and [50, 300], respectively, which cover the variation of those parameters across real graphs. After fixing node counts, we sample the actual nodes with replacement from a real-world graph chosen at random. Then we add Gaussian N(0, 0.01) noise to the node embeddings and renormalize them. To match the graph density with the density of the underlying baseline, we sample edges from a multinomial distribution, where each component is proportional to the cosine distance between user and datastore embeddings. We also enforce the invariant that dynamic edges are always a subset of all permission edges. A synthetic graph generated in this way is an "upsampled" version of an underlying real-world graph.
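A minimal sketch (not the released generator) of the procedure described above. The stand-in "real" embeddings, the interpretation of N(0, 0.01) as variance (std 0.1), the bipartite density formula, and the omission of the dynamic-edge invariant are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for node embeddings taken from a real IAM graph (assumption).
real_users = rng.standard_normal((120, 16))
real_stores = rng.standard_normal((250, 16))

# Sample node counts from the stated intervals.
n_users = rng.integers(10, 151)
n_stores = rng.integers(50, 301)

def resample(pool, n):
    """Sample nodes with replacement, add N(0, 0.01) noise, renormalize."""
    x = pool[rng.integers(0, len(pool), n)]
    x = x + rng.normal(0.0, 0.1, x.shape)  # std 0.1 -> variance 0.01
    return x / np.linalg.norm(x, axis=1, keepdims=True)

users, stores = resample(real_users, n_users), resample(real_stores, n_stores)

# Edge probabilities proportional to cosine distance; sample enough edges
# from the multinomial to roughly match the target density (here 0.259,
# treating density as edges / (users * stores) -- an assumption).
cos_dist = 1.0 - users @ stores.T
p = (cos_dist / cos_dist.sum()).ravel()
n_edges = int(0.259 * n_users * n_stores)
counts = rng.multinomial(n_edges, p).reshape(n_users, n_stores)
adj = counts > 0  # repeated draws on one cell collapse to a single edge
print(adj.sum(), "permission edges")
```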
Provide a detailed description of the following dataset: Identity Access Management dataset
GMD-12
A dataset for medical consultation dialogues. See our related paper for more details: https://arxiv.org/pdf/2204.13953.pdf
Provide a detailed description of the following dataset: GMD-12
r/transprogrammer survey results
Questions regarding computer science education for members of the r/transprogrammer subreddit. Used for the paper "Why The Trans Programmer?" by Skye Kychenthal.
Provide a detailed description of the following dataset: r/transprogrammer survey results
ErAConD
ErAConD is a novel grammatical error correction (GEC) dataset consisting of parallel original and corrected utterances drawn from open-domain chatbot conversations. We collected 186 dialogs containing 1735 user utterance turns of open-domain dialog data by deploying BlenderBot on Amazon Mechanical Turk (AMT) via LEGOEval. This dataset is, to our knowledge, the first GEC dataset targeted to a human-machine conversational setting.
Provide a detailed description of the following dataset: ErAConD
NLU Evaluation Corpora
This project is a collection of three corpora which can be used for evaluating chatbots or other conversational interfaces. Two of the corpora were extracted from StackExchange, one from a Telegram chatbot.
Provide a detailed description of the following dataset: NLU Evaluation Corpora
OntoRock
OntoRock is a benchmark for evaluating the robustness of existing NER models via a systematic evaluation protocol.
Provide a detailed description of the following dataset: OntoRock
UAGD
The source images of UAGD are manually and very carefully selected from the APPA-REAL, UTKFace and AgeDB datasets, meaning only face images that have large poses, contain noise pixels, bear various expressions, or appear under different illuminations were chosen. We also double-cleaned the data via a crowdsourcing platform, removing images with wrong or uncertain labels. UAGD has almost the same number of female and male images at each age, about 75 female and 75 male, for a total of 150 faces.
Provide a detailed description of the following dataset: UAGD
VSR
The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False).
Provide a detailed description of the following dataset: VSR
Multivariate-Mobility-Paris
The original dataset was provided by Orange telecom in France and contains anonymized and aggregated human mobility data. The Multivariate-Mobility-Paris dataset comprises information from 2020-08-24 to 2020-11-04 (72 days during the COVID-19 pandemic), with a time granularity of 30 minutes and a spatial granularity of 6 coarse regions in Paris, France. In other words, it is a multivariate time series dataset. This dataset can be used for several time-series tasks, such as univariate/multivariate forecasting/classification with classic, machine learning, and privacy-preserving machine learning techniques.
Provide a detailed description of the following dataset: Multivariate-Mobility-Paris
DrugEHRQA
Contains over 70,000 question-answer pairs from both structured tables and unstructured notes from a publicly available Electronic Health Record (EHR).
Provide a detailed description of the following dataset: DrugEHRQA
WikiMulti
**WikiMulti** is a dataset for cross-lingual summarization based on Wikipedia articles in 15 languages.
Provide a detailed description of the following dataset: WikiMulti
SYMON
Contains 5,193 video summaries of popular movies and TV series. SyMoN captures naturalistic storytelling videos made by human creators for a human audience, and has higher story coverage and more frequent mental-state references than similar video-language story datasets.
Provide a detailed description of the following dataset: SYMON
COVMis-Stance
**COVMis-Stance** is a stance detection dataset for COVID-19 misinformation. It consists of fake news and claims related to COVID-19: fake news was collected from articles on fact-checking sites, and fake claims from the WHO's official Twitter account. It contains 2631 tweets annotated for stance towards 111 COVID-19 misinformation items.
Provide a detailed description of the following dataset: COVMis-Stance
PQuAD
Persian Question Answering Dataset (PQuAD) is a crowdsourced reading comprehension dataset on Persian Wikipedia articles. It includes 80,000 questions along with their answers, with 25% of the questions being adversarially unanswerable.
Provide a detailed description of the following dataset: PQuAD
VCSL
VCSL (Video Copy Segment Localization) is a new comprehensive segment-level annotated video copy dataset. Compared with existing copy detection datasets, which are restricted by either video-level annotation or small scale, VCSL not only has two orders of magnitude more segment-level labelled data, with 160k realistic video copy pairs containing more than 280k localized copied segment pairs, but also covers a variety of video categories and a wide range of video durations. All the copied segments inside each collected video pair are manually extracted and accompanied by precisely annotated starting and ending timestamps.
Provide a detailed description of the following dataset: VCSL
ViViD++
A dataset capturing diverse visual data formats targeting varying luminance conditions, recorded from alternative vision sensors, handheld or mounted on a car, repeatedly in the same space but under different conditions.
Provide a detailed description of the following dataset: ViViD++
MuCGEC
MuCGEC is a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three different Chinese-as-a-Second-Language (CSL) learner sources. Each sentence has been corrected by three annotators, and their corrections are meticulously reviewed by an expert, resulting in 2.3 references per sentence.
Provide a detailed description of the following dataset: MuCGEC
Custom Spatio-Temporal Action Video Dataset
This spatio-temporal action dataset for video understanding consists of 4 parts: original videos, cropped videos, video frames, and annotation files. The dataset uses a proposed new multi-person annotation method for spatio-temporal actions. First, we use ffmpeg to crop the videos and extract frames; then we use yolov5 to detect humans in the video frames, and deep sort to track the IDs of the humans across frames. By processing the detection results of yolov5 and deep sort, we obtain the annotation files of the spatio-temporal action dataset, completing the construction of a custom spatio-temporal action dataset.
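A minimal sketch of the first two pipeline steps (not the authors' scripts): frame extraction with ffmpeg and person detection with YOLOv5 via `torch.hub`. The video filename and frame rate are placeholders; the Deep SORT tracking and annotation-file generation steps would follow the detection shown here:

```python
import pathlib
import subprocess

import torch

# 1) Extract frames with ffmpeg (here at 1 fps; "video.mp4" is a placeholder).
pathlib.Path("frames").mkdir(exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", "video.mp4", "-vf", "fps=1", "frames/img_%05d.jpg"],
    check=True,
)

# 2) Detect humans in a frame with YOLOv5 (COCO class 0 = person).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("frames/img_00001.jpg")
dets = results.xyxy[0]                 # columns: x1, y1, x2, y2, conf, cls
persons = dets[dets[:, 5] == 0]        # keep person detections only
print(persons)                         # these boxes would be fed to Deep SORT
```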
Provide a detailed description of the following dataset: Custom Spatio-Temporal Action Video Dataset
Task2Dial
A novel dataset of document-grounded task-based dialogues, where an Information Giver (IG) provides instructions (by consulting a document) to an Information Follower (IF), so that the latter can successfully complete the task. In this unique setting, the IF can ask clarification questions which may not be grounded in the underlying document and require commonsense knowledge to be answered.
Provide a detailed description of the following dataset: Task2Dial
3MASSIV
A multilingual, multimodal and multi-aspect, expertly-annotated dataset of diverse short videos extracted from the short-video social media platform Moj. 3MASSIV comprises 50k short videos (~20 seconds average duration) and 100K unlabeled videos in 11 different languages, and captures popular short video trends like pranks, fails, romance, and comedy, expressed via unique audio-visual formats like self-shot videos, reaction videos, lip-synching, and self-sung songs.
Provide a detailed description of the following dataset: 3MASSIV
PeerSum
**PeerSum** is a new MDS dataset using peer reviews of scientific publications. The dataset differs from existing MDS datasets in that its summaries (i.e., the meta-reviews) are highly abstractive and are real summaries of the source documents. In PeerSum, the reviews (with scores), comments and responses are the source documents, and the meta-review (with an acceptance outcome) is the ground-truth summary. Each sample of this dataset contains a summary, the corresponding source documents, and other complementary information (e.g., review scores) for one paper. The second version of PeerSum (peersum_v2) has 16,308 samples, while the first version has 10,862 samples. The dataset is stored in the JSON format. For each sample, the fields are as follows:

* paper_id: unique id for each sample
* title: the title of the corresponding paper
* abstract: paper abstract
* score: final score of this paper (if there is no final score, it is the average of the review scores)
* acceptance: acceptance outcome of the paper (e.g., accept, reject or spotlight)
* meta_review: meta-review of the paper, treated as the summary
* reviews: [review_id, writer, content (rating, confidence, comment), replyto]
* label: train, val, test (8/1/1)

For each review (i.e., official review, public comment, or author/reviewer response):

* review_id: unique id of each review
* writer: official_reviewer, public, author
* content: (rating, confidence, comment)
* replyto: connects to a review (review_id and replyto encode the conversation structure)
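A minimal sketch of iterating the fields listed above; the filename and the assumption that the file is a single JSON array of samples are placeholders, so adjust for the actual on-disk layout:

```python
import json

# "peersum_v2.json" is a placeholder for the downloaded split file.
with open("peersum_v2.json") as f:
    samples = json.load(f)  # assumed: a list of sample dicts

for s in samples:
    if s["label"] != "train":
        continue
    sources = [r["content"] for r in s["reviews"]]  # reviews/comments/responses
    summary = s["meta_review"]                      # ground-truth summary
    print(s["paper_id"], s["acceptance"], len(sources))
    break
```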
Provide a detailed description of the following dataset: PeerSum
HOI4D
A large-scale 4D egocentric dataset with rich annotations, to catalyze research on category-level human-object interaction. HOI4D consists of 2.4M RGB-D egocentric video frames over 4000 sequences, collected by 4 participants interacting with 800 different object instances from 16 categories in 610 different indoor rooms.
Provide a detailed description of the following dataset: HOI4D
TASTEset
**TASTEset** Recipe Dataset and Food Entities Recognition is a dataset for Named Entity Recognition (NER) which consists of 700 recipes with more than 13,000 entities to extract.
Provide a detailed description of the following dataset: TASTEset
Pirá
A large set of questions and answers about the ocean and the Brazilian coast both in Portuguese and English. Pirá is a crowdsourced question answering (QA) dataset on the ocean and the Brazilian coast designed for reading comprehension. The dataset contains 2261 QA sets, as well as the texts associated with them. Each QA set contains at least four elements: a question in Portuguese and in English, and an answer in Portuguese and in English. Around 90% of the QA sets also contain human evaluations. Pirá is, to the best of our knowledge, the first QA dataset with supporting texts in Portuguese, and, perhaps more importantly, the first bilingual QA dataset that includes Portuguese as one of its languages. Pirá is also the first QA dataset in Portuguese with unanswerable questions so as to allow the study of answer triggering. Finally, it is the first QA dataset that tackles scientific knowledge about the ocean, climate change, and marine biodiversity.
Provide a detailed description of the following dataset: Pirá
Danish Airs and Grounds
Danish Airs and Grounds (DAG) is a large collection of street-level and aerial images targeting such cases. Its main challenge lies in the extreme viewing-angle difference between query and reference images with consequent changes in illumination and perspective. The dataset is larger and more diverse than current publicly available data, including more than 50 km of road in urban, suburban and rural areas. All images are associated with accurate 6-DoF metadata that allows the benchmarking of visual localization methods.
Provide a detailed description of the following dataset: Danish Airs and Grounds
SOMOS
The SOMOS dataset is a large-scale mean opinion scores (MOS) dataset consisting of solely neural text-to-speech (TTS) samples. It can be employed to train automatic MOS prediction systems focused on the assessment of modern synthesizers, and can stimulate advancements in acoustic model evaluation. It consists of 20K synthetic utterances of the LJ Speech voice, a public domain speech dataset which is a common benchmark for building neural acoustic models and vocoders. Utterances are generated from 200 TTS systems including vanilla neural acoustic models as well as models which allow prosodic variations.
Provide a detailed description of the following dataset: SOMOS
MUSIC-AVQA
The large-scale MUSIC-AVQA dataset of musical performances contains 45,867 question-answer pairs distributed over 9,288 videos totaling over 150 hours. The QA pairs are divided into 3 modal scenarios, containing 9 question types and 33 question templates. Finally, since the AVQA task is open-ended, all 42 kinds of answers constitute a selection set.
Provide a detailed description of the following dataset: MUSIC-AVQA
Winoground
Winoground is a dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly -- but crucially, both captions contain a completely identical set of words, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance.
Provide a detailed description of the following dataset: Winoground
MCoNaLa
**MCoNaLa** is a multilingual dataset to benchmark code generation from natural language commands extending beyond English. Modeled on the methodology of the English Code/Natural Language Challenge (CoNaLa) dataset, the authors annotated a total of 896 NL-code pairs in three languages: Spanish, Japanese, and Russian. Due to the limited number of samples in these languages, English CoNaLa samples, whose intents are originally written in English, are used for training. Spanish, Japanese, and Russian are the Target Languages (TL), whose samples are always (and only) used for testing due to their limited amount. English is the High-Resource Language (HRL), whose samples can be leveraged for model training.
Provide a detailed description of the following dataset: MCoNaLa
Kinetics-GEB+
**Kinetics-GEB+** (Generic Event Boundary Captioning, Grounding and Retrieval) is a dataset that consists of over 170k boundaries associated with captions describing status changes in the generic events in 12K videos.
Provide a detailed description of the following dataset: Kinetics-GEB+
DeToxy
**DeToxy** is a publicly available toxicity-annotated dataset for the English language. DeToxy is sourced from various openly available speech databases and consists of over 2 million utterances. The dataset can act as a benchmark for the relatively new and unexplored Spoken Language Processing task of detecting toxicity from spoken utterances and boost further research in this space.
Provide a detailed description of the following dataset: DeToxy
GigaST
GigaST is a large-scale pseudo speech translation (ST) corpus. The corpus was created by translating the text in GigaSpeech, an English ASR corpus, into German and Chinese. The training set was translated by a strong machine translation system, and the test set was translated by humans. ST models trained with the addition of this corpus obtain new state-of-the-art results on the MuST-C English-German benchmark test set.
Provide a detailed description of the following dataset: GigaST
MagicData-RAMC
The MagicData-RAMC corpus contains 180 hours of conversational speech data recorded from native speakers of Mandarin Chinese over mobile phones with a sampling rate of 16 kHz. The dialogs are classified into 15 diversified domains and tagged with topic labels, ranging from science and technology to ordinary life. Accurate transcriptions and precise speaker voice activity timestamps are manually labeled for each sample. Speakers' detailed information is also provided.
Provide a detailed description of the following dataset: MagicData-RAMC
Animal Kingdom
Animal Kingdom is a large and diverse dataset that provides multiple annotated tasks to enable a more thorough understanding of natural animal behaviors. The wild animal footage used in the dataset records different times of the day in an extensive range of environments containing variations in backgrounds, viewpoints, illumination and weather conditions. More specifically, the dataset contains 50 hours of annotated videos to localize relevant animal behavior segments in long videos for the video grounding task, 30K video sequences for the fine-grained multi-label action recognition task, and 33K frames for the pose estimation task, which correspond to a diverse range of animals with 850 species across 6 major animal classes.
Provide a detailed description of the following dataset: Animal Kingdom
ROAD
ROAD is designed to test an autonomous vehicle's ability to detect road events, defined as triplets composed of an active agent, the action(s) it performs, and the corresponding scene locations. ROAD comprises videos originally from the Oxford RobotCar Dataset, annotated with bounding boxes showing the image-plane location of each road event.
Provide a detailed description of the following dataset: ROAD
BEHAVE
BEHAVE is a full-body human-object interaction dataset with multi-view RGBD frames, corresponding 3D SMPL and object fits, and the annotated contacts between them. The dataset contains ~15k frames at 5 locations with 8 subjects performing a wide range of interactions with 20 common objects.
Provide a detailed description of the following dataset: BEHAVE
Bamboo
Bamboo Dataset is a mega-scale and information-dense dataset for both classification and detection pre-training. It is built by integrating **24** public datasets (e.g. **ImageNet**, **Places365**, **Objects365**, **OpenImages**) and adding new annotations through **active learning**. Bamboo has 69M image classification annotations and 32M object bounding boxes.
Provide a detailed description of the following dataset: Bamboo
Visual Affordance Learning
A large-scale multi-view RGBD visual affordance learning dataset: a benchmark of 47,210 RGBD images from 37 object categories, annotated with 15 visual affordance categories, including 35 cluttered/complex scenes with different objects and multiple affordances. To the best of our knowledge, this is the first and the largest multi-view RGBD visual affordance learning dataset.
Provide a detailed description of the following dataset: Visual Affordance Learning
PETCI
PETCI is a Parallel English Translation dataset of Chinese Idioms, collected from an idiom dictionary and Google and DeepL translation. PETCI contains 4,310 Chinese idioms with 29,936 English translations. These translations capture diverse translation errors and paraphrase strategies.
Provide a detailed description of the following dataset: PETCI
Kobest
**Kobest** is a benchmark for Korean language reasoning. It consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks that require advanced Korean linguistic knowledge.
Provide a detailed description of the following dataset: Kobest
Sen4AgriNet
A Sentinel-2-based time series multi-country benchmark dataset tailored for agricultural monitoring applications with machine and deep learning. The Sen4AgriNet dataset is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS), with labels harmonized across countries. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. It is constructed to cover the period 2016-2020 for Catalonia and France, and it can be extended to include additional countries. Currently, it contains 42.5 million parcels, which makes it significantly larger than other available archives.
Provide a detailed description of the following dataset: Sen4AgriNet
YouTube-GDD
The YouTube Gun Detection Dataset (YouTube-GDD) is collected from 343 high-definition YouTube videos and contains 5000 well-chosen images, in which 16064 instances of guns and 9046 instances of persons are annotated. Compared to other datasets, YouTube-GDD is "dynamic", containing rich contextual information.
Provide a detailed description of the following dataset: YouTube-GDD
SynWoodScape
**SynWoodScape** is a synthetic version of the surround-view WoodScape dataset, covering many of its weaknesses and extending it. WoodScape comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection, and a novel soiling detection task. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images. With WoodScape, we would like to encourage the community to adapt computer vision models for the fisheye camera instead of relying on naive rectification.
Provide a detailed description of the following dataset: SynWoodScape