dataset_name — string, 2 to 128 characters
description — string, 1 to 9.7k characters
prompt — string, 59 to 185 characters
Schwerin
Schwerin contains handwritten texts in medieval German. The training set consists of 793 lines, the validation set of 68 lines, and the test set of 196 lines.
Provide a detailed description of the following dataset: Schwerin
WorldKG
The WorldKG knowledge graph is a comprehensive large-scale geospatial knowledge graph based on OpenStreetMap that provides a semantic representation of geographic entities from over 188 countries. WorldKG represents more geographic entities than other knowledge graphs and can serve as an underlying data source for applications such as geospatial question answering, geospatial data retrieval, and other cross-domain semantic data-driven applications.
Provide a detailed description of the following dataset: WorldKG
TVRecap
**TVRecap** is a story generation dataset that requires generating detailed TV show episode recaps from a brief summary and a set of documents describing the characters involved. Unlike other story generation datasets, TVRecap contains stories authored by professional screenwriters that feature complex interactions among multiple characters. Generating stories in TVRecap requires drawing relevant information from the lengthy character documents based on the brief summary. In addition, by swapping the input and output, TVRecap can serve as a challenging testbed for abstractive summarization.
Provide a detailed description of the following dataset: TVRecap
HYouTube
**HYouTube** is a video dataset for video harmonization, which aims to adjust the foreground of a composite video to make it compatible with the background. The dataset was created by adjusting the foreground of real videos to create synthetic composite videos. It is based on [Youtube-VOS](https://paperswithcode.com/dataset/youtube-vos).
Provide a detailed description of the following dataset: HYouTube
EFO-1-QA
**EFO-1-QA** is a new dataset to benchmark the combinatorial generalizability of Complex Query Answering (CQA) models. It includes 301 different query types, 20 times more than existing datasets.
Provide a detailed description of the following dataset: EFO-1-QA
ISIC 2020 Challenge Dataset
The dataset contains 33,126 dermoscopic training images of unique benign and malignant skin lesions from over 2,000 patients. Each image is associated with one of these individuals using a unique patient identifier. All malignant diagnoses have been confirmed via histopathology, and benign diagnoses have been confirmed using either expert agreement, longitudinal follow-up, or histopathology. A thorough publication describing all features of this dataset is available in the form of a pre-print that has not yet undergone peer review. The dataset was generated by the International Skin Imaging Collaboration (ISIC) and images are from the following sources: Hospital Clínic de Barcelona, Medical University of Vienna, Memorial Sloan Kettering Cancer Center, Melanoma Institute Australia, University of Queensland, and the University of Athens Medical School. The dataset was curated for the SIIM-ISIC Melanoma Classification Challenge hosted on Kaggle during the Summer of 2020. DOI: https://doi.org/10.34970/2020-ds01
Provide a detailed description of the following dataset: ISIC 2020 Challenge Dataset
EUEN17037_Daylight_and_View_Standard_TestDataSet
EUEN17037 Daylight and View Standard Test Dataset.
Provide a detailed description of the following dataset: EUEN17037_Daylight_and_View_Standard_TestDataSet
quantumNoise
The dataset consists of many runs of the same quantum circuit on different IBM quantum machines. We used 9 different machines and, for each of them, ran 2,000 executions of the circuit. The circuit has 9 different measurement steps along it. To obtain the 9 outcome distributions, for each execution, parts of the circuit are appended 9 times (in the same call to the IBM API, and thus in the shortest possible time), measuring a new step each time. The calls to the IBM API followed two different strategies. One was adopted to maximize the number of calls to the interface, parallelizing the code with as many runs as possible, even running 8,000 shots per run and then taking 8 blocks of 1,000 shots from the memory to compute the probabilities. The other strategy was slower, without parallelization and with a minimum waiting time between subsequent executions. The latter was adopted to obtain executions more uniformly distributed in time.
Provide a detailed description of the following dataset: quantumNoise
BiRdQA
BiRdQA is a bilingual multiple-choice question answering dataset with 6614 English riddles and 8751 Chinese riddles. Image source: [https://arxiv.org/pdf/2109.11087v1.pdf](https://arxiv.org/pdf/2109.11087v1.pdf)
Provide a detailed description of the following dataset: BiRdQA
ParaShoot
ParaShoot is the first question answering dataset in modern Hebrew. The dataset follows the format and crowdsourcing methodology of [SQuAD](/dataset/squad), and contains approximately 3000 annotated examples, similar to other question-answering datasets in low-resource languages.
Provide a detailed description of the following dataset: ParaShoot
Cloud VR gaming network traffic data
Oculus Quest 2 VR gaming network traffic data collected at the gaming server.
Provide a detailed description of the following dataset: Cloud VR gaming network traffic data
CAMELS Multifield Dataset
CMD is a publicly available collection of hundreds of thousands of 2D maps and 3D grids containing different properties of the gas, dark matter, and stars from more than 2,000 different universes. The data has been generated from thousands of state-of-the-art (magneto-)hydrodynamic and gravity-only N-body simulations from the CAMELS project. Each 2D map and 3D grid has a set of labels associated with it: 2 cosmological parameters characterizing fundamental properties of the Universe, and 4 astrophysical parameters parametrizing the strength of astrophysical processes such as feedback from supernovae and supermassive black holes. The main task this dataset was designed for is robust inference of the value of the cosmological parameters from each map and grid. The data itself was generated from two completely different sets of simulations, and it is not obvious that a model trained on one will work when predicting on the other. Since simulations of the real Universe may never be perfect, this dataset provides the data to tackle this problem. Solving it will help cosmologists constrain the value of the cosmological parameters with the highest accuracy and thereby unveil the mysteries of our Universe. CMD can also be used for many other tasks, such as field mapping and super-resolution.
Provide a detailed description of the following dataset: CAMELS Multifield Dataset
GermEval 2021 - Toxic, Engaging, & Fact-Claiming Comments test set
The data set was provided as part of the GermEval 2021 competition for the identification of toxic, engaging, and fact-claiming comments.
- in total: 4,188 anonymized and annotated German Facebook comments
- training set: 3,244 comments drawn from a Facebook page of a German political talk show between January and July 2019
- test set: 944 comments drawn from a Facebook page of a German political talk show between September and December 2020
The data is described in Risch, Stoll, Wilms, Wiegand. Overview of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments. Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments co-located with KONVENS 2021, DOI [10.48415/2021/fhw5-x128](https://doi.org/10.48415/2021/fhw5-x128).
Provide a detailed description of the following dataset: GermEval 2021 - Toxic, Engaging, & Fact-Claiming Comments test set
Hilti SLAM Challenge
Hilti SLAM Challenge is a challenging dataset for Simultaneous Localization and Mapping (SLAM) algorithms due to sparsity, varying illumination conditions, and dynamic objects. The sensor platform used to collect this dataset contains a number of visual, lidar and inertial sensors, all of which have been rigorously calibrated. All data is temporally aligned to support precise multi-sensor fusion. Each dataset includes accurate ground truth to allow direct testing of SLAM results. Raw data as well as intrinsic and extrinsic sensor calibration data from twelve datasets in various environments are provided. Each environment represents common scenarios found in building construction sites in various stages of completion.
Provide a detailed description of the following dataset: Hilti SLAM Challenge
ERATO
**ERATO** is a large-scale multi-modal dataset for Pairwise Emotional Relationship Recognition (PERR). It has 31,182 video clips, totaling about 203 hours of video. Unlike existing datasets, ERATO contains interaction-centric videos with multiple shots, varied video lengths, and multiple modalities including visual, audio and text.
Provide a detailed description of the following dataset: ERATO
CI-ToD
**CI-ToD** is a dataset for Consistency Identification in Task-oriented Dialog systems.
Provide a detailed description of the following dataset: CI-ToD
Wrench
Wrench is a benchmark platform for thorough and standardized evaluation of Weak Supervision (WS). It consists of 22 varied real-world datasets for classification and sequence tagging; a range of real, synthetic, and procedurally-generated weak supervision sources; and a modular, extensible framework for WS evaluation, including implementations for popular WS methods.
Provide a detailed description of the following dataset: Wrench
CoWeSe
**CoWeSe** is a Spanish biomedical corpus consisting of 4.5GB (about 750M tokens) of clean plain text. CoWeSe is the result of a massive crawl of 3,000 Spanish domains carried out in 2020.
Provide a detailed description of the following dataset: CoWeSe
METEOR
**METEOR** is a complex traffic dataset which captures traffic patterns in unstructured scenarios in India. METEOR consists of more than 1000 one-minute video clips, over 2 million annotated frames with ego-vehicle trajectories, and more than 13 million bounding boxes for surrounding vehicles or traffic agents. METEOR is a unique dataset in terms of capturing the heterogeneity of microscopic and macroscopic traffic characteristics.
Provide a detailed description of the following dataset: METEOR
SketchHairSalon
**SketchHairSalon** is a dataset for hair generation containing thousands of annotated hair sketch-image pairs and corresponding hair mattes.
Provide a detailed description of the following dataset: SketchHairSalon
XTD10
**XTD10** is a dataset for cross-lingual image retrieval and tagging consisting of the MSCOCO2014 caption test dataset annotated in 7 languages that were collected using a crowdsourcing platform.
Provide a detailed description of the following dataset: XTD10
FloDial
**Flo**wchart Grounded **Dial**og Dataset (**FloDial**) is a corpus of troubleshooting dialogs between a user and an agent collected using Amazon Mechanical Turk. The dataset is accompanied by two knowledge sources over which the dialogs are grounded: (1) a set of troubleshooting flowcharts and (2) a set of FAQs containing supplementary information about the domain not present in the flowcharts. FloDial consists of 2,738 dialogs grounded on 12 different troubleshooting flowcharts.
Provide a detailed description of the following dataset: FloDial
COME15K
**COME15K** is an RGB-D saliency detection dataset which contains 15,625 image pairs with high quality polygon-/scribble-/object-/instance-/rank-level annotations.
Provide a detailed description of the following dataset: COME15K
safe-control-gym
**safe-control-gym** is an open-source benchmark suite that extends OpenAI's Gym API with (i) the ability to specify (and query) symbolic models and constraints and (ii) the ability to introduce simulated disturbances in the control inputs, measurements, and inertial properties. We provide implementations for three dynamic systems -- the cart-pole and the 1D and 2D quadrotors -- and two control tasks -- stabilization and trajectory tracking.
Provide a detailed description of the following dataset: safe-control-gym
AStitchInLanguageModels
**AStitchInLanguageModels** is a dataset for the exploration of idiomaticity in pre-trained language models.
Provide a detailed description of the following dataset: AStitchInLanguageModels
Diagnosis of COVID-19 and its clinical spectrum
This dataset contains anonymized data from patients seen at the Hospital Israelita Albert Einstein in São Paulo, Brazil, who had samples collected to perform the SARS-CoV-2 RT-PCR and additional laboratory tests during a visit to the hospital. All data were anonymized following the best international practices and recommendations. All clinical data were standardized to have a mean of zero and a unit standard deviation.
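The last sentence describes a standard z-score transform. A minimal sketch of that standardization, assuming the clinical columns are loaded into a pandas DataFrame (the column name below is illustrative, not the dataset's actual schema):

```python
import pandas as pd

# Illustrative column; the dataset's real schema is not reproduced here.
df = pd.DataFrame({"hemoglobin": [13.2, 14.1, 12.8, 15.0]})

# z-score standardization: zero mean, unit standard deviation per column
standardized = (df - df.mean()) / df.std()
print(standardized)
```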
Provide a detailed description of the following dataset: Diagnosis of COVID-19 and its clinical spectrum
WHPA
This dataset was created as part of the following study, which was published in the Journal of Hydrology: *A new framework for experimental design using Bayesian Evidential Learning: the case of wellhead protection area* [https://doi.org/10.1016/j.jhydrol.2021.126903](https://doi.org/10.1016/j.jhydrol.2021.126903). The pre-print is available on arXiv: [https://arxiv.org/pdf/2105.05539.pdf](https://arxiv.org/pdf/2105.05539.pdf)

**Files description**

This dataset contains 4148 simulation results, i.e., 4148 pairs of predictor/target. **bkt.npy** contains the breakthrough curves from all 6 injection wells recorded at the pumping well. **pz.npy** contains the 2D coordinates of the end points of the backtracked particles, used to delineate the WHPA.

**Introduction**

The Wellhead Protection Area (WHPA) is a zone around a pumping well where human activities are limited in order to preserve water resources, usually delineated based on how long dangerous chemicals in the area would take to reach the pumping well (according to local regulation). The WHPA is determined by the flow velocity in the subsurface around the well, which can be computed numerically using particle tracking or transport simulation, or estimated in practice using tracer testing. A groundwater model is typically calibrated against field data before being used to compute the WHPA. In highly populated places where land occupation is a major concern, the introduction of such zones can have a large socioeconomic impact.

**WHPA prediction**

Different tracers emerge from six data sources (injection wells) scattered around the pumping well. Their role is to inject individual tracers into the system so that their transport can be predicted and their breakthrough curves (BCs) recorded at the pumping well location. Numerous particles are artificially positioned around the pumping well, and their origins are traced backward in time to identify the associated WHPA. Our predictor and target are generated using the USGS' open-source finite-difference code Modflow. To obtain different sets of predictors and targets, we run different hydrologic models with one variable parameter, namely hydraulic conductivity in metres per day. To obtain satisfactory heterogeneity in the hydraulic conductivity fields, which control the shape and extent of our targets, the WHPAs, we use sequential Gaussian simulation based on arbitrarily defined variograms. The pumping well is located at the (1000 m, 500 m) mark and is surrounded by six injection wells.
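Since **bkt.npy** and **pz.npy** are NumPy files, loading the 4148 predictor/target pairs might look like the sketch below; only the file names and counts come from the description above, the array shapes are assumptions.

```python
import numpy as np

# bkt.npy: breakthrough curves from the 6 injection wells, recorded at the
#          pumping well (predictor)
# pz.npy:  2D end-point coordinates of the backtracked particles that
#          delineate the WHPA (target)
bkt = np.load("bkt.npy")
pz = np.load("pz.npy")

# Expect a leading dimension of 4148 simulations; the remaining dimensions
# (time steps, particle count) are not specified in the description.
print("predictor shape:", bkt.shape)
print("target shape:   ", pz.shape)
```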
Provide a detailed description of the following dataset: WHPA
IECSIL FIRE-2018 Shared Task
The dataset is taken from the first shared task on Information Extraction for Conversational Systems in Indian Languages (IECSIL). It consists of 1,548,570 Hindi words in Devanagari script and their corresponding NER labels. The end of each sentence is marked by a "newline" tag. Fig. 1 shows a snapshot of one sentence in the dataset. The dataset has nine classes, namely Datenum, Event, Location, Name, Number, Occupation, Organization, Other, and Things. Image taken from paper: https://www.researchgate.net/publication/349190662_Analysis_of_Contextual_and_Non-contextual_Word_Embedding_Models_for_Hindi_NER_with_Web_Application_for_Data_Collection
Provide a detailed description of the following dataset: IECSIL FIRE-2018 Shared Task
Spectre-v1
* Description: A dataset of assembly functions that are vulnerable to the Spectre-V1 attack.
* Motivation: Several techniques have been proposed to detect vulnerable Spectre gadgets in widely deployed commercial software. Unfortunately, detection techniques proposed so far rely on hand-written rules, which fall short in covering subtle variations of known Spectre gadgets and demand a huge amount of time to analyze each conditional branch in software. Moreover, detection tool evaluations are based only on a handful of these gadgets, as it requires arduous effort to craft new gadgets manually.
* Potential Use Cases: Generating assembly code; evaluating new detection tools.
Provide a detailed description of the following dataset: Spectre-v1
MAVS
**MAVS** is an audio-visual smartphone dataset captured with five different recent smartphones. It contains 103 subjects recorded in three different sessions covering different real-world scenarios. Recordings in three different languages are included to account for the language dependency of speaker recognition systems.
Provide a detailed description of the following dataset: MAVS
TIAGE
TIAGE is a topic-shift aware dialog benchmark constructed utilizing human annotations on topic shifts. Based on TIAGE, three tasks can be conducted to investigate different scenarios of topic-shift modeling in dialog settings: topic-shift detection, topic-shift triggered response generation and topic-aware dialog generation.
Provide a detailed description of the following dataset: TIAGE
FusedChat
FusedChat is an inter-mode dialogue dataset. It contains dialogue sessions fusing task-oriented dialogues (TOD) and open-domain dialogues (ODD). Based on MultiWOZ, FusedChat appends or prepends an ODD to every existing TOD. See more details in the paper.
Provide a detailed description of the following dataset: FusedChat
Panoptic nuScenes
**Panoptic nuScenes** is a benchmark dataset that extends the popular nuScenes dataset with point-wise groundtruth annotations for semantic segmentation, panoptic segmentation, and panoptic tracking tasks.
Provide a detailed description of the following dataset: Panoptic nuScenes
BUG
**BUG** is a large-scale gender bias dataset of 108K diverse real-world English sentences, sampled semi-automatically from large corpora using lexical syntactic pattern matching.
Provide a detailed description of the following dataset: BUG
D3D-HOI
D3D-HOI is a dataset of monocular videos with ground truth annotations of 3D object pose, shape and part motion during human-object interactions. The dataset consists of several common articulated objects captured from diverse real-world scenes and camera viewpoints. Each manipulated object (e.g., microwave oven) is represented with a matching 3D parametric model. This data allows researchers to evaluate the reconstruction quality of articulated objects and establish a benchmark for this challenging task. Image source: [https://github.com/facebookresearch/d3d-hoi](https://github.com/facebookresearch/d3d-hoi)
Provide a detailed description of the following dataset: D3D-HOI
MuViHand
**MuViHand** is a dataset for 3D Hand Pose Estimation that consists of multi-view videos of the hand along with ground-truth 3D pose labels. The dataset includes more than 402,000 synthetic hand images available in 4,560 videos. The videos have been simultaneously captured from six different angles with complex backgrounds and random levels of dynamic lighting. The data has been captured from 10 distinct animated subjects using 12 cameras in a semi-circle topology.
Provide a detailed description of the following dataset: MuViHand
ItaCoLA
**ItaCoLA** is a corpus for monolingual and cross-lingual acceptability judgments which contains almost 10,000 sentences with acceptability judgments.
Provide a detailed description of the following dataset: ItaCoLA
Paint4Poem
Paint4Poem consists of 301 high-quality poem-painting pairs collected manually from the influential modern Chinese artist Feng Zikai.
Provide a detailed description of the following dataset: Paint4Poem
MOLD
**MOLD** is a Marathi dataset for offensive language identification.
Provide a detailed description of the following dataset: MOLD
ROF
**ROF** is a dataset for occluded face recognition that contains faces with both upper face occlusion, due to sunglasses, and lower face occlusion, due to masks.
Provide a detailed description of the following dataset: ROF
Mr. TYDI
**Mr. TyDi** is a multi-lingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations. The goal of this resource is to spur research in dense retrieval techniques in non-English languages, motivated by recent observations that existing techniques for representation learning perform poorly when applied to out-of-distribution data.
Provide a detailed description of the following dataset: Mr. TYDI
ChMusic
**ChMusic** is a traditional Chinese music dataset for model training and performance evaluation of musical instrument recognition. The dataset covers 11 musical instruments: Erhu, Pipa, Sanxian, Dizi, Suona, Zhuiqin, Zhongruan, Liuqin, Guzheng, Yangqin and Sheng.
Provide a detailed description of the following dataset: ChMusic
TFRD
**TFRD** is a dataset to evaluate machine learning modelling methods for the temperature field reconstruction of heat source systems (TFR-HSS). The password for the dataset download is "tfrd".
Provide a detailed description of the following dataset: TFRD
Fishyscapes
**Fishyscapes** is a public benchmark for uncertainty estimation in a real-world task of semantic segmentation for urban driving. It evaluates pixel-wise uncertainty estimates towards the detection of anomalous objects in front of the vehicle.
Provide a detailed description of the following dataset: Fishyscapes
EU-ADR
The **EU-ADR** corpus is a biomedical relation extraction dataset that contains 100 abstracts, with relations between drugs, disorders, and targets.
Provide a detailed description of the following dataset: EU-ADR
CAT
**CAT** is a specialized dataset for co-saliency detection, one of the core tasks in the field of computer vision. The dataset is intended both to help assess the performance of vision algorithms and to support research that aims to exploit large volumes of annotated data, e.g., for training deep neural networks. CAT consists of 33,500 images.
Provide a detailed description of the following dataset: CAT
FewGLUE_64_labeled
### Introduction
The FewGLUE_64_labeled dataset is a new version of the FewGLUE dataset. It contains a 64-sample training set, a development set (the original SuperGLUE development set), a test set, and an unlabeled set. It is constructed to facilitate research on few-shot learning for natural language understanding tasks. Compared with the original FewGLUE dataset, it differs in the number of labeled examples in the training set: the original FewGLUE has 32 training examples, while FewGLUE_64_labeled has 64. The purposes of constructing a new version of the FewGLUE dataset include:
1. To answer what the best performance is that few-shot learning can achieve, and whether it is possible to further close the performance gap between few-shot learning and fully-supervised systems.
2. To explore to which degree the number of labeled training examples influences few-shot performance.
Please refer to the [FewNLU paper](https://arxiv.org/pdf/2109.12742.pdf) as well as the [FewNLU leaderboard](fewnlu.github.io) for more details.
### Acknowledgement
Part of the FewGLUE_64_labeled dataset is based on the original 32-sample version of [FewGLUE](https://github.com/timoschick/fewglue). We collect them together in one package for convenience of usage. We appreciate all the contributors who made their datasets public, which greatly advanced few-shot learning, as well as the [FewNLU project](https://github.com/THUDM/FewNLU).
Provide a detailed description of the following dataset: FewGLUE_64_labeled
DSSE-200
The DSSE-200 is a complex document layout dataset covering a variety of document styles. The dataset contains 200 images drawn from pictures, slides, brochures, old newspapers, and scanned documents.
Provide a detailed description of the following dataset: DSSE-200
PASS
PASS is a large-scale image dataset, containing 1.4 million images, that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. Image source: [https://arxiv.org/pdf/2109.13228v1.pdf](https://arxiv.org/pdf/2109.13228v1.pdf)
Provide a detailed description of the following dataset: PASS
VQA-MHUG
**VQA-MHUG** is a 49-participant dataset of multimodal human gaze on both images and questions during visual question answering (VQA) collected using a high-speed eye tracker.
Provide a detailed description of the following dataset: VQA-MHUG
MultiDoc2Dial
MultiDoc2Dial is a new task and dataset on modeling goal-oriented dialogues grounded in multiple documents. Most previous works treat document-grounded dialogue modeling as a machine reading comprehension task based on a single given document or passage. We aim to address more realistic scenarios where a goal-oriented information-seeking conversation involves multiple topics, and hence is grounded on different documents.
Provide a detailed description of the following dataset: MultiDoc2Dial
Doc2Dial
Goal-oriented document-grounded dialog often involves complex contexts for identifying the most relevant information, which requires a better understanding of the inter-relations between conversations and documents. Meanwhile, many online user-oriented documents use both semi-structured and unstructured content for guiding users to access information in different contexts. Thus, we create a new goal-oriented document-grounded dialogue dataset that captures more diverse scenarios derived from various document contents from multiple domains, such as ssa.gov and studentaid.gov. For data collection, we propose a novel pipeline approach for dialogue data construction, which has been adapted and evaluated for several domains.
Provide a detailed description of the following dataset: Doc2Dial
TCP-CI
This dataset is a benchmark of 25 open-source subjects with 21.5k builds and 3.6k failed builds that enables a fair comparison and evaluation of Test Case Prioritization (TCP) techniques. We made our data collection tools available, which can be used to extend and update the subjects. The description of the structure and files of the dataset can also be found in the documentation of the data collection tool.
Provide a detailed description of the following dataset: TCP-CI
VVAD-LRS3
A dataset for Visual Voice Activity Detection (VVAD) extracted from the LRS3 dataset. The data comes in 4 different flavors:
- faceImages: a series of images of faces, with the corresponding label True for speaking and False for not speaking
- lipImages: a series of images of lips, with the corresponding label True for speaking and False for not speaking
- faceFeatures: a series of feature maps of faces extracted with dlib's face landmark detection, with the corresponding label True for speaking and False for not speaking
- lipFeatures: a series of feature maps of lips extracted with dlib's face landmark detection, with the corresponding label True for speaking and False for not speaking
Image source: [https://arxiv.org/pdf/2109.13789v1.pdf](https://arxiv.org/pdf/2109.13789v1.pdf)
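A minimal loading sketch for one flavor; the on-disk layout below (one `.npz` archive per sample holding a `frames` array and a boolean `label`) is purely hypothetical, only the four flavors and the True/False labels come from the description.

```python
import numpy as np

# Hypothetical file layout; the real dataset packaging may differ.
sample = np.load("faceImages/sample_00001.npz")
frames = sample["frames"]          # image sequence of a face
speaking = bool(sample["label"])   # True = speaking, False = not speaking
print(frames.shape, "speaking" if speaking else "not speaking")
```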
Provide a detailed description of the following dataset: VVAD-LRS3
OpenViDial 2.0
OpenViDial 2.0 is a larger-scale open-domain multi-modal dialogue dataset compared to the previous version [OpenViDial 1.0](/dataset/openvidial). OpenViDial 2.0 contains a total number of 5.6 million dialogue turns extracted from either movies or TV series from different resources, and each dialogue turn is paired with its corresponding visual context. Image source: [https://github.com/ShannonAI/OpenViDial](https://github.com/ShannonAI/OpenViDial)
Provide a detailed description of the following dataset: OpenViDial 2.0
LEAKAGE-PERSONA Dataset
This is the synthetic dataset used for training a model that alerts users to potential leakage of personal information.
Provide a detailed description of the following dataset: LEAKAGE-PERSONA Dataset
Lincolnbeet
The Lincolnbeet dataset is an object detection dataset designed to encourage research on the identification of items in environments with high levels of occlusion, and on the development of better approaches to evaluate object detection models in practical scenarios. This dataset was introduced in the paper "Towards practical object detection for weed spraying in precision agriculture". The dataset contains 4,402 images of weed plants and sugar beets, localized with object detection labels. The image size is 1920 x 1080 pixels, and the labels are provided in COCO JSON, XML, and darknet formats.
Provide a detailed description of the following dataset: Lincolnbeet
MFAQ
**MFAQ** is a publicly available multilingual FAQ dataset. It contains around 6M FAQ pairs from the web, in 21 different languages. Although this is significantly larger than existing FAQ retrieval datasets, it comes with its own challenges: duplication of content and uneven distribution of topics.
Provide a detailed description of the following dataset: MFAQ
JDDC 2.0
**JDDC 2.0** is a large-scale multimodal multi-turn dialogue dataset collected from a mainstream Chinese E-commerce platform JD.com, containing about 246 thousand dialogue sessions, 3 million utterances, and 507 thousand images, along with product knowledge bases and image category annotations. The dataset is divided into the training set, the validation set, and the test set according to the ratio of 80%, 10%, and 10%.
Provide a detailed description of the following dataset: JDDC 2.0
MaRVL
**M**ulticultural **R**easoning over **V**ision and **L**anguage (MaRVL) is a dataset based on an ImageNet-style hierarchy representative of many languages and cultures (Indonesian, Mandarin Chinese, Swahili, Tamil, and Turkish). The selection of both concepts and images is entirely driven by native speakers. Afterwards, we elicit statements from native speakers about pairs of images. The task consists in discriminating whether each grounded statement is true or false.
Provide a detailed description of the following dataset: MaRVL
REFLACX
The REFLACX dataset contains eye-tracking data for 3,032 readings of chest x-rays by five radiologists. The dictated reports were transcribed and have timestamps synchronized with the eye-tracking data. Localization labels for abnormalities are very costly, and the collection of eye-tracking data and reports for implicit localization labels may be an alternative for scaling up data collection. One of the potential uses for these data is in additional supervision for training computer vision models. For more details, check the [Physionet page](https://physionet.org/content/reflacx-xray-localization/1.0.0/) and the dataset description paper ("REFLACX, a dataset of reports and eye-tracking data for localization of abnormalities in chest x-rays").
Provide a detailed description of the following dataset: REFLACX
EDGAR-CORPUS
EDGAR-CORPUS is a novel corpus comprising annual reports from all the publicly traded companies in the US spanning a period of more than 25 years. All the reports are downloaded, split into their corresponding items (sections), and provided in a clean, easy-to-use JSON format. Image source: [https://arxiv.org/pdf/2109.14394v1.pdf](https://arxiv.org/pdf/2109.14394v1.pdf)
Provide a detailed description of the following dataset: EDGAR-CORPUS
RAFT
The RAFT benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. RAFT is a few-shot classification benchmark that tests language models:
- across multiple domains (lit reviews, medical data, tweets, customer interaction, etc.)
- on economically valuable classification tasks (someone inherently cares about the task)
- with evaluation that mirrors deployment (50 labeled examples per task, info retrieval allowed, hidden test set)
Description from: [https://raft.elicit.org/](https://raft.elicit.org/) Image source: [https://raft.elicit.org/](https://raft.elicit.org/)
Provide a detailed description of the following dataset: RAFT
StoryDB
StoryDB is a broad multi-language dataset of narratives. StoryDB is a corpus of texts that includes stories in 42 different languages. Every language includes 500+ stories, and some languages include more than 20,000 stories. Every story is indexed across languages and labeled with tags such as genre or topic. The corpus shows rich topical and language variation and can serve as a resource for the study of the role of narrative in natural language processing across various languages, including low-resource ones.
Provide a detailed description of the following dataset: StoryDB
JARVIS-DFT
JARVIS-DFT is a repository of density functional theory based calculation data for materials.
Provide a detailed description of the following dataset: JARVIS-DFT
MiniHack
MiniHack is a sandbox framework for easily designing rich and diverse environments for Reinforcement Learning (RL). MiniHack includes a collection of example environments that can be used to test various capabilities of RL agents, as well as serve as building blocks for researchers wishing to develop their own environments. MiniHack's navigation tasks challenge the agent to reach the goal position by overcoming various difficulties on their way, such as fighting monsters in corridors, crossing a river by pushing boulders into it, navigating through complex, procedurally generated mazes, etc. MiniHack's skill acquisition tasks enable utilising the rich diversity of NetHack objects, monsters and dungeon features, and the interactions between them. The skill acquisition tasks feature a large action space (75 actions), where the actions are instantiated differently depending on which object they are acting on.
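Because MiniHack environments follow the Gym API, a minimal usage sketch (adapted from MiniHack's documented quickstart; `MiniHack-River-v0` is one of the navigation tasks mentioned above) might look like this:

```python
import gym
import minihack  # noqa: F401 -- importing registers the MiniHack environments with Gym

# River task: cross a river by pushing boulders into it.
env = gym.make("MiniHack-River-v0")
env.reset()  # each reset procedurally generates a new level
obs, reward, done, info = env.step(env.action_space.sample())
env.render()
```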
Provide a detailed description of the following dataset: MiniHack
SCIMAT
**SCIMAT** is a large question-answer dataset for mathematics and science problems; such a dataset can have an impact on online education, intelligent tutoring, and automated grading.
Provide a detailed description of the following dataset: SCIMAT
iShape
**iShape** is an irregular shape dataset for instance segmentation. iShape contains six sub-datasets, one real and five synthetic, each representing a scene of a typical irregular shape.
Provide a detailed description of the following dataset: iShape
Riedones3D
**Riedones3D** is a dataset of 2,070 scans of coins. With this dataset, the authors propose two benchmarks: one for point cloud registration, essential for coin die recognition, and one for coin die clustering.
Provide a detailed description of the following dataset: Riedones3D
Contextualised Polyseme Word Sense Dataset v2
This is a revised and extended second version of a Contextualised Polyseme Word Sense Dataset. The dataset contains two human annotated measures of word sense similarity for polysemic target words used in contexts invoking different sense interpretations. The first set contains graded similarity judgements for highlighted target words displayed in two different contexts. The second set contains co-predication acceptability judgements for sentence constructions combining the sentence pairs from the first set.
Provide a detailed description of the following dataset: Contextualised Polyseme Word Sense Dataset v2
DVSMOTION20
This dataset is designed to enhance the progress of event-based optical flow algorithms. The data was collected using the IniVation DAViS346 camera, which has a 346 x 260 spatial resolution. The dataset is classified into camera motion data (stationary scene and moving camera) and object motion data (stationary camera and moving objects). The camera motion data contains four real indoor sequences (namely, checkerboard, classroom, conference room, and conference room translation) with ground truth motion inferred from IMU. The movement of the camera in this category was restricted by a gimbal, and the IMU was calibrated before each collection. The object motion data includes two real sequences (called hands and cars) containing multiple object motions. This category does not have ground-truth motion since the object motion cannot be inferred from IMU.
Provide a detailed description of the following dataset: DVSMOTION20
BKAI-IGH NeoPolyp-Small
This dataset contains 1200 images (1000 WLI images and 200 FICE images) with fine-grained segmentation annotations. The training set consists of 1000 images, and the test set consists of 200 images. All polyps are classified into neoplastic or non-neoplastic classes denoted by red and green colors, respectively. This dataset is a part of a bigger dataset called NeoPolyp.
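A minimal sketch for decoding the color-coded annotations into class indices, assuming the masks are RGB images in which red marks neoplastic and green marks non-neoplastic polyps, as stated above; the file path is illustrative.

```python
import numpy as np
from PIL import Image

mask = np.array(Image.open("train_gt/0001.jpeg").convert("RGB"))
red, green = mask[..., 0], mask[..., 1]

label_map = np.zeros(mask.shape[:2], dtype=np.uint8)  # 0 = background
label_map[(red > 127) & (green <= 127)] = 1           # red   -> neoplastic
label_map[(green > 127) & (red <= 127)] = 2           # green -> non-neoplastic
```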
Provide a detailed description of the following dataset: BKAI-IGH NeoPolyp-Small
OV
# Description
The OV dataset is a camera calibration dataset. There are 16 lenses ranging from 90° to 180° FOV:
* 2012-A0 (single-plane target)
* 3136-H0 (single-plane target)
* 5501-C4 (single-plane target)
* 130108MP (single-plane target)
* ov00—ov07 (corner target)
* ov00—ov03 (cube target)
Note: We also provide the other datasets evaluated in the [BabelCalib](https://arxiv.org/abs/2109.09704) paper: [Kalibr](https://vision.in.tum.de/research/vslam/double-sphere), [OCamCalib](https://sites.google.com/site/scarabotix/ocamcalib-toolbox) and [UZH](https://fpv.ifi.uzh.ch/datasets/).
# Format
The datasets are in the Deltille format. The [Deltille detector](https://github.com/facebookarchive/deltille) is a robust deltille and checkerboard detector. It comes with a detector library, example detector code, and MATLAB bindings. [BabelCalib](https://ylochman.github.io/babelcalib) provides functions for calibration and evaluation using the Deltille software's outputs. Calibration from Deltille detections requires a format conversion, which is performed by [`import_ODT`](https://github.com/ylochman/babelcalib/blob/main/core/feature/import_ODT.m). Please see a complete [calibration example](https://github.com/ylochman/babelcalib/blob/main/calib_run_opt2.m) from Deltille data.
Provide a detailed description of the following dataset: OV
Refer-YouTube-VOS
There exist previous works [6, 10] that constructed referring segmentation datasets for videos. Gavrilyuk et al. [6] extended the A2D [33] and J-HMDB [9] datasets with natural sentences; these datasets focus on describing the 'actors' and 'actions' appearing in videos, so the instance annotations are limited to only a few object categories corresponding to the dominant 'actors' performing a salient 'action'. Khoreva et al. [10] built a dataset based on DAVIS [25], but its scale is barely sufficient to learn an end-to-end model from scratch.

Youtube-VOS has 4,519 high-resolution videos with 94 common object categories. Each video has pixel-level instance segmentation annotation at every 5 frames in 30-fps videos, and durations are around 3 to 6 seconds. We employed Amazon Mechanical Turk to annotate referring expressions. To ensure the quality of the annotations, we selected around 50 turkers after a validation test. Each turker was given a pair of videos, the original video and the mask-overlaid one with the target object highlighted, and was asked to provide a discriminative sentence within 20 words that describes the target object accurately. We collected two kinds of annotations, which describe the highlighted object (1) based on the whole video (full-video expression) and (2) using only the first frame of the video (first-frame expression). After the initial annotation, we conducted verification and cleaning jobs on all annotations, and dropped objects that could not be localized using language expressions alone. The following are the statistics and analysis of the two annotation types after verification.

**Full-video expression:** Youtube-VOS has 6,459 and 1,063 unique objects in the train and validation splits, respectively. Among them, we cover 6,388 unique objects in 3,471 videos (6,388/6,459 = 98.9%) with 12,913 expressions in the train split and 1,063 unique objects in 507 videos (1,063/1,063 = 100%) with 2,096 expressions in the validation split. On average, each video has 3.8 language expressions and each expression has 10.0 words.

**First-frame expression:** There are 6,006 unique objects in 3,412 videos (6,006/6,459 = 93.0%) with 10,897 expressions in the train split and 1,030 unique objects in 507 videos (1,030/1,063 = 96.9%) with 1,993 expressions in the validation split. The number of annotated objects is lower than for the full-video expressions because using only the first frame makes annotation more ambiguous and inconsistent, so we dropped more annotations during verification. On average, each video has 3.2 language expressions and each expression has 7.5 words.
Provide a detailed description of the following dataset: Refer-YouTube-VOS
Corn Seeds Dataset
This dataset contains images of corn seeds with the top and bottom views captured independently (two images per corn seed: top and bottom). There are four classes of corn seed (Broken-B, Discolored-D, Silkcut-S, and Pure-P). 17,802 images were labeled by the experts at AdTech Corp., and 26K images were unlabeled, out of which 9K images were labeled using Active Learning (BatchBALD). We have created three different datasets:
(1) Primary dataset: contains the 17,802 images labeled by the experts, top view (8,901) and bottom view (8,901).
(2) Dataset with fake images: we generated fake images using a conditional GAN (BigGAN) as follows: broken - 2,937, discolored - 5,823, pure - 2,937, and silkcut - 5,823 instances, and added them to the train set to balance the dataset.
(3) Balanced dataset: the 9,000 newly captured images labeled using the batch Active Learning method are added to the primary dataset. This new dataset contains 26,802 images (the 17,802 expert-labeled images plus the 9K images labeled via BatchBALD), split into train and validation sets 80:20.
Provide a detailed description of the following dataset: Corn Seeds Dataset
TLDR9+
TLDR9+ is a large-scale summarization dataset containing over 9 million training instances extracted from the Reddit discussion forum. This dataset is specifically gathered to perform extreme summarization (i.e., generating a one-sentence summary with high compression and abstraction) and is more than twice as large as the previously proposed dataset. With the help of human annotations, a more fine-grained dataset, called TLDRHQ, is distilled by sampling high-quality instances from TLDR9+. Image source: [https://arxiv.org/pdf/2110.01159v1.pdf](https://arxiv.org/pdf/2110.01159v1.pdf)
Provide a detailed description of the following dataset: TLDR9+
LexGLUE
The Legal General Language Understanding Evaluation (LexGLUE) benchmark is a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. Image source: [https://arxiv.org/pdf/2110.00976v1.pdf](https://arxiv.org/pdf/2110.00976v1.pdf)
Provide a detailed description of the following dataset: LexGLUE
Riseholme-2021
Riseholme-2021 contains >3.5K images of strawberries at various growth stages, along with anomalous instances. Data collection was performed at the strawberry research farm on the Riseholme campus of the University of Lincoln in the UK. For more details, please check out "Homepage" below.
Provide a detailed description of the following dataset: Riseholme-2021
KG20C
KG20C is a knowledge graph about high-quality papers from 20 top computer science conferences. It can serve as a standard benchmark dataset in scholarly data analysis for several tasks, including knowledge graph embedding, link prediction, recommendation systems, and question answering. For more information and download, please see the dataset homepage.
Provide a detailed description of the following dataset: KG20C
CCIHP
The CCIHP dataset is devoted to fine-grained description of people in the wild with localized & characterized semantic attributes. It contains 20 attribute classes and 20 characteristic classes split into 3 categories (size, pattern and color). The dataset was introduced in this paper: Loesch, A., & Audigier, R. (2021, September). Describe me if you can! Characterized instance-level human parsing. In 2021 IEEE International Conference on Image Processing (ICIP) (pp. 2528-2532). IEEE. The annotations were made with Pixano, an open-source smart annotation tool for computer vision applications: https://pixano.cea.fr/
Provide a detailed description of the following dataset: CCIHP
COVID-19 Contact Tracing Survey
A survey of Israelis about their attitudes towards COVID-19 contact tracing apps
Provide a detailed description of the following dataset: COVID-19 Contact Tracing Survey
COVID-19 Contact Tracing Survey in Israel
A survey of Israelis about their attitudes towards COVID-19 contact tracing apps
Provide a detailed description of the following dataset: COVID-19 Contact Tracing Survey in Israel
CaDIS
CaDIS: a Cataract Dataset for Image Segmentation is a dataset for semantic segmentation created by Digital Surgery Ltd. on top of the CATARACTS dataset. CaDIS consists of 4,670 images sampled from the 25 videos of the CATARACTS training set. Each pixel in each image is labeled with its respective instrument or anatomical class from a set of 36 identified classes. More details about the dataset can be found in the paper (https://arxiv.org/pdf/1906.11586.pdf).
Provide a detailed description of the following dataset: CaDIS
!Optimizer 2021 Data
The data used for !Optimizer 2021 competition, based on seven biological model organisms.
Provide a detailed description of the following dataset: !Optimizer 2021 Data
Galaxy Zoo DECaLS
Approx. 300,000 images of galaxies labelled by shape. Labels are from www.galaxyzoo.org volunteers, and are noisy. Images are from the DECaLS telescope survey. Also includes predictions from an ensemble of EfficientNets, each using MC Dropout and a novel probabilistic loss function.
Provide a detailed description of the following dataset: Galaxy Zoo DECaLS
FooDI-ML
Food Drinks and groceries Images Multi Lingual (FooDI-ML) is a dataset that contains over 1.5M unique images and over 9.5M store names, product names, descriptions, and collection sections gathered from the Glovo application. The data made available corresponds to food, drinks and groceries products from 37 countries in Europe, the Middle East, Africa and Latin America. The dataset encompasses 33 languages, including 870K samples in languages of countries from Eastern Europe and Western Asia, such as Ukrainian and Kazakh, which have so far been underrepresented in publicly available visiolinguistic datasets. The dataset also includes widely spoken languages such as Spanish and English. Description from: [FooDI-ML: a large multi-language dataset of food, drinks and groceries images and descriptions](https://arxiv.org/abs/2110.02035) Image source: [https://github.com/Glovo/foodi-ml-dataset](https://github.com/Glovo/foodi-ml-dataset)
Provide a detailed description of the following dataset: FooDI-ML
BRAX
Brax is a differentiable physics engine that simulates environments made up of rigid bodies, joints, and actuators. Brax is written in JAX and is designed for use on acceleration hardware. It is both efficient for single-device simulation, and scalable to massively parallel simulation on multiple devices, without the need for pesky datacenters. Description from: [https://github.com/google/brax](https://github.com/google/brax) Image source: [https://github.com/google/brax](https://github.com/google/brax)
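A minimal rollout sketch following the Brax (v1) Python API shown in the project README; entry points may differ between Brax versions, and the zero action is just a placeholder policy.

```python
import jax
import jax.numpy as jnp
from brax import envs

env = envs.create(env_name="ant")
state = jax.jit(env.reset)(rng=jax.random.PRNGKey(0))

step = jax.jit(env.step)  # JIT-compiled stepping runs well on accelerators
for _ in range(100):
    action = jnp.zeros(env.action_size)  # placeholder policy
    state = step(state, action)
print(state.reward)
```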
Provide a detailed description of the following dataset: BRAX
TOAD-GAN
A procedurally generated jump'n'run game with control over level similarity. Image source: [https://github.com/Mawiszus/TOAD-GAN](https://github.com/Mawiszus/TOAD-GAN)
Provide a detailed description of the following dataset: TOAD-GAN
RNADesign
An environment for RNA design given structure constraints with structures from different datasets to choose from.
Provide a detailed description of the following dataset: RNADesign
CARL
CARL (context adaptive RL) provides highly configurable contextual extensions to several well-known RL environments. It's designed to test your agent's generalization capabilities in all scenarios where intra-task generalization is important. Benchmarks include:
- [OpenAI gym classic control suite](/dataset/openai-gym) extended with several physics context features like gravity or friction
- [OpenAI gym Box2D](/dataset/openai-gym) BipedalWalker, LunarLander and CarRacing, each with their own modification possibilities like new vehicles to race
- All [Brax locomotion environments](/dataset/brax) with exposed internal features like joint strength or torso mass
- [Super Mario (TOAD-GAN)](/dataset/toad-gan), a procedurally generated jump'n'run game with control over level similarity
- [RNADesign](/dataset/rnadesign), an environment for RNA design given structure constraints with structures from different datasets to choose from
Description from: [CARL](https://github.com/automl/CARL) Image source: [https://github.com/automl/CARL](https://github.com/automl/CARL)
Provide a detailed description of the following dataset: CARL
WMT 2020
**WMT 2020** is a collection of datasets used in shared tasks of the Fifth Conference on Machine Translation. The conference builds on a series of annual workshops and conferences on Statistical Machine Translation. The conference featured ten shared tasks:
* a news translation task,
* a biomedical translation task,
* a similar language translation task,
* an unsupervised and very low resource translation task,
* an automatic post-editing task,
* a metrics task (assess MT quality given reference translation),
* a quality estimation task (assess MT quality without access to any reference),
* a parallel corpus filtering and alignment task,
* a lifelong learning in MT task,
* a chat translation task.
Provide a detailed description of the following dataset: WMT 2020
Multirotor-Gym
Multirotor gym environment for learning control policies for various unmanned aerial vehicles.
Provide a detailed description of the following dataset: Multirotor-Gym
PANC
PANC enables research on early detection of sexual predators in chats (eSPD). It is made from the sexual predator identification dataset from PAN12 and from the ChatCoder2 dataset. It provides both full-length predator chats from PervertedJustice and short segments of non-predator chats. Together these can be used to evaluate eSPD systems.
Provide a detailed description of the following dataset: PANC
STPLS3D
Our project (STPLS3D) aims to provide a large-scale aerial photogrammetry dataset with synthetic and real annotated 3D point clouds for semantic and instance segmentation tasks. Although various 3D datasets with different functions and scales have been proposed recently, it remains challenging for individuals to complete the whole pipeline of large-scale data collection, sanitization, and annotation (e.g., semantic and instance labels). Moreover, the created datasets usually suffer from extremely imbalanced class distribution or partial low-quality data samples. Motivated by this, we explore the procedurally synthetic 3D data generation paradigm to equip individuals with the full capability of creating large-scale annotated photogrammetry point clouds. Specifically, we introduce a synthetic aerial photogrammetry point clouds generation pipeline that takes full advantage of open geospatial data sources and off-the-shelf commercial packages. Unlike generating synthetic data in virtual games, where the simulated data usually have limited gaming environments created by artists, the proposed pipeline simulates the reconstruction process of the real environment by following the same UAV flight pattern on a wide variety of synthetic terrain shapes and building densities, which ensure similar quality, noise pattern, and diversity with real data. In addition, the precise semantic and instance annotations can be generated fully automatically, avoiding the expensive and time-consuming manual annotation process. Based on the proposed pipeline, we present a richly-annotated synthetic 3D aerial photogrammetry point cloud dataset, termed STPLS3D, with more than 16 km^2 of landscapes and up to 18 fine-grained semantic categories. For verification purposes, we also provide a parallel dataset collected from four areas in the real environment.
Provide a detailed description of the following dataset: STPLS3D
Nico-illust
This dataset contains over 400,000 images (illustrations) from Niconico Seiga and Niconico Shunga.
Provide a detailed description of the following dataset: Nico-illust
Restaurant-ACOS
The Restaurant-ACOS dataset is constructed based on the SemEval 2016 Restaurant dataset (Pontiki et al., 2016) and its expansion datasets (Fan et al., 2019; Xu et al., 2020). The SemEval 2016 Restaurant dataset (Pontiki et al., 2016) was annotated with explicit and implicit aspects, categories, and sentiment. Fan et al. (2019) and Xu et al. (2020) further added opinion annotations. We integrate their annotations to construct aspect-category-opinion-sentiment quadruples and further annotate the implicit opinions. The Restaurant-ACOS dataset contains 2,286 sentences with 3,658 quadruples. It is worth noting that Restaurant-ACOS is usable for all subtasks in ABSA, including aspect-based sentiment classification, aspect-sentiment pair extraction, aspect-opinion pair extraction, aspect-opinion sentiment triple extraction, aspect-category-sentiment triple extraction, etc.
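To make the annotation unit concrete, here is a hypothetical illustration of aspect-category-opinion-sentiment quadruples; the sentence and labels are invented for exposition (the category format follows the SemEval 2016 restaurant scheme), not taken from the corpus.

```python
sentence = "The pasta was delicious but the waiter was rude."
quadruples = [
    # (aspect term, category, opinion term, sentiment)
    ("pasta",  "FOOD#QUALITY",    "delicious", "positive"),
    ("waiter", "SERVICE#GENERAL", "rude",      "negative"),
]
for aspect, category, opinion, sentiment in quadruples:
    print(f"{aspect} | {category} | {opinion} | {sentiment}")
```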
Provide a detailed description of the following dataset: Restaurant-ACOS
Laptop-ACOS
Laptop-ACOS is a brand-new laptop dataset collected from the Amazon platform in the years 2017 and 2018 (covering ten types of laptops under six brands: ASUS, Acer, Samsung, Lenovo, MBP, and MSI). It contains 4,076 review sentences, much more than the SemEval Laptop datasets. For Laptop-ACOS, we annotated the four elements and their corresponding quadruples entirely ourselves. We employ the aspect categories defined in the SemEval 2016 Laptop dataset. The Laptop-ACOS dataset contains 4,076 sentences with 5,758 quadruples. As mentioned, a large percentage of the quadruples contain **implicit aspects or implicit opinions**. Comparing the two datasets, it can be observed that Laptop-ACOS has a higher percentage of implicit opinions than [Restaurant-ACOS](https://paperswithcode.com/dataset/restaurant-acos). It is worth noting that Laptop-ACOS is usable for all subtasks in ABSA, including aspect-based sentiment classification, aspect-sentiment pair extraction, aspect-opinion pair extraction, aspect-opinion sentiment triple extraction, aspect-category-sentiment triple extraction, etc.
Provide a detailed description of the following dataset: Laptop-ACOS
Interference suppression techniques for OPM-based MEG: Opportunities and challenges
OPM data:
1. Auditory evoked field paradigm during participant movement
2. Motor-beta power changes during a finger-tapping paradigm
Provide a detailed description of the following dataset: Interference suppression techniques for OPM-based MEG: Opportunities and challenges
Molecule3D
Molecule3D is a new benchmark that includes a dataset with precise ground-state geometries of approximately 4 million molecules derived from density functional theory (DFT). It also provides a set of software tools for data processing, splitting, training, and evaluation, etc.
Provide a detailed description of the following dataset: Molecule3D
aethel
A dataset of approximately 75,000 phrases and sentences, syntactically analyzed as typelogical derivations (i.e. proofs of modal intuitionistic linear logic, or programs of the corresponding λ calculus). Analyses were obtained by transforming the dependency graphs of the Lassy-Small corpus.
Provide a detailed description of the following dataset: aethel