dataset_name: string (2–128 chars)
description: string (1–9.7k chars)
prompt: string (59–185 chars)
Olivetti face
This dataset contains a set of face images taken between April 1992 and April 1994 at AT&T Laboratories Cambridge.
Provide a detailed description of the following dataset: Olivetti face
Sentence-level argument annotation
The dataset is based on a debate.org crawl. It is restricted to a subset of four out of the total 23 categories -- politics, society, economics and science -- and contains additional annotations. Three human annotators familiar with linguistics segmented these documents and labeled them as being of medium or low quality, in order to exclude the low-quality documents. The annotators were then asked to indicate the beginning of each new argument, and to label argumentative sentences summarizing the aspects of the post as conclusions or as outside of the argumentation. In this way, we obtained a ground truth of labeled arguments on the sentence level (Krippendorff's alpha=0.24, based on 20 documents and three annotators).
Provide a detailed description of the following dataset: Sentence-level argument annotation
debatepedia
Debatepedia is a debate platform that lists the arguments on a topic on a single page, with subtitles that structure the arguments into different aspects.
Provide a detailed description of the following dataset: debatepedia
debate.org
Debate.org is a debate platform that is organized in rounds where each of two opponents submits posts arguing for their side.
Provide a detailed description of the following dataset: debate.org
Student Essay
The Student Essay dataset is widely used in research on argument segmentation.
Provide a detailed description of the following dataset: Student Essay
Chaoyang
The Chaoyang dataset contains 1111 normal, 842 serrated, 1404 adenocarcinoma, and 664 adenoma samples for training, and 705 normal, 321 serrated, 840 adenocarcinoma, and 273 adenoma samples for testing. This noisy dataset was constructed in a real-world scenario.
- Details: colon slides from Chaoyang hospital; the patch size is 512 × 512. Three professional pathologists were invited to label the patches independently. The patches whose labels reached consensus among all three pathologists form the testing set; the others form the training set. For the training-set samples with inconsistent labeling opinions from the three doctors (about 40% of the training set), we randomly selected the opinion of one of the three doctors.
- The original WSIs are scanned at X20 objective magnification.
Provide a detailed description of the following dataset: Chaoyang
CRC
Request access: cadpath.ai@impdiagnostics.com The CRC dataset contains 1133 colorectal biopsy and polypectomy slides and is the result of our ongoing efforts to contribute to CRC diagnosis with a reference dataset. We aim to detect high-grade lesions with high sensitivity. High-grade lesions encompass conventional adenomas with high-grade dysplasia (including intra-mucosal carcinomas) and invasive adenocarcinomas. In addition, we also intend to identify low-grade lesions (corresponding to conventional adenomas with low-grade dysplasia). Accordingly, we created three diagnostic categories for the algorithm, labelled as non-neoplastic, low-grade and high-grade lesions.
Provide a detailed description of the following dataset: CRC
MEFB
The MEFB consists of a test set of 100 image pairs.
Provide a detailed description of the following dataset: MEFB
RISeC
We propose a newly annotated dataset for information extraction on recipes. Unlike previous approaches to machine comprehension of procedural texts, we avoid a priori pre-defining domain-specific predicates to recognize (e.g., the primitive instructions in MILK) and focus on basic understanding of the expressed semantics rather than directly reducing them to a simplified state representation. We thus frame the semantic comprehension of procedural text such as recipes as fairly generic NLP subtasks, covering (i) entity recognition (ingredients, tools and actions), (ii) relation extraction (what ingredients and tools are involved in the actions), and (iii) zero anaphora resolution (linking actions to implicit arguments, e.g., results from previous recipe steps). Further, our Recipe Instruction Semantic Corpus (RISeC) dataset includes textual descriptions for the zero anaphora, to facilitate language generation thereof. Besides the dataset itself, we contribute a pipeline neural architecture that addresses entity and relation extraction, as well as the identification of zero anaphora.
Provide a detailed description of the following dataset: RISeC
FALLMUD
FAscicle Lower Leg Muscle Ultrasound Dataset is a dataset composed of 812 ultrasound images of lower leg muscles to analyze muscle weaknesses and prevent injuries. It combines the datasets provided by two articles, β€œEstimating Full Regional Skeletal Muscle Fibre Orientation from B-Mode Ultrasound Images Using Convolutional, Residual, and Deconvolutional Neural Networks” published by Ryan Cunningham et al. and β€œAutomated Analysis of Musculoskeletal Ultrasound Images Using Deep Learning” published by Neil Cronin, with complementary annotations. The dataset has been introduced in this paper: Michard, H., Luvison, B., Pham, Q. C., Morales-Artacho, A. J., & Guilhem, G. (2021, August). AW-Net: automatic muscle structure analysis on B-mode ultrasound images for injury prevention. In Proceedings of the 12th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (pp. 1-9).
Provide a detailed description of the following dataset: FALLMUD
BCNB
Breast cancer (BC) has become the greatest threat to women's health worldwide. Clinically, identification of axillary lymph node (ALN) metastasis and other tumor clinical characteristics, such as ER, PR, and so on, is important for evaluating the prognosis and guiding the treatment of BC patients. Several studies have attempted to predict the ALN status and other tumor clinical characteristics from clinicopathological data and genetic testing scores. However, due to relatively poor predictive values and high genetic testing costs, these methods are often of limited use. Recently, deep learning (DL) has enabled rapid advances in computational pathology; DL can perform high-throughput feature extraction on medical images and analyze the correlation between primary tumor features and the above statuses. So far, there has been no research on preoperatively predicting ALN metastasis and other tumor clinical characteristics from WSIs of primary BC samples. Our paper introduces a new dataset of **Early Breast Cancer Core-Needle Biopsy WSI (BCNB)**, which includes core-needle biopsy whole slide images (WSIs) of early breast cancer patients and the corresponding clinical data. The WSIs have been examined and annotated by two independent and experienced pathologists blinded to all patient-related information. Based on this dataset, we have studied a deep learning algorithm for preoperatively predicting the metastatic status of ALN using multiple instance learning (MIL), and have achieved a best AUC of 0.831 on the independent test cohort. For more details, please review our [paper](https://arxiv.org/abs/2112.02222). **There are WSIs of 1058 patients, and only part of the tumor regions are annotated in the WSIs. Besides the WSIs, we also provide the clinical characteristics of each patient, which include age, tumor size, tumor type, ER, PR, HER2, HER2 expression, histological grading, surgical, Ki67, molecular subtype, number of lymph node metastases, and the metastatic status of the axillary lymph node (ALN). The dataset has been de-identified and does not contain any private patient information.** Based on this dataset, we have studied the prediction of the metastatic status of the axillary lymph node (ALN) in our [paper](https://arxiv.org/abs/2112.02222), which is a weakly supervised classification task. However, other research based on our dataset is also feasible, such as the prediction of histological grading, molecular subtype, HER2, ER, and PR. We do not limit the specific content of your research, and any research based on our dataset is welcome. **Please note that the dataset may only be used for education and research; usage for commercial or clinical applications is not allowed. The usage of this dataset must follow the [license](https://github.com/bupt-ai-cz/BALNMP#license).**
Provide a detailed description of the following dataset: BCNB
LHC Olympics 2020
These are the official datasets for the LHC Olympics 2020 Anomaly Detection Challenge. Each "black box" contains 1M events meant to be representative of actual LHC data. These events may include signal(s) and the challenge consists of finding these signals using the method of your choice. We have uploaded a total of THREE black boxes to be used for the challenge. In addition, we include a background sample of 1M events meant to aid in the challenge. The background sample consists of QCD dijet events simulated using Pythia8 and Delphes 3.4.1. Be warned that both the physics and the detector modeling for this simulation may not exactly reflect the "data" in the black boxes. For both background and black box data, events are selected using a single fat-jet (R=1) trigger with pT threshold of 1.2 TeV. These events are stored as pandas dataframes saved to compressed h5 format. For each event, all reconstructed particles are assumed to be massless and are recorded in detector coordinates (pT, eta, phi). More detailed information such as particle charge is not included. Events are zero padded to constant size arrays of 700 particles. The array format is therefore (Nevents=1M, 2100). For more information, including a complete description of the challenge and an example Jupyter notebook illustrating how to read and process the events, see the official LHC Olympics 2020 webpage here. UPDATE: November 23, 2020 Now that the challenge is over, we have uploaded the solutions to Black Boxes 1 and 3. They are simple ASCII files (events_LHCO2020_BlackBox1.masterkey and events_LHCO2020_BlackBox3.masterkey) where each line is the truth label -- 0 for background and 1 (and 2 in the case of BB3) for signal -- of each event in the corresponding h5 files (same ordering). For more information about the solutions, please visit the LHCO2020 webpage. UPDATE: February 11, 2021 We have uploaded the Delphes detector cards and Pythia command files used to produce the Black Box datasets.
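As a minimal reading sketch (the file name below is an assumption; substitute the actual background or black-box h5 file you downloaded), the flat (Nevents, 2100) layout described above can be unpacked into per-particle (pT, eta, phi) triplets like this:

```python
import pandas as pd

# Read the first 10,000 events; each h5 file stores one pandas dataframe with
# one event per row (700 zero-padded particles x 3 coordinates = 2100 columns).
df = pd.read_hdf("events_LHCO2020_BlackBox1.h5", stop=10000)

events = df.to_numpy().reshape(-1, 700, 3)   # (event, particle, [pT, eta, phi])
pt, eta, phi = events[..., 0], events[..., 1], events[..., 2]

mask = pt > 0                                 # drop the zero padding
print(events.shape, mask.sum(axis=1).mean())  # shape and mean multiplicity
```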
Provide a detailed description of the following dataset: LHC Olympics 2020
SSD
SSD (Sub-slot Dialog) dataset: This is the dataset for the ACL 2022 paper "A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots". [arxiv](https://arxiv.org/pdf/2203.10759.pdf)
Provide a detailed description of the following dataset: SSD
Banglish
A Bilingual Dataset for Bangla and English Voice Commands. Colloquial Bangla has adopted many English words due to colonial influence. In conversational Bangla, it is quite common to speak in a mixture of English and Bangla. This phenomenon, prevalent in conversational language, is known as code-switching (CS). CS is defined as the continuous alternation between two languages in a single conversation. Thus, in Bangla natural language processing, it is often necessary to map a single base command to its many different variants, spoken in multiple mixtures of English and Bangla. In order to facilitate this, we have curated a dataset centered around common browser commands.
Provide a detailed description of the following dataset: Banglish
washed_contract
The dataset contains about 48K contracts whose source code is openly available on Etherscan.
Provide a detailed description of the following dataset: washed_contract
TekGen
This dataset is part of the KELM corpus. It is the Wikipedia text--Wikidata KG aligned corpus used to train the data-to-text generation model. Please note that this corpus was generated with distant supervision and should not be used as a gold standard for evaluation. It consists of 3 files:
https://storage.googleapis.com/gresearch/kelm-corpus/updated-2021/quadruples-train.tsv
https://storage.googleapis.com/gresearch/kelm-corpus/updated-2021/quadruples-validation.tsv
https://storage.googleapis.com/gresearch/kelm-corpus/updated-2021/quadruples-test.tsv
Each file contains one example per line. Each example is a JSON object with three fields:
- triples: a list of triples of the form (subject, relation, object), e.g., (Person X, award received, Award Y). If the triple has a subproperty, it is a quadruple instead, e.g., (Person X, Award Y, received on, Date Z).
- serialized triples: the triples concatenated together as used for input to T5. The format is "<subject> <relation> <object>", where some subjects have multiple relations, e.g., "<subject> <relation1> <object1> <relation2> <object2> <relation3> <object3>". For more details on how these relations are grouped, please refer to the paper.
- sentence: the Wikipedia sentence aligned to these triples.
The names, aliases and Wikidata IDs of the entities can be found in https://storage.googleapis.com/gresearch/kelm-corpus/updated-2021/entities.jsonl.
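A minimal parsing sketch, assuming each line holds one standalone JSON object with the three fields described above (the exact key names are an assumption and should be verified against the downloaded files):

```python
import json

# Iterate over the first few TekGen examples; each line of the .tsv file is
# assumed to be a JSON object with "triples" and "sentence" keys.
with open("quadruples-train.tsv", encoding="utf-8") as f:
    for i, line in enumerate(f):
        example = json.loads(line)
        print(example["triples"])   # e.g. [["Person X", "award received", "Award Y"]]
        print(example["sentence"])  # the aligned Wikipedia sentence
        if i == 2:                  # look at the first three examples only
            break
```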
Provide a detailed description of the following dataset: TekGen
DaReCzech
## DaReCzech

DaReCzech is a dataset for text relevance ranking in Czech. The dataset consists of more than 1.6M annotated query-document pairs, which makes it one of the largest available datasets for this task.

### Obtaining the Annotated Data

Please first read [a disclaimer](https://github.com/seznam/DaReCzech/blob/master/disclaimer.md) that contains the terms of use. If you comply with them, send an email to [srch.vyzkum@firma.seznam.cz](mailto:srch.vyzkum@firma.seznam.cz) and the link to the dataset will be sent to you.
Provide a detailed description of the following dataset: DaReCzech
MAD
MAD (Movie Audio Descriptions) is an automatically curated large-scale dataset for the task of natural language grounding in videos, or natural language moment retrieval. MAD exploits available audio descriptions of mainstream movies. Such audio descriptions are written for visually impaired audiences and are therefore highly descriptive of the visual content being displayed. MAD contains over 384,000 natural language sentences grounded in over 1,200 hours of video, and provides a unique setup for video grounding, as the visual stream is truly untrimmed with an average video duration of 110 minutes, two orders of magnitude longer than legacy datasets. Take a look at the paper for additional information. From the authors on availability: "Due to copyright constraints, MAD's videos will not be publicly released. However, we will provide all necessary features for our experiments' reproducibility and promote future research in this direction"
Provide a detailed description of the following dataset: MAD
Crowd 11
This dataset defines a total of 11 crowd motion patterns and is composed of over 6000 video sequences with an average length of 100 frames per sequence. This documentation presents how to download and process the Crowd-11 dataset. If you use this dataset, please cite our paper:

```
Camille Dupont, Luis Tobias, and Bertrand Luvison. "Crowd-11: A Dataset for Fine Grained Crowd Behaviour Analysis." In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
```

Since this dataset is a composition of web videos and already existing datasets, we ask you to download and accept the licence of each source and dataset. The construction of the Crowd-11 dataset is composed of two steps:

# Step 1: Retrieve videos of interest from the web and/or pre-existing datasets

## Retrieve the pre-existing datasets of interest

The pre-existing datasets are:

| DATASET NAME | url | $SOURCE_NAME |
| ------------- |:---:| ------------:|
| UMN | http://mha.cs.umn.edu/proj_events.shtml#crowd | umn |
| AGORASET | https://www.sites.univ-rennes2.fr/costel/corpetti/agoraset/Site/AGORASET.html | agoraset |
| PETS | http://www.cvg.reading.ac.uk/PETS2009/a.html#s3 | pets |
| HOCKEY FIGHT | http://visilab.etsii.uclm.es/personas/oscar/FightDetection/ | hockey |
| MOVIES | http://visilab.etsii.uclm.es/personas/oscar/FightDetection/ | peliculas |
| CUHK | http://www.ee.cuhk.edu.hk/~jshao/CUHKcrowd_files/cuhk_crowd_dataset.htm | cuhk |
| WWW | http://www.ee.cuhk.edu.hk/~jshao/WWWCrowdDataset.html | www |
| WORLDEXPO'10 CROWD COUNTING | http://www.ee.cuhk.edu.hk/~xgwang/expo.html | shanghai |
| VIOLENT-FLOWS | http://www.openu.ac.il/home/hassner/data/violentflows/ | violent_flow |

These datasets should be stored in their "existing_datasets/$SOURCE_NAME/" folder:

```
.
└── existing_datasets
    ├── agoraset
    ├── cuhk
    ├── hockey
    ├── peliculas
    │   ├── fights
    │   └── noFights
    ├── pets
    ├── shanghai
    ├── umn
    ├── violent_flow
    └── www
```

## Copy the videos of interest from the datasets of interest

The list of the videos of interest is in existing_datasets_urls.csv. To extract them into the VOI folder, execute:

```
python existing_datasets_gathering.py
```

The VOI folder should have the following structure:

```
.
└── VOI
    ├── agoraset
    ├── cuhk
    ├── hockey
    ├── peliculas
    ├── pets
    ├── shanghai
    ├── umn
    ├── violent_flow
    └── www
```

## Download the videos of interest from the web

The web sources are:

| SOURCE NAME | url | $SOURCE_NAME |
| ------------- |:---:| ------------:|
| YOUTUBE | https://www.youtube.com/ | youtube |
| GETTYIMAGES | http://www.gettyimages.fr/ | gettyimages |
| POND5 | https://www.pond5.com/ | pond5 |

The list of the web urls to download is in web_urls.csv. The web_urls.csv file's structure is as follows:

| $SOURCE_NAME | URL | OUTPUT_NAME | TS_MULTIPLIER |
| ------------- |:---:| ------------:| --------------:|

We do not provide the script to download them, but many tools exist to do it (pytube, urllib, etc...). Note: a few videos have a ts_multiplier field. These videos are in slow motion and the ts_multiplier is provided to speed them up (cf. SETPTS option in avconv). The downloaded videos should be stored in their VOI/$SOURCE_NAME folder, which should now have the following structure:

```
.
└── VOI
    ├── agoraset
    ├── cuhk
    ├── gettyimages
    ├── hockey
    ├── peliculas
    ├── pets
    ├── pond5
    ├── shanghai
    ├── umn
    ├── violent_flow
    ├── youtube
    └── www
```

# Step 2: Processing original videos into the Crowd-11 dataset

Once the VOI folder is complete, a preprocessing step is required in order to crop and trim the original videos into the Crowd-11 dataset. The preprocessing.csv file's structure is as follows:

| Videoname | Label | Frame_start | Frame_end | Top_left | Top_right | Width | Height | $SOURCE_NAME | Scene_number | Crop_number |
| --------- |:-----:| -----------:| ---------:| --------:| ---------:| -----:| ------:| ------------:| ------------:| -----------:|

Installation: you need to have avconv installed:

```
sudo apt-get install libav-tools   # provides the avconv binary
```

Then, you need to install several Python packages. A virtualenv installation is recommended:

```
virtualenv -p python3 py
source py/bin/activate
pip install sk-video
```

Execution (in the virtualenv):

```
python script_formating.py
```
Provide a detailed description of the following dataset: Crowd 11
AAAC
DeepA2 is a modular framework for deep argument analysis. DeepA2 datasets contain comprehensive logical reconstructions of informally presented arguments in short argumentative texts. This item references two synthetic DeepA2 datasets for artificial argument analysis: AAAC01 and AAAC02.
Provide a detailed description of the following dataset: AAAC
SuperCaustics
SuperCaustics is a real-time, open-source simulation tool built in Unreal Engine for generating massive computer vision datasets that include transparent objects, designed for deep learning applications. SuperCaustics features extensive modules for stochastic environment creation; uses hardware ray-tracing to support caustics, dispersion, and refraction; and enables generating massive datasets with multi-modal, pixel-perfect ground truth annotations. <p align="center"> <img src="https://github.com/MMehdiMousavi/SuperCaustics/raw/main/Assets/SuperCaustics.gif" alt="drawing" width="600"/> </p>
Provide a detailed description of the following dataset: SuperCaustics
Supplementary data: "Revealing drivers and risks for power grid frequency stability with explainable AI"
This repository contains processed data and result files for the paper "Revealing drivers and risks for power grid frequency stability with explainable AI".
Provide a detailed description of the following dataset: Supplementary data: "Revealing drivers and risks for power grid frequency stability with explainable AI"
Pre-Processed Power Grid Frequency Time Series
This repository contains ready-to-use frequency time series as well as the corresponding pre-processing scripts in Python.
Provide a detailed description of the following dataset: Pre-Processed Power Grid Frequency Time Series
PartImageNet
**PartImageNet** is a large, high-quality dataset with part segmentation annotations. It consists of 158 classes from [ImageNet](/dataset/imagenet) with approximately 24000 images. PartImageNet offers part-level annotations on a general set of classes with non-rigid, articulated objects, while having an order of magnitude larger size compared to existing datasets. It can be utilized in multiple vision tasks including but not limited to: Part Discovery, Semantic Segmentation, Few-shot Learning.
Provide a detailed description of the following dataset: PartImageNet
CDC fluview
Country- and state-level historical influenza-like illness (ILI) data from 2010 to 2018, obtained from the CDC.
Provide a detailed description of the following dataset: CDC fluview
MDBD
In order to study the interaction of several early visual cues (luminance, color, stereo, motion) during boundary detection in challenging natural scenes, we have built a multi-cue video dataset composed of short binocular video sequences of natural scenes using a consumer-grade Fujifilm stereo camera (Mély, Kim, McGill, Guo and Serre, 2016). We considered a variety of places (from university campuses to street scenes and parks) and seasons to minimize possible biases. We attempted to capture more challenging scenes for boundary detection by framing a few dominant objects in each shot under a variety of appearances. The dataset contains 100 scenes, each consisting of a left and right view short (10-frame) color sequence. Each sequence was sampled at a rate of 30 frames per second. Each frame has a resolution of 1280 by 720 pixels.
Provide a detailed description of the following dataset: MDBD
NASA C-MAPSS-2
The generation of data-driven prognostics models requires the availability of datasets with run-to-failure trajectories. In order to contribute to the development of these methods, this dataset provides a new realistic set of run-to-failure trajectories for a small fleet of aircraft engines under realistic flight conditions. The damage propagation modelling used for the generation of this synthetic dataset builds on the modeling strategy from previous work. The dataset was generated with the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dynamical model. The dataset is provided by the Prognostics CoE at NASA Ames in collaboration with ETH Zurich and PARC.
Provide a detailed description of the following dataset: NASA C-MAPSS-2
INSTANCE
INSTANCE is a data collection of more than 1.3 million seismic waveforms originating from a selection of about 54,000 earthquakes that occurred since 2005 in Italy and surrounding regions, together with seismic noise recordings randomly extracted from event-free time windows of the continuous waveform archive. The purpose is to provide reference datasets useful for developing and testing seismic data processing routines based on machine learning and deep learning frameworks. The primary source of this information is ISIDe (Italian Seismological Instrumental and Parametric Data-Base) for earthquakes and the Italian node of EIDA (http://eida.ingv.it) for seismic data. All the waveforms have been sized to a 120 s window, preprocessed and resampled at 100 Hz. For each trace we provide a large number of parameters as metadata, either derived from event information or computed from the trace data. The associated metadata allow for the identification of the source, the station, and the path travelled by the seismic waves, and for assessment of the trace quality. The total size of the data collection is about 330 GB. Waveform files are available either in counts or in ground motion units, in HDF5 format to facilitate fast access from commonly used machine learning frameworks.
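As a quick exploratory sketch (the file name is an assumption, and the internal group/dataset layout should be checked against the data release documentation), the HDF5 waveform files can be inspected with h5py:

```python
import h5py

# List the internal structure of one waveform file; each trace is a 120 s
# window resampled at 100 Hz, i.e. 12,000 samples per channel.
with h5py.File("instance_events_counts.hdf5", "r") as f:
    f.visit(print)  # print every group/dataset name in the file
```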
Provide a detailed description of the following dataset: INSTANCE
GIF Reply Dataset
The released GIF Reply dataset contains 1,562,701 real text-GIF conversation turns on Twitter. In these conversations, 115,586 unique GIFs are used. Metadata, including OCR-extracted text, annotated tags, and object names, are also available for some GIFs in this dataset. For more details about the dataset, check out our GitHub repo (https://github.com/xingyaoww/gif-reply/) and our paper (https://arxiv.org/pdf/2109.12212.pdf).
Provide a detailed description of the following dataset: GIF Reply Dataset
JUSTICE
The dataset contains 3304 cases from the Supreme Court of the United States from 1955 to 2021. Each case has the case's identifiers as well as the facts of the case and the decision outcome. Other related datasets rarely include the facts of the case, which can be helpful in natural language processing applications. One potential use case of this dataset is determining the outcome of a case from its facts.
Provide a detailed description of the following dataset: JUSTICE
MultiviewC
The MultiviewC dataset mainly contributes to multiview cattle action recognition, 3D object detection and tracking. We built the novel synthetic dataset MultiviewC in UE4, based on a real cattle video dataset offered by CSIRO. The format of our dataset has been adjusted on the basis of MultiviewX in terms of set-up, annotation, and file structure.
Provide a detailed description of the following dataset: MultiviewC
VFP290K
Vision-based Fallen Person (VFP290K) is a novel, large-scale dataset for the detection of fallen persons composed of fallen person images collected in various real-world scenarios. VFP290K consists of 294,714 frames of fallen persons extracted from 178 videos, including 131 scenes in 49 locations.
Provide a detailed description of the following dataset: VFP290K
Adressa
The Adressa Dataset is a news dataset that includes news articles (in Norwegian) in connection with anonymized users. We hope that this dataset will be helpful in achieving a better understanding of news articles in conjunction with their readers. This dataset is published in collaboration between the Norwegian University of Science and Technology (NTNU) and Adressavisen (a local newspaper in Trondheim, Norway) as part of the RecTech project on recommendation technology. For further details of the project and the dataset, please refer to the paper mentioned below for citations.
Provide a detailed description of the following dataset: Adressa
PsyQA
**PsyQA** is a Chinese dataset for generating long counseling text for mental health support.
Provide a detailed description of the following dataset: PsyQA
CALVIN
CALVIN (Composing Actions from Language and Vision) is an open-source simulated benchmark for learning long-horizon language-conditioned robot manipulation tasks.
Provide a detailed description of the following dataset: CALVIN
BIG-bench
The **Beyond the Imitation Game Benchmark** (BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities. BIG-bench includes more than 200 tasks. Image source: [https://arxiv.org/pdf/2206.04615.pdf](https://arxiv.org/pdf/2206.04615.pdf)
Provide a detailed description of the following dataset: BIG-bench
PubMed Cognitive Control Abstracts
A collection of 385,705 scientific abstracts about Cognitive Control and their GPT-3 embeddings.
Provide a detailed description of the following dataset: PubMed Cognitive Control Abstracts
MIT-BIH Malignant Ventricular Ectopy Database (VFDB)
This database includes 22 half-hour ECG recordings of subjects who experienced episodes of sustained ventricular tachycardia, ventricular flutter, and ventricular fibrillation. The reference annotation (.atr) files contain only rhythm labels (no beat labels).
Provide a detailed description of the following dataset: MIT-BIH Malignant Ventricular Ectopy Database (VFDB)
AF Classification from a Short Single Lead ECG Recording - The PhysioNet Computing in Cardiology Challenge 2017
The 2017 PhysioNet/CinC Challenge aims to encourage the development of algorithms to classify, from a single short ECG lead recording (between 30 s and 60 s in length), whether the recording shows normal sinus rhythm, atrial fibrillation (AF), an alternative rhythm, or is too noisy to be classified.
Provide a detailed description of the following dataset: AF Classification from a Short Single Lead ECG Recording - The PhysioNet Computing in Cardiology Challenge 2017
MultiSports
Spatio-temporal action detection is an important and challenging problem in video understanding. The existing action detection benchmarks are limited to small numbers of instances in trimmed videos or to low-level atomic actions. This paper aims to present a new multi-person dataset of spatio-temporal localized sports actions, coined as MultiSports. We first analyze the important ingredients of constructing a realistic and challenging dataset for spatio-temporal action detection by proposing three criteria: (1) multi-person scenes and motion-dependent identification, (2) well-defined boundaries, (3) relatively fine-grained classes of high complexity. Based on these guidelines, we build the dataset of MultiSports v1.0 by selecting 4 sports classes, collecting 3200 video clips, and annotating 37701 action instances with 902k bounding boxes. Our dataset is characterized by the important properties of high diversity, dense annotation, and high quality. Our MultiSports, with its realistic setting and detailed annotations, exposes the intrinsic challenges of spatio-temporal action detection. We hope our MultiSports can serve as a standard benchmark for spatio-temporal action detection in the future.
Provide a detailed description of the following dataset: MultiSports
ECG in High Intensity Exercise Dataset
The data presented here was extracted from a larger dataset collected through a collaboration between the Embedded Systems Laboratory (ESL) of the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland and the Institute of Sports Sciences of the University of Lausanne (ISSUL). In this dataset, we report the extracted segments used for an analysis of R peak detection algorithms during high intensity exercise.

Protocol of the experiments
The protocol of the experiment was the following: 22 subjects performed a cardio-pulmonary maximal exercise test on a cycle ergometer, using a gas mask. A single-lead electrocardiogram (ECG) was measured using the BIOPAC system. An initial 3 min of rest was recorded. After this baseline, the subjects started cycling at a power of 60 W or 90 W depending on their fitness level. Then, the power of the cycle ergometer was increased by 30 W every 3 min until exhaustion (in terms of maximum oxygen uptake, or VO2max). Finally, physiology experts assessed the so-called ventilatory thresholds and the VO2max based on the pulmonary data (volume of oxygen and CO2).

Description of the extracted dataset
The characteristics of the dataset are the following:
- We report only 20 out of the 22 subjects, because for two subjects the signals were too corrupted or incomplete. Specifically, subjects 5 and 12 were discarded.
- The ECG signal was sampled at 500 Hz and then downsampled to 250 Hz.
- The original ECG signals were measured at a maximum of 10 mV. They were then scaled down by a factor of 1000, hence the data is represented in uV.
- For each subject, 5 segments of 20 s were extracted from the ECG recordings, chosen based on different phases of the maximal exercise test (i.e., before and after the so-called second ventilatory threshold or VT2, before and in the middle of VO2max, and during the recovery after exhaustion) to represent different intensities of physical activity:
  - seg1 --> [VT2-50, VT2-30]
  - seg2 --> [VT2+60, VT2+80]
  - seg3 --> [VO2max-50, VO2max-30]
  - seg4 --> [VO2max-10, VO2max+10]
  - seg5 --> [VO2max+60, VO2max+80]
- The R peak locations were manually annotated in all segments and reviewed by a physician of the Lausanne University Hospital, CHUV. Only segment 5 of subject 9 could not be annotated, since there was a problem with the input signal. So, the total number of segments extracted was 20 * 5 - 1 = 99.

Format of the extracted dataset
The dataset is divided into two main folders:
- The folder `ecg_segments/` contains the ECG signals saved in two formats, `.csv` and `.mat`. This folder includes both raw (`ecg_raw`) and processed (`ecg`) signals. The processing consists of a morphological filtering and a relative-energy non-linear filtering method to enhance the R peaks. The `.csv` files contain only the signal, while the `.mat` files include the signal, the time vector within the maximal stress test, the sampling frequency and the unit of the signal amplitude (uV, as mentioned before).
- The folder `manual_annotations/` contains the sample indices of the annotated R peaks in `.csv` format. The annotation was done on the processed signals.
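A minimal loading sketch, assuming per-segment file names of the form used below (the actual names inside `ecg_segments/` and `manual_annotations/` may differ):

```python
import numpy as np

fs = 250  # Hz, after downsampling from 500 Hz

# Processed ECG signal (uV) and the manually annotated R-peak sample indices;
# both file names are illustrative assumptions.
ecg = np.loadtxt("ecg_segments/subject01_seg1.csv", delimiter=",")
r_peaks = np.loadtxt("manual_annotations/subject01_seg1.csv",
                     delimiter=",", dtype=int)

rr = np.diff(r_peaks) / fs  # RR intervals in seconds
print("mean heart rate: %.1f bpm" % (60.0 / rr.mean()))
```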
Provide a detailed description of the following dataset: ECG in High Intensity Exercise Dataset
Concepticon
This resource, our Concepticon, links concept labels from different concept lists to concept sets. Each concept set is given a unique identifier, a unique label, and a human-readable definition. Concept sets are further structured by defining different relations between the concepts, for example the relations between the concept sets linked to the concept set SIBLING. The resource can be used for various purposes. Serving as a rich reference for new and existing databases in diachronic and synchronic linguistics, it allows researchers quick access to studies on semantic change, cross-linguistic polysemies, and semantic associations.
Provide a detailed description of the following dataset: Concepticon
BOVText
BOVText (Bilingual, Open World Video Text) is a new large-scale benchmark dataset, the first large-scale and multilingual benchmark for video text spotting in a variety of scenarios. All data are collected from KuaiShou and YouTube.
Provide a detailed description of the following dataset: BOVText
HIU-DMTL-Data
See the paper for more details.
Provide a detailed description of the following dataset: HIU-DMTL-Data
Aircraft Context Dataset
The Aircraft Context Dataset, a composition of two inter-compatible large-scale and versatile image datasets focusing on manned aircraft and UAVs, is intended for training and evaluating classification, detection and segmentation models in aerial domains. Additionally, a set of relevant meta-parameters can be used to quantify dataset variability as well as the impact of environmental conditions on model performance.
Provide a detailed description of the following dataset: Aircraft Context Dataset
PTR
**PTR** is a new large-scale diagnostic visual reasoning dataset for research around part-based conceptual, relational and physical reasoning. PTR contains around 70k RGBD synthetic images with ground truth object- and part-level annotations regarding semantic instance segmentation, color attributes, spatial and geometric relationships, and certain physical properties such as stability. These images are paired with 700k machine-generated questions covering various types of reasoning.
Provide a detailed description of the following dataset: PTR
RealFaceDB
The dataset contains patches of facial reflectance as described in the paper, namely the diffuse albedo, diffuse normals, specular albedo, specular normals, as well as the shape in UV space. For the shape, reconstructed meshes have been registered to a common topology and the XYZ values of the points have been mapped to the RGB in UV coordinates and interpolated to complete the UV map. From the complete UV maps of 6144x4096 pixels, patches of 512x512 pixels have been sampled. The dataset contains 7500 such patches (1500 of each datatype) that are anonymized, randomized and sampled so that they do not contain identifiable features. To obtain access to the dataset, you need to complete and sign a licence agreement, which should be completed by a full-time academic staff member (not a student). To obtain the licence agreement and the dataset please send an email to Alexandros Lattas (a.lattas@imperial.ac.uk) and Stylianos Moschoglou (s.moschoglou@imperial.ac.uk). Please contact us through your academic email and include your name and position. We will verify your request and contact you regarding how to download the dataset. Note that the agreement requires that: The data must be used for non-commercial research and education purposes only. You agree not to copy, sell, trade, or exploit the model for any commercial purposes. You must destroy the data after 2 years since the first download. If you will be publishing any work using this dataset, please cite the following paper.
Provide a detailed description of the following dataset: RealFaceDB
Yelp-Fraud
Yelp-Fraud is a multi-relational graph dataset built upon the [Yelp spam review dataset](http://odds.cs.stonybrook.edu/yelpchi-dataset/), which can be used in evaluating graph-based node classification, fraud detection, and anomaly detection models.

- **Dataset Statistics**

| # Nodes | %Fraud Nodes (Class=1) |
|---------|------------------------|
| 45,954  | 14.5                   |

| Relation | # Edges   |
|----------|-----------|
| R-U-R    | 49,315    |
| R-T-R    | 573,616   |
| R-S-R    | 3,402,743 |
| All      | 3,846,979 |

- **Graph Construction**

The Yelp spam review dataset includes hotel and restaurant reviews filtered (spam) and recommended (legitimate) by Yelp. We conduct a spam review detection task on the Yelp-Fraud dataset, which is a binary classification task. We take 32 handcrafted features from the [SpEagle](http://shebuti.com/wp-content/uploads/2016/06/15-kdd-collectiveopinionspam.pdf) paper as the raw node features for Yelp-Fraud. Based on previous studies which show that opinion fraudsters have connections in user, product, review text, and time, we take reviews as nodes in the graph and design three relations: **1) R-U-R:** it connects reviews posted by the same user; **2) R-S-R:** it connects reviews under the same product with the same star rating (1-5 stars); **3) R-T-R:** it connects two reviews under the same product posted in the same month. To download the dataset, please visit [this](https://github.com/YingtongDou/CARE-GNN) Github repo. For any other questions, please email ytongdou(AT)gmail.com for inquiry.
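As an illustrative sketch of how the three relations combine into one multigraph (the edge-list file names and format below are assumptions, not part of the official release; see the CARE-GNN repo for the actual loaders):

```python
import networkx as nx

# Build one multigraph whose parallel edges are tagged with their relation.
G = nx.MultiGraph()
for relation in ("R-U-R", "R-T-R", "R-S-R"):
    # Hypothetical whitespace-separated edge lists, one file per relation.
    with open(f"yelp_{relation.replace('-', '').lower()}_edges.txt") as f:
        for line in f:
            u, v = map(int, line.split())
            G.add_edge(u, v, relation=relation)

print(G.number_of_nodes(), G.number_of_edges())  # ~45,954 nodes, ~3.8M edges
```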
Provide a detailed description of the following dataset: Yelp-Fraud
Amazon-Fraud
Amazon-Fraud is a multi-relational graph dataset built upon the [Amazon review dataset](https://jmcauley.ucsd.edu/data/amazon/), which can be used in evaluating graph-based node classification, fraud detection, and anomaly detection models.

- **Dataset Statistics**

| # Nodes | %Fraud Nodes (Class=1) |
|---------|------------------------|
| 11,944  | 9.5                    |

| Relation | # Edges   |
|----------|-----------|
| U-P-U    | 175,608   |
| U-S-U    | 3,566,479 |
| U-V-U    | 1,036,737 |
| All      | 4,398,392 |

- **Graph Construction**

The Amazon dataset includes product reviews under the Musical Instruments category. Similar to this [paper](https://arxiv.org/abs/2005.10150), we label users with more than 80% helpful votes as benign entities and users with less than 20% helpful votes as fraudulent entities. We conduct a fraudulent user detection task on the Amazon-Fraud dataset, which is a binary classification task. We take 25 handcrafted features from this [paper](https://arxiv.org/abs/2005.10150) as the raw node features for Amazon-Fraud. We take users as nodes in the graph and design three relations: **1) U-P-U:** it connects users reviewing at least one same product; **2) U-S-U:** it connects users having at least one same star rating within one week; **3) U-V-U:** it connects users with top 5% mutual review text similarities (measured by TF-IDF) among all users. To download the dataset, please visit [this](https://github.com/YingtongDou/CARE-GNN) Github repo. For any other questions, please email ytongdou(AT)gmail.com for inquiry.
Provide a detailed description of the following dataset: Amazon-Fraud
ForgeryNet
We construct the ForgeryNet dataset, an extremely large face forgery dataset with unified annotations in image- and video-level data across four tasks: 1) Image Forgery Classification, including two-way (real / fake), three-way (real / fake with identity-replaced forgery approaches / fake with identity-remained forgery approaches), and n-way (real and 15 respective forgery approaches) classification; 2) Spatial Forgery Localization, which segments the manipulated area of fake images compared to their corresponding source real images; 3) Video Forgery Classification, which re-defines video-level forgery classification with manipulated frames in random positions (this task is important because attackers in the real world are free to manipulate any target frame); and 4) Temporal Forgery Localization, to localize the temporal segments which are manipulated. ForgeryNet is by far the largest publicly available deep face forgery dataset in terms of data scale (2.9 million images, 221,247 videos), manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations) and annotations (6.3 million classification labels, 2.9 million manipulated-area annotations and 221,247 temporal forgery segment labels). We perform extensive benchmarking and studies of existing face forensics methods and obtain several valuable observations.
Provide a detailed description of the following dataset: ForgeryNet
ECG Heartbeat Categorization Dataset
This dataset consists of a series of CSV files. Each of these CSV files contains a matrix, with each row representing an example in that portion of the dataset. The final element of each row denotes the class to which that example belongs. Acknowledgements: Mohammad Kachuee, Shayan Fazeli, and Majid Sarrafzadeh. "ECG Heartbeat Classification: A Deep Transferable Representation." arXiv preprint arXiv:1805.00794 (2018). Inspiration: Can you identify myocardial infarction?
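A minimal loading sketch based on the CSV layout described above (the file name is an assumption):

```python
import numpy as np

# Each row is one example; the last column holds the class label.
data = np.loadtxt("mitbih_train.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1].astype(int)
print(X.shape, np.bincount(y))  # feature matrix shape and per-class counts
```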
Provide a detailed description of the following dataset: ECG Heartbeat Categorization Dataset
3D Datasets of Broccoli in the Field
This work was undertaken by members of the Lincoln Centre for Autonomous Systems, University of Lincoln, UK. The four data collection sessions were conducted at three different sites in Lincolnshire, UK and one in Murcia, Spain (see Fig. 1). The sessions were conducted at the beginning and towards the end of the harvesting season in the UK and at the end of the harvest in Spain. The variety of broccoli plants grown in the UK is called Iron Man, whilst the variety grown in Spain is called Titanium. The weather during the UK data capture included a mixture of conditions, including sunny, overcast and raining, with broccoli varying in maturity from small to large to already harvested, while the conditions for data capture in Spain included strong sunlight and mature plants at the very end of the harvesting season. The tractor was driven through the broccoli field at a slow walking speed, with two rows of broccoli plants being imaged by the RGB-D sensor.
Provide a detailed description of the following dataset: 3D Datasets of Broccoli in the Field
Ladybird Cobbitty 2017 Brassica Dataset
This dataset contains weekly scans of cauliflower and broccoli covering a ten-week growth cycle from transplant to harvest. The dataset includes ground-truth physical characteristics of the crop; environmental data collected by a weather station and a soil-sensor network; and scans of the crop performed by an autonomous agricultural robot, which include stereo colour, thermal and hyperspectral imagery. The crops were planted at Lansdowne Farm, a University of Sydney agricultural research and teaching facility. Lansdowne Farm is located in Cobbitty, a suburb 70 km south-west of Sydney in New South Wales (NSW), Australia. Four 80-metre raised crop beds were prepared with a North-South orientation. Approximately 144 Brassica were planted in each bed. Cauliflower were planted in the first and third beds (from west to east). Broccoli were planted in the second and fourth beds.
Provide a detailed description of the following dataset: Ladybird Cobbitty 2017 Brassica Dataset
ShadowLink
The ShadowLink dataset is designed to evaluate the impact of entity overshadowing on the task of entity disambiguation. Paper: "Robustness Evaluation of Entity Disambiguation Using Prior Probes: the Case of Entity Overshadowing" by Vera Provatorova, Svitlana Vakulenko, Samarth Bhargav, Evangelos Kanoulas. EMNLP 2021.
Provide a detailed description of the following dataset: ShadowLink
Large Labelled Logo Dataset (L3D)
It is composed of around 770k color 256x256 RGB images extracted from the European Union Intellectual Property Office (EUIPO) open registry. Each of them is associated with multiple labels that classify the figurative and textual elements that appear in the images. These annotations have been classified by the EUIPO evaluators using the Vienna classification, a hierarchical classification of figurative marks. We suggest it to be used for:
1. Unconditional trademark generation
2. Conditional trademark generation
3. Multi-label logo classification (Vienna classification)
4. Optical Character Recognition
5. Conditional trade
6. Image segmentation
7. Image retrieval
Provide a detailed description of the following dataset: Large Labelled Logo Dataset (L3D)
ValueNet
We present a new large-scale human value dataset called ValueNet, which contains human attitudes on 21,374 text scenarios. The dataset is organized in ten dimensions that conform to the basic human value theory in intercultural research.
Provide a detailed description of the following dataset: ValueNet
MFA
The **MFA (Many Faces of Anger)** dataset includes 200 in-the-wild videos from North American and Persian cultures with fine-grained labels of: 'annoyed', 'anger', 'disgust', 'hatred' and 'furious' and 13 related emojis. Image source: [https://arxiv.org/pdf/2112.05267.pdf](https://arxiv.org/pdf/2112.05267.pdf)
Provide a detailed description of the following dataset: MFA
Statutory Interpretation Data Set
This dataset contains sentences extracted from court decisions retrieved from the Caselaw Access Project data; all sentences mentioning the term of interest were collected. In total the corpus consists of 26,959 sentences. The sentences are classified into four categories according to their usefulness for interpretation:
- high value - sentence intended to define or elaborate on the meaning of the term
- certain value - sentence that provides grounds to elaborate on the term's meaning
- potential value - sentence that provides additional information beyond what is known from the provision the term comes from
- no value - no additional information over what is known from the provision
Provide a detailed description of the following dataset: Statutory Interpretation Data Set
The RoboCup Rescue Dataset
In this paper, we introduce a victim dataset for the RoboCup Rescue competitions. The RoboCup Rescue robots have to collect points within several disciplines, e.g., a search task within an area to survey simulated baby dolls (victims). When a robot comes across a victim, a heat detector alone cannot prove that this is a living being and not just some other heat-emitting object. Further investigation is necessary, so that face detection can prove the existence of a victim. Many face detection approaches can be found in the literature, which are mainly used for human face recognition. These cannot be used straightforwardly for victim faces, which are, in the case of the RoboCup Rescue competitions, typically dolls. Thus we present the results of standard approaches and develop our own approach via bag-of-visual-words (BoVW).
Provide a detailed description of the following dataset: The RoboCup Rescue Dataset
PHANTOM
To evaluate the presented approaches, we created the Physical Anomalous Trajectory or Motion (PHANTOM) dataset consisting of six classes featuring everyday objects or physical setups, and showing nine different kinds of anomalies. We designed our classes to evaluate detection of various modes of video abnormalities that are generally excluded in video AD settings. The train and test sets of each class contain approximately 30 videos of varying lengths. The train set contains only normal videos, while the test set is evenly balanced between normal and anomalous videos. The classes were designed to be of varying difficulties and to feature different types of anomalies. For example, the window class was filmed in multiple lighting scenarios to increase variance. The normal videos include motion that follows an expected trajectory (pendulum, keyboard) or an expected movement (window). The sushi class features procedural motion, while candle and magnets feature more subtle motion that only appears locally. The anomalous videos can feature an interference of the regular motion (window, candle, magnets), an added or removed step in the usual procedure (sushi), motion that follows a different trajectory (pendulum, keyboard), or contains a different object (pendulum).
Provide a detailed description of the following dataset: PHANTOM
GOF
Optical Flow in challenging scenes with gyroscope readings!
Provide a detailed description of the following dataset: GOF
AliMeeting
The AliMeeting corpus consists of 120 hours of recorded Mandarin meeting data, including far-field data collected by an 8-channel microphone array as well as near-field data collected by a headset microphone. Each meeting session is composed of 2-4 speakers with different speaker overlap ratios, recorded in rooms of different sizes.
Provide a detailed description of the following dataset: AliMeeting
PatternNet
PatternNet is a large-scale high-resolution remote sensing dataset collected for remote sensing image retrieval. There are 38 classes and each class has 800 images of size 256×256 pixels. The images in PatternNet are collected from Google Earth imagery or via the Google Map API for some US cities.
Provide a detailed description of the following dataset: PatternNet
HT1080WT cells - 3D collagen type I matrices
Human fibrosarcoma HT1080WT (ATCC) cells at low cell densities embedded in 3D collagen type I matrices [1]. The time-lapse videos were recorded every 2 minutes for 16.7 hours and covered a field of view of 1002 × 1004 pixels with a pixel size of 0.802 μm/pixel. The videos were pre-processed to correct frame-to-frame drift artifacts, resulting in a final size of 983 × 985 pixels. [1] Hasini Jayatilaka, Anjil Giri, Michelle Karl, Ivie Aifuwa, Nicholaus J Trenton, Jude M Phillip, Shyam Khatau, and Denis Wirtz. EB1 and cytoplasmic dynein mediate protrusion dynamics for efficient 3-dimensional cell migration. FASEB J., 32(3):1207-1221, 2018. ISSN 0892-6638. doi: 10.1096/fj.201700444RR.
Provide a detailed description of the following dataset: HT1080WT cells - 3D collagen type I matrices
LfGP Data
The expert data and trained models used for our Learning from Guided Play paper. For details on use, see our open source repository at https://github.com/utiasSTARS/lfgp.
Provide a detailed description of the following dataset: LfGP Data
deepMTJ
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/luuleitner/deepMTJ/blob/master/mtj_tracking/predict/mtj_tracking.ipynb)

deepMTJ: Muscle-Tendon Junction Tracking in Ultrasound Images
-------------------------------------------------------------

`deepMTJ` is a machine learning approach for automatic tracking of muscle-tendon junctions (MTJ) in ultrasound images. Our method is based on a convolutional neural network trained to infer MTJ positions across various ultrasound systems from different vendors, collected in independent laboratories from diverse observers, on distinct muscles and movements. We built `deepMTJ` to support clinical biomechanists and locomotion researchers with an open-source tool for gait analyses.

Introduction to the deepMTJ dataset
-----------------------------------

This repository contains the full test dataset used for `deepMTJ` performance assessments, the trained TensorFlow (Keras) model, and a link to the code repository of deepMTJ. Furthermore, we provide online predictions using `deepMTJ` via a [![Colab Notebook](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/luuleitner/deepMTJ/blob/master/mtj_tracking/predict/mtj_tracking.ipynb) (for multiple and large file predictions) and via [deepmtj.org](https://deepmtj.org/) (cloud-based predictions).

- The dataset comprises 1344 images of muscle-tendon junctions recorded with 3 ultrasound imaging systems (Aixplorer V6, Esaote MyLab60, Telemed ArtUs), on 2 muscles (Lateral Gastrocnemius, Medial Gastrocnemius), and 2 movements (isometric maximum voluntary contractions, passive torque movements).
- We have included the ground truth labels for each image. These reference labels are the computed mean of 4 specialist labels. The specialist annotators had 2-10 years of experience in biomechanical and clinical research investigating muscles and tendons in 2-9 ultrasound studies in the past 2 years.
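A hedged loading sketch for the released Keras model (the file name is an assumption; see the deepMTJ repository and Colab notebook for the exact loading pipeline and the expected ultrasound-image input shape):

```python
import tensorflow as tf

# Load the released deepMTJ network without recompiling its training config;
# "deepMTJ_model.h5" is an illustrative file name, not the verified one.
model = tf.keras.models.load_model("deepMTJ_model.h5", compile=False)
model.summary()  # inspect the network and its expected input resolution
```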
Provide a detailed description of the following dataset: deepMTJ
PeopleSansPeople
In recent years, person detection and human pose estimation have made great strides, helped by large-scale labeled datasets. However, these datasets had no guarantees or analysis of human activities, poses, or context diversity. Additionally, privacy, legal, safety, and ethical concerns may limit the ability to collect more human data. An emerging alternative to real-world data that alleviates some of these issues is synthetic data. However, creation of synthetic data generators is incredibly challenging and prevents researchers from exploring their usefulness. Therefore, we release a human-centric synthetic data generator PeopleSansPeople which contains simulation-ready 3D human assets, a parameterized lighting and camera system, and generates 2D and 3D bounding box, instance and semantic segmentation, and COCO pose labels. Using PeopleSansPeople, we performed benchmark synthetic data training using a Detectron2 Keypoint R-CNN variant [1]. We found that pre-training a network using synthetic data and fine-tuning on target real-world data (few-shot transfer to limited subsets of COCO-person train [2]) resulted in a keypoint AP of 60.37Β±0.48 (COCO test-dev2017) outperforming models trained with the same real data alone (keypoint AP of 55.80) and pre-trained with ImageNet (keypoint AP of 57.50). This freely-available data generator should enable a wide range of research into the emerging field of simulation to real transfer learning in the critical area of human-centric computer vision.
Provide a detailed description of the following dataset: PeopleSansPeople
CPPE-5
CPPE-5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal of allowing the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories. Some features of this dataset are: * high quality images and annotations (~4.6 bounding boxes per image) * real-life images unlike any current such dataset * majority of non-iconic images (allowing easy deployment to real-world environments)
Provide a detailed description of the following dataset: CPPE-5
MPOSE2021
MPOSE2021 is a dataset for real-time short-time HAR, suitable for both pose-based and RGB-based methodologies. It includes 15,429 sequences from 100 actors and different scenarios, with limited frames per scene (between 20 and 30). In contrast to other publicly available datasets, the peculiarity of having a constrained number of time steps stimulates the development of real-time methodologies that perform HAR with low latency and high throughput.
Provide a detailed description of the following dataset: MPOSE2021
Semantic Segmentation Vineyard Rows
## Test dataset for Semantic Segmentation

The dataset includes 500 RGB images with the corresponding single-channel binary masks. The images were taken from vineyards in Grugliasco (Turin, Piedmont Region, Italy).
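A minimal sketch of pairing an RGB image with its single-channel binary mask; the file names and extensions are assumptions about the dataset's layout.

```python
# Load one image/mask pair; names are placeholders for the actual files.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("vineyard_0001.jpg"))        # H x W x 3, RGB
mask = np.asarray(Image.open("vineyard_0001_mask.png"))  # H x W, binary
print(img.shape, mask.shape, np.unique(mask))            # e.g., {0, 255}
```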
Provide a detailed description of the following dataset: Semantic Segmentation Vineyard Rows
QST
QST contains `1,167` video clips cut from `216 time-lapse 4K videos` collected from YouTube, which can be used for a variety of tasks, such as **`(high-resolution) video generation`**, **`(high-resolution) video prediction`**, **`(high-resolution) image generation`**, **`texture generation`**, **`image inpainting`**, **`image/video super-resolution`**, **`image/video colorization`**, **`image/video animating`**, etc. Each short clip contains multiple frames (from a minimum of `58` frames to a maximum of `1,200` frames, for a total of `285,446` frames), and the resolution of each frame is more than `1,024 x 1,024`. Specifically, QST consists of a training set (containing `1000` clips, totalling `244,930` frames), a validation set (containing `100` clips, totalling `23,200` frames), and a testing set (containing `67` clips, totalling `17,316` frames). Click [here](https://pan.baidu.com/s/1HUmSu-H1ot39ENeesVuz4Q) (Key: qst1) to download the QST dataset.
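As a quick sanity check, the per-split clip and frame counts quoted above do sum to the dataset totals:

```python
# Verify that train/val/test clips and frames add up to the QST totals.
splits = {"train": (1000, 244_930), "val": (100, 23_200), "test": (67, 17_316)}
clips = sum(c for c, _ in splits.values())
frames = sum(f for _, f in splits.values())
assert (clips, frames) == (1_167, 285_446)
print(clips, frames)  # 1167 285446
```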
Provide a detailed description of the following dataset: QST
Stanford 3D Scanning Repository
In recent years, the number of range scanners and surface reconstruction algorithms has been growing rapidly. Many researchers, however, do not have access to scanning facilities or dense polygonal models. The purpose of this repository is to make some range data and detailed reconstructions available to the public.
Provide a detailed description of the following dataset: Stanford 3D Scanning Repository
Montreal Archive of Sleep Studies
The Montreal Archive of Sleep Studies (MASS) is an open-access and collaborative database of laboratory-based polysomnography (PSG) recordings (O’Reilly, C., et al. (2014) J Sleep Res, 23(6):628-635). Its goal is to provide a standard and easily accessible source of data for benchmarking the various systems developed to help automate sleep analysis. It also provides a readily available source of data for fast validation of experimental results and for exploratory analyses. Finally, it is a shared resource that can be used to foster large-scale collaborations in sleep studies. MASS is composed of cohorts, themselves comprising subsets. Recordings within a subset are kept as homogeneous as possible, whereas recordings are more heterogeneous across subsets. To allow inter-study comparisons, researchers validating their results on MASS are encouraged to specify which portion of the database they used in their assessment (e.g., MASS-C1 for the whole cohort 1, MASS-C1/SS1-SS3 for subsets 1, 2 and 3 of cohort 1). Currently, the first MASS cohort available is the one described in O’Reilly, C., et al. (2014) J Sleep Res, 23(6):628-635. This cohort comprises polysomnograms of 200 complete nights recorded from 97 men and 103 women aged between 18 and 76 years (mean: 38.3 years, SD: 18.9 years). It has been split into five different subsets.
Provide a detailed description of the following dataset: Montreal Archive of Sleep Studies
SyntheticChairSketch
The dataset contains naive and stylized sketches for the chair category of the ShapeNetCore dataset. Each chair-shape folder contains two subfolders, "naive" and "stylized", representing the two rendering styles. https://cvssp.org/data/SyntheticChairSketch/
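A minimal sketch of walking the directory layout described above (one folder per chair shape, each with "naive" and "stylized" subfolders); the root path and the `.png` extension are assumptions.

```python
# Enumerate sketches per chair shape and rendering style.
from pathlib import Path

root = Path("SyntheticChairSketch")  # placeholder root directory
for shape_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    for style in ("naive", "stylized"):
        sketches = list((shape_dir / style).glob("*.png"))
        print(shape_dir.name, style, len(sketches))
```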
Provide a detailed description of the following dataset: SyntheticChairSketch
Cross-cultural pop song mood ratings (US, KR, BR)
- Mood ratings of 8 emotions gathered across 360 pop songs
- 166 raters from the US, South Korea, and Brazil
- MIR features from Spotify
Provide a detailed description of the following dataset: Cross-cultural pop song mood ratings (US, KR, BR)
DISRPT2019
The DISRPT 2019 workshop introduces the first iteration of a cross-formalism shared task on discourse unit segmentation. Since all major discourse parsing frameworks imply a segmentation of texts into discourse units, learning segmentations for and from diverse resources is a promising area for converging methods and insights. We provide training, development and test datasets from all available languages and treebanks in the RST, SDRT and PDTB formalisms, using a uniform format. Because different corpora, languages and frameworks use different guidelines for segmentation, the shared task is meant to promote the design of flexible methods for dealing with various guidelines, and to help push forward the discussion of standards for discourse units. For datasets that have treebanks, we will evaluate in two scenarios: with gold syntax and without it, in the latter case using provided automatic parses for comparison.
Provide a detailed description of the following dataset: DISRPT2019
DISRPT2021
The DISRPT 2021 shared task, co-located with CODI 2021 at EMNLP, introduces the second iteration of a cross-formalism shared task on discourse unit segmentation and connective detection, as well as the first iteration of a cross-formalism discourse relation classification task. We provide training, development and test datasets from all available languages and treebanks in the RST, SDRT and PDTB formalisms, using a uniform format. Because different corpora, languages and frameworks use different guidelines, the shared task is meant to promote the design of flexible methods for dealing with various guidelines, and to help push forward the discussion of standards for computational approaches to discourse relations. We include data for evaluation both with gold syntax and without it, using provided automatic parses for comparison to the gold syntax data.
Provide a detailed description of the following dataset: DISRPT2021
PuzzTe
A dataset of puzzles: comparison, knights-and-knaves, and zebra puzzles.
Provide a detailed description of the following dataset: PuzzTe
CODD
The Cooperative Driving dataset (CODD) is a synthetic dataset generated with CARLA that contains lidar data from multiple vehicles navigating simultaneously through a diverse set of driving scenarios. This dataset was created to enable further research in multi-agent perception (cooperative perception), including cooperative 3D object detection, cooperative object tracking, multi-agent SLAM and point cloud registration. Towards that goal, all frames have been labelled with the ground-truth sensor pose and 3D object bounding boxes.
Provide a detailed description of the following dataset: CODD
GUM
GUM is an open source multilayer English corpus of richly annotated texts from twelve text types. Annotations include:

* Multiple POS tags, morphological features and lemmatization
* Sentence segmentation and rough speech act
* Document structure in TEI XML (paragraphs, headings, figures, etc.)
* ISO date/time annotations
* Speaker and addressee information (where relevant)
* Constituent and dependency syntax
* Information status (given, accessible, new, split antecedent)
* Entity and coreference annotation, including bridging anaphora
* Entity linking (Wikification)
* Discourse parses in Rhetorical Structure Theory and discourse dependencies
Provide a detailed description of the following dataset: GUM
AMALGUM
AMALGUM is a machine-annotated multilayer corpus following the same design and annotation layers as GUM, but substantially larger (around 4M tokens). The goal of this corpus is to close the gap between high-quality, richly annotated, but small datasets and the larger but shallowly annotated corpora that are often scraped from the Web.
Provide a detailed description of the following dataset: AMALGUM
Faster__convergence_MOEAD
Dataset with the raw outputs of the experiments connected to the GitHub repository: https://github.com/yurilavinas/MOEADr/tree/ECJ. The dataset is large, so make sure you have enough disk space.
Provide a detailed description of the following dataset: Faster__convergence_MOEAD
Webis-STEREO-21
We present the Webis-STEREO-21 dataset, a massive collection of Scientific Text Reuse in Open-access publications. It contains more than 91 million cases of reused text passages found in 4.2 million unique open-access publications. Featuring a high coverage of scientific disciplines and varieties of reuse, as well as comprehensive metadata to contextualize each case, our dataset addresses the most salient shortcomings of previous datasets on scientific writing. Webis-STEREO-21 allows for tackling a wide range of research questions from different scientific backgrounds, facilitating both qualitative and quantitative analysis of the phenomenon, as well as a first-time grounding of the base rate of text reuse in scientific publications.
Provide a detailed description of the following dataset: Webis-STEREO-21
TUAC
A new subset of the popular open-source electroencephalogram (EEG) corpus, TUH EEG:

- The Temple University Artifact Corpus (TUAR) consists of high-yield artifact files annotated using a five-way classification system:
  1. Chewing (CHEW): an artifact resulting from the tensing and relaxing of the jaw muscles.
  2. Electrode (ELEC): an artifact that encompasses various electrode-related phenomena.
  3. Eye Movement (EYEM): a spike-like waveform created during patient eye movement.
  4. Muscle (MUSC): a common artifact with high-frequency, sharp waves corresponding to patient movement.
  5. Shiver (SHIV): a specific and sustained sharp-wave artifact that occurs when a patient shivers.
- EEG artifacts are waveforms that are not of cerebral origin and may be caused by several external and physiological factors.
- These artifacts cause false alarms in machine learning systems for seizure prediction. This corpus was developed to support research and evaluation of artifact suppression technology.
Provide a detailed description of the following dataset: TUAC
NepaliNewsCorpus
# Nepali News Corpus

Raw Nepali text scraped from several online websites. The total size of the concatenated text file is 4.6 GB. This raw text corpus can be used for tasks such as causal language modelling and masked language modelling.
Provide a detailed description of the following dataset: NepaliNewsCorpus
Epinions social network
This is a who-trusts-whom online social network of the general consumer review site Epinions.com. Members of the site can decide whether to "trust" each other. All the trust relationships interact and form the Web of Trust, which is then combined with review ratings to determine which reviews are shown to the user.

Dataset statistics:
- Nodes: 75,879
- Edges: 508,837
- Nodes in largest WCC: 75,877 (1.000)
- Edges in largest WCC: 508,836 (1.000)
- Nodes in largest SCC: 32,223 (0.425)
- Edges in largest SCC: 443,506 (0.872)
- Average clustering coefficient: 0.1378
- Number of triangles: 1,624,481
- Fraction of closed triangles: 0.0229
- Diameter (longest shortest path): 14
- 90-percentile effective diameter: 5
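A sketch of recomputing a couple of the statistics above with networkx, assuming the SNAP edge-list distribution of the graph (`soc-Epinions1.txt`: `#`-prefixed comment lines, then one "FromNodeId ToNodeId" pair per line); SNAP's exact clustering definition may differ slightly.

```python
# Rebuild the directed trust graph and check basic counts.
import networkx as nx

G = nx.read_edgelist("soc-Epinions1.txt", comments="#",
                     create_using=nx.DiGraph, nodetype=int)
print(G.number_of_nodes(), G.number_of_edges())  # 75879 508837

# Clustering statistics are reported on the undirected projection.
U = G.to_undirected()
print(nx.average_clustering(U))                  # ~0.1378
```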
Provide a detailed description of the following dataset: Epinions social network
AFLW-19
The [original AFLW](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/) provides at most 21 points per face but excludes coordinates for invisible landmarks, which causes difficulties for training most existing baseline approaches. To make fair comparisons possible, the authors manually annotated the coordinates of these invisible landmarks so that those baseline approaches can be trained. The new annotation does not include the two ear points, because it is very difficult to decide the location of invisible ears; AFLW-19 therefore has 19 points per face. The original AFLW does not provide a train-test partition. AFLW-19 adopts a partition with 20,000 images for training and 4,386 images for testing (AFLW-Full). In addition, a frontal subset (AFLW-Frontal) is proposed in which all landmarks are visible (1,165 images in total). The new 19-point annotation file is available at the [project page](http://mmlab.ie.cuhk.edu.hk/projects/compositional.html).
Provide a detailed description of the following dataset: AFLW-19
Covid Dataset
The COVID-19 CT dataset was constructed by the Shenzhen Research Institute of Big Data (SRIBD), the Future Network of Intelligence Institute (FNii), and the CUHKSZ-JD Joint AI Lab of the Chinese University of Hong Kong, Shenzhen, China. It contains 368 medical findings in Chinese and 1,104 chest CT scans from the First Affiliated Hospital of Jinan University (Guangzhou) and the Fifth Affiliated Hospital of Sun Yat-sen University (Zhuhai) in China. Please see more details in our TNNLS paper, "Medical-VLBERT: Medical Visual Language BERT for COVID-19 CT Report Generation With Alternate Learning".
Provide a detailed description of the following dataset: Covid Dataset
The Reddit COVID Dataset
The Reddit COVID Dataset is a dataset of 4.51M Reddit posts and 17.8M comments, covering all mentions of COVID up to 2021-10-25 across the entire Reddit social network. Both were procured with [SocialGrep's export feature](https://socialgrep.com/exports?utm_source=paperswithcode&utm_medium=link&utm_campaign=theredditcoviddataset) and released as part of SocialGrep's [Reddit datasets](https://socialgrep.com/datasets?utm_source=paperswithcode&utm_medium=link&utm_campaign=theredditcoviddataset). The posts are labeled with their subreddit, title, creation date, domain, selftext, and score. The comments are labeled with their subreddit, body, creation date, sentiment (calculated for you using a VADER pipeline), and score.
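The comment sentiment column is described as VADER-based; the snippet below shows the kind of scoring involved, using the `vaderSentiment` package (the exact pipeline SocialGrep used may differ).

```python
# Score a comment the way a VADER pipeline would.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("Finally got my booster, feeling relieved!")
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```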
Provide a detailed description of the following dataset: The Reddit COVID Dataset
M2DGR
We collected long-term challenging sequences for ground robots both indoors and outdoors with a complete sensor suite, which includes six surround-view fish-eye cameras, a sky-pointing fish-eye camera, a perspective color camera, an event camera, an infrared camera, a 32-beam LIDAR, two GNSS receivers, and two IMUs. To our knowledge, this is the first SLAM dataset focusing on ground robot navigation with such rich sensory information. We recorded trajectories in several challenging scenarios, such as elevators and complete darkness, which can easily cause existing localization solutions to fail. These situations are commonly faced in ground robot applications, yet they are seldom discussed in previous datasets. We also launched a comprehensive benchmark for ground robot navigation, on which we evaluated existing state-of-the-art SLAM algorithms of various designs and analyzed their characteristics and defects individually.
Provide a detailed description of the following dataset: M2DGR
Grasping dataset: suction-based
A small and simple dataset featuring RGB-D images and heightmaps of various objects in a bin, with manually annotated suctionable regions.
Provide a detailed description of the following dataset: Grasping dataset: suction-based
IWSLT 2017
The translation dataset of the 2017 International Workshop on Spoken Language Translation (IWSLT 2017).
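One convenient way to load a language pair is the Hugging Face hub packaging (`iwslt2017`, with one config per pair); the official release is distributed by the IWSLT evaluation campaign itself.

```python
# Load the English-German IWSLT 2017 pair from the Hugging Face hub.
from datasets import load_dataset

ds = load_dataset("iwslt2017", "iwslt2017-en-de")
print(ds["train"][0]["translation"])  # {'en': '...', 'de': '...'}
```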
Provide a detailed description of the following dataset: IWSLT 2017
AU Dataset for Visuo-Haptic Object Recognition for Robots
Multimodal object recognition is still an emerging field. Thus, publicly available datasets are still rare and of small size. This dataset was developed to help fill this void and presents multimodal data for 63 objects with some visual and haptic ambiguity. The dataset contains visual, kinesthetic and tactile (audio/vibrations) data. To completely solve sensory ambiguity, sensory integration/fusion would be required. This report describes the creation and structure of the dataset. The first section explains the underlying approach used to capture the visual and haptic properties of the objects. The second section describes the technical aspects (experimental setup) needed for the collection of the data. The third section introduces the objects, while the final section describes the structure and content of the dataset.
Provide a detailed description of the following dataset: AU Dataset for Visuo-Haptic Object Recognition for Robots
Pre-trained Transliterated Embeddings for Indian Languages
We release various types of word embeddings for multiple Indian languages. Please note that for the majority of our work, we transliterated the corpora to the Devanagari script, so the script of the released models differs from the original. We provide word-embedding models trained with FastText and ELMo, as well as cross-lingual models based on an orthogonal alignment of the monolingual models for all pairs of these languages.
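A minimal sketch of loading one of the FastText models with gensim and querying a Devanagari-script word; the file name is a placeholder for whichever language's model you download.

```python
# Load a Facebook-format FastText binary and query nearest neighbours.
from gensim.models.fasttext import load_facebook_vectors

wv = load_facebook_vectors("hindi_fasttext.bin")  # placeholder file name
print(wv.most_similar("भारत", topn=5))            # nearest neighbours
```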
Provide a detailed description of the following dataset: Pre-trained Transliterated Embeddings for Indian Languages
Phishing and Benign Websites
An annotated dataset of 38,800 phishing and benign websites.
Provide a detailed description of the following dataset: Phishing and Benign Websites
2012 i2b2 Temporal Relations
The Sixth Informatics for Integrating Biology and the Bedside (i2b2) Natural Language Processing Challenge for Clinical Records focused on the temporal relations in clinical narratives. The organizers provided the research community with a corpus of discharge summaries annotated with temporal information, to be used for the development and evaluation of temporal reasoning systems. 18 teams from around the world participated in the challenge. During the workshop, participating teams presented comprehensive reviews and analysis of their systems, and outlined future research directions suggested by the challenge contributions.
Provide a detailed description of the following dataset: 2012 i2b2 Temporal Relations
eICU-CRD
The eICU Collaborative Research Database is a large multi-center critical care database made available by Philips Healthcare in partnership with the MIT Laboratory for Computational Physiology. The eICU Collaborative Research Database holds data associated with over 200,000 patient stays, providing a large sample size for research studies.
Provide a detailed description of the following dataset: eICU-CRD
Evolutionary Illusion Generator Illusions
A dataset of illusions generated by the AI model EIGen.
Provide a detailed description of the following dataset: Evolutionary Illusion Generator Illusions
PredNet Grayscale Model Weights
A pretrained PredNet neural network, used in EIGen to generate grayscale illusions.
Provide a detailed description of the following dataset: PredNet Grayscale Model Weights
PredNet Color Model Weights
A pretrained PredNet neural network, used in EIGen to generate color illusions.
Provide a detailed description of the following dataset: PredNet Color Model Weights