dataset_name | description | prompt |
|---|---|---|
SynPick | SynPick is a synthetic dataset for dynamic scene understanding in bin-picking scenarios. In contrast to existing datasets, this dataset is both situated in a realistic industrial application domain -- inspired by the well-known Amazon Robotics Challenge (ARC) -- and features dynamic scenes with authentic picking actions as chosen by our picking heuristic developed for the ARC 2017. The dataset is compatible with the popular BOP dataset format.
The dataset consists of 21 synthetic videos totaling 503,232 frames, with diverse lighting and 3 different views of each video. | Provide a detailed description of the following dataset: SynPick |
Cylinder in Crossflow | **Cylinder in Crossflow** is a synthetic dataset that involves unsteady laminar flow past a cylinder that generates a vortex
shedding pattern known as a von Kármán vortex street. The governing equations for
this system are the incompressible Navier-Stokes equations. The cylinder
has a diameter of 1 and the free stream velocity is 1. The kinematic viscosity $\nu$ is
varied such that the Reynolds number $Re = U D / \nu = 1/\nu$ lies between 100 and 400. Symmetry boundary conditions are applied at the top and bottom edges of the domain, and an open pressure boundary condition is applied at the outlet. Solutions are generated on an
unstructured mesh of 6384 quad elements. | Provide a detailed description of the following dataset: Cylinder in Crossflow |
Color-connectivity | Synthetic graph classification datasets with the task of recognizing the connectivity of same-colored nodes in 4 graphs of varying topology.
* The four Color-connectivity datasets were created by taking a graph and randomly coloring half of its nodes one color, e.g., red, and the other nodes blue, such that the red nodes either form a single connected island or two disjoint islands.
The binary classification task is then distinguishing between these two cases.
The node colorings were sampled by running two red-coloring random walks starting from two random nodes.
* For the underlying graph topology we used: 1) 16x16 2D grid, 2) 32x32 2D grid, 3) Euroroad road network (Šubelj et al. 2011), and 4) Minnesota road network.
* We sampled a balanced set of 15,000 coloring examples for each graph, except for Minnesota network for which we generated 6,000 examples due to memory constraints.
* The Color-connectivity task requires a combination of local and long-range graph information processing, to which most existing message-passing Graph Neural Networks (GNNs) do not scale.
These datasets can serve as a common-sense validation for new and more powerful GNN methods.
These testbed datasets can still be improved, as the node features are minimal (only a binary color) and recognition of particular topological patterns (e.g., rings or other subgraphs) is not needed to solve the task.
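A minimal sketch of this generation procedure, assuming a networkx graph; `sample_coloring` and its internals are illustrative, not the authors' released code:
```
import random
import networkx as nx

def sample_coloring(G):
    # Color half of G's nodes red via two random walks and return the
    # coloring with a binary label (1 = the red nodes form one island).
    target = G.number_of_nodes() // 2
    walkers = random.sample(list(G.nodes), 2)  # two random start nodes
    red = set(walkers)
    while len(red) < target:
        i = random.randrange(2)                # advance one of the two walks
        walkers[i] = random.choice(list(G.neighbors(walkers[i])))
        red.add(walkers[i])
    islands = nx.number_connected_components(G.subgraph(red))
    coloring = {v: int(v in red) for v in G.nodes}
    return coloring, int(islands == 1)

coloring, label = sample_coloring(nx.grid_2d_graph(16, 16))
``` | Provide a detailed description of the following dataset: Color-connectivity |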
MovieGraphBenchmark | The dataset contains entities from IMDB, TheMovieDB and TheTVDB with gold-standard matches between the sources. Due to the licensing of IMDB, we provide a script to build the IMDB part of the dataset yourself.
The dataset contains a variety of entity types to match: persons, movies, series, episodes and companies. | Provide a detailed description of the following dataset: MovieGraphBenchmark |
OpenEA Benchmark | Version 1.0 of the OpenEA benchmark datasets. Please use the updated 2.0 version, which has been subsequently released.
Introduced in "A Benchmarking Study of Embedding-based Entity Alignment for Knowledge Graphs" by Sun et al., VLDB 2020.
Contains entities from DBpedia, YAGO and Wikidata. | Provide a detailed description of the following dataset: OpenEA Benchmark |
FewCLUE | Chinese Few-shot Learning Evaluation Benchmark (FewCLUE) is a comprehensive few-shot evaluation benchmark for Chinese. It includes nine tasks, ranging from single-sentence and sentence-pair classification tasks to machine reading comprehension tasks. | Provide a detailed description of the following dataset: FewCLUE |
ZS-F-VQA | The ZS-F-VQA dataset is a new split of the F-VQA dataset for the zero-shot problem.
First, we take the original train/test splits of the F-VQA dataset and combine them, keeping only the triples whose answers appear in the top 500 by occurrence frequency.
Next, we randomly divide this set of answers into a new training split (a.k.a. seen) $\mathcal{A}_s$ and a testing split (a.k.a. unseen) $\mathcal{A}_u$ at a ratio of 1:1.
As in the standard F-VQA dataset, the division process is repeated 5 times.
Each $(i,q,a)$ triplet in the original F-VQA dataset is assigned to the training set if $a \in \mathcal{A}_s$, and to the testing set otherwise; a minimal sketch of this procedure is shown below.
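A minimal sketch of this split procedure, assuming the combined F-VQA triples are available as `(image, question, answer)` tuples; the function and variable names are illustrative, not the authors' released code:
```
import random
from collections import Counter

def zero_shot_split(triples, top_k=500, seed=0):
    # Split (i, q, a) triples so that the training and testing
    # answer sets are disjoint, as described above.
    rng = random.Random(seed)
    freq = Counter(a for _, _, a in triples)
    top_answers = [a for a, _ in freq.most_common(top_k)]
    rng.shuffle(top_answers)
    seen = set(top_answers[: top_k // 2])           # A_s (seen answers)
    top_set = set(top_answers)
    kept = [t for t in triples if t[2] in top_set]  # keep top-500 answers only
    train = [t for t in kept if t[2] in seen]       # a in A_s -> training set
    test = [t for t in kept if t[2] not in seen]    # a in A_u -> testing set
    return train, test
```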
The overlap of answer instances between the training and testing sets is $2565$ in F-VQA, compared to $0$ in ZS-F-VQA. | Provide a detailed description of the following dataset: ZS-F-VQA |
IMC PhotoTourism | Dataset provided by the Image Matching Workshop
https://www.cs.ubc.ca/research/image-matching-challenge/current/ | Provide a detailed description of the following dataset: IMC PhotoTourism |
ValidData | This dataset contains a total of 11 variables. These are:
1. vectorprice: The price of the product in local currency
2. Exchange: The official exchange rate between USD and the local currency when the data was extracted.
3. Usprice: The price of the product in USD
4. vectorsold: The number of items sold by the vendor when the data was extracted.
5. vectorproduct: The name of the product sold by the vendor
6. country: The name of the country where the product was sold.
7. vectorquestions: The number of questions that the vendor received when the data was extracted
8. goodfeedback: The number of positive feedbacks that the vendor received when the data was extracted
9. neutralfeedback: The number of neutral feedbacks (neither positive nor negative)
10. badfeedback: The number of negative feedbacks.
11. Trust: The ratio goodfeedback / (goodfeedback + neutralfeedback + badfeedback); a minimal sketch follows.
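A minimal sketch of the Trust computation, assuming the three feedback counts of one vendor record are available as integers; the names follow the variable list above:
```
def trust(goodfeedback, neutralfeedback, badfeedback):
    # Trust = goodfeedback / (goodfeedback + neutralfeedback + badfeedback)
    total = goodfeedback + neutralfeedback + badfeedback
    return goodfeedback / total if total else 0.0

print(trust(goodfeedback=90, neutralfeedback=5, badfeedback=5))  # 0.9
``` | Provide a detailed description of the following dataset: ValidData |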
AIM-500 | AIM-500 is the first natural image matting test set. It contains 500 high-resolution real-world natural images covering three types of foregrounds (salient opaque, salient transparent/meticulous, and non-salient) and multiple categories. The amount of each category is shown in the following table.
| Portrait | Animal | Transparent | Plant | Furniture | Toy | Fruit |
| :----:| :----: | :----: | :----: | :----: | :----: | :----: |
| 100 | 200 | 34 | 75 | 45 | 36 | 10 | | Provide a detailed description of the following dataset: AIM-500 |
BrnoCompSpeed | The dataset contains 21 full-HD videos, each around 1 hr long, captured at six different locations. Vehicles in the videos (20,865 instances in total) are annotated with precise speed measurements from optical gates using LiDAR and verified with several reference GPS tracks. The dataset is available for download and contains the videos and metadata (calibration, lengths of features in the image, annotations, and so on) for future comparison and evaluation.
This dataset was published with the paper: Sochor, Jakub, et al., "Comprehensive Data Set for Automatic Single Camera Visual Speed Measurement," IEEE T-ITS. | Provide a detailed description of the following dataset: BrnoCompSpeed |
VESUS | The Varied Emotion in Syntactically Uniform Speech (VESUS) repository is a lexically controlled database collected by the NSA lab. Here, actors read a semantically neutral script of words, phrases, and sentences with different emotional inflections. VESUS contains 252 distinct phrases, each read by 10 actors in 5 emotional states (neutral, angry, happy, sad, fearful). | Provide a detailed description of the following dataset: VESUS |
Wasserstein Distances, Geodesics and Barycenters of Merge Trees | This repository contains all the ensemble datasets (along with their meta-data) used in the manuscript "Wasserstein Distances, Geodesics and Barycenters of Merge Trees". | Provide a detailed description of the following dataset: Wasserstein Distances, Geodesics and Barycenters of Merge Trees |
Shifts | The **Shifts Dataset** is a dataset for evaluation of uncertainty estimates and robustness to distributional shift. The dataset, which has been collected from industrial sources and services, is composed of three tasks, with each corresponding to a particular data modality: tabular weather prediction, machine translation, and self-driving car (SDC) vehicle motion prediction. All of these data modalities and tasks are affected by real, 'in-the-wild' distributional shifts and pose interesting challenges with respect to uncertainty estimation. | Provide a detailed description of the following dataset: Shifts |
PASTIS | PASTIS is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite image time series. It is composed of 2,433 one-square-kilometer patches in the French metropolitan territory for which sequences of satellite observations are assembled into a four-dimensional spatio-temporal tensor. The dataset contains both semantic and instance annotations, assigning to each pixel a semantic label and an instance id. An official 5-fold split is provided in the dataset's metadata.
Image source: [https://github.com/VSainteuf/pastis-benchmark](https://github.com/VSainteuf/pastis-benchmark) | Provide a detailed description of the following dataset: PASTIS |
Hockey Fight Detection Dataset | Whereas the action recognition community has focused mostly on detecting simple actions like clapping, walking or jogging, the detection of fights or, in general, aggressive behaviors has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios like in prisons, psychiatric or elderly centers or even in camera phones. After an analysis of previous approaches, we test the well-known Bag-of-Words framework used for action recognition in the specific problem of fight detection, along with two of the best action descriptors currently available: STIP and MoSIFT. For the purpose of evaluation and to foster research on violence detection in video, we introduce a new video database containing 1000 sequences divided into two groups: fights and non-fights. Experiments on this database and another one with fights from action movies show that fights can be detected with near 90% accuracy. | Provide a detailed description of the following dataset: Hockey Fight Detection Dataset |
Giantsteps | Giantsteps is a dataset that includes songs in major and minor keys for all pitch classes, i.e., a 24-way classification task. | Provide a detailed description of the following dataset: Giantsteps |
Emomusic | 1000 songs were selected from the Free Music Archive (FMA). The annotated excerpts are available in the same package (song ids 1 to 1000). Some redundancies were identified, which reduced the dataset to 744 songs. The dataset is split between a development set (619 songs) and an evaluation set (125 songs). The extracted 45-second excerpts are all re-encoded to the same sampling frequency, i.e., 44100 Hz. | Provide a detailed description of the following dataset: Emomusic |
MTASS | MTASS is an open-source dataset in which mixtures contain three types of audio signals. | Provide a detailed description of the following dataset: MTASS |
BNLP-Resources | Datasets for Bangla Natural Language Processing tasks. | Provide a detailed description of the following dataset: BNLP-Resources |
Hindi MSR-VTT | This dataset is the Hindi version of the standard English MSR-VTT dataset. | Provide a detailed description of the following dataset: Hindi MSR-VTT |
CADNET | We introduce the CADNET dataset, an annotated collection of 3,317 3D engineering models over 43 categories. Owing to the availability of large annotated datasets and sufficient computational power in the form of GPUs, many deep learning-based solutions for object classification have been proposed of late, especially in the domain of images and graphical models. Nevertheless, very few solutions have been proposed for the task of functional classification of CAD models. Hence, for this research, CAD models have been collected from the Engineering Shape Benchmark (ESB) and the National Design Repository (NDR), and augmented with newer models created using modeling software to form a dataset, ‘CADNET’. | Provide a detailed description of the following dataset: CADNET |
BinKit | BinKit is a binary code similarity analysis (BCSA) benchmark. BinKit provides scripts for building a cross-compiling environment, as well as the compiled dataset. The original dataset includes 1,352 distinct combinations of compiler options, spanning 8 architectures, 5 optimization levels, and 13 compilers.
For more details, please check: https://github.com/SoftSec-KAIST/BinKit | Provide a detailed description of the following dataset: BinKit |
TFix's Code Patches Data | The dataset contains more than 100k code patch pairs extracted from open source projects on GitHub. Each pair comes with the erroneous and the fixed version of the corresponding code snippet. Instead of the whole file, the code snippets are extracted to focus on the problematic region (error line + other lines around it). For each sample, the repository name, the commit id, and the file names are provided so that one can access the complete files in case of interest.
The dataset only has JavaScript programs, and the errors are detected by the popular static code analyzer ESLint. The dataset can be used in the fields of program repair, code generation, bug finding, transfer learning, and many more fields related to machine learning for code. | Provide a detailed description of the following dataset: TFix's Code Patches Data |
CVEfixes | CVEfixes is a comprehensive vulnerability dataset that is automatically collected and curated from Common Vulnerabilities and Exposures (CVE) records in the public [U.S. National Vulnerability Database (NVD)](https://nvd.nist.gov/). The goal is to support data-driven security research based on source code and source code metrics related to fixes for CVEs in the NVD by providing detailed information at different interlinked levels of abstraction, such as the commit-, file-, and method level, as well as the repository- and CVE level.
At the initial release, the dataset covers all published CVEs up to 9 June 2021. All open-source projects that were reported in CVE records in the NVD in this time frame and had publicly available git repositories were fetched and considered for the construction of this vulnerability dataset. The dataset is organized as a relational database and covers 5,495 vulnerability fixing commits in 1,754 open source projects for a total of 5,365 CVEs in 180 different Common Weakness Enumeration (CWE) types. The dataset includes the source code before and after fixing of 18,249 files and 50,322 functions. | Provide a detailed description of the following dataset: CVEfixes |
MultiBench | MultiBench is a systematic and unified large-scale benchmark for multimodal learning spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas. MultiBench provides an automated end-to-end machine learning pipeline that simplifies and standardizes data loading, experimental setup, and model evaluation. To enable holistic evaluation, MultiBench offers evaluation methodology to study (1) generalization, (2) time and space complexity, and (3) modality robustness. | Provide a detailed description of the following dataset: MultiBench |
BEHAVIOR | BEHAVIOR is a benchmark of 100 household activities that represent a new challenge for embodied AI solutions.
BEHAVIOR is a challenge in simulation where embodied agents make continuous full-body control decisions based on sensor information. Agents need to navigate and manipulate the simulated environment with the goal of accomplishing 100 household activities. BEHAVIOR tests the ability to perceive the environment, plan, and execute complex long-horizon activities that involve multiple objects, rooms, and state transitions, all with the reproducibility, safety and observability offered by a realistic physics simulation.
Description from: [BEHAVIOR Challenge @ ICCV 2021](http://svl.stanford.edu/behavior/challenge.html) | Provide a detailed description of the following dataset: BEHAVIOR |
OG RGB+D | OG RGB+D is a gait recognition database that breaks through the limitations of other gait databases by including multimodal gait data under various occlusions (self-occlusion, active occlusion, and passive occlusion), collected with a multiple synchronous Azure Kinect DK sensor data acquisition system (multi-Kinect SDAS) that can also be applied in security situations. Azure Kinect DK can simultaneously collect multimodal data to support different types of gait recognition algorithms; in particular, it enables effective acquisition of camera-centric multi-person 3D poses, and multiple views handle occlusion better than a single view. The OG RGB+D database also provides accurate silhouettes and optimized human 3D joint data (OJ), obtained by fusing data collected by multiple Kinects, which represent human pose more accurately under occlusion.
Description from: [A Benchmark for Gait Recognition under Occlusion Collected by Multi-Kinect SDAS](https://arxiv.org/pdf/2107.08990v1.pdf) | Provide a detailed description of the following dataset: OG RGB+D |
Wikidata-14M | **Wikidata-14M** is a recommender system dataset for recommending items to Wikidata editors. It consists of 220,000 editors responsible for 14 million interactions with 4 million items. | Provide a detailed description of the following dataset: Wikidata-14M |
CARLE | **CARLE** is a Life-like cellular automata simulator and reinforcement learning environment. CARLE is flexible, capable of simulating any of the 262,144 different rules defining Life-like cellular automaton universes. CARLE is also fast and can simulate automata universes at a rate of tens of thousands of steps per second through a combination of vectorization and GPU acceleration. Finally, CARLE is simple. Compared to high-fidelity physics simulators and video games designed for human players, CARLE's two-dimensional grid world offers a discrete, deterministic, and atomic universal playground, despite its complexity. | Provide a detailed description of the following dataset: CARLE |
Forms Dataset | The **Forms Dataset** is a dataset for document structure extraction comprising 5K forms. | Provide a detailed description of the following dataset: Forms Dataset |
Undecided Voters in US Presidential Elections | This data contains the election polls for the 2004, 2008, 2012, and 2016 US presidential election by state including data on undecided voter proportions.
See https://github.com/bonStats/undecided-voters-us-pres-elections#readme for data description. | Provide a detailed description of the following dataset: Undecided Voters in US Presidential Elections |
UHCSDB | DeCost, Hecht, Francis, Webler, Picard, and Holm.
UHCSDB (Ultrahigh Carbon Steel micrograph DataBase): tools for exploring large heterogeneous microstructure datasets.
Accepted for publication in IMMI 2017. DOI: 10.1007/s40192-017-0097-0
ABSTRACT:
We present a new microstructure dataset consisting of ultrahigh carbon steel (UHCS) micrographs taken over a range of length scales under systematically varied heat treatments. Using the UHCS dataset as a case study, we develop a set of visualization tools for interacting with and exploring large microstructure and metadata datasets. Based on generic microstructure representations adapted from the field of computer vision, these tools enable image-based microstructure retrieval, as well as spatial maps of both microstructure and related metadata, such as processing conditions or properties measurements. We provide the microstructure image data, processing metadata, and source code for these microstructure exploration tools. The UHCS dataset is intended as a community resource for development and evaluation of microstructure data science techniques and for creation of microstructure data science teaching modules. | Provide a detailed description of the following dataset: UHCSDB |
AADB2021 | We present a data set from a first-principles study of amino-methylated and acetylated (capped) dipeptides of the 20 proteinogenic amino acids – including alternative possible side chain protonation states and their interactions with selected divalent cations (Ca$^{2+}$, Mg$^{2+}$ and Ba$^{2+}$). The data covers 21,909 stationary points on the respective potential-energy surfaces in a wide relative energy range of up to 4 eV (390 kJ/mol). Relevant properties of interest, like partial charges, were derived for the conformers. | Provide a detailed description of the following dataset: AADB2021 |
AADB2021Ontology | This ontology is populated with the data from AADB2021 (https://dx.doi.org/10.17172/NOMAD/2021.02.10-1). Details can be found in the related article on arXiv.org: https://arxiv.org/abs/2107.08855 | Provide a detailed description of the following dataset: AADB2021Ontology |
Global Wheat Head 2021 | The Global Wheat Dataset 2021 is an extension of the Global Wheat Dataset 2020. It is the first large-scale dataset for wheat head detection from field optical images. It includes a very large range of cultivars from different continents. Wheat is a staple crop grown all over the world, and consequently interest in wheat phenotyping spans the globe. Therefore, it is important that models developed for wheat phenotyping, such as wheat head detection networks, generalize between different growing environments around the world.
The dataset and official splits can be downloaded [here](https://zenodo.org/record/5092309) | Provide a detailed description of the following dataset: Global Wheat Head 2021 |
WikiGraphs | **WikiGraphs** is a dataset of Wikipedia articles each paired with a knowledge graph, to facilitate the research in conditional text generation, graph generation and graph representation learning. Existing graph-text paired datasets typically contain small graphs and short text (1 or few sentences), thus limiting the capabilities of the models that can be learned on the data.
WikiGraphs is collected by pairing each Wikipedia article from the established [WikiText-103 benchmark](wikitext-103) with a subgraph from the Freebase knowledge graph. This makes it easy to benchmark against other state-of-the-art text generative models that are capable of generating long paragraphs of coherent text. Both the graphs and the text data are of significantly larger scale compared to prior graph-text paired datasets. | Provide a detailed description of the following dataset: WikiGraphs |
GR712RC LEON3 Power Model Data | # Dataset Files
The official dataset files are hosted at [https://dx.doi.org/10.21227/1y7r-am78](https://dx.doi.org/10.21227/1y7r-am78).
# Generating the models from the LEON3 sample data
The data for this paper is generated using a custom open-source methodology called **REPPS**. In order to replicate the results, first follow all the steps in [https://github.com/TSL-UOB/TP-REPPS](https://github.com/TSL-UOB/TP-REPPS) to install and configure all the scripts and supporting programs. Afterwards you can proceed with executing the following commands to generate the various models from the LEON3 data.
**DISCLAIMER - If you have any issues, please don't hesitate to get in touch via [email](mailto:kris.nikov@bris.ac.uk).**
## Generate models trained on BEEBS and validated on the use_case_core application
### ASIC Only Model
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_use_case_finegrain.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_use_case_split.data -p 6 -e 4 -d 2 -o 2 -s 20210421_leon3_beebs_ucc_pwr_fngr_nocyc_nocth_asicdata_avgrelerr_nfolds_ools.data
```
### Bottom-Up Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_use_case_finegrain.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_use_case_split.data -p 6 -l 9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24 -m 1 -n 16 -c 1 -g -i 50 -d 2 -o 2 -s 20210425_leon3_beebs_ucc_pwr_fngr_allev_nocyc_nocth_botup_avgrelerr_nfolds_ools.data
```
### Top-Down Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_use_case_finegrain.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_use_case_split.data -p 6 -l 9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24 -m 2 -n 1 -c 1 -g -i 50 -d 2 -o 2 -s 20210425_leon3_beebs_ucc_pwr_fngr_allev_nocyc_nocth_topdown_avgrelerr_nfolds_ools
```
## Validate the previous models on BEEBS as well (no need to redo all the event selection, just use same events)
### ASIC Only Model
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_BEEBS_split.data -p 6 -e 4 -d 2 -o 2 -s 20210421_leon3_beebs_beebs_pwr_fngr_nocyc_nocth_asicdata_avgrelerr_nfolds_ools.data
```
### Bottom-Up Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_BEEBS_split.data -p 6 -e 24 -d 2 -o 2 -s 20210421_leon3_beebs_beebs_pwr_fngr_allev_nocyc_nocth_botup_avgrelerr_nfolds_ools.data
```
### Top-Down Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_BEEBS_split.data -p 6 -e 9,10,12,13,14,15,16,18,19,20,22,23 -d 2 -o 2 -s 20210421_leon3_beebs_beebs_pwr_fngr_allev_nocyc_nocth_topdown_avgrelerr_nfolds_ools.data
```
# Visualise the data
## Generate model per-sample breakdown files for the 1st run of the use_case_opt application
### ASIC Only Model
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_use_case_finegrain_1run.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_onlyusecaseopt_split.data -p 6 -e 4 -d 2 -o 6 -s /PATH/TO/ESL_paper_data/20210421_leon3_beebs_uco_pwr_fngr_nocyc_nocth_asicdata_avgrelerr_nfolds_ools_1r.data
```
### Bottom-Up Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_use_case_finegrain_1run.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_onlyusecaseopt_split.data -p 6 -e 24 -d 2 -o 6 -s /PATH/TO/ESL_paper_data/20210427_leon3_beebs_uco_pwr_fngr_allev_nocyc_nocth_botup_avgrelerr_nfolds_ools_1r.data
```
### Top-Down Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_use_case_finegrain_1run.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_onlyusecaseopt_split.data -p 6 -e 9,10,12,13,14,15,16,18,19,20,22,23 -d 2 -o 6 -s /PATH/TO/ESL_paper_data/20210427_leon3_beebs_uco_pwr_fngr_allev_nocyc_nocth_topdown_avgrelerr_nfolds_ools_1r.data
```
## Generate model per-sample breakdown files for the 1st run of the BEEBS benchmarks
### ASIC Only Model
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain_1run.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_BEEBS_split.data -p 6 -e 4 -d 2 -o 6 -s /PATH/TO/ESL_paper_data/20210423_leon3_beebs_beebs_pwr_fngr_nocyc_nocth_asicdata_avgrelerr_nfolds_ools_1r.data
```
### Bottom-Up Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain_1run.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_BEEBS_split.data -p 6 -e 24 -d 2 -o 6 -s /PATH/TO/ESL_paper_data/20210427_leon3_beebs_beebs_pwr_fngr_allev_nocyc_nocth_botup_avgrelerr_nfolds_ools_1r.data
```
### Top-Down Search
```
./octave_makemodel.sh -r /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain.data -t /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain_1run.data -b /PATH/TO/ESL_paper_data/split/LEON3_BEEBS_BEEBS_split.data -p 6 -e 9,10,12,13,14,15,16,18,19,20,22,23 -d 2 -o 6 -s /PATH/TO/ESL_paper_data/20210427_leon3_beebs_beebs_pwr_fngr_allev_nocyc_nocth_topdown_avgrelerr_nfolds_ools_1r.data
```
## Plot the model per-sample breakdown data using `MODELDATA_plot.py`
### Plot the use_case_opt 1st run per-sample physical measurements and model errors
```
./MODELDATA_plot.py -p 1 -x "Samples[#]" -t 10 -y "Power[W]" -b /PATH/TO/ESL_paper_data/data/LEON3_use_case_opt_finegrain_1run.data -l "Sensor Data" -i /PATH/TO/ESL_paper_data/20210421_leon3_beebs_uco_pwr_fngr_nocyc_nocth_asicdata_avgrelerr_nfolds_ools_1r.data -a 'ASIC Data Only' -i /PATH/TO/ESL_paper_data/20210427_leon3_beebs_uco_pwr_fngr_allev_nocyc_nocth_botup_avgrelerr_nfolds_ools_1r.data -a "Bottom-Up Search" -i /PATH/TO/ESL_paper_data/20210427_leon3_beebs_uco_pwr_fngr_allev_nocyc_nocth_topdown_avgrelerr_nfolds_ools_1r.data -a "Top-Down Search"
```
### Plot the BEEBS 1st run per-sample physical measurements and model errors
```
./MODELDATA_plot.py -p 1 -x "Samples[#]" -t 10 -y "Power[W]" -b /PATH/TO/ESL_paper_data/data/LEON3_BEEBS_finegrain_1run_physicaldata.data -l "Sensor Data" -i /PATH/TO/ESL_paper_data/20210423_leon3_beebs_beebs_pwr_fngr_nocyc_nocth_asicdata_avgrelerr_nfolds_ools_1r.data -a 'ASIC Data Only' -i /PATH/TO/ESL_paper_data/20210427_leon3_beebs_beebs_pwr_fngr_allev_nocyc_nocth_botup_avgrelerr_nfolds_ools_1r.data -a "Bottom-Up Search" -i /PATH/TO/ESL_paper_data/20210427_leon3_beebs_beebs_pwr_fngr_allev_nocyc_nocth_topdown_avgrelerr_nfolds_ools_1r.data -a "Top-Down Search"
``` | Provide a detailed description of the following dataset: GR712RC LEON3 Power Model Data |
S2Looking | **S2Looking** is a building change detection dataset that contains large-scale side-looking satellite images captured at varying off-nadir angles. The S2Looking dataset consists of 5,000 registered bitemporal image pairs (1024×1024 pixels, 0.5–0.8 m/pixel) of rural areas throughout the world and more than 65,920 annotated change instances. We provide two label maps to separately indicate the newly built and demolished building regions for each sample in the dataset. We establish a benchmark task based on this dataset, i.e., identifying the pixel-level building changes in the bi-temporal images. | Provide a detailed description of the following dataset: S2Looking |
QVHighlights | The Query-based Video Highlights (**QVHighlights**) dataset is a dataset for detecting customized moments and highlights from videos given natural language (NL) queries. It consists of over 10,000 YouTube videos, covering a wide range of topics, from everyday activities and travel in lifestyle vlog videos to social and political activities in news videos. Each video in the dataset is annotated with: (1) a human-written free-form NL query, (2) relevant moments in the video w.r.t. the query, and (3) five-point scale saliency scores for all query-relevant clips. | Provide a detailed description of the following dataset: QVHighlights |
GenWiki | GenWiki is a large-scale dataset for knowledge graph-to-text (G2T) and text-to-knowledge graph (T2G) conversion. It is introduced in the paper ["GenWiki: A Dataset of 1.3 Million Content-Sharing Text and Graphs for Unsupervised Graph-to-Text Generation"](https://www.aclweb.org/anthology/2020.coling-main.217.pdf) by Zhijing Jin, Qipeng Guo, Xipeng Qiu, and Zheng Zhang at COLING 2020. | Provide a detailed description of the following dataset: GenWiki |
Iranis | The **Iranis Dataset** is a large-scale dataset of Farsi license plate characters, containing more than 83,000 images of Farsi numbers and letters collected from real-world license plate images captured by various cameras.
Image source: [https://github.com/alitourani/Iranis-dataset](https://github.com/alitourani/Iranis-dataset) | Provide a detailed description of the following dataset: Iranis |
Geometry3K | A new large-scale geometry problem-solving dataset
- 3,002 multi-choice geometry problems
- dense annotations in formal language for the diagrams and text
- 27,213 annotated diagram logic forms (literals)
- 6,293 annotated text logic forms (literals) | Provide a detailed description of the following dataset: Geometry3K |
TLFM dataset | The TLFM dataset is structured in sequences of at least nine timesteps. It includes 9696 images of both brightfield and green fluorescent protein channels at a resolution of 256 × 256. It is a dataset for multi-domain (BF and GFP) microscopy image sequence generation. | Provide a detailed description of the following dataset: TLFM dataset |
RailSem19 | RailSem19 offers 8500 unique images taken from the ego-perspective of a rail vehicle (trains and trams). Extensive semantic annotations are provided, both geometry-based (rail-relevant polygons, all rails as polylines) and dense label maps with many Cityscapes-compatible road labels. Many frames show areas of intersection between road and rail vehicles (railway crossings, trams driving on city streets). RailSem19 is useful for rail applications and road applications alike.
Image credit: [https://wilddash.cc/railsem19](https://wilddash.cc/railsem19) | Provide a detailed description of the following dataset: RailSem19 |
MSLR WEB30K | The datasets are machine learning data, in which queries and urls are represented by IDs. The datasets consist of feature vectors extracted from query-url pairs along with relevance judgment labels:
(1) The relevance judgments are obtained from a retired labeling set of a commercial web search engine (Microsoft Bing), which take 5 values from 0 (irrelevant) to 4 (perfectly relevant).
(2) The features are basically extracted by us, and are those widely used in the research community.
In the data files, each row corresponds to a query-url pair. The first column is the relevance label of the pair, the second column is the query id, and the following columns are features. The larger the value of the relevance label, the more relevant the query-url pair. A query-url pair is represented by a 136-dimensional feature vector; a minimal parsing sketch is shown below.
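A minimal parsing sketch, assuming the standard LETOR plain-text layout (label, then `qid:<id>`, then `<feature_id>:<value>` pairs); the sample line is illustrative:
```
def parse_row(line):
    # One row: "<label> qid:<id> 1:<v1> 2:<v2> ... 136:<v136>"
    tokens = line.split()
    label = int(tokens[0])
    qid = int(tokens[1].split(":")[1])
    features = [0.0] * 136
    for token in tokens[2:]:
        fid, value = token.split(":")
        features[int(fid) - 1] = float(value)  # feature ids are 1-based
    return label, qid, features

label, qid, features = parse_row("2 qid:10 1:0.03 2:0.5 136:0.1")
``` | Provide a detailed description of the following dataset: MSLR WEB30K |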
Istella LETOR | The Istella LETOR full dataset is composed of 33,018 queries and 220 features representing each query-document pair. It consists of 10,454,629 examples labeled with relevance judgments ranging from 0 (irrelevant) to 4 (perfectly relevant). The average number of per-query examples is 316. It has been split into train and test sets according to an 80%-20% scheme. | Provide a detailed description of the following dataset: Istella LETOR |
DQN Replay Dataset | The DQN Replay Dataset was collected as follows:
We first train a [DQN][nature_dqn] agent, on all 60 [Atari 2600 games][ale]
with [sticky actions][stochastic_ale] enabled for 200 million frames (standard protocol) and save all of the experience tuples
of *(observation, action, reward, next observation)* (approximately 50 million)
encountered during training.
This logged DQN data can be found in the public [GCP bucket][gcp_bucket]
`gs://atari-replay-datasets` which can be downloaded using [`gsutil`][gsutil].
To install gsutil, follow the instructions [here][gsutil_install].
After installing gsutil, run the command to copy the entire dataset:
```
gsutil -m cp -R gs://atari-replay-datasets/dqn ./
```
To download the dataset only for a specific Atari 2600 game (*e.g.*, replace `GAME_NAME`
by `Pong` to download the logged DQN replay datasets for the game of Pong),
run the command:
```
gsutil -m cp -R gs://atari-replay-datasets/dqn/[GAME_NAME] ./
```
This data can be generated by running the online agents using
[`batch_rl/baselines/train.py`](https://github.com/google-research/batch_rl/blob/master/batch_rl/baselines/train.py) for 200 million frames
(standard protocol). Note that the dataset consists of approximately 50 million
experience tuples due to frame skipping (*i.e.*, repeating a selected action for
`k` consecutive frames) of 4. The stickiness parameter is set to 0.25, *i.e.*,
there is 25% chance at every time step that the environment will execute the
agent's previous action again, instead of the agent's new action.
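A minimal loading sketch, assuming the downloaded checkpoints are gzipped NumPy arrays as written by the accompanying `batch_rl` replay buffers; the exact path and file naming below are illustrative:
```
import gzip
import numpy as np

# One checkpoint of logged observations for one game and one run
# (assumed layout; adjust to wherever the bucket was copied).
path = "dqn/Pong/1/replay_logs/$store$_observation_ckpt.0.gz"
with gzip.open(path, "rb") as f:
    observations = np.load(f)
print(observations.shape, observations.dtype)
```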
[nature_dqn]: https://www.nature.com/articles/nature14236?wm=book_wap_0005
[gsutil_install]: https://cloud.google.com/storage/docs/gsutil_install#install
[gsutil]: https://cloud.google.com/storage/docs/gsutil
[batch_rl]: http://tgabel.de/cms/fileadmin/user_upload/documents/Lange_Gabel_EtAl_RL-Book-12.pdf
[stochastic_ale]: https://arxiv.org/abs/1709.06009
[ale]: https://github.com/mgbellemare/Arcade-Learning-Environment
[gcp_bucket]: https://console.cloud.google.com/storage/browser/atari-replay-datasets
[project_page]: https://offline-rl.github.io | Provide a detailed description of the following dataset: DQN Replay Dataset |
JS Fake Chorales | A MIDI dataset of 500 4-part chorales generated by the KS_Chorus algorithm, annotated with results from hundreds of listening test participants, with 500 further unannotated chorales. | Provide a detailed description of the following dataset: JS Fake Chorales |
QC-Science | QC-Science contains 47,832 question-answer pairs belonging to the science domain, tagged with labels of the form subject - chapter - topic. The dataset was collected with the help of a leading e-learning platform. The dataset consists of 40,895 samples for training, 2,153 samples for validation and 4,784 samples for testing.
Description adapted from: [https://arxiv.org/pdf/2107.10649v1.pdf](https://arxiv.org/pdf/2107.10649v1.pdf)
Image source: [https://arxiv.org/pdf/2107.10649v1.pdf](https://arxiv.org/pdf/2107.10649v1.pdf) | Provide a detailed description of the following dataset: QC-Science |
OntoNotes 4.0 | OntoNotes Release 4.0 contains the content of earlier releases -- OntoNotes Release 1.0 LDC2007T21, OntoNotes Release 2.0 LDC2008T04 and OntoNotes Release 3.0 LDC2009T24 -- and adds newswire, broadcast news, broadcast conversation and web data in English and Chinese and newswire data in Arabic. This cumulative publication consists of 2.4 million words as follows: 300k words of Arabic newswire; 250k words of Chinese newswire, 250k words of Chinese broadcast news, 150k words of Chinese broadcast conversation and 150k words of Chinese web text; and 600k words of English newswire, 200k words of English broadcast news, 200k words of English broadcast conversation and 300k words of English web text. | Provide a detailed description of the following dataset: OntoNotes 4.0 |
Multinational Structured Address Dataset | The Multinational Structured Address Dataset is a collection of addresses of 61 different countries. The addresses can either be "complete" (all the usual address components) or "incomplete" (missing some usual address components). | Provide a detailed description of the following dataset: Multinational Structured Address Dataset |
MyFood Dataset | The MyFood Dataset is an image database for segmenting images of Brazilian foods, composed of 9 classes: rice, beans, boiled egg, fried egg, pasta, salad, roasted meat, apple and chicken breast. It has an average of 125 images per class and a total of 1250 images, with a 60-20-20 ratio for the training, validation and testing sets, respectively. | Provide a detailed description of the following dataset: MyFood Dataset |
MD17 | Energies and forces for molecular dynamics trajectories of eight organic molecules. Level of theory: DFT (PBE+vdW-TS). | Provide a detailed description of the following dataset: MD17 |
Spectrum Challange 2 Dataset | The dataset is approved for public release, distribution unlimited.
The dataset is contained in two files - scrimmage4_link_dataset.pickle and scrimmage5_link_dataset.pickle
The pickle files are stored as lists of tuples, each tuple corresponding to a single link and containing two elements. Each element has a length equal to the number of frames in that link, which varies from link to link.
The first element contains the parameters:
1. Signal to Noise Ratio ('snr') - 1 element
2. The Modulation and Coding Scheme ('mcs') - 1 element
3. The center frequency of the link ('centerFreq') - 1 element
4. The bandwidth of the link ('bandwidth') - 1 element
5. The Power Spectral Density ('psd') - 16 elements
Thus the total width of each entry of the first element for a link is 20.
The second element contains the success of transmission ('rxSuccess'): if it is 1, there is no frame error; if it is 0, there is a frame error. A minimal loading sketch is shown below.
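A minimal loading sketch, assuming the tuple layout described above; the feature ordering follows the parameter list:
```
import pickle

with open("scrimmage4_link_dataset.pickle", "rb") as f:
    links = pickle.load(f)

features, rx_success = links[0]  # first link
frame = features[0]              # 20 values: snr, mcs, centerFreq,
                                 # bandwidth, then 16 psd values
print(len(links), len(features), frame[:4], rx_success[0])
```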
Here are the links to the dataset files mentioned in the code (one pickle file for each scrimmage):
[Scrimmage 4](https://purdue0-my.sharepoint.com/:u:/g/personal/amahdeej_purdue_edu/EQsfaBF0MjJNvBXqkPq-Lv0BlyAm8ph8O85s-vxOqVjJTA?e=pYHIQS) (547.5 MB) [Mirror](https://app.box.com/s/i0c1qimr0mjuyr38celtxbsuedhlp9tr)
[Scrimmage 5](https://purdue0-my.sharepoint.com/:u:/g/personal/amahdeej_purdue_edu/EVnfh_V2BZBOk9SOTvKDLa4BGQ54LA9rr_r0cfFQWC_SLw?e=Jh4yCL) (979.7 MB) [Mirror](https://app.box.com/s/sqyvrapww6z5ydg0rrx7tjszs32bhndx)
A larger dataset containing complete information about each match is also available. Please refer to SC2_Dataset_Documentation.pdf for more details regarding the structure of the full dataset. SC2_Dataset_Technical_Design_Report.pdf contains more information about the dataset acquisition process.
Here is the link to the full dataset (separate sqlite files for each match):
[Full Dataset](https://purdue0-my.sharepoint.com/:f:/g/personal/amahdeej_purdue_edu/EszW2WkpQWBLg9Y6cYX1FtUBpEyMS5XpUuCUxa2vFj5nXg?e=Nh0tk6) (135.517 GB) [Mirror (Needs Access Request)](https://app.box.com/s/snwqgmzxljjsu129wampesj0xgn2ozpq)
Please use the following citation to refer to the dataset:
A. S. M. M. Jameel, A. P. Mohamed, X. Zhang and A. El Gamal, "Deep Learning for Frame Error Prediction using a DARPA Spectrum Collaboration Challenge (SC2) Dataset," in IEEE Networking Letters, doi: 10.1109/LNET.2021.3096813. | Provide a detailed description of the following dataset: Spectrum Challange 2 Dataset |
AP | This is a paraphrasing dataset created using the adversarial paradigm. A task called the Adversarial Paraphrasing Task (APT) was designed, whose objective is to write sentences that mean the same as a given sentence but are as syntactically and lexically different as possible.
As shown in the paper, this dataset can be used to measure the performance of paraphrase identifier models and train them. This dataset and the task associated with it (APT) can also be used to challenge neural networks to generate better adversarial paraphrases (the work has done this for T5-base), which will in turn help create better paraphrase identifiers. | Provide a detailed description of the following dataset: AP |
Vehicle-Rear | Vehicle-Rear is a novel dataset for vehicle identification that contains more than three hours of high-resolution videos, with accurate information about the make, model, color and year of nearly 3,000 vehicles, in addition to the position and identification of their license plates. | Provide a detailed description of the following dataset: Vehicle-Rear |
TIP 2018 | The first large-scale demoiréing dataset. The dataset contains 135,000 image pairs, each containing an image contaminated with moiré patterns and its corresponding uncontaminated reference image. | Provide a detailed description of the following dataset: TIP 2018 |
Action-Camera Parking | The Action-Camera Parking Dataset contains 293 images captured at a roughly 10-meter height using a GoPro Hero 6 camera. It can be used for training machine learning models that perform image-based parking space occupancy classification.
Image credit: [https://github.com/martin-marek/parking-space-occupancy](https://github.com/martin-marek/parking-space-occupancy) | Provide a detailed description of the following dataset: Action-Camera Parking |
ICDAR 2021 Competition on Historical Map Segmentation | - **Revision:** v1.0.0-full-20210527a
- **DOI:** 10.5281/zenodo.4817662
- **Authors:** J. Chazalon, E. Carlinet, Y. Chen, J. Perret, C. Mallet, B. Duménieu and T. Géraud
- **Official competition website:** https://icdar21-mapseg.github.io/
This is the dataset of the ICDAR 2021 Competition on Historical Map Segmentation (“MapSeg”).
This competition ran from November 2020 to April 2021.
## Motivation
This competition aims at encouraging research in the digitization of historical maps. In order to be usable in historical studies, the information contained in such images needs to be extracted. The general pipeline involves multiple stages; we list some essential ones here:
- segment map content: locate the area of the image which contains map content;
- extract map objects from different layers: detect objects like roads, buildings, building blocks, rivers, etc. to create geometric data;
- georeference the map: by detecting objects at known geographic coordinates, compute the transformation to turn geometric objects into geographic ones (which can be overlaid on current maps).
## Tasks
The tasks we propose simulate the three essential digitization steps we just mentioned.
### Task 1: “Detect Building Blocks”
This task is the flagship of this competition.
Given a fragment of map sheet image focused on map content, you need to detect the building blocks.
Building blocks are symbolized by a thick line.
They do not overlap between themselves, but many other elements can perturb their detection:
- special buildings (hatched areas) can be included in building blocks (sometimes they cover the building block completely);
- text can be overlaid on lines;
- those maps contain many lines which need to be filtered out (internal building structures, railways, rivers, gardens…).
Expected output for this task is a binary mask indicating for each pixel whether it belongs to a building block or not.
Evaluation tools also tolerate a label map in TIFF format, where each pixel is labelled with the identifier of the shape it belongs to, using an INT16.
To extract shapes from the binary mask, 4-connectivity is used (hence the background has 8-connectivity).
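As an illustration, the tolerated label map can be derived from a binary mask with standard tools. A minimal sketch, assuming scipy and imageio are available (file names are illustrative):
```
import numpy as np
from scipy import ndimage
import imageio.v3 as iio

mask = iio.imread("prediction_mask.png") > 0  # binary building-block mask
four_connectivity = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]])     # shapes use 4-connectivity
labels, num_shapes = ndimage.label(mask, structure=four_connectivity)
iio.imwrite("prediction_labels.tif", labels.astype(np.int16))
```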
### Task 2: “Segment Map Area”
This task is the equivalent of text area detection for OCR: given the image of a complete map sheet, you need to segment the area which contains map content.
This area is usually well separated from the other elements (title, legend, scale…) by several frames, but sometimes map content exceeds the frame for some large objects.
While most of the area is delineated by straight lines, some objects were drawn outside the frame on several sheets.
We decided to segment each of those regions as closely as possible.
Expected output for this task is a binary mask indicating for each pixel whether it belongs to the map area or not.
### Task 3: “Locate Graticule Lines Intersections”
This task is essential to the georeferencing of the map: graticule lines are lines which indicate the North/South/East/West coordinates relative to the reference point. Their intersection points are very useful to provide key points for the registration of the map image.
Given the image of a complete map sheet, you need to locate the intersection points of such lines.
These lines usually cover the map content from left to right or from top to bottom but beware:
- due to document aging, paper sheets are not flat anymore and lines are not straight;
- lines may be in diagonal for some areas;
- lines can be overlaid with many other objects.
Expected output for this task is a list of coordinates (in image referential, i.e. `0,0` at top left, x-axis pointing to the right and y-axis pointing downward). | Provide a detailed description of the following dataset: ICDAR 2021 Competition on Historical Map Segmentation |
MIT-BIH Arrhythmia Database | The MIT-BIH Arrhythmia Database contains 48 half-hour excerpts of two-channel ambulatory ECG recordings, obtained from 47 subjects studied by the BIH Arrhythmia Laboratory between 1975 and 1979. Twenty-three recordings were chosen at random from a set of 4000 24-hour ambulatory ECG recordings collected from a mixed population of inpatients (about 60%) and outpatients (about 40%) at Boston's Beth Israel Hospital; the remaining 25 recordings were selected from the same set to include less common but clinically significant arrhythmias that would not be well-represented in a small random sample.
The recordings were digitized at 360 samples per second per channel with 11-bit resolution over a 10 mV range. Two or more cardiologists independently annotated each record; disagreements were resolved to obtain the computer-readable reference annotations for each beat (approximately 110,000 annotations in all) included with the database.
This directory contains the entire MIT-BIH Arrhythmia Database. About half (25 of 48 complete records, and reference annotation files for all 48 records) of this database has been freely available here since PhysioNet's inception in September 1999. The 23 remaining signal files, which had been available only on the MIT-BIH Arrhythmia Database CD-ROM, were posted here in February 2005.
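A minimal reading sketch, assuming the `wfdb` Python package; record `100` is one of the 48, and `pn_dir` streams the files directly from PhysioNet:
```
import wfdb

record = wfdb.rdrecord("100", pn_dir="mitdb")          # two-channel ECG signal
annotation = wfdb.rdann("100", "atr", pn_dir="mitdb")  # reference beat annotations
print(record.fs, record.p_signal.shape, len(annotation.sample))
```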
Much more information about this database may be found in the [MIT-BIH Arrhythmia Database Directory](https://archive.physionet.org/physiobank/database/html/mitdbdir/mitdbdir.htm). | Provide a detailed description of the following dataset: MIT-BIH Arrhythmia Database |
C# EditCompletion | We scraped the 53 most popular C# repositories from GitHub and extracted all commits since the beginning of the project’s history. From each commit, we extracted edits in C# files along with the edits in their surrounding context. | Provide a detailed description of the following dataset: C# EditCompletion |
UDD | UDD is an underwater open-sea farm object detection dataset. UDD consists of 3 categories (seacucumber, seaurchin, and scallop) with 2,227 images. It's the first dataset collected in a real open-sea farm for underwater robot picking. | Provide a detailed description of the following dataset: UDD |
News Articles Dataset with Summary | This dataset consists of news articles scraped from the New York Times, CNN, Business Insider and Breitbart. The original dataset published on Kaggle did not provide any human summaries; it only offered the title of each article, and while the title could be used as the summary, it is not ideal because headlines are too short. We generated the labels manually by adding a human summary for the available articles. We also added another column called theme, which states the genre of each news article.
The dataset is well suited for summarization, as the provided news articles are long and time-consuming to read, making automatic summarization of the articles desirable. The dataset consists of 50,001 rows of data. | Provide a detailed description of the following dataset: News Articles Dataset with Summary |
Perla Dataset | This dataset contains the results of a depression screening experiment using two instruments: the PHQ-9 depression screening questionnaire and the chatbot Perla.
The dataset was used to compare the results of these two methods for assessing the presence of depression in the population. The sample consists of Spanish-speaking participants who responded to both the PHQ-9 and Perla's questions. | Provide a detailed description of the following dataset: Perla Dataset |
MSU Video Super Resolution Benchmark: Detail Restoration | This is a dataset for a video super-resolution task. The dataset contains the most complex content for the restoration task: faces, text, QR-codes, car numbers, unpatterned textures, small details. Videos include different types of motion and different types of degradation: bicubic interpolation (BI) and Gaussian blurring and downsampling (BD). The resolution of all input video sequences is 480x320.
Source: [https://videoprocessing.ai/benchmarks/video-super-resolution.html](https://videoprocessing.ai/benchmarks/video-super-resolution.html)
Image Source: [https://videoprocessing.ai/benchmarks/video-super-resolution.html](https://videoprocessing.ai/benchmarks/video-super-resolution.html) | Provide a detailed description of the following dataset: MSU Video Super Resolution Benchmark: Detail Restoration |
SaRNet | **SaRNet** is a single-class dataset consisting of tiles of satellite imagery labeled with potential 'targets'. Labelers were instructed to draw boxes around anything they suspect may be a paraglider wing, missing in a remote area of Nevada. Volunteers were shown examples of similar objects already in the environment for comparison. | Provide a detailed description of the following dataset: SaRNet |
H3DS | **H3DS** is a dataset of high-resolution 3D full-head textured scans and 360º images, consisting of 23 3D full-head scans with images, masks and camera poses. The 3D geometry has been captured using a structured light scanner, which leads to precise ground truth geometries. | Provide a detailed description of the following dataset: H3DS |
Navigation Turing Test | Replay data from human players and AI agents navigating in a 3D game environment.
Introduced in "Navigation Turing Test (NTT): Learning to Evaluate Human-Like Navigation" [ICML 2021] to learn how to evaluate humanlike behavior in agents. | Provide a detailed description of the following dataset: Navigation Turing Test |
GIGO revisited: ML publications' approaches to training data | A random sample of 200 machine learning publications, systematically analyzed by a team of labelers, who asked up to 15 questions about how the publication discusses its training data. More documentation in data/README.md. | Provide a detailed description of the following dataset: GIGO revisited: ML publications' approaches to training data |
RaidaR | RaidaR is a richly annotated image dataset of rainy street scenes. RaidaR consists of 58,542 real rainy images containing several rain-induced artifacts: fog, droplets, road reflections, etc. 5,000/3,658 images were carefully annotated with semantic/instance segmentation, respectively. | Provide a detailed description of the following dataset: RaidaR |
COVIDEmo | A dataset of tweets that reference the COVID-19 pandemic with emotion labels. | Provide a detailed description of the following dataset: COVIDEmo |
CLIP | We created a dataset of clinical action items annotated over MIMIC-III. This dataset, which we call CLIP, is annotated by physicians and covers 718 discharge summaries, representing 107,494 sentences. Annotations were collected as character-level spans to discharge summaries after applying surrogate generation to fill in the anonymized templates from MIMIC-III text with faked data. We release these spans, their aggregation into sentence-level labels, and the sentence tokenizer used to aggregate the spans and label sentences. We also release the surrogate data generator, and the document IDs used for training, validation, and test splits, to enable reproduction. The spans are annotated with 0 or more labels of 7 different types, representing the different actions that may need to be taken: Appointment, Lab, Procedure, Medication, Imaging, Patient Instructions, and Other. We encourage the community to use this dataset to develop methods for automatically extracting clinical action items from discharge summaries. | Provide a detailed description of the following dataset: CLIP |
Smoking Data of hospitalized Covid-19 patients | Data related to 1040 patients with Covid-19 admitted to hospitals in Iran have been collected. These patients were randomly selected from patients admitted to hospitals in Rasht, Tehran, and Bojnord. Of these 1040 patients, 375 are female and 665 are male. Also, the age of these people is between 14 and 91 years, and the average age is about 54 years. | Provide a detailed description of the following dataset: Smoking Data of hospitalized Covid-19 patients |
A robot dataset of successful and failed placement executions | The dataset contains the following data from successful and failed executions of the Toyota HSR robot placing a book on a shelf.
* RGB images from the robot's head camera
* Depth images from the robot's head camera
* Rendered images of the robot's 3D model from the point of view of the robot's head camera
* Force-torque readings from a wrist-mounted force-torque sensor
* Joint efforts, velocities and positions
* Extrinsic and intrinsic camera calibration parameters
* Frame-level anomaly annotations
The anomalies that occur during execution include:
* the manipulated book falling down
* books on the shelf being disturbed significantly
* camera occlusions
* robot being disturbed by an external collision
The dataset is split into a train, validation and test set with the following number of trials:
* Train: 48 successful trials
* Validation: 6 successful trials
* Test: 60 anomalous trials and 7 successful trials | Provide a detailed description of the following dataset: A robot dataset of successful and failed placement executions |
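Because the train and validation splits contain only successful trials while anomalies appear only at test time, the split design points toward one-class anomaly detection. A minimal scikit-learn sketch under that assumption; the feature extraction from force-torque readings is a placeholder, not the dataset's prescribed pipeline:

```python
# One-class anomaly detection sketch: fit on successful trials only,
# flag outliers at test time. Feature extraction is a placeholder --
# simple statistics of a (T, 6) force-torque time series per trial.
import numpy as np
from sklearn.ensemble import IsolationForest

def trial_features(ft_readings: np.ndarray) -> np.ndarray:
    """Reduce a (T, 6) force-torque sequence to a fixed-length vector."""
    return np.concatenate([ft_readings.mean(0), ft_readings.std(0),
                           ft_readings.max(0), ft_readings.min(0)])

# Stand-ins for the 48 training and 67 test trials (real data: load from disk)
rng = np.random.default_rng(0)
train = np.stack([trial_features(rng.normal(size=(200, 6))) for _ in range(48)])
test = np.stack([trial_features(rng.normal(size=(200, 6))) for _ in range(67)])

model = IsolationForest(random_state=0).fit(train)
pred = model.predict(test)  # +1 = consistent with successful trials, -1 = anomalous
print((pred == -1).sum(), "trials flagged anomalous")
```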
Reasonable Crowd | The **Reasonable Crowd** dataset supports evaluating autonomous driving behavior in a limited operating domain. The data consists of 92 traffic scenarios, each with multiple ways of traversing it. Multiple annotators expressed their preference between pairs of scenario traversals. | Provide a detailed description of the following dataset: Reasonable Crowd |
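Pairwise preferences like these are commonly fit with a Bradley-Terry model to recover a scalar score per traversal. A minimal sketch of that standard approach (not the method used by the dataset's authors):

```python
# Bradley-Terry sketch: recover per-traversal utility scores from
# pairwise annotator preferences (winner, loser). Illustrative only;
# not the Reasonable Crowd evaluation procedure.
import numpy as np

def bradley_terry(n_items: int, prefs: list[tuple[int, int]],
                  lr: float = 0.1, steps: int = 500) -> np.ndarray:
    """Gradient ascent on the Bradley-Terry log-likelihood."""
    theta = np.zeros(n_items)
    for _ in range(steps):
        grad = np.zeros(n_items)
        for winner, loser in prefs:
            p_win = 1.0 / (1.0 + np.exp(theta[loser] - theta[winner]))
            grad[winner] += 1.0 - p_win
            grad[loser] -= 1.0 - p_win
        theta += lr * grad
    return theta - theta.mean()  # scores are identifiable only up to a shift

# Three traversals; 0 is preferred over 1 and 2, and 1 over 2
scores = bradley_terry(3, [(0, 1), (0, 2), (1, 2)])
print(scores.argsort()[::-1])  # ranking, best first -> [0 1 2]
```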
BH-rPPG | The BH-rPPG dataset (short for Beihang University Remote PhotoPlethysmoGraphy) consists of recordings under 3 unevenly distributed lighting conditions, collected in an indoor environment. To evaluate the performance of deep learning based rPPG under different lighting conditions, we recruited twelve healthy subjects (11 males and 1 female) on campus, with a mean age of 32 and an SD of 2.5. | Provide a detailed description of the following dataset: BH-rPPG |
CalCROP21 | **CalCROP21** is a georeferenced multi-spectral dataset of satellite imagery and crop labels. It is a semantic segmentation benchmark for the diverse crops of the Central Valley region of California at 10m spatial resolution, built using a Google Earth Engine based robust image processing pipeline. | Provide a detailed description of the following dataset: CalCROP21 |
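The 10m resolution matches Sentinel-2 imagery, so a typical starting point is a cloud-filtered Sentinel-2 composite from Google Earth Engine. A minimal sketch with placeholder region and dates; this is not the CalCROP21 pipeline itself:

```python
# Illustrative Google Earth Engine sketch: a cloud-filtered Sentinel-2
# median composite over a rough Central Valley bounding box. Region and
# dates are placeholders, not the CalCROP21 processing pipeline.
import ee

ee.Initialize()  # requires prior Earth Engine authentication

region = ee.Geometry.Rectangle([-121.5, 36.0, -119.5, 38.0])  # [xMin, yMin, xMax, yMax]
collection = (ee.ImageCollection("COPERNICUS/S2_SR")
              .filterBounds(region)
              .filterDate("2018-04-01", "2018-09-30")
              .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 10)))

composite = collection.median().clip(region)  # per-pixel median composite
print(composite.bandNames().getInfo())
```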
IRLCov19 | **IRLCov19** is a multilingual Twitter dataset related to Covid-19, collected between February 2020 and July 2020 specifically for regional languages in India. It contains more than 13 million tweets. | Provide a detailed description of the following dataset: IRLCov19 |
SNARE | **SNARE**, short for ShapeNet Annotated with Referring Expressions, is a benchmark that requires a model to choose which of two objects is being referenced by a natural language description. | Provide a detailed description of the following dataset: SNARE |
TinyVIRAT-v2 | **TinyVIRAT-v2** is a benchmark dataset for recognizing real-world low-resolution activities present in videos. The dataset is comprised of naturally occurring low-resolution actions. It is an extension of the TinyVIRAT dataset and consists of actions with multiple labels. The videos are extracted from security footage, which makes them realistic and more challenging. | Provide a detailed description of the following dataset: TinyVIRAT-v2 |
OLR 2021 | The **OLR 2021** dataset contains the data for the Oriental Language Recognition (OLR) 2021 Challenge, which intends to improve the performance of language recognition systems and speech recognition systems within multilingual scenarios. | Provide a detailed description of the following dataset: OLR 2021 |
Facebook Page-Page | This web graph is a page-page graph of verified Facebook sites. Nodes represent official Facebook pages while the links are mutual likes between sites. Node features are extracted from the site descriptions that the page owners created to summarize the purpose of the site. This graph was collected through the Facebook Graph API in November 2017 and restricted to pages from 4 categories defined by Facebook: politicians, governmental organizations, television shows, and companies. The task related to this dataset is multi-class node classification for the 4 site categories. | Provide a detailed description of the following dataset: Facebook Page-Page |
Wiki Squirrel | The data was collected from the English Wikipedia (December 2018). These datasets represent page-page networks on specific topics (chameleons, crocodiles, and squirrels). Nodes represent articles and edges are mutual links between them. The edges csv files contain the edges; nodes are indexed from 0. The features json files contain the features of articles: each key is a page id, and node features are given as lists. The presence of a feature in the feature list means that an informative noun appeared in the text of the Wikipedia article. The target csv contains the node identifiers and the average monthly traffic between October 2017 and November 2018 for each page. For each page-page network we list the number of nodes and edges along with some other descriptive statistics. | Provide a detailed description of the following dataset: Wiki Squirrel |
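A minimal loading sketch for the layout described above; the file names are assumptions:

```python
# Sketch of loading one Wikipedia page-page network; file names are
# assumptions based on the layout described above.
import csv
import json

with open("squirrel_edges.csv") as f:         # assumed columns: id1, id2
    reader = csv.reader(f)
    next(reader)                              # skip the header row
    edges = [(int(a), int(b)) for a, b in reader]

with open("squirrel_features.json") as f:     # page id -> list of noun feature ids
    features = {int(k): v for k, v in json.load(f).items()}

with open("squirrel_target.csv") as f:        # assumed columns: id, traffic
    reader = csv.reader(f)
    next(reader)
    target = {int(node): float(traffic) for node, traffic in reader}

print(len(edges), "edges,", len(features), "nodes")
```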
BMELD | **BMELD** is a bilingual (English-Chinese) dialogue corpus for neural chat translation. | Provide a detailed description of the following dataset: BMELD |
Open Buildings | Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. As the project is based in Ghana, the current focus is on the continent of Africa. | Provide a detailed description of the following dataset: Open Buildings |
TERRA-REF | The ARPA-E funded TERRA-REF project is generating open-access reference datasets for the study of plant sensing, genomics, and phenomics. Sensor data were generated by a field scanner sensing platform that captures color, thermal, hyperspectral, and active fluorescence imagery as well as three-dimensional structure and associated environmental measurements. This dataset is provided alongside data collected using traditional field methods in order to support calibration and validation of algorithms used to extract plot-level phenotypes from these datasets.
Data were collected at the University of Arizona Maricopa Agricultural Center in Maricopa, Arizona.
This site hosts a large field scanner with fifteen sensors, many of which are capable of capturing mm-scale images and point clouds at daily to weekly intervals.
These data are intended to be re-used, and are accessible as a combination of files and databases linked by spatial, temporal, and genomic information. In addition to providing open access data, the entire computational pipeline is open source, and we enable users to access high-performance computing environments.
The study has evaluated a sorghum diversity panel, biparental cross populations, and elite lines and hybrids from structured sorghum breeding populations.
In addition, a durum wheat diversity panel was grown and evaluated over three winter seasons.
The initial release includes derived data from two seasons in which the sorghum diversity panel was evaluated.
Future releases will include data from additional seasons and locations.
The TERRA-REF reference dataset can be used to characterize phenotype-to-genotype associations, on a genomic scale, that will enable knowledge-driven breeding and the development of higher-yielding cultivars of sorghum and wheat.
The data is also being used to develop new algorithms for machine learning, image analysis, genomics, and optical sensor engineering. | Provide a detailed description of the following dataset: TERRA-REF |
Deezer User Networks | The data was collected from the music streaming service Deezer (November 2017). These datasets represent friendship networks of users from 3 European countries: Romania, Croatia, and Hungary. Nodes represent the users and edges are the mutual friendships. We reindexed the nodes in order to achieve a certain level of anonymity. The csv files contain the edges -- nodes are indexed from 0. The json files contain the genre preferences of users -- each key is a user id, and the liked genres are given as lists. Genre notations are consistent across users, and in each dataset users could like 84 distinct genres. Liked genre lists were compiled based on the liked song lists. For each dataset we list the number of nodes and edges. | Provide a detailed description of the following dataset: Deezer User Networks |
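Since each user's liked genres are drawn from a shared vocabulary of 84 genres, a natural representation is a binary user-genre indicator matrix. A minimal sketch, with an assumed file name for the Romanian network:

```python
# Sketch: turn per-user liked-genre lists into a binary user x genre
# matrix for one country. File name and key layout are assumptions.
import json
import numpy as np

with open("RO_genres.json") as f:             # assumed: user id -> list of liked genres
    genres = json.load(f)

all_genres = sorted({g for liked in genres.values() for g in liked})  # up to 84
index = {g: i for i, g in enumerate(all_genres)}

matrix = np.zeros((len(genres), len(all_genres)), dtype=np.int8)
for row, (user, liked) in enumerate(sorted(genres.items(), key=lambda kv: int(kv[0]))):
    for g in liked:
        matrix[row, index[g]] = 1

print(matrix.shape, "users x genres;", matrix.sum(), "likes total")
```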
Facebook Pages | We collected data about Facebook pages (November 2017). These datasets represent blue verified Facebook page networks of different categories. Nodes represent the pages and edges are mutual likes among them. We reindexed the nodes in order to achieve a certain level of anonymity. The csv files contain the edges -- nodes are indexed from 0. We included 8 distinct types of pages. These are listed below. For each dataset we list the number of nodes and edges. | Provide a detailed description of the following dataset: Facebook Pages |
DBP2.0 zh-en | The DBP2.0 dataset can be downloaded from the figshare repository. It has three entity alignment settings, i.e., ZH-EN, JA-EN and FR-EN. Each setting has the following files:
ent_links: reference entity alignment;
rel_triples_1: relation triples in the ZH or JA or FR KG, list of triples like (h \t r \t t);
rel_triples_2: relation triples in the EN KG;
splits/train_links: training data for entity alignment, list of pairs like (e1 \t e2);
splits/valid_links: validation data for entity alignment;
splits/test_links: test data for entity alignment;
splits/train_unlinked_ent1: training data for dangling entity detection, list of dangling entities in the ZH or JA or FR KG;
splits/train_unlinked_ent2: training data for dangling entity detection, list of dangling entities in the EN KG;
splits/valid_unlinked_ent1: validation data for dangling entity detection, list of dangling entities in the ZH or JA or FR KG;
splits/valid_unlinked_ent2: validation data for dangling entity detection, list of dangling entities in the EN KG;
splits/test_unlinked_ent1: test data for dangling entity detection, list of dangling entities in the ZH or JA or FR KG;
splits/test_unlinked_ent2: test data for dangling entity detection, list of dangling entities in the EN KG;
For more information, see: https://github.com/nju-websoft/OpenEA/tree/master/dbp2.0 | Provide a detailed description of the following dataset: DBP2.0 zh-en |
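Given the tab-separated layout described above, a minimal parsing sketch (paths assume the files sit in the working directory):

```python
# Minimal parser for the tab-separated DBP2.0 files described above.
def read_pairs(path: str) -> list[tuple[str, ...]]:
    """Entity alignment links: one (e1 \t e2) pair per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]

def read_triples(path: str) -> list[tuple[str, ...]]:
    """Relation triples: one (h \t r \t t) triple per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]

def read_entities(path: str) -> list[str]:
    """Dangling entity lists: one entity per line."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

train_links = read_pairs("splits/train_links")      # entity alignment training pairs
triples_zh = read_triples("rel_triples_1")          # ZH KG relation triples
dangling_zh = read_entities("splits/train_unlinked_ent1")  # ZH dangling entities
```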
EmailSum | Email Thread Summarization (EmailSum) is a dataset which contains human-annotated short (<30 words) and long (<100 words) summaries of 2,549 email threads (each containing 3 to 10 emails) over a wide variety of topics. It was developed to spur research in thread summarization. | Provide a detailed description of the following dataset: EmailSum |
SyDog | SyDog is a synthetic dataset of dogs containing ground-truth pose and bounding box coordinates, generated using the Unity game engine. | Provide a detailed description of the following dataset: SyDog |
mTVR | mTVR is a large-scale multilingual video moment retrieval dataset, containing 218K English and Chinese queries from 21.8K TV show video clips. The dataset is collected by extending the popular TVR dataset (in English) with paired Chinese queries and subtitles. Compared to existing moment retrieval datasets, mTVR is multilingual, larger, and comes with diverse annotations. | Provide a detailed description of the following dataset: mTVR |
Chest ImaGenome | Chest ImaGenome is a dataset with a scene graph data structure to describe 242,072 images. Local annotations are automatically produced using a joint rule-based natural language processing (NLP) and atlas-based bounding box detection pipeline. Through a radiologist-constructed CXR ontology, the annotations for each CXR are connected as an anatomy-centered scene graph, useful for image-level reasoning and multimodal fusion applications. Overall, the following are provided: i) 1256 combinations of relation annotations between 29 CXR anatomical locations (objects with bounding box coordinates) and their attributes, structured as a scene graph per image, ii) over 670,000 localized comparison relations (for improved, worsened, or no change) between the anatomical locations across sequential exams, and iii) a manually annotated gold standard scene graph dataset from 500 unique patients.
Description from: [Chest ImaGenome Dataset for Clinical Reasoning](https://paperswithcode.com/paper/chest-imagenome-dataset-for-clinical)
Image source: [Chest ImaGenome Dataset for Clinical Reasoning](https://paperswithcode.com/paper/chest-imagenome-dataset-for-clinical) | Provide a detailed description of the following dataset: Chest ImaGenome |
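A hedged sketch of walking one per-image scene graph, assuming a JSON layout with per-object names, bounding boxes, and attribute lists; the actual released schema may differ:

```python
# Hypothetical sketch: iterate over a per-image scene graph, listing
# each anatomical location's bounding box and attributes. The key
# names here are illustrative; the released schema may differ.
import json

with open("scene_graph.json") as f:           # placeholder path
    graph = json.load(f)

for obj in graph["objects"]:                  # assumed key: anatomical locations
    name = obj["name"]                        # e.g. "left lung"
    x1, y1, x2, y2 = obj["bbox"]              # assumed bounding box format
    attrs = obj.get("attributes", [])         # e.g. ["opacity", "pleural effusion"]
    print(f"{name} [{x1},{y1},{x2},{y2}]: {', '.join(attrs)}")
```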
ManiSkill | ManiSkill is a large-scale learning-from-demonstrations benchmark for articulated object manipulation with visual input (point cloud and image). ManiSkill supports object-level variations by utilizing a rich and diverse set of articulated objects, and each task is carefully designed for learning manipulations on a single category of objects. ManiSkill is equipped with high-quality demonstrations to facilitate learning-from-demonstrations approaches and perform evaluations on common baseline algorithms. ManiSkill can encourage the robot learning community to explore more on learning generalizable object manipulation skills. | Provide a detailed description of the following dataset: ManiSkill |
DadaGP | DadaGP is a new symbolic music dataset comprising 26,181 song scores in the GuitarPro format covering 739 musical genres, along with an accompanying tokenized format well-suited for generative sequence models such as the Transformer. The tokenized format is inspired by event-based MIDI encodings, often used in symbolic music generation models. The dataset is released with an encoder/decoder which converts GuitarPro files to tokens and back.
Description from: [DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models](https://paperswithcode.com/paper/dadagp-a-dataset-of-tokenized-guitarpro-songs)
Image source: [https://arxiv.org/pdf/2107.14653v1.pdf](https://arxiv.org/pdf/2107.14653v1.pdf) | Provide a detailed description of the following dataset: DadaGP |
OpenForensics | OpenForensics is a challenging large-scale dataset designed with face-wise rich annotations explicitly for face forgery detection and segmentation. With its rich annotations, the OpenForensics dataset has great potential for research in both deepfake prevention and general human face detection. | Provide a detailed description of the following dataset: OpenForensics |
HR-Crime | HR-Crime is a subset of the [UCF-Crime](https://paperswithcode.com/dataset/ucf-crime) dataset suitable for human-related anomaly detection tasks. | Provide a detailed description of the following dataset: HR-Crime |
USC | The Uzbek speech corpus (USC) comprises 958 different speakers with a total of 105 hours of transcribed audio recordings. This is the first open-source Uzbek speech corpus dedicated to the ASR task. | Provide a detailed description of the following dataset: USC |
SoundingEarth | SoundingEarth consists of co-located aerial imagery and audio samples all around the world. | Provide a detailed description of the following dataset: SoundingEarth |