| dataset_name | description | prompt |
|---|---|---|
WebFace260M | **WebFace260M** is a million-scale face benchmark constructed to help the research community close the data gap with industry.
It consists of:
- Noisy 4M identities and 260M faces
- High-quality training data with 42M images of 2M identities, obtained by automatic cleaning
- A test set with rich attributes and a time-constrained evaluation protocol | Provide a detailed description of the following dataset: WebFace260M |
DeepFake MNIST+ | DeepFake MNIST+ is a deepfake facial animation dataset generated by a SOTA image animation generator. It includes 10,000 facial animation videos covering ten different actions, which can spoof recent liveness detectors. | Provide a detailed description of the following dataset: DeepFake MNIST+ |
mvor | **Multi-View Operating Room** (MVOR) dataset consists of 732 synchronized multi-view frames recorded by three RGB-D cameras in a hybrid OR during real clinical interventions. Each multi-view frame consists of three color and three depth images. The MVOR dataset was sampled from four days of recording in an interventional room at the University Hospital of Strasbourg during procedures such as vertebroplasty and lung biopsy. There are in total 4699 bounding boxes, 2926 2D keypoint annotations, and 1061 3D keypoint annotations. | Provide a detailed description of the following dataset: mvor |
ConvRef | **ConvRef** is a conversational QA benchmark with reformulations.
It consists of around 11k natural conversations with about 205k reformulations.
ConvRef builds upon the conversational KG-QA benchmark [ConvQuestions](/dataset/convquestions).
Questions come from five different domains: books, movies, music, TV series, and soccer; answers are Wikidata entities.
We used conversation sessions in ConvQuestions as input to our user study. Study participants interacted with a baseline QA system that was trained using the available paraphrases in ConvQuestions as proxies for reformulations. Users were shown follow-up questions in a given conversation interactively, one after the other, along with the answer coming from the baseline QA system. For wrong answers, the user was prompted to reformulate the question up to four times if needed. In this way, users were able to pose reformulations based on previous wrong answers and the conversation history. | Provide a detailed description of the following dataset: ConvRef |
ISAdetect dataset | This repository holds two datasets: one with both the original binaries and the code sections extracted from them (“full dataset”), and one with only the code sections (“only code sections”). The code sections were extracted by carving out sections of the binary that were marked as executable. The binaries were scraped from Debian repositories.
There are also two CSV files available, one for the full binaries and one for the code sections only, which include the 293 features extracted from about 3,000 binaries per architecture. These features can be used to train classifiers.
The dataset consists of thousands of binaries for the following 23 architectures: alpha, amd64, arm64, armel, armhf, hppa, i386, ia64, m68k, mips, mips64el, mipsel, powerpc, powerpcspe, powerpc64, powerpc64el, riscv, s390, s390x, sh4, sparc, sparc64 and x32.
There are 98,500 binary files in total: about 27 gigabytes (uncompressed) of full binaries and about 15 gigabytes (uncompressed) of code sections extracted from those binaries.
Both datasets hold the binaries in directories named by architecture. The files inside the folders are named as MD5 hashes of the original binary files, and a file named with the hash and ending in ".code" contains the concatenation of all code sections of the original binary file. Each architecture folder also holds a JSON file named after the architecture, e.g. amd64 holds amd64.json, whose structure is described in a JSON Schema-like notation in the dataset's documentation.
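A minimal sketch of how one might walk this layout (illustrative only, not part of the dataset release; the root path below is an assumption, while the per-architecture file names follow the description above):
```python
# Minimal loading sketch based on the layout described above; paths are illustrative.
import json
from pathlib import Path

root = Path("isadetect/only_code_sections")  # hypothetical extraction directory
for arch_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    # per-architecture metadata, e.g. amd64/amd64.json
    metadata = json.loads((arch_dir / f"{arch_dir.name}.json").read_text())
    for code_file in arch_dir.glob("*.code"):
        code_bytes = code_file.read_bytes()  # concatenated code sections of one binary
        # ... feed code_bytes into feature extraction / an ISA classifier
```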
This work is based on work by John Clemens, 2015, “Automatic classification of object code using machine learning” and De Nicolao, Pietro et al., 2018, “ELISA: ELiciting ISA of Raw Binaries for Fine-Grained Code and Data Separation”
This dataset is released as part of the following papers:
Sami Kairajärvi, Andrei Costin, and Timo Hämäläinen. 2020. ISAdetect: Usable automated detection of ISA (CPU architecture and endianness) for executable binary files and object code. In Tenth ACM Conference on Data and Application Security and Privacy (CODASPY’20), March 16–18, 2020, New Orleans, LA, USA. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3374664.3375742
Kairajärvi, Sami, Andrei Costin, and Timo Hämäläinen. "Towards usable automated detection of CPU architecture and endianness for arbitrary binary files and object code sequences." arXiv preprint arXiv:1908.05459 (2019).
Kairajärvi, Sami. "Automatic identification of architecture and endianness using binary file contents." (2019).
The code associated with this dataset can be found at https://github.com/kairis/isadetect
Changelog:
- version 6 (29.3.2020): added Weka models
- version 5 (17.1.2020): cleaned up dataset
- version 4 (13.1.2020): initial release | Provide a detailed description of the following dataset: ISAdetect dataset |
BBBC041 | P. vivax (malaria) infected human blood smears with bounding box annotations. The data consists of two classes of uninfected cells (RBCs and leukocytes) and four classes of infected cells (gametocytes, rings, trophozoites, and schizonts). | Provide a detailed description of the following dataset: BBBC041 |
SoMeSci | Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci - Software Mentions in Science - a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: κ = .82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results for the different tasks. | Provide a detailed description of the following dataset: SoMeSci |
MuDoCo_QueryRewrite | <Task description: joint learning of coreference resolution and query rewrite>
Given an ongoing dialogue between a user and a dialogue assistant, the model is required to predict, for the user query, both the coreference links between the query and the dialogue context and a self-contained rewritten user query that is independent of the dialogue context.
<Dataset>
The MuDoCo dataset is a public dataset that contains 7.5k task-oriented multi-turn dialogues across 6 domains (calling, messaging, music, news, reminders, weather). Each dialogue turn is annotated with coreference links (links field). Please refer to the paper of the MuDoCo dataset for more details. On top of the MuDoCo dataset, we annotate the query rewrite for each utterance, including both user and system turns. More details are provided at https://github.com/apple/ml-cread.
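As a purely illustrative example of one annotated turn (only the links field name comes from the description above; every other field name and value here is hypothetical):
```python
# Hypothetical example of a single annotated turn; not the actual file schema.
turn = {
    "utterance": "Play his latest album.",
    "links": [{"mention": "his", "antecedent": "John Lennon"}],  # coreference into context
    "rewrite": "Play John Lennon's latest album.",               # self-contained rewrite
}
```
| Provide a detailed description of the following dataset: MuDoCo_QueryRewrite |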
VerbCL | **VerbCL** is a dataset that consists of the citation graph of court opinions, which cite previously published court opinions in support of their arguments. In particular, it focuses on the verbatim quotes, i.e., where the text of the original opinion is directly reused.
**VerbCL** is derived from CourtListener and introduces the task of highlight extraction as a single-document summarization task based on the citation graph. | Provide a detailed description of the following dataset: VerbCL |
BugRepo | BugRepo maintains a collection of bug reports that are publicly available for research purposes. Bug reports are a main data source for facilitating NLP-based research in software engineering. We categorize the datasets into the following research directions. | Provide a detailed description of the following dataset: BugRepo |
THRED | This is a two-hop relation extraction dataset derived from the WikiHop dataset [1].
[1] Johannes Welbl, Pontus Stenetorp and Sebastian Riedel. Constructing Datasets for Multi-hop Reading Comprehension Across Documents. TACL, 2018. | Provide a detailed description of the following dataset: THRED |
HiXray | **HiXray** is a high-quality X-ray security inspection image dataset containing 102,928 common prohibited items in 8 categories. It was gathered from real-world airport security inspections and annotated by professional security inspectors. | Provide a detailed description of the following dataset: HiXray |
Invisible Mobile Keyboard Dataset | **Invisible Mobile Keyboard Dataset** contains user initials, age, type of mobile device, screen size, time taken to type each phrase, and annotations of the typed phrases with the coordinates (x and y) of each typed position. The collected dataset is the first and only dataset for the novel IMK decoding task. | Provide a detailed description of the following dataset: Invisible Mobile Keyboard Dataset |
MSDA | * 5 domains: synthetic domain, document domain, street view domain, handwritten domain, and car license domain
* over five million images | Provide a detailed description of the following dataset: MSDA |
MAPS | MAPS – standing for MIDI Aligned Piano Sounds – is a database of MIDI-annotated piano recordings. MAPS has been designed to be released to the music information retrieval research community, especially for the development and evaluation of algorithms for single-pitch or multi-pitch estimation and automatic transcription of music. It is composed of isolated notes, random-pitch chords, usual musical chords, and pieces of music. The database provides a large amount of sounds obtained in various recording conditions. | Provide a detailed description of the following dataset: MAPS |
LLVIP | * Visible-infrared Paired Dataset for Low-light Vision
* 30976 images (15488 pairs)
* 24 dark scenes, 2 daytime scenes
* Support for image-to-image translation (visible to infrared, or infrared to visible), visible and infrared image fusion, low-light pedestrian detection, and infrared pedestrian detection
* (The original image and video pairs (before registration) of LLVIP are also released!) | Provide a detailed description of the following dataset: LLVIP |
VisEvent | **VisEvent** (Visible-Event benchmark) is a dataset constructed for the evaluation of tracking that combines visible and event cameras. VisEvent features:
Large-scale: 820 video sequences (RGB video + event flows) containing 371,128 frames, with 500/320 sequences for training/testing, respectively;
High-quality Dense Annotation: Manual annotation with careful inspection in each frame;
Multiple-baseline: Dual-modality SOTA trackers.
Image Source: [VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows](/paper/visevent-reliable-object-tracking-via) | Provide a detailed description of the following dataset: VisEvent |
FLUE | FLUE is a French Language Understanding Evaluation benchmark. It consists of 5 tasks: Text Classification, Paraphrasing, Natural Language Inference, Constituency Parsing and Part-of-Speech Tagging, and Word Sense Disambiguation. | Provide a detailed description of the following dataset: FLUE |
Images from camera traps in the Jura and Ain counties (France) | This dataset contains images taken from camera traps set up in the Jura and Ain counties in France. We use this dataset to illustrate the training of a deep learning algorithm with application to animal species identification. See more at https://github.com/oliviergimenez/computo-deeplearning-occupany-lynx. | Provide a detailed description of the following dataset: Images from camera traps in the Jura and Ain counties (France) |
MAST | A new data consolidation called the Multi-Attributed and Structured Text-to-face (MAST) dataset. The motivation is to have a large corpus of high-quality face images with fine-grained, attribute-focused annotations. This combines the benefits of the attribute-oriented approach with the semantics of a textual description. | Provide a detailed description of the following dataset: MAST |
FlickrStyle10K | FlickrStyle10K is collected and built on Flickr30K image caption dataset. The original FlickrStyle10K dataset has 10,000 pairs of images and stylized captions including humorous and romantic styles. However, only 7,000 pairs from the official training set are now publicly accessible. The dataset can be downloaded via https://zhegan27.github.io/Papers/FlickrStyle_v0.9.zip | Provide a detailed description of the following dataset: FlickrStyle10K |
PEM Fuel Cell Dataset | This dataset is about Nafion 112 membrane standard tests and MEA activation tests of a PEM fuel cell under various operating conditions. The dataset includes two general electrochemical analysis methods, polarization and impedance curves. The effects of different H2/O2 gas pressures, different voltages, and various humidity conditions are considered in several steps. The behavior of the PEM fuel cell during distinct operating-condition tests, the activation procedure, and the operating conditions before and after activation can be concluded from the data. In the polarization curves, voltage and power density change as a function of H2/O2 flows and relative humidity. The resistance of the fuel cell's equivalent circuit can be calculated from the impedance data. Thus, the experimental response of the cell is evident in the presented data, which is useful for in-depth analysis, simulation, and material-performance investigation in PEM fuel cell research. | Provide a detailed description of the following dataset: PEM Fuel Cell Dataset |
KITTI MOTS | The Multi-Object Tracking and Segmentation (MOTS) benchmark [2] consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation 2012 and extends the annotations to the MOTS task; to this end, we added dense pixel-wise segmentation labels for every object. We evaluate submitted results using the metrics HOTA, CLEAR MOT, and MT/PT/ML, and rank methods by HOTA [1]. Our development kit and GitHub evaluation code provide details about the data format as well as utility functions for reading and writing the label files (adapted for the segmentation case). Evaluation is performed using the code from the [TrackEval repository](https://github.com/JonathonLuiten/TrackEval).
[1] J. Luiten, A. Os̆ep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: [HOTA: A Higher Order Metric for Evaluating Multi-object Tracking.](https://link.springer.com/article/10.1007/s11263-020-01375-2) IJCV 2020.
[2] P. Voigtlaender, M. Krause, A. Os̆ep, J. Luiten, B. Sekar, A. Geiger, B. Leibe: [MOTS: Multi-Object Tracking and Segmentation.](https://arxiv.org/pdf/1902.03604.pdf) CVPR 2019. | Provide a detailed description of the following dataset: KITTI MOTS |
OSLD | The Open Set Logo Detection Dataset (OSLD) is a dataset of eCommerce product images with associated brand logo images. It is released under a Creative Commons (CC BY-NC 4.0) license to promote research in open set logo detection, and can be used only for research purposes. The dataset contains:
- 20K eCommerce product images, with logo bounding box annotations
- 12.1K logo classes with 20.8K canonical logo images
- Product image logo bounding box to canonical logo image match pair annotations
Image source: [https://arxiv.org/pdf/1911.07440.pdf](https://arxiv.org/pdf/1911.07440.pdf) | Provide a detailed description of the following dataset: OSLD |
ESPADA | We present a new aerial image dataset, named ESPADA, intended for the training of deep neural networks for depth estimation from a single aerial image. Given the difficulty of creating aerial image datasets containing pairs of chromatic images and their corresponding depth images, simulators such as AirSim have been proposed to generate synthetic images from photorealistic scenes. The latter enables the generation of thousands of images that can be used to train and evaluate neural models. However, we argue that synthetic photorealistic aerial image datasets can be improved by adding images generated from photogrammetric models imported into the simulator, thus enabling a less artificial generation of both chromatic and depth images. To assess the quality of these images, we compare the performance of 4 deep neural networks whose pre-trained models and code for re-training are publicly available. We also use ORB-SLAM, in its RGB-D version, to indirectly assess the estimated depth images. To accomplish this, chromatic images from 3 aerial videos and their depth images, estimated with the networks trained with ESPADA, are fed into ORB-SLAM. The estimated camera pose is compared against the trajectory retrieved from the GPS flight trajectory. Our results indicate that images generated from photogrammetric models improve the performance of depth estimation from a single aerial image. | Provide a detailed description of the following dataset: ESPADA |
AP-10K | AP-10K is the first large-scale benchmark for general animal pose estimation, created to facilitate research in animal pose estimation. AP-10K consists of 10,015 images collected and filtered from 23 animal families and 60 species following the taxonomic rank, with high-quality keypoint annotations labeled and checked manually. | Provide a detailed description of the following dataset: AP-10K |
N15News | N15News is a large-scale multimodal news dataset comprising 200K image-text pairs and 15 categories, exceeding previous news datasets in both the number of categories and the number of samples.
Image source: [https://arxiv.org/pdf/2108.13327v1.pdf](https://arxiv.org/pdf/2108.13327v1.pdf) | Provide a detailed description of the following dataset: N15News |
Cats and Dogs | A large set of images of cats and dogs.
Homepage: https://www.microsoft.com/en-us/download/details.aspx?id=54765 | Provide a detailed description of the following dataset: Cats and Dogs |
GESTURES | This is the dataset to support the paper:
Fernando Pérez-García et al., 2021, Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures.
The paper has been accepted for publication at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
A preprint is available on arXiv: https://arxiv.org/abs/2106.12014
Contents:
1) A CSV file "seizures.csv" with the following fields:
- Subject: subject number
- Seizure: seizure number
- OnsetClonic: annotation marking the onset of the clonic phase
- GTCS: whether the seizure generalises
- Discard: whether one (Large, Small), none (No) or both (Yes) views were discarded for training.
2) A folder "features_fpc_8_fps_15" containing two folders per seizure.
The folders contain features extracted from all possible snippets from the small (S) and large (L) views. The snippets were 8 frames long and downsampled to 15 frames per second. The features are in ".pth" format and can be loaded using PyTorch: https://pytorch.org/docs/stable/generated/torch.load.html
The last number of the file name indicates the frame index. For example, the file "006_01_L_000015.pth" corresponds to the features extracted from a snippet starting one second into the seizure video. Each file contains 512 numbers representing the deep features extracted from the corresponding snippet.
3) A description file, "README.txt"
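A minimal sketch of loading one feature file with PyTorch (the per-seizure subfolder in the path is an assumption; the file name follows the example above):
```python
# Load the deep features of one 8-frame snippet; the path layout is illustrative.
import torch

features = torch.load("features_fpc_8_fps_15/006_01_L/006_01_L_000015.pth")
print(features.shape)  # expected: a 512-dimensional feature vector
```
| Provide a detailed description of the following dataset: GESTURES |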
ASR-GLUE | The **ASR-GLUE** benchmark is a collection of 6 different NLU (Natural Language Understanding) tasks for evaluating the performance of models under automatic speech recognition (ASR) error across 3 different levels of background noise and 6 speakers with various voice characteristics. | Provide a detailed description of the following dataset: ASR-GLUE |
SHIFT15M | **SHIFT15M** is a dataset that can be used to properly evaluate models in situations where the distribution of data changes between training and testing.
The SHIFT15M dataset has several good properties: (i) Multiobjective: each instance in the dataset has several numerical values that can be used as target variables. (ii) Large-scale: the SHIFT15M dataset consists of 15 million fashion images. (iii) Coverage of types of dataset shifts: SHIFT15M contains multiple dataset shift problem settings (e.g., covariate shift or target shift). SHIFT15M also enables the performance evaluation of the model under various magnitudes of dataset shifts by switching the magnitude. | Provide a detailed description of the following dataset: SHIFT15M |
VesselGraph | **VesselGraph** is a dataset of whole-brain vessel graphs based on specific imaging protocols. Specifically, vascular graphs are extracted using a refined graph extraction scheme leveraging the volume rendering engine Voreen and provided in an accessible and adaptable form through the OGB and PyTorch Geometric dataloaders. | Provide a detailed description of the following dataset: VesselGraph |
MEDIC | **MEDIC** is a large social media image classification dataset for humanitarian response consisting of 71,198 images that address four different tasks in a multi-task learning setup. It consists of data from several sources such as [CrisisMMD](crisismmd), AIDR, and the Damage Multimodal Dataset (DMD). | Provide a detailed description of the following dataset: MEDIC |
CrossedWires | **CrossedWires** is a living dataset of models and hyperparameters that exposes semantic differences between two popular deep learning frameworks: PyTorch and TensorFlow.
The CrossedWires dataset currently consists of models trained on CIFAR10 images using three different computer vision architectures: VGG16, ResNet50 and DenseNet121 across a large hyperparameter space. Using hyperparameter optimization, each of the three models was trained on 400 sets of hyperparameters suggested by the HyperSpace search algorithm.
The CrossedWires dataset includes PyTorch and TensorFlow models with test accuracies as different as 0.681 on syntactically equivalent models with identical hyperparameter choices. The 340 GB dataset and benchmarks presented here include the performance statistics, training curves, and model weights for all 1200 hyperparameter choices, resulting in 2400 total models. The CrossedWires dataset provides an opportunity to study semantic differences between syntactically equivalent models across popular deep learning frameworks. | Provide a detailed description of the following dataset: CrossedWires |
HeadlineCause | **HeadlineCause** is a dataset for detecting implicit causal relations between pairs of news headlines. The dataset includes over 5000 headline pairs from English news and over 9000 headline pairs from Russian news labeled through crowdsourcing. The pairs vary from totally unrelated or belonging to the same general topic to the ones including causation and refutation relations. | Provide a detailed description of the following dataset: HeadlineCause |
TIMo | TIMo (Time-of-Flight Indoor Monitoring) is a dataset of infrared and depth videos intended for use in anomaly detection and person detection/people counting. It features more than 1,500 sequences for anomaly detection, which sum up to more than 500,000 individual frames. For person detection, the dataset contains more than 240 sequences. The data was captured using a Microsoft Azure Kinect RGB-D camera. In addition, we provide annotations of anomalous frame ranges for use with anomaly detection, and bounding boxes and segmentation masks for use with person detection. The data was captured partly from a tilted view and partly from a top-down perspective. | Provide a detailed description of the following dataset: TIMo |
Lyra | Lyra is a dataset for code generation that consists of Python code with embedded SQL. This dataset contains 2,000 carefully annotated database manipulation programs from real usage projects. Each program is paired with both a Chinese comment and an English comment. | Provide a detailed description of the following dataset: Lyra |
BSARD | The **Belgian Statutory Article Retrieval Dataset (BSARD)** is a French native corpus for studying *statutory article retrieval*. BSARD consists of more than 22,600 statutory articles from Belgian law and about 1,100 legal questions posed by Belgian citizens and labeled by experienced jurists with relevant articles from the corpus. | Provide a detailed description of the following dataset: BSARD |
ReadingBank | ReadingBank is a benchmark dataset for reading order detection built with weak supervision from Word documents. It contains 500K document images covering a wide range of document types, together with the corresponding reading order information. | Provide a detailed description of the following dataset: ReadingBank |
WikiNLDB | WikiNLDB is a novel dataset for training Natural Language Databases (NLDBs) which is generated by transforming structured data from Wikidata into natural language facts and queries.
Image source: [https://arxiv.org/pdf/2106.01074v1.pdf](https://arxiv.org/pdf/2106.01074v1.pdf) | Provide a detailed description of the following dataset: WikiNLDB |
UQ NIDS Datasets | A comprehensive dataset merging all the aforementioned datasets. The newly published dataset demonstrates the benefits of shared dataset feature sets, which make the merging of multiple smaller datasets possible. This will eventually lead to bigger and more universal NIDS datasets containing flows from multiple network setups and different attack settings. An additional label feature identifies the original dataset of each flow; this can be used to compare the same attack scenarios conducted over two or more different test-bed networks. The attack categories have been modified to combine all parent categories: attacks named DoS attacks-Hulk, DoS attacks-SlowHTTPTest, DoS attacks-GoldenEye and DoS attacks-Slowloris have been renamed to the parent DoS category; attacks named DDOS attack-LOIC-UDP, DDOS attack-HOIC and DDoS attacks-LOIC-HTTP have been renamed to DDoS; attacks named FTP-BruteForce, SSH-Bruteforce, Brute Force -Web and Brute Force -XSS have been combined into a brute-force category; finally, SQL Injection attacks have been included in the injection attacks category. The NF-UQ-NIDS dataset has a total of 11,994,893 records, of which 9,208,048 (76.77%) are benign flows and 2,786,845 (23.23%) are attacks. The table below lists the distribution of the final attack categories.
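As an illustration of the category merging described above, a hedged sketch (the attack-name spellings are taken from the text; the CSV file name and label column are assumptions):
```python
# Collapse fine-grained attack names into their parent categories.
import pandas as pd

PARENT_CATEGORY = {
    "DoS attacks-Hulk": "DoS", "DoS attacks-SlowHTTPTest": "DoS",
    "DoS attacks-GoldenEye": "DoS", "DoS attacks-Slowloris": "DoS",
    "DDOS attack-LOIC-UDP": "DDoS", "DDOS attack-HOIC": "DDoS",
    "DDoS attacks-LOIC-HTTP": "DDoS",
    "FTP-BruteForce": "Brute Force", "SSH-Bruteforce": "Brute Force",
    "Brute Force -Web": "Brute Force", "Brute Force -XSS": "Brute Force",
    "SQL Injection": "Injection",
}
df = pd.read_csv("NF-UQ-NIDS.csv")                    # hypothetical file name
df["Attack"] = df["Attack"].replace(PARENT_CATEGORY)  # label column name assumed
```
| Provide a detailed description of the following dataset: UQ NIDS Datasets |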
UQ NetFlow NIDS v1 | A comprehensive dataset merging all the aforementioned datasets. The newly published dataset demonstrates the benefits of shared dataset feature sets, which make the merging of multiple smaller datasets possible. This will eventually lead to bigger and more universal NIDS datasets containing flows from multiple network setups and different attack settings. An additional label feature identifies the original dataset of each flow; this can be used to compare the same attack scenarios conducted over two or more different test-bed networks. The attack categories have been modified to combine all parent categories: attacks named DoS attacks-Hulk, DoS attacks-SlowHTTPTest, DoS attacks-GoldenEye and DoS attacks-Slowloris have been renamed to the parent DoS category; attacks named DDOS attack-LOIC-UDP, DDOS attack-HOIC and DDoS attacks-LOIC-HTTP have been renamed to DDoS; attacks named FTP-BruteForce, SSH-Bruteforce, Brute Force -Web and Brute Force -XSS have been combined into a brute-force category; finally, SQL Injection attacks have been included in the injection attacks category. The NF-UQ-NIDS dataset has a total of 11,994,893 records, of which 9,208,048 (76.77%) are benign flows and 2,786,845 (23.23%) are attacks. The table below lists the distribution of the final attack categories. | Provide a detailed description of the following dataset: UQ NetFlow NIDS v1 |
AMFDS | # Arabic Multi Fonts Dataset
A multi-word multi-font Arabic word-image dataset.
AMFDS is a dataset of Arabic word images.
The dataset was generated using the TextImagesToolkit
https://github.com/msfasha/TextImagesToolkit.
The dataset is comprised of a number of binary files and text files.
The binary files store all the images in binary format.
The text files include information about each image's word and the location of that image in the binary file.
The binary file format is suitable for transferring images to the cloud and allows a faster loading process, which is suitable for a large number of images.
More information about the dataset can be found at:
https://github.com/msfasha/Arabic-Multi-Fonts-Dataset/blob/main/README.md | Provide a detailed description of the following dataset: AMFDS |
VideoMatte240K | VideoMatte240K consists of 484 high-resolution green screen videos from which a total of 240,709 unique frames of alpha mattes and foregrounds were generated with the chroma-key software Adobe After Effects. The videos were purchased as stock footage or found as royalty-free materials online. 384 videos are in 4K resolution and 100 are in HD. The videos are split 479:5 to form the train and validation sets. The dataset covers a vast variety of human subjects, clothing, and poses, which is helpful for training robust models.
Image source: [https://arxiv.org/pdf/2012.07810v1.pdf](https://arxiv.org/pdf/2012.07810v1.pdf) | Provide a detailed description of the following dataset: VideoMatte240K |
PhotoMatte85 | PhotoMatte85 contains 85 portrait images. The dataset was donated to us by a third-party commercial company. The footage was shot with professional studio lighting, with the subjects in standard portrait poses. We provide the alpha mattes and foreground images extracted from the green screen photos. Due to license issues, we will not release the other 13K images used in training. | Provide a detailed description of the following dataset: PhotoMatte85 |
Phy-Q | **Phy-Q** is a benchmark that requires an agent to reason about physical scenarios and take an action accordingly. Inspired by the physical knowledge acquired in infancy and the capabilities required for robots to operate in real-world environments, the authors identify 15 essential physical scenarios. For each scenario, a wide variety of distinct task templates are created, and all the task templates within the same scenario can be solved by using one specific physical rule.
By having such a design, two distinct levels of generalization can be evaluated, namely local generalization and broad generalization. The benchmark gives a Phy-Q (physical reasoning quotient) score that reflects the physical reasoning ability of the agents. | Provide a detailed description of the following dataset: Phy-Q |
UQ NIDS Datasets (FlowMeter Format) | The CICFlowMeter format of the datasets is made up of 83 features. | Provide a detailed description of the following dataset: UQ NIDS Datasets (FlowMeter Format) |
CICIDS2018 | CICIDS2018 includes seven different attack scenarios: brute-force, Heartbleed, botnet, DoS, DDoS, web attacks, and infiltration of the network from inside. The attacking infrastructure includes 50 machines, and the victim organization has 5 departments with 420 machines and 30 servers. The dataset includes the captured network traffic and system logs of each machine, along with 80 features extracted from the captured traffic using CICFlowMeter-V3. | Provide a detailed description of the following dataset: CICIDS2018 |
PCC | The Potsdam Commentary Corpus (PCC) is a corpus of 220 German newspaper commentaries (2,900 sentences, 44,000 tokens) taken from the online issues of the Märkische Allgemeine Zeitung (MAZ subcorpus) and Tagesspiegel (ProCon subcorpus), annotated with a range of different types of linguistic information.
The central subcorpus that we are making publicly available consists of 176 MAZ texts, which are annotated with
* Sentence Syntax
* Coreference
* Discourse Structure (RST & PDTB)
* Aboutness topics | Provide a detailed description of the following dataset: PCC |
MASC | The Manually Annotated Sub-Corpus (MASC) consists of approximately 500,000 words of contemporary American English written and spoken data drawn from the Open American National Corpus (OANC).
All of MASC includes manually validated annotations for sentence boundaries, token, lemma and POS; noun and verb chunks; named entities (person, location, organization, date); Penn Treebank syntax; coreference; and discourse structure.
Additional manually produced or validated annotations have been produced by the MASC project for portions of the sub-corpus, including full-text annotation for FrameNet frame elements and a 100K+ sentence corpus with WordNet 3.1 sense tags, of which one-tenth are also annotated for FrameNet frame elements.
Annotations of all or portions of the sub-corpus for a wide variety of other linguistic phenomena have been contributed by other projects, including PropBank, TimeBank, Pittsburgh opinion, and several others.
Unlike most freely available corpora that include a wide variety of linguistic annotations, MASC contains a balanced selection of texts from a broad range of genres. | Provide a detailed description of the following dataset: MASC |
EVIL | To automatically generate Python and assembly programs used for security exploits, we curated a large dataset for feeding NMT techniques. A sample in the dataset consists of a snippet of code from these exploits and their corresponding description in the English language. We collected exploits from publicly available databases (exploitdb, shellstorm), public repositories (e.g., GitHub), and programming guidelines. In particular, we focused on exploits targeting Linux, the most common OS for security-critical network services, running on IA-32 (i.e., the 32-bit version of the x86 Intel Architecture). The dataset is stored in the folder EVIL/datasets and consists of two parts: i) Encoders: a Python dataset, which contains Python code used by exploits to encode the shellcode; ii) Decoders: an assembly dataset, which includes shellcode and decoders to revert the encoding. | Provide a detailed description of the following dataset: EVIL |
FinQA | FinQA is a new large-scale dataset with Question-Answering pairs over financial reports, written by financial experts. The dataset contains 8,281 financial QA pairs, along with their numerical reasoning processes. | Provide a detailed description of the following dataset: FinQA |
Common Objects in 3D | Common Objects in 3D is a large-scale dataset with real multi-view images of object categories annotated with camera poses and ground truth 3D point clouds. The dataset contains a total of 1.5 million frames from nearly 19,000 videos capturing objects from 50 MS-COCO categories and, as such, it is significantly larger than alternatives both in terms of the number of categories and objects. | Provide a detailed description of the following dataset: Common Objects in 3D |
Creative Style Responses | Raw responses of ~10,000 people to a simple survey of creative habits. The numeric responses are an ordinal scale 1-5 for questions that ask about 2 contrasting creative habits/preferences along a given habit dimension. The endpoints of the scale are in the name of the column. The Discipline field is a 'check all that apply' question. These tags were mapped to 3 broad disciplines in the paper. See 'CreativeStyle_Responses_Tagged_Cleaned.xlsx' for processed data where creative habits are assigned as tags. | Provide a detailed description of the following dataset: Creative Style Responses |
Discipline Mapping | Mapping of detailed discipline tags to one of three broader disciplines (Arts, Science, Business) | Provide a detailed description of the following dataset: Discipline Mapping |
Gender Mapping | Mapping of free text gender entries to one of three genders: Male, Female, Non-Binary. | Provide a detailed description of the following dataset: Gender Mapping |
Shadow Accrual Maps | Large-scale shadows from buildings in a city play an important role in determining the environmental quality of public spaces. They can be both beneficial, such as for pedestrians during summer, and detrimental, by impacting vegetation and by blocking direct sunlight. Determining the effects of shadows requires the accumulation of shadows over time across different periods in a year. In our paper Shadow Accrual Maps: Efficient Accumulation of City-Scale Shadows over Time, we present a simple yet efficient class of approach that uses the properties of sun movement to track the changing position of shadows within a fixed time interval. This repository presents the computed shadow information for New York City, Chicago, Los Angeles, Boston and Washington DC. | Provide a detailed description of the following dataset: Shadow Accrual Maps |
IfAct | We consider the task of **identifying human actions visible in online videos**.
We focus on the widely spread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify if actions mentioned in the speech description of a video are visually present.
We introduce a novel dataset, IfAct, consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as **manual annotations of whether the actions are visible or not**. The dataset includes a total of **14,769 actions, 4,340 of which are visible**. | Provide a detailed description of the following dataset: IfAct |
Creative Habit Tags | Survey responses where all creative habit ordinal responses were converted to Creative Habit Tags. These tags were used in the analysis to build a network of people linked if they share similar creative habit sets, or a network of creative habits linked if they co-occur in similar sets of people. | Provide a detailed description of the following dataset: Creative Habit Tags |
Business Matching | This is a proprietary dataset from a large internet services company of ranked pairs of relevant and irrelevant businesses for different queries, for a total of 17,069 pairs. How well a query matches a candidate is represented by 41 features. | Provide a detailed description of the following dataset: Business Matching |
Wiki Talk Page Comments | This public dataset contains 127,820 comments from Wikipedia Talk Pages, labeled with whether or not they are toxic. | Provide a detailed description of the following dataset: Wiki Talk Page Comments |
W3C Experts | This is a subset of the TREC 2005 enterprise track data, and consists of 48 topics and 200 candidates per topic, with each candidate labeled as an expert or non-expert for the topic. The task is to rank the candidates based on their expertise on a topic, using a corpus of mailing lists from the World Wide Web Consortium (W3C). This is an application where the unconstrained algorithm does better for the minority protected group. | Provide a detailed description of the following dataset: W3C Experts |
SMAC | The StarCraft Multi-Agent Challenge (SMAC) is a benchmark that provides elements of partial observability, challenging dynamics, and high-dimensional observation spaces. SMAC is built using the StarCraft II game engine, creating a testbed for research in cooperative MARL where each game unit is an independent RL agent. | Provide a detailed description of the following dataset: SMAC |
Source Code Tagger Training Set | # Ensemble Tagger Training and Testing Set
This data includes two files: the training set used to create the SCANL Ensemble tagger [1] and the "unseen" testing set that includes words from systems that are not available in the training set. These are derived from a prior dataset of [Grammar Patterns](https://github.com/SCANL/datasets/tree/master/grammar_patterns_data), described in a different paper [2]. Within each of these CSV files, you'll find several columns. We explain these columns below:
1. Type (only in training set) - Type (or return type) of the identifier to which the current word belongs.
2. Identifier - The full identifier from which the current word was split.
3. Grammar Pattern - The sequence of part-of-speech tags generated by splitting the identifier into words and annotating them with part-of-speech tags.
4. Word - The current word; derived by splitting the corresponding identifier.
5. SWUM annotation - The annotation that the SWUM POS tagger applied to a given word.
6. POSSE annotation - The annotation that the POSSE POS tagger applied to a given word.
7. Stanford annotation - The annotation that the Stanford POS tagger applied to a given word.
8. Flair annotation - The annotation that the FLAIR POS tagger applied to a given word.
9. Position - The position of a given word within its original identifier. For example, given an identifier: GetXMLReaderHandler, Get is in position 1, XML is in position 2, Reader is in position 3 and Handler is in position 4.
10. Identifier size (max position) - The length, in words, of the identifier of which the word was originally part.
11. Normalized position - We normalized the position metric described above such that the first word in the identifier is in position 1, all middle words are in position 2, and the last word is in position 3. For example, given an identifier: GetXMLReaderHandler, Get is in position 1, XML is in position 2, Reader is in position 2 and Handler is in position 3. The reason for this feature is to mitigate the sometimes-negative effect of very long identifiers [2].
12. [Context](#context) - The dataset contains five categories of identifier name: function, parameter, attribute, declaration, and class. We provide the category to which the given identifier belongs as one of the features to allow the ensemble to learn patterns that are more pervasive for certain identifier types versus others. For example, function identifiers contain verbs at a higher rate than other types of identifiers [2].
13. Correct - The correct part-of-speech tag for the current word.
14. System - System in which the current word was found.
15. Identifier Code - Each identifier has a unique number. Each word that has the same number is a part of the same identifier. For example, you can concatenate each word with a code of 0 to recreate the original identifier.
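As a brief illustration of how these columns fit together, the sketch below reconstructs each original identifier from its split words (the CSV file name and exact header spellings are assumptions):
```python
# Rebuild identifiers by grouping words that share an "Identifier Code" (item 15),
# ordered by their "Position" within the identifier (item 9).
import pandas as pd

df = pd.read_csv("training_set.csv")  # hypothetical file name
identifiers = (
    df.sort_values(["Identifier Code", "Position"])
      .groupby("Identifier Code")["Word"]
      .agg("".join)
)
print(identifiers.head())  # e.g. code 0 -> "GetXMLReaderHandler"
```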
# Context
The numbers under the **context** feature represent the following categories (number -> category):
1. attribute
2. class
3. declaration
4. function
5. parameter
# Best Features
We found [1] that the best features, of the features described above, were
1. SWUM
2. POSSE
3. Stanford
4. Normalized position
5. Context
# Tagset
The tagset that we use is a subset of Penn treebank. Each of our annotations and an example can be found below. Further examples and definitions can be found in the paper [1]
| Abbreviation | Expanded Form | Examples |
|--------------|-----------------------------------------|-----------------------------------------------------------------|
| N | noun | Disneyland, shoe, faucet, mother, bedroom |
| DT | determiner | the, this, that, these, those, which |
| CJ | conjunction | and, for, nor, but, or, yet, so |
| P | preposition | behind, in front of, at, under, beside, above, beneath, despite |
| NPL | noun plural | streets, cities, cars, people, lists, items, elements. |
| NM | noun modifier (adjective) | red, cold, hot, scary, beautiful, happy, faster, small |
| NM | noun modifier (noun-adjunct *italicized*) | *employee*Name, *file*Path, *font*Size, *user*Id |
| V | verb | run, jump, drive, spin |
| VM | verb modifier (adverb) | very, loudly, seriously, impatiently, badly |
| PR | pronoun | she, he, her, him, it, we, us, they, them, I, me, you |
| D | digit | 1, 2, 10, 4.12, 0xAF |
| PRE | preamble (e.g., Hungarian) | Gimp, GLEW, GL, G, p_, m_, b_ |
# Word of Caution
Flair and Stanford recognize a larger number of verb conjugations (e.g., VBZ, VBD) than the ensemble, POSSE, and SWUM. We left these conjugations in just in case someone wants to use them. If you are not interested in using these conjugations, you should normalize them to just V, in line with our [tagset](#tagset).
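A minimal sketch of that normalization (the tag set below is the standard Penn Treebank verb conjugations, an assumption about which tags appear in the files):
```python
# Collapse Penn Treebank verb conjugations (e.g., VBZ, VBD) to the coarse V tag.
PTB_VERB_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}

def normalize_tag(tag: str) -> str:
    return "V" if tag in PTB_VERB_TAGS else tag
```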
# Identifier Naming Structure Catalogue
We have put together a catalogue of [identifier naming structures](https://github.com/SCANL/identifier_name_structure_catalogue) in source code. This catalogue explains a lot more about why this work is important, how we are using the ensemble tagger and why the tagset looks the way it does.
# The actual tagger implementation
You can find the tagger that was trained using this data here: https://github.com/SCANL/ensemble_tagger
# Please cite the paper!
1. C. D. Newman, M. J. Decker, R. S. AlSuhaibani, A. Peruma, S. Mohapatra, T. Vishoi, M. Zampieri, M. W. Mkaouer, T. J. Sheldon, and E. Hill, "An Ensemble Approach for Annotating Source Code Identifiers with Part-of-speech Tags," in IEEE Transactions on Software Engineering, doi: 10.1109/TSE.2021.3098242.
2. Christian D. Newman, Reem S. Alsuhaibani, Michael J. Decker, Anthony Peruma, Dishant Kaushik, Mohamed Wiem Mkaouer, Emily Hill,
On the generation, structure, and semantics of grammar patterns in source code identifiers, Journal of Systems and Software, 2020, 110740, ISSN 0164-1212, https://doi.org/10.1016/j.jss.2020.110740. (http://www.sciencedirect.com/science/article/pii/S0164121220301680)
# Interested in our research?
**Check out https://scanl.org/** | Provide a detailed description of the following dataset: Source Code Tagger Training Set |
VIVOS | VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for the Automatic Speech Recognition task.
The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan.
We publish this corpus in the hope of attracting more scientists to solve Vietnamese speech recognition problems. The corpus should only be used for academic purposes. | Provide a detailed description of the following dataset: VIVOS |
catbAbI QA-mode | We aim to improve the bAbI benchmark as a means of developing intelligent dialogue agents. To this end, we propose concatenated-bAbI (catbAbI): an infinite sequence of bAbI stories. catbAbI is generated from the bAbI dataset, and during training a random sample/story from any task is drawn without replacement and concatenated to the ongoing story. The preprocessing for catbAbI addresses several issues: it removes the supporting facts, leaves the questions embedded in the story, inserts the correct answer after the question mark, and tokenises the full sample into a single sequence of words. As such, catbAbI is designed to be trained in an autoregressive way and is analogous to closed-book question answering.
catbAbI models can be trained in two different ways: language modelling mode (LM-mode) or question-answering mode (QA-mode). In LM-mode, the catbAbI models are trained like autoregressive word-level language models. In QA-mode, the catbAbI models are only trained to predict the tokens that are answers to questions—making it more similar to regular bAbI. QA-mode is simply implemented by masking out losses on non-answer predictions. In both training modes, the model performance is solely measured by its accuracy and perplexity when answering the questions.
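A minimal sketch (not the authors' implementation) of that loss masking:
```python
# QA-mode objective: cross-entropy computed only on answer-token positions.
import torch
import torch.nn.functional as F

def qa_mode_loss(logits: torch.Tensor, targets: torch.Tensor,
                 answer_mask: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab); targets: (seq_len,) token ids;
    answer_mask: (seq_len,) bool, True where the target is an answer token."""
    per_token = F.cross_entropy(logits, targets, reduction="none")
    return (per_token * answer_mask).sum() / answer_mask.sum().clamp(min=1)
```
| Provide a detailed description of the following dataset: catbAbI QA-mode |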
catbAbI LM-mode | We aim to improve the bAbI benchmark as a means of developing intelligent dialogue agents. To this end, we propose concatenated-bAbI (catbAbI): an infinite sequence of bAbI stories. catbAbI is generated from the bAbI dataset, and during training a random sample/story from any task is drawn without replacement and concatenated to the ongoing story. The preprocessing for catbAbI addresses several issues: it removes the supporting facts, leaves the questions embedded in the story, inserts the correct answer after the question mark, and tokenises the full sample into a single sequence of words. As such, catbAbI is designed to be trained in an autoregressive way and is analogous to closed-book question answering.
catbAbI models can be trained in two different ways: language modelling mode (LM-mode) or question-answering mode (QA-mode). In LM-mode, the catbAbI models are trained like autoregressive word-level language models. In QA-mode, the catbAbI models are only trained to predict the tokens that are answers to questions—making it more similar to regular bAbI. QA-mode is simply implemented by masking out losses on non-answer predictions. In both training modes, the model performance is solely measured by its accuracy and perplexity when answering the questions. | Provide a detailed description of the following dataset: catbAbI LM-mode |
Security of Alerting Authorities in the WWW: Measuring Namespaces, DNSSEC, and Web PKI | This data set includes all raw data (e.g., collected certificates) of the WWW 2021 paper "Security of Alerting Authorities in the WWW: Measuring Namespaces, DNSSEC, and Web PKI".
* Current certificates in use by AA hosts.
* CT-logged certificates used by AA hosts. | Provide a detailed description of the following dataset: Security of Alerting Authorities in the WWW: Measuring Namespaces, DNSSEC, and Web PKI |
The Rise of Certificate Transparency and Its Implications on the Internet Ecosystem | This includes all data from the ACM IMC 2018 paper "The Rise of Certificate Transparency and Its Implications on the Internet Ecosystem". | Provide a detailed description of the following dataset: The Rise of Certificate Transparency and Its Implications on the Internet Ecosystem |
SHAD3S | We introduce the SHAD3S dataset which, for a given contour representation of a mesh under a given illumination condition, provides the illumination masks on the object, a shadow mask on the ground, and its diffuse and sketch renders.
[Dataset creation code](https://github.com/bvraghav/standible) | Provide a detailed description of the following dataset: SHAD3S |
Exposure-Errors | A dataset of over 24,000 images exhibiting the broadest range of exposure values to date with a corresponding properly exposed image. | Provide a detailed description of the following dataset: Exposure-Errors |
sRGB2XYZ Dataset | The sRGB2XYZ dataset contains ~1,200 pairs of camera-rendered sRGB and the corresponding scene-referred CIE XYZ images (971 training, 50 validation, and 244 testing images). | Provide a detailed description of the following dataset: sRGB2XYZ Dataset |
Raw2raw dataset | This dataset consists of an unpaired and a paired set of images captured by two different smartphone cameras: Samsung Galaxy S9 and iPhone X. The unpaired set includes 196 images captured by each smartphone camera (392 in total). The paired set includes 115 pairs of images used for testing. In addition to this paired set, we have another small set of 22 anchor paired images. | Provide a detailed description of the following dataset: Raw2raw dataset |
Landscape Dataset | Landscape Dataset consists of landscape images collected from Flickr. | Provide a detailed description of the following dataset: Landscape Dataset |
Portrait Dataset | A portrait dataset of images collected from Flickr. | Provide a detailed description of the following dataset: Portrait Dataset |
DBFC Dataset | This dataset includes Direct Borohydride Fuel Cell (DBFC) impedance and polarization tests with Pd/C, Pt/C and Pd-decorated Ni–Co/rGO anode catalysts. Different concentrations of sodium borohydride (SBH), applied voltages, and anode catalyst loadings are considered in the data, together with an explanation of the experimental details of the electrochemical analysis. Voltage, power density and resistance of the DBFC change as a function of the weight percent of SBH (%), the applied voltage and the amount of anode catalyst loading, which are evaluated by polarization and impedance curves using an appropriate equivalent circuit of the fuel cell. The interpretation of the cell's electrochemical behavior from these data is therefore possible, which can be useful in simulation, power source investigation and in-depth analysis in DB fuel cell research. | Provide a detailed description of the following dataset: DBFC Dataset |
CREAK | A testbed for commonsense reasoning about entity knowledge, bridging fact-checking about entities with commonsense inferences.
Image source: [https://arxiv.org/pdf/2109.01653v1.pdf](https://arxiv.org/pdf/2109.01653v1.pdf) | Provide a detailed description of the following dataset: CREAK |
CameraFusion | We present a novel approach to reference-based super-resolution (RefSR) with a focus on real-world dual-camera super-resolution (DCSR).
This dataset currently consists of 143 pairs of telephoto and wide-angle images in 4K resolution captured by smartphone dual-cameras.
See our paper for more details: Dual-Camera Super-Resolution with Aligned Attention Modules. | Provide a detailed description of the following dataset: CameraFusion |
Story Cloze | Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding. | Provide a detailed description of the following dataset: Story Cloze |
MSU Shot Boundary Detection Benchmark | This is a dataset for the shot boundary detection task. The dataset contains 2 existing datasets and 19 manually marked-up open-source videos with a total length of more than 1200 minutes and 10000 scene transitions. The dataset includes different types of videos with resolutions from 360×288 to 1920×1080 in MP4 and MKV formats. Videos include samples in RGB or grayscale with FPS from 23 to 60. | Provide a detailed description of the following dataset: MSU Shot Boundary Detection Benchmark |
Real-world graphs for betweenness-centrality ranking estimation | The ground-truth betweenness centralities for the real-world graphs are provided by AlGhamdi et al. (2017) and are computed by a parallel implementation of Brandes' algorithm on a 96,000-core supercomputer. The ground-truth scores for the synthetic networks are provided by Fan et al. (2019) and are computed using the graph-tool (Peixoto, 2014) library.
The presented approach is compared to several baseline models. The performance of those models is adopted from the benchmark provided by Fan et al. (2019):
- ABRA (Riondato & Upfal, 2018): samples pairs of nodes until the desired accuracy is reached. The error tolerance λ was set to 0.01 and the probability δ to 0.1.
- RK (Riondato & Kornaropoulos, 2014): determines the number of sampled node pairs from the diameter of the network. The error tolerance and probability were set as in ABRA.
- k-BC (Pfeffer & Carley, 2012): performs only k steps of Brandes' algorithm (Brandes, 2001), with k set to 20% of the diameter of the network.
- KADABRA (Borassi & Natale, 2019): uses bidirectional BFS to sample shortest paths. The variant that computes the top-k% nodes with the highest betweenness centrality was used, with the error tolerance and probability set as in ABRA and RK.
- Node2Vec (Grover & Leskovec, 2016): uses a biased random walk to aggregate information from the neighbors. The vector representation of each node is then mapped to a ranking score by a trained MLP.
- DrBC (Fan et al., 2019): a shallow graph convolutional network that outputs a ranking score for each node by propagating information from the neighbors with a walk length of 5. | Provide a detailed description of the following dataset: Real-world graphs for betweenness-centrality ranking estimation
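The sketch below, using networkx (our choice, not the benchmark's implementation), contrasts exact Brandes betweenness with the source-sampling approximation idea behind baselines such as ABRA and RK:

```python
import networkx as nx

# Toy graph; the actual benchmark uses large real-world and synthetic networks.
G = nx.barabasi_albert_graph(1000, 5, seed=0)

exact = nx.betweenness_centrality(G)                  # Brandes' algorithm
approx = nx.betweenness_centrality(G, k=100, seed=0)  # sample 100 source nodes

# Compare the top-10 rankings produced by the exact and approximate scores.
top_exact = sorted(exact, key=exact.get, reverse=True)[:10]
top_approx = sorted(approx, key=approx.get, reverse=True)[:10]
print("top-10 overlap:", len(set(top_exact) & set(top_approx)) / 10)
```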
MOD | MOD is a large-scale open-domain multimodal dialogue dataset incorporating abundant Internet memes into utterances. The dataset consists of ∼45K Chinese conversations with ∼606K utterances. Each conversation contains about 13 utterances with about 4 Internet memes on average and each utterance equipped with an Internet meme is annotated with the corresponding emotion.
Image source: [https://arxiv.org/pdf/2109.01839v1.pdf](https://arxiv.org/pdf/2109.01839v1.pdf) | Provide a detailed description of the following dataset: MOD |
EVIL-Encoders | This dataset contains samples to generate Python code for security exploits. In order to make the dataset representative of real exploits, it includes code snippets drawn from exploits from public databases. Differing from general-purpose Python code found in previous datasets, the Python code of real exploits entails low-level operations on byte data for obfuscation purposes (i.e., to encode shellcodes). Therefore, real exploits make extensive use of Python instructions for converting data between different encodings, for performing low-level arithmetic and logical operations, and for bit-level slicing, which cannot be found in the previous general-purpose Python datasets.
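As a purely illustrative example of the byte-level encoding operations described above (our own snippet, not a sample from the dataset), a simple XOR shellcode encoder looks like this:

```python
# Hypothetical illustration: XOR-encode a byte string, as exploit encoders do
# to obfuscate shellcode (a decoder stub would invert this at run time).
shellcode = b"\x31\xc0\x50\x68\x2f\x2f\x73\x68"  # example bytes only
key = 0xAA

encoded = bytes(b ^ key for b in shellcode)
print("".join(f"\\x{b:02x}" for b in encoded))
```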
In total, we built a dataset that consists of 1,114 original samples of exploit-tailored Python snippets and their corresponding intent in the English language. These samples include complex and nested instructions, as typical of Python programming.
In order to perform more realistic training and for a fair evaluation, we left untouched the developers' original code snippets and did not decompose them. We provided English intents to describe nested instructions altogether.
In order to bootstrap the training process for the NMT model, we include in our dataset both the original, exploit-oriented snippets and snippets from a previous general-purpose Python dataset. This enables the NMT model to generate code that can mix general-purpose and exploit-oriented instructions. Among the several datasets for Python code generation, we choose the Django dataset due to its large size. This corpus contains 14,426 unique pairs of Python statements from the Django Web application framework and their corresponding description in English.
Therefore, our final dataset contains 15,540 unique pairs of Python code snippets alongside their intents in natural language. | Provide a detailed description of the following dataset: EVIL-Encoders |
EVIL-Decoders | This is an assembly dataset built on top of Shellcode_IA32, a dataset for automatically generating assembly from natural language descriptions that consists of 3,200 assembly instructions, commented in the English language, which were collected from shellcodes for IA-32 and written for the Netwide Assembler (NASM) for Linux.
In order to make the data more representative of the code that we aim to generate (i.e., complete exploits, inclusive of decoders to be delivered in the shellcode), we enriched the dataset with further samples of assembly code, drawn from the exploits that we collected from public databases. Different from the previous dataset, the new one includes assembly code from real decoders used in actual exploits. The final dataset contains 3,715 unique pairs of assembly code snippets/English intents.
To better support developers in the automatic generation of the assembly programs, we looked beyond a one-to-one mapping between natural language intents and their corresponding code.
Therefore, the dataset includes 783 multi-line snippets (~21% of the dataset), i.e., intents that generate multiple lines of assembly code separated by the newline character (\n). Each such snippet contains between 2 and 5 assembly instructions. | Provide a detailed description of the following dataset: EVIL-Decoders
Failure-Dataset-OpenStack | This failure dataset contains information on the events collected in the OpenStack cloud computing platform during three different campaigns of fault-injection experiments performed with three different workloads. | Provide a detailed description of the following dataset: Failure-Dataset-OpenStack |
SemEval-2021 Task 11: NLPContributionGraph | NLPContributionGraph was introduced as Task 11 at SemEval 2021 for the first time. The task is defined on a dataset of Natural Language Processing (NLP) scholarly articles with their contributions structured to be integrable within Knowledge Graph infrastructures such as the Open Research Knowledge Graph. The structured contribution annotations are provided as (1) Contribution sentences : a set of sentences about the contribution in the article; (2) Scientific terms and relations: a set of scientific terms and relational cue phrases extracted from the contribution sentences; and (3) Triples: semantic statements that pair scientific terms with a relation, modeled toward subject-predicate-object RDF statements for KG building. The Triples are organized under three (mandatory) or more of twelve total information units (viz., ResearchProblem, Approach, Model, Code, Dataset, ExperimentalSetup, Hyperparameters, Baselines, Results, Tasks, Experiments, and AblationAnalysis). | Provide a detailed description of the following dataset: SemEval-2021 Task 11: NLPContributionGraph |
Chest-Xray8 (COVID-19) | This dataset contains 1125 X-ray images of the studied individuals' chests: 125 images labeled as COVID-19, 500 images labeled as pneumonia, and 500 images labeled as no findings. | Provide a detailed description of the following dataset: Chest-Xray8 (COVID-19)
PlantVillage | The PlantVillage dataset consists of 54303 healthy and unhealthy leaf images divided into 38 categories by species and disease. | Provide a detailed description of the following dataset: PlantVillage |
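A minimal loading sketch, assuming the common distribution layout of one directory per species/disease class (the path below is a placeholder):

```python
from torchvision import datasets, transforms

# Assumes one sub-directory per class, as in common PlantVillage releases.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
ds = datasets.ImageFolder("path/to/PlantVillage", transform=tfm)
print(len(ds), "images in", len(ds.classes), "classes")  # expect 54303 and 38
```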
S-COCO | Synthetic COCO (S-COCO) is a synthetically created dataset for homography estimation learning. It was introduced by DeTone et al., where the source and target images are generated by duplicating the same COCO image. The source patch $I_S$ is generated by randomly cropping a source candidate at position $p$ with a size of $128 \times 128$ pixels. The patch's corners are then randomly perturbed vertically and horizontally by values within the range $[-\rho, \rho]$, and the four correspondences define a homography $H_{ST}$. The inverse of this homography, $H_{TS} = (H_{ST})^{-1}$, is applied to the target candidate, and from the resulting warped image a target patch $I_T$ is cropped at the same location $p$. Both $I_S$ and $I_T$ are the input data, with the homography $H_{ST}$ as ground truth. | Provide a detailed description of the following dataset: S-COCO
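A minimal sketch of this patch-generation procedure, assuming OpenCV and NumPy; the parameter names (`patch_size`, `rho`) are ours:

```python
import cv2
import numpy as np

def make_scoco_pair(image, patch_size=128, rho=32, rng=None):
    # Assumes the image is larger than patch_size + 2 * rho in both dimensions.
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Choose a crop position p far enough from the border for the perturbation.
    x = int(rng.integers(rho, w - patch_size - rho))
    y = int(rng.integers(rho, h - patch_size - rho))
    src = np.float32([[x, y], [x + patch_size, y],
                      [x + patch_size, y + patch_size], [x, y + patch_size]])
    # Perturb each corner within [-rho, rho]; the 4 correspondences define H_ST.
    dst = src + rng.uniform(-rho, rho, size=(4, 2)).astype(np.float32)
    H_st = cv2.getPerspectiveTransform(src, dst)
    # Apply H_TS = inverse(H_ST) to the duplicate image, then crop both at p.
    warped = cv2.warpPerspective(image, np.linalg.inv(H_st), (w, h))
    I_s = image[y:y + patch_size, x:x + patch_size]
    I_t = warped[y:y + patch_size, x:x + patch_size]
    return I_s, I_t, H_st
```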
PDS-COCO | Photometrically Distorted Synthetic COCO (PDS-COCO) dataset is a synthetically created dataset for homography estimation learning. The idea is exactly the same as in the Synthetic [COCO (S-COCO)](https://paperswithcode.com/dataset/s-coco) dataset with SSD-like image distortion added at the beginning of the whole procedure: the first step involves adjusting the brightness of the image using randomly picked value $\delta_b \in \mathcal{U}(-32, 32)$. Next, contrast, saturation and hue noise is applied with the following values: $\delta_c \in \mathcal{U}(0.5, 1.5)$, $\delta_s \in \mathcal{U}(0.5, 1.5)$ and $\delta_h \in \mathcal{U}(-18, 18)$. Finally, the color channels of the image are randomly swapped with a probability of $0.5$. Such a photometric distortion procedure is applied to the original image independently to create source and target candidates. | Provide a detailed description of the following dataset: PDS-COCO |
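A minimal sketch of the distortion step, assuming 8-bit BGR images and OpenCV's HSV convention (hue in [0, 180)); operator details beyond the stated ranges are our assumptions:

```python
import cv2
import numpy as np

def photometric_distort(image, rng=None):
    rng = rng or np.random.default_rng()
    img = image.astype(np.float32)
    img += rng.uniform(-32, 32)   # brightness delta_b ~ U(-32, 32)
    img *= rng.uniform(0.5, 1.5)  # contrast delta_c ~ U(0.5, 1.5)
    img = np.clip(img, 0, 255).astype(np.uint8)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * rng.uniform(0.5, 1.5), 0, 255)  # saturation delta_s
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-18, 18)) % 180            # hue delta_h
    img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    if rng.random() < 0.5:        # randomly permute color channels with p = 0.5
        img = img[..., rng.permutation(3)]
    return img
```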
BioLeaflets | **BioLeaflets** is a biomedical dataset for Data2Text generation. It is a corpus of 1,336 package leaflets of medicines authorised in Europe, obtained by scraping the European Medicines Agency (EMA) website. Package leaflets are included in the packaging of medicinal products and contain information to help patients use the product safely and appropriately, under the guidance of their healthcare professional. Each document contains six sections: 1) what the product is and what it is used for; 2) what you need to know before you take the product; 3) product usage instructions; 4) possible side effects; 5) product storage conditions; 6) other information. | Provide a detailed description of the following dataset: BioLeaflets
CholecT50 | **CholecT50** is a dataset of endoscopic videos of laparoscopic cholecystectomy surgery introduced to enable research on fine-grained action recognition in laparoscopic surgery.
It is annotated with triplet information in the form of <instrument, verb, target>.
The dataset is a collection of 50 videos consisting of 45 videos from the Cholec80 dataset and 5 videos from an in-house dataset of the same surgical procedure.
CholecT50 is an extension of **CholecT40** with 10 additional videos and standardized classes.
**CholecT45** is a subset of CholecT50 consisting of the 45 videos from the Cholec80 dataset, and was the first public release of CholecT50.
The following are the official variants of the dataset:
- **1. CholecT50**: the original version, as used in the Rendezvous publication.
- **2. CholecT50 (challenge)**: the variant used in the CholecTriplet challenge.
- **3. CholecT50 (cross-val)**: the official cross-validation split of CholecT50.
- **4. CholecT45 (cross-val)**: the official cross-validation split of CholecT45.
- **5. CholecT40**: the original version of CholecT40, as used in the Tripnet publication.
The complete dataset split information is given [here](https://arxiv.org/abs/2204.05235). | Provide a detailed description of the following dataset: CholecT50 |
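As a purely illustrative sketch of how an `<instrument, verb, target>` triplet label can be represented in code (the class names below are examples, not the official CholecT50 label maps):

```python
from typing import NamedTuple

class Triplet(NamedTuple):
    instrument: str
    verb: str
    target: str

# Hypothetical frame-level annotation; a frame may carry several triplets.
frame_triplets = [Triplet("grasper", "retract", "gallbladder")]
print(frame_triplets[0])
```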
WTW | **WTW** (Wired Table in the Wild) is a large-scale dataset with well-annotated structure parsing of tables in multiple styles, covering several scenes such as photos, scanned documents, and web pages.
WTW dataset has 10970 training samples and 3611 testing ones. The test images are divided into 7 challenging categories.
Both the training and test sets contain images and labels. The labels are in XML format and include each cell's bounding box and structure label (start row, end row, start column, end column, and table ID). In addition, the test set contains a separate file describing the sub-classification of each image. | Provide a detailed description of the following dataset: WTW
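A minimal parsing sketch for such an XML label; the tag and field names below are hypothetical, so adapt them to the schema actually shipped with the dataset:

```python
import xml.etree.ElementTree as ET

tree = ET.parse("path/to/label.xml")      # placeholder path
for cell in tree.getroot().iter("cell"):  # hypothetical tag name
    print(
        cell.findtext("tableid"),         # hypothetical field names
        cell.findtext("startrow"), cell.findtext("endrow"),
        cell.findtext("startcol"), cell.findtext("endcol"),
        cell.findtext("bbox"),
    )
```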
PMPC | **PMPC** (Persona Match on Persona-Chat) is a dataset for Speaker Persona Detection (SPD) which aims to detect speaker personas based on the plain conversational text. | Provide a detailed description of the following dataset: PMPC |
MultiEURLEX | **MultiEURLEX** is a multilingual dataset for topic classification of legal documents. The dataset comprises 65k European Union (EU) laws, officially translated in 23 languages, annotated with multiple labels from the EUROVOC taxonomy. The dataset covers 23 official EU languages from 7 language families. | Provide a detailed description of the following dataset: MultiEURLEX |
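A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub under the `multi_eurlex` identifier with per-language configurations:

```python
from datasets import load_dataset

# Assumed Hub identifier and configuration name; adjust if the release differs.
ds = load_dataset("multi_eurlex", "en", split="train")
print(ds[0]["labels"])  # multi-label EUROVOC annotations for one law
```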
M-PCCD | The emerging MPEG point cloud codecs (V-PCC and G-PCC variants) are assessed, and best practices for rate allocation are investigated [1]. For this purpose, three experiments are conducted. In the first experiment, a rigorous evaluation of the codecs is performed, adopting test conditions dictated by experts of the group on a carefully selected set of models, using both subjective and objective quality assessment methodologies. In the other two experiments, different rate allocation schemes for geometry-only and geometry-plus-color encoding are subjectively evaluated, in order to draw conclusions on the best-performing approaches in terms of perceived quality for a given bit rate.
On this webpage, we make publicly available the quality scores associated with the stimuli under assessment for each experiment. For reproducibility, content that was used in the study but is not part of the established point cloud repositories adopted by standardisation bodies is re-distributed. Moreover, scripts are provided to generate the reference models and the rendering-related metadata used in this study. | Provide a detailed description of the following dataset: M-PCCD
FunKPoint | **FunKPoint** is a dataset for finding correspondences in visual data that has ground truth correspondences for 10 tasks and 20 object categories. | Provide a detailed description of the following dataset: FunKPoint |
MFH | The **MFH** dataset is a multi-viewpoint fine-grained hand hygiene dataset. It contains 731,147 samples in total, collected from 6 camera views in 6 different locations. All samples are split into 7 classes. The MFH dataset is distinguished from existing datasets in three aspects: large intra-class differences, subtle inter-class differences, and a distribution mismatch between the training and inference phases. The dataset thus provides a more realistic benchmark. | Provide a detailed description of the following dataset: MFH
Hummingbird | **Hummingbird** is a dataset for examining stylistic lexical cues as perceived by humans, and for characterizing the discrepancy between human perception and BERT. For HUMMINGBIRD, crowd-workers relabeled benchmark datasets for style classification tasks. | Provide a detailed description of the following dataset: Hummingbird
WhyAct | **WhyAct** is a dataset for identifying human action reasons in online videos, consisting of 1,077 visual actions manually annotated with their reasons. | Provide a detailed description of the following dataset: WhyAct |