Dataset columns:

| column | type | length (characters) |
| --- | --- | --- |
| dataset_name | string | 2-128 |
| description | string | 1-9.7k |
| prompt | string | 59-185 |
CIC IoT Dataset 2022
# CIC IoT Dataset 2022 This project aims to generate a state-of-the-art dataset for profiling, behavioural analysis, and vulnerability testing of different IoT devices that use different protocols such as IEEE 802.11, Zigbee, and Z-Wave. The following illustrates the main objectives of the CIC-IoT dataset project: - Configure various IoT devices and analyze the behaviour they exhibit. - Conduct manual and semi-automated experiments of various categories. - Further analyze the network traffic when the devices are idle for three minutes and when powered on for the first two minutes. - Generate different scenarios and analyze the devices' behaviour in different situations. - Conduct and capture the network traffic of devices under current and important attacks in IoT environments. The current CIC IoT dataset project and the activities around it can be summarized in the following steps: ### Network configuration Our lab network was configured with a 64-bit Windows machine with two network interface cards - one is connected to the network gateway, and the other is connected to an unmanaged network switch. Simultaneously, [Wireshark](https://www.wireshark.org/), the open-source network protocol analyzer, listens on both interfaces and captures and saves the output packet capture (pcap) files. IoT devices that require an Ethernet connection are connected to this switch. Additionally, a smart automation hub, Vera Plus, is also connected to the unmanaged switch, which creates our wireless IoT environment to serve IoT devices compatible with Wi-Fi, ZigBee, Z-Wave and Bluetooth. ![](https://www.unb.ca/cic/_assets/images/iot-dataset.jpg) ### Dataset For collecting the data, we captured the network traffic of the IoT devices coming through the gateway using Wireshark and dumpcap in six different types of experiments. The former was used for manual experiments, while the latter was used for semi-automated ones. All the experiments can be organized as follows: 1. **Power:** In this experiment, we powered on all the devices in our lab individually and started a network traffic capture in isolation. 1. **Idle:** In this experiment, we captured the whole network traffic from late in the evening to early in the morning, which we call idle time. In this period, the whole lab was completely evacuated and there were no human interactions involved. 1. **Interactions:** In this experiment, all possible functionality on the IoT devices was exercised, and the corresponding network activity and transmitted packets for each functionality/activity were captured. 1. **Scenarios:** In these experiments, we conducted six different types of scenario experiments using a combination of devices as simulations of the network activity inside a smart home. These experiments were done to see how devices behave while interacting with each other simultaneously. 1. **Active:** In addition to the idle time, the whole network communications were also captured throughout the day. All fellow researchers during this period were allowed to enter the lab whenever they wanted. They might interact with devices and generate network traffic either passively or actively. 1. **Attacks:** In this experiment, we performed two different attacks, Flood and RTSP Brute Force, on some of our devices and captured their attack network traffic.
### Case study – device identification After generating the dataset, we performed a case study on the idea of transferability – training a model on the dataset collected in our lab and transferring the trained model to another lab for testing. We conducted 20 different experiments based on the number of sampled devices from the United States lab. Forty-eight features were extracted from both the training dataset from our lab and the testing dataset from the other lab. Three classes of device types were used in this experiment: Audio, Camera and Home Automation. The training dataset required labels, whereas the test dataset did not, since its labels were what was to be predicted. After training, the model is transferred to the other lab for testing on each device to predict the class of the device in question. For example, if an Amazon Echo Dot is tested on the trained model, the classifier should predict that this device belongs to the device type Audio. This works by counting the classifier's predictions, based on the extracted features, for each device type; the device type with the highest count is predicted as the class of the device in question. ### Dataset directory The main dataset directory (CIC IoT Dataset) contains six subdirectories, one per experiment, namely: 1. **Power:** In this directory, you will find the power experiment packet captures for each device, categorized by different device classes. 1. **Idle:** In this directory, you will find idle experiment packet captures for 30 days, named and sorted by date. 1. **Interactions:** In this directory, you will find the interactions experiment packet captures for each device, categorized by different device classes. Each interaction includes three packet captures. 1. **Scenarios:** In this directory, you will find six sub-directories, each of which is related to one scenario. Each scenario includes three packet captures. 1. **Active:** In this directory, you will find active experiment packet captures for 30 days, named and sorted by date. 1. **Attacks:** In this directory, you will find two sub-directories, Flood and RTSP BruteForce, each for a specific attack performed on a few devices. The latter was performed using two different tools, Hydra and Nmap. Each attack includes three packet captures per device. ### Contributing The project is not currently in development, but any contribution is welcome. Please contact one of the authors of the paper. ### Acknowledgments The authors would like to thank the [Canadian Institute for Cybersecurity](https://www.unb.ca/cic/about/index.html) for its financial and educational support. ### Citation Sajjad Dadkhah, Hassan Mahdikhani, Priscilla Kyei Danso, Alireza Zohourian, Kevin Anh Truong, Ali A. Ghorbani, "[_Towards the development of a realistic multidimensional IoT profiling dataset_](https://ieeexplore.ieee.org/document/9851966)", Submitted to: The 19th Annual International Conference on Privacy, Security & Trust (PST2022), August 22-24, 2022, Fredericton, Canada.
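Below is a minimal sketch of the majority-vote device identification described in the case study above: a trained classifier predicts a device type for each feature vector extracted from a device's traffic, and the most frequent prediction becomes the device-level label. The classifier interface and variable names are illustrative assumptions, not part of the released dataset.

```python
# Minimal sketch of the majority-vote idea described above (illustrative only):
# a trained classifier predicts a device type for every per-flow feature vector
# captured from one device, and the most frequent prediction becomes the
# device-level label. Extraction of the 48 features is assumed to happen elsewhere.
from collections import Counter

def predict_device_type(clf, device_feature_rows):
    """Predict one class ("Audio", "Camera", "Home Automation") per row,
    then return the majority class for the device."""
    per_row_predictions = clf.predict(device_feature_rows)
    counts = Counter(per_row_predictions)
    return counts.most_common(1)[0][0]

# Example usage with a scikit-learn style classifier (hypothetical data):
# device_type = predict_device_type(trained_model, echo_dot_features)
# print(device_type)  # expected: "Audio" for an Amazon Echo Dot
```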
Provide a detailed description of the following dataset: CIC IoT Dataset 2022
IoT devices captures
# IoT devices captures - [Samuel Marchal](https://research.aalto.fi/en/persons/samuel-marchal) (Creator) ## Description This dataset represents the traffic emitted during the setup of 31 smart home IoT devices of 27 different types (4 types are represented by 2 devices each). Each setup was repeated at least 20 times per device type. Each directory contains several pcap files, each representing one setup of the device corresponding to that directory. Files are named Setup-X-Y-STA.pcap, where X identifies the person performing the setup and Y is the sequence number of the given capture. The file \_iotdevice-mac.txt contains the MAC address of the considered IoT device. Please refer to the following publication when citing this dataset: Markus Miettinen, Samuel Marchal, Ibbad Hafeez, N. Asokan, Ahmad-Reza Sadeghi, Sasu Tarkoma, "IoT Sentinel: Automated Device-Type Identification for Security Enforcement in IoT," in Proc. 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017), Jun. 2017. | Date made available | 3 Apr 2017 | | --- | --- | | Publisher | [Aalto University](https://research.aalto.fi/en/datasets/iot-devices-captures) | | Date of data production | 2016 | ## Dataset Licences - Unspecified ## Contact person - [Samuel Marchal](https://research.aalto.fi/en/persons/samuel-marchal) ## DOI [10.24342/285a9b06-de31-4d8b-88e9-5bdba46cc161](https://doi.org/10.24342/285a9b06-de31-4d8b-88e9-5bdba46cc161) ## Access Dataset - [captures\_IoT\_Sentinel.zip](https://research.aalto.fi/files/13004478/captures_IoT_Sentinel.zip) File: multipart/x-zip, 25.4 MB Type: Dataset ## Cite this - **DataSetCite** Marchal, S. (Creator) (3 Apr 2017). IoT devices captures. Aalto University. captures\_IoT\_Sentinel(.zip). 10.24342/285a9b06-de31-4d8b-88e9-5bdba46cc161
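The per-device directory layout described above (Setup-X-Y-STA.pcap files plus an `_iotdevice-mac.txt` file holding the device MAC) lends itself to a simple loader. The sketch below is an illustration under stated assumptions, not an official tool: it uses scapy to read each capture and keep only frames sent by the device's MAC, and the example directory name is hypothetical.

```python
# Sketch (not an official loader): walk a per-device directory as described
# above, read the device MAC from _iotdevice-mac.txt, and keep only packets
# sent by that device in each Setup-X-Y-STA.pcap capture. Requires scapy.
from pathlib import Path
from scapy.all import rdpcap, Ether

def device_setup_packets(device_dir):
    device_dir = Path(device_dir)
    mac = (device_dir / "_iotdevice-mac.txt").read_text().strip().lower()
    for pcap_file in sorted(device_dir.glob("Setup-*-STA.pcap")):
        packets = rdpcap(str(pcap_file))
        sent = [p for p in packets if Ether in p and p[Ether].src.lower() == mac]
        yield pcap_file.name, sent

# Example usage (the directory name is hypothetical):
# for name, pkts in device_setup_packets("captures_IoT_Sentinel/SomeDevice"):
#     print(name, len(pkts))
```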
Provide a detailed description of the following dataset: IoT devices captures
IoT Traffic Traces
# **IOT TRAFFIC TRACES** # Data Collected for IEEE TMC 2018 # Cite our data A. Sivanathan, H. Habibi Gharakheili, F. Loi, A. Radford, C. Wijenayake, A. Vishwanath and V. Sivaraman, "Classifying IoT Devices in Smart Environments Using Network Traffic Characteristics", IEEE Transactions on Mobile Computing, Aug 2018. # Post Processing Tools [https://github.com/arunmir/sdn-sim](https://github.com/arunmir/sdn-sim) # Resources [List\_Of\_Devices.txt](https://iotanalytics.unsw.edu.au/resources/List_Of_Devices.txt)
Provide a detailed description of the following dataset: IoT Traffic Traces
IoT Benign and Attack Traces
**IOT BENIGN AND ATTACK TRACES** # Data Collected for ACM SOSR 2019 # Attack & Benign Data # Instructions [Flow data](https://iotanalytics.unsw.edu.au/anomaly-data/flowdata.zip) contains flow counters of MUD flows; each instance in the file is collected every minute. [Annotations](https://iotanalytics.unsw.edu.au/anomaly-data/annotations.zip) contains information about the start and end times of each attack and the corresponding MUD flows that are impacted by the attack. More information about the device and the attacker can be found [here](https://iotanalytics.unsw.edu.au/anomaly-data/attackinfo.xlsx) Below is an example of the annotations from the Samsung smart camera. e.g.: "1527838552,1527839153,Localfeatures|Arpfeatures,ArpSpoof100L2D" The above line indicates that the start time of the attack is 1527838552 and the end time is 1527839153. "Localfeatures|Arpfeatures" explains that it should impact the local communication and the ARP protocol. "ArpSpoof100L2D" means that the attack was arpspoof launched with a maximum rate of 100 packets per second. In order to identify the attack rows in the flow stats, you can use the condition below. "if (flowtime \>= startTime\*1000 and endTime\*1000\>=flowtime) then attack = true" -- This corresponds to lines 4470 to 4479 in the Samsung smart camera flow data. # Cite our data A. Hamza, H. Habibi Gharakheili, T. Benson, V. Sivaraman, "Detecting Volumetric Attacks on IoT Devices via SDN-Based Monitoring of MUD Activity", ACM SOSR, San Jose, California, USA, Apr 2019. # Source code [https://github.com/ayyoob/mud-ie](https://github.com/ayyoob/mud-ie) # Contact [ayyoobhamza@student.unsw.edu.au](https://github.com/ayyoob/mud-ie)
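The labelling rule quoted above (flow timestamps in milliseconds, annotation start/end times in seconds) can be expressed as a small helper. Everything beyond the quoted condition and the example annotation is an illustrative assumption.

```python
# Sketch of the labelling rule quoted above: flow timestamps are in
# milliseconds, annotation start/end times in seconds, so a flow row is an
# attack row when startTime*1000 <= flowtime <= endTime*1000.
def label_flows(flow_times_ms, annotations):
    """flow_times_ms: iterable of flow timestamps in milliseconds.
    annotations: iterable of (start_s, end_s, features, attack_name) tuples,
    e.g. (1527838552, 1527839153, "Localfeatures|Arpfeatures", "ArpSpoof100L2D")."""
    labels = []
    for t in flow_times_ms:
        attack = any(start * 1000 <= t <= end * 1000
                     for start, end, _, _ in annotations)
        labels.append(attack)
    return labels

# Example with the Samsung smart camera annotation shown above:
# label_flows([1527838600000], [(1527838552, 1527839153,
#                                "Localfeatures|Arpfeatures", "ArpSpoof100L2D")])
# -> [True]
```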
Provide a detailed description of the following dataset: IoT Benign and Attack Traces
Persian Font Recognition (PFR)
Persian Font Recognition (PFR) A dataset for font recognition in the Persian language. This dataset is part of a paper titled "Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks". If you use the findings of the paper or this dataset, please cite the paper. [GitHub repo](https://github.com/mehrdad-dev/persis)
Provide a detailed description of the following dataset: Persian Font Recognition (PFR)
Persian Text Image Segmentation (PTI SEG)
Persian Text Image Segmentation (PTI SEG) This dataset is part of a paper titled "Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks". A dataset for image segmentation of Persian texts. For generating the PTI SEG dataset, we use 735 different backgrounds of four types (stock, papers, noisy real world, and texture) and apply four effects (gradient light, folding, subtle noise, and ink bleed) to the images. If you use the findings of the paper or this dataset, please cite the paper. [GitHub repo](https://github.com/mehrdad-dev/persis)
Provide a detailed description of the following dataset: Persian Text Image Segmentation (PTI SEG)
CoverageEval
**CoverageEval** is a dataset specifically designed for evaluating LLMs on the task of predicting code coverage. To create CoverageEval, we parse the code coverage logs generated during the execution of the test cases. This parsing step enables us to extract the relevant coverage annotations. We then carefully structure and export the dataset in a format that facilitates its use and evaluation by researchers and practitioners alike.
Provide a detailed description of the following dataset: CoverageEval
bestLDS data
Downloadable zip file containing raw data (simulated and real) as well as model fits / saved parameters.
Provide a detailed description of the following dataset: bestLDS data
SuperCLUE
**SuperCLUE** is a Chinese language model evaluation benchmark named after another popular Chinese LLM benchmark, CLUE. SuperCLUE encompasses three sub-tasks: actual users' queries and ratings derived from an LLM battle platform (CArena), open-ended questions with single and multiple-turn dialogues (OPEN), and closed-ended questions with the same stems as open-ended single-turn ones (CLOSE).
Provide a detailed description of the following dataset: SuperCLUE
PointOdyssey
**PointOdyssey** is a large-scale synthetic dataset, and data generation framework, for the training and evaluation of long-term fine-grained tracking algorithms. The dataset currently includes 104 videos, averaging 2,000 frames long, with orders of magnitude more correspondence annotations than prior work.
Provide a detailed description of the following dataset: PointOdyssey
OTTO Recommender Systems Dataset
The `OTTO` session dataset is a large-scale dataset intended for multi-objective recommendation research. We collected the data from anonymized behavior logs of the [OTTO](https://otto.de) webshop and the app. The mission of this dataset is to serve as a benchmark for session-based recommendations and foster research in the multi-objective and session-based recommender systems area. We also launched a [Kaggle competition](https://www.kaggle.com/competitions/otto-recommender-system) with the goal to predict clicks, cart additions, and orders based on previous events in a user session. For additional background, please see the [OTTO Recommender Systems Dataset](https://github.com/otto-de/recsys-dataset) GitHub repository. ## Key Features - 12M real-world anonymized user sessions - 220M events, consisting of `clicks`, `carts` and `orders` - 1.8M unique articles in the catalogue - Ready to use data in `.jsonl` format - Evaluation metrics for multi-objective optimization ## Dataset Statistics | Dataset | #sessions | #items | #events | #clicks | #carts | #orders | Density [%] | | :------ | ---------: | --------: | ----------: | ----------: | ---------: | --------: | ----------: | | Train | 12,899,779 | 1,855,603 | 216,716,096 | 194,720,954 | 16,896,191 | 5,098,951 | 0.0005 | | Test | 1,671,803 | 1,019,357 | 13,851,293 | 12,340,303 | 1,155,698 | 355,292 | 0.0005 |
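A minimal sketch of reading the `.jsonl` session files and tallying event types is shown below; the field names (`session`, `events`, `type`) are assumed to follow the format documented in the linked GitHub repository, and the file name is a placeholder.

```python
# Sketch of loading the .jsonl session files and tallying event types.
# Field names ("events", "type") are assumed to follow the format documented
# in the otto-de/recsys-dataset repository; the file name is a placeholder.
import json
from collections import Counter

def count_event_types(jsonl_path, max_sessions=None):
    counts = Counter()
    with open(jsonl_path) as f:
        for i, line in enumerate(f):
            if max_sessions is not None and i >= max_sessions:
                break
            session = json.loads(line)
            counts.update(event["type"] for event in session["events"])
    return counts

# print(count_event_types("train.jsonl", max_sessions=10_000))
# expected keys: "clicks", "carts", "orders"
```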
Provide a detailed description of the following dataset: OTTO Recommender Systems Dataset
CANNOT
## Dataset Summary **CANNOT** is a dataset that focuses on negated textual pairs. It currently contains **77,376 samples**, of which roughly half are negated pairs of sentences, and the other half are not (they are paraphrased versions of each other). The most frequent negation that appears in the dataset is verbal negation (e.g., will → won't), although it also contains pairs with antonyms (cold → hot). <br> ## Languages CANNOT includes exclusively texts in **English**. <br> ## Dataset Structure The dataset is given as a [`.tsv`](https://en.wikipedia.org/wiki/Tab-separated_values) file with the following structure: | premise | hypothesis | label | |:------------|:---------------------------------------------------|:-----:| | A sentence. | An equivalent, non-negated sentence (paraphrased). | 0 | | A sentence. | The sentence negated. | 1 | The dataset can be easily loaded into a Pandas DataFrame by running: ```Python import pandas as pd dataset = pd.read_csv('negation_dataset_v1.0.tsv', sep='\t') ``` <br> ## Dataset Creation The dataset has been created by cleaning up and merging the following datasets: 1. _Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal Negation_ (see [`datasets/nan-nli`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/nan-nli)). 2. _GLUE Diagnostic Dataset_ (see [`datasets/glue-diagnostic`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/glue-diagnostic)). 3. _Automated Fact-Checking of Claims from Wikipedia_ (see [`datasets/wikifactcheck-english`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/wikifactcheck-english)). 4. _From Group to Individual Labels Using Deep Features_ (see [`datasets/sentiment-labelled-sentences`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/sentiment-labelled-sentences)). In this case, the negated sentences were obtained by using the Python module [`negate`](https://github.com/dmlls/negate). 5. _It Is Not Easy To Detect Paraphrases: Analysing Semantic Similarity With Antonyms and Negation Using the New SemAntoNeg Benchmark_ (see [`datasets/antonym-substitution`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/antonym-substitution)). Once processed, the number of remaining samples in each of the datasets above is: | Dataset | Samples | |:--------------------------------------------------------------------------|-----------:| | Not another Negation Benchmark | 118 | | GLUE Diagnostic Dataset | 154 | | Automated Fact-Checking of Claims from Wikipedia | 14,970 | | From Group to Individual Labels Using Deep Features | 2,110 | | It Is Not Easy To Detect Paraphrases | 8,597 | | <div align="right"><b>Total</b></div> | **25,949** | Additionally, for each of the negated samples, another pair of non-negated sentences has been added by paraphrasing them with the pre-trained model [`🤗tuner007/pegasus_paraphrase`](https://huggingface.co/tuner007/pegasus_paraphrase). Finally, the swapped version of each pair (premise ⇋ hypothesis) has also been included, and any duplicates have been removed.
With this, the number of premises/hypothesis in the CANNOT dataset that appear in the original datasets are: | <div align="left"><b>Dataset</b></div> | <div align="center"><b>Sentences</b></div> | |:--------------------------------------------------------------------------|----------------------:| | Not another Negation Benchmark | 552 &nbsp;&nbsp;&nbsp; (0.36 %) | | GLUE Diagnostic Dataset | 586 &nbsp;&nbsp;&nbsp; (0.38 %) | | Automated Fact-Checking of Claims from Wikipedia | 89,728 &nbsp; (59.98 %) | | From Group to Individual Labels Using Deep Features | 12,626 &nbsp;&nbsp;&nbsp; (8.16 %) | | It Is Not Easy To Detect Paraphrases | 17,198 &nbsp; (11.11 %) | | <div align="right"><b>Total</b></div> | **120,690** &nbsp; (77.99 %) | The percentages above are in relation to the total number of premises and hypothesis in the CANNOT dataset. The remaining 22.01 % (34,062 sentences) are the novel premises/hypothesis added through paraphrase and rule-based negation. <br> ## Additional Information <br> ### Licensing Information The CANNOT dataset is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"> <img alt="Creative Commons License" width="100px" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"/> </a> <br> ### Citation Please cite our [INLG 2023 paper](https://arxiv.org/abs/2307.13989), if you use our dataset. **BibTeX:** ```bibtex @misc{anschütz2023correct, title={This is not correct! Negation-aware Evaluation of Language Generation Systems}, author={Miriam Anschütz and Diego Miguel Lozano and Georg Groh}, year={2023}, eprint={2307.13989}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <br> ### Contributions Contributions to the dataset can be submitted through the [project repository](https://github.com/dmlls/cannot-dataset).
Provide a detailed description of the following dataset: CANNOT
Dataset of UAI 2021 Paper "An Unsupervised Video Game Playstyle Metric via State Discretization"
This is a part of the dataset of the paper published at UAI 2021 (37th Conference on Uncertainty in Artificial Intelligence). It includes training datasets, testing datasets, and HSD models for the three game platforms used in the paper: TORCS, RGSK, and Atari. An example program for using this file will be put on the author's GitHub repo branch: https://github.com/DSobscure/cgi_drl_platform/tree/playstyle_uai2021
Provide a detailed description of the following dataset: Dataset of UAI 2021 Paper "An Unsupervised Video Game Playstyle Metric via State Discretization"
MoNuSAC
Different types of cells play a vital role in the initiation, development, invasion, metastasis and therapeutic response of tumors of various organs. For example, (1) most carcinomas originate from epithelial cells, (2) the spatial arrangement of tumor-infiltrating lymphocytes (TILs) is associated with clinical outcome in several cancers, including those of the breast, prostate, and lung (Fridman et al., Nature Reviews Cancer, 2012), and (3) tumor-associated macrophages (TAMs) influence diverse processes such as angiogenesis, neoplastic cell mitogenesis, antigen presentation, matrix degradation, and cytotoxicity in various tumors (Ruffell and Coussens, Cancer Cell, 2015). Thus, accurate identification and segmentation of nuclei of multiple cell types is important for AI-enabled characterization of tumors and their microenvironment. In this challenge, participants will be provided with H&E stained tissue images of four organs with annotations of multiple cell types, including epithelial cells, lymphocytes, macrophages, and neutrophils. Participants will use the annotated dataset to develop computer vision algorithms to recognize these cell types from the tissue images of unseen patients released in the testing set of the challenge. Additionally, not all cell types will have an equal number of annotated instances in the training dataset, which will encourage participants to develop algorithms for learning from imbalanced classes in a few-shot learning paradigm. H&E staining of human tissue sections is a routine and the most common protocol used by pathologists to enhance the contrast of tissue sections for tumor assessment (grading, staging, etc.) at multiple microscopic resolutions. Hence, we will provide an annotated dataset of H&E stained digitized tissue images of several patients acquired at multiple hospitals using the most common scanner magnification of 40x. The annotations will be done with the help of expert pathologists.
Provide a detailed description of the following dataset: MoNuSAC
BubbleML
A multi-physics dataset of boiling processes. This repository includes downloads, visualizations, and sample applications. This dataset can be used to train operator networks for phase-change phenomena, act as a ground truth for Physics-Informed Neural Networks, or train computer vision models.
Provide a detailed description of the following dataset: BubbleML
SKILL-102
SKILL-102 consists of 102 image classification datasets. Each one supports one complex classification task, and the corresponding dataset was obtained from previously published sources (e.g., task 1: classify flowers into 102 classes, such as lily, rose, petunia, etc., using 8,185 train/val/test images (Nilsback & Zisserman, 2008a); task 2: classify 67 types of scenes, such as kitchen, bedroom, gas station, library, etc., using 15,524 images (Quattoni & Torralba, 2009)). In total, SKILL-102 comprises 102 tasks, 5,033 classes, and 2,041,225 training images. To the best of our knowledge, SKILL-102 is the most challenging completely real (not synthesized or permuted) image classification benchmark for LL and SKILL algorithms, with the largest number of tasks, number of classes, and inter-task variance.
Provide a detailed description of the following dataset: SKILL-102
MoralChoice Survey
*MoralChoice* is a survey dataset to evaluate the moral beliefs encoded in LLMs. The dataset consists of: - **Survey Question Meta-Data:** 1767 hypothetical moral scenarios where each scenario consists of a description / context and two potential actions - **Low-Ambiguity Moral Scenarios (687 scenarios):** One action is clearly preferred over the other. - **High-Ambiguity Moral Scenarios (680 scenarios):** Neither action is clearly preferred - **Survey Question Templates:** 3 hand-curated question templates - **Survey Responses:** Outputs from 28 open- and closed-sourced LLMs
Provide a detailed description of the following dataset: MoralChoice Survey
Marmoset-8K
All animal procedures are overseen by veterinary staff of the MIT and Broad Institute Department of Comparative Medicine, in compliance with the NIH guide for the care and use of laboratory animals and approved by the MIT and Broad Institute animal care and use committees. Video of common marmosets (Callithrix jacchus) was collected in the laboratory of Guoping Feng at MIT. Marmosets were recorded using Kinect V2 cameras (Microsoft) with a resolution of 1080p and frame rate of 30 Hz. After acquisition, images to be used for training the network were manually cropped to 1000 x 1000 pixels or smaller. The dataset is 7,600 labeled frames from 40 different marmosets collected from 3 different colonies (in different facilities). Each cage contains a pair of marmosets, where one marmoset had light blue dye applied to its tufts. One human annotator labeled the 15 marker points on each animal present in the frame (frames contained either 1 or 2 animals). https://benchmark.deeplabcut.org/datasets.html
Provide a detailed description of the following dataset: Marmoset-8K
TriMouse-161
Three wild-type (C57BL/6J) male mice ran on a paper spool following odor trails (Mathis et al 2018). These experiments were carried out in the laboratory of Venkatesh N. Murthy at Harvard University. Data were recorded at 30 Hz with 640 x 480 pixels resolution acquired with a Point Grey Firefly FMVU-03MTM-CS. One human annotator was instructed to localize the 12 keypoints (snout, left ear, right ear, shoulder, four spine points, tail base and three tail points). All surgical and experimental procedures for mice were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and approved by the Harvard Institutional Animal Care and Use Committee. 161 frames were labeled, making this a real-world sized laboratory dataset.
Provide a detailed description of the following dataset: TriMouse-161
Fish-100
Schools of inland silversides (Menidia beryllina, n=14 individuals per school) were recorded in the Lauder Lab at Harvard University while swimming at 15 speeds (0.5 to 8 BL/s, body length, at 0.5 BL/s intervals) in a flow tank with a total working section of 28 x 28 x 40 cm as described in previous work, at a constant temperature (18±1°C) and salinity (33 ppt), at a Reynolds number of approximately 10,000 (based on BL). Dorsal views of steady swimming across these speeds were recorded by high-speed video cameras (FASTCAM Mini AX50, Photron USA, San Diego, CA, USA) at 60-125 frames per second (feeding videos at 60 fps, swimming alone 125 fps). The dorsal view was recorded above the swim tunnel and a floating Plexiglas panel at the water surface prevented surface ripples from interfering with dorsal view videos. Five keypoints were labeled (tip, gill, peduncle, dorsal fin tip, caudal tip). 100 frames were labeled, making this a real-world sized laboratory dataset.
Provide a detailed description of the following dataset: Fish-100
DEplain-APA-sent
### DEplain-APA-sent: A German Parallel Corpus for Sentence Simplification on News Texts DEplain is a new dataset of parallel, professionally written and manually aligned simplifications in **plain German** “plain DE” (or in German: **“Einfache Sprache”**). DEplain consists of four main subcorpora: DEplain-APA-doc, **DEplain-APA-sent**, DEplain-web-doc, and DEplain-web-sent. DEplain-APA-sent consists of approx. 500 news document pairs and approx. 13k sentence pairs. The sentence pairs are all manually aligned. The data is available upon request, please see [https://doi.org/10.5281/zenodo.7674560](https://doi.org/10.5281/zenodo.7674560) for more information. The corpus can be used for German **text simplification**, or in more detail **sentence simplification**.
Provide a detailed description of the following dataset: DEplain-APA-sent
DEplain-APA-doc
### DEplain-APA-doc: A German Parallel Corpus for Document Simplification on News Texts DEplain is a new dataset of parallel, professionally written and manually aligned simplifications in **plain German** “plain DE” (or in German: **“Einfache Sprache”**). DEplain consists of four main subcorpora: **DEplain-APA-doc,** DEplain-APA-sent, DEplain-web-doc, and DEplain-web-sent. DEplain-APA-doc consists of approx. 500 news document pairs. The data is available upon request, please see [https://doi.org/10.5281/zenodo.7674560](https://doi.org/10.5281/zenodo.7674560) for more information. The corpus can be used for German **text simplification**, or in more detail **document simplification**.
Provide a detailed description of the following dataset: DEplain-APA-doc
DEplain-web-sent
### DEplain-web-sent: A German Parallel Corpus for Sentence Simplification on Web Texts DEplain is a new dataset of parallel, professionally written and manually aligned simplifications in **plain German** “plain DE” (or in German: **“Einfache Sprache”**). DEplain consists of four main subcorpora: DEplain-APA-doc, DEplain-APA-sent, DEplain-web-doc, and **DEplain-web-sent**. DEplain-web-sent consists of approx. 150 aligned documents and approx. 2k manually aligned sentence pairs. The data is publicly available (see licenses). The corpus includes texts from the following domains: fictional texts (literature and fairy tales), bible texts, health-related texts, and texts for language learners. The corpus can be used for German **text simplification**, or in more detail **sentence simplification**. The corpus is also available on Huggingface: [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent).
Provide a detailed description of the following dataset: DEplain-web-sent
DEplain-web-doc
### DEplain-web-doc: A German Parallel Corpus for Document Simplification on Web Texts DEplain is a new dataset of parallel, professionally written and manually aligned simplifications in **plain German** “plain DE” (or in German: **“Einfache Sprache”**). DEplain consists of four main subcorpora: DEplain-APA-doc, DEplain-APA-sent, **DEplain-web-doc**, and DEplain-web-sent. DEplain-web-doc consists of approx. 150 aligned documents. The data is publicly available (see licenses). The corpus includes texts from the following domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts. The corpus can be used for German **text simplification**, or in more detail **document simplification**. The corpus is also available on Huggingface: see [https://huggingface.co/datasets/DEplain/DEplain-web-doc](https://huggingface.co/datasets/DEplain/DEplain-web-doc).
Provide a detailed description of the following dataset: DEplain-web-doc
UMVM
We present a further analysis of visual modality incompleteness, benchmarking the latest MMEA models on our proposed dataset MMEA-UMVM. To create our **MMEA-UMVM** (uncertainly missing visual modality) datasets, we perform random image dropping on MMEA datasets. Specifically, we randomly discard entity images to achieve varying degrees of visual modality missingness, ranging from 0.05 to the maximum $R_{img}$ of the raw datasets with a step of 0.05 or 0.1. Finally, we get a total of 97 data splits. Refer to the following paper for more details: [*Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment*](https://arxiv.org/abs/2307.16210)
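Below is a minimal sketch of the random image dropping described above, under the assumption that entity images are held in a simple id-to-image mapping; the actual MMEA data structures will differ.

```python
# Minimal sketch of random image dropping at a target missing ratio, as
# described above (steps of 0.05 up to the raw dataset's maximum R_img).
# The entity -> image mapping below is purely illustrative.
import random

def drop_images(entity_images, missing_ratio, seed=0):
    """entity_images: dict mapping entity id -> image; returns a copy with a
    fraction `missing_ratio` of the entity images randomly removed."""
    rng = random.Random(seed)
    entities = list(entity_images)
    n_keep = int(round((1.0 - missing_ratio) * len(entities)))
    kept = set(rng.sample(entities, n_keep))
    return {e: img for e, img in entity_images.items() if e in kept}

# Splits with increasing visual-modality missingness (illustrative):
# for r_missing in [0.05, 0.10, 0.15]:
#     split = drop_images(images, missing_ratio=r_missing)
```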
Provide a detailed description of the following dataset: UMVM
BioVid
To advance methods for pain assessment, in particular automatic assessment methods, the BioVid Heat Pain Database was collected in a collaboration of the Neuro-Information Technology group of the University of Magdeburg and the Medical Psychology group of the University of Ulm. In our study, 90 participants were subjected to experimentally induced heat pain in four intensities. To compensate for varying heat pain sensitivities, the stimulation temperatures were adjusted based on the subject-specific pain threshold and pain tolerance. Each of the four pain levels was stimulated 20 times in randomized order. For each stimulus, the maximum temperature was held for 4 seconds. The pauses between the stimuli were randomized between 8-12 seconds. The pain stimulation experiment was conducted twice: once with un-occluded face and once with facial EMG sensors.
Provide a detailed description of the following dataset: BioVid
SEED-Bench
**SEED-Bench** consists of 19K multiple-choice questions with accurate human annotations (~6× larger than existing benchmarks), spanning 12 evaluation dimensions including the comprehension of both the image and video modalities.
Provide a detailed description of the following dataset: SEED-Bench
ToolBench
**ToolBench** is an instruction-tuning dataset for tool use, which is created automatically using ChatGPT. Specifically, the authors collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub, then prompt ChatGPT to generate diverse human instructions involving these APIs, covering both single-tool and multi-tool scenarios.
Provide a detailed description of the following dataset: ToolBench
LRD
We collected a new low-light raw denoising (LRD) dataset for training and benchmarking. In contrast to the SID dataset, which sets a fixed exposure time to capture long and short exposure images, we captured long and short exposure images based on the exposure value (EV). Motivated by multi-exposure image fusion, the exposure value for long exposure images was set to 0, and the exposure value for short exposure was set to the commonly used parameters -1, -2, and -3. The dataset is designed for application to low-light raw image denoising and low-light raw image synthesis. The dataset contains both indoor and outdoor scenes. For each scene instance, we first captured a long-exposure image at ISO 100 to get a noise-free reference image. Then we captured multiple short-exposure images using different ISO levels and EVs, with a 1-2 second interval between subsequent images to wait for the sensor to cool down, thus avoiding unexpected noise introduced by sensor heating.
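Under the standard photographic convention that each EV step halves or doubles the captured light, the EV settings described above imply the following relative exposures for the short-exposure captures. This is a back-of-the-envelope illustration of that convention, not a statement about the released files.

```python
# Relative exposure implied by the EV settings described above: under the
# standard convention, an exposure-value offset of EV scales the captured
# light by 2**EV relative to the EV=0 long-exposure reference.
for ev in [0, -1, -2, -3]:
    print(f"EV {ev:+d}: {2 ** ev:.3f}x of the long-exposure reference")
# EV +0: 1.000x, EV -1: 0.500x, EV -2: 0.250x, EV -3: 0.125x
```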
Provide a detailed description of the following dataset: LRD
MineralImage5k
We present a comprehensive dataset comprising a vast collection of raw mineral samples for the purpose of mineral recognition. The dataset encompasses more than 5,000 distinct mineral species and incorporates subsets for zero-shot and few-shot learning. In addition to the samples themselves, some entries in the dataset are accompanied by supplementary natural language descriptions, size measurements, and segmentation masks. For detailed information on each sample, please refer to the minerals_full.csv file.
Provide a detailed description of the following dataset: MineralImage5k
PAD-UFES-20
Over the past few years, different Computer-Aided Diagnosis (CAD) systems have been proposed to tackle skin lesion analysis. Most of these systems work only for dermoscopy images, since there is a strong lack of publicly available clinical image archives to evaluate the aforementioned CAD systems. To fill this gap, we release a skin lesion benchmark composed of clinical images collected from smartphone devices and a set of patient clinical data containing up to 21 features. The dataset consists of 1373 patients, 1641 skin lesions, and 2298 images for six different diagnostics: three skin diseases and three skin cancers. In total, 58.4% of the skin lesions are biopsy-proven, including 100% of the skin cancers. By releasing this benchmark, we aim to support future research and the development of new tools to assist clinicians in detecting skin cancer.
Provide a detailed description of the following dataset: PAD-UFES-20
Top Jet W-Momentum Reconstruction Dataset
A set of Monte Carlo simulated events, for the evaluation of top quarks' (and their child particles') momentum reconstruction, produced using the HEPData4ML package [1]. Specifically, the entries in this dataset correspond with top quark jets, and the momentum of the jets' constituent particles. This is a newer version of the "Top Quark Momentum Reconstruction Dataset", but with sufficiently large changes to warrant this separate posting. [1] J. T. Offermann, X. Liu, and T. Hoffman, HEPData4ML (2023), https://github.com/janTOffermann/HEPData4ML.
Provide a detailed description of the following dataset: Top Jet W-Momentum Reconstruction Dataset
FinBench
FinBench is a benchmark for evaluating the performance of machine learning models with both tabular data inputs and profile text inputs.
Provide a detailed description of the following dataset: FinBench
LLMs4OL Evaluation Datasets
Three tasks were addressed in the LLMs4OL paradigm. The datasets released address the three tasks respectively. They are as follows: Task A: Term Typing Datasets; Task B: Type Taxonomy Discovery Datasets; Task C: Type Non-Taxonomic Relation Extraction Datasets.
Provide a detailed description of the following dataset: LLMs4OL Evaluation Datasets
ccHarmony
**ccHarmony** is a color checker (cc) based image harmonization dataset. The dataset contains 350 real images and 426 segmented foregrounds, in which each real image has one or two segmented foregrounds. Each foreground is associated with 10 synthetic composite images. Therefore, our dataset has in total 4260 pairs of synthetic composite images and ground-truth real images. We split all pairs into 3080 training pairs and 1180 test pairs.
Provide a detailed description of the following dataset: ccHarmony
Human-M3
**Human-M3** is an outdoor multi-modal, multi-view, multi-person human pose database that includes not only multi-view RGB videos of outdoor scenes but also corresponding point clouds.
Provide a detailed description of the following dataset: Human-M3
Tiny-ImageNet-C
Tiny-ImageNet-C is an open-source data set comprising algorithmically generated corruptions (blur, noise) applied to the Tiny-ImageNet (ImageNet-200) test-set.
Provide a detailed description of the following dataset: Tiny-ImageNet-C
ETHEC
The ETHEC dataset includes 47,978 butterfly images with a 4-level label hierarchy: family, sub-family, genus and species (6 families -> 21 sub-families -> 135 genera -> 561 species).
Provide a detailed description of the following dataset: ETHEC
Replication Data for: Bitcoin Gold, Litecoin Silver
Historically, gold and silver have played distinct roles in traditional monetary systems. While gold has primarily been revered as a superior store of value, prompting individuals to hoard it, silver has commonly been used as a medium of exchange. As the financial world evolves, the emergence of cryptocurrencies has introduced a new paradigm of value and exchange. However, the store-of-value characteristic of these digital assets remains largely uncharted. Charlie Lee, the founder of Litecoin, once likened Bitcoin to gold and Litecoin to silver. To validate this analogy, our study employs several metrics, including UTXO, STXO, WAL, CoinDaysDestroyed (CDD), and public on-chain transaction data. Furthermore, we've devised trading strategies centered around the Price-to-Utility (PU) ratio, offering a fresh perspective on crypto-asset valuation beyond traditional utilities. Our back-testing results not only display trading indicators for both Bitcoin and Litecoin but also substantiate Lee's metaphor, underscoring Bitcoin's superior store-of-value proposition relative to Litecoin. We anticipate that our findings will drive further exploration into the valuation of crypto assets. For enhanced transparency and to promote future research, we've made our datasets available on Harvard Dataverse and shared our Python code on GitHub as open source.
Provide a detailed description of the following dataset: Replication Data for: Bitcoin Gold, Litecoin Silver
CIDII Dataset
The CIDII dataset is a binary classification dataset, consisting of two classes: correct information and disinformation related to Islamic issues. The CIDII dataset belongs to our research (DISINFORMATION DETECTION ABOUT ISLAMIC ISSUES ON SOCIAL MEDIA USING DEEP LEARNING TECHNIQUES) published in the MJCS journal at the link below: https://ejournal.um.edu.my/index.php/MJCS/article/view/41935 This dataset consists of five columns: 1- ID: Each article has a unique ID. 2- Article: The article contains text that is either facts related to Islamic issues, if the information is correct, or posts targeting the Islamic religion, if the information is false. Most posts contain only the body without a title. 3- Propagation Source: This refers to the source of the article content; it contains a Facebook link when the post is disinformation, or a link to Islamic websites when the article refers to correct information (an explanation of a verse, a hadith, or an article related to the Islamic religion). 4- Article Type: This column contains the type of the published article: a post if the article is disinformation, or an Islamic article, a Quranic interpretation, or a hadith if the information is correct. 5- Class Type: This column shows whether the article belongs to the category of correct information or disinformation.
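A small pandas sketch for inspecting the five columns listed above; the file name is a placeholder and the exact column spellings in the released file may differ.

```python
# Sketch of loading the CIDII table and inspecting the five columns listed
# above. The file name "cidii.csv" and the column spellings are assumptions
# based on the description, not the official release.
import pandas as pd

df = pd.read_csv("cidii.csv")
expected = ["ID", "Article", "Propagation Source", "Article Type", "Class Type"]
print(df.columns.tolist())              # should roughly match the five columns above
print(df["Class Type"].value_counts())  # correct information vs. disinformation
```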
Provide a detailed description of the following dataset: CIDII Dataset
MoisesDB
In this paper, we introduce the MoisesDB dataset for musical source separation. It consists of 240 tracks from 45 artists, covering twelve musical genres. For each song, we provide its individual audio sources, organized in a two-level hierarchical taxonomy of stems. This will facilitate building and evaluating fine-grained source separation systems that go beyond the limitation of using four stems (drums, bass, other, and vocals) due to lack of data. To facilitate the adoption of this dataset, we publish an easy-to-use Python library to download, process and use MoisesDB. Alongside a thorough documentation and analysis of the dataset contents, this work provides baseline results for open-source separation models at varying separation granularities (four, five, and six stems), and discusses their results.
Provide a detailed description of the following dataset: MoisesDB
XStoryCloze
XStoryCloze consists of the professionally translated version of the English StoryCloze dataset (Spring 2016 version) into 10 non-English languages. This dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multilingual language models. This dataset is released by Meta AI.
Provide a detailed description of the following dataset: XStoryCloze
TDMD
**TDMD** contains eight reference DCM objects with six typical distortions. Using processed video sequences (PVS) derived from the DCM, the authors conducted a large-scale subjective experiment that resulted in 303 distorted DCM samples with mean opinion scores, making the TDMD the largest available DCM database to our knowledge.
Provide a detailed description of the following dataset: TDMD
CHI3D
**CHI3D** is a lab-based accurate 3D motion capture dataset with 631 sequences containing 2,525 contact events and 728,664 ground-truth 3D poses, as well as FlickrCI3D, a dataset of 11,216 images with 14,081 processed pairs of people and 81,233 facet-level surface correspondences.
Provide a detailed description of the following dataset: CHI3D
VisAlign
**VisAlign** is a dataset for measuring AI-human visual alignment in terms of image classification, a fundamental task in machine perception. In order to evaluate AI-Human visual alignment, a dataset should encompass samples with various scenarios that may arise in the real world and have gold human perception labels. The dataset consists of three groups of samples, namely Must-Act (i.e., Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity of visual information in an image and further divided into eight categories.
Provide a detailed description of the following dataset: VisAlign
CAER
Jiyoung Lee, Seungryong Kim, Sunok Kim, Jungin Park, Kwanghoon Sohn; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 10143-10152
Provide a detailed description of the following dataset: CAER
AISECKG
Cybersecurity education is exceptionally challenging as it involves learning about complex attacks and tools and developing critical problem-solving skills to defend systems. For a student or novice researcher in the cybersecurity domain, there is a need to design an adaptive learning strategy that can break complex tasks and concepts into simple representations. An AI-enabled automated cybersecurity education system can improve cognitive engagement and active learning. Knowledge graphs (KGs) provide a visual graph representation that supports reasoning and interpretation over the underlying data, making them suitable for use in education and interactive learning. However, there are no publicly available datasets for the cybersecurity education domain to build such systems. The data is present as unstructured educational course material, Wiki pages, capture-the-flag (CTF) writeups, etc. Creating knowledge graphs from unstructured text is challenging without an ontology or annotated dataset. However, data annotation for cybersecurity requires domain experts. To address these gaps, we made three contributions in this paper. First, we propose an ontology for the cybersecurity education domain for students and novice learners. Second, we develop AISecKG, a triple dataset with cybersecurity-related entities and relations as defined by the ontology. This dataset can be used to construct knowledge graphs to teach cybersecurity and promote cognitive learning. It can also be used to build downstream applications like recommendation systems or self-learning question-answering systems for students. The dataset would also help identify malicious named entities and their probable impact. Third, using this dataset, we show a downstream application to extract custom-named entities from texts and educational material on cybersecurity.
Provide a detailed description of the following dataset: AISECKG
FracAtlas
FracAtlas is a musculoskeletal bone fracture dataset with annotations for deep learning tasks like classification, localization, and segmentation. The dataset contains a total of 4,083 X-ray images with annotations in COCO, VGG, YOLO, and Pascal VOC formats. This dataset is made freely available for any purpose. The data provided within this work are free to copy, share or redistribute in any medium or format. The data might be adapted, remixed, transformed, and built upon. The dataset is licensed under a CC-BY 4.0 license. It should be noted that to use the dataset correctly, one needs to have knowledge of the medical and radiology fields to understand the results and make conclusions based on the dataset. It's also important to consider the possibility of labeling errors. Furthermore, any publication that utilizes this resource should acknowledge the original paper, and the authors are encouraged to share their code and models to assist the research community in replicating the experiments and promoting the field of medical imaging.
Provide a detailed description of the following dataset: FracAtlas
TheoremQA
We propose the first question-answering dataset driven by STEM theorems. We annotated 800 QA pairs covering 350+ theorems spanning Math, EE&CS, Physics and Finance. The dataset was collected by human experts and is of very high quality. We provide the dataset as a new benchmark to test the limits of large language models in applying theorems to solve challenging university-level questions. We also provide a pipeline to prompt LLMs and evaluate their outputs with WolframAlpha.
Provide a detailed description of the following dataset: TheoremQA
Volumetric CMR Cartesian Datasets
Datasets at https://zenodo.org/record/8105485 for Motion Robust CMR Reconstruction Code in https://github.com/syedmurtazaarshad/motion-robust-CMR
Provide a detailed description of the following dataset: Volumetric CMR Cartesian Datasets
CheXlocalize
[CheXlocalize](https://stanfordaimi.azurewebsites.net/datasets/23c56a0d-15de-405b-87c8-99c30138950c) is a radiologist-annotated segmentation dataset on chest X-rays. The dataset consists of two types of radiologist annotations for the localization of 10 pathologies: pixel-level segmentations and most-representative points. Annotations were drawn on images from the [CheXpert](https://stanfordmlgroup.github.io/competitions/chexpert/) validation and test sets. The dataset also consists of two separate sets of radiologist annotations: (1) ground-truth pixel-level segmentations on the validation and test sets, drawn by two board-certified radiologists, and (2) benchmark pixel-level segmentations and most-representative points on the test set, drawn by a separate group of three board-certified radiologists. The validation and test sets consist of 234 chest X-rays from 200 patients and 668 chest X-rays from 500 patients, respectively. The 10 pathologies of interest are Atelectasis, Cardiomegaly, Consolidation, Edema, Enlarged Cardiomediastinum, Lung Lesion, Lung Opacity, Pleural Effusion, Pneumothorax, and Support Devices. For more details, please see our paper, [_Benchmarking saliency methods for chest X-ray interpretation_](https://doi.org/10.1038/s42256-022-00536-x).
Provide a detailed description of the following dataset: CheXlocalize
SciGraphQA
SciGraphQA is a large-scale, open-domain dataset focused on generating multi-turn conversational question-answering dialogues centered around understanding and describing scientific graphs and figures. It contains over 300,000 samples derived from academic research papers in computer science and machine learning domains. Each sample in SciGraphQA consists of a scientific graph image sourced from papers on ArXiv, accompanied by rich textual context including the paper's title, abstract, figure caption, and a paragraph from the paper referencing the figure. Using this comprehensive context, the dataset employs a large language model to produce multi-turn question-answer dialogues aimed at explaining the given graph in an interactive, conversational format. On average, each sample contains 2-3 turns of question-answer exchange. The key motivation behind SciGraphQA is providing a large-scale resource to support research and development of multi-modal AI systems that can engage in informative, open-ended conversations about graphs and data visualizations. The multi-turn dialogue format presents a more natural and interactive setting compared to standard visual question answering datasets that use fixed sets of standalone questions. Potential use cases of SciGraphQA include pre-training and benchmarking multi-modal conversational models for scientific graph comprehension, building AI assistants that can discuss data insights, and developing aids to help individuals understand complex figures and diagrams interactively. The academic source material also provides a way to evaluate model capabilities on expert-level graphs spanning diverse topics and complex visual encodings.
Provide a detailed description of the following dataset: SciGraphQA
ClassEval
In this work, we make the first attempt to evaluate LLMs in a more challenging code generation scenario, i.e. class-level code generation. We first manually construct the first class-level code generation benchmark ClassEval of 100 class-level Python code generation tasks with approximately 500 person-hours. Based on it, we then perform the first study of 11 state-of-the-art LLMs on class-level code generation. Based on our results, we have the following main findings. First, we find that all existing LLMs show much worse performance on class-level code generation compared to standalone method-level code generation benchmarks like HumanEval; and method-level coding ability cannot equivalently reflect class-level coding ability among LLMs. Second, we find that GPT-4 and GPT-3.5 still exhibit clearly superior performance compared to other LLMs on class-level code generation, and the second-tier models include Instruct-Starcoder, Instruct-Codegen, and Wizardcoder, with very similar performance. Third, we find that generating the entire class all at once (i.e. the holistic generation strategy) is the best generation strategy only for GPT-4 and GPT-3.5, while method-by-method generation (i.e. incremental and compositional) is a better strategy for the other models, which have limited ability to understand long instructions and utilize intermediate information. Lastly, we find that models have limited ability to generate method-dependent code, and we discuss the frequent error types in generated classes.
Provide a detailed description of the following dataset: ClassEval
Text2KGBench
**Text2KGBench** is a benchmark to evaluate the capabilities of language models to generate KGs from natural language text guided by an ontology. Given an input ontology and a set of sentences, the task is to extract facts from the text while complying with the given ontology (concepts, relations, domain/range constraints) and being faithful to the input sentences.
Provide a detailed description of the following dataset: Text2KGBench
Real-CE
**Real-CE** is a real-world Chinese-English benchmark dataset for the task of STISR, with an emphasis on restoring structurally complex Chinese characters. The benchmark provides 1,935/783 real LR-HR text image pairs (containing 33,789 text lines in total) for training/testing in 2× and 4× zooming modes, complemented by detailed annotations, including detection boxes and text transcripts.
Provide a detailed description of the following dataset: Real-CE
MDOT
### Description The dataset consists of 92 groups of video clips with 113,918 high-resolution frames taken by two drones and 63 groups of video clips with 145,875 high-resolution frames taken by three drones.
Provide a detailed description of the following dataset: MDOT
PLC_data_1
Less complex PLC dataset: the states each have a dedicated feature where a positive flank (0->1 value switch) indicates a state start. The noise is generated by inducing random features. Signals: 35; States: 17; Cycles: 3000; Noise [%]: 10; Cycle Time: 89.9
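A minimal sketch of the positive-flank rule described above, assuming the data is loadable with pandas; the file and column names are placeholders.

```python
# Sketch of the positive-flank rule described above: a state starts wherever
# its dedicated feature switches from 0 to 1. File and column names are
# placeholders, not the official naming.
import pandas as pd

def state_start_indices(df, state_column):
    col = df[state_column]
    flank = (col == 1) & (col.shift(fill_value=0) == 0)   # 0 -> 1 transition
    return df.index[flank].tolist()

# df = pd.read_csv("plc_data_1.csv")
# print(state_start_indices(df, "State 1"))
```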
Provide a detailed description of the following dataset: PLC_data_1
PLC_data_2
More complex PLC dataset: the states each have a unique combination of feature values indicating a state start. This leads to less precise cycle cutting, because our logic only considers one cyclic signal; for this reason, this part of the code is still in development. The noise is generated by inducing random values. Signals: 26; States: 15; Cycles: 400; Noise [%]: 10. Column names: ['Timestamp', 'Signal 1', 'Signal 2', 'Signal 3', 'Signal 4', 'Signal 5', 'Signal 6', 'Signal 7', 'Signal 8', 'Signal 9', 'Signal 10', 'Signal 11', 'Signal 12', 'Signal 13', 'Signal 14', 'Signal 15', 'Signal 16', 'Signal 17', 'Signal 18', 'Signal 19', 'Signal 20', 'Signal 21', 'Signal 22', 'Signal 23', 'Signal 24', 'Signal 25', 'Signal 26', 'Cycle', 'State']. Data shape: (178267, 29). Cycle Time: 147.2
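Given the column names and shape listed above, here is a small sketch that groups rows by the `Cycle` column and inspects the state sequence within one cycle; the file name is a placeholder.

```python
# Sketch using the column names listed above: group rows by "Cycle" and look
# at the sequence of "State" values within each cycle. The file name is a
# placeholder, not the official release name.
import pandas as pd

df = pd.read_csv("plc_data_2.csv")          # expected shape (178267, 29)
print(df["Cycle"].nunique(), "cycles")      # ~400 per the description
first_cycle = df[df["Cycle"] == df["Cycle"].min()]
print(first_cycle["State"].drop_duplicates().tolist())  # state order in one cycle
```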
Provide a detailed description of the following dataset: PLC_data_2
Arabic-ToD
The Arabic-TOD dataset is based on the BiToD dataset. Of the 3,689 BiToD-English dialogues, 1,500 dialogues (30,000 utterances) were translated into Arabic. We translated the task-related keywords such as cuisine, dietary restrictions, and price level for the restaurant domain; price level for the hotel domain; type and price level for the attraction domain; and day, weather, and city for the weather domain. We kept the rest of the values without translation, such as hotel and restaurant names, locations, and addresses. These values are real entities in Hong Kong (literals), and most of them contain Chinese words written in English, so they have not been translated. For the slot values in the Arabic-TOD dataset, we used the slot names as they are in English and translated their corresponding values, except for the entities in Hong Kong, since the Arabic-TOD dataset supports code-switching. We did not translate the 'UserTask' for any dialogue, since it is not important for developing the system; it is just a summarization of the dialogue contents.
Provide a detailed description of the following dataset: Arabic-ToD
MGPFD
**MGPFD** is a dataset for the multi-goal path-finding problem, comprising a training dataset and a simulation dataset.
Provide a detailed description of the following dataset: MGPFD
ALTA 2021 Shared Task
This dataset is described in the [ALTA 2021 Shared Task website](https://www.alta.asn.au/events/sharedtask2021/index.html) and associated [CodaLab competition](https://competitions.codalab.org/competitions/33739). The basic task is to build an automatic evidence grading system for evidence-based medicine. Evidence-based medicine is a medical practice which requires practitioners to search medical literature for evidence when making clinical decisions. The practitioners are also required to grade the quality of extracted evidence on some chosen scale. The goal of the grading system is to automatically determine the grade of a piece of evidence given the article abstract(s) from which the evidence is extracted. The grading scale used for this task is the Strength of Recommendation Taxonomy (SORT). This taxonomy has 3 grades: A (strong), B (moderate), and C (weak). The grade of a piece of evidence depends on multiple factors; more information about this grading scale can be found in the paper by [Ebell et al. (2004)](https://www.jabfm.org/content/17/1/59).
Provide a detailed description of the following dataset: ALTA 2021 Shared Task
UJI Probes
This package contains anonymized 802.11 probe request packets captured throughout March 2023 at Universitat Jaume I. The packet capture file is in the standardized *.pcap binary format and can be opened with any packet analysis tool such as Wireshark or scapy (a Python packet analysis and manipulation package); a small reading sketch follows. The dataset is usable for analysis of Wi-Fi probe requests, presence detection, occupancy estimation or signal stability analysis.
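A minimal sketch of reading the capture with scapy; the file name is hypothetical and the per-address count is just one example of what can be extracted.

```python
from collections import Counter

from scapy.all import PcapReader
from scapy.layers.dot11 import Dot11, Dot11ProbeReq

sources = Counter()

# Stream the capture instead of loading it all into memory.
with PcapReader("uji_probes_march_2023.pcap") as pcap:
    for pkt in pcap:
        if pkt.haslayer(Dot11ProbeReq):
            # addr2 is the (anonymized) transmitter MAC address of the probe request.
            sources[pkt[Dot11].addr2] += 1

print(f"{sum(sources.values())} probe requests from {len(sources)} distinct source addresses")
```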
Provide a detailed description of the following dataset: UJI Probes
ALTA 2023 Shared Task
This dataset is described in the [ALTA 2023 Shared Task](https://www.alta.asn.au/events/sharedtask2023/index.html) and associated [CodaLab competition](https://codalab.lisn.upsaclay.fr/competitions/14327). The goal of this task is to build automatic detection systems that can discriminate between human-authored and synthetic text generated by Large Language Models (LLMs). The generated synthetic text will come from a variety of sources, including different domain sources (e.g., law, medical) and different LLMs (e.g., T5, GPT-X). The performance of the models will be evaluated based on their accuracy and robustness in detecting synthetic text.
Provide a detailed description of the following dataset: ALTA 2023 Shared Task
ALTA 2022 Shared Task
This dataset is described in the [ALTA 2022 Shared Task](https://www.alta.asn.au/events/sharedtask2022/index.html) and associated [CodaLab competition](https://codalab.lisn.upsaclay.fr/competitions/6935). The goal of this task is to build automatic sentence classifiers that can map the content of biomedical abstracts into a set of pre-defined categories, which are used for Evidence-Based Medicine (EBM). EBM practitioners rely on specific criteria when judging whether a scientific article is relevant to a given question. They generally follow the PICO criterion: Population (P) (i.e., participants in a study); Intervention (I); Comparison (C) (if appropriate); and Outcome (O) (of an Intervention). Variations and extensions of this classification have been proposed, and for this task we will extend PICO by adding the classes Background (B) and Study Design (S); and including sentences that have no relevant content: Other (O). Therefore, the goal will be to classify the provided sentences according to the PIBOSO schema. Such information could be leveraged in various ways: e.g., to improve search performance; to enable structured querying with specific categories; and to aid users in more quickly making judgements against specified PIBOSO criteria.
Provide a detailed description of the following dataset: ALTA 2022 Shared Task
In-the-wild ChatGPT Prompts
This dataset contains 6,387 ChatGPT prompts collected from four platforms (Reddit, Discord, websites, and open-source datasets) from December 2022 to May 2023. Among these prompts, 666 jailbreak prompts were identified.
Provide a detailed description of the following dataset: In-the-wild ChatGPT Prompts
LPR4M
**LPR4M** is a large-scale live commerce dataset, offering a significantly broader coverage of categories and diverse modalities such as video, image, and text. It contains 4M exactly matched〈clip, image〉pairs of 4M live clips, and 332k shop images. Each image has 12 clips with different product variations, e.g., viewpoint, scale, and occlusion.
Provide a detailed description of the following dataset: LPR4M
IDiff-Face
The IDiff-Face dataset was proposed in the paper "IDiff-Face: Synthetic-based Face Recognition through Fizzy Identity-Conditioned Diffusion Models". This dataset is synthetically generated using the IDiff-Face model.
Provide a detailed description of the following dataset: IDiff-Face
GVLM
For change detection tasks, current open-source datasets mainly focus on building extraction (e.g., WHU building dataset and LEVIR-CD dataset) (Chen and Shi, 2020; Ji et al., 2018) and urban development monitoring (e.g., SECOND dataset, Google dataset and CDD dataset) (Yang et al., 2022; Peng et al., 2021; Lebedev et al., 2018), whereas datasets for natural disaster monitoring have been seldom investigated. Therefore, we sought to present the GVLM dataset, the first large-scale and open-source VHR landslide mapping dataset. It includes $17$ bitemporal very-high-resolution imagery pairs with a spatial resolution of $0.59$ m acquired via the Google Earth service. Each sub-dataset contains a pair of bitemporal images and the corresponding ground-truth map. The total coverage of the dataset is $163.77$ km². The landslide sites in different geographical locations have various sizes, shapes, occurrence times, spatial distributions, phenology states, and land cover types, resulting in considerable spectral heterogeneity and intensity variations in the remote sensing imagery. The GVLM dataset can be used to develop and evaluate machine/deep learning models for change detection, semantic segmentation and landslide extraction.
Provide a detailed description of the following dataset: GVLM
Engineered Cardiac Microbundle Time-Lapse Microscopy Image Dataset
The "Microbundle Time-lapse Dataset" contains 24 experimental time-lapse images of cardiac microbundles using three distinct types of experimental testbed of beating lab grown hiPSC-based cardiac microbundles. Of the 24 experimental time-lapse images, 23 examples are brightfield videos, and a single example is a phase contrast video. We categorize the different experimental testbeds into 3 types, where "Type 1" includes movies obtained from standard experimental microbundle platforms termed microbundle strain gauges [1,2,3]. We refer to data collected from non-standard platforms termed FibroTUGs [4] as "Type 2" data, and "Type 3" data represents a highly versatile and diverse nanofabricated experimental platform [5,6]. Within this dataset, we include 11 examples of "Type 1" tissue, 7 examples of "Type 2" tissue, and 6 examples of "Type 3" tissue, totaling to 24 different examples of these experimental data. In addition to the raw videos shared in ".tif" format, we include the tissue masks, whether generated automatically via our computational pipeline [7] or manually via tracing in ImageJ [8], that were used to run the "MicroBundleCompute" software [7] for analyzing these data. These masks are included within the "masks" subfolders, where each mask text file is a two-dimensional array in which the tissue domain is denoted by “1” and the background domain is denoted by “0”. We include the "mask.tif" files for visualization purposes only. In brief, this dataset was used to showcase the functionality of the "MicroBundleCompute" analysis software [7] including pillar tracking and analysis of heterogeneous displacement and strain fields. To reproduce the results shown in [9], the manuscript introducing the "MicroBundleCompute" software, only a single pre-processing step is required. Specifically, the single ".tif" file for each experiment needs to be converted into a series of individual images saved in the ".TIF" format in the "movie" folder. References: [1] Boudou T, Legant WR, Mu A, Borochin MA, Thavandiran N, Radisic M, Zandstra PW, Epstein JA, Margulies KB, Chen CS. A microfabricated platform to measure and manipulate the mechanics of engineered cardiac microtissues. Tissue Engineering Part A. 2012 May 1;18(9-10):910-9. [2] Xu F, Zhao R, Liu AS, Metz T, Shi Y, Bose P, Reich DH. A microfabricated magnetic actuation device for mechanical conditioning of arrays of 3D microtissues. Lab on a Chip. 2015;15(11):2496-503. [3] Bielawski KS, Leonard A, Bhandari S, Murry CE, Sniadecki NJ. Real-time force and frequency analysis of engineered human heart tissue derived from induced pluripotent stem cells using magnetic sensing. Tissue Engineering Part C: Methods. 2016 Oct 1;22(10):932-40. [4] DePalma SJ, Davidson CD, Stis AE, Helms AS, Baker BM. Microenvironmental determinants of organized iPSC-cardiomyocyte tissues on synthetic fibrous matrices. Biomaterials science. 2021;9(1):93-107. [5] Jayne RK, Karakan MÇ, Zhang K, Pierce N, Michas C, Bishop DJ, Chen CS, Ekinci KL, White AE. Direct laser writing for cardiac tissue engineering: a microfluidic heart on a chip with integrated transducers. Lab on a Chip. 2021;21(9):1724-37. [6] Karakan MÇ. A Direct-Laser-Written Heart-on-a-Chip Platform for Generation and Stimulation of Engineered Heart Tissues (Doctoral dissertation, Boston University, 2023). [7] Kobeissi H, & Lejeune E (2023). MicroBundleCompute [Computer software]. https://github.com/HibaKob/MicroBundleCompute [8] Bourne R. Fundamentals of digital imaging in medicine. 
Springer Science & Business Media; 2010 Jan 18. [9] Kobeissi H, Jilberto, J, Karakan MÇ, Gao X, DePalma SJ, Das SL, Quach L, Urquia J, Baker BM, Chen CS, Nordsletten D, Lejeune E. MicroBundleCompute: Automated segmentation, tracking, and analysis of sub-domain deformation in cardiac microbundles, under review (2023).
Provide a detailed description of the following dataset: Engineered Cardiac Microbundle Time-Lapse Microscopy Image Dataset
Vessel detection Dateset
A vessel dataset built from 85 videos. The dataset covers common weather and lighting conditions such as sunny, rainy, reflective, low light, and night. The first 43 videos are annotated with one frame every 1 second, and the last 42 videos are annotated with one frame every 2 seconds. Furthermore, since the collected videos contain many duplicate images, we annotate only one image out of each group of similar images to alleviate overfitting when training the model, for a total of 4,563 images and 5,864 annotations. Among them, 82 videos are used for training the model, of which 3,063 images form the training set and 1,318 the validation set, and three untrained scenes are used as the test set, totaling 182 images.
Provide a detailed description of the following dataset: Vessel detection Dateset
CG-Eval
This paper presents CG-Eval, the first comprehensive evaluation of the generation capabilities of large Chinese language models across a wide range of academic disciplines. The models' performance was assessed based on their ability to generate accurate and relevant responses to different types of questions in six disciplines, namely, Science and Engineering, Humanities and Social Sciences, Mathematical Calculations, Medical Practitioner Qualification Examination, Judicial Examination, and Certified Public Accountant Examination. This paper also presents Gscore, a composite index derived from the weighted sum of multiple metrics to measure the quality of a model's generation against a reference.
Provide a detailed description of the following dataset: CG-Eval
Inshorts News
Inshorts News dataset. Inshorts is a news service that offers short summaries of news from around the web, each in 60 words or less. This dataset contains the headlines, summaries, and sources of news items.
Provide a detailed description of the following dataset: Inshorts News
MNIST-C
Common corruptions dataset for MNIST.
Provide a detailed description of the following dataset: MNIST-C
DeepPatent
The dataset consists of over 350,000 public domain patent drawings collected from the United States Patent and Trademark Office (USPTO). The whole collection consists of a total of 45,000 design patents published between January 2018 and June 2019.
Provide a detailed description of the following dataset: DeepPatent
PapioVoc
Abstract The data collection process consisted of continuously recording, for one month, a group of Guinea baboons living in semi-liberty at the CNRS primatology center in Rousset-sur-Arc (France). Two microphones were placed near their enclosure to continuously record the sounds produced by the group. A convolutional neural network (CNN) was used on these large and noisy audio recordings to automatically extract segments of sound containing a baboon vocal production, following the method of Bonafos et al. (2023). The resulting dataset consists of one-second to several-minute wav files of automatically detected vocalization segments. The dataset thus provides a wide range of baboon vocalizations produced at all times of the day. It can be used to study vocal productions of non-human primates, their repertoire, their distribution over the day, their frequency, and their heterogeneity. In addition to the analysis of animal communication, the dataset can also be used as a learning base for sound classification models. Data acquisition The data are audio recordings of baboons. The recordings were made with an H6 Zoom recorder, using the included XYH-6 stereo microphone. The sampling rate is 44,100 Hz at 16 bits. The microphones were placed in the vicinity of the enclosure for one month and recorded continuously on a PC computer. A CNN passed over the data with a sliding window of 1 second and an overlap of 80% to detect the vocal productions of the baboons. The dataset consists of the segments predicted by the CNN to contain a baboon vocalization. Windows containing signal less than one second apart were merged into a single vocalization. Data source location Institution: CNRS, Primate Facility City/Town/Region: Rousset-sur-Arc Country: France Latitude and longitude for collected samples/data: 43.47033535251509, 5.6514732876668905 Value of the data This dataset is relatively unique in terms of the quantity of vocalizations available. This massive dataset can be very useful to two types of scientific communities: experts in primatology who study the vocal productions of non-human primates, and experts in data science and audio signal processing. The machine learning research community has at its disposal a database of several dozen hours of animal vocalizations, which will make it possible to build up a large learning base, very useful for Environmental Sound Recognition tasks, for example. Objective This dataset is a follow-up of two studies on the vocal productions of Guinea baboons (Papio papio) in which we carried out analyses of their vocal productions on the basis of a relatively large vocalization sample containing around 1,300 vocalizations (Boë, Berthommier, Legou, Captier, Kemp, Sawallis, Becker, Rey, & Fagot, 2017; Kemp, Rey, Legou, Boë, Berthommier, Becker, & Fagot, 2017). The aim was to collect a larger database using deep convolutional neural networks in order to 1) automatically detect vocal productions in a large continuous audio recording and 2) perform a categorization of these vocalizations on a more massive sample. A description of the pipeline that enabled these automatic detections and categorizations is given in Bonafos, Pudlo, Freyermuth, Legou, Fagot, Tronçon, & Rey (2023). Data description The data is a set of audio files in wav format. They are at least one second long (the size of the window), up to several minutes, if several windows are consecutively predicted as containing signal.
Moreover, we add the labeled data we used to train the CNN which did the prediction. We also provide two hours of the continuous recordings to have an idea of the continuous recordings and test the code of the paper provided on gitlab. In addition, there is a database in csv format listing all the vocalizations, the day and time of their production, and the prediction probabilities of the model. Experimental design, materials and methods The original recordings represent one month of continuous audio recording. Seven hours of this month were manually labelled. They were segmented and labelled according to whether or not there was a monkey vocalization (i.e., noise or vocalization) and, if there was a vocalization, according to the type of vocalization (6 possible classes: bark, copulation grunt, grunt, scream, yak, wahoo). These manually labelled data were used as a training set for a CNN, which was automatically trained following the pipeline of Bonafos et al. (2023). This model was then used to automatically detect and classify vocalization during the whole month of audio recording. It processes the data in the same way when predicting new data as it does when training. It uses a sliding window of one second with an overlap of 80%. It does not take into account information from previous predictions, but calculates the probability of a vocalization in each one-second window independently. It then iterates through the month. For each window, the model predicts two outputs: the probability that there is a vocalization and the probability of each class of vocalization. For the purpose of generating the wav files, if a window has a probability of a vocalization greater than 0.5, it is considered to contain a vocalization. If it is the first one, a vocalization is started at that moment. If the time windows that follow a vocalization also contain a vocalization, then the signal they contain is added to the first segment for which a vocalization has been detected. As soon as a one-second segment no longer contains a signal corresponding to a vocalization, the wav file is closed. If windows are predicted to contain no vocalizations, but are between two windows that contain vocalizations within 1 second of each other, then all windows are merged.
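A schematic sketch of the windowing-and-merging rule described above (1 s windows with 80% overlap, joining detections less than 1 s apart). The CNN is stood in for by a list of per-window probabilities; this is an illustration of the rule, not the authors' code.

```python
def merge_detections(window_probs, window_s=1.0, hop_s=0.2, gap_s=1.0, thr=0.5):
    """Turn per-window vocalization probabilities into merged [start, end] segments,
    joining detections separated by less than gap_s seconds."""
    segments = []
    for i, p in enumerate(window_probs):
        if p <= thr:
            continue
        start, end = i * hop_s, i * hop_s + window_s
        if segments and start - segments[-1][1] < gap_s:
            segments[-1][1] = end          # extend the currently open vocalization
        else:
            segments.append([start, end])  # open a new vocalization
    return segments

# Toy example: positive windows with a short gap are merged into one segment.
print(merge_detections([0.1, 0.9, 0.8, 0.2, 0.7, 0.1]))  # -> [[0.2, 1.8]]
```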
Provide a detailed description of the following dataset: PapioVoc
BabbleCor
What is BabbleCor? BabbleCor is a crosslinguistic corpus of infant and child vocalizations from 52 children exposed to five different languages: English, Spanish, Tsimane', Yêlí-Dnye, Tseltal Mayan, and bilingual Quechua-Spanish. How was BabbleCor created? BabbleCor consists of very short audio clips (approximately 400ms) of child vocalizations. To generate these clips, each child first completed a daylong audio recording, between 6 and 16 hours in length, where a small, lightweight recorder was worn inside of a clothing pocket designed for the device. From these daylong recordings, child vocalizations were either identified by the proprietary Language ENvironment Analysis algorithm, which assigns utterances to speakers in naturalistic audio recordings (e.g. Female Adult, Child) or the vocalizations were identified by hand. 100 of the utterances identified as child vocalizations were randomly selected and chopped into the smaller clips in BabbleCor. Where do the BabbleCor clip annotations come from? Each short clip (~400ms) was categorized according to a 5-way scheme by citizen science annotators on the iHEARu PLAY platform (https://www.ihearu-play.eu/). Annotators classified clips as 1) canonical - containing a consonant to vowel transition, 2) non-canonical - not containing a consonant to vowel transition, 3) crying, 4) laughing, or 5) junk. For further details on corpus creation, please see Methods described in Cychosz et al. (2021). What are the metadata? There are two metadata components in BabbleCor: Annotation_Tags and Public_Metadata. As the name suggests, Public_Metadata includes corpus metadata that is publicly available to all corpus users: child ID, child age, child's assigned gender, corpus of origin, and clip ID. Annotation_Tags contains the annotation tags for each clip ID, such as canonical babble, laughing, etc. For access to the annotation tags, please sign, scan, & email the data sharing agreement to babblecorpus@gmail.com (see Data_Sharing_Agreement).
Provide a detailed description of the following dataset: BabbleCor
Nikon RAW Low Light
Dataset release for the BMVC 2021 Paper "Few-Shot Domain Adaptation for Low Light RAW Image Enhancement" Abstract: Enhancing practical low light raw images is a difficult task due to severe noise and color distortions from short exposure time and limited illumination. Despite the success of existing Convolutional Neural Network (CNN) based methods, their performance is not adaptable to different camera domains. In addition, such methods also require large datasets with short-exposure and corresponding long-exposure ground truth raw images for each camera domain, which is tedious to compile. To address this issue, we present a novel few-shot domain adaptation method to utilize the existing source camera labeled data with few labeled samples from the target camera to improve the target domain’s enhancement quality in extreme low-light imaging. Our experiments show that only ten or fewer labeled samples from the target camera domain are sufficient to achieve similar or better enhancement performance than training a model with a large labeled target camera dataset. To support research in this direction, we also present a new low-light raw image dataset captured with a Nikon camera, comprising short-exposure and their corresponding long-exposure ground truth images. The code is available at https://val.cds.iisc.ac.in/HDR/BMVC21/index.html.
Provide a detailed description of the following dataset: Nikon RAW Low Light
Canon RAW Low Light
The goal of this project is to present two new datasets that seek to expand the capability of the Learning to See in the Dark Low-light enhancement CNN for the Canon 6D DSLR, and explore how the network performs when modified in various ways, both pruning it and making it deeper. The original paper Learning to See in the Dark was published in CVPR 2018, by Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun.
Provide a detailed description of the following dataset: Canon RAW Low Light
Raw_-Subjective-Scores-120-videos
A review of the raw subjective scores for 120 videos and of the data manipulation performed before and after refining the Mean Opinion Scores (MOS).
Provide a detailed description of the following dataset: Raw_-Subjective-Scores-120-videos
Detecting Security Patches Via Behavioral Data
The code that created this dataset can be found at [https://github.com/nitzanfarhi/SecurityPatchDetection](https://github.com/nitzanfarhi/SecurityPatchDetection), and the dataset can be reproduced by running: ```console python data_collection\create_dataset.py --all -o data_collection\data ``` Notice that this dataset doesn't include the commits' generated data, as it is very big. This can be generated by running only: ```console python data_collection\create_dataset.py --commits -o data_collection\data ``` A repository name is symbolised by `<COMPANY_NAME>_<REPOSITORY_NAME>`. ### License This dataset is publicly available for researchers. If you are using our dataset, you should cite our related research paper, which outlines the details of the dataset and its underlying principles: @article{farhi2023detecting, title={Detecting Security Patches via Behavioral Data in Code Repositories}, author={Farhi, Nitzan and Koenigstein, Noam and Shavitt, Yuval}, journal={arXiv preprint arXiv:2302.02112}, year={2023} } Please also mention [gharchive.org](https://www.gharchive.org) if you use their data.
Provide a detailed description of the following dataset: Detecting Security Patches Via Behavioral Data
SI-HDR
The dataset consists of 181 HDR images. Each image includes: 1) a RAW exposure stack, 2) an HDR image, 3) simulated camera images at two different exposures 4) Results of 6 single-image HDR reconstruction methods: Endo et al. 2017, Eilertsen et al. 2017, Marnerides et al. 2018, Lee et al. 2018, Liu et al. 2020, and Santos et al. 2020 # Project web page More details can be found at: <https://www.cl.cam.ac.uk/research/rainbow/projects/sihdr_benchmark/> # Overview This dataset contains 181 RAW exposure stacks selected to cover a wide range of image content and lighting conditions. Each scene is composed of 5 RAW exposures and merged into an HDR image using the estimator that accounts photon noise [3] (code at [HDRutils](https://github.com/gfxdisp/HDRutils)). A simple color correction was applied using a reference white point and all merged HDR images were resized to 1920×1280 pixels. The primary purpose of the dataset was to compare various single image HDR (SI-HDR) methods [1]. Thus, we selected a wide variety of content covering nature, portraits, cities, indoor and outdoor, daylight and night scenes. After merging and resizing, we simulated captures by applying a custom CRF and added realistic camera noise based on estimated noise parameters of *Canon 5D Mark III*. The simulated captures were inputs to six selected SI-HDR methods. You can view the reconstructions of various methods for select scenes on our [interactive viewer](https://www.cl.cam.ac.uk/research/rainbow/projects/sihdr_benchmark/). For the remaining scenes, please download the appropriate zip files. We conducted a rigorous pairwise comparison experiment on these images to find that widely-used metrics did not correlate well with subjective data. We then proposed an improved evaluation protocol for SI-HDR [1]. If you find this dataset useful, please cite [1]. # References [1] Param Hanji, Rafał K. Mantiuk, Gabriel Eilertsen, Saghi Hajisharif, and Jonas Unger. 2022. “Comparison of single image hdr reconstruction methods — the caveats of quality assessment.” In *Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings (SIGGRAPH ’22 Conference Proceedings)*. [Online]. Available: <https://www.cl.cam.ac.uk/research/rainbow/projects/sihdr_benchmark/> [2] Gabriel Eilertsen, Saghi Hajisharif, Param Hanji, Apostolia Tsirikoglou, Rafał K. Mantiuk, and Jonas Unger. 2021. “How to cheat with metrics in single-image HDR reconstruction.” In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops*. 3998–4007. [3] Param Hanji, Fangcheng Zhong, and Rafał K. Mantiuk. 2020. “Noise-Aware Merging of High Dynamic Range Image Stacks without Camera Calibration.” In *Advances in Image Manipulation (ECCV workshop)*. Springer, 376–391. [Online]. Available: <https://www.cl.cam.ac.uk/research/rainbow/projects/noise-aware-merging/>
Provide a detailed description of the following dataset: SI-HDR
DemogPairs
Although deep face recognition has achieved impressive results in recent years, there is increasing controversy regarding racial and gender bias of the models, questioning their trustworthiness and deployment into sensitive scenarios. DemogPairs is a validation set with 10.8K facial images and 58.3M identity verification pairs, distributed in demographically-balanced folds of Asian, Black and White females and males. We also propose a benchmark of experiments using DemogPairs over state-of-the-art deep face recognition models in order to analyze their cross-demographic behavior and potential demographic biases.
Provide a detailed description of the following dataset: DemogPairs
UIUC Scooping Dataset
**Overview:** This dataset encompasses a compilation of 6,700 executed scoops (excavations), mapped across a vast spectrum of materials, terrain topography, and compositions. **Motivation:** The primary motivation behind collecting this dataset is to advance research in granular material manipulation and meta-learning. We also believe that this dataset would help with research in domain specific problems including autonomous terrain excavation. The diverse nature of the dataset, spanning a broad spectrum of materials and terrains, provides a rich foundation for various studies. **Dataset Characteristics:** - **Total Samples:** 6,700 - **Tasks:** 67 (with each task being defined by its unique combination of materials and composition) - **Materials:** 12 distinct materials - **Terrain Compositions:** Materials are combined in 4 different ways to create diverse terrains. - **Volume Metrics:** - Maximum scoop volume: 260.8 cm³ - Average scoop volume: 31.3 cm³ **Content Summary:** For each scoop action in the dataset, the following data points are recorded: - **Action Information:** This includes the scoop location, scoop yaw, scoop depth, and scoop stiffness. - **Terrain State (Pre-Scooping):** Captured as RGBD data, using an overhead RealSense L515. - **Volume Scooped:** Represents the volume of the material scooped during a particular action. - **End-Effector Sensor Data:** F/T sensor data recorded while executing the scooping action. The scoops are executed using the UR5e. **Potential Use Cases:** 1. **Granular Material Manipulation:** Given the varied materials and terrains, this dataset is a valuable asset for those looking into the dynamics and intricacies of manipulating granular substances. 2. **Meta-learning:** With 67 tasks based on materials and compositions, researchers can delve into meta-learning applications, especially regarding how different materials interact and behave in a few-shot setting. 3. **Self-supervised Learning:** The extensive number of samples can be harnessed for training models in a self-supervised manner, extracting patterns without explicit labeled supervision. 4. **Visual Estimation of Terrain Properties:** The RGBD data offers an opportunity for training models that can visually estimate and interpret terrain characteristics based on depth and color data with the F/T sensor data acting as ground truth. 5. **Autonomous Excavation and Construction Robotics.** The dataset provides results of scooping (excavation) on terrains with different materials including sand, rigid rocks, rock fragments, etc. and can be useful for developing novel approaches for autonomous excavation and construction robotics in general. 6. **Deep Reinforcement Learning and Decision Making.** The dataset provides a set of observations, actions, and the corresponding reward values (the scoop volume) and can be naturally posed as a sequential decision making problem.
Provide a detailed description of the following dataset: UIUC Scooping Dataset
NEnv
Dataset of 30 4K HDR environment maps and trained models.
Provide a detailed description of the following dataset: NEnv
Neuochemical and USV measurements of the effects of morphine withdrawal in rats
Data was acquired to investigate the effects of morphine withdrawal in rats, both in brain neurochemistry and in ultrasonic vocalisation. To this end, the following experiment was performed. 38 adult male Sprague–Dawley rats were used in the experiment, divided into four groups: Saline-Control (10), Morphine-Control (9), Saline-Withdrawal (10) and Morphine-Withdrawal (9). Morphine at a dose of 10.0 mg/kg (s.c.) was administered repeatedly to the morphine groups over a 14-day period; saline solution (1.0 ml/kg) was administered repeatedly to the other groups in the same manner. Administration was performed in testing cages in groups of four animals. The testing room was situated in a remote part of the laboratory and was significantly different from the home cage room, both in the lighting conditions and in the arrangement of spatial cues, which were constant throughout the whole experiment. All animals were kept for 30 min in the testing box after each saline or morphine injection. On day 14, the Morphine-Control group and the Saline-Control group were exposed to the context of drug administration, and the ultrasonic vocalizations were recorded for 20 min. The withdrawal groups, 30 min after the last injection, were left undisturbed (in their home cages) and were subjected to a 2-week withdrawal period. On day 28, the rats were re-exposed to the context of morphine/saline administration and ultrasonic vocalization was recorded. Each rat was decapitated immediately after its USV recording session, and its brain tissues were frozen. The USVs were recorded individually for each rat, in a dark room with dim red light. USV recordings were processed using the proprietary RatRec Pro software; individual calls were extracted and counted. All identified episodes were of the "50 kHz appetitive" class; no "aversive 22 kHz" alarm calls were recorded. For each rat, tissue samples of 6 brain structures were extracted: medial prefrontal cortex (mPFC), nucleus accumbens (NAcc), striatum (Cpu), hippocampus (Hipp), amygdala and ventral tegmental area (VTA). In each sample, levels of 15 compounds were measured: norepinephrine (NA), 3-methoxy-4-hydroxyphenylglycol (MHPG), 3,4-dihydroxyphenylacetic acid (DOPAC), dopamine (DA), 5-hydroxyindoleacetic acid (5-HIAA), homovanillic acid (HVA), serotonin (5-HT), 3-methoxytyramine (3-MT), taurine, glutamine (Gln), glycine (Gly), aspartic acid (Asp), glutamic acid (Glu), alanine (Ala) and gamma-aminobutyric acid (GABA).
Provide a detailed description of the following dataset: Neuochemical and USV measurements of the effects of morphine withdrawal in rats
MeViS
MeViS is a large-scale dataset for motion expressions guided video segmentation, which focuses on segmenting objects in video content based on a sentence describing the motion of the objects. The dataset contains numerous motion expressions to indicate target objects in complex environments.
Provide a detailed description of the following dataset: MeViS
FunnyBirds
**FunnyBirds** is a synthetic vision dataset that is developed to automatically and quantitatively analyze XAI methods. It consists of 50,500 images (50k train, 500 test) of 50 synthetic bird species.
Provide a detailed description of the following dataset: FunnyBirds
PIPPA
PIPPA (Personal Interaction Pairs between People and AI) is a partially-synthetic dataset. The dataset comprises over 1 million utterances that are distributed across 26,000 conversation sessions and provides a rich resource for researchers and AI developers to explore and refine conversational AI systems in the context of role-play scenarios.
Provide a detailed description of the following dataset: PIPPA
LOL-v2
LOL-v2-real contains 689 low-/normal-light image pairs for training and 100 pairs for testing.
Provide a detailed description of the following dataset: LOL-v2
DAS-2
The DAS-2 traces were kindly provided by the Advanced School for Computing and Imaging (ASCI), the owner of the DAS-2 system. To use these traces, you must include an acknowledgement to the source of the data in any published material that refers to the data. Please also consider referring to the Grid Workloads Archive in the acknowledgements.
Provide a detailed description of the following dataset: DAS-2
Notebook Inaccessibility
This dataset artifact contains the intermediate datasets from pipeline executions necessary to reproduce the results of the paper. We share this artifact in hopes of providing a starting point for other researchers to extend the analysis on notebooks, discover more about their accessibility, and offer solutions to make data science more accessible. The scripts needed to generate these datasets and analyse them are shared in the [Github Repository](https://github.com/make4all/notebooka11y) for this work. The dataset contains large files of approximately 60 GB so please exercise caution when extracting the data from compressed files. The dataset contains files which could take a significant amount of run time of the scripts to generate/reproduce. ### Dataset Contents We briefly summarize the included files in our dataset. Please refer to the [documentation](https://github.com/make4all/notebooka11y/blob/main/pipeline/README.md) for specific information about the structure of the data in these files, the scripts to generate them, and runtimes for various parts of our data processing pipeline. 1. `epoch_9_loss_0.04706_testAcc_0.96867_X_resnext101_docSeg.pth`: We share this model file, originally provided by [Jobin et al.](https://github.com/jobinkv/DocFigure), to enable the classification of figures found in our dataset. Please place this into the `model/` [directory](https://github.com/make4all/notebooka11y/tree/main/model). 2. `model-results.csv`: This file contains results from the classification performed on the figures found in the notebooks in our dataset. > Performing this classification may take upto a day. 3. `a11y-scan-dataset.zip`: This archive contains two files and results in datasets of approximately 60GB when extracted. Please ensure that you have sufficient disk space to uncompress this zip archive. The archive contains: - `a11y/a11y-detailed-result.csv`: This dataset contains the accessibility scan results from the scans run on the 100k notebooks across themes. > The detailed result file can be really large (> 60 GB) and can be time-consuming to construct. - `a11y/a11y-aggregate-scan.csv`: This file is an aggregate of the detailed result that contains the number of each type of error found in each notebook. > This file is also shared outside the compressed directory. 4. `errors-different-counts-a11y-analyze-errors-summary.csv`: This file contains the counts of errors that occur in notebooks across different themes. 5. `nb_processed_cell_html.csv`: This file contains metadata corresponding to each cell extracted from the html exports of our notebooks. 6. `nb_first_interactive_cell.csv`: This file contains the necessary metadata to compute the first interactive element, as defined in our paper, in each notebook. 7. `nb_processed.csv`: This file contains the necessary data after processing the notebooks extracting the number of images, imports, languages, and cell level information. 8. `processed_function_calls.csv`: This file contains the information about the notebooks, the various imports and function calls used within the notebooks.
Provide a detailed description of the following dataset: Notebook Inaccessibility
FIREBALL
Dungeons & Dragons (D&D) is a tabletop roleplaying game with complex natural language interactions between players and hidden state information. Recent work has shown that large language models (LLMs) that have access to state information can generate higher quality game turns than LLMs that use dialog history alone. However, previous work used game state information that was heuristically created and was not a true gold standard game state. We present FIREBALL, a large dataset containing nearly 25,000 unique sessions from real D&D gameplay on Discord with true game state info. We recorded game play sessions of players who used the Avrae bot, which was developed to aid people in playing D&D online, capturing language, game commands and underlying game state information. We demonstrate that FIREBALL can improve natural language generation (NLG) by using Avrae state information, improving both automated metrics and human judgments of quality. Additionally, we show that LLMs can generate executable Avrae commands, particularly after finetuning.
Provide a detailed description of the following dataset: FIREBALL
Myket Android Application Install
This dataset contains information on application install interactions of users in the [Myket](https://myket.ir/) android application market. The dataset was created for the purpose of evaluating interaction prediction models, requiring user and item identifiers along with timestamps of the interactions. Hence, the dataset can be used for interaction prediction and building a recommendation system. Furthermore, the data forms a dynamic network of interactions, and we can also perform network representation learning on the nodes in the network, which are users and applications. ## Data Creation The dataset was initially generated by the Myket data team, and later cleaned and subsampled by Erfan Loghmani a master student at Sharif University of Technology at the time. The data team focused on a two-week period and randomly sampled 1/3 of the users with interactions during that period. They then selected install and update interactions for three months before and after the two-week period, resulting in interactions spanning about 6 months and two weeks. We further subsampled and cleaned the data to focus on application download interactions. We identified the top 8000 most installed applications and selected interactions related to them. We retained users with more than 32 interactions, resulting in 280,391 users. From this group, we randomly selected 10,000 users, and the data was filtered to include only interactions for these users. The detailed procedure can be found in [here](create_data.ipynb). ## Data Structure The dataset has two main files. - `myket.csv`: This file contains the interaction information and follows the same format as the datasets used in the "[JODIE: Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks](https://github.com/claws-lab/jodie)" (ACM SIGKDD 2019) project. However, this data does not contain state labels and interaction features, resulting in associated columns being all zero. - `app_info_sample.csv`: This file comprises features associated with applications present in the sample. For each individual application, information such as the approximate number of installs, average rating, count of ratings, and category are included. These features provide insights into the applications present in the dataset. ## Dataset Details - Total Instances: 694,121 install interaction instances - Instances Format: Triplets of user_id, app_name, timestamp - 10,000 users and 7,988 android applications - Item features for 7,606 applications For a detailed summary of the data's statistics, including information on users, applications, and interactions, please refer to the Python notebook available at [summary-stats.ipynb](summary-stats.ipynb). The notebook provides an overview of the dataset's characteristics and can be helpful for understanding the data's structure before using it for research or analysis. 
### Top 20 Most Installed Applications | Package Name | Count of Interactions | | ---------------------------------- | --------------------- | | com.instagram.android | 15292 | | ir.resaneh1.iptv | 12143 | | com.tencent.ig | 7919 | | com.ForgeGames.SpecialForcesGroup2 | 7797 | | ir.nomogame.ClutchGame | 6193 | | com.dts.freefireth | 6041 | | com.whatsapp | 5876 | | com.supercell.clashofclans | 5817 | | com.mojang.minecraftpe | 5649 | | com.lenovo.anyshare.gps | 5076 | | ir.medu.shad | 4673 | | com.firsttouchgames.dls3 | 4641 | | com.activision.callofduty.shooter | 4357 | | com.tencent.iglite | 4126 | | com.aparat | 3598 | | com.kiloo.subwaysurf | 3135 | | com.supercell.clashroyale | 2793 | | co.palang.QuizOfKings | 2589 | | com.nazdika.app | 2436 | | com.digikala | 2413 | ## Comparison with SNAP Datasets The Myket dataset introduced in this repository exhibits distinct characteristics compared to the real-world datasets used by the project. The table below provides a comparative overview of the key dataset characteristics: | Dataset | #Users | #Items | #Interactions | Average Interactions per User | Average Unique Items per User | | --------------- | ---------------- | --------------- | ------------- | ----------------------------- | ----------------------------- | | **Myket** | **10,000** | **7,988** | 694,121 | 69.4 | 54.6 | | LastFM | 980 | 1,000 | 1,293,103 | 1,319.5 | 158.2 | | Reddit | **10,000** | 984 | 672,447 | 67.2 | 7.9 | | Wikipedia | 8,227 | 1,000 | 157,474 | 19.1 | 2.2 | | MOOC | 7,047 | 97 | 411,749 | 58.4 | 25.3 | The Myket dataset stands out by having an ample number of both users and items, highlighting its relevance for real-world, large-scale applications. Unlike LastFM, Reddit, and Wikipedia datasets, where users exhibit repetitive item interactions, the Myket dataset contains a comparatively lower amount of repetitive interactions. This unique characteristic reflects the diverse nature of user behaviors in the Android application market environment. ## Citation If you use this dataset in your research, please cite the following [preprint](https://arxiv.org/abs/2308.06862): ``` @misc{loghmani2023effect, title={Effect of Choosing Loss Function when Using T-batching for Representation Learning on Dynamic Networks}, author={Erfan Loghmani and MohammadAmin Fazli}, year={2023}, eprint={2308.06862}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
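A minimal sketch of loading the two released files with pandas; the file names follow the description above, but the exact column names in the JODIE-style layout are not assumed (positional access is used instead).

```python
import pandas as pd

# File names as described above; column names are read from the files themselves.
interactions = pd.read_csv("myket.csv")
apps = pd.read_csv("app_info_sample.csv")

print(interactions.shape, apps.shape)

# Per-user interaction counts and the most installed applications,
# using the first two columns (user identifier, application identifier).
user_col, item_col = interactions.columns[0], interactions.columns[1]
per_user = interactions.groupby(user_col).size()
top_apps = interactions.groupby(item_col).size().nlargest(20)

print(per_user.describe())
print(top_apps.head())
```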
Provide a detailed description of the following dataset: Myket Android Application Install
Open-Platypus
**Open-Platypus** is the curated instruction dataset used to fine-tune Platypus, a family of fine-tuned and merged Large Language Models (LLMs) that, at release, achieved the strongest performance and stood in first place on HuggingFace's Open LLM Leaderboard.
Provide a detailed description of the following dataset: Open-Platypus
SHADR
# SDoH Human Annotated Demographic Robustness (SHADR) Dataset ## Overview Social determinants of health (SDoH) play a pivotal role in determining patient outcomes. However, their documentation in electronic health records (EHR) remains incomplete. This dataset was created from a study examining the capability of large language models to extract SDoH from the free-text sections of EHRs. Furthermore, the study delved into the potential of synthetic clinical text to bolster the extraction of these scarcely documented, yet crucial, clinical data. ## Dataset Structure & Modification To understand potential biases in high-performing models and in those pre-trained on general text, GPT-4 was utilized to infuse demographic descriptors into our synthetic data. For instance: - **Original Sentence**: "Widower admits fears surrounding potential judgment…" - **Modified Sentence**: "Hispanic widower admits fears surrounding potential judgment..." Such demographic-infused sentences underwent manual validation. Out of these: - 419 had mentions of SDoH - 253 had mentions of adverse SDoH - The remainder were tagged as NO_SDoH ## Instructions for Model Evaluation 1. Initially, run your model inference on the original sentences. 2. Subsequently, apply the same model to infer on the demographic-modified sentences. 3. Perform comparisons for robustness (a small sketch of this loop follows). For a detailed understanding of the "adverse" labeling, refer to https://arxiv.org/pdf/2308.06354.pdf. Here, the 'adverse' column demarcates whether the label corresponds to an "adverse" or "non-adverse" SDoH. ## Current Performance Metrics - **Best Model Performance**: - **Any SDoH**: 88% Macro-F1 - **Adverse SDoH**: 84% Macro-F1 - **Robustness Rate**: - **Any SDoH**: 9.9% - **Adverse SDoH**: 14.3% --- How to Cite: ``` @misc{guevara2023large, title={Large Language Models to Identify Social Determinants of Health in Electronic Health Records}, author={Marco Guevara and Shan Chen and Spencer Thomas and Tafadzwa L. Chaunzwa and Idalid Franco and Benjamin Kann and Shalini Moningi and Jack Qian and Madeleine Goldstein and Susan Harper and Hugo JWL Aerts and Guergana K. Savova and Raymond H. Mak and Danielle S. Bitterman}, year={2023}, eprint={2308.06354}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
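A minimal sketch of the evaluation loop from the instructions above. `classify` is a placeholder for your own model, and treating a changed predicted label as a robustness failure is an assumption about how the robustness rate is computed, not the authors' exact protocol.

```python
def robustness_report(pairs, classify):
    """pairs: iterable of (original_sentence, modified_sentence).
    classify: callable mapping a sentence to an SDoH label (placeholder for your model).
    Returns the fraction of pairs whose predicted label changes after modification."""
    changed = 0
    total = 0
    for original, modified in pairs:
        total += 1
        if classify(original) != classify(modified):
            changed += 1
    return changed / total if total else 0.0

# Toy usage with a trivial keyword rule standing in for a real classifier.
toy_pairs = [
    ("Widower admits fears surrounding potential judgment.",
     "Hispanic widower admits fears surrounding potential judgment."),
]
toy_classifier = lambda s: "SDoH" if "fear" in s.lower() else "NO_SDoH"
print(robustness_report(toy_pairs, toy_classifier))
```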
Provide a detailed description of the following dataset: SHADR
OVEN
In this project, we formally present the task of Open-domain Visual Entity recognitioN (OVEN), where a model needs to link an image to a Wikipedia entity with respect to a text query. We construct OVEN-Wiki by re-purposing 14 existing datasets with all labels grounded onto one single label space: Wikipedia entities. OVEN challenges models to select among six million possible Wikipedia entities, making it a general visual recognition benchmark with the largest number of labels.
Provide a detailed description of the following dataset: OVEN
InfoSeek
In this project, we introduce InfoSeek, a visual question answering dataset tailored for information-seeking questions that cannot be answered with only common sense knowledge. Using InfoSeek, we analyze various pre-trained visual question answering models and gain insights into their characteristics. Our findings reveal that state-of-the-art pre-trained multi-modal models (e.g., PaLI-X, BLIP2, etc.) face challenges in answering visual information-seeking questions, but fine-tuning on the InfoSeek dataset elicits models to use fine-grained knowledge that was learned during their pre-training.
Provide a detailed description of the following dataset: InfoSeek
ASOS Data
The Automated Surface Observing Systems (ASOS) program is a joint effort of the National Weather Service (NWS), the Federal Aviation Administration (FAA), and the Department of Defense (DOD). These automated systems collect observations on a continual basis, 24 hours a day. Automated Weather Observing System (AWOS) units are operated and controlled by the Federal Aviation Administration. These systems are among the oldest automated weather stations and predate ASOS. They generally report at 20-minute intervals and, unlike ASOS, do not report special observations for rapidly changing weather conditions. ASOS observations are operationally generated each hour, and special observations are provided whenever the weather changes. These special reports are generated when conditions exceed preselected weather element thresholds, e.g., the visibility decreases to less than 3 miles.
Provide a detailed description of the following dataset: ASOS Data
Dataset of a Study of Computational reproducibility of Jupyter notebooks from biomedical publications
This dataset was generated from a study of the computational reproducibility of Jupyter notebooks from biomedical publications. We analyzed the reproducibility of Jupyter notebooks from GitHub repositories associated with publications indexed in the biomedical literature repository PubMed Central. The dataset includes metadata on the journals, the publications, the GitHub repositories mentioned in the publications, and the notebooks present in those repositories.
Provide a detailed description of the following dataset: Dataset of a Study of Computational reproducibility of Jupyter notebooks from biomedical publications
Dataset of a Study of Computational reproducibility of Jupyter notebooks from biomedical publications version 1
This repository contains the dataset for the study of the computational reproducibility of Jupyter notebooks from biomedical publications. We analyzed the reproducibility of Jupyter notebooks from GitHub repositories associated with publications indexed in the biomedical literature repository PubMed Central. The dataset includes the metadata information of the journals, publications, the Github repositories mentioned in the publications and the notebooks present in the Github repositories.
Provide a detailed description of the following dataset: Dataset of a Study of Computational reproducibility of Jupyter notebooks from biomedical publications version 1