| dataset_name | description | prompt |
|---|---|---|
SUDOER | The dataset aims to provide system prompts and user prompts for assistants. You should make random pairs and compute human preference for both system-prompt obedience and user-prompt relevance through A/B testing. | Provide a detailed description of the following dataset: SUDOER |
neuronIO | ## Single cortical neurons as deep artificial neural networks
This dataset contains training and testing subsets of the input/output relationship of a single cortical layer 5 pyramidal cell (L5PC) neuron at 1ms single spike temporal resolution.
The data is obtained via a simulation that contains all of the currently (2021) known and well modeled "messy biological details" that relate to the operation of single neurons in the brain.
The goal with this dataset is to allow machine learning modeling experts easier access to high quality biological data and eventually find **as-small-as-possible** models that **as-accurately-as-possible** capture the simulation data of a single cortical neuron at 1ms temporal resolution. Here, "a small model" can mean "fast", "parameter efficient", "conceptually simple", "elegant", etc.
## All related resources
Github repo: [github.com/SelfishGene/neuron_as_deep_net](https://github.com/SelfishGene/neuron_as_deep_net)
Neuron version of paper: [cell.com/neuron/fulltext/S0896-6273(21)00501-8](https://www.cell.com/neuron/fulltext/S0896-6273(21)00501-8)
Open Access (slightly older) bioRxiv version of Paper: [biorxiv.org/content/10.1101/613141v2](https://www.biorxiv.org/content/10.1101/613141v2)
Dataset and pretrained networks: [kaggle.com/selfishgene/single-neurons-as-deep-nets-nmda-test-data](https://www.kaggle.com/selfishgene/single-neurons-as-deep-nets-nmda-test-data)
Dataset for training new models: [kaggle.com/selfishgene/single-neurons-as-deep-nets-nmda-train-data](https://www.kaggle.com/selfishgene/single-neurons-as-deep-nets-nmda-train-data)
Notebook with main result: [kaggle.com/selfishgene/single-neuron-as-deep-net-replicating-key-result](https://www.kaggle.com/selfishgene/single-neuron-as-deep-net-replicating-key-result)
Notebook exploring the dataset: [kaggle.com/selfishgene/exploring-a-single-cortical-neuron](https://www.kaggle.com/selfishgene/exploring-a-single-cortical-neuron)
Twitter thread for short visual summary #1: [twitter.com/DavidBeniaguev/status/1131890349578829825](https://twitter.com/DavidBeniaguev/status/1131890349578829825)
Twitter thread for short visual summary #2: [twitter.com/DavidBeniaguev/status/1426172692479287299](https://twitter.com/DavidBeniaguev/status/1426172692479287299)
Figure360, author presentation of Figure 2 from the paper: [youtube.com/watch?v=n2xaUjdX03g](https://www.youtube.com/watch?v=n2xaUjdX03g)
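As an illustrative (non-official) starting point, the sketch below shows the kind of small causal temporal-convolution model this benchmark invites, in PyTorch. The input layout (a binary spike raster of shape `[synapses, time_ms]`) and the synapse count are assumptions made for the example, not the dataset's actual loader; see the Kaggle notebooks above for the real pipeline.
```python
# Illustrative sketch only: a tiny causal temporal-convolution model that maps
# presynaptic spike rasters to a per-millisecond somatic spike probability.
# The synapse count (1278) and tensor layout are assumptions, not the official
# data format.
import torch
import torch.nn as nn

class CausalTCN(nn.Module):
    def __init__(self, n_synapses=1278, hidden=64, kernel=41):
        super().__init__()
        self.pad = kernel - 1  # left-pad so the model only sees past inputs
        self.conv1 = nn.Conv1d(n_synapses, hidden, kernel)
        self.conv2 = nn.Conv1d(hidden, 1, 1)

    def forward(self, x):                    # x: [batch, n_synapses, time_ms]
        x = nn.functional.pad(x, (self.pad, 0))
        h = torch.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(h))  # spike probability per 1ms bin

model = CausalTCN()
spikes = (torch.rand(2, 1278, 1000) < 0.002).float()  # fake input raster
prob = model(spikes)                                  # [2, 1, 1000]
```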
If you use this dataset or associated models or code, please cite the following two works:
1. David Beniaguev, Idan Segev and Michael London. "Single cortical neurons as deep artificial neural networks." Neuron. 2021; 109: 2727-2739.e3 doi: https://doi.org/10.1016/j.neuron.2021.07.002
2. Hay, Etay, Sean Hill, Felix Schürmann, Henry Markram, and Idan Segev. 2011. “Models of Neocortical Layer 5b Pyramidal Cells Capturing a Wide Range of Dendritic and Perisomatic Active Properties.” Edited by Lyle J. Graham. PLoS Computational Biology 7 (7): e1002107. doi: https://doi.org/10.1371/journal.pcbi.1002107. | Provide a detailed description of the following dataset: neuronIO |
UruDendro | 64 RGB wood cross-section images with their ring and pith annotations | Provide a detailed description of the following dataset: UruDendro |
100STLYE-Labelled | Over 4 million frames of motion capture data for 100 different styles of locomotion. Can be used for animation, human motion and sequence modelling research.
This version of the dataset includes the features extracted from the raw motion capture data. This includes local phases, foot contacts, joint positions, joint rotations, joint velocities, character trajectory, etc. | Provide a detailed description of the following dataset: 100STLYE-Labelled |
satp-zsm-stage2 | This is the replication data for the paper: "Crossing the Linguistic Causeway: Ethnonational Differences on Soundscape Attributes in Bahasa Melayu". | Provide a detailed description of the following dataset: satp-zsm-stage2 |
Soundscape Attributes Translation Project (SATP) Dataset | The data and audio included here were collected for the Soundscape Attributes Translation Project (SATP). First introduced in Aletta et al. (2020), the SATP is an attempt to provide validated translations of soundscape attributes in languages other than English. The recordings were used for headphone-based listening experiments.
The data are provided to accompany publications resulting from this project and to provide a unique dataset of 1000s of perceptual responses to a standardised set of urban soundscape recordings. This dataset is the result of efforts from hundreds of researchers, students, assistants, PIs, and participants from institutions around the world. We have made an attempt to list every contributor to this Zenodo repo; if you feel you should be included, please get in touch. | Provide a detailed description of the following dataset: Soundscape Attributes Translation Project (SATP) Dataset |
MiniWob++ | MiniWob++ is a suite of web-browser-based tasks introduced in Liu et al. (2018) (an extension of the earlier MiniWob task suite; Shi et al., 2017). Tasks range from simple button clicking to complex form filling, for example booking a flight given particular instructions.
Programmatic rewards are available for each task, permitting standard reinforcement learning techniques.
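As a quick, non-authoritative illustration of that RL setup, the sketch below assumes the Farama `miniwob` package, which registers MiniWoB++ tasks as Gymnasium environments and drives a local Chrome instance via Selenium; the environment name is one of the registered tasks.
```python
# Minimal random-agent loop over a MiniWoB++ task (sketch; assumes
# `pip install miniwob` plus a local Chrome/ChromeDriver installation).
import gymnasium
import miniwob  # noqa: F401  (import registers the miniwob/* environments)

env = gymnasium.make("miniwob/click-test-2-v1")
try:
    obs, info = env.reset(seed=42)
    for _ in range(100):
        action = env.action_space.sample()  # replace with a trained policy
        obs, reward, terminated, truncated, info = env.step(action)
        if terminated or truncated:
            obs, info = env.reset()
finally:
    env.close()
```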
(Description source: Section 2.1 of https://arxiv.org/pdf/2202.08137.pdf) | Provide a detailed description of the following dataset: MiniWob++ |
SAGC-A68 | The analysis of building models for usable area, building safety, and energy efficiency requires accurate classification data of spaces and space elements. To reduce input model preparation effort and errors, automated classification of spaces and space elements is desirable. Although existing space function classifiers use space adjacency or connectivity graphs as input, the application of Graph Deep Learning (GDL) to space layout element classification has not been extensively researched due to the lack of suitable datasets. To bridge this gap, we introduce a dataset named SAGC-A68, which comprises access graphs automatically generated from 68 digital 3D models of space layouts of apartment buildings designed or built between 1952 and 2019 in 13 countries. Each access graph contains nodes representing spaces and space elements and edges representing the connections between them. Nodes are uniquely identified and characterized by 16 features: “Position X”, “Position Y”, “Position Z”, “Width”, “Height”, “Depth”, “Area”, “Volume”, “Is_internal”, “Door_opening_quantity”, “Window_quantity”, “Max_door_width”, “Encloses_ws”, “Is_contained_in_ws”, “bounding_box”, and “Label” (28 labels are identified). Edges are identified by a unique ID and characterized by three features: “Z_angle”, “Delta_z”, and “Length”. In total, the dataset comprises 4,871 nodes and 4,566 edges, including disconnected nodes representing shafts. It is suitable for developing GDL models for space element and space function classification in Building Information Modeling (BIM) authoring systems. | Provide a detailed description of the following dataset: SAGC-A68 |
LAGENDA | The LAGENDA dataset is a large-scale dataset with age and gender annotations for face and body bounding boxes. The dataset consists of 67,159 images from the Open Images Dataset and comprises 84,192 pairs (FaceCrop, BodyCrop). This dataset offers a high level of diversity, encompassing various scenes and domains. It contains minimal celebrity data, thus reflecting real-world, in-the-wild scenarios. The dataset spans a wide age range, from 0 to 95 years old. | Provide a detailed description of the following dataset: LAGENDA |
ConvSumX | **ConvSumX** is a cross-lingual conversation summarization benchmark built through a new annotation schema that explicitly considers source input context. ConvSumX consists of 2 sub-tasks under different real-world scenarios, with each covering 3 language directions. | Provide a detailed description of the following dataset: ConvSumX |
BeaverTails | **BeaverTails** is a dataset aimed at fostering research on safety alignment in large language models (LLMs). This dataset uniquely separates annotations of helpfulness and harmlessness for question-answering pairs, thus offering distinct perspectives on these crucial attributes. In total, the authors have compiled safety meta-labels for 30,207 question-answer (QA) pairs and gathered 30,144 pairs of expert comparison data for both the helpfulness and harmlessness metrics. | Provide a detailed description of the following dataset: BeaverTails |
IU X-Ray | IU X-ray (Demner-Fushman et al., 2016) is a set of chest X-ray images paired with their corresponding diagnostic reports. The dataset contains 7,470 pairs of images and reports. | Provide a detailed description of the following dataset: IU X-Ray |
Peir Gross | Peir Gross (Jing et al., 2018) was collected with descriptions in the Gross sub-collection from the PEIR digital library, resulting in 7,442 image-caption pairs from 21 different sub-categories. Each caption contains only one sentence. | Provide a detailed description of the following dataset: Peir Gross |
R2C7K | We consider the problem of referring camouflaged object detection (Ref-COD), a new task that aims to segment specified camouflaged objects based on a small set of referring images with salient target objects. | Provide a detailed description of the following dataset: R2C7K |
SemanticSpray Dataset | [Homepage](https://semantic-spray-dataset.github.io/) | [GitHub](https://github.com/aldipiroli/semantic_spray_dataset)
LiDARs are one of the main sensors used for autonomous driving applications, providing accurate depth estimation regardless of lighting conditions. However, they are severely affected by adverse weather conditions such as rain, snow, and fog.
This dataset provides semantic labels for a subset of the [Road Spray dataset](https://www.fzd-datasets.de/spray/), which contains scenes of vehicles traveling at different speeds on wet surfaces, creating a trailing spray effect. We provide semantic labels for over 200 dynamic scenes, labeling each point in the LiDAR point clouds as background (road, vegetation, buildings, ...), foreground (moving vehicles), and noise (spray, LiDAR artifacts).
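A minimal sketch of how such per-point labels could be consumed; the file layout and the integer label encoding below are assumptions for illustration, not the official format (see the GitHub repo for the actual loader).
```python
# Hypothetical illustration: a LiDAR scan as an (N, 4) array [x, y, z,
# intensity] plus per-point labels 0=background, 1=foreground (moving
# vehicle), 2=noise (spray, LiDAR artifacts). File names are assumptions.
import numpy as np

points = np.load("scene_0001/points/000000.npy")   # (N, 4)
labels = np.load("scene_0001/labels/000000.npy")   # (N,)
for cls, name in enumerate(["background", "foreground", "noise"]):
    print(name, int((labels == cls).sum()))
spray_free = points[labels != 2]  # drop spray/noise points before detection
```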
Citation
```
@ARTICLE{10143263,
author={Piroli, Aldi and Dallabetta, Vinzenz and Kopp, Johannes and Walessa, Marc and Meissner, Daniel and Dietmayer, Klaus},
journal={IEEE Robotics and Automation Letters},
title={Energy-Based Detection of Adverse Weather Effects in LiDAR Data},
year={2023},
volume={8},
number={7},
pages={4322-4329},
doi={10.1109/LRA.2023.3282382}}
``` | Provide a detailed description of the following dataset: SemanticSpray Dataset |
10X PBMC (92k) Zheng et. al. 2017 | The data is provided by 10x Genomics under "Single Cell 3' Paper: Zheng et al. 2017 (v1 Chemistry)" and consists of data from the following 9 cell types: CD4+/CD45RA+/CD25- naïve T cells, CD4+ helper T cells, CD4+/CD25+ regulatory T cells, CD4+/CD45RO+ memory T cells, CD8+/CD45RA+ naïve cytotoxic T cells, CD8+ cytotoxic T cells, CD56+ natural killer cells, CD34+ cells, and CD19+ B cells. The data contains 32,738 genes and 92,043 cells. | Provide a detailed description of the following dataset: 10X PBMC (92k) Zheng et. al. 2017 |
CODE-15% | A dataset of 12-lead ECGs with annotations. The dataset contains 345,779 exams from 233,770 patients. It was obtained through stratified sampling from the CODE dataset (15% of the patients). The data was collected by the Telehealth Network of Minas Gerais in the period between 2010 and 2016. | Provide a detailed description of the following dataset: CODE-15% |
PTB-XL | Electrocardiography (ECG) is a key diagnostic tool to assess the cardiac condition of a patient. Automatic ECG interpretation algorithms as diagnosis support systems promise large reliefs for the medical personnel - only on the basis of the number of ECGs that are routinely taken. However, the development of such algorithms requires large training datasets and clear benchmark procedures. In our opinion, both aspects are not covered satisfactorily by existing freely accessible ECG datasets.
The PTB-XL ECG dataset is a large dataset of 21,799 clinical 12-lead ECGs of 10 seconds length from 18,869 patients. The raw waveform data was annotated by up to two cardiologists, who assigned potentially multiple ECG statements to each record. In total, the 71 different ECG statements conform to the SCP-ECG standard and cover diagnostic, form, and rhythm statements. To ensure comparability of machine learning algorithms trained on the dataset, we provide recommended splits into training and test sets. In combination with the extensive annotation, this turns the dataset into a rich resource for the training and the evaluation of automatic ECG interpretation algorithms. The dataset is complemented by extensive metadata on demographics, infarction characteristics, likelihoods for diagnostic ECG statements as well as annotated signal properties. | Provide a detailed description of the following dataset: PTB-XL |
OCFR-LFW | An occluded version of the LFW dataset for occluded face recognition verification. It uses structured occlusions generated to appear more realistic. | Provide a detailed description of the following dataset: OCFR-LFW |
CCIC | The dataset contains images of concrete surfaces with cracks. The data is collected from various METU campus buildings.
The dataset is divided into two classes, negative and positive crack images, for image classification.
Each class has 20,000 images, for a total of 40,000 images at 227 × 227 pixels with RGB channels.
The dataset is generated from 458 high-resolution images (4032×3024 pixels) with the method proposed by Zhang et al. (2016).
High-resolution images have variance in terms of surface finish and illumination conditions.
No data augmentation in terms of random rotation or flipping is applied.
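For illustration only, here is a sketch of the patch-generation idea (not the authors' exact script): cutting non-overlapping 227×227 crops from a hypothetical high-resolution source photo.
```python
# Sketch: tile a high-resolution photo into non-overlapping 227x227 patches.
# The file name is hypothetical; see the cited papers for the actual method.
from PIL import Image

img = Image.open("wall_photo.jpg")      # e.g. a 4032x3024 source image
w, h, s = img.width, img.height, 227
patches = [
    img.crop((x, y, x + s, y + s))
    for y in range(0, h - s + 1, s)
    for x in range(0, w - s + 1, s)
]
print(len(patches), "patches of size", patches[0].size)
```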
If you use this dataset, please cite:
2018 – Özgenel, Ç.F., Gönenç Sorguç, A. “Performance Comparison of Pretrained Convolutional Neural Networks on Crack Detection in Buildings”, ISARC 2018, Berlin.
Zhang, L., Yang, F., Zhang, Y. D., & Zhu, Y. J. (2016). Road Crack Detection Using Deep Convolutional Neural Network. In 2016 IEEE International Conference on Image Processing (ICIP). http://doi.org/10.1109/ICIP.2016.7533052 | Provide a detailed description of the following dataset: CCIC |
Multicenter dataset of simulated neuroimaging features - quadratic relationship with age | A detailed description of this dataset can be found in the Zenodo repository: https://zenodo.org/record/8119042#.ZK-jJC9BxhE | Provide a detailed description of the following dataset: Multicenter dataset of simulated neuroimaging features - quadratic relationship with age |
Multicenter dataset of neuroimaging features (part I) | A detailed description of this dataset can be found in the Zenodo repository: https://zenodo.org/record/7845311#.ZK-jty9BxhE | Provide a detailed description of the following dataset: Multicenter dataset of neuroimaging features (part I) |
Multicenter dataset of neuroimaging features (part II) | A detailed description of this dataset can be found in the Zenodo repository: https://zenodo.org/record/7845361#.ZK-k7y9BxhE | Provide a detailed description of the following dataset: Multicenter dataset of neuroimaging features (part II) |
Subjective Perception of Active Noise Reduction (SPANR) | This repository contains replication data to the paper titled: "Anti-noise window: subjective perception of active noise reduction and effect of informational masking" | Provide a detailed description of the following dataset: Subjective Perception of Active Noise Reduction (SPANR) |
TRansPose | **TRansPose** is a large-scale multispectral dataset that combines stereo RGB-D, thermal infrared (TIR) images, and object poses to promote transparent object research. The dataset includes 99 transparent objects, encompassing 43 household items, 27 recyclable trashes, and 29 pieces of chemical laboratory equipment, plus 12 non-transparent objects. It comprises a vast collection of 333,819 images and 4,000,056 annotations, providing instance-level segmentation masks, ground-truth poses, and completed depth information. | Provide a detailed description of the following dataset: TRansPose |
MMBench | **MMBench** is a multi-modality benchmark. It methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. | Provide a detailed description of the following dataset: MMBench |
RAISE-LPBF | Laser powder bed fusion (LPBF) is an additive manufacturing (3D printing) process for metals. RAISE-LPBF is a large dataset on the effect of laser power and laser dot speed in 316L stainless steel bulk material. Both process parameters are independently sampled for each scan line from a continuous distribution, so interactions of different parameter choices can be investigated. Process monitoring comprises on-axis high-speed (20k FPS) video. The data can be used to derive statistical properties of LPBF, as well as to build anomaly detectors.
RAISE-LPBF-Laser is the machine learning benchmark for reconstructing the laser parameters of the RAISE-LPBF dataset.
Paper: https://doi.org/10.1016/j.addlet.2023.100161 | Provide a detailed description of the following dataset: RAISE-LPBF |
HLW | We introduce Horizon Lines in the Wild (HLW), a large dataset of real-world images with labeled horizon lines, captured in a diverse set of environments. The dataset is available for download at our project website [1]. We begin by characterizing limitations in existing datasets for evaluating horizon line detection methods and then describe our approach for leveraging structure from motion to automatically label images with horizon lines. | Provide a detailed description of the following dataset: HLW |
Parcel3D | Synthetic dataset of over 13,000 images of damaged and intact parcels with full 2D and 3D annotations in the COCO format. For details see our [paper](https://openaccess.thecvf.com/content/CVPR2023W/VISION/html/Naumann_Parcel3D_Shape_Reconstruction_From_Single_RGB_Images_for_Applications_in_CVPRW_2023_paper.html) and for visual samples our [project page](https://a-nau.github.io/parcel3d/). | Provide a detailed description of the following dataset: Parcel3D |
CBTex | Dataset of >200 synthetic cardboard texture images that were rendered with DoubeGum's cardboard shader in Blender. Used to generate [Parcel3D](https://a-nau.github.io/parcel3d/), the dataset for our [paper](https://openaccess.thecvf.com/content/CVPR2023W/VISION/html/Naumann_Parcel3D_Shape_Reconstruction_From_Single_RGB_Images_for_Applications_in_CVPRW_2023_paper.html) on single image 3D reconstructions of potentially damaged parcels. | Provide a detailed description of the following dataset: CBTex |
Parcel2D Real | Real-world dataset of ~400 images of cuboid-shaped parcels with full 2D and 3D annotations in the COCO format. | Provide a detailed description of the following dataset: Parcel2D Real |
HabiCrowd | HabiCrowd is a new dataset and benchmark for crowd-aware visual navigation that surpasses other benchmarks in terms of human diversity and computational utilization. HabiCrowd can be utilized to study crowd-aware visual navigation tasks. A notable feature of HabiCrowd is that its crowd-aware settings are 3D, which has scarcely been studied in previous works. | Provide a detailed description of the following dataset: HabiCrowd |
Segmentation in the Wild | Recent advances in language-image pre-training have witnessed the emerging field of building transferable systems that can effortlessly adapt to a wide range of computer vision & multimodal tasks in the wild. This also poses a challenge to evaluate the transferability of these models due to the lack of easy-to-use evaluation toolkits and public benchmarks. The "Segmentation in the Wild (SegInW)" Challenge is a part of X-Decoder, which proposed a new benchmark to evaluate the transfer ability of pre-trained vision models. This benchmark presents a diverse set of downstream segmentation datasets, measuring the ability of pre-training models on both the segmentation accuracy and their transfer efficiency in a new task, in terms of training examples and trainable parameters. The SegInW Challenge consists of 25 free public segmentation datasets, crowd-sourced on roboflow.com. For more details about the challenge submission format, please refer to X-Decoder for SegInW. | Provide a detailed description of the following dataset: Segmentation in the Wild |
WaterScenes | A multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces:
* WaterScenes, the first multi-task 4D radar-camera fusion dataset on water surfaces, which offers data from multiple sensors, including a 4D radar, monocular camera, GPS, and IMU. It can be applied in multiple tasks, such as object detection, instance segmentation, semantic segmentation, free-space segmentation, and waterline segmentation.
* Our dataset covers diverse time conditions (daytime, nightfall, night), lighting conditions (normal, dim, strong), weather conditions (sunny, overcast, rainy, snowy) and waterway conditions (river, lake, canal, moat). An information list is also offered for retrieving specific data for experiments under different conditions.
* We provide 2D box-level and pixel-level annotations for camera images, and 3D point-level annotations for radar point clouds. We also offer precise timestamps for the synchronization of different sensors, as well as intrinsic and extrinsic parameters.
* We provide a toolkit for radar point clouds that includes: pre-processing, labeling, projection and visualization, assisting researchers in processing and analyzing our dataset. | Provide a detailed description of the following dataset: WaterScenes |
BDD-QA | **BDD-QA** is distinguished by its encompassing range of traffic actions, crafted to rigorously evaluate a model's decision-making abilities in traffic scenarios. This makes it a potent tool for high-level decision-making research within traffic contexts, including autonomous driving developments. | Provide a detailed description of the following dataset: BDD-QA |
HDT-QA | HDT-QA, coupled with driving manuals, offers an extensive compendium of driving instructions and driving-knowledge tests across all 51 US jurisdictions (the 50 states plus Washington, D.C.). This resource is beneficial for assessing the incorporation and impact of traffic knowledge within intelligent driving systems, marking a crucial stride towards more advanced, informed, and safe autonomous driving technology. | Provide a detailed description of the following dataset: HDT-QA |
Complex-TV-QA | The Complex-TV-QA dataset, to our knowledge, is the inaugural resource that provides human-annotated, detailed video captions within traffic scenarios, alongside complex reasoning questions. This novel dataset not only stands as a vital tool for evaluating language models in real-world video-QA and video-reasoning research, but also offers valuable insights for the development and understanding of multi-modal video reasoning models and related works. | Provide a detailed description of the following dataset: Complex-TV-QA |
NEU dataset | Data set used in the work One-Shot Recognition of Manufacturing Defects in Steel Surfaces | Provide a detailed description of the following dataset: NEU dataset |
NILUT | Read all the details about the dataset in our paper "NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement"
* We host the dataset in Kaggle: https://www.kaggle.com/datasets/photolab/nilut-3d-lut-dataset
* More information in our repo: https://github.com/mv-lab/nilut | Provide a detailed description of the following dataset: NILUT |
RidgeBase | Contactless fingerprint matching using smartphone cameras can alleviate major challenges of traditional fingerprint systems including hygienic acquisition, portability and presentation attacks. However, development of practical and robust contactless fingerprint matching techniques is constrained by the limited availability of large scale real-world datasets. To motivate further advances in contactless fingerprint matching across sensors, we introduce the RidgeBase benchmark dataset. RidgeBase consists of more than 15,000 contactless and contact-based fingerprint image pairs acquired from 88 individuals under different background and lighting conditions using two smartphone cameras and one flatbed contact sensor. Unlike existing datasets, RidgeBase is designed to promote research under different matching scenarios that include Single Finger Matching and Multi-Finger Matching for both contactless-to-contactless (CL2CL) and contact-to-contactless (C2CL) verification and identification. Furthermore, due to the high intra-sample variance in contactless fingerprints belonging to the same finger, we propose a set-based matching protocol inspired by the advances in facial recognition datasets. This protocol is specifically designed for pragmatic contactless fingerprint matching that can account for variances in focus, polarity and finger-angles. We report qualitative and quantitative baseline results for different protocols using a COTS fingerprint matcher (Verifinger) and a Deep CNN based approach on the RidgeBase dataset. The dataset can be downloaded here: https://www.buffalo.edu/cubs/research/datasets/ridgebase-benchmark-dataset.html | Provide a detailed description of the following dataset: RidgeBase |
SHD - Adding | This dataset is based on the Spiking Heidelberg Digits (SHD) dataset. Sample inputs consist of two spike-encoded digits sampled uniformly at random from the SHD dataset and concatenated, with the target being the sum of the digits (irrespective of language). The train and test split remains the same, with the test set consisting of 16k such samples based on the SHD test set.
For comparability, please report the performance for a temporal binning resolution of 2ms, and use a last time step loss to test the model’s temporal integration capabilities.
Importantly, solving this dataset requires integrating temporal information over multiple timescales; on a shorter timescale identifying each digit, on a longer timescale calculating their sum, crucially requiring retaining the first digit in memory.
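A conceptual sketch of the task construction and the last-time-step loss is given below; the tensor layouts and the label-to-digit mapping (`% 10`, which assumes the 20 SHD classes are ordered by digit within each language) are assumptions made for the example, not the official generator.
```python
# Sketch of the sample construction and last-step readout (illustrative only).
import torch

def make_adding_sample(digit_a, digit_b, label_a, label_b):
    """digit_*: spike tensors of shape [time, channels], binned at 2 ms.
    label_*: SHD class indices; `% 10` maps both languages to digit value."""
    x = torch.cat([digit_a, digit_b], dim=0)  # concatenate along time
    y = (label_a % 10) + (label_b % 10)       # target: sum of the two digits
    return x, torch.tensor(y)

# Last-time-step loss: read the prediction only at the final step, so the
# network must carry the first digit in memory across the whole sequence.
def last_step_loss(logits, target):           # logits: [time, 19] (sums 0..18)
    return torch.nn.functional.cross_entropy(logits[-1:], target.view(1))
```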
When using this dataset, please cite the following two papers:
[1] Spieler, A., Rahaman, N., Martius, G., Schölkopf, B., & Levina, A. (2023). The ELM Neuron: an Efficient and Expressive Cortical Neuron Model Can Solve Long-Horizon Tasks. arXiv preprint arXiv:2306.16922.
[2] Cramer, B., Stradmann, Y., Schemmel, J., & Zenke, F. (2020). The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 33(7), 2744-2757. | Provide a detailed description of the following dataset: SHD - Adding |
WYWEB | An evaluation benchmark for classical Chinese. | Provide a detailed description of the following dataset: WYWEB |
RePoGen | Synthetic humans generated by the RePoGen method. | Provide a detailed description of the following dataset: RePoGen |
CPAP | Kang et al.'s Markovian model for treatment adherence in obstructive sleep apnea.
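As a toy illustration of the Markov-chain idea only: the states and transition probabilities below are invented for the example and are not the values estimated by Kang et al. (see the papers below for the actual model).
```python
# Toy two-state adherence chain (hypothetical numbers, NOT from Kang et al.).
import numpy as np

states = ["adherent", "non-adherent"]
P = np.array([[0.9, 0.1],     # made-up weekly transition matrix
              [0.3, 0.7]])

rng = np.random.default_rng(0)
s, trajectory = 0, []
for week in range(12):
    s = rng.choice(2, p=P[s])  # sample next state from current row
    trajectory.append(states[s])
print(trajectory)
```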
Kang, Yuncheol, et al. "Markov models for treatment adherence in obstructive sleep apnea." IIE Annual Conference. Proceedings. Institute of Industrial and Systems Engineers (IISE), 2013.
Kang, Yuncheol, et al. "Modelling adherence behaviour for the treatment of obstructive sleep apnoea." European journal of operational research 249.3 (2016): 1005-1013. | Provide a detailed description of the following dataset: CPAP |
PatchDB | PatchDB is a large-scale security patch dataset that contains around 12K security patches and 24K non-security patches from the real world. | Provide a detailed description of the following dataset: PatchDB |
PolypGen | Polyps in the colon are widely known cancer precursors identified by colonoscopy. Whilst most polyps are benign, the polyp’s number, size and surface structure are linked to the risk of colon cancer. Several methods have been developed to automate polyp detection and segmentation. However, the main issue is that they are not tested rigorously on a large multicentre purpose-built dataset, one reason being the lack of a comprehensive public dataset. As a result, the developed methods may not generalise to different population datasets. To this extent, we have curated a dataset from six unique centres incorporating more than 300 patients. The dataset includes both single frame and sequence data with 3762 annotated polyp labels with precise delineation of polyp boundaries verified by six senior gastroenterologists. To our knowledge, this is the most comprehensive detection and pixel-level segmentation dataset (referred to as PolypGen) curated by a team of computational scientists and expert gastroenterologists. The paper provides insight into data construction and annotation strategies, quality assurance, and technical validation. | Provide a detailed description of the following dataset: PolypGen |
DiPCo | We present a speech data corpus that simulates a "dinner party" scenario taking place in an everyday home environment. The corpus was created by recording multiple groups of four Amazon employee volunteers having a natural conversation in English around a dining table. The participants were recorded by a single-channel close-talk microphone and by five far-field 7-microphone array devices positioned at different locations in the recording room. The dataset contains the audio recordings and human labeled transcripts of a total of 10 sessions with a duration between 15 and 45 minutes. The corpus was created to advance in the field of noise robust and distant speech processing and is intended to serve as a public research and benchmarking data set. | Provide a detailed description of the following dataset: DiPCo |
Zucker HRI Dataset | **Zucker HRI Dataset** contains two different agent types (robot and human) in several scenarios. The robot switched between 3 different motion controllers (Linear, NHTTC, and CADRL) over multiple different scenarios with different permutations of human agents. There are also scenes without the robot for a baseline. | Provide a detailed description of the following dataset: Zucker HRI Dataset |
Rad-ReStruct | Rad-ReStruct is a fine-grained structured reporting dataset for Chest X-Ray images. The structured reporting process is modeled as a hierarchical VQA task and the task is recognizing different findings in different body regions and predicting their attributes. | Provide a detailed description of the following dataset: Rad-ReStruct |
satnet-sudoku | A set of easy Sudoku instances used in the SATNet paper for training SATNet to learn to play Sudoku.
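For illustration, one common input encoding for learned Sudoku solvers such as SATNet is a one-hot representation of the hints; the exact tensor layout used in the SATNet paper may differ, so treat this as a sketch.
```python
# Sketch: encode a 9x9 Sudoku grid (0 = empty cell) as a one-hot tensor.
import numpy as np

def encode(grid):
    onehot = np.zeros((9, 9, 9), dtype=np.float32)
    for r in range(9):
        for c in range(9):
            if grid[r][c] > 0:
                onehot[r, c, grid[r][c] - 1] = 1.0
    return onehot                      # shape (9, 9, 9), flattened downstream

puzzle = np.zeros((9, 9), dtype=int)   # hypothetical instance with one hint
puzzle[0, 0] = 5
x = encode(puzzle)
```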
The instances are easy (plenty of hints), so it is rather easy to get high accuracy on them. More challenging instances are available in the rrn-sudoku dataset. | Provide a detailed description of the following dataset: satnet-sudoku |
rrn-sudoku | A set of 180,000 Sudoku grids with a variable number of hints from the minimal number of 17 (extremely hard instances) to 34 (easy instances), with 10,000 instances per level of hardness.
Learning to play the hardest Sudoku instances is a bit of a challenge. | Provide a detailed description of the following dataset: rrn-sudoku |
many-solutions-sudoku | A data set of Sudoku grids with more than one solution.
This was introduced to train on logical reasoning problems with non-unique solutions. | Provide a detailed description of the following dataset: many-solutions-sudoku |
Protein structures Ingraham | A data set introduced for training on the protein design task. | Provide a detailed description of the following dataset: Protein structures Ingraham |
T2I-CompBench | T2I-CompBench is a comprehensive benchmark for open-world compositional text-to-image generation, consisting of 6,000 compositional textual prompts from 3 categories (attribute binding, object relationships, and complex compositions) and 6 sub-categories (color binding, shape binding, texture binding, spatial relationships, non-spatial relationships, and complex compositions). | Provide a detailed description of the following dataset: T2I-CompBench |
InternVid | **InternVid** is a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation. The InternVid dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totaling 4.1B words. | Provide a detailed description of the following dataset: InternVid |
COLLIE-v1 | **COLLIE-v1** is a dataset with 2080 instances comprising 13 constraint structures designed for text generation under constraints. It is a grammar-based framework that allows the specification of rich, compositional constraints with diverse generation levels (word, sentence, paragraph, passage). | Provide a detailed description of the following dataset: COLLIE-v1 |
Experimental Results for "A Unified Perspective on Natural Gradient Variational Inference with Gaussian Mixture Models" | This package contains the raw data / logs (fetched from WandB) for the experiments of the following publication:
O. Arenz, P. Dahlinger, Z. Ye, M. Volpp, and G. Neumann. A unified perspective on natural gradient variational inference with gaussian mixture models. Transactions on Machine Learning Research, 2023. URL: https://openreview.net/forum?id=tLBjsX4tjs. | Provide a detailed description of the following dataset: Experimental Results for "A Unified Perspective on Natural Gradient Variational Inference with Gaussian Mixture Models" |
OCTID | An open-source Optical Coherence Tomography Image Database containing different retinal OCT images with various pathological conditions. This comprehensive open-access database contains over 500 high-resolution images categorized into different pathological conditions. The image classes include Normal (NO), Macular Hole (MH), Age-related Macular Degeneration (AMD), Central Serous Retinopathy (CSR), and Diabetic Retinopathy (DR). | Provide a detailed description of the following dataset: OCTID |
VideoInstruct | The Video Instruction Dataset is used to train Video-ChatGPT. It consists of 100,000 high-quality video instruction pairs and employs a combination of human-assisted and semi-automatic annotation techniques to produce high-quality video instruction data. These methods create question-answer pairs related to:
1. Video summarization
2. Description-based question-answers (exploring spatial, temporal, relationships, and reasoning concepts)
3. Creative/generative question-answers
The details are available at https://github.com/mbzuai-oryx/Video-ChatGPT/blob/main/data/README.md. | Provide a detailed description of the following dataset: VideoInstruct |
TSN-FlexTest Traffic Streams for Spot Robot, Tactile Internet, and Generic Data | In this dataset, we provide detailed traffic stream data for the Spot robot, including both the Spot robot control traffic stream and the Spot video stream. The Spot robot traffic streams provide realistic traffic data for communication network evaluations, e.g., for measurements with the TSN-FlexTest testbed. Furthermore, we share data for the tactile internet including audio, video, and robotic communication. Finally, the dataset includes generic data streams for three different intervals (0.2ms, 0.3ms, and 0.5ms) with two different Ethernet frame sizes. The data is provided as *.pcap files which can be replayed with various tools or analyzed, e.g., with Wireshark. The Spot data streams are split into two directions and are based on Spot API calls. | Provide a detailed description of the following dataset: TSN-FlexTest Traffic Streams for Spot Robot, Tactile Internet, and Generic Data |
SOD4SB | The **Small Object Detection for Spotting Birds (SOD4SB)** dataset consists of 39,070 images including 137,121 bird instances. The SOD4SB dataset contains a wide variety of small bird types and a variety of scenes. | Provide a detailed description of the following dataset: SOD4SB |
FathomNet2023 | The FathomNet2023 competition dataset is a subset of the [broader FathomNet marine image repository](https://fathomnet.org/). The training and test images for the competition were all collected in the Monterey Bay Area between the surface and 1300 meters depth by the Monterey Bay Aquarium Research Institute. The images contain bounding box annotations of 290 categories of bottom-dwelling animals. The training and validation data are split across an 800 meter depth threshold: all training data is collected from 0-800 meters, evaluation data comes from the whole 0-1300 meter range. Since an organism's habitat range is partially a function of depth, the species distributions in the two regions are overlapping but not identical. Test images are drawn from the same region but may come from above or below the depth horizon. The competition goal is to label the animals present in a given image (i.e. multi-label classification) and determine whether the image is out-of-sample. | Provide a detailed description of the following dataset: FathomNet2023 |
OpenLane-V2 test | **OpenLane-V2** is the world's first perception and reasoning benchmark for scene structure in autonomous driving. The primary task of the dataset is scene structure perception and reasoning, which requires the model to recognize the dynamic drivable states of lanes in the surrounding environment. The challenge of this dataset includes not only detecting lane centerlines and traffic elements but also recognizing the attribute of traffic elements and topology relationships on detected objects.
The [OLS](https://github.com/OpenDriveLab/OpenLane-V2#task) score is defined to measure model performance, and the test server is available [here](https://eval.ai/web/challenges/challenge-page/1925/overview). | Provide a detailed description of the following dataset: OpenLane-V2 test |
DialogStudio | DialogStudio is a meticulously curated collection of dialogue datasets. These datasets are unified under a consistent format while retaining their original information. We incorporate domain-aware prompts and identify dataset licenses, making DialogStudio an exceptionally rich and diverse resource for dialogue research and model training. | Provide a detailed description of the following dataset: DialogStudio |
DNA-Rendering | **DNA-Rendering** is a large-scale, high-fidelity repository of human performance data for neural actor rendering. It contains over 1,500 human subjects, 5,000 motion sequences, and a data volume of 67.5M frames. Upon the massive collections, the authors provide human subjects spanning broad categories of pose actions, body shapes, clothing, accessories, hairdos, and object interaction, with geometry and appearance variances ranging from everyday life to professional occasions. Second, they provide rich assets for each subject -- 2D/3D human body keypoints, foreground masks, SMPLX models, cloth/accessory materials, multi-view images, and videos. These assets boost the current method's accuracy on downstream rendering tasks. Third, they construct a professional multi-view system to capture data, which contains 60 synchronous cameras with max 4096×3000 resolution, 15 fps speed, and strict camera calibration steps, ensuring high-quality resources for task training and evaluation. | Provide a detailed description of the following dataset: DNA-Rendering |
AitW | **Android in the Wild (AitW)** is a dataset for device-control research which is orders of magnitude larger than current datasets. The dataset contains human demonstrations of device interactions, including the screens and actions, and corresponding natural language instructions. It consists of 715k episodes spanning 30k unique instructions, four versions of Android (v10–13), and eight device types (Pixel 2 XL to Pixel 6) with varying screen resolutions. It contains multi-step tasks that require semantic understanding of language and visual context. | Provide a detailed description of the following dataset: AitW |
BIOSCAN_1M_Insect Dataset | In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-1M Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetically-based proxies for species classification. This dataset contains a curated million-image collection, primarily intended to train computer-vision models capable of providing image-based taxonomic assessment; however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. | Provide a detailed description of the following dataset: BIOSCAN_1M_Insect Dataset |
LLNeRF Dataset | **LLNeRF Dataset** is a real-world dataset as a benchmark for model learning and evaluation. To obtain real low-illumination images with real noise distributions, photos are taken at nighttime outdoor scenes or low-light indoor scenes containing diverse objects. Since the ISP operations are device dependent and the noise distributions across devices are also different, the data is collected using a mobile phone camera and a DSLR camera to enrich the diversity of the dataset. | Provide a detailed description of the following dataset: LLNeRF Dataset |
MeDAL Retina Dataset | Our primary objective in creating this dataset is to support researchers in the advancement of algorithms for keypoints detection and the pretraining of large models on retinal images using a self-supervised approach. The keypoints in the dataset have been carefully annotated by students from our lab, ensuring meticulous accuracy.
The dataset contains:
* 261 annotated images
* 1,920 images for auxiliary training of the decoder of the descriptors
* 93,209 images for training large models in a self-supervised manner | Provide a detailed description of the following dataset: MeDAL Retina Dataset |
AudioSet CC | The subset of audio samples from the AudioSet ontology which are licensed with Creative Commons. This set contains approximately 10,000 samples of 10s long clips, and is freely modifiable and distributable. Each clip comes with its full label set and unique ID. | Provide a detailed description of the following dataset: AudioSet CC |
Replication Data for: AI Ethics on Blockchain: Topic Analysis on Twitter Data for Blockchain Security | Blockchain has empowered computer systems to be more secure using a distributed network. However, the current blockchain design suffers from fairness issues in transaction ordering. Miners are able to reorder transactions to generate profits, the so-called miner extractable value (MEV). Existing research recognizes MEV as a severe security issue and proposes potential solutions, including prominent Flashbots. However, previous studies have mostly analyzed blockchain data, which might not capture the impacts of MEV in a much broader AI society. Thus, in this research, we applied natural language processing (NLP) methods to comprehensively analyze topics in tweets on MEV. We collected more than 20,000 tweets with #MEV and #Flashbots hashtags and analyzed their topics. Our results show that the tweets discussed profound topics of ethical concern, including security, equity, emotional sentiments, and the desire for solutions to MEV. We also identify the co-movements of MEV activities on blockchain and social media platforms. Our study contributes to the literature at the interface of blockchain security, MEV solutions, and AI ethics. (2023-07-06)
| Provide a detailed description of the following dataset: Replication Data for: AI Ethics on Blockchain: Topic Analysis on Twitter Data for Blockchain Security |
Replication Data for: On the Mechanics of NFT Valuation: AI Ethics and Social Media | As CryptoPunks pioneers the innovation of non-fungible tokens (NFTs) in AI and art, the valuation mechanics of NFTs has become a trending topic. Earlier research identifies the impact of ethics and society on the price prediction of CryptoPunks. Since the booming year of the NFT market in 2021, the discussion of CryptoPunks has propagated on social media. Still, existing literature hasn't considered the social sentiment factors after the historical turning point on NFT valuation. In this paper, we study how sentiments in social media, together with gender and skin tone, contribute to NFT valuations by an empirical analysis of social media, blockchain, and crypto exchange data. We evidence social sentiments as a significant contributor to the price prediction of CryptoPunks. Furthermore, we document structure changes in the valuation mechanics before and after 2021. Although people's attitudes towards Cryptopunks are primarily positive, our findings reflect imbalances in transaction activities and pricing based on gender and skin tone. Our result is consistent and robust, controlling for the rarity of an NFT based on the set of human-readable attributes, including gender and skin tone. Our research contributes to the interdisciplinary study at the intersection of AI, Ethics, and Society, focusing on the ecosystem of decentralized AI or blockchain. We provide our data and code for replicability as open access on GitHub. | Provide a detailed description of the following dataset: Replication Data for: On the Mechanics of NFT Valuation: AI Ethics and Social Media |
Replication Data for: AI Ethics on Blockchain | Blockchain has empowered computer systems to be more secure using a distributed network. However, the current blockchain design suffers from fairness issues in transaction ordering. Miners are able to reorder transactions to generate profits, the so-called miner extractable value (MEV). Existing research recognizes MEV as a severe security issue and proposes potential solutions, including prominent Flashbots. However, previous studies have mostly analyzed blockchain data, which might not capture the impacts of MEV in a much broader AI society. Thus, in this research, we applied natural language processing (NLP) methods to comprehensively analyze topics in tweets on MEV. We collected more than 20,000 tweets with #MEV and #Flashbots hashtags and analyzed their topics. Our results show that the tweets discussed profound topics of ethical concern, including security, equity, emotional sentiments, and the desire for solutions to MEV. We also identify the co-movements of MEV activities on blockchain and social media platforms. Our study contributes to the literature at the interface of blockchain security, MEV solutions, and AI ethics. (2023-07-06) | Provide a detailed description of the following dataset: Replication Data for: AI Ethics on Blockchain |
Replication Data for: On the Mechanics of NFT Valuation | As CryptoPunks pioneers the innovation of non-fungible tokens (NFTs) in AI and art, the valuation mechanics of NFTs has become a trending topic. Earlier research identifies the impact of ethics and society on the price prediction of CryptoPunks. Since the booming year of the NFT market in 2021, the discussion of CryptoPunks has propagated on social media. Still, existing literature hasn't considered the social sentiment factors after the historical turning point on NFT valuation. In this paper, we study how sentiments in social media, together with gender and skin tone, contribute to NFT valuations by an empirical analysis of social media, blockchain, and crypto exchange data. We evidence social sentiments as a significant contributor to the price prediction of CryptoPunks. Furthermore, we document structure changes in the valuation mechanics before and after 2021. Although people's attitudes towards Cryptopunks are primarily positive, our findings reflect imbalances in transaction activities and pricing based on gender and skin tone. Our result is consistent and robust, controlling for the rarity of an NFT based on the set of human-readable attributes, including gender and skin tone. Our research contributes to the interdisciplinary study at the intersection of AI, Ethics, and Society, focusing on the ecosystem of decentralized AI or blockchain. We provide our data and code for replicability as open access on GitHub. (2023-07-06) | Provide a detailed description of the following dataset: Replication Data for: On the Mechanics of NFT Valuation |
Replication Data for: Blockchain Network Analysis | Decentralized finance (DeFi) is known for its unique mechanism design, which applies smart contracts to facilitate peer-to-peer transactions. The decentralized bank is a typical DeFi application. Ideally, a decentralized bank should be decentralized in the transaction. However, many recent studies have found that decentralized banks have not achieved a significant degree of decentralization. This research conducts a comparative study among mainstream decentralized banks. We apply core-periphery network features analysis using the transaction data from four decentralized banks, Liquity, Aave, MakerDao, and Compound. We extract six features and compare the banks' levels of decentralization cross-sectionally. According to the analysis results, we find that: 1) MakerDao and Compound are more decentralized in the transactions than Aave and Liquity. 2) Although decentralized banking transactions are supposed to be decentralized, the data show that four banks have primary external transaction core addresses such as Huobi, Coinbase, and Binance, etc. We also discuss four design features that might affect network decentralization. Our research contributes to the literature at the interface of decentralized finance, financial technology (Fintech), and social network analysis and inspires future protocol designs to live up to the promise of decentralized finance for a truly peer-to-peer transaction network. (2023-07-06) | Provide a detailed description of the following dataset: Replication Data for: Blockchain Network Analysis |
SciBench | **SciBench** is a large-scale scientific problem-solving benchmark suite that aims to systematically examine the reasoning capabilities required for complex scientific problem solving. SciBench contains two carefully curated datasets: an open set featuring a range of collegiate-level scientific problems drawn from mathematics, chemistry, and physics textbooks, and a closed set comprising problems from undergraduate-level exams in computer science and mathematics. | Provide a detailed description of the following dataset: SciBench |
The Rambles | A collection of stream-of-consciousness writing.
A natural language processing dataset.
Individual thoughts are separated by a double newline. | Provide a detailed description of the following dataset: The Rambles |
DDXPlus | There has been a rapidly growing interest in Automatic Symptom Detection (ASD) and Automatic Diagnosis (AD) systems in the machine learning research literature, aiming to assist doctors in telemedicine services. These systems are designed to interact with patients, collect evidence about their symptoms and relevant antecedents, and possibly make predictions about the underlying diseases. Doctors would review the interactions, including the evidence and the predictions, collect, if necessary, additional information from patients, before deciding on next steps. Despite recent progress in this area, an important piece of doctors' interactions with patients is missing in the design of these systems, namely the differential diagnosis. Its absence is largely due to the lack of datasets that include such information for models to train on. In this work, we present a large-scale synthetic dataset of roughly 1.3 million patients that includes a differential diagnosis, along with the ground truth pathology, symptoms and antecedents for each patient. Unlike existing datasets which only contain binary symptoms and antecedents, this dataset also contains categorical and multi-choice symptoms and antecedents useful for efficient data collection. Moreover, some symptoms are organized in a hierarchy, making it possible to design systems able to interact with patients in a logical way. As a proof-of-concept, we extend two existing AD and ASD systems to incorporate the differential diagnosis, and provide empirical evidence that using differentials as training signals is essential for the efficiency of such systems or for helping doctors better understand the reasoning of those systems. | Provide a detailed description of the following dataset: DDXPlus |
Smarty4covid | Harnessing the power of Artificial Intelligence (AI) and m-health towards detecting new bio-markers indicative of the onset and progress of respiratory abnormalities/conditions has greatly attracted the scientific and research interest, especially during the COVID-19 pandemic. The smarty4covid dataset contains audio signals of cough (4,676), regular breathing (4,665), deep breathing (4,695) and voice (4,291) as recorded by means of mobile devices following a crowd-sourcing approach. Other self-reported information is also included (e.g. COVID-19 virus tests), thus providing a comprehensive dataset for the development of COVID-19 risk detection models. The smarty4covid dataset is released in the form of a web-ontology language (OWL) knowledge base enabling data consolidation from other relevant datasets, complex queries and reasoning. It has been utilized towards the development of models able to: (i) extract clinically informative respiratory indicators from regular breathing records, and (ii) identify cough, breath and voice segments in crowd-sourced audio recordings. A new framework utilizing the smarty4covid OWL knowledge base towards generating counterfactual explanations in opaque AI-based COVID-19 risk detection models is proposed and validated. | Provide a detailed description of the following dataset: Smarty4covid |
CAD | Dataset of primarily English Reddit entries which addresses several limitations of prior work. It (1) contains six conceptually distinct primary categories as well as secondary categories, (2) has labels annotated in the context of the conversation thread, (3) contains rationales and (4) uses an expert-driven group-adjudication process for high quality annotations. | Provide a detailed description of the following dataset: CAD |
Rosario Dataset | Agricultural dataset collected on board our weed-removing robot. The dataset is composed of six different sequences in a soybean field and contains stereo images, IMU measurements, wheel odometry, and GPS-RTK (positional ground truth). | Provide a detailed description of the following dataset: Rosario Dataset |
DPR-ANN | We provide the [code](https://github.com/IntelLabs/DPR-dataset-generator/tree/main) to generate base and query vector datasets for similarity search benchmarking and evaluation on high-dimensional vectors stemming from large language models. With the dense passage retriever (DPR) [[1]](#1), we encode text snippets from the C4 dataset [[2]](#2) to generate 768-dimensional vectors:
- context DPR embeddings for the base set and
- question DPR embeddings for the query set.
The metric for similarity search is inner product [[1]](#1).
The number of base and query embedding vectors is parametrizable.
See the [main repository](https://github.com/IntelLabs/DPR-dataset-generator/tree/main) for details on how to generate the DPR10M specific instance introduced in [[3]](#3). A short illustrative encoding sketch follows the references below.
<a id="1">[1]</a>
Karpukhin, V.; Oguz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov, S.; Chen, D.; Yih, W..: Dense Passage
Retrieval for Open-Domain Question Answering. In: Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP). 6769–6781. (2020)
<a id="2">[2]</a>
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu,
P.J.: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.
In: The Journal of Machine Learning Research 21,140:1–140:67.(2020)
<a id="3">[3]</a>
Aguerrebere, C.; Bhati I.; Hildebrand M.; Tepper M.; Willke T.:Similarity search in the blink of an eye with compressed
indices. In: Proceedings of the VLDB Endowment, 16, 11 (2023) | Provide a detailed description of the following dataset: DPR-ANN |
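As a rough orientation for how such vectors can be produced, here is a minimal sketch using the standard Hugging Face `transformers` DPR encoders; the checkpoint names and toy passages below are assumptions for illustration, not necessarily what the generator repository uses:

```python
# Minimal sketch: encode passages (base set) and a question (query set) with
# DPR encoders and rank by inner product, the similarity metric named in [1].
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

passages = ["The C4 corpus is a cleaned web crawl.", "DPR embeddings are 768-dimensional."]
with torch.no_grad():
    base = ctx_enc(**ctx_tok(passages, padding=True, truncation=True, return_tensors="pt")).pooler_output
    query = q_enc(**q_tok("How large are DPR vectors?", return_tensors="pt")).pooler_output

scores = query @ base.T        # (1, num_passages) inner-product scores
print(int(scores.argmax()))    # index of the best-matching passage
```

At scale, the base set would come from C4 context snippets and the query set from question encodings, as described above.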
COCO-O | COCO-O(ut-of-distribution) contains 6 domains (sketch, cartoon, painting, weather, handmake, tattoo) of COCO objects that are hard for most existing detectors to detect. The dataset has a total of 6,782 images and 26,624 labelled bounding boxes. | Provide a detailed description of the following dataset: COCO-O |
grobid-quantities-holdout | The dataset is described here:
https://grobid-quantities.readthedocs.io/en/latest/guidelines.html | Provide a detailed description of the following dataset: grobid-quantities-holdout |
SOEval | We created SOEval by mining questions from Stack Overflow, with the goal of building a prompt dataset that reflects the real-life needs of software developers. To build this dataset, we first collected 500 popular and recent questions each for the Python and Java tags. To these 1,000 questions, we applied a set of inclusion and exclusion criteria. The inclusion criteria were: the question has to (1) explicitly ask “how to do X” in Python or Java; (2) include code in its body; and (3) have an accepted answer that includes code. We excluded questions that were (1) open-ended, asking for best practices/guidelines for a specific problem in Python/Java; (2) related to finding a specific API/module for a given task; (3) related to errors caused by environment configuration (e.g., a missing dependency library); (4) related to configuring libraries/APIs; or (5) syntax-specific. By applying the criteria above to these 1K questions, we obtained 28 and 42 prompts for Java and Python, respectively. | Provide a detailed description of the following dataset: SOEval |
Inria building dataset | **Inria building dataset** contains 360 images (5120×5120) collected from 5 cities (Austin, Chicago, Kitsap, Tyrol, and Vienna) | Provide a detailed description of the following dataset: Inria building dataset |
SK-VG | **SK-VG** is a dataset for Scene Knowledge-guided Visual Grounding, in which the image content and referring expressions alone are not sufficient to ground the target objects, forcing models to reason over long-form scene knowledge. To support this task, SK-VG provides human-written knowledge describing the content of each image, making it the first dataset of this kind. | Provide a detailed description of the following dataset: SK-VG |
OpenGDA | **OpenGDA** is a benchmark for evaluating graph domain adaptation models. It provides abundant pre-processed and unified datasets for different types of tasks (node, edge, graph), originating from diverse scenarios that cover web information systems, urban systems, and natural systems. Furthermore, it integrates state-of-the-art models with standardized, end-to-end pipelines. Overall, OpenGDA provides a user-friendly, scalable, and reproducible benchmark. | Provide a detailed description of the following dataset: OpenGDA |
Building3D | **Building3D** is an urban-scale dataset consisting of more than 160 thousand buildings along with corresponding point clouds, mesh and wireframe models, covering 16 cities in Estonia over an area of about 998 km². Besides mesh models and real-world LiDAR point clouds, it also includes wireframe models. | Provide a detailed description of the following dataset: Building3D |
Massachusetts building dataset | The official dataset contains a training set (137 images), a validation set (4 images), and a testing set (10 images) | Provide a detailed description of the following dataset: Massachusetts building dataset |
Replay | **Replay** is a collection of multi-view, multi-modal videos of humans interacting socially. Each scene is filmed in high production quality, from different viewpoints with several static cameras, as well as wearable action cameras, and recorded with a large array of microphones at different positions in the room. The full Replay dataset consists of 68 scenes of social interactions between people, such as playing board games, exercising, or unwrapping presents. Each scene is about 5 minutes long and filmed with 12 cameras, static and dynamic. Audio is captured separately by 12 binaural microphones and additional near-range microphones for each actor and for each egocentric video. All sensors are temporally synchronized, undistorted, geometrically calibrated, and color calibrated. | Provide a detailed description of the following dataset: Replay |
Description Detection Dataset | **Description Detection Dataset** ($D^3$, /dikju:b/) is an attempt at creating a next-generation object detection dataset. Unlike traditional detection datasets, the class names of the objects are no longer simple nouns or noun phrases, but rather complex and descriptive, such as `a dog not being held by a leash`. For each image in the dataset, any object that matches the description is annotated. The dataset provides annotations such as bounding boxes and finely crafted instance masks. It comprises 422 well-designed descriptions and 24,282 positive object-description pairs.
The dataset is meant for the Described Object Detection (DOD) task. By contrast with related tasks: OVD detects objects based on a category name, where each category can have zero to multiple instances; REC grounds one region based on a language description, whether or not the object truly exists; DOD detects all instances in each image of the dataset, based on a flexible reference. | Provide a detailed description of the following dataset: Description Detection Dataset |
ARTE | The ARTE database, so far, contains 13 acoustic environments that were recorded with a purpose-built 62-channel microphone array at various locations around Sydney (Australia) and decoded into the higher-order Ambisonics (HOA) format.
For each acoustic environment the following files are provided:
HOA environment files: The recorded environments were decoded into 31 mixed-order HOA channels and saved as WAV files with a sampling frequency of 44.1 kHz and 32 bits per sample. Channels 1-25 refer to the 3D HOA periphonic components up to the order of M = 4, and channels 26-31 refer to additional horizontal sectorial 2D components (i.e., m = n) up to the order of M = 7.
HOA RIR files: In each environment, Room Impulse Responses (RIRs) were measured with a Tannoy V8 dual-concentric loudspeaker at a number of positions relative to the microphone array. Currently, only a single RIR is provided in each environment which was measured with a loudspeaker in front of the microphone array (0 degree azimuth) at a distance of 1.3 m. Similar to the noise files, the RIRs are provided as 31-channel WAV-files with a sampling frequency of 44.1 kHz and 32 bits per sample. In addition to the “standard” RIR, a second version is provided in which the RIR was split into a direct sound (DS) component as well as a reverberation component (REV). The separated version of the RIR can be useful for enhancing the directionality (and frequency response) of the direct sound by decoding it into a single loudspeaker channel (i.e., a loudspeaker at an azimuth angle of 0 degrees) and then adding it back to the reverberant component, which is decoded normally. This process has been shown to be particularly useful when evaluating the benefit provided by directional signal enhancement methods (e.g., beamformers) in hearing aids.
Binaural environment files: The HOA noise files were transformed into binaural headphone signals by simulating their playback via a 41-channel loudspeaker array to the in-ear microphones of a calibrated Bruel & Kjaer Head and Torso Simulator (HATS type 4128C). These binaural signals are provided in two versions: (a) an unprocessed version that needs to be presented via headphones that are equalized using an artificial ear and (b) a version that can be directly played back via any diffuse-field equalized headphones.
Binaural RIRs: The HOA RIRs were transformed into binaural RIRs in the same way as the HOA noise files (see above) and were saved both unequalized and diffuse-field equalized; a small convolution sketch for applying such an RIR follows this entry.
Basic acoustic measures: A number of basic acoustic measures are provided by a separate PDF-file for each environment, including: (a) unweighted sound pressure levels (dB SPL), (b) A-weighted sound pressure levels (dBA), (c) reverberation time (RT60), (d) third-octave power spectra in dB SPL, (e) temporal envelopes, (f) amplitude modulation spectra, and (g) directional characteristics in the horizontal plane. The acoustic measures were derived by simulating the playback of the MOA noise files (and RIRs) via a 41-channel loudspeaker array to a calibrated omni-directional 1/4” GRAS microphone (Type 46BL).
Apart from the environment-specific files, the ARTE database includes a number of Matlab™ functions that help decode the provided HOA files into a format that can be played back via a given loudspeaker array, along with a number of examples.
Further technical details are described in Weisser, et al. (2019).
Supporting material: The provided Matlab™ scripts and examples assume that the downloaded files are organized in a specific directory structure. This structure is generated automatically when downloading (and unzipping) the main zip-file (ARTE database download.7z). Note that this zip-file contains all required functions except the MOA and binaural sound files and RIRs. Due to their size (about 10 GB in total), these sound files should be downloaded, one by one, from the individual links provided below.
Some notes on calibration: All HOA noise files were normalized in the same way such that they correctly maintain their original differences in sound pressure level. Hence, once the sensitivity of the loudspeaker playback system is known, the same playback gain must be applied to all noise files. Even though this playback gain can be derived using any of the provided noise files, the easiest noise file for calibrating the loudspeaker playback system is the provided diffuse noise, due to its steady-state behavior. Given that most playback environments contain significant low-frequency background noise and loudspeakers have different low-frequency roll-offs, the provided A-weighted sound pressure levels are best used for calibration. It is also assumed that all loudspeakers in the playback array have the same distance to the listener, identical sensitivity, and a flat frequency response; if this is not the case, the loudspeakers need to be equalized individually. Also, the reverberation of the playback room should be as low as possible.
Acknowledgement: The development of the ARTE database was financially supported by the HEARing CRC, established and supported under the Cooperative Research Centres Program – an initiative of the Australian Government, and the Oticon foundation.
References: Weisser, A., Buchholz, J. M., Oreinos, C., Badajoz-Davila, J., Galloway, J., Beechey, T., Keidser, G. (2019). The Ambisonics Recordings of Typical Environments (ARTE) database. Acta Acustica united with Acustica. (see provided pdf-file) | Provide a detailed description of the following dataset: ARTE |
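As a usage illustration for the RIRs described above, the following sketch convolves a dry mono signal with a binaural RIR; the file names are placeholders, and the use of `soundfile`/`scipy` rather than the provided Matlab™ tools is an assumption for illustration:

```python
# Sketch: auralize a dry mono recording in an ARTE environment by convolving
# it with a (placeholder-named) binaural RIR from the database.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, fs = sf.read("speech_dry.wav")             # mono source signal, 1-D array
rir, fs_rir = sf.read("arte_binaural_rir.wav")  # shape (n, 2): left/right ears
assert fs == fs_rir == 44100, "ARTE files are sampled at 44.1 kHz"

# Convolve the source with each ear's impulse response.
wet = np.stack([fftconvolve(dry, rir[:, ch]) for ch in (0, 1)], axis=1)
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping
sf.write("speech_in_arte_env.wav", wet, fs)
```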
REFCAT | Internet Archive Scholar Reference Dataset. | Provide a detailed description of the following dataset: REFCAT |
Can you predict product backorder? | **Problem Statement**
Material backorder is a common problem in supply chain systems, impacting an inventory system's service level and effectiveness. Identifying the parts most likely to run short before a shortage occurs presents a significant opportunity to improve a company's overall performance. In this project, we will train classifiers to predict future back-ordered products and generate predictions for a test set; a minimal baseline sketch follows the data fields below.
**File descriptions**
Here we have two CSV files (Training_BOP.csv and Testing_BOP.csv)
Training_BOP.csv - the training set
Testing_BOP.csv - the testing set
Each file has 23 columns; the last column (went_on_backorder) is the target column.
**Data fields**
sku - sku code
national_inv - Current inventory level of component
lead_time - Transit time
in_transit_qty - Quantity in transit
forecast_x_month - Forecast sales for the next 3, 6, and 9 months
sales_x_month - Sales quantity for the prior 1, 3, 6, and 9 months
min_bank - Minimum recommended amount in stock
potential_issue - Indicator variable noting a potential issue with the item
pieces_past_due - Parts overdue from the source
perf_x_months_avg - Source performance in the last 6 and 12 months
local_bo_qty - Amount of stock orders overdue
x17-x22 - General Risk Flags
went_on_backorder - Product went on backorder (target)
Validation - indicator variable for training (0), validation (1), and test set (2) | Provide a detailed description of the following dataset: Can you predict product backorder? |
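The baseline sketch referenced above; the `Yes`/`No` target encoding, the preprocessing, and the model choice are illustrative assumptions, not an official protocol:

```python
# Minimal baseline: fit a classifier on the training rows and score it on the
# validation rows, using the Validation indicator column described above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("Training_BOP.csv")

# Target; the "Yes"/"No" encoding is an assumption about the raw file.
y = (df["went_on_backorder"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["sku", "went_on_backorder", "Validation"]))
X = X.fillna(X.median())          # e.g. lead_time may contain missing values

train = df["Validation"] == 0
val = df["Validation"] == 1

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], y[train])
print("validation AUC:", roc_auc_score(y[val], clf.predict_proba(X[val])[:, 1]))
```

Since backorders are rare, ranking metrics such as AUC are more informative here than raw accuracy.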
LAION-Aesthetics V2 6.5+ | * A subset of the LAION 5B samples with English captions, obtained using LAION-Aesthetics_Predictor V2
* 625K image-text pairs with predicted aesthetics scores of 6.5 or higher
* available at https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6.5plus | Provide a detailed description of the following dataset: LAION-Aesthetics V2 6.5+ |
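A possible way to inspect the subset, assuming it loads with the standard `datasets` streaming API (split and column names unverified):

```python
# Sketch: stream a few rows of the 6.5+ subset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("ChristophSchuhmann/improved_aesthetics_6.5plus",
                  split="train", streaming=True)
for row in ds.take(3):
    print(row)  # each row pairs an image URL and caption with its aesthetics score
```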
DFEW | Recently, facial expression recognition (FER) in the wild has gained a lot of researchers’ attention because it is a valuable topic for enabling FER techniques to move from the laboratory to real applications. In this paper, we focus on this challenging but interesting topic and make contributions from three aspects. First, we present a new large-scale ’in-the-wild’ dynamic facial expression database, DFEW (Dynamic Facial Expression in the Wild), consisting of over 16,000 video clips from thousands of movies. These video clips contain various challenging interferences in practical scenarios such as extreme illumination, occlusions, and capricious pose changes. Second, we propose a novel method called the Expression-Clustered Spatiotemporal Feature Learning (EC-STFL) framework to deal with dynamic FER in the wild. Third, we conduct extensive benchmark experiments on DFEW using a wide range of spatiotemporal deep feature learning methods as well as our proposed EC-STFL. Experimental results show that DFEW is a well-designed and challenging database, and that the proposed EC-STFL can promisingly improve the performance of existing spatiotemporal deep neural networks in coping with the problem of dynamic FER in the wild. Our DFEW database is publicly available and can be freely downloaded from https://dfew-dataset.github.io/. | Provide a detailed description of the following dataset: DFEW |
FERV39k | Current benchmarks for facial expression recognition (FER) mainly focus on static images, while there are limited datasets for FER in videos. It remains unclear whether the performance of existing methods is satisfactory in real-world, application-oriented scenes. For example, the “Happy” expression with high intensity in Talk-Show is more discriminating than the same expression with low intensity in Official-Event. To fill this gap, we build a large-scale multi-scene dataset, coined FERV39k. We analyze the important ingredients of constructing such a novel dataset in three aspects: (1) multi-scene hierarchy and expression class, (2) generation of candidate video clips, and (3) trusted manual labelling process. Based on these guidelines, we select 4 scenarios subdivided into 22 scenes, annotate 86k samples automatically obtained from 4k videos based on the well-designed workflow, and finally build 38,935 video clips labeled with 7 classic expressions. Benchmarks on four kinds of baseline frameworks are also provided, together with further analysis of their performance across different scenes and some challenges for future research. Besides, we systematically investigate key components of DFER through ablation studies. The baseline framework and our project are available on url. | Provide a detailed description of the following dataset: FERV39k |
Data and Code from: Naïve Individuals Promote Collective Exploration in Homing Pigeons. | This archive contains raw data, intermediate results, statistics, and figures for the manuscript "Naïve individuals promote collective exploration in homing pigeons"
Once unzipped, the folder structure will look as follows:
- data/ [raw data and intermediate results]
- img/ [all plots in the manuscript]
- scripts/ [source code]
See data/README.txt and scripts/main.R
Funding
NSF grant No. PHY-1505048
TWCF0316 from the Templeton World Charity Foundation’s “Diverse Intelligences” scheme | Provide a detailed description of the following dataset: Data and Code from: Naïve Individuals Promote Collective Exploration in Homing Pigeons. |
GoodsAD | The GoodsAD dataset contains 6,124 images covering 6 categories of common supermarket goods, with multiple goods per category. All images are acquired at a high resolution of 3000 × 3000 pixels. The object locations in the images are not aligned; most objects are in the center of the image, and each image contains only a single object. Most anomalies occupy only a small fraction of the image pixels. Both image-level and pixel-level annotations are provided.
Each image is named with 6 digits, the first three representing the product category and the last three the serial number. The dataset format is the same as MVTec AD. | Provide a detailed description of the following dataset: GoodsAD |
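A small sketch of the naming convention just described (the example file name is hypothetical):

```python
# Split a GoodsAD file name into its product category (first three digits)
# and serial number (last three digits).
from pathlib import Path

def parse_goodsad_name(path: str) -> tuple[str, str]:
    stem = Path(path).stem
    return stem[:3], stem[3:]

category, serial = parse_goodsad_name("012034.jpg")
print(category, serial)  # -> 012 034
```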