| dataset_name | description | prompt |
|---|---|---|
ContractNLI | ContractNLI is a dataset for document-level natural language inference (NLI) on contracts whose goal is to automate/support the time-consuming procedure of contract review. In this task, a system is given a set of hypotheses (such as “Some obligations of Agreement may survive termination.”) and a contract, and it is asked to classify whether each hypothesis is entailed by, contradicted by, or not mentioned by (neutral to) the contract, as well as to identify evidence for the decision as spans in the contract.
ContractNLI is the first dataset to utilize NLI for contracts and is also the largest corpus of annotated contracts (as of September 2021). ContractNLI is an interesting challenge to work on from a machine learning perspective (the label distribution is imbalanced and it is naturally multi-task, all the while training data being scarce) and from a linguistic perspective (linguistic characteristics of contracts, particularly negations by exceptions, make the problem difficult). | Provide a detailed description of the following dataset: ContractNLI |
Data Science Problems | Evaluate a natural language code generation model on real data science pedagogical notebooks! Data Science Problems (DSP) includes well-posed data science problems in Markdown along with unit tests to verify correctness and a Docker environment for reproducible execution. About 1/3 of notebooks in this benchmark also include data dependencies, so this benchmark not only can test a model's ability to chain together complex tasks, but also evaluate the solutions on real data! See our paper [Training and Evaluating a Jupyter Notebook Data Science Assistant](https://arxiv.org/abs/2201.12901) for more details about state of the art results and other properties of the dataset. | Provide a detailed description of the following dataset: Data Science Problems |
Visual Fields | 28,943 Humphrey Visual Field (HVF) tests from 3,871 patients and 7,428 eyes.
This file contains sensitivity values, TD values, age, laterality (left or right eye) and gender when specified. Sensitivity and TD values are stored both in long format (as a vector) and as an 8 x 9 matrix. The latter preserves the original spatial organization of the data, which is particularly useful for spatially aware processing often employed in machine learning. All visual field data are stored in right-eye orientation. Empty matrix cells are filled with a fixed value (100).
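As a minimal sketch (the helper, toy values, and row-major layout here are assumptions for illustration, not part of the published schema), a long-format vector can be unpacked into the 8 x 9 matrix like this:

```python
# Sketch: reshape a long-format visual field vector into the 8 x 9 spatial
# matrix, mapping the fixed fill value (100) for empty cells to None.
# The row-major layout is an assumption for illustration.
FILL = 100

def to_matrix(values, rows=8, cols=9):
    assert len(values) == rows * cols
    return [[None if v == FILL else v
             for v in values[r * cols:(r + 1) * cols]]
            for r in range(rows)]

# Toy vector: all "empty" except one tested location.
vec = [FILL] * 72
vec[10] = 27.5
grid = to_matrix(vec)
```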
Institution: University of Washington
Data Collection: between 1998 and 2018.
Please cite: [Giovanni Montesano, Andrew Chen, Randy Lu, Cecilia S. Lee, Aaron Y. Lee; UWHVF: A Real-World, Open Source Dataset of Perimetry Tests From the Humphrey Field Analyzer at the University of Washington. Trans. Vis. Sci. Tech. 2022;11(1):2. doi: https://doi.org/10.1167/tvst.11.1.1.](https://tvst.arvojournals.org/article.aspx?articleid=2778219) | Provide a detailed description of the following dataset: Visual Fields |
UI5k | This dataset contains 54,987 UI screenshots and the metadata from 7,748 Android applications belonging to 25 application categories
Download link: [https://www.dropbox.com/sh/kfkhevxykzwputb/AAAhL6ipmOg4zZn4jUL_myF0a?dl=0](https://www.dropbox.com/sh/kfkhevxykzwputb/AAAhL6ipmOg4zZn4jUL_myF0a?dl=0) | Provide a detailed description of the following dataset: UI5k |
NEWSKVQA | **NEWSKVQA** is a new dataset of 12K news videos spanning across 156 hours with 1M multiple-choice question-answer pairs covering 8263 unique entities. | Provide a detailed description of the following dataset: NEWSKVQA |
Drone vs Bird | For the Drone-vs-Bird Detection Challenge 2021, 77 different video sequences have been made available as training data. These video sequences originate from the previous installment of the challenge and were collected using MPEG4-coded static cameras by the SafeShore project, by the Fraunhofer IOSB research institute and by the ALADDIN2 project. On average, the video sequences consist of 1,384 frames, while each frame contains 1.12 annotated drones. The video sequences are recorded with both static and moving cameras, and the resolution varies between 720×576 and 3840×2160 pixels. In total, 8 different types of drones exist in the dataset, i.e. 3 with fixed wings and 5 rotary ones. For each video, a separate annotation file is provided, which contains the frame number and the bounding box (expressed as [topx topy width height]) for the frames in which drones enter the scenes. | Provide a detailed description of the following dataset: Drone vs Bird |
DELAUNAY | **DELAUNAY** is a dataset of abstract paintings and non-figurative art objects labelled by the artists' names. This dataset provides a middle ground between natural images and artificial patterns and can thus be used in a variety of contexts, for example to investigate the sample efficiency of humans and artificial neural networks.
The dataset comprises 11,503 images from 53 categories, i.e. artists (mean number of images per artist: 217.04; standard deviation: 58.55), along with the associated URLs. These samples are split between a training set of 9202 images and a test set of 2301 images. | Provide a detailed description of the following dataset: DELAUNAY |
Medical Question Pairs | # Medical Question Pairs (MQP) Dataset
This repository contains a dataset of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. The dataset is described in detail in [our paper](https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view).
## Methodology
We present our doctors with a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
1. Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response.
e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
2. Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a _positive_ question pair (similar) and the second generates a _negative_ question pair (different). With the above instructions, we intentionally frame the task such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
## Dataset format
The dataset is formatted as `dr_id, question_1, question_2, label`. We used 11 different doctors for this task so `dr_id` ranges from 1 to 11. The label is 1 if the question pair is similar and 0 otherwise.
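A minimal sketch of reading rows in this format (the two toy rows below are invented for illustration only, not taken from the released file):

```python
import csv
import io

# Sketch: parse `dr_id, question_1, question_2, label` rows.
# The toy rows are illustrative assumptions, not real dataset entries.
sample = (
    "1,Are fibroadenomas malignant?,Can a fibroadenoma become cancerous?,1\n"
    "2,Are fibroadenomas malignant?,Is surgery needed for a benign lump?,0\n"
)

pairs = []
for dr_id, q1, q2, label in csv.reader(io.StringIO(sample)):
    pairs.append({"dr_id": int(dr_id), "question_1": q1,
                  "question_2": q2, "similar": label == "1"})
```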
## Dataset statistics
The final dataset contains 4567 unique questions. The minimum, maximum, median and average number of tokens in these questions are 4, 81, 20 and 22.675 respectively, showing that there is reasonable variance in the length of the questions. The shortest question is `Are fibroadenomas malignant?`
An off-the-shelf medical entity recognizer finds around 1000 unique medical entities in the questions. Some of the top entity mentions were: `physician, pregnancy, pain, lasting weeks, menstruation, emotional state, cancer, visual function, headache, bleeding, fever, sexual intercourse` | Provide a detailed description of the following dataset: Medical Question Pairs |
InfiniteRep | InfiniteRep is a synthetic, open-source dataset for fitness and physical therapy (PT) applications. It includes 1k videos of diverse avatars performing multiple repetitions of common exercises. It includes significant variation in the environment, lighting conditions, avatar demographics, and movement trajectories. From cadence to kinematic trajectory, each rep is done slightly differently -- just like real humans. InfiniteRep videos are accompanied by a rich set of pixel-perfect labels and annotations, including frame-specific repetition counts.
The dataset features:
+ 100 videos per exercise, spanning 5 to 10 repetitions each (1,000 videos total)
+ 7 unique indoor scenes
+ Realistic environmental occlusion (+ corresponding labels)
+ Diverse lighting conditions
+ Varied body shape, skin tones, and clothing
+ Rich annotations for 2D and 3D supervision
## Exercises
The dataset currently includes the following exercises:
+ Pushups
+ Alternating Bicep Curls (with dumbbells)
+ Delt Flys (with dumbbells)
+ Squats
+ Bird Dogs
+ Supermans
+ Bicycle Crunches
+ Leg Raises
+ Front Raises (with dumbbells)
+ Overhead Press (with dumbbells)
## Annotations
The dataset includes the following annotations:
+ Bounding boxes
+ Segmentation masks
+ Keypoints
+ Joint angles (quaternions)
+ Percent occlusion
+ Avatar characteristics
+ Camera position
+ and more
Want depth labels? They are not included in the dataset but we can send them to you. Email us at info@toinfinity.ai.
## Download
Download the dataset: [toinfinity.ai/infiniterep](https://toinfinity.ai/infiniterep)
Github repo with additional documentation: [https://github.com/toinfinityai/InfiniteRep](https://github.com/toinfinityai/InfiniteRep)
## Need more data?
Infinity AI specializes in generating custom synthetic data. If you need more (or different data), drop us a line at info@toinfinity.ai (we read every email). | Provide a detailed description of the following dataset: InfiniteRep |
Extended heartSeg | The dataset X of this work is an extension of the heartSeg dataset. Each sample x ∈ X is an RGB image capturing the heart region of Medaka (Oryzias latipes) hatchlings from a constant ventral view. Since the body of Medaka is see-through, noninvasive studies of the internal organs and the whole circulatory system are practicable. A Medaka’s heart contains three parts: the atrium, the ventricle, and the bulbus. The atrium receives deoxygenated blood from the circulatory system and delivers it to the ventricle, which forwards it into the bulbus. The bulbus is the heart’s exit chamber and provides the gill arches with a constant blood flow. The blood flow through these three chambers was captured in 63 short recordings (around 11 seconds at 24 frames per second each) in total, from which the single image samples x ∈ X are extracted. The dataset is split into training and test data following the heartSeg dataset, with ntrain = 565 samples in the training set Xtrain and ntest = 165 samples in the test set Xtest. The RGB image samples have a resolution of 640 × 480 pixels. | Provide a detailed description of the following dataset: Extended heartSeg |
Real spreading processes in multilayer networks | The presented data contain the record of five spreading campaigns that occurred in a virtual world platform. Users distributed avatars between each other during the campaigns. The processes varied in time and range and were either incentivized or not incentivized. Campaign data is accompanied by events. The data can be used to build a multilayer network to place the campaigns in a wider context. To the best of the authors' knowledge, this is the first publicly available dataset containing a complete real multilayer social network together with five complete spreading processes in it.
Full description available in Jankowski, J., Michalski, R., & Bródka, P. (2017). A multilayer network dataset of interaction and influence spreading in a virtual world. Scientific data, 4(1), 1-9. https://www.nature.com/articles/sdata2017144 | Provide a detailed description of the following dataset: Real spreading processes in multilayer networks |
IndicGLUE | We now introduce IndicGLUE, the Indic General Language Understanding Evaluation Benchmark, which is a collection of various NLP tasks as described below. The goal is to provide an evaluation benchmark for natural language understanding capabilities of NLP models on diverse tasks and multiple Indian languages. | Provide a detailed description of the following dataset: IndicGLUE |
Natural Sprites | This csv consists of (x-position, y-position, area) tuples of three views (left, middle, right) of downscaled binary masks with aspect ratio kept (64 x 128) from the 2019 YouTube-VIS challenge, which can be found at https://competitions.codalab.org/competitions/20127#participate-get-data. Extracting pairs from this csv results in 234,652 transitions in the given statistics. These statistics can be used to augment ground truth factor distributions with natural transitions, which we demonstrate with spriteworld. For details, we refer to our paper, which can be found at https://openreview.net/forum?id=EbIDjBynYJ8. | Provide a detailed description of the following dataset: Natural Sprites |
KITTI-Masks | This dataset consists of 2120 sequences of binary masks of pedestrians. The sequence length varies between 2 and 710 frames. For details, we refer to our paper. It is based on the original KITTI segmentation challenge, which can be found at https://www.vision.rwth-aachen.de/page/mots
A detailed description can be found at: https://openreview.net/pdf?id=EbIDjBynYJ8
An example dataloader can be found at:
https://github.com/bethgelab/slow_disentanglement/ | Provide a detailed description of the following dataset: KITTI-Masks |
3DIdent | A novel benchmark that features aspects of natural scenes, e.g. a complex 3D object and different lighting conditions, while still providing access to the continuous ground-truth factors.
We use the Blender rendering engine to create visually complex 3D images. Each image in the dataset shows a colored 3D object which is located and rotated above a colored ground in a 3D space. Additionally, each scene contains a colored spotlight which is focused on the object and located on a half-circle around the scene. The observations are encoded with an RGB color space, and the spatial resolution is 224x224 pixels.
The images are rendered based on a 10-dimensional latent, where: (1) three dimensions describe the XYZ position, (2) three dimensions describe the rotation of the object in Euler angles, (3) two dimensions describe the color of the object and the ground of the scene, respectively, and (4) two dimensions describe the position and color of the spotlight. We use the HSV color space to describe the color of the object and the ground with only one latent each by having the latent factor control the hue value.
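The 10-dimensional latent layout can be sketched as follows (the dimension names and their ordering here are assumptions for illustration; the released data defines the exact layout):

```python
import random

# Sketch: sample a 3DIdent-style latent uniformly from the unit
# hyperrectangle. Dimension names are illustrative assumptions.
LATENT_DIMS = (
    ["pos_x", "pos_y", "pos_z"]            # object XYZ position
    + ["rot_a", "rot_b", "rot_g"]          # object rotation (Euler angles)
    + ["hue_object", "hue_ground"]         # HSV hue of object and ground
    + ["spot_pos", "spot_hue"]             # spotlight position and hue
)

def sample_latent(rng):
    return {name: rng.uniform(0.0, 1.0) for name in LATENT_DIMS}

z = sample_latent(random.Random(0))
```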
The training set and test set contain 250,000 and 25,000 observation-latent pairs, respectively, whereby the latents are uniformly sampled from the unit hyperrectangle. | Provide a detailed description of the following dataset: 3DIdent |
Causal3DIdent | Update on 3DIdent, where we introduce six additional object classes (Hare, Dragon, Cow, Armadillo, Horse, and Head), and impose a causal graph over the latent variables. For further details, see Appendix B in the associated paper (https://arxiv.org/abs/2106.04619). | Provide a detailed description of the following dataset: Causal3DIdent |
CENTER-TBI | The CENTER-TBI database contains prospectively collected data of more than 4,500 patients with TBI in Europe. The Registry and Acute Care data has been collected during a 3 years’ period (2015-2017) in 65 centers in Europe. For all patients, outcome data has been collected up to 2 years after injury.
The CENTER-TBI investigators welcome all forms of collaboration and data sharing. Interested scientists may obtain access to the CENTER-TBI clinical, imaging, high resolution ICU and biomarker data for the purposes of scientific investigation, teaching or planning clinical research studies. Obtaining access to and using CENTER-TBI data requires adherence to the CENTER-TBI Data Use Agreement and harmonized procedures for the data access requests as outlined in the documents listed below.
The application process includes submission of an online application form. The application must include the investigator’s institutional affiliation and the proposed uses of the CENTER-TBI data. CENTER-TBI data may not be used for commercial products or redistributed in any way. | Provide a detailed description of the following dataset: CENTER-TBI |
PCFG SET | The Probabilistic Context Free Grammar String Edit Task (PCFG SET) dataset is a dataset with sequence to sequence problems specifically designed to test different aspects of **compositional generalisation**. In particular, the dataset contains splits to test for *systematicity*, *productivity*, *substitutivity*, *localism* and *overgeneralisation*.
The input alphabet of PCFG SET contains three types of words: words for unary and binary functions that represent *string edit operations* (e.g. `append`, `copy`, `reverse`), elements to form the string sequences that these functions can be applied to (e.g. `A`, `B`, `A1`, `B1`), and a separator to separate the arguments of a binary function (`,`). The input sequences that are formed with this alphabet are sequences describing how a series of such operations are to be applied to a string argument. For instance:
- `repeat A B C`
- `echo remove_first D K , E F`
- `append swap F G H , repeat I J`
The input sequences are generated with a PCFG, whose production probabilities are learned with EM to match the depth and length distributions in a corpus with English sentences.
The output of a PCFG SET sequence, representing its meaning, is constructed by recursively applying the string edit operations specified in the sequence. For instance:
- `repeat A B C` → `A B C A B C`
- `echo remove_first D K , E F` → `E F F`
- `append swap F G H , repeat I J` → `H G F I J I J`
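The semantics implied by these examples can be sketched with a few toy Python functions. These definitions are inferred from the worked examples above, not taken from the official release (which defines more operations); sequences are represented as lists of tokens:

```python
# Toy string edit operations with semantics inferred from the examples.
def copy(x):
    return list(x)                     # unary: identity

def reverse(x):
    return x[::-1]                     # unary: reverse the sequence

def repeat(x):
    return x + x                       # unary: concatenate with itself

def echo(x):
    return x + x[-1:]                  # unary: repeat the last token

def swap(x):
    return x[-1:] + x[1:-1] + x[:1]    # unary: swap first and last tokens

def remove_first(x, y):
    return list(y)                     # binary: drop the first argument

def append(x, y):
    return x + y                       # binary: concatenate both arguments

# Reproduce the three documented input/output pairs:
out1 = repeat("A B C".split())
out2 = echo(remove_first("D K".split(), "E F".split()))
out3 = append(swap("F G H".split()), repeat("I J".split()))
```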
The string alphabet used for the construction of the dataset has 520 distinct elements, and the length of the string arguments to a function is limited to 5. The dataset contains around 100 thousand examples in total. A full description of the dataset can be found in Hupkes et al. (2020). | Provide a detailed description of the following dataset: PCFG SET |
deepMTJ_IEEEtbme | This dataset comprises 1344 expert annotated images of muscle-tendon junctions recorded with 3 ultrasound imaging systems (Aixplorer V6, Esaote MyLab60, Telemed ArtUs), on 2 muscles (Lateral Gastrocnemius, Medial Gastrocnemius), and 2 movements (isometric maximum voluntary contractions, passive torque movements). | Provide a detailed description of the following dataset: deepMTJ_IEEEtbme |
EquiBind data | The protein-ligand complexes of PDBBind v2020 preprocessed as described in the paper "EquiBind: Geometric Deep Learning for Drug Binding Structure Prediction" with associated code at https://github.com/HannesStark/EquiBind
Contained are 19,119 of PDBBind's total 19,433 protein-ligand complexes. Excluded are those for which the ligand files could not be loaded using RDKit.
Paper Abstract:
Predicting how a drug-like molecule binds to a specific protein target is a core problem in drug discovery. An extremely fast computational binding method would enable key applications such as fast virtual screening or drug engineering. Existing methods are computationally expensive as they rely on heavy candidate sampling coupled with scoring, ranking, and fine-tuning steps. We challenge this paradigm with EQUIBIND, an SE(3)-equivariant geometric deep learning model performing direct-shot prediction of both i) the receptor binding location (blind docking) and ii) the ligand’s bound pose and orientation. EquiBind achieves significant speed-ups and better quality compared to traditional and recent baselines. Further, we show extra improvements when coupling it with existing fine-tuning techniques at the cost of increased running time. Finally, we propose a novel and fast fine-tuning model that adjusts torsion angles of a ligand’s rotatable bonds based on closed-form global minima of the von Mises angular distance to a given input atomic point cloud, avoiding previous expensive differential evolution strategies for energy minimization. | Provide a detailed description of the following dataset: EquiBind data |
SoundDescs | We introduce a new audio dataset called SoundDescs that can be used for tasks such as text-to-audio retrieval and audio captioning. This dataset contains 32,979 pairs of audio files and text descriptions. There are 23 categories in SoundDescs, including but not limited to nature, clocks, and fire.
SoundDescs can be downloaded from [here](https://github.com/akoepke/audio-retrieval-benchmark) and retrieval results for this dataset can be found in the associated paper [Audio Retrieval with Natural Language Queries: A Benchmark Study](https://arxiv.org/pdf/2112.09418.pdf). | Provide a detailed description of the following dataset: SoundDescs |
C3D features for PHD2GIF | The feature files are named with the youtube IDs.
https://drive.google.com/drive/folders/10-6hkQxMKMGwLXANxfPRE7xw5PKiMjLn?usp=sharing | Provide a detailed description of the following dataset: C3D features for PHD2GIF |
Rice Dataset Commeo and Osmancik | Data Set Name: Rice Dataset (Cammeo and Osmancik)
Abstract: A total of 3,810 images of rice grains were taken of the two species (Cammeo and Osmancik) and processed, and feature inferences were made. 7 morphological features were obtained for each grain of rice. | Provide a detailed description of the following dataset: Rice Dataset Commeo and Osmancik |
Rice Image Dataset | Citation Request: See the articles for more detailed information on the data.
Koklu, M., Cinar, I., & Taspinar, Y. S. (2021). Classification of rice varieties with deep learning methods. Computers and Electronics in Agriculture, 187, 106285. https://doi.org/10.1016/j.compag.2021.106285
Cinar, I., & Koklu, M. (2021). Determination of Effective and Specific Physical Features of Rice Varieties by Computer Vision In Exterior Quality Inspection. Selcuk Journal of Agriculture and Food Sciences, 35(3), 229-243. https://doi.org/10.15316/SJAFS.2021.252
Cinar, I., & Koklu, M. (2022). Identification of Rice Varieties Using Machine Learning Algorithms. Journal of Agricultural Sciences. https://doi.org/10.15832/ankutbd.862482
Cinar, I., & Koklu, M. (2019). Classification of Rice Varieties Using Artificial Intelligence Methods. International Journal of Intelligent Systems and Applications in Engineering, 7(3), 188-194. https://doi.org/10.18201/ijisae.2019355381
https://www.kaggle.com/mkoklu42
DATASET: https://www.muratkoklu.com/datasets/ | Provide a detailed description of the following dataset: Rice Image Dataset |
Grapevine Leaves Image Dataset | KOKLU Murat (a), UNLERSEN M. Fahri (b), OZKAN Ilker Ali (a), ASLAN M. Fatih(c), SABANCI Kadir (c)
(a) Department of Computer Engineering, Selcuk University, Turkey, Konya, Turkey
(b) Department of Electrical and Electronics Engineering, Necmettin Erbakan University, Konya, Turkey
(c) Department of Electrical-Electronic Engineering, Karamanoglu Mehmetbey University, Karaman, Turkey
Citation Request :
Koklu, M., Unlersen, M. F., Ozkan, I. A., Aslan, M. F., & Sabanci, K. (2022). A CNN-SVM study based on selected deep features for grapevine leaves classification. Measurement, 188, 110425. Doi:https://doi.org/10.1016/j.measurement.2021.110425
Link: https://doi.org/10.1016/j.measurement.2021.110425
https://www.kaggle.com/mkoklu42
DATASET: https://www.muratkoklu.com/datasets/
Highlights
• Classification of five classes of grapevine leaves by MobileNetv2 CNN Model.
• Classification of features using SVMs with different kernel functions.
• Implementing a feature selection algorithm for high classification percentage.
• Classification with highest accuracy using CNN-SVM Cubic model.
Abstract: The main product of grapevines is grapes that are consumed fresh or processed. In addition, grapevine leaves are harvested once a year as a by-product. The species of grapevine leaves are important in terms of price and taste. In this study, deep learning-based classification is conducted by using images of grapevine leaves. For this purpose, images of 500 vine leaves belonging to 5 species were taken with a special self-illuminating system. Later, this number was increased to 2500 with data augmentation methods. The classification was conducted with a state-of-the-art CNN model, fine-tuned MobileNetv2. As the second approach, features were extracted from the pre-trained MobileNetv2’s Logits layer and classification was made using various SVM kernels. As the third approach, 1000 features extracted from MobileNetv2’s Logits layer were selected by the Chi-Squares method and reduced to 250. Then, classification was made with various SVM kernels using the selected features. The most successful method was obtained by extracting features from the Logits layer and reducing the features with the Chi-Squares method. The most successful SVM kernel was Cubic. The classification success of the system was determined as 97.60%. It was observed that feature selection increased the classification success although the number of features used in classification decreased.
Keywords: Deep learning, Transfer learning, SVM, Grapevine leaves, Leaf identification | Provide a detailed description of the following dataset: Grapevine Leaves Image Dataset |
Acoustic Extinguisher Fire Dataset | Yavuz Selim TASPINAR, Murat KOKLU and Mustafa ALTIN
Citation Request :
1: KOKLU M., TASPINAR Y.S., (2021). Determining the Extinguishing Status of Fuel Flames With Sound Wave by Machine Learning Methods. IEEE Access, 9, pp.86207-86216, Doi: 10.1109/ACCESS.2021.3088612
Link: https://ieeexplore.ieee.org/document/9452168 (Open Access)
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9452168
2: TASPINAR Y.S., KOKLU M., ALTIN M., (2021). Classification of Flame Extinction Based on Acoustic Oscillations using Artificial Intelligence Methods. Case Studies in Thermal Engineering, 28, 101561, Doi: 10.1016/j.csite.2021.101561
Link: https://www.sciencedirect.com/science/article/pii/S2214157X21007243 (Open Access) https://www.sciencedirect.com/sdfe/reader/pii/S2214157X21007243/pdf
3: TASPINAR Y.S., KOKLU M., ALTIN M., (2022). Acoustic-Driven Airflow Flame Extinguishing System Design and Analysis of Capabilities of Low Frequency in Different Fuels. Fire Technology, Doi: 10.1007/s10694-021-01208-9
Link: https://link.springer.com/content/pdf/10.1007/s10694-021-01208-9.pdf
https://www.kaggle.com/mkoklu42
DATASET: https://www.muratkoklu.com/datasets/
SHORT DESCRIPTION: The dataset was obtained as a result of extinguishing tests of four different fuel flames with a sound wave extinguishing system. The sound wave fire-extinguishing system consists of 4 subwoofers with a total power of 4,000 Watt placed in the collimator cabinet. Two amplifiers boost the sound delivered to these subwoofers. The power supply that powers the system and the filter circuit that ensures the sound frequencies are properly transmitted to the system are located within the control unit. A computer was used as the frequency source, an anemometer was used to measure the airflow resulting from the sound waves during the extinguishing phase of the flame, and a decibel meter was used to measure the sound intensity. An infrared thermometer was used to measure the temperature of the flame and the fuel can, and a camera was installed to detect the extinction time of the flame. A total of 17,442 tests were conducted with this experimental setup. The experiments were planned as follows:
1. Three different liquid fuels and LPG fuel were used to create the flame.
2. 5 different sizes of liquid fuel cans are used to achieve different size of flames.
3. Half and full gas adjustment is used for LPG fuel.
4. In each experiment, the fuel container, starting at a distance of 10 cm, was moved forward up to 190 cm in 10 cm increments.
5. Along with the fuel container, anemometer and decibel meter were moved forward in the same dimensions.
6. Fire extinguishing experiments was conducted with 54 different frequency sound waves at each distance and flame size.
Throughout the flame extinguishing experiments, the data obtained from each measurement device was recorded and a dataset was created. The dataset includes the features of fuel container size representing the flame size, fuel type, frequency, decibel, distance, airflow and flame extinction. Accordingly, 6 input features and 1 output feature will be used in models. The explanation of a total of seven features for liquid fuels in the dataset is given in Table 1, and the explanation of 7 features for LPG fuel is given in Table 2.
The status property (flame extinction or non-extinction) can be predicted using the six input features in the dataset. The status and fuel features are categorical, while the other features are numerical. 8,759 of the 17,442 test results correspond to the non-extinction state of the flame, and 8,683 to the extinction state. According to these numbers, it can be said that the class distribution of the dataset is almost equal.
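A minimal sketch of handling such records (the column names and toy values here are assumptions for illustration; see the cited articles for the released file's exact schema):

```python
from collections import Counter

# Toy test records: 6 input features plus the STATUS output feature
# (1 = flame extinguished, 0 = not extinguished). Names and values are
# illustrative assumptions, not taken from the released file.
records = [
    {"SIZE": 1, "FUEL": "gasoline", "DISTANCE": 10, "DESIBEL": 96,
     "AIRFLOW": 2.6, "FREQUENCY": 75, "STATUS": 1},
    {"SIZE": 5, "FUEL": "lpg", "DISTANCE": 190, "DESIBEL": 80,
     "AIRFLOW": 0.0, "FREQUENCY": 4, "STATUS": 0},
]

class_counts = Counter(r["STATUS"] for r in records)
features = [{k: v for k, v in r.items() if k != "STATUS"} for r in records]
```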
KEYWORDS: Fire, Extinguishing System, Sound wave, Machine learning, Fire safety, Low frequency, Acoustic | Provide a detailed description of the following dataset: Acoustic Extinguisher Fire Dataset |
Adult Data Set | Data Set Information:
Extraction was done by Barry Becker from the 1994 Census database. A set of reasonably clean records was extracted using the following conditions: ((AAGE>16) && (AGI>100) && (AFNLWGT>1) && (HRSWK>0))
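The extraction condition can be sketched as a simple row filter (field names follow the abbreviations in the condition above; the toy rows are illustrative assumptions):

```python
# Sketch: apply the stated extraction conditions
# ((AAGE>16) && (AGI>100) && (AFNLWGT>1) && (HRSWK>0)) to census-style rows.
def keep(row):
    return (row["AAGE"] > 16 and row["AGI"] > 100
            and row["AFNLWGT"] > 1 and row["HRSWK"] > 0)

rows = [
    {"AAGE": 39, "AGI": 77516, "AFNLWGT": 83311, "HRSWK": 40},  # kept
    {"AAGE": 12, "AGI": 0, "AFNLWGT": 1, "HRSWK": 0},           # dropped
]
clean = [r for r in rows if keep(r)]
```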
Prediction task is to determine whether a person makes over 50K a year. | Provide a detailed description of the following dataset: Adult Data Set |
CVR | This data set includes votes for each of the U.S. House of Representatives Congressmen on the 16 key votes identified by the CQA. The CQA lists nine different types of votes: voted for, paired for, and announced for (these three simplified to yea), voted against, paired against, and announced against (these three simplified to nay), voted present, voted present to avoid conflict of interest, and did not vote or otherwise make a position known (these three simplified to an unknown disposition). | Provide a detailed description of the following dataset: CVR |
MMLU | **MMLU** (**Massive Multitask Language Understanding**) is a new benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings. This makes the benchmark more challenging and more similar to how we evaluate humans. The benchmark covers 57 subjects across STEM, the humanities, the social sciences, and more. It ranges in difficulty from an elementary level to an advanced professional level, and it tests both world knowledge and problem solving ability. Subjects range from traditional areas, such as mathematics and history, to more specialized areas like law and ethics. The granularity and breadth of the subjects makes the benchmark ideal for identifying a model’s blind spots.
Image source: [https://arxiv.org/pdf/2009.03300v3.pdf](https://arxiv.org/pdf/2009.03300v3.pdf) | Provide a detailed description of the following dataset: MMLU |
Cyberbullying Classification | As social media usage becomes increasingly prevalent in every age group, a vast majority of citizens rely on this essential medium for day-to-day communication. Social media’s ubiquity means that cyberbullying can effectively impact anyone at any time or anywhere, and the relative anonymity of the internet makes such personal attacks more difficult to stop than traditional bullying.
On April 15th, 2020, UNICEF issued a warning in response to the increased risk of cyberbullying during the COVID-19 pandemic due to widespread school closures, increased screen time, and decreased face-to-face social interaction. The statistics of cyberbullying are outright alarming: 36.5% of middle and high school students have felt cyberbullied and 87% have observed cyberbullying, with effects ranging from decreased academic performance to depression to suicidal thoughts.
In light of all of this, this dataset contains more than 47000 tweets labelled according to the class of cyberbullying:
- Age
- Ethnicity
- Gender
- Religion
- Other type of cyberbullying
- Not cyberbullying
The data has been balanced in order to contain ~8000 of each class.
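Verifying the ~8,000-per-class balance can be sketched as follows (the column name `cyberbullying_type` is an assumption about the CSV layout, not a documented field):

```python
import csv
import io
from collections import Counter

def class_counts(csv_text, label_col="cyberbullying_type"):
    """Count tweets per cyberbullying class from CSV text.
    The label column name is assumed, not guaranteed by the dataset."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[label_col] for row in reader)
```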
Trigger Warning: These tweets either describe a bullying event or are themselves the offense, so explore the data only to the point where you feel comfortable. | Provide a detailed description of the following dataset: Cyberbullying Classification
Research Artifact - GitHub Sponsors: Exploring a New Way to Contribute to Open Source | This is a research artifact for the ICSE'22 paper "**GitHub Sponsors: Exploring a New Way to Contribute to Open Source**". The following three research questions were constructed to guide the study.
* RQ1: *Who participates in GitHub Sponsors?*
* RQ1.1: *What are the characteristics of sponsored developers?*
* RQ1.2: *What are the characteristics of sponsors?*
* RQ2: *What characteristics make developers more likely to receive sponsorship?*
* RQ3: *What are developers' perceived challenges and benefits related to sponsoring?*
  * RQ3.1: *Why are developers looking for sponsors?*
* RQ3.2: *What is the impact of (not) getting sponsorship?*
* RQ3.3: *Why are developers sponsoring?*
This artifact is a repository including lists of studied repositories on GitHub, a dataset for the network diagram for answering RQ1, the features for sponsored and non-sponsored developers for RQ2, the features for sponsors for RQ2, and survey material and coding of responses for RQ3. | Provide a detailed description of the following dataset: Research Artifact - GitHub Sponsors: Exploring a New Way to Contribute to Open Source |
PETRAW | PETRAW data set was composed of 150 sequences of peg transfer training sessions. The objective of the peg transfer session is to transfer 6 blocks from the left to the right and back. Each block must be extracted from a peg with one hand, transferred to the other hand, and inserted in a peg at the other side of the board.
All cases were acquired by a non-medical expert on the LTSI Laboratory from the University of Rennes. The data set was divided into a training data set composed of 90 cases and a test data set composed of 60 cases. A case was composed of kinematic data, a video, semantic segmentation of each frame, and workflow annotation. | Provide a detailed description of the following dataset: PETRAW |
ProteinKG25 | ProteinKG25 is a large-scale KG dataset with aligned descriptions and protein sequences respectively to GO terms and proteins entities. ProteinKG25 contains 4,990,097 triplets (4,879,951 Protein-GO triplets and 110,146 GO-GO triplets), 612,483 entities (565,254 proteins and 47,229 GO terms) and 31 relations. | Provide a detailed description of the following dataset: ProteinKG25 |
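A sketch of separating ProteinKG25's two triplet types, assuming GO entities are identified by a `GO:` prefix (an illustrative convention, not necessarily the file's actual format):

```python
def split_triplets(triplets):
    """Separate Protein-GO from GO-GO triplets, assuming GO term ids start
    with 'GO:' (an assumed naming convention for illustration)."""
    protein_go, go_go = [], []
    for head, rel, tail in triplets:
        (go_go if head.startswith("GO:") else protein_go).append((head, rel, tail))
    return protein_go, go_go
```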
SHERLOCK | **SHERLOCK** is a corpus of 363K commonsense inferences grounded in 103K images. Annotators highlight localized clues (color bubbles) and draw plausible abductive inferences about them (speech bubbles). It can be used for testing machine capacity for abductive reasoning beyond literal image contents. | Provide a detailed description of the following dataset: SHERLOCK |
Deep Soccer Captioning | **Deep Soccer Captioning** is a dataset consists of 22k caption-clip pairs and three visual features (images, optical flow, inpainting) for 500 hours of SoccerNet videos. | Provide a detailed description of the following dataset: Deep Soccer Captioning |
ASC (TIL, 19 tasks) | A set of 19 ASC datasets (reviews of 19 products) producing a sequence of 19 tasks. Each dataset represents a task. The datasets are from 4 sources: (1) HL5Domains (Hu and Liu, 2004) with reviews of 5 products; (2) Liu3Domains (Liu et al., 2015) with reviews of 3 products; (3) Ding9Domains (Ding et al., 2008) with reviews of 9 products; and (4) SemEval14 with reviews of 2 products - SemEval 2014 Task 4 for laptop and restaurant. For (1), (2) and (3), we use about 10% of the original data as validation data and another 10% as test data. For (4), we use 150 examples from the training set for validation. To be consistent with existing research (Tang et al., 2016), examples belonging to the conflicting polarity (both positive and negative sentiments are expressed about an aspect term) are not used. Statistics and details of the 19 datasets are given at https://github.com/ZixuanKe/PyContinual. | Provide a detailed description of the following dataset: ASC (TIL, 19 tasks) |
ArgSciChat | **ArgSciChat** is an argumentative dialogue dataset. It consists of 498 messages collected from 41 dialogues on 20 scientific papers. It can be used to evaluate conversational agents and further encourage research on argumentative scientific agents. | Provide a detailed description of the following dataset: ArgSciChat |
Dynamic OLAT Dataset | To provide ground truth supervision for video consistency modeling, we build up a high-quality dynamic OLAT dataset.
Our capture system consists of a light stage setup with 114 LED light sources and Phantom Flex4K-GS camera (global shutter, stationary 4K ultra-high-speed camera at 1000 fps), resulting in dynamic OLAT imageset recording at 25 fps using the overlapping method.
Our dynamic OLAT dataset provides sufficient semantic, temporal and lighting consistency supervision to train our neural video portrait relighting scheme, which can generalize to in-the-wild scenarios. | Provide a detailed description of the following dataset: Dynamic OLAT Dataset |
USR-TopicalChat | This dataset was collected with the goal of assessing dialog evaluation metrics. In the paper, USR: An Unsupervised and Reference Free Evaluation Metric for Dialog (Mehri and Eskenazi, 2020), the authors collect this data to measure the quality of several existing word-overlap and embedding-based metrics, as well as their newly proposed USR metric. | Provide a detailed description of the following dataset: USR-TopicalChat |
USR-PersonaChat | This dataset was collected with the goal of assessing dialog evaluation metrics. In the paper, USR: An Unsupervised and Reference Free Evaluation Metric for Dialog (Mehri and Eskenazi, 2020), the authors collect this data to measure the quality of several existing word-overlap and embedding-based metrics, as well as their newly proposed USR metric. | Provide a detailed description of the following dataset: USR-PersonaChat |
Memotion Analysis | A multimodal dataset for sentiment analysis on internet memes. | Provide a detailed description of the following dataset: Memotion Analysis |
HelloWorld | HelloWorld is a dataset of kinesthetic demonstrations collected using a Franka Emika Panda robot. During the data collection, the robot was made to write the lower case letters $h, e, l, o, w, r, d$ one at a time on a horizontal surface and the $x$ and $y$ coordinates of the end-effector were recorded. Multiple demonstrations were collected for each letter. These demonstrations can be used for kinesthetic teaching. See further details [here](https://github.com/sayantanauddy/clfd). | Provide a detailed description of the following dataset: HelloWorld |
CIP | The CIP dataset is composed of 2 subsets, containing low-cost (MPU9250) and high-end (MTwAwinda) Magnetic, Angular Rate, and Gravity (MARG) sensor data respectively. It provides data for the analysis of the complete inertial pose pipeline, from raw measurements, to sensor-to-segment calibration, multi-sensor fusion, skeleton kinematics, to the complete human pose. Multiple trials were collected with 21 and 10 subjects respectively, performing 6 types of movements (ranging from calibration, to daily activities, range of motion and random). It presents a high degree of variability and complex dynamics while containing common sources of error found in real conditions. This amounts to 3.5M samples, synchronized with a ground-truth inertial motion capture system (Xsens) at 60 Hz. This dataset may contribute to assess, benchmark and develop novel algorithms for each of the pipeline's processing steps, with applications in classic or data-driven inertial pose estimation algorithms, human movement understanding and forecasting, and ergonomic assessment in industrial or rehabilitation settings. | Provide a detailed description of the following dataset: CIP |
wildFireClimateChangeTweets | Here I provide the datasets I used for this analysis. They include the tweets I streamed using the Tweepy package in Python during the peak of the wildfire season in late summer/early fall of 2020.
The files include:
1- Public:
- 39 CSV files for the dates (day by day) starting 09/06/2020 to 09/23/2020
- 1 combined file for all the dates (185k+ tweets with the desired keywords)
2- Government:
- Locals (15 accounts in total):
  - CA: Counties of Napa, Mendocino, Santa Clara, Sonoma, and Fresno
  - OR: City of Salem, and Counties of Lane, Clackamas, Jackson, and Multnomah
  - CO: County of Larimer and Boulder; Cities of Boulder, Grand Junction, Glenwood Springs, and Cortex
- State-level (18 accounts in total):
  - Governors of three states: California, Colorado, and Oregon
  - Congress representatives: 5 for each state
- Federal: 6 senators from California, Colorado, and Oregon
Information provided for each tweet:
- Date/Time
- UserID
- UserName
- Tweet's text
- Number of retweets
- Number of likes | Provide a detailed description of the following dataset: wildFireClimateChangeTweets |
DSC (10 tasks) | A set of 10 DSC datasets (reviews of 10 products) used to produce sequences of tasks. The products are Sports, Toys, Tools, Video, Pet, Musical, Movies, Garden, Offices, and Kindle. Each task has 2,500 positive and 2,500 negative training reviews; the validation and test sets each contain 250 positive and 250 negative reviews. Detailed statistics are available at https://github.com/ZixuanKe/PyContinual | Provide a detailed description of the following dataset: DSC (10 tasks) |
Weibo-Douban | This dataset is used for user identity linkage across two online social networks in Chinese. It contains two popular Chinese social platforms: Sina Weibo (https://weibo.com) and Douban (https://www.douban.com).
Details:
* 9,714 users and 117,218 relations in Weibo; 9,526 users and 120,245 relations in Douban; 1,397 pair of matched users.
* Approximate power-law degree distribution and high aggregation coefficient.
* Multiple text attributes available, including username, geographical location and recent (text) posts of the users.
* Construction time: April 2020. | Provide a detailed description of the following dataset: Weibo-Douban |
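A hedged sketch of how identity-linkage predictions could be scored against the 1,397 matched user pairs (the prediction-mapping format is hypothetical, purely for illustration):

```python
def linkage_accuracy(predicted, gold_pairs):
    """Fraction of gold matched user pairs recovered by a predicted
    mapping {weibo_user: douban_user}. An illustrative evaluation sketch,
    not an official metric of the dataset."""
    gold = dict(gold_pairs)
    hits = sum(1 for w, d in gold.items() if predicted.get(w) == d)
    return hits / len(gold)
```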
Ransomware PCAP repository | This is a repository of PCAP files obtained by executing ransomware binaries and capturing the network traffic created when encrypting a set of files shared from an SMB server. There are 94 samples from 32 different ransomware families downloaded from malware-traffic-analysis and hybrid-analysis. There is a link to an info page for each sample, offering some information about the sample and about the scenario where it ran ('More info' column in the table).
You can download 10% of the packets from each traffic trace for free. If you find it useful and want to download the full samples, we ask for your e-mail and institution name in order to keep a record of how many people are interested in these files. This helps us keep this repository up and include more samples (as it proves that it is interesting for the community). We do not send you any kind of spam; we will only send you a link to download the full pcap files. In order to refer to this repository, please include the link in your paper, cite the repository shared on IEEE DataPort (here), and/or cite the paper in which the repository is explained in more detail.
We also offer a text file containing a description of all the input/output operations that appear in the SMB traffic. We had to create our own software in order to extract this information from large pcap files. | Provide a detailed description of the following dataset: Ransomware PCAP repository |
20Newsgroup (10 tasks) | This dataset has 20 classes and each class has about 1000 documents. The data split for train/validation/test is 1600/200/200. We created 10 tasks with 2 classes per task. Since this is topic-based text classification data, the classes are very different and share little knowledge. As mentioned above, this application (and dataset) is mainly used to show a CL model's ability to overcome forgetting. Detailed statistics are available at https://github.com/ZixuanKe/PyContinual | Provide a detailed description of the following dataset: 20Newsgroup (10 tasks) |
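The 2-classes-per-task construction used by the 20Newsgroup entry above can be sketched as:

```python
def make_task_sequence(class_names, classes_per_task=2):
    """Group class names into a sequence of continual-learning tasks,
    e.g. 20 newsgroup classes -> 10 two-class tasks."""
    assert len(class_names) % classes_per_task == 0
    return [tuple(class_names[i:i + classes_per_task])
            for i in range(0, len(class_names), classes_per_task)]
```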
NMED-T | Losorelli, Steven, Nguyen, Duc T., Dmochowski, Jacek P., and Kaneshiro, Blair
This dataset contains cortical (EEG) and behavioral data collected during natural music listening. Dense-array EEG was recorded from 20 adult participants who each heard a set of 10 full-length songs with electronically produced beats at various tempos. In a separate subsequent listen, each participant tapped to the beat of a 35-second excerpt from each song. Participants also delivered ratings of familiarity and enjoyment for each full-length song during the EEG recording. Finally, the dataset includes basic demographic information about the participants, as well as Matlab scripts to perform the illustrated analyses presented in the paper introducing the dataset (Losorelli et al., 2017). Cleaned and aggregated data are published in Matlab format; raw EEG is published in Matlab format, while raw tapping data are published in .txt format. Stimulus audio is not published, but metadata links are provided. | Provide a detailed description of the following dataset: NMED-T |
F-CelebA (10 tasks) | F-CelebA - This dataset is adapted from federated learning, an emerging machine learning paradigm with an emphasis on data privacy. The idea is to train through model aggregation rather than conventional data aggregation, keeping local data on the local device. This dataset naturally consists of similar tasks: each of the 10 tasks contains images of a celebrity labeled by whether he/she is smiling or not. For more details, please check https://github.com/ZixuanKe/CAT | Provide a detailed description of the following dataset: F-CelebA (10 tasks) |
ADIMA | **ADIMA** is a novel, linguistically diverse, ethically sourced, expert-annotated and well-balanced multilingual profanity detection audio dataset comprising 11,775 audio samples in 10 Indic languages, spanning 65 hours and spoken by 6,446 unique users. | Provide a detailed description of the following dataset: ADIMA |
VizWiz-VQA-Grounding | The **VizWiz-VQA-Grounding** dataset is a dataset that visually grounds answers to visual questions asked by people with visual impairments.
Training Set:
- 6,494 examples
Validation Set:
- 1,131 examples
Test Set:
- 2,373 examples | Provide a detailed description of the following dataset: VizWiz-VQA-Grounding |
Wukong | **Wukong** is a large-scale Chinese cross-modal dataset for benchmarking different multi-modal pre-training methods to facilitate Vision-Language Pre-training (VLP). The dataset contains 100 million Chinese image-text pairs collected from the web. The base query list is taken from prior work and filtered according to the frequency of Chinese words and phrases. | Provide a detailed description of the following dataset: Wukong |
MeLa BitChute | **MeLa BitChute** is a near-complete dataset of over 3M videos from 61K channels over 2.5 years (June 2019 to December 2021) from the social video hosting platform BitChute, a commonly used alternative to YouTube. Additionally, the dataset includes a variety of video-level metadata, including comments, channel descriptions, and views for each video.
The dataset contains data from 3,036,190 videos, 61,229 channels, and 11,434,571 comments between June 28th, 2019 and December 31st, 2021. This dataset provides timestamped activities and estimates on views for the majority of channels and videos on the platform, allowing researchers to align BitChute videos with behavior on other platforms. Therefore, this dataset can facilitate both studies of BitChute in isolation and studies of BitChute’s role in the larger ecosystem. | Provide a detailed description of the following dataset: MeLa BitChute |
UAV_udc | https://github.com/zzr-idam/Under-Display-Camera-UAV | Provide a detailed description of the following dataset: UAV_udc |
LAW | The Laboratory for Web Algorithmics (LAW) was established in 2002 at the Dipartimento di Scienze dell'Informazione (now merged in the Computer Science Department) of the Università degli studi di Milano.
The LAW is part of the NADINE FET EU project.
Research at LAW concerns all algorithmic aspects of the study of the web and of social networks. | Provide a detailed description of the following dataset: LAW |
Taillard Instances | Taillard's instances for the permutation flow shop, job shop, and open shop scheduling problems:
We restrict ourselves to basic problems: the processing times are fixed, there are neither set-up times nor due dates nor release dates, etc. Then, the objective is the minimization of the makespan. | Provide a detailed description of the following dataset: Taillard Instances |
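As a minimal illustration of the makespan objective for the permutation flow shop case (a generic sketch, not code distributed with the instances):

```python
def flowshop_makespan(processing_times, permutation):
    """Makespan of a permutation flow shop schedule.
    processing_times[j][m] is the time of job j on machine m;
    jobs visit the machines in order, in the given job permutation."""
    n_machines = len(processing_times[0])
    completion = [0.0] * n_machines  # completion time of the last job per machine
    for j in permutation:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], prev) + processing_times[j][m]
    return completion[-1]
```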
FICS PCB Image Collection (FPIC) | Optical images of printed circuit boards as well as detailed annotations of any text, logos, and surface-mount devices (SMDs). There are several hundred samples spanning a wide variety of manufacturing locations, sizes, node technology, applications, and more.
- pcb_image: Optical images of each PCB surface and rear, tagged with a unique identifier.
- color_checker: Pallette to account for environmental illumination factors as well as a scale reference for the photo resolution. Each pcb image indicates which color checker it is associated with.
- ocr_annotation: Optical Character Recognition annotations. This includes polygon boundaries around all relevant text on a PCB image. Whether the piece of text is on the board or a device, whether it is a logo or not, orientation, and more are noted within the columns of the csv.
- smd_annotation: Surface-mount Device (SMD) annotations. This includes polygon boundaries around all relevant SMD devices such as resistors, capacitors, inductors, transistors, diodes, LEDs, and more. Along with each component, its associated silkscreen designator ('L', 'R', 'C', 'U', etc.) is recorded.
- vtp_annotation: Vias, traces, and pins (VTP) annotations. These are regions of connectivity between SMDs on a PCB. Few annotations currently exist; this component is considered to be in 'beta' mode.
- metadata: Holds two files corresponding to information about image files.
* pcb.csv holds information about the physical PCB samples such as their color, online item description, and any notes.
* color_checker.csv indicates the pixels per millimeter (ppmm) of any image associated with that color checker, whether an X-Rite ColorChecker Passport or Nano was used, what camera performed the acquisition, and any relevant notes.
Each annotation file is designed to be compatible with the S3A application (https://gitlab.com/ficsresearch/s3a or https://pypi.org/project/s3a/), a Python tool for visualizing polygon annotations on an image. | Provide a detailed description of the following dataset: FICS PCB Image Collection (FPIC) |
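Since all FPIC annotations are polygon boundaries, a shoelace-formula helper is often handy when filtering or summarizing them (a generic geometry sketch, not part of the dataset's tooling or the S3A application):

```python
def polygon_area(vertices):
    """Shoelace area of an annotation polygon given [(x, y), ...] vertices,
    e.g. to filter very small SMD annotations by pixel area."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```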
HuSHeM | At the Isfahan Fertility and Infertility Center, semen samples were collected from fifteen patients. The sperm samples were fixed and stained using the Diff-Quick method. Using an Olympus CX21 microscope with a ×100 objective lens and a ×10 eyepiece and a Sony color camera (Model No SSC-DC58AP), 725 images were taken. The resolution of each image was 576×720 pixels. From these images, the sperm heads were cropped and classified into five classes by three specialists. The classes are Normal, Pyriform, Tapered, Amorphous, and Others. After the classification, only the samples which there was a collective consensus about their class were kept in the dataset. Four classes of Normal, Pyriform, Tapered, and Amorphous are included in this dataset. The resulting dataset of sperm heads denoted as Human Sperm Head Morphology dataset (HuSHeM) consists of four folders, each corresponding to a specific set of sperm shapes. The folder names reflect the shape of the contained images. There are 54 Normal, 53 Tapered, 57 Pyriform, and 52 Amorphous sperm heads. The images of sperm heads are in the RGB format with the size of 131×131 pixels. | Provide a detailed description of the following dataset: HuSHeM |
SCIAN | Dataset of sperm head images with expert-classification labels. The dataset contains 1854 sperm head images obtained from six semen smears and classified by three Chilean referent domain experts according to World Health Organization (WHO) criteria, in one of the following classes: normal, tapered, pyriform, small and amorphous. This gold-standard is aimed for use in evaluating and comparing not only known techniques, but also future improvements to present approaches for classification of human sperm heads for semen analysis. | Provide a detailed description of the following dataset: SCIAN |
TransCG | TransCG is the first large-scale real-world dataset for transparent object depth completion and grasping, which contains 57,715 RGB-D images of 51 transparent objects and many opaque objects captured from different perspectives (~240 viewpoints) of 130 scenes under real-world settings. The samples are captured by two different types of cameras (Realsense D435 & L515).
The following data is provided:
- The 3D model of the transparent object;
- The 6D pose of the transparent object in each viewpoint of each scene;
- The raw RGB-D image, and the ground-truth refined depth image;
- The mask of the transparent objects;
- The ground-truth surface normals of every sample. | Provide a detailed description of the following dataset: TransCG |
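A common way to evaluate depth completion on data like TransCG is an RMSE restricted to the transparent-object mask. A minimal sketch, where flat lists stand in for depth images (illustrative only, not the dataset's official metric code):

```python
import math

def masked_rmse(pred, gt, mask):
    """RMSE between predicted and ground-truth refined depth,
    restricted to pixels inside the transparent-object mask.
    pred, gt, mask are flattened, equal-length sequences."""
    vals = [(p - g) ** 2 for p, g, m in zip(pred, gt, mask) if m]
    return math.sqrt(sum(vals) / len(vals))
```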
Synth-Colon | Synthetic dataset for polyp segmentation. It is the first such dataset generated using zero annotations from medical professionals. The dataset is composed of 20,000 images with a resolution of 500×500. Synth-Colon additionally includes realistic colon images generated with a CycleGAN and the Kvasir training set images. Synth-Colon can also be used for the colon depth estimation task because it provides depth and 3D information for each image. In summary, Synth-Colon includes:
– Synthetic images of the colon and one polyp.
– Masks indicating the location of the polyp.
– Realistic images of the colon and polyps, generated using CycleGAN and the Kvasir dataset.
– Depth images of the colon and polyp.
– 3D meshes of the colon and polyp in OBJ format. | Provide a detailed description of the following dataset: Synth-Colon |
Munich Sentinel2 Crop Segmentation | Contains square blocks of 48×48 pixels including 13 Sentinel-2 bands.
Each 480-m block was mined from a large geographical area of interest (102 km × 42 km) located north of Munich, Germany. | Provide a detailed description of the following dataset: Munich Sentinel2 Crop Segmentation |
A collection of LFR benchmark graphs | This dataset is a collection of undirected and unweighted LFR benchmark graphs as proposed by Lancichinetti et al. [1]. We generated the graphs using the code provided by Santo Fortunato on his personal website [2], embedded in our evaluation framework [3], with two different parameter sets. Let N denote the number of vertices in the network, then
- Maximum community size: 0.2N (Set A); 0.1N (Set B)
- Minimum community size: 0.05N (Set A); 10 (Set B)
- Maximum node degree: 0.19N (Set A); 0.19N (Set B)
- Community size distribution exponent: 1.0 (Set A); 1.0 (Set B)
- Degree distribution exponent: 2.0 (Set A); 2.0 (Set B)
All other parameters assume default values. We provide graphs with different combinations of average degree, network size and mixing parameter for the given parameter sets:
- Set A: For average degrees in {15, 25, 50} we provide network sizes in {300, 600, 1200}, each with 20 different mixing parameters linearly spaced in [0.2, 0.8]. For each configuration we provide 100 benchmark graphs.
- Set A: For average degrees in {15, 25, 50} we provide mixing parameters in {0.35, 0.45, 0.55}, each with network sizes in {300, 450, 600, 900, 1200, 1800, 2400, 3600, 4800, 6200, 9600, 19200}. For each configuration we provide 50 benchmark graphs.
- Set B: For average degrees in {20} we provide network sizes in {300, 600, 1200, 2400}, each with 20 different mixing parameters linearly spaced in [0.2, 0.8]. For each configuration we provide 100 benchmark graphs.
Benchmark graphs are given in edge list format. Further, for each benchmark graph we provide ground truth communities as membership list and as structured datatype (.json), its generating random seeds and basic network statistics.
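A sketch of parsing the ground-truth membership list into communities (assuming one `node community` pair per line; check the files for the exact column layout):

```python
def load_membership(lines):
    """Parse a membership list ('node community' per line) into
    {community_id: set_of_nodes}. The column order is an assumption."""
    communities = {}
    for line in lines:
        node, comm = line.split()
        communities.setdefault(int(comm), set()).add(int(node))
    return communities
```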
[1] Lancichinetti A, Fortunato S, Radicchi F (2008) Benchmark graphs for testing community detection algorithms. Physical Review E 78(4):046110,https://doi.org/10.1103/PhysRevE.78.046110
[2] https://www.santofortunato.net/resources, Accessed: 19 Jan 2021
[3] https://github.com/synwalk/synwalk-analysis, Accessed: 19 Jan 2021 | Provide a detailed description of the following dataset: A collection of LFR benchmark graphs |
Volunteer task execution events in Galaxy Zoo and The Milky Way citizen science projects | ## Context of the data sets
The Zooniverse platform (www.zooniverse.org) has successfully built a large community of volunteers contributing to citizen science projects. Galaxy Zoo and the Milky Way Project were hosted there.
The original Galaxy Zoo project was launched in July 2007, but has since been redesigned and relaunched three times, building each time on the success of its predecessor. In 2010, the Zooniverse launched the third iteration of Galaxy Zoo, called Galaxy Zoo: Hubble, but for simplicity, we use the term Galaxy Zoo throughout this text to refer to this project. Each volunteer classifying on Galaxy Zoo is presented with a galaxy from the Sloan Digital Sky Survey (SDSS) or the Hubble Space Telescope as well as a decision tree of questions with answers represented by a fairly simple icon. The task is straightforward, and no specialist knowledge is required to execute it.
Tasks in the Milky Way Project exhibit a larger cognitive load than those in Galaxy Zoo. Volunteers are asked to draw ellipses onto the image to mark the locations of bubbles. A short, online tutorial shows how to use the tool, along with examples of prominent bubbles. As a secondary task, users can also mark rectangular areas of interest, which can be labeled as small bubbles, green knots, dark nebulae, star clusters, galaxies, fuzzy red objects, or “other.” Users can add as many annotations as they wish before submitting the image, at which point they’re given another image for annotation.
## Description of the raw data
In this repository, each file is a project, each line on a file is one classification record. The lines contain three pieces of information separated by commas (","). The first information is the `classification id`, which uniquely identifies the classification in the data set. The second information is the `volunteer id`, which uniquely identifies, in the data set, the volunteer who carried out the classification. The third information is the `date and time` in which the classification was carried out.
The data set from the Galaxy Zoo project consists of records of 9,667,586 tasks executed by 86,413 volunteers over 840 days, starting on April 17th, 2010. The data set from the Milky Way Project consists of records from 643,408 tasks executed by 23,889 volunteers over 670 days, starting on December 3rd, 2010.
These datasets were provided by Arfon Smith and Robert Simpson, from the Zooniverse platform, in October, 2012. To understand how volunteers make their contributions in these citizen science projects, [Ponciano, Brasileiro, Simpson and Smith (2014)](https://doi.org/10.1109/MCSE.2014.4) analyzed both data sets considering a volunteer engagement perspective.
## Metrics derived from the data set
[Ponciano, Brasileiro, Simpson and Smith (2014)](https://doi.org/10.1109/MCSE.2014.4) proposed and computed the following metrics on the data set: _Frequency_, or the number of days in which the volunteer was actively executing tasks in the project. _Daily productivity_, or the average number of tasks the volunteer executed per day in which he or she was active. _Typical session duration_, or the short, continuous period of time the volunteer devoted to execute tasks on the project. A session begins when a volunteer starts a task execution, but it may end for a variety of reasons, such as the volunteer achieving the time he or she wanted to devote to the project, or that person getting tired or bored because of something related to the task performed. The typical session duration is the median of the duration of all the volunteer’s contribution sessions. _Devoted time_, or the total time the volunteer has spent executing tasks on the project. It's calculated as the sum of the duration of all the volunteer’s contribution sessions.
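The _Frequency_ metric, for instance, can be computed directly from the raw classification records described above (the timestamp format below is an assumption about the files, not a documented specification):

```python
from collections import defaultdict
from datetime import datetime

def frequency(records):
    """Number of distinct active days per volunteer, from
    (classification_id, volunteer_id, timestamp) records.
    The timestamp format is assumed for illustration."""
    days = defaultdict(set)
    for _cid, vid, ts in records:
        days[vid].add(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").date())
    return {vid: len(d) for vid, d in days.items()}
```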
They fitted Zipf and log-normal statistical probability distributions to the volunteer engagement characteristics. The results reported in the study reveal many characteristics of the distributions of volunteer participation in the projects. For example, they show that the majority of the volunteers perform tasks in just one day and do not come back, but those who do come back contribute the larger proportion of executed tasks. For more information about the methods and results of the first study that analysed the data set, please see [Ponciano, Brasileiro, Simpson and Smith (2014)](https://doi.org/10.1109/MCSE.2014.4).
In a subsequent study, [Ponciano and Brasileiro (2014)](https://doi.org/10.15346/hc.v1i2.12) deepened the analysis with a new framework for studying volunteer engagement. In this new study, new metrics and a clustering approach were used to identify groups of volunteers who exhibit a similar engagement profile. The new set of metrics is designed to measure the engagement of participants who exhibit an ongoing contribution and have contributed on at least two different days, focusing on participants that are more likely to fit the definition of voluntarism. In this perspective, they formalized the following metrics: _Activity ratio_, _Daily devoted time_, _Relative activity duration_, and _Variation in periodicity_. Their results show that the volunteers in such projects can be grouped into five distinct engagement profiles, labeled as follows: hardworking, spasmodic, persistent, lasting, and moderate. For more information about the method and results on the engagement profiles see [Ponciano and Brasileiro (2014)](https://doi.org/10.15346/hc.v1i2.12).
## Reporting the use of the data set
The data sets stored in this repository are freely available to be used under the Creative Commons Attribution licence. In case you use the data set, please include in your work a citation of the previous studies [Ponciano, Brasileiro, Simpson and Smith (2014)](https://doi.org/10.1109/MCSE.2014.4) and [Ponciano and Brasileiro (2014)](https://doi.org/10.15346/hc.v1i2.12), which were the first to characterize the data from Galaxy Zoo and the Milky Way Project from a volunteer engagement perspective. After that, you may also inform the Zooniverse platform that you have used data from Galaxy Zoo and the Milky Way Project. To do so, you can use [this form](https://docs.google.com/forms/d/e/1FAIpQLSdbAKVT2tGs1WfBqWNrMekFE5lL4ZuMnWlwJuCuNM33QO2ZYg/viewform) indicated by the platform at the [publication page](https://www.zooniverse.org/about/publications).
## References
Lesandro Ponciano, Francisco Brasileiro, Robert Simpson and Arfon Smith. "Volunteers' Engagement in Human Computation Astronomy Projects". Computing in Science and Engineering vol. 16, no. 6, pp. 52-59 (2014) DOI: [10.1109/MCSE.2014.4](https://doi.org/10.1109/MCSE.2014.4)
Lesandro Ponciano and Francisco Brasileiro. "Finding Volunteers' Engagement Profiles in Human Computation for Citizen Science Projects". Human Computation vol. 1, no. 2, pp. 245-264 (2014). DOI: [10.15346/hc.v1i2.12](https://doi.org/10.15346/hc.v1i2.12) | Provide a detailed description of the following dataset: Volunteer task execution events in Galaxy Zoo and The Milky Way citizen science projects |
Motor Imagery dataset | From the dataset repository for the "2020 International BCI Competition":
https://osf.io/pq7vb/?view_only=08e7108d89fd42bab2adbd6b98fb683d | Provide a detailed description of the following dataset: Motor Imagery dataset |
Error Grids for multi-fidelity benchmark functions in mf2 | Collection of Error Grid data files. The intended purpose is to allow confirmation of the original analysis and to enable future analysis using the other error measurement methods that are included. | Provide a detailed description of the following dataset: Error Grids for multi-fidelity benchmark functions in mf2 |
KITTI'15 MSplus | Extension of the official [KITTI'15 dataset](http://www.cvlibs.net/datasets/kitti/). It extends the instance segmentation ground truth to cover all independently moving objects, not just a selection of cars and vans.
- Instance Motion Segmentation of all moving objects
- Binary Motion Segmentation (background/foreground)
- Validation Masks
Dataset contains:
- Instance Motion Segmentation for the **training** split of the KITTI'15 dataset | Provide a detailed description of the following dataset: KITTI'15 MSplus |
DLR-ACD | The DLR-ACD dataset is a collection of aerial images for crowd counting and density estimation, as well as for person localization at mass events. It contains 33 large aerial images acquired through 16 different flight campaigns at various mass events and over urban scenes involving crowds, such as sport events, city centers, open-air fairs and festivals.
The images were captured with standard DSLR cameras installed on a helicopter, and their spatial resolution (or ground sampling distance – GSD) ranges from 4.5 to 15 cm/pixel. The dataset was labeled manually with point-annotations on individual people and contains 226,291 person annotations in total, ranging from 285 to 24,368 annotations per image. | Provide a detailed description of the following dataset: DLR-ACD |
CBCT Walnut | The scans are performed using a custom-built, highly flexible X-ray CT scanner, the FleX-ray scanner, developed by XRE nv and located in the FleX-ray Lab at the Centrum Wiskunde & Informatica (CWI) in Amsterdam, Netherlands. The general purpose of the FleX-ray Lab is to conduct proof-of-concept experiments directly accessible to researchers in the fields of mathematics and computer science. The scanner consists of a cone-beam microfocus X-ray point source that projects polychromatic X-rays onto a 1536-by-1944 pixel, 14-bit flat panel detector (Dexella 1512NDT), and a rotation stage in between, upon which a sample is mounted. All three components are mounted on translation stages which allow them to move independently from one another.
Please refer to the paper for all further technical details.
The complete data set can be found via the following links: 1-8 https://doi.org/10.5281/zenodo.2686725 , 9-16 https://doi.org/10.5281/zenodo.2686970, 17-24 https://doi.org/10.5281/zenodo.2687386, 25-32 https://doi.org/10.5281/zenodo.2687634, 33-37 https://doi.org/10.5281/zenodo.2687896, 38-42 https://doi.org/10.5281/zenodo.2688111
The corresponding Python scripts for loading, pre-processing and reconstructing the projection data in the way described in the paper can be found on github https://github.com/cicwi/WalnutReconstructionCodes | Provide a detailed description of the following dataset: CBCT Walnut |
NON-LINEAR PHASE NOISE MITIGATION OVER SYSTEMS USING CONSTELLATION SHAPING: EXPERIMENTAL DATASET | This dataset contains the full set of experimental waveforms that were used to produce the article "Non-Linear Phase Noise Mitigation over Systems using Constellation Shaping", published in the Journal of Lightwave Technology with DOI: 10.1109/JLT.2019.2917308. | Provide a detailed description of the following dataset: NON-LINEAR PHASE NOISE MITIGATION OVER SYSTEMS USING CONSTELLATION SHAPING: EXPERIMENTAL DATASET |
V2X-SIM | **V2X-Sim**, short for vehicle-to-everything simulation, is a synthetic collaborative perception dataset for autonomous driving developed by the AI4CE Lab at NYU and the MediaBrain Group at SJTU to facilitate collaborative perception between multiple vehicles and roadside infrastructure. Data is collected from both the roadside and vehicles when they are present near the same intersection. With information from both the roadside infrastructure and vehicles, the dataset aims to encourage research on collaborative perception tasks.
Although not collected from the real world, highly realistic traffic simulation software is used to ensure the representativeness of the dataset compared to real-world driving scenarios. Specifically, the traffic flow in the recording files is managed by CARLA-SUMO co-simulation, and three town maps from CARLA are currently used to increase the diversity of the dataset.
Here is a tutorial showing how to load the dataset: [https://ai4ce.github.io/V2X-Sim/tutorial.html](https://ai4ce.github.io/V2X-Sim/tutorial.html) | Provide a detailed description of the following dataset: V2X-SIM |
Typography-MNIST | **Typography-MNIST** is a dataset comprising 565,292 MNIST-style grayscale images representing 1,812 unique glyphs in the varied styles of 1,355 Google Fonts. The glyph list contains common characters from over 150 modern and historical language scripts along with symbol sets, and each font style represents a varying subset of the total unique glyphs. The dataset has been developed as part of the Cognitive Type project, which aims to develop eye-tracking tools for real-time mapping of type to cognition and to create computational tools that allow for the easy design of typefaces with cognitive properties such as readability. | Provide a detailed description of the following dataset: Typography-MNIST |
ASOS Digital Experiments Dataset | A novel dataset that can support the end-to-end design and running of Online Controlled Experiments (OCE) with adaptive stopping.
See OSF page for the schema and datasheet. | Provide a detailed description of the following dataset: ASOS Digital Experiments Dataset |
NYT11-HRL | Preprocessed version of NYT11.
Each relational triple is formatted as follows:
- rtext: relation type
- em1: source entity mention
- em2: target entity mention
- tags: the proposed entity annotation scheme for the sentence
  - 0: $O$, non-entity
  - 1: $S_I$, inside of a source entity
  - 2: $T_I$, inside of a target entity
  - 3: $O_I$, inside of a not-concerned entity
  - 4: $S_B$, beginning of a source entity
  - 5: $T_B$, beginning of a target entity
  - 6: $O_B$, beginning of a not-concerned entity | Provide a detailed description of the following dataset: NYT11-HRL |
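A tag sequence in this scheme can be decoded back into entity spans. The decoder below is a sketch based on the seven tag IDs listed above; the example sentence and its tags are invented for illustration and are not taken from NYT11-HRL.

```python
# Map tag IDs to the scheme's names: *_B begins an entity, *_I continues one.
TAG = {0: "O", 1: "S_I", 2: "T_I", 3: "O_I", 4: "S_B", 5: "T_B", 6: "O_B"}

def decode_entities(tokens, tags):
    """Group B-/I- tags into (entity_type, text) spans; type is S, T, or O."""
    spans, current = [], None
    for tok, t in zip(tokens, tags):
        name = TAG[t]
        if name.endswith("_B"):                 # a new entity starts here
            if current:
                spans.append(current)
            current = (name[0], [tok])
        elif name.endswith("_I") and current:   # continue the open entity
            current[1].append(tok)
        else:                                   # a non-entity token closes any span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

tokens = ["Steve", "Jobs", "founded", "Apple"]
tags = [4, 1, 0, 5]                             # S_B, S_I, O, T_B
entities = decode_entities(tokens, tags)        # [("S", "Steve Jobs"), ("T", "Apple")]
```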
NYT10-HRL | a dataset from A Hierarchical Framework for Relation Extraction with Reinforcement Learning | Provide a detailed description of the following dataset: NYT10-HRL |
VaccineLies | A Natural Language Resource for Learning to Recognize Misinformation about the COVID-19 and HPV Vaccines. | Provide a detailed description of the following dataset: VaccineLies |
CoVaxLies v2 | CoVaxLies v2 includes 47 Misinformation Targets (MisTs) found on Twitter about the COVID-19 vaccines. Language experts annotated tweets as Relevant or Not Relevant, and then further annotated Relevant tweets with Stance towards each MisT. This collection is a first step in providing large-scale resources for misinformation detection and misinformation stance identification. | Provide a detailed description of the following dataset: CoVaxLies v2 |
EUCA dataset | # EUCA dataset description
Associated Paper:
**[EUCA: the End-User-Centered Explainable AI Framework](http://arxiv.org/abs/2102.02437)**
Authors:
Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan Hamarneh
## Introduction:
EUCA dataset is for modelling personalized or interactive explainable AI. It contains 309 data points of 32 end-users' preferences on 12 forms of explanation (including feature-, example-, and rule-based explanations). The data were collected from a user study with 32 layperson participants in the Greater Vancouver area in 2019-2020. In the user study, the participants (P01-P32) were presented with AI-assisted critical tasks on house price prediction, health status prediction, purchasing a self-driving car, and studying for a biological exam [1]. Within each task and for its given explanation goal [2], the participants selected and ranked the explanatory forms [3] that they deemed most suitable.
1 [EUCA_EndUserXAI_ExplanatoryFormRanking.csv](https://github.com/weinajin/end-user-xai/blob/master/SupplementaryMaterialS3_EUCA_Dataset/EUCA_EndUserXAI_ExplanatoryFormRanking.csv)
**Column description**:
- **Index** - Participants' number
- **Case** - task-explanation goal combination
- **accept to use AI? trust it?** - Participants' response to whether they would use the AI given the task and explanation goal
- **require explanation?** - Participants' response to whether they requested an explanation for the AI
- **1st, 2nd, 3rd, ...** - Explanatory form card selection and ranking
- **cards fulfill requirement?** - After the card selection, participants were asked whether the selected card combination fulfills their explainability requirement.
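As a sketch of how this ranking file might be consumed, the snippet below tallies first-choice explanatory forms from a couple of fabricated rows; the exact header strings and form codes should be checked against the real CSV, so treat the column names here as assumptions.

```python
import csv
import io
from collections import Counter

# Fabricated stand-in for EUCA_EndUserXAI_ExplanatoryFormRanking.csv rows;
# "fa", "se", "rt", "dt" follow the form abbreviations used in this dataset
# description, but these specific rows are invented for illustration.
raw = io.StringIO(
    "Index,Case,1st,2nd,3rd\n"
    "P01,house-trust,fa,se,rt\n"
    "P02,house-trust,se,fa,dt\n"
)
# Count which explanatory form each participant ranked first:
first_choice = Counter(row["1st"] for row in csv.DictReader(raw))
```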
2 [EUCA_EndUserXAI_demography.csv](https://github.com/weinajin/end-user-xai/blob/master/SupplementaryMaterialS3_EUCA_Dataset/EUCA_EndUserXAI_demography.csv)
It contains the participants demographics, including their age, gender, educational background, and their knowledge and attitudes toward AI.
[EUCA dataset zip file for download](https://github.com/weinajin/end-user-xai/blob/master/SupplementaryMaterialS3_EUCA_Dataset/EUCA_Dataset.zip)
## More Context for EUCA Dataset
### [1] Critical tasks
There are four tasks. Task label and their corresponding task titles are:
house - Selling your house
car - Buying an autonomous driving vehicle
health - Personal health decision
bird - Learning bird species
Please refer to the [EUCA quantitative data analysis report](https://github.com/weinajin/end-user-xai/blob/master/SupplementaryMaterialS3_EUCA_Dataset/SupplementaryMaterialS2_UserStudy.pdf) for the storyboard of the tasks and explanation goals presented in the user study.
### [2] [Explanation goal](http://weina.me/end-user-xai/need.html)
End-users may have different goals/purposes when checking an explanation from AI. The EUCA dataset includes the following 11 explanation goals, each listed with its [label] in the dataset, full name, and description:
1. [trust] **[Calibrate trust](http://weina.me/end-user-xai/need.html#trust)**: trust is a key to
establish human-AI decision-making partnership. Since users can
easily distrust or overtrust AI, it is important to calibrate the
trust to reflect the capabilities of AI systems.
2. [safe] **[Ensure safety](http://weina.me/end-user-xai/need.html#safe)**: users need to ensure
safety of the decision consequences.
3. [bias] **[Detect bias](http://weina.me/end-user-xai/need.html#bias)**: users need to ensure the
decision is impartial and unbiased.
4. [unexpect] **[Resolve disagreement with AI](http://weina.me/end-user-xai/need.html#unexpected)**: the AI
prediction is *unexpected* and there are
disagreements between users and AI.
5. [expected] **[Expected](http://weina.me/end-user-xai/need.html#expected)**: the AI's prediction is
*expected* and aligns with users'
expectations.
6. [differentiate] **[Differentiate similar instances](http://weina.me/end-user-xai/need.html#differentiate)**: due to
the consequences of wrong decisions, users sometimes need to discern
similar instances or outcomes. For example, a doctor differentiates
whether the diagnosis is a benign or malignant tumor.
7. [learning] **[Learn](http://weina.me/end-user-xai/need.html#learn)**: users need to gain knowledge,
improve their problem-solving skills, and discover new knowledge
8. [control] **[Improve](http://weina.me/end-user-xai/need.html#improve)**: users seek causal factors to
control and improve the predicted outcome.
9. [communicate] **[Communicate with stakeholders](http://weina.me/end-user-xai/need.html#communicate)**: many
critical decision-making processes involve multiple stakeholders,
and users need to discuss the decision with them.
10. [report] **[Generate reports](http://weina.me/end-user-xai/need.html#report)**: users need to utilize
the explanations to perform particular tasks such as report
production. For example, a radiologist generates a medical report on
a patient's X-ray image.
11. [multi] **[Trade-off multiple objectives](http://weina.me/end-user-xai/need.html#multi)**: AI may be
optimized on an incomplete objective while the users seek to fulfill
multiple objectives in real-world applications. For example, a
doctor needs to ensure a treatment plan is effective as well as has
acceptable patient adherence. Ethical and legal requirements may
also be included as objectives.
### [3] [Explanatory form](http://weina.me/end-user-xai/explanatory_form.html)
The following 12 explanatory forms are end-user-friendly, i.e., no technical knowledge is required for the end-user to interpret the explanation.
* [Feature-Based Explanation](http://weina.me/end-user-xai/explanatory_form.html/#feature)
* Feature Attribution - fa
* Note: for tasks that have images as input data, the feature attribution is denoted by the following two cards:
* ir: important regions (a.k.a. heat map or saliency map)
* irc: important regions with their feature contribution percentage
* Feature Shape - fs
* Feature Interaction - fi
* [Example-Based Explanation](http://weina.me/end-user-xai/explanatory_form.html/#example)
* Similar Example - se
* Typical Example - te
* Counterfactual Example - ce
* Note: for the counterfactual example, there were two visual variations used in the user study:
* cet: counterfactual example with transition from one example to the counterfactual one
* ceh: counterfactual example with the contrastive feature highlighted
* [Rule-Based Explanation](http://weina.me/end-user-xai/explanatory_form.html/#rule)
* Rule - rt
* Decision Tree - dt
* Decision Flow - df
* [Supplementary Information](http://weina.me/end-user-xai/explanatory_form.html/#suppl)
* Input
* Output
* Performance
* Dataset - prior (output prediction with prior distribution of each class in the training set)
Note: occasionally there is a wild card, which means the participant drew the card by themselves. It is indicated as 'wc'.
For visual examples of each explanatory form card, please refer to the [Explanatory_form_labels.pdf](https://github.com/weinajin/end-user-xai/blob/master/SupplementaryMaterialS3_EUCA_Dataset/EUCA_explanatory_form_labels.pdf) document.
[Link to the details on users' requirements on different explanatory forms](http://weina.me/end-user-xai/explanatory_form.html)
## Code and report for EUCA data quantitative analysis
* [EUCA data analysis code](https://github.com/weinajin/end-user-xai/tree/master/SupplementaryMaterialS4_EUCA_data_analysis_code)
* [EUCA quantitative data analysis report](https://github.com/weinajin/end-user-xai/blob/master/SupplementaryMaterialS3_EUCA_Dataset/SupplementaryMaterialS2_UserStudy.pdf)
## EUCA data citation
```
@article{jin2021euca,
title={EUCA: the End-User-Centered Explainable AI Framework},
author={Weina Jin and Jianyu Fan and Diane Gromala and Philippe Pasquier and Ghassan Hamarneh},
year={2021},
eprint={2102.02437},
archivePrefix={arXiv},
primaryClass={cs.HC}
}
``` | Provide a detailed description of the following dataset: EUCA dataset |
ADFI | ADFI Dataset is an image dataset for anomaly detection methods with a focus on industrial inspection.
Each category sub-dataset comprises a training set of images and a test set of images with various kinds of defects, as well as images without defects.
Supplementary information: ADFI provides a cloud service that automatically creates machine learning models for anomaly detection.
You can create anomaly detection models with these datasets for free on the ADFI website. | Provide a detailed description of the following dataset: ADFI |
MuLD | **MuLD** (**Multitask Long Document Benchmark**) is a set of 6 NLP tasks where the inputs consist of at least 10,000 words. The benchmark covers a wide variety of task types including translation, summarization, question answering, and classification. Additionally there is a range of output lengths from a single word classification label all the way up to an output longer than the input text. | Provide a detailed description of the following dataset: MuLD |
PeMS07 | PeMS07 is a traffic forecasting benchmark. | Provide a detailed description of the following dataset: PeMS07 |
Roman Republican Coin Dataset | Based on Crawford's work, we collect the most diverse and extensive image dataset of the reverse sides. For most of the Roman Republican coin classes, the reverse side depicts more discriminative information than the obverse side. Our dataset has 228 motif classes, including 100 classes that are the main classes for training and testing, which we call the main dataset RRCD-Main. The images of the additional 128 classes constitute the disjoint test set, RRCD-Disjoint, which we allocate to assess the generalization ability of our models. Therefore, training and testing can be evaluated on completely disjoint datasets. To the best of our knowledge, RRCD is the most diverse dataset proposed, while also being the largest dataset of Roman Republican coins | Provide a detailed description of the following dataset: Roman Republican Coin Dataset |
Malnutrition data | The malnutrition data, from the United Nations Children's Fund data warehouse, include two variables, stunted growth and the prevalence of low birth weight, collected in 77 countries from 1985 to 2019. Stunted growth is defined as the proportion of children aged 0 to 59 months whose height-for-age measurement is more than two standard deviations below the reference median. The stunted growth data represent a point sparseness case with 4-23 recordings per nation. The low birth weight data are a partial sparseness case, with recordings during 2000-2015 only.
It can be used in the functional data analysis and the sparse functional data fitting. | Provide a detailed description of the following dataset: Malnutrition data |
MuMiN | MuMiN is a misinformation graph dataset containing rich social media data (tweets, replies, users, images, articles, hashtags), spanning 21 million tweets belonging to 26 thousand Twitter threads, each of which have been semantically linked to 13 thousand fact-checked claims across dozens of topics, events and domains, in 41 different languages, spanning more than a decade.
MuMiN fills a gap in the existing misinformation datasets in multiple ways:
- By having a large amount of social media information which have been semantically linked to fact-checked claims on an individual basis.
- By featuring 41 languages, enabling evaluation of multilingual misinformation detection models.
- By featuring both tweets, articles, images, social connections and hashtags, enabling multimodal approaches to misinformation detection.
MuMiN features two node classification tasks, related to the veracity of a claim:
- Claim classification: Determine the veracity of a claim, given its social network context.
- Tweet classification: Determine the likelihood that a social media post to be fact-checked is discussing a misleading claim, given its social network context.
To use the dataset, see the "Getting Started" guide and tutorial at the [MuMiN website](https://mumin-dataset.github.io/). | Provide a detailed description of the following dataset: MuMiN |
MuMiN-small | This is the small version of the [MuMiN dataset](https://paperswithcode.com/dataset/mumin). | Provide a detailed description of the following dataset: MuMiN-small |
MuMiN-medium | This is the medium version of the [MuMiN dataset](https://paperswithcode.com/dataset/mumin). | Provide a detailed description of the following dataset: MuMiN-medium |
MuMiN-large | This is the large version of the [MuMiN dataset](https://paperswithcode.com/dataset/mumin). | Provide a detailed description of the following dataset: MuMiN-large |
TopiOCQA | **TopiOCQA** (pronounced Tapioca) is an open-domain conversational dataset with topic switches on Wikipedia. TopiOCQA contains 3,920 conversations with information-seeking questions and free-form answers. On average, a conversation in the dataset spans 13 question-answer turns and involves four topics (documents). TopiOCQA poses a challenging test-bed for models, where efficient retrieval is required on multiple turns of the same conversation, in conjunction with constructing valid responses using conversational history. | Provide a detailed description of the following dataset: TopiOCQA |
KuaiRec | KuaiRec is a real-world dataset collected from the recommendation logs of the video-sharing mobile app Kuaishou. To date, it is the first dataset that contains a fully observed user-item interaction matrix. By "fully observed", we mean there are almost no missing values in the user-item matrix, i.e., each user has viewed each video and then left feedback. | Provide a detailed description of the following dataset: KuaiRec |
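The "fully observed" property can be made concrete as the density of the interaction matrix, i.e., the fraction of user-item pairs with recorded feedback. The tiny matrix below is synthetic, not KuaiRec data; for the real dataset the density should be close to 1.0.

```python
# Synthetic stand-in for a user-item feedback matrix; None marks a missing
# value. KuaiRec's actual matrix is far larger, but the computation is the same.
matrix = [
    [0.9, 0.1, 0.7],
    [0.4, 0.8, None],   # one genuinely missing entry
    [0.6, 0.2, 0.5],
]
observed = sum(v is not None for row in matrix for v in row)
total = sum(len(row) for row in matrix)
density = observed / total   # 8/9 here; close to 1.0 means "fully observed"
```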
iFLYTEK | iFLYTEK and ChangGuang Satellite jointly held the challenge of extracting cultivated land from high-resolution remote sensing images. | Provide a detailed description of the following dataset: iFLYTEK |
Supplementary material | Funding Covid-19 research: Insights from an exploratory analysis using open data infrastructures - Supplementary material | Provide a detailed description of the following dataset: Supplementary material |
20000 utterances | 20000 utterances | Provide a detailed description of the following dataset: 20000 utterances |
MuVi | A dataset of music videos with continuous valence/arousal ratings as well as emotion tags.
A unique feature is that ratings are provided in 3 modalities:
- muted video
- music only
- music and video together
The GitHub repository provides:
- video_urls.csv: Contains the YouTube IDs of the MuVi dataset. We can also provide all the media files (for all modalities) upon e-mail request.
- participant_data.csv: We provide the anonymised profile and demographic information of the annotators.
- media_data.csv: Contains the static annotations which describe the media item’s overall emotion. The terms that were used are based on the GEMS-28 term list.
- av_data.csv: Includes the dynamic (continuous) annotations for Valence and Arousal. | Provide a detailed description of the following dataset: MuVi |
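As a sketch of working with the dynamic annotations, the snippet below averages valence per modality from a fabricated in-memory stand-in for av_data.csv; the column names (media_id, modality, valence, arousal) are assumptions about the schema, not taken from the repository.

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Fabricated rows standing in for av_data.csv; real column names may differ.
raw = io.StringIO(
    "media_id,modality,valence,arousal\n"
    "1,music,0.2,0.1\n"
    "1,music,0.4,0.3\n"
    "1,video,0.6,0.5\n"
    "1,video,0.8,0.7\n"
)
by_modality = defaultdict(list)
for row in csv.DictReader(raw):
    by_modality[row["modality"]].append(float(row["valence"]))

# Mean valence rating per annotation modality:
avg_valence = {m: mean(v) for m, v in by_modality.items()}
```

The same pattern extends to arousal, or to grouping by media item to compare the three modalities (muted video, music only, music and video).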
GF-PA66 3D XCT | Stack of 2D grayscale images of a glass fiber-reinforced polyamide 66 (GF-PA66) 3D X-ray Computed Tomography (XCT) specimen.
Usage: 2D/3D image segmentation
Format: HDF5
Libraries to read HDF5 files:
1) silx: [https://github.com/silx-kit/silx](https://github.com/silx-kit/silx)
2) h5py: [https://www.h5py.org](https://www.h5py.org)
3) pymicro: [https://github.com/heprom/pymicro](https://github.com/heprom/pymicro)
Trained models to segment this dataset: [https://doi.org/10.5281/zenodo.4601560](https://doi.org/10.5281/zenodo.4601560)
Please cite us as
```
@ARTICLE{10.3389/fmats.2021.761229,
AUTHOR={Bertoldo, João P. C. and Decencière, Etienne and Ryckelynck, David and Proudhon, Henry},
TITLE={A Modular U-Net for Automated Segmentation of X-Ray Tomography Images in Composite Materials},
JOURNAL={Frontiers in Materials},
VOLUME={8},
YEAR={2021},
URL={https://www.frontiersin.org/article/10.3389/fmats.2021.761229},
DOI={10.3389/fmats.2021.761229},
ISSN={2296-8016},
}
``` | Provide a detailed description of the following dataset: GF-PA66 3D XCT |
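A minimal sketch of reading such a slice stack with h5py is shown below. The file name and the internal dataset key ("data") are assumptions for illustration; inspect the real file (e.g., with silx view or `h5ls`) to find its actual layout.

```python
import numpy as np
import h5py

# Create a small stand-in HDF5 file so the read pattern below is runnable;
# the real GF-PA66 file would simply be opened in "r" mode instead.
with h5py.File("demo_volume.h5", "w") as f:
    f.create_dataset("data", data=np.zeros((4, 8, 8), dtype=np.uint8))

with h5py.File("demo_volume.h5", "r") as f:
    volume = f["data"][()]                       # load the full 3D stack into memory
    middle_slice = volume[volume.shape[0] // 2]  # one 2D slice for inspection
```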
SMCOVID19-CT | We present a real data analysis of a contact tracing (CT) experiment that was conducted in Italy over 8 months and involved more than 100,000 CT app users.
SM-Covid-19 uses a NoSQL data storage system to ensure scalability and performance. At regular intervals, the SM-Covid-19 backend generates a complete dump of the dataset. The dump is converted into a relational database stored as a CSV-formatted file so that the open data are easy to consult and process. The PID1 and PID2 fields are pre-processed via a SHA-256 hash with a seed stored in the SoftMining backend system, and the CSV is finally cleaned to remove duplicates. The CSV file is structured as follows:
• Date of the contact (dd/MM/YYYY)
• Time of the contact (HH:MM:SS)
• PID1 (256-bit hex)
• PID2 (256-bit hex)
• Contact duration (Integer, in seconds)
• Contact distance (Float, in meters) | Provide a detailed description of the following dataset: SMCOVID19-CT |
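The per-contact rows described above can be aggregated, for instance into total contact time per pair of users. The snippet below uses fabricated rows with an assumed header line and shortened PIDs, since the real dump's exact header and hash values are not reproduced here.

```python
import csv
import io

# Fabricated stand-in for the contact-tracing CSV; column names are assumed.
raw = io.StringIO(
    "date,time,pid1,pid2,duration_s,distance_m\n"
    "01/03/2021,10:15:00,aa11,bb22,120,1.5\n"
    "01/03/2021,10:40:00,aa11,bb22,60,0.9\n"
)
total_contact = {}
for row in csv.DictReader(raw):
    # Sort the PID pair so (A, B) and (B, A) map to the same key:
    pair = tuple(sorted((row["pid1"], row["pid2"])))
    total_contact[pair] = total_contact.get(pair, 0) + int(row["duration_s"])
```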
Icon645 | Icon645 is a large-scale dataset of icon images that cover a wide range of objects:
* **645,687** colored icons
* **377** different icon classes
These collected icon classes are frequently mentioned in the [IconQA](https://iconqa.github.io/) questions. In this work, we use the icon data to pre-train backbone networks on the icon classification task in order to extract semantic representations from abstract diagrams in IconQA. On top of pre-training encoders, the large-scale icon data could also contribute to open research on abstract aesthetics and symbolic visual understanding. | Provide a detailed description of the following dataset: Icon645 |
Study data | # Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study
## File Descriptions
File | Description
--- | ---
`commit_categorizations.csv` | Categorizations for the commits in our dataset.
`commits.csv` | Information for the commits in our dataset.
`datasets.csv` | Contains the names and descriptions of our datasets.
`issue_categorizations.csv` | Categorizations for the chosen issues from our dataset.
`issues.csv` | Information for the issues in our dataset.
`pipeline_stages.csv` | DL pipeline stages and their respective descriptions.
`problem_categories.csv` | Problem categories and their respective descriptions.
`problem_causes.csv` | Problem causes and their respective descriptions.
`problem_fixes.csv` | Problem fixes and their respective descriptions.
`problem_symptoms.csv` | Problem symptoms and their respective descriptions.
`studied_subjects_commits.csv` | Project data for commits.
`studied_subjects_issues.csv` | Project data for issues.
## Column Descriptions
### `commit_categorizations.csv`
Column | Description
--- | ---
`tf.function related fix?` | `TRUE` when a bug fix related to `tf.function` was found and `FALSE` otherwise. If `FALSE`, subsequent column values will be blank.
`stage` | DL pipeline stage where the problem fix was found.
### `issue_categorizations.csv`
Column | Description
--- | ---
`tf.function related problem?` | `TRUE` when a bug related to `tf.function` was found and `FALSE` otherwise. If `FALSE`, subsequent column values will be blank.
`stage` | DL pipeline stage where the problem was found.
`GH_id` | GitHub issue unique identifier.
### `issues.csv`
Column | Description
--- | ---
`GH_id` | GitHub issue unique identifier. | Provide a detailed description of the following dataset: Study data |
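As a sketch of joining these files, the snippet below links issue_categorizations.csv rows back to issues.csv via the shared GH_id column; the inline rows (and the `title` column) are fabricated for illustration, not taken from the study.

```python
import csv
import io

# Fabricated stand-ins for issues.csv and issue_categorizations.csv:
issues = io.StringIO("GH_id,title\n42,tf.function retracing\n43,docs typo\n")
cats = io.StringIO(
    "GH_id,tf.function related problem?,stage\n42,TRUE,training\n43,FALSE,\n"
)

# Index issues by their GitHub identifier, then keep only the ones
# categorized as tf.function-related problems:
by_id = {row["GH_id"]: row for row in csv.DictReader(issues)}
related = [
    by_id[row["GH_id"]]["title"]
    for row in csv.DictReader(cats)
    if row["tf.function related problem?"] == "TRUE"
]
```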
MCVQA | The MCVQA dataset consists of 248,349 training questions and 121,512 validation questions for real images in Hindi and Code-mixed. For each Hindi question, we also provide its 10 corresponding answers in Hindi. | Provide a detailed description of the following dataset: MCVQA |
AirSim Stereo Synthetic Dataset | Synthetic Dataset created in AirSim | Provide a detailed description of the following dataset: AirSim Stereo Synthetic Dataset |