dataset_name | description | prompt |
|---|---|---|
LSMDC-E | LSMDC-E contains 20,151 training samples, 1,477 validation samples and 2,005 test samples, which is modified from LSMDC 2021. We take the first four sentences in every five-sentence story as the story context and the last sentence as the story ending. As every sentence relates to a movie frame set in LSMDC, we take the last frame set as the ending-related image set for IgSEG. | Provide a detailed description of the following dataset: LSMDC-E |
BUP20 | Video sequences from a glasshouse environment at Campus Kleinaltendorf (CKA), University of Bonn, captured by [PATHoBot](https://ieeexplore.ieee.org/document/9562047), a glasshouse monitoring robot.
- 10 video sequences, each 120 s long.
- 2 cultivars: Mavera (yellow) and Allrounder (red).
- RGB-D images (Intel RealSense D435i cameras).
- Robot odometry and IMU.
- High-quality sparse instance segmentation labels. | Provide a detailed description of the following dataset: BUP20 |
FOR-instance | The challenge of accurately segmenting individual trees from laser scanning data hinders the assessment of crucial tree parameters necessary for effective forest management, impacting many downstream applications. While dense laser scanning offers detailed 3D representations, automating the segmentation of trees and their structures from point clouds remains difficult. The lack of suitable benchmark datasets and reliance on small datasets have limited method development. The emergence of deep learning models exacerbates the need for standardized benchmarks. Addressing these gaps, the FOR-instance data represent a novel benchmarking dataset to enhance forest measurement using dense airborne laser scanning data, aiding researchers in advancing segmentation methods for forested 3D scenes.
In this repository, users will find forest laser scanning point clouds from unmanned aerial vehicles (using Riegl sensors) that are manually segmented into individual trees (1,130 trees) and semantic classes. The point clouds are subdivided into five data collections representing different forests in Norway, the Czech Republic, Austria, New Zealand, and Australia.
These data are meant to be used either for development of new methods (using the dev data) or for testing of existing methods (test data). The data splits are provided in the data_split_metadata.csv file.
A full description of the FOR-instance data can be found at http://arxiv.org/abs/2309.01279 | Provide a detailed description of the following dataset: FOR-instance |
SB20 | Video sequences from a field at Campus Kleinaltendorf (CKA), University of Bonn, captured by [BonBot-I](https://ieeexplore.ieee.org/document/9981304), an autonomous weeding robot. The data was captured by mounting an Intel RealSense D435i sensor with a nadir view of the ground.
- RGB-D video sequences (Intel RealSense D435i cameras).
- Robot odometry and IMU.
- Crops and 8 different categories of weeds at different growth stages.
- Different illumination conditions.
- Three herbicide treatment regimes (30%, 70%, 100%), impacting weed density directly.
- High-quality sparse instance segmentation labels. | Provide a detailed description of the following dataset: SB20 |
Column Correlation Data | Contains correlation data for 119,384 column pairs, taken from 3,952 data sets, including Pearson correlation, Spearman correlation, and Theil's U. This data can be used, e.g., for approaches that predict column correlation based on column properties, including column names. | Provide a detailed description of the following dataset: Column Correlation Data |
MEAD | Multi-view Emotional Audio-visual Dataset | Provide a detailed description of the following dataset: MEAD |
FinVis | Pretrain: 200k
Instruction: 100k | Provide a detailed description of the following dataset: FinVis |
Bomstic | Plant growth | Provide a detailed description of the following dataset: Bomstic |
Text_VPH | This dataset consists of comments on Facebook posts by MINSA (Peru) about the HPV vaccine between 2019 and 2020. Each comment was read carefully and then classified manually. For this classification the messages were interpreted, so the threads (comments and replies) were analyzed separately and labeled by topic `"Topic"`. A health professional performed a second classification, and discrepancies were resolved with a third professional. Subcategories that referred directly to HPV vaccines were then selected. The classification was done using the following categories `"topic_c"`:
- 0: The comment takes a stance against the HPV vaccine (anti-vaccine)
- 1: The comment takes a stance in favor of the HPV vaccine (pro-vaccine)
- 2: The comment reflects a doubt or doubts related to the HPV vaccine
- 3: The comment is about anything else
Cite as:
- Lewis De La Cruz, Lucy Cordova, & Esperanza Reyes. (2023). <i>Text_VPH</i> [Data set]. Kaggle. https://doi.org/10.34740/KAGGLE/DSV/6460567 | Provide a detailed description of the following dataset: Text_VPH |
UniKG | We construct a large-scale Heterogeneous Graph benchmark dataset named UniKG from Wikidata.
UniKG contains 77.31 million multi-attribute entities labeled with 2,000 classes and 564 million directed edges annotated with 2,082 diverse association types, which significantly surpasses the scale of existing homogeneous graph datasets.
UniKG can facilitate downstream tasks across diverse domains. | Provide a detailed description of the following dataset: UniKG |
repository of migratable containers in UMS | Repository of containerized services that can be migrated through UMS.
The dataset includes containers of the UMS platform plus sample containerized services that can be live migrated using UMS.
Specifically, the dataset includes containers for the following two services:
1. Memhog application: this is a containerized service to check the impact of memory footprint of the containers upon live migration.
2. Yolo v3-Tiny application: this is a containerized service with a real-world application for object detection. This will help users to examine UMS under real-world settings. | Provide a detailed description of the following dataset: repository of migratable containers in UMS |
CamlessVideosFromTheWild | 57 stock videos from Pexels, predominantly covering road scenes which involve minimal distortion.
They involve different camera setups with varying camera heights, obstacles present throughout some videos (e.g., the car hood), highly varying image resolutions, and diverse weather and lighting conditions (day, rain, snow, night, etc.).
For more details, check the paper
CamLessMonoDepth: Monocular Depth Estimation with Unknown Camera Parameters
https://arxiv.org/pdf/2110.14347v1.pdf | Provide a detailed description of the following dataset: CamlessVideosFromTheWild |
List of OWL reasoners | CSV file with a list of all examined OWL reasoners. For each item, information on usability and maintenance status, project pages, source code repositories and related documentation was gathered. | Provide a detailed description of the following dataset: List of OWL reasoners |
FROG (2D Laser People Detection) | FROG is a 2D LiDAR dataset with annotations for people detectors. It consists of 6 fully annotated sequences, and 30 total hours of recordings at the Royal Alcázar of Seville (Spain). The main motivation of this dataset is providing higher quality data with a richer variety of crowded scenarios, for improving the field of people detectors based on knee-high 2D range finders. | Provide a detailed description of the following dataset: FROG (2D Laser People Detection) |
GRAZPEDWRI-DX | Digital radiography is widely available and the standard modality in trauma imaging, often enabling the diagnosis of pediatric wrist fractures. However, image interpretation requires time-consuming specialized training. Due to astonishing progress in computer vision algorithms, automated fracture detection has become a topic of research interest. This paper presents the GRAZPEDWRI-DX dataset containing annotated pediatric trauma wrist radiographs of 6,091 patients, treated at the Department for Pediatric Surgery of the University Hospital Graz between 2008 and 2018. A total number of 10,643 studies (20,327 images) are made available, typically covering posteroanterior and lateral projections. The dataset is annotated with 74,459 image tags and features 67,771 labeled objects. We de-identified all radiographs and converted the DICOM pixel data to 16-Bit grayscale PNG images. The filenames and the accompanying text files provide basic patient information (age, sex). Several pediatric radiologists annotated dataset images by placing lines, bounding boxes, or polygons to mark pathologies like fractures or periosteal reactions. They also tagged general image characteristics. This dataset is publicly available to encourage computer vision research. | Provide a detailed description of the following dataset: GRAZPEDWRI-DX |
EEG Eye State | All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data. | Provide a detailed description of the following dataset: EEG Eye State |
FMC-MWO2KG | The Failure Mode Classification dataset released in the paper ["MWO2KG and Echidna: Constructing and exploring knowledge graphs from maintenance data"](https://journals.sagepub.com/doi/10.1177/1748006X221131128) by Stewart et al. The goal is to label a given observation (made by a maintainer) with the corresponding Failure Mode Code.
Each row contains an observation made by a maintainer, followed by a comma, followed by the Failure Mode, for example:
falure,Breakdown
As they are written in technical language, there are often spelling/grammatical/tokenisation errors made in the observations - these are typical of maintenance work orders.
The dataset comprises 502 (observation, label) pairs (for training), 62 pairs (for validation) and 62 pairs (for testing). The labels are taken from a set of 22 failure mode codes from ISO 14224. In order to pull a list of observations to label, we ran MWO2KG over the data once and exported a list of all entities labelled as ‘observation’ (such as ‘leaking’, ‘not working’) by the Named Entity Recognition model. We then removed all results that were incorrectly predicted as observations by the NER model and proceeded to label each observation with the most appropriate failure mode code using a text editor.
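As a minimal illustration of the file format described above (our own sketch, not code from the paper's repository; the file name is a placeholder), such comma-separated pairs can be loaded as follows:
```python
def load_pairs(path):
    """Read 'observation,Failure Mode' lines into (text, label) tuples.
    We assume the label is the final comma-separated field, since the
    failure mode codes come from a fixed ISO 14224 set."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or "," not in line:
                continue
            text, label = line.rsplit(",", 1)
            pairs.append((text, label))
    return pairs

train_pairs = load_pairs("train.txt")  # e.g. [("falure", "Breakdown"), ...]
```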
The source code of the above paper (which also includes this dataset) is located on [GitHub](https://github.com/nlp-tlp/mwo2kg-and-echidna).
The direct link to the data (`train.txt`, `dev.txt`, and `test.txt`) is available [here](https://github.com/nlp-tlp/mwo2kg-and-echidna/tree/main/mwo2kg/failure_mode_classification/input_data). | Provide a detailed description of the following dataset: FMC-MWO2KG |
HEIMT | Continuous EEG activity was recorded from each member of the dyad using an ActiveTwo head cap and the ActiveTwo Biosemi system (BioSemi, Amsterdam, Netherlands). Recordings were
collected from 64 Ag-AgCl scalp electrodes and from bilateral mastoids. Two electrodes were placed next to each other 1 cm below the right eye to record eye-blink responses. A ground electrode was established by BioSemi’s common Mode Sense active electrode and Driven Right Leg passive electrode. EEG activity was digitized with ActiView software (BioSemi) and sampled at 2048 Hz. Data was downsampled post-acquisition and analyzed at 512 Hz. | Provide a detailed description of the following dataset: HEIMT |
US federal environmental agency websites, 2016–2020 | This dataset contains 40,000 URLs of US federal environmental agency websites, along with links to captures in the Internet Archive Wayback Machine for 2016 and 2020 when present. It also contains the prevalence of 56 environmental terms and phrases and how the presence of those terms on the webpages changed from 2016 to 2020.
During the Trump administration, website changes indicative of climate denial prompted civil society organizations to develop tools for tracking online government information sources. We examine a large sample of websites of US federal environmental agencies and show that between 2016 and 2020: 1) the use of the term “climate change” decreased by an estimated 38%; 2) access to as much as 20% of the Environmental Protection Agency’s website was removed; 3) changes were made more to Cabinet agencies’ websites and to highly visible pages.
This dataset can be used to examine webpage change over time with the assistance of web archives. | Provide a detailed description of the following dataset: US federal environmental agency websites, 2016–2020 |
ISLTranslate | Sign languages are the primary means of communication for a large number of people worldwide. Recently, the availability of sign language translation datasets has facilitated the incorporation of sign language research in the NLP community. Though a wide variety of research focuses on improving translation systems for sign language, the lack of ample annotated resources hinders research in the data-driven natural language processing community. In this resource paper, we introduce ISLTranslate, a translation dataset for continuous Indian Sign Language (ISL), consisting of 30k ISL-English sentence pairs. To the best of our knowledge, it is the first and largest translation dataset for continuous Indian Sign Language with corresponding English transcripts. We provide a detailed analysis of the dataset and examine the distribution of words and phrases covered in the proposed dataset. To validate the performance of existing end-to-end sign language to spoken language translation systems, we benchmark the created dataset with multiple existing state-of-the-art systems for sign languages. | Provide a detailed description of the following dataset: ISLTranslate |
BDD-A | Dataset Statistics: The statistics of our dataset are summarized and compared with the
largest existing dataset (DR(eye)VE) [1] in Table 1. Our dataset was collected using videos
selected from a publicly available, large-scale, crowd-sourced driving video dataset, BDD100k [30,
31]. BDD100K contains human-demonstrated dashboard videos and time-stamped sensor
measurements collected during urban driving in various weather and lighting conditions. To
efficiently collect attention data for critical driving situations, we specifically selected video clips
that both included braking events and took place in busy areas (see supplementary materials
for technical details). We then trimmed videos to include 6.5 seconds prior to and 3.5 seconds
after each braking event. It turned out that other driving actions, e.g., turning, lane switching
and accelerating, were also included. 1,232 videos (=3.5 hours) in total were collected following
these procedures. Some example images from our dataset are shown in Fig. 6. Our selected
videos contain a large number of different road users. We detected the objects in our videos
using YOLO [22]. On average, each video frame contained 4.4 cars and 0.3 pedestrians, multiple
times more than the DR(eye)VE dataset (Table 1).
Data Collection Procedure: For our eye-tracking experiment, we recruited 45 participants
who each had more than one year of driving experience. The participants watched the selected
driving videos in the lab while performing a driving instructor task: participants were asked
to imagine that they were driving instructors sitting in the copilot seat and needed to press
the space key whenever they felt it necessary to correct or warn the student driver of potential
dangers. Their eye movements during the task were recorded at 1000 Hz with an EyeLink 1000
desktop-mounted infrared eye tracker, used in conjunction with the Eyelink Toolbox scripts [7]
for MATLAB. Each participant completed the task for 200 driving videos. Each driving video
was viewed by at least 4 participants. The gaze patterns made by these independent participants
were aggregated and smoothed to make an attention map for each frame of the stimulus video
(see Fig. 6 and supplementary materials for technical details).
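A minimal sketch of that aggregate-and-smooth step (our own illustration with an assumed Gaussian kernel width, not the authors' exact parameters) is:
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_attention_map(gaze_points, height, width, sigma=25.0):
    """Accumulate gaze fixations (x, y) from several independent observers
    into one frame-level map, then smooth and normalize it."""
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y in gaze_points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            heat[yi, xi] += 1.0
    heat = gaussian_filter(heat, sigma=sigma)   # spatial smoothing
    if heat.max() > 0:
        heat /= heat.max()                      # normalize to [0, 1]
    return heat

# e.g. gaze samples from 4 observers on a 720x1280 frame
attention = make_attention_map([(640, 360), (650, 355), (200, 500), (645, 362)], 720, 1280)
```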
Psychological studies [19, 11] have shown that when humans look through multiple visual
cues that simultaneously demand attention, the order in which humans look at those cues is
highly subjective. Therefore, by aggregating gazes of independent observers, we could record
multiple important visual cues in one frame. In addition, it has been shown that human drivers
look at buildings, trees, flowerbeds, and other unimportant objects non-negligibly frequently
[1]. Presumably, these eye movements should be regarded as noise for driving-related machine
learning purposes. By averaging the eye movements of independent observers, we were able to
effectively wash out those sources of noise (see Fig. 2B).
Comparison with In-Car Attention Data: We collected in-lab driver attention data using
videos from the DR(eye)VE dataset. This allowed us to compare in-lab and in-car attention
maps of each video. The DR(eye)VE videos we used were 200 randomly selected 10-second
video clips, half of them containing braking events and half without braking events.
We tested how well in-car and in-lab attention maps highlighted driving-relevant objects.
We used YOLO [22] to detect the objects in the videos of our dataset. We identified three
object categories that are important for driving and that had sufficient instances in the videos
(car, pedestrian and cyclist). We calculated the proportion of attended objects out of total
detected instances for each category for both in-lab and in-car attention maps (see supplementary
materials for technical details). The results showed that in-car attention maps highlighted
significantly fewer driving-relevant objects than in-lab attention maps (see Fig. 2A).
The difference in the number of attended objects between the in-car and in-lab attention maps
can be due to the fact that eye movements collected from a single driver do not completely indicate
all the objects that demand attention in the particular driving situation. One individual’s eye
movements are only an approximation of their attention [23], and humans can also track objects
with covert attention without looking at them [6]. The difference in the number of attended
objects may also reflect the difference between first-person driver attention and third-person
driver attention. It may be that the human observers in our in-lab eye-tracking experiment also
looked at objects that were not relevant for driving. We ran a human evaluation experiment to
address this concern.
Human Evaluation: To verify that our in-lab driver attention maps highlight regions that
should indeed demand drivers’ attention, we conducted an online study to let humans compare
in-lab and in-car driver attention maps. In each trial of the online study, participants watched
one driving video clip three times: the first time with no edit, and then two more times in
random order with overlaid in-lab and in-car attention maps, respectively. The participant was
then asked to choose which heatmap-coded video was more similar to where a good driver would
look. In total, we collected 736 trials from 32 online participants. We found that our in-lab
attention maps were more often preferred by the participants than the in-car attention maps
(71% versus 29% of all trials, statistically significant as p = 1×10−29, see Table 2). Although
this result cannot suggest that in-lab driver attention maps are superior to in-car attention maps
in general, it does show that the driver attention maps collected with our protocol represent
where a good driver should look from a third-person perspective.
In addition, we will show in the Experiments section that in-lab attention data collected
using our protocol can be used to train a model to effectively predict actual, in-car driver
attention. This result proves that our dataset can also serve as a substitute for in-car driver
attention data, especially in crucial situations where in-car data collection is not practical.
To summarize, compared with driver attention data collected in-car, our dataset has three
clear advantages: multi-focus, little driving-irrelevant noise, and efficiently tailored to crucial
driving situations. | Provide a detailed description of the following dataset: BDD-A |
Synthetic Speech Attribution | Synthetic Speech Attribution Dataset. | Provide a detailed description of the following dataset: Synthetic Speech Attribution |
Robust e-NeRF Synthetic Event Dataset | This synthetic event dataset is used in [**Robust *e*-NeRF**](https://wengflow.github.io/robust-e-nerf) to study the collective effect of camera speed profile, contrast threshold variation and refractory period on the quality of NeRF reconstruction from a moving event camera. It is simulated using an [improved version of ESIM](https://github.com/wengflow/rpg_esim) with three different camera configurations of increasing difficulty levels (*i.e.* *easy*, *medium* and *hard*) on seven Realistic Synthetic $360^{\circ}$ scenes (adopted in the synthetic experiments of NeRF), resulting in a total of 21 sequence recordings. Please refer to the [Robust *e*-NeRF paper](https://arxiv.org/abs/2309.08596) for more details.
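For intuition, a toy per-pixel sketch of the idealized event-generation rule such simulators implement (a log-intensity change crossing a contrast threshold, followed by a refractory period) is given below; it is our own simplification without sub-step interpolation, not code from ESIM or this dataset:
```python
import numpy as np

def simulate_events(log_intensity, timestamps, C=0.25, refractory=1e-3):
    """Toy per-pixel event generator: emit an event when the log-intensity
    change since the last event reaches the contrast threshold C, then
    suppress further events for the refractory period (seconds)."""
    events = []                    # (time, polarity) tuples
    ref_val = log_intensity[0]     # reference log-intensity at the last event
    last_t = -np.inf               # time of the last emitted event
    for t, val in zip(timestamps[1:], log_intensity[1:]):
        diff = val - ref_val
        if abs(diff) >= C and (t - last_t) >= refractory:
            events.append((t, 1 if diff > 0 else -1))
            ref_val, last_t = val, t
    return events

# Example: a pixel whose log-intensity ramps up and then back down
ts = np.linspace(0.0, 1.0, 1000)
log_I = np.where(ts < 0.5, 2.0 * ts, 2.0 * (1.0 - ts))
print(len(simulate_events(log_I, ts)))
```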
The dataset allows for a retrospective comparison between event-based and image-based NeRF reconstruction methods, as the event sequences were simulated under highly similar conditions as the NeRF synthetic dataset. In particular, we adopt the same camera intrinsics and camera distance to the object at the origin. Furthermore, the event camera travels in a hemi-/spherical spiral motion about the object, thereby having a similar camera pose distribution for training. Apart from that, we also use the same test camera poses/views. Nonetheless, this new synthetic event dataset is not only specific to NeRF reconstruction, but also suitable for novel view synthesis, 3D reconstruction, localization and SLAM in general. | Provide a detailed description of the following dataset: Robust e-NeRF Synthetic Event Dataset |
FinArg | With the goal of reasoning on the financial textual data, we present a novel dataset for annotating arguments, their components, and relations in the transcripts of earnings conference calls (ECCs). | Provide a detailed description of the following dataset: FinArg |
Autoformer | This is the dataset for "A Time Series is Worth 64 Words: Long-Term Forecasting with Transformers".
We evaluate the performance of our proposed PatchTST on 8 popular datasets, including
Weather, Traffic, Electricity, ILI and 4 ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2). These
datasets have been extensively utilized for benchmarking and are publicly available (Wu et al., 2021).
The statistics of those datasets are summarized in Table 2. We would like to highlight several large
datasets: Weather, Traffic, and Electricity. They have many more time series, thus the
results would be more stable and less susceptible to overfitting than other smaller datasets. | Provide a detailed description of the following dataset: Autoformer |
CY101 Dataset | In this dataset an upper-torso humanoid robot with a 7-DOF arm explored 100 different objects belonging to 20 different categories using 10 behaviors: Look, Crush, Grasp, Hold, Lift, Drop, Poke, Push, Shake and Tap. | Provide a detailed description of the following dataset: CY101 Dataset |
ChatGPT-GNN Dataset | The data can be found in the Data folder, which contains two files:
- `ticker_train_data.json`: This file holds the data utilized for training and validation of our model.
- `ticker_test_data.json`: This file contains the data used for model evaluation.
To load the data, you can start with 4 lines of code:
```python
import pandas as pd
import json
train_data = pd.read_json('./Data/ticker_train_data.json')
test_data = pd.read_json('./Data/ticker_test_data.json')
```
The Affected Companies column provides two key insights:
- Companies that ChatGPT predicts will be influenced by the financial news.
- The sentiment indicating the nature of the impact on these companies (e.g., positive or negative). | Provide a detailed description of the following dataset: ChatGPT-GNN Dataset |
MCSI | The Mpox Close Skin Images dataset (MCSI) is a collection of skin images obtained from diverse public sources that we accurately pre-processed (i.e., cropped and zoomed) in order to focus on the skin lesion (if present), and to evaluate Machine Learning models aimed at detecting different pathologies from skin lesion pictures taken with smartphone cameras. It includes a total of 400 pictures homogeneously divided into 4 different classes: mpox, which contains samples of mpox (formerly Monkeypox) skin lesions; chickenpox, with samples of chickenpox cases; acne, containing samples of acne at different severity levels; and healthy, which contains samples of skin without any evident symptoms. This repository is part of the supplementary material accompanying the paper titled: A Transfer Learning and Explainable Solution to Detect mpox from Smartphones images. | Provide a detailed description of the following dataset: MCSI |
urban_change_monitoring_mariupol_ua | This dataset contains the ground truth for urban changes that occurred in Mariupol, Ukraine for the time frame 2017-2020. This is useful for transferring the urban change monitoring network ERCNN-DRS (https://github.com/It4innovations/ERCNN-DRS_urban_change_monitoring) to that region. | Provide a detailed description of the following dataset: urban_change_monitoring_mariupol_ua |
CapMIT1003 | The CapMIT1003 database contains captions and clicks collected for images from the MIT1003 database, for which reference eye scanpaths are available. The database is distributed as a single SQLite3 database named capmit1003.db. For convenience, a lightweight Python class to access the database is provided in the official repository. | Provide a detailed description of the following dataset: CapMIT1003 |
SODA-D | SODA-D is a large-scale dataset tailored for small object detection in driving scenarios, which is built on top of the MVD dataset and our own data, where the former is a dataset dedicated to pixel-level understanding of street scenes, and the latter is mainly captured by onboard cameras and mobile phones. With 24,704 well-chosen and high-quality images of driving scenarios, SODA-D comprises 277,596 instances of 9 categories with horizontal bounding boxes. | Provide a detailed description of the following dataset: SODA-D |
SODA-A | SODA-A is a large-scale benchmark specialized for the small object detection task under aerial scenes, which has 800,203 instances with oriented rectangle box annotations across 9 classes. It contains 2,510 high-resolution images extracted from Google Earth. | Provide a detailed description of the following dataset: SODA-A |
RAD-ChestCT Dataset | The RAD-ChestCT dataset is a large medical imaging dataset developed by Duke MD/PhD Rachel Draelos during her Computer Science PhD supervised by Lawrence Carin. The full dataset includes 35,747 chest CT scans from 19,661 adult patients. The public Zenodo repository contains an initial release of 3,630 chest CT scans, approximately 10% of the dataset. This dataset is of significant interest to the machine learning and medical imaging research communities. | Provide a detailed description of the following dataset: RAD-ChestCT Dataset |
Chaotic Trajectories | Dataset of chaotic Chua, Lorenz, Lorenz96, Mackey-Glass with tau=17, Mackey-Glass with tau=30, Rossler, Sprott systems. | Provide a detailed description of the following dataset: Chaotic Trajectories |
ImageNet-64 | Imagenet64 is a massive dataset of small images, known as the down-sampled version of ImageNet. Imagenet64 comprises 1,281,167 training images and 50,000 test images with 1,000 labels. | Provide a detailed description of the following dataset: ImageNet-64 |
MelodyNet | We introduce a large-scale and diverse symbolic melody dataset called MelodyNet that contains more than 0.4 million melody pieces extracted from approximately 1.6 million songs. MelodyNet is used for large-scale pre-training and domain-specific n-gram lexicon construction. | Provide a detailed description of the following dataset: MelodyNet |
FairPrism | FairPrism is a dataset of 5,000 examples of AI-generated English text with detailed human annotations covering a diverse set of harms relating to gender and sexuality. FairPrism aims to address several limitations of existing datasets for measuring and mitigating fairness-related harms, including improved transparency, clearer specification of dataset coverage, and accounting for annotator disagreement and harms that are context-dependent. FairPrism’s annotations include the extent of stereotyping and demeaning harms, the demographic groups targeted, and appropriateness for different applications. The annotations also include specific harms that occur in interactive contexts and harms that raise normative concerns when the “speaker” is an AI system. Due to its precision and granularity, FairPrism can be used to diagnose (1) the types of fairness-related harms that AI text generation systems
cause, and (2) the potential limitations of mitigation methods. | Provide a detailed description of the following dataset: FairPrism |
Application of PanDict system based on EPSEIRV and SI3R models in epidemic forecasting and healthcare resource planning | Global epidemics, like COVID-19, have substantial impacts on almost all countries in multiple aspects, such as economy, hospitalization, lifestyle, etc. [1, 2]. COVID-19 can spread to populations worldwide due, in part, to its high contagiousness, but more importantly, because of our inability to quickly address some of the most fundamental problems of a newly emerged virus: 1) How quickly will the virus spread? 2) Whether and under what conditions will new variants emerge? 3) How do we arrange our resources accordingly? Since previous epidemic models were incapable of addressing these three most important questions, we developed the PanDict system, which can help address all three of the most essential problems discussed above. To elaborate, our model consists of three crucial parts, each tackling one of the three above-mentioned problems: 1) predicting the spread of the new virus in each local community and calculating its R0 value using our newly devised EPSEIRV model; 2) creating and using the SI3R model to simulate variant competition; 3) forecasting hospitalization deficiencies in each state and producing visual representations of the projected demand using our IHOV model. In contrast to other vague and incorrect predictions/models, our EPSEIRV model accurately predicted the spread of the Omicron variant of SARS-CoV-2 in the United States and South Africa prior to their peaks. Moreover, in January 2022, we concluded that the R0 value of Omicron is around 18.8. The high infection speeds of these viruses allow them to circulate widely in the population before vaccines are fully developed. Thus, there will be inevitable surges in the number of patients, which can potentially overwhelm unprepared hospitals, hence making the IHOV model especially imperative. In a nutshell, when a novel disease emerges, the PanDict model can quickly and accurately predict how fast the disease spreads, whether the disease will successfully mutate, and how to arrange hospitalization resources to most efficiently mitigate suffering. These crucial functions can apprise our users of where the potential epidemic is heading and how to diminish its impact. Furthermore, the PanDict model will allow hospitalization systems to be much more prepared for upcoming surges of patients, which would significantly reduce excess deaths and hospitalization deficiencies. The system also supports planning by related departments or corporations with the EPSEIRV and SI3R models during the contemporary epidemic. | Provide a detailed description of the following dataset: Application of PanDict system based on EPSEIRV and SI3R models in epidemic forecasting and healthcare resource planning |
Vega-Lite Chart Collection | We present a new collection of 1,981 Vega-Lite specifications, which is used to demonstrate the generalizability and viability of our NL generation framework. This collection is the largest set of human-generated charts obtained from GitHub to date. It covers varying levels of complexity from a simple line chart without any interaction to a chart with four plots where data points are linked with selection interactions. Compared to the benchmarks, our dataset shows the highest average pairwise edit distance between specifications, which proves that the charts are highly diverse from one another. Moreover, it contains the largest number of charts with composite views, interactions (e.g., tooltips, panning & zooming, and linking), and diverse chart types (e.g., map, grid & matrix, diagram, etc.). | Provide a detailed description of the following dataset: Vega-Lite Chart Collection |
Verified Smart Contracts | **Verified Smart Contracts** is a dataset of real Ethereum smart contracts, containing both Solidity and Vyper source code. It consists of every Ethereum smart contract deployed as of 1 April 2022 that has been verified on [Etherscan](https://etherscan.io/) and has at least one transaction. A total of 186,397 unique smart contracts are provided, filtered down from 2,217,692 smart contracts. The dataset contains 53,843,305 lines of code. | Provide a detailed description of the following dataset: Verified Smart Contracts |
Verified Smart Contract Code Comments | **Verified Smart Contracts Code Comments** is a dataset of real Ethereum smart contract functions, containing "code, comment" pairs of both Solidity and Vyper source code. The dataset is based on every Ethereum smart contract deployed as of 1 April 2022 that has been verified on [Etherscan](https://etherscan.io/) and has at least one transaction. A total of 1,541,370 smart contract functions are provided, parsed from 186,397 unique smart contracts, filtered down from 2,217,692 smart contracts. | Provide a detailed description of the following dataset: Verified Smart Contract Code Comments |
Vulnerable Verified Smart Contracts | **Vulnerable Verified Smart Contracts** is a dataset of real vulnerable Ethereum smart contracts. Based on the manually labeled [Benchmark dataset of Solidity smart contracts](https://doi.org/10.5281/zenodo.7744053). A total of 609 vulnerable contracts are provided, containing 1,117 vulnerabilities. | Provide a detailed description of the following dataset: Vulnerable Verified Smart Contracts |
maze-dataset | This package provides utilities for generation, filtering, solving, visualizing, and processing of mazes for training ML systems. Primarily built for the [maze-transformer interpretability](https://github.com/understanding-search/maze-transformer) project. You can find our paper on it here: http://arxiv.org/abs/2309.10498
This package includes a variety of [maze generation algorithms](maze_dataset/generation/generators.py), including randomized depth first search, Wilson's algorithm for uniform spanning trees, and percolation. Datasets can be filtered to select mazes of a certain length or complexity, remove duplicates, and satisfy custom properties. A variety of output formats for visualization and training ML models are provided. | Provide a detailed description of the following dataset: maze-dataset |
STRAT | Spatial TRAnsformation for virtual Try-on (STRAT) dataset contains three subdatasets: STRAT-glasses, STRAT-hat, and STRAT-tie, which correspond to "glasses try-on", "hat try-on", and "tie try-on" respectively. In each subdataset, the training set has 2000 pairs of foregrounds (accessories) and backgrounds (human faces or portrait images), while the test set has 1000 pairs of foregrounds and backgrounds. For each pairwise sample, both the vertex coordinates and warping parameters of the foreground are provided for supervised learning and evaluation of spatial transformation. | Provide a detailed description of the following dataset: STRAT |
CAFD | This dataset comprises 16,499 images with 42 classes encompassing the most popular Central Asian cuisine consumed locally. | Provide a detailed description of the following dataset: CAFD |
ImageNet-1k vs iNaturalist | A benchmark dataset for out-of-distribution detection. ImageNet-1k is in-distribution, while iNaturalist is out-of-distribution. | Provide a detailed description of the following dataset: ImageNet-1k vs iNaturalist |
ImageNet-1k vs SUN | A benchmark dataset for out-of-distribution detection. ImageNet-1k is in-distribution, while SUN is out-of-distribution. | Provide a detailed description of the following dataset: ImageNet-1k vs SUN |
ImageNet-1k vs Places | A benchmark dataset for out-of-distribution detection. ImageNet-1k is in-distribution, while Places is out-of-distribution. | Provide a detailed description of the following dataset: ImageNet-1k vs Places |
ImageNet-1k vs Textures | A benchmark dataset for out-of-distribution detection. ImageNet-1k is in-distribution, while Textures is out-of-distribution. | Provide a detailed description of the following dataset: ImageNet-1k vs Textures |
ImageNet-1k vs OpenImage-O | OpenImage-O is built for the ID dataset ImageNet-1k. It is manually annotated, comes with a naturally diverse distribution, and has a large scale. It is built to overcome several shortcomings of existing OOD benchmarks. OpenImage-O is image-by-image filtered from the test set of OpenImage-V3, which has been collected from Flickr without a predefined list of class names or tags, leading to natural class statistics and avoiding an initial design bias. | Provide a detailed description of the following dataset: ImageNet-1k vs OpenImage-O |
MPDD | Metal Parts Defect Detection Dataset | Provide a detailed description of the following dataset: MPDD |
NICOL Robot Kinematics Dataset | Kinematics Dataset for the NICOL robot (Neuro-inspired Collaborator). Data is intended for Training and Testing of inverse kinematics applications.
Contains Training, Test and Validation data for NICOL's right arm. Data for two differently-sized workspaces given.
Every sample is a tuple of a uniform randomly sampled robot joint state and the corresponding pose that was calculated with forward kinematics. | Provide a detailed description of the following dataset: NICOL Robot Kinematics Dataset |
CodeGen4Libs Dataset | The dataset is specifically constructed for the library-oriented code generation task, which are constructed in the paper “CodeGen4Libs: A Two-Stage Approach for Library-Oriented Code Generation”. | Provide a detailed description of the following dataset: CodeGen4Libs Dataset |
OG-MARL | Diverse datasets for offline multi-agent reinforcement learning research. Includes datasets for popular MARL benchmark environments such as:
* MAMuJoCo
* SMAC v1 & v2
* PettingZoo
* FlatLand
* CityLearn | Provide a detailed description of the following dataset: OG-MARL |
GePaDe | This dataset encompasses 265 speeches (over 200,000 tokens) from the German Bundestag, primarily from the 19th legislative term (2017-2021), given by 195 distinct speakers representing 6 political parties.
The data was annotated to perform a semantic role labeling task, namely to identify who said what to whom (speaker attribution). Cues (triggers) were annotated that are associated with events of speech, writing, or thought. Additionally, the arguments (roles) of each trigger have been annotated, encompassing the SOURCE, ADDRESSEE, MESSAGE, MEDIUM, TOPIC, and EVIDENCE related to the speech event.
The dataset was introduced in the international GermEval 2023 Shared Task on Speaker Attribution in Newswire and Parliamentary Debates (SpkAtt-2023) to evaluate the quality of systems for automated identification of cues and associated roles.
Reference
Rehbein, I. et al, Overview of the GermEval 2023 Shared Task on Speaker Attribution in Newswire and Parliamentary Debates, https://github.com/umanlp/SpkAtt-2023/blob/master/doc/SpkAtt2023-proceedings.pdf | Provide a detailed description of the following dataset: GePaDe |
Google Brain - Ventilator Pressure Prediction | What do doctors do when a patient has trouble breathing? They use a ventilator to pump oxygen into a sedated patient's lungs via a tube in the windpipe. But mechanical ventilation is a clinician-intensive procedure, a limitation that was prominently on display during the early days of the COVID-19 pandemic. At the same time, developing new methods for controlling mechanical ventilators is prohibitively expensive, even before reaching clinical trials. High-quality simulators could reduce this barrier.
Current simulators are trained as an ensemble, where each model simulates a single lung setting. However, lungs and their attributes form a continuous space, so a parametric approach must be explored that would consider the differences in patient lungs.
Partnering with Princeton University, the team at Google Brain aims to grow the community around machine learning for mechanical ventilation control. They believe that neural networks and deep learning can better generalize across lungs with varying characteristics than the current industry standard of PID controllers.
In this competition, you’ll simulate a ventilator connected to a sedated patient's lung. The best submissions will take lung attributes compliance and resistance into account.
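For intuition only, a toy single-compartment lung model shows how resistance R and compliance C enter the airway pressure; this is our own illustration, not the simulator or data-generating process used in the competition:
```python
import numpy as np

def airway_pressure(flow, dt, R=15.0, C=20.0, peep=5.0):
    """Toy single-compartment lung model: P = R*Q + V/C + PEEP.
    flow in L/s, dt in s, R in cmH2O/(L/s), C in mL/cmH2O, PEEP in cmH2O.
    Illustrative only; not the competition's ventilator simulator."""
    volume_ml = np.cumsum(flow * dt) * 1000.0      # integrate flow to volume (mL)
    return R * flow + volume_ml / C + peep

t = np.arange(0.0, 3.0, 0.02)
inhale = (t < 1.0).astype(float) * 0.5             # 0.5 L/s square inspiratory flow
print(airway_pressure(inhale, dt=0.02)[:5])
```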
If successful, you'll help overcome the cost barrier of developing new methods for controlling mechanical ventilators. This will pave the way for algorithms that adapt to patients and reduce the burden on clinicians during these novel times and beyond. As a result, ventilator treatments may become more widely available to help patients breathe. | Provide a detailed description of the following dataset: Google Brain - Ventilator Pressure Prediction |
Grasp-Anything | We leverage knowledge from foundation models to introduce Grasp-Anything, a new large-scale dataset with 1M (one million) samples and 3M objects, substantially surpassing prior datasets in diversity and magnitude. In addition, Grasp-Anything can universally cover objects in our daily lives and offer a great range of object diversity. | Provide a detailed description of the following dataset: Grasp-Anything |
WINGBEATS | Context
The database contains wav recordings from the same optical sensor inserted in-turn into six insectary boxes containing only one mosquito species of both sexes (about 200-300 flying mosquitoes in each cage). As the mosquitoes fly randomly through the sensor their wingbeat partially occludes the light from the transmitter to the receiver. The light fluctuation recorded is modulated by the wingbeat of the insect. The resulting signal is pseudo-acoustic, meaning that it sounds exactly like a microphone recording but has been acquired using optical means (however, not vision based).
Insect Biometrics, in the context of our work, is a measurable behavioral characteristic of flying insects. Biometric identifiers are related to the shape of the body (main body size, wing shape, wingbeat frequency, pattern movement of the wings). Biometric identification methods use biometric characteristics or traits to verify species/sex identities when insects access endpoint traps following a bait.
Content
• 279,566 wingbeat recordings correctly labeled
• 6 mosquito species (Ae. aegypti, Ae. albopictus, An. arabiensis, An. gambiae, Cu. pipiens, Cu. quinquefasciatus)
• 3 genera of mosquito species (Aedes, Anopheles, Culex)
Acknowledgements
The data have been recorded at the premises of Biogents, Regensburg, Germany (https://www.biogents.com/) and with the help of IRIDEON SA, Spain (http://irideon.eu/ ).
The data have been recorded using the device published in:
Potamitis I. and Rigakis I., "Large Aperture Optoelectronic Devices to Record and Time-Stamp Insects’ Wingbeats," in IEEE Sensors Journal, vol. 16, no. 15, pp. 6053-6061, Aug.1, 2016.
doi: 10.1109/JSEN.2016.2574762
The REMOSIS project that supported the creation of the database has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 691131.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of a TITAN-X GPU used for training the deep learning networks used to classify mosquitoes’ spectra.
Inspiration
The point of having such recordings is to eventually embed optoelectronic sensors in automatic traps that will report counts, species and sex identity of captured mosquitoes. All species of this dataset can be dangerous as they are potential vectors of pathogens that cause serious illnesses.
A widespread network of traps for insects of economic importance such as fruit flies and of hygienic importance such as mosquitoes allows the automatic creation of spatiotemporal maps and cuts down significantly the manual cost of visiting the traps. The creation of historical data can lead to the prediction of outbreaks and risk assessment in general.
We provide code to read the data and extract the power spectral density signature of each wingbeat. We also extract Mel-scaled, filter-bank features. How about wavelets and time-varying autoregressive models?
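As one hedged illustration of that kind of feature extraction (our own sketch rather than the provided starter code; the file path is a placeholder):
```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Load one wingbeat recording and compute its power spectral density signature.
rate, samples = wavfile.read("wingbeat_example.wav")
samples = samples[:, 0] if samples.ndim > 1 else samples   # keep one channel
samples = samples.astype(np.float64)
samples /= (np.abs(samples).max() + 1e-12)                 # amplitude-normalize

freqs, psd = welch(samples, fs=rate, nperseg=256)
peak_hz = freqs[np.argmax(psd)]                            # dominant wingbeat frequency
print(f"dominant frequency: {peak_hz:.1f} Hz")
```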
The starter code using top-tier shallow classifiers achieves a mean accuracy of 81-84%. Deep-learning performs better.
Can you classify genus, perform clustering, apply transfer learning to spectral data?
Come aboard and help humanity against killer mosquitoes! | Provide a detailed description of the following dataset: WINGBEATS |
ABUZZ | As part of our policy to openly share all data from this project, we have included a downloadable package comprising all acoustic data collected over the course of this work. This includes acoustic recordings from 20 different species of mosquitoes, using a variety of mobile phones for each. This data can be downloaded from the online repository on dryad.org. The supplementary audio files are not included in this package, and may be downloaded separately. | Provide a detailed description of the following dataset: ABUZZ |
Br35H :: Brain Tumor Detection 2020 | ✔️ Abstract
A brain tumor is considered one of the most aggressive diseases among children and adults. Brain tumors account for 85 to 90 percent of all primary Central Nervous System (CNS) tumors. Every year, around 11,700 people are diagnosed with a brain tumor. The 5-year survival rate for people with a cancerous brain or CNS tumor is approximately 34 percent for men and 36 percent for women. Brain Tumors are classified as: Benign Tumor, Malignant Tumor, Pituitary Tumor, etc. Proper treatment, planning, and accurate diagnostics should be implemented to improve the life expectancy of the patients. The best technique to detect brain tumors is Magnetic Resonance Imaging (MRI). A huge amount of image data is generated through the scans. These images are examined by the radiologist. A manual examination can be error-prone due to the level of complexities involved in brain tumors and their properties.
Application of automated classification techniques using Machine Learning (ML) and Artificial Intelligence (AI) has consistently shown higher accuracy than manual classification. Hence, proposing a system performing detection and classification by using Deep Learning Algorithms using Convolution-Neural Network (CNN), Artificial Neural Network (ANN), and Transfer-Learning (TL) would be helpful to doctors all around the world.
✔️ Context
Brain tumors are complex. There are a lot of abnormalities in the sizes and location of the brain tumor(s). This makes it really difficult to fully understand the nature of the tumor. Also, a professional neurosurgeon is required for MRI analysis. Oftentimes, in developing countries, the lack of skilled doctors and of knowledge about tumors makes it challenging and time-consuming to generate reports from MRIs. So an automated system on the cloud can solve this problem.
✔️ Definition
To detect and classify brain tumors using CNN and TL, as an asset of deep learning, and to examine the tumor position (segmentation).
✔️ About the data:
The dataset contains 3 folders: yes, no, and pred, which together contain 3,060 brain MRI images.
- Folder yes: contains 1,500 brain MRI images that are tumorous
- Folder no: contains 1,500 brain MRI images that are non-tumorous
By: Ahmed Hamada | Provide a detailed description of the following dataset: Br35H :: Brain Tumor Detection 2020 |
Amazon Multilingual Counterfactual Dataset (AMCD) | The dataset contains *sentences* from Amazon customer reviews (sampled from [Amazon product review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html)) annotated for counterfactual detection (CFD) *binary classification*.
Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists. | Provide a detailed description of the following dataset: Amazon Multilingual Counterfactual Dataset (AMCD) |
SMID | This is the low-light image enhancement dataset collected by the CVPR 2018 paper "Seeing Motion in the Dark". | Provide a detailed description of the following dataset: SMID |
SDSD-indoor | The dataset collected by the paper Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment, ICCV 2021 | Provide a detailed description of the following dataset: SDSD-indoor |
SDSD-outdoor | Seeing Dynamic Scene in the Dark: High-Quality Video Dataset with Mechatronic Alignment | Provide a detailed description of the following dataset: SDSD-outdoor |
WiSARD | WiSARD stands for Wilderness Search and Rescue Dataset (pronounced "wizard"). WiSARD consists of visual and thermal imagery taken from a drone flying over various wilderness environments in Washington, USA. The purpose of the WiSAR Image Dataset is to advance computer vision and deep learning research with a targeted application for wilderness search and rescue. | Provide a detailed description of the following dataset: WiSARD |
LOL-v2-synthetic | From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement | Provide a detailed description of the following dataset: LOL-v2-synthetic |
GlotScript | GlotScript-R is a resource that provides the attested writing systems for more than 7,000 languages. | Provide a detailed description of the following dataset: GlotScript |
ARAD-1K | The dataset used for NTIRE 2022 Spectral Recovery Challenge | Provide a detailed description of the following dataset: ARAD-1K |
KAIST | High-quality hyperspectral reconstruction using a spectral prior | Provide a detailed description of the following dataset: KAIST |
CAVE | Multispectral imaging using multiplexed illumination. | Provide a detailed description of the following dataset: CAVE |
VidChapters-7M | VidChapters-7M is a dataset of 817K user-chaptered videos including 7M chapters in total. VidChapters-7M is automatically created from videos online in a scalable manner by scraping user-annotated chapters and hence without any additional manual annotation. It is designed for training and evaluating models for video chapter generation with or without ground-truth boundaries, and video chapter grounding, as well as for video-language pretraining. | Provide a detailed description of the following dataset: VidChapters-7M |
ABIDE | Autism spectrum disorder (ASD) is characterized by qualitative impairment in social reciprocity, and by repetitive, restricted, and stereotyped behaviors/interests. Previously considered rare, ASD is now recognized to occur in more than 1% of children. Despite continuing research advances, their pace and clinical impact have not kept up with the urgency to identify ways of determining the diagnosis at earlier ages, selecting optimal treatments, and predicting outcomes. For the most part this is due to the complexity and heterogeneity of ASD. To face these challenges, large-scale samples are essential, but single laboratories cannot obtain sufficiently large datasets to reveal the brain mechanisms underlying ASD. In response, the Autism Brain Imaging Data Exchange (ABIDE) initiative has aggregated functional and structural brain imaging data collected from laboratories around the world to accelerate our understanding of the neural bases of autism. With the ultimate goal of facilitating discovery science and comparisons across samples, the ABIDE initiative now includes two large-scale collections: ABIDE I and ABIDE II. Each collection was created through the aggregation of datasets independently collected across more than 24 international brain imaging laboratories and are being made available to investigators throughout the world, consistent with open science principles, such as those at the core of the International Neuroimaging Data-sharing Initiative. For details about these initiatives visit the collection specific pages: ABIDE I and ABIDE II. | Provide a detailed description of the following dataset: ABIDE |
PFD | Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set. | Provide a detailed description of the following dataset: PFD |
EKubric | We use Kubric and ESIM simulator to make our EKubric dataset, which has 15,367 RGB-PointCloud-Event pairs with annotations (including optical flow, scene flow, surface normal, semantic segmentation and object coordinates ground truths). | Provide a detailed description of the following dataset: EKubric |
Real HSI | End-to-End Low Cost Compressive Spectral Imaging with Spatial-Spectral Self-Attention | Provide a detailed description of the following dataset: Real HSI |
ForceCVPR2020 | This is a simulated dataset for force prediction | Provide a detailed description of the following dataset: ForceCVPR2020 |
Glass | From USA Forensic Science Service; 6 types of glass; defined in terms of their oxide content (i.e. Na, Fe, K, etc) | Provide a detailed description of the following dataset: Glass |
Statlog image segmentation | The instances were drawn randomly from a database of 7 outdoor images. The images were hand-segmented to create a classification for every pixel.
Each instance is a 3x3 region. | Provide a detailed description of the following dataset: Statlog image segmentation |
sonar | The task is to train a network to discriminate between sonar signals bounced off a metal cylinder and those bounced off a roughly cylindrical rock.
The file "sonar.mines" contains 111 patterns obtained by bouncing sonar signals off a metal cylinder at various angles and under various conditions. The file "sonar.rocks" contains 97 patterns obtained from rocks under similar conditions. The transmitted sonar signal is a frequency-modulated chirp, rising in frequency. The data set contains signals obtained from a variety of different aspect angles, spanning 90 degrees for the cylinder and 180 degrees for the rock.
Each pattern is a set of 60 numbers in the range 0.0 to 1.0. Each number represents the energy within a particular frequency band, integrated over a certain period of time. The integration aperture for higher frequencies occurs later in time, since these frequencies are transmitted later during the chirp.
The label associated with each record contains the letter "R" if the object is a rock and "M" if it is a mine (metal cylinder). The numbers in the labels are in increasing order of aspect angle, but they do not encode the angle directly. | Provide a detailed description of the following dataset: sonar |
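A minimal loading sketch for the two pattern files is given below. It assumes each line of "sonar.mines" / "sonar.rocks" holds one pattern of 60 comma-separated values; the delimiter and file layout are assumptions, as the description above does not specify them.

```python
# Minimal sketch: load the sonar patterns into (features, label) pairs.
# Assumption: each line of "sonar.mines" / "sonar.rocks" holds one pattern
# of 60 comma-separated values; neither the delimiter nor the exact file
# layout is specified in the description above.
from pathlib import Path


def load_patterns(path, label):
    patterns = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        values = [float(v) for v in line.split(",")[:60]]
        patterns.append((values, label))  # "M" = mine (metal cylinder), "R" = rock
    return patterns


data = load_patterns("sonar.mines", "M") + load_patterns("sonar.rocks", "R")
print(len(data), "patterns,", len(data[0][0]), "features each")
```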
CS-Campus3D | We present CS-Campus3D, the first 3D aerial-ground cross-source dataset, consisting of point cloud data from both aerial and ground LiDAR scans. The point clouds in CS-Campus3D exhibit representation gaps as well as differences in viewpoint, point density, and noise pattern. | Provide a detailed description of the following dataset: CS-Campus3D |
Robot@Home2 | [Robot@Home2](https://www.sciencedirect.com/science/article/pii/S2352711023001863)
is an enhanced version of the Robot@Home dataset, aimed at improving usability and
functionality for developing and testing mobile robotics and computer vision
algorithms. Robot@Home2 consists of three main components. Firstly, a [relational
database](https://doi.org/10.5281/zenodo.7811795) that stores the contextual
information and data links, queryable via the Structured Query Language (SQL).
Secondly, a [Python package](https://pypi.org/project/robotathome/) for managing the
database, including downloading, querying, and interfacing functions. Finally,
learning resources in the form of [Jupyter
notebooks](https://drive.google.com/drive/folders/1ENnxbKP5MJdlGl2Q93WTbIlofuy6Icxq),
runnable locally or on the Google Colab platform, enabling users to explore the
dataset without local installations. These freely available tools are expected
to enhance the ease of exploiting the Robot@Home dataset and accelerate research
in computer vision and robotics. | Provide a detailed description of the following dataset: Robot@Home2 |
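Since the relational database is SQL-compatible, a minimal sketch such as the one below can be used to inspect it. It assumes the Zenodo download is a single SQLite file (saved here as "rh.db"), and the table name in the commented-out query is purely hypothetical.

```python
# Minimal sketch: open the Robot@Home2 relational database and list its tables.
# Assumptions: the Zenodo download is a single SQLite file, saved here as "rh.db";
# the table name used in the commented-out query is hypothetical.
import sqlite3

conn = sqlite3.connect("rh.db")
cur = conn.cursor()

# Inspect which tables are actually present before querying any of them.
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
print([row[0] for row in cur.fetchall()])

# Example query against an assumed (hypothetical) table name:
# cur.execute("SELECT * FROM sensor_observations LIMIT 5")
# print(cur.fetchall())

conn.close()
```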
Publication text: code, data, and new measures | Data for detecting novelty and its impact in scientific publications, drawn from the Microsoft Academic Graph (now OpenAlex). | Provide a detailed description of the following dataset: Publication text: code, data, and new measures |
InfraParis | InfraParis is a novel and versatile dataset supporting multiple tasks across three modalities: RGB, depth, and infrared. From the city center to the suburbs, it covers a variety of scene styles across the greater Paris area, providing rich semantic information. InfraParis contains 7301 images with bounding boxes and full semantic (19 classes) annotations.
We assess various state-of-the-art baseline techniques, encompassing models for the tasks of semantic segmentation, object detection, and depth estimation. | Provide a detailed description of the following dataset: InfraParis |
UIIS dataset | This is the first general Underwater Image Instance Segmentation (UIIS) dataset, containing 4,628 images across 7 categories with pixel-level annotations for the underwater instance segmentation task. | Provide a detailed description of the following dataset: UIIS dataset |
Nam | A holistic approach to cross-channel image noise modeling and its application to image denoising | Provide a detailed description of the following dataset: Nam |
Multi-Sensor Calibration | Two separate datasets of calibration runs in front of a calibration board:
- 4 IMUs + 3 Cams
- 4 IMUs + 4 Cams
Sensors (rate, topic, resolution where applicable):
- MicroStrain GX3-25: 500 Hz, /gx3_25/data
- MicroStrain GX3-25: 100 Hz, /gx3_35/imudata
- Xsens MTI-100: 400 Hz, /imu/data
- RealSense T265 IMU: 200 Hz, /t265/imu
- RealSense T265 Left: 30 Hz, /t265/fisheye2/image_raw, 848x800
- RealSense T265 Right: 30 Hz, /t265/fisheye2/image_raw, 848x800
- ELP Left (rolling shutter): 25 Hz, /elpsplit_sync_image_node/left/image_raw, 640x480
- ELP Right (rolling shutter): 25 Hz, /elp/split_sync_image_node/right/image_raw, 640x480 | Provide a detailed description of the following dataset: Multi-Sensor Calibration |
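The topic names suggest the runs are recorded as ROS bags; below is a minimal sketch for counting messages per topic, under the assumption that each run is distributed as a ROS1 .bag file (the packaging format and filename are assumptions, not stated above).

```python
# Minimal sketch: count messages per topic in one calibration run.
# Assumption: each run is distributed as a ROS1 .bag file (the packaging
# format is not stated above); requires a ROS1 environment with rosbag.
import rosbag

TOPICS = ["/imu/data", "/t265/imu", "/t265/fisheye2/image_raw"]

with rosbag.Bag("calibration_run.bag") as bag:  # hypothetical filename
    counts = {t: 0 for t in TOPICS}
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        counts[topic] += 1

print(counts)  # rough sanity check against the nominal rates listed above
```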
AI-GA: AI-Generated Abstracts dataset | The AI-GA (Artificial Intelligence Generated Abstracts) dataset is a collection of abstracts and titles, with half of the abstracts being AI-generated and the other half being original. This dataset is designed to be used for research and experimentation in the field of natural language processing, particularly in the context of language generation and machine learning. | Provide a detailed description of the following dataset: AI-GA: AI-Generated Abstracts dataset |
Monovab | The Facebook pages of the most popular news portals, such as Prothom Alo, BBC Bangla, BD News 24, Bangla Tribune, Kaler Kantho, and Daily Jugantor, were picked to build the dataset. Following a manual collection of posts, a total of 130 posts covering 11 news topics were obtained and converted into a CSV file. The data were collected using a self-developed scraper algorithm, and the dataset is annotated with Ekman's seven universal emotions. | Provide a detailed description of the following dataset: Monovab |
BirdSoundsDenoising: Deep Visual Audio Denoising for Bird Sounds | This is the dataset for BirdSoundsDenoising, including training, validation, and test splits. | Provide a detailed description of the following dataset: BirdSoundsDenoising: Deep Visual Audio Denoising for Bird Sounds |
SaGA | The primary data of the SaGA corpus are made up of 25 dialogs between interlocutors (50 in total), who engage in a spatial communication task combining direction-giving and sight description. Six of those dialogues, with data only from the direction giver, are available, including audio (*.wav) and video (*.mp4) data. The secondary data consists of annotations (*.eaf) of gestures and speech-gesture referents, which have been completely and systematically annotated based on an annotation grid (cf. the SaGA documentation). The corpus comprises 9,881 isolated words and 1,764 isolated gestures. The stimulus is a model of a town presented in a Virtual Reality (VR) environment. Upon finishing a "bus ride" through the VR town along five landmarks, a router explained the route as well as the wayside landmarks to an unknown and naive follower. The SaGA Corpus was curated for CLARIN as part of the Curation Project "Editing and Integration of Multimodal Resources in CLARIN-D" by the CLARIN-D Working Group 6 "Speech and Other Modalities". | Provide a detailed description of the following dataset: SaGA |
BiGe | The BiGe corpus comprises 54,360 shots of interest extracted from TED and TEDx talks. All shots are tracked with full 3D landmarks. | Provide a detailed description of the following dataset: BiGe |
OPFLearnData | The datasets result from OPFLearn.jl, a Julia package for creating AC OPF datasets. The package was developed to provide researchers with a standardized way to efficiently create AC OPF datasets that cover more of the AC OPF feasible load space than typical dataset creation methods. The OPFLearn dataset creation method uses a relaxed AC OPF formulation to reduce the volume of the unclassified input space throughout the dataset creation process.
The dataset contains load profiles and their respective optimal primal and dual solutions. Load samples are processed using AC OPF formulations from PowerModels.jl. More information on the dataset creation method can be found in our publication, "OPF-Learn: An Open-Source Framework for Creating Representative AC Optimal Power Flow Datasets" and in the package website: https://github.com/NREL/OPFLearn.jl. | Provide a detailed description of the following dataset: OPFLearnData |
LLeQA | LLeQA is a French native dataset for studying information retrieval and long-form question answering in the legal domain. It consists of a knowledge corpus of 27,941 statutory articles collected from the Belgian legislation, and 1,868 legal questions posed by Belgian citizens and labeled by experienced jurists with a comprehensive answer rooted in relevant articles from the corpus. | Provide a detailed description of the following dataset: LLeQA |
UV6K | UV6K is a high-resolution remote sensing urban vehicle segmentation dataset.
- Images: 6,313
- Vehicle instances: 245,141
- Resolution: 0.1m
- Image Size: 1024x1024 | Provide a detailed description of the following dataset: UV6K |
Clickbait PDFs | The paper presents a study of Clickbait PDFs, which are PDF documents leading to various attacks on the Web. Clickbait PDFs are different from the well-known "MalPDFs", usually found in phishing emails, as they do not contain malware.
The study leverages a dataset of PDF files we receive from two industrial collaborators, Cisco and InQuest Labs. As this is paid data that the companies retrieve as part of their business logic, we are not allowed to share it. We are also not allowed to share the data we obtain via the VirusTotal Public API.
Nonetheless, we share PDF file hashes to allow retrieving them from VirusTotal. Moreover, we share the screenshots of the first pages and the URLs extracted from the PDFs. We focus on the URLs relevant to our hypotheses (the total number of extracted URLs is around four million). In addition, we also share the language of the text in the PDFs and the search engine rankings of the PDFs distributed via SEO attacks.
Part of our experiments involves developing and training a deep learning model (based on [DeepCluster](https://openaccess.thecvf.com/content_ECCV_2018/papers/Mathilde_Caron_Deep_Clustering_for_ECCV_2018_paper.pdf)).
We created an additional [Github repository](https://github.com/emerald1010/from_attachments_to_seo) containing the scripts that can help reproduce the clustering procedure.
This data is shared via several CSV files, a folder with .png images and a .npy file containing an intermediate result of our deep learning model. Each file also has an ad-hoc description in the "Data Explorer" tab of the dataset. This [notebook](https://www.kaggle.com/code/emerald101/artifact-code-for-paper-from-attachments-to-seo) contains the documentation and the information necessary to run the experiments presented in our paper. | Provide a detailed description of the following dataset: Clickbait PDFs |
SchizzoSQUAD | The “Mental Health” forum, a forum dedicated to people suffering from schizophrenia and other mental disorders, was used. Relevant posts of active, regularly participating users were extracted, providing a new method of obtaining low-bias content without privacy issues. This corpus is then processed into a SQuAD (Stanford Question Answering Dataset) version in order to train an ML QA model.
The data is stored as a single JSON file with the following structure:
{
  "data": [
    {
      "paragraphs": [
        {
          "qas": [
            {
              "question": "What is the patient afraid of?",
              "id": 304425,
              "answers": [
                {
                  "answer_id": 305544,
                  "document_id": 501182,
                  "question_id": 304425,
                  "text": "I’m afraid of people except for my family and a few close friends",
                  "answer_start": 29769,
                  "answer_category": null
                }
              ],
              "is_impossible": false
            }
          ],
          "context": "\t\" I don’t understand why people send aid to other countries and go on missions when our own country people are suffering and have nothing, in the us at least.\"\n\t\"7-21-19 Said \\\"What is she afraid of? Afraid of family!?",
          "document_id": 501182
        }
      ]
    }
  ]
} | Provide a detailed description of the following dataset: SchizzoSQUAD |
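A minimal sketch that walks this SQuAD-style structure is shown below; the filename "schizzosquad.json" is hypothetical, and only the keys shown in the structure above are assumed to exist.

```python
# Minimal sketch: iterate question/answer pairs in the SQuAD-style JSON above.
# The filename "schizzosquad.json" is hypothetical; only keys shown in the
# structure above are assumed to exist.
import json

with open("schizzosquad.json", encoding="utf-8") as f:
    squad = json.load(f)

for item in squad["data"]:
    for paragraph in item["paragraphs"]:
        for qa in paragraph["qas"]:
            if qa["is_impossible"]:
                continue
            for answer in qa["answers"]:
                start = answer["answer_start"]
                print(qa["question"], "->", answer["text"], f"(offset {start} in context)")
```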
EPIC-STATES | EPIC-STATES builds upon the raw data in the EPIC-KITCHENS dataset and consists of 10 object state categories: open, close, in-hand, out-of-hand, whole, cut, raw, cooked, peeled, unpeeled. EPIC-STATES consists of 14,346 object bounding boxes from the EPIC-KITCHENS dataset (2018 version), each labeled with 10 binary labels corresponding to the 10 state classes. | Provide a detailed description of the following dataset: EPIC-STATES |
EPIC-ROI | EPIC-ROI builds on top of the EPIC-KITCHENS dataset, and consists of 103 diverse images with pixel-level annotations for regions where human hands frequently touch in everyday interaction. Specifically, image regions that afford any of the most frequent actions (take, open, close, press, dry, turn, peel) are considered positives. We manually watched videos from multiple participants to define a) object categories, and b) specific regions within each category where participants interacted while conducting any of the 7 selected actions. These 103 images were sampled from across 9 different kitchens (7 to 15 images, with minimal overlap, from each kitchen). EPIC-ROI is only used for evaluation, and contains 32 val images and 71 test images. Images from the same kitchen are in the same split. The Regions-of-Interaction task is to score each pixel in the image with the probability of a hand interacting with it. Performance is measured using average precision. | Provide a detailed description of the following dataset: EPIC-ROI |
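Since the task scores every pixel and is evaluated with average precision, a minimal sketch of a per-image pixel-level AP computation is given below. It assumes a binary ground-truth mask and a same-shaped prediction map per image; the actual annotation file format is not described here.

```python
# Minimal sketch: pixel-level average precision for the Regions-of-Interaction task.
# Assumptions: a binary ground-truth mask and a same-shaped score map per image;
# the actual annotation file format is not described above.
import numpy as np
from sklearn.metrics import average_precision_score


def pixel_ap(gt_mask: np.ndarray, score_map: np.ndarray) -> float:
    """Average precision over all pixels of a single image."""
    return average_precision_score(gt_mask.reshape(-1).astype(int),
                                   score_map.reshape(-1))


# Toy usage with random arrays, only to show the call pattern.
gt = np.random.rand(64, 64) > 0.9
scores = np.random.rand(64, 64)
print(pixel_ap(gt, scores))
```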
Reddit Ideology Database | Dataset of news articles posted in the r/Liberal and r/Conservative subreddits. In total, we collected a corpus of 226,010 articles, gathered to understand political expression through the news articles shared in these communities. | Provide a detailed description of the following dataset: Reddit Ideology Database |
Graph dataset MOLT-4 | Dataset introduced by Xifeng Yan et al.
in: SIGMOD '08: Proceedings of the 2008 ACM SIGMOD international conference on Management of data, June 2008, Pages 433–444, https://doi.org/10.1145/1376616.1376662
The dataset is now hosted by TUD.
The dataset consists of small molecules and their activities against leukemia tumors. | Provide a detailed description of the following dataset: Graph dataset MOLT-4 |