| dataset_name | description | prompt |
|---|---|---|
Top-N Recommendation Runs | We ran 21 recommender systems on three datasets (BeerAdvocate, LibraryThing, and MovieLens 1M). The output of these recommenders was evaluated using the rec_eval tool, and statistically significant improvements were measured using a permutation test. The output of both tools can be found in the `data` directory. | Provide a detailed description of the following dataset: Top-N Recommendation Runs |
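A minimal sketch of the significance testing mentioned in the Top-N Recommendation Runs entry above: a paired permutation test over hypothetical per-query metric arrays (this illustrates the idea only, not the actual rec_eval tooling):

```python
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_permutations=10_000, seed=0):
    """Two-sided paired permutation test on per-query metric differences."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a) - np.asarray(scores_b)
    observed = abs(diffs.mean())
    hits = 0
    for _ in range(n_permutations):
        # Randomly swapping the two systems' scores per query flips the sign of the difference
        signs = rng.choice([-1.0, 1.0], size=diffs.size)
        if abs((signs * diffs).mean()) >= observed:
            hits += 1
    return hits / n_permutations  # p-value

# Hypothetical per-query nDCG scores for two recommenders
a = [0.61, 0.42, 0.55, 0.70, 0.38]
b = [0.58, 0.40, 0.51, 0.69, 0.35]
print(paired_permutation_test(a, b))
```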
TREC Ad Hoc Retrieval Runs (2020) | TREC submissions for all ad hoc retrieval runs, covering the following tracks:
- core (2017-2018)
- deep-docs (2019-2020)
- deep-pass (2019-2020)
- web (2009-2014)
- robust (2004) | Provide a detailed description of the following dataset: TREC Ad Hoc Retrieval Runs (2020) |
ChinaOpen-1k | ChinaOpen is a new video dataset targeted at open-world multimodal learning, with raw data gathered from Bilibili, a popular Chinese video-sharing website. The dataset has a large webly annotated training set of videos (associated with user-generated titles and tags) and a smaller manually annotated test set of videos (with manually checked user titles / tags, manually written captions, and manual labels describing which visual objects / actions / scenes are shown in the visual content). | Provide a detailed description of the following dataset: ChinaOpen-1k |
Large-scale Ridesharing DARP Instances Based on Real Travel Demand | This dataset presents a set of large-scale ridesharing Dial-a-Ride Problem (DARP) instances. The instances were created as a standardized set of ridesharing DARP problems for the purpose of benchmarking and comparing different solution methods.
The instances use actual past demand and realistic travel time data from three different US cities: Chicago, New York City, and Washington, DC. The instances consist of real travel requests from the selected period, positions of vehicles with their capacities, and realistic shortest travel times between all pairs of locations in each city.
Unlike the instances commonly used in ridesharing DARP research, the presented instances use the latest demand data from different cities.
The dataset also contains the results of two baseline solution methods: the Insertion Heuristic and the optimal Vehicle-group Assignment method. | Provide a detailed description of the following dataset: Large-scale Ridesharing DARP Instances Based on Real Travel Demand |
HumanEval-ET | Extended test cases for HumanEval, along with generated code. | Provide a detailed description of the following dataset: HumanEval-ET |
MBPP-ET | Extended test cases for MBPP, along with generated code. | Provide a detailed description of the following dataset: MBPP-ET |
APPS-ET | Extended test cases for APPS, along with generated code. | Provide a detailed description of the following dataset: APPS-ET |
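A rough sketch of how extended test suites like the *-ET datasets above are typically consumed: execute the generated code against each test case in a subprocess. The assertion style and fields below are hypothetical, not the datasets' actual schema:

```python
import subprocess
import sys
import tempfile

def passes_tests(generated_code: str, test_cases: list[str], timeout: float = 5.0) -> bool:
    """Return True iff the generated solution passes every test case."""
    for test in test_cases:
        program = generated_code + "\n" + test
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(program)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            return False
        if result.returncode != 0:
            return False
    return True

# Hypothetical example with assertion-style tests
solution = "def add(a, b):\n    return a + b"
print(passes_tests(solution, ["assert add(1, 2) == 3", "assert add(-1, 1) == 0"]))
```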
XGBoost | 123 | Provide a detailed description of the following dataset: XGBoost |
Szeged Corpus | The Szeged Treebank is the largest fully manually annotated treebank of the Hungarian language. It contains 82,000 sentences, 1.2 million words and 250,000 punctuation marks. Texts were selected from six different domains, approximately 200,000 words from each. The domains are the following:
- fiction
- compositions of pupils between 14 and 16 years of age
- newspaper articles (from the newspapers Népszabadság, Népszava, Magyar Hírlap, HVG)
- texts in informatics
- legal texts
- business and financial news
The treebank exists in three versions:
- Szeged Treebank 1.0 is annotated for noun phrases and clauses;
- Szeged Treebank 2.0 contains a deep phrase-structure syntactic analysis for all sentences;
- Szeged Dependency Treebank contains dependency-style annotation of all sentences.
A morphologically reannotated version of the corpus, Szeged Corpus 2.5, has recently been released, in which participles and causative, frequentative and modal verbs are distinctively marked, and unknown or misspelled words have been corrected, along with some minor morphological modifications.
If you are interested in Szeged Corpus 2.5, please contact Veronika Vincze. | Provide a detailed description of the following dataset: Szeged Corpus |
Uncorrelated Corrupted Dataset | Uncorrelated Corrupted Dataset is an evaluation set consisting of realistic visible-infrared (V-I) corruptions that allows evaluating models' robustness to corruption. Initially proposed for multimodal person re-identification, our dataset can also be used for the evaluation of V-I cross-modal approaches. Corruptions of the visible modality are the twenty corruptions proposed by Chen et al. in the "Benchmarks for Corruption Invariant Person Re-identification" paper. Corruptions of the infrared modality are the 19 corruptions proposed in our paper, which respect the infrared modality encoding. In practice, the corruptions are applied randomly and independently to the visible and the infrared cameras, making this set better suited to a non-co-located camera setting. | Provide a detailed description of the following dataset: Uncorrelated Corrupted Dataset |
Correlated Corrupted Dataset | Correlated Corrupted Dataset is an evaluation set consisting of realistic visible-infrared (V-I) corruptions that allows evaluating models' robustness to corruption. Initially proposed for multimodal person re-identification, our dataset can also be used for the evaluation of V-I cross-modal approaches. Corruptions of the visible modality are the twenty corruptions proposed by Chen et al. in the "Benchmarks for Corruption Invariant Person Re-identification" paper. Corruptions of the infrared modality are the 19 corruptions proposed in our paper, which respect the infrared modality encoding. In practice, for co-located visible-infrared cameras, weather-related corruptions should affect both cameras, and blur-related corruptions would likely occur in both the visible and the infrared camera. This dataset tackles this aspect by considering the correlations that may occur from one modality's camera to the other. | Provide a detailed description of the following dataset: Correlated Corrupted Dataset |
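A minimal sketch of the difference between this entry and the Uncorrelated Corrupted Dataset above, assuming hypothetical lists of visible and infrared corruption transforms (the real transforms come from the papers cited in the descriptions; pairing correlated corruptions by index is an illustrative assumption):

```python
import random

def corrupt_pair(rgb_img, ir_img, rgb_corruptions, ir_corruptions, correlated, rng=random):
    """Apply corruption transforms to a visible/infrared image pair.

    correlated=False: sample corruptions independently per modality (uncorrelated set).
    correlated=True: co-located cameras, so related corruptions hit both modalities.
    """
    if correlated:
        idx = rng.randrange(min(len(rgb_corruptions), len(ir_corruptions)))
        return rgb_corruptions[idx](rgb_img), ir_corruptions[idx](ir_img)  # paired, e.g. both "fog"
    rgb_t = rng.choice(rgb_corruptions)
    ir_t = rng.choice(ir_corruptions)
    return rgb_t(rgb_img), ir_t(ir_img)

# Hypothetical identity "corruptions" just to make the sketch runnable
rgb_c = [lambda x: x] * 20  # 20 visible corruptions (Chen et al.)
ir_c = [lambda x: x] * 19   # 19 infrared corruptions
print(corrupt_pair("rgb", "ir", rgb_c, ir_c, correlated=True))
```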
FaMoS | FaMoS is a dynamic 3D head dataset from 95 subjects, each performing 28 motion sequences. The sequences comprise six prototypical expressions (i.e., Anger, Disgust, Fear, Happiness, Sadness, and Surprise), two head rotations (left/right and up/down), and diverse facial motions, including extreme and asymmetric expressions. Each sequence is recorded at 60 fps. In total, FaMoS contains around 600K 3D head meshes (i.e., ~225 frames per sequence). For each frame, registrations as FLAME meshes are publicly available. | Provide a detailed description of the following dataset: FaMoS |
Demande Dataset | **Demande Dataset** contains the features and probabilities of ten different functions. | Provide a detailed description of the following dataset: Demande Dataset |
FICLE | The FICLE dataset is a derivative of the FEVER dataset, which is a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia. These claims were then verified without knowledge of the original sentences they were derived from. Each sample in the FEVER dataset consists of a claim sentence, a context sentence extracted from a Wikipedia URL as evidence, and a type label indicating whether the claim is supported, refuted, or lacks sufficient information. | Provide a detailed description of the following dataset: FICLE |
100STYLE | Over 4 million frames of motion capture data for 100 different styles of locomotion. Can be used for animation, human motion and sequence modelling research. | Provide a detailed description of the following dataset: 100STYLE |
GPABenchmark | A corpus of GPT-generated and human-written academic abstracts with over 600k samples in Computer Science, Physics, and the Humanities and Social Sciences. | Provide a detailed description of the following dataset: GPABenchmark |
YouTube8M-MusicTextClips | The YouTube8M-MusicTextClips dataset consists of over 4k high-quality human text descriptions of music found in video clips from the YouTube8M dataset.
For each selected YouTube music video, we extracted a 10-second clip from the middle of the video for annotation. Annotators were provided with only the audio corresponding to this clip; thus, the text annotations describe the audio alone, not the visual content of the clip.
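A sketch of the clip-extraction step described above, using ffmpeg/ffprobe via subprocess on a local file (the authors' exact tooling is not specified):

```python
import subprocess

def extract_middle_clip(video_path: str, duration_s: float, out_path: str) -> None:
    """Cut an audio-only clip of length duration_s centered at the middle of the video."""
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", video_path],
        capture_output=True, text=True, check=True,
    )
    total = float(probe.stdout.strip())
    start = max(0.0, total / 2 - duration_s / 2)
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(duration_s),
         "-i", video_path, "-vn", "-acodec", "copy", out_path],
        check=True,
    )

# Hypothetical usage
# extract_middle_clip("video.mp4", 10.0, "clip.m4a")
```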
The dataset annotations are divided into train and test split files. As the dataset is meant mainly for evaluation, there are 3169 annotated clips from the test set and only 1000 annotated clips from the train set. | Provide a detailed description of the following dataset: YouTube8M-MusicTextClips |
ACCT Data Repository | This dataset is a collection of fluorescent images from mice, used to test an automatic cell counting tool that we developed. 62 images, with 2 or 3 images per field of view, are included. In brief, the dataset was derived from brain sections of a model for HIV-induced brain injury (HIVgp120tg), which expresses soluble gp120 envelope protein in astrocytes under the control of a modified GFAP promoter. The mice were in a mixed C57BL/6.129/SJL genetic background, and two genotypes of 9 month old male mice were selected: wild type controls (Resting, n = 3) and transgenic littermates (HIVgp120tg, Activated, n = 3). No randomization was performed. HIVgp120tg mice show, among other hallmarks of human HIV neuropathology, an increase in microglia numbers, which indicates activation of the cells compared to non-transgenic littermate controls.
Brain sections were obtained using a vibratome (Leica VT1000S, Leica Biosystems, Buffalo Grove, IL), with the cerebral cortex cut into 40 μm thick sagittal sections spaced 320 μm apart, medial to lateral, from brains of each genotype. Staining was performed with rabbit anti-ionized calcium-binding adaptor molecule 1 (Iba-1) IgG (1:125; Wako) with secondary antibody Fluorescein isothiocyanate (FITC). For quantification of Iba-1 stained microglia, cell bodies were counted in the cerebral cortex from three fields of view for three sections each per animal. Between 2 and 3 images were collected per field of view to capture as many cells as possible in sufficient focus for identification. Images were acquired at 10X magnification and pixel resolution 1280x1280, and cropped to a 1280x733 pixel area to exclude irregular tissue edges. For more details, please refer to [ACCT is a fast and accessible automatic cell counting tool using machine learning for 2D image segmentation](https://doi.org/10.1038/s41598-023-34943-w).
We note that this data repository links to some images gathered in the [fluocells (Fluorescent Neuronal Cells)](https://paperswithcode.com/dataset/fluocells) dataset introduced by Morelli et al. which can be found here: [https://paperswithcode.com/dataset/fluocells](https://paperswithcode.com/dataset/fluocells).
We provide a link to our automatic cell counting tool that this dataset was used for here at the following Github link: [https://github.com/tkataras/Automatic-Cell-Counting-with-TWS](https://github.com/tkataras/Automatic-Cell-Counting-with-TWS). | Provide a detailed description of the following dataset: ACCT Data Repository |
wiki | The dataset wiki consists of Wikipedia articles, where the goal is to predict the total page views of each article.
\# Nodes: 1,925,342,
\# Edges: 303,434,860,
\# Features: 600,
\# Classes: 5. | Provide a detailed description of the following dataset: wiki |
OxIOD | ## OxIOD Dataset
[Oxford Inertial Odometry Dataset](http://deepio.cs.ox.ac.uk/) [<a id="d1" href="#oxiod">1</a>] is a large set of inertial data for inertial odometry, recorded with smartphones at 100 Hz in indoor environments. The suite consists of 158 tests and covers a distance of over 42 km, with optical motion capture (OMC) ground truth available for 132 tests. The dataset does not include pure rotational or pure translational movements, which would be helpful for systematically evaluating a model's performance under different conditions; however, it covers a wide range of everyday movements.
Because the dataset was created with a different focus, some information (for example, the alignment of the coordinate frames) is not described accurately. In addition, the orientation ground truth contains frequent irregularities (e.g., jumps in orientation that are not accompanied by similar jumps in the IMU data). The dataset is available at [Link](https://forms.gle/wjE7u5AonoyyrgXJ7).
## How to use OxIOD Dataset
The dataset can be downloaded from [here](https://forms.gle/wjE7u5AonoyyrgXJ7). The dataset contains:
### 24 Handheld Sequences
Total 8821 seconds for 7193 meters.
| data1 | time (s) | distance (m) |
| :---- | :------- | :----------- |
| seq1 | 376 | 301 |
| seq2 | 234 | 177 |
| seq3 | 188 | 147 |
| seq4 | 216 | 166 |
| seq5 | 322 | 264 |
| seq6 | 325 | 274 |
| seq7 | 141 | 118 |
| total | 1802 | 1447 |
| data2 | time (s) | distance (m) |
| :---- | :------- | :------ |
| seq1 | 326 | 281 |
| seq2 | 312 | 264 |
| seq3 | 301 | 249 |
| total | 939 | 794 |
| data3 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 308 | 251 |
| seq2 | 379 | 324 |
| seq3 | 609 | 533 |
| seq4 | 538 | 467 |
| seq5 | 383 | 319 |
| total | 2217 | 1894 |
| data4 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 317 | 242 |
| seq2 | 322 | 243 |
| seq3 | 606 | 476 |
| seq4 | 438 | 359 |
| seq5 | 350 | 284 |
| total | 2033 | 1604 |
| data5 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 310 | 237 |
| seq2 | 594 | 466 |
| seq3 | 560 | 445 |
| seq4 | 366 | 306 |
| total | 1830 | 1454 |
### 11 Pocket Sequences
Total 5622 seconds for 4331 meters.
| data1 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 330 | 284 |
| seq2 | 456 | 379 |
| seq3 | 506 | 405 |
| seq4 | 491 | 387 |
| seq5 | 240 | 182 |
| total | 2023 | 1637 |
| data2 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 651 | 492 |
| seq2 | 559 | 414 |
| seq3 | 628 | 429 |
| seq4 | 668 | 494 |
| seq5 | 470 | 371 |
| seq6 | 623 | 494 |
| total | 3599 | 2694 |
### 8 Handbag Sequences
Total 4100 seconds for 3431 meters.
| data1 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 575 | 437 |
| seq2 | 570 | 467 |
| seq3 | 580 | 466 |
| seq4 | 445 | 366 |
| total | 2170 | 1736 |
| data2 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 575 | 487 |
| seq2 | 560 | 499 |
| seq3 | 425 | 381 |
| seq4 | 370 | 328 |
| total | 1930 | 1695 |
### 13 Trolley Sequences
Total 4262 seconds for 2685 meters.
| data1 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 447 | 251 |
| seq2 | 309 | 169 |
| seq3 | 359 | 209 |
| seq4 | 599 | 362 |
| seq5 | 612 | 374 |
| seq6 | 586 | 380 |
| seq7 | 274 | 174 |
| total | 3186 | 1919 |
| data2 | time (s) | distance (m) |
| :---- | :--- | :-- |
| seq1 | 156 | 106 |
| seq2 | 168 | 118 |
| seq3 | 161 | 113 |
| seq4 | 163 | 113 |
| seq5 | 217 | 158 |
| seq6 | 211 | 158 |
| total | 1076 | 766 |
### 8 Slow Walking Sequences
Total 4150 seconds for 2421 meters.
| data1 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 612 | 382 |
| seq2 | 603 | 353 |
| seq3 | 617 | 341 |
| seq4 | 594 | 323 |
| seq5 | 606 | 352 |
| seq6 | 503 | 331 |
| seq7 | 311 | 172 |
| seq8 | 304 | 167 |
| total | 4150 | 2421 |
### 7 Running Sequences
Total 3732 seconds for 4356 meters.
| data1 | time (s) | distance (m) |
| :---- | :--- | :--- |
| seq1 | 691 | 761 |
| seq2 | 623 | 719 |
| seq3 | 590 | 665 |
| seq4 | 603 | 679 |
| seq5 | 619 | 766 |
| seq6 | 303 | 373 |
| seq7 | 303 | 393 |
| total | 3732 | 4356 |
### 26 Multi Devices Sequences
Total 7144 seconds for 5350 meters.
| iPhone 5 | time (s) | distance (m) |
| :------- | :--- | :--- |
| seq1 | 178 | 150 |
| seq2 | 163 | 133 |
| seq3 | 160 | 126 |
| seq4 | 124 | 100 |
| seq5 | 174 | 139 |
| seq6 | 167 | 136 |
| seq7 | 197 | 150 |
| seq8 | 184 | 141 |
| seq9 | 184 | 142 |
| total | 1531 | 1217 |
| iPhone 6 | time (s) | distance (m) |
| :------- | :--- | :--- |
| seq1 | 180 | 165 |
| seq2 | 184 | 171 |
| seq3 | 182 | 168 |
| seq4 | 150 | 140 |
| seq5 | 183 | 162 |
| seq6 | 171 | 155 |
| seq7 | 184 | 139 |
| seq8 | 185 | 148 |
| seq9 | 173 | 133 |
| total | 1592 | 1381 |
| nexus 5 | time (s) | distance (m) |
| :------ | :--- | :--- |
| seq1 | 604 | 452 |
| seq2 | 609 | 438 |
| seq3 | 605 | 414 |
| seq4 | 609 | 403 |
| seq5 | 607 | 388 |
| seq6 | 607 | 401 |
| seq7 | 186 | 130 |
| seq8 | 194 | 127 |
| total | 4021 | 2752 |
### 35 Multi Users Sequences
Total 11030 seconds for 9465 meters.
| user 2 | time (s) | distance (m) |
| :----- | :--- | :--- |
| seq1 | 311 | 284 |
| seq2 | 358 | 313 |
| seq3 | 390 | 328 |
| seq4 | 217 | 172 |
| seq5 | 311 | 240 |
| seq6 | 256 | 193 |
| seq7 | 371 | 296 |
| seq8 | 450 | 375 |
| seq9 | 264 | 221 |
| total | 2928 | 2422 |
| user 3 | time (s) | distance (m) |
| :----- | :--- | :--- |
| seq1 | 382 | 301 |
| seq2 | 318 | 272 |
| seq3 | 340 | 295 |
| seq4 | 232 | 198 |
| seq5 | 214 | 185 |
| seq6 | 356 | 289 |
| seq7 | 258 | 203 |
| total | 2100 | 1743 |
| user 4 | time (s) | distance (m) |
| :----- | :--- | :--- |
| seq1 | 387 | 367 |
| seq2 | 329 | 307 |
| seq3 | 305 | 288 |
| seq4 | 248 | 229 |
| seq5 | 356 | 314 |
| seq6 | 293 | 272 |
| seq7 | 297 | 260 |
| seq8 | 468 | 411 |
| seq9 | 435 | 364 |
| total | 3118 | 2812 |
| user 5 | time (s) | distance (m) |
| :----- | :--- | :--- |
| seq1 | 294 | 237 |
| seq2 | 305 | 264 |
| seq3 | 253 | 211 |
| seq4 | 390 | 337 |
| seq5 | 300 | 226 |
| seq6 | 338 | 284 |
| seq7 | 168 | 154 |
| seq8 | 410 | 395 |
| seq9 | 274 | 250 |
| seq10 | 152 | 130 |
| total | 2884 | 2488 |
### 26 Large Scale Sequences
Total 4161 seconds for 3465 meters.
| floor1 | time (s) | distance (m) |
| :----- | :--- | :--- |
| seq1 | 153 | 142 |
| seq2 | 165 | 143 |
| seq3 | 158 | 142 |
| seq4 | 157 | 145 |
| seq5 | 156 | 142 |
| seq6 | 156 | 142 |
| seq7 | 161 | 144 |
| seq8 | 155 | 143 |
| seq9 | 160 | 126 |
| seq10 | 158 | 143 |
| total | 1579 | 1412 |
| floor4 | time (s) | distance (m) |
| :----- | :--- | :--- |
| seq1 | 160 | 170 |
| seq2 | 157 | 153 |
| seq3 | 162 | 153 |
| seq4 | 118 | 106 |
| seq5 | 164 | 153 |
| seq6 | 163 | 143 |
| seq7 | 169 | 141 |
| seq8 | 166 | 153 |
| seq9 | 172 | 135 |
| seq10 | 169 | 154 |
| seq11 | 166 | 152 |
| seq12 | 165 | 154 |
| seq13 | 165 | 133 |
| seq14 | 164 | 153 |
| seq15 | 163 | 153 |
| seq16 | 159 | 133 |
| total | 2582 | 2053 |
In each folder, there is a raw data subfolder and a syn data subfolder: the former contains the raw data collection, unsynchronised but with high-precision timestamps; the latter contains the synchronised data, without high-precision timestamps.
The file headers are as follows:
**vicon (vi*.csv)**
1. Time
2. Header
3. translation.x translation.y translation.z
4. rotation.x rotation.y rotation.z rotation.w
**Sensors (imu*.csv)**
1. Time
2. attitude_roll(radians) attitude_pitch(radians) attitude_yaw(radians)
3. rotation_rate_x(radians/s) rotation_rate_y(radians/s) rotation_rate_z(radians/s)
4. gravity_x(G) gravity_y(G) gravity_z(G)
5. user_acc_x(G) user_acc_y(G) user_acc_z(G)
6. magnetic_field_x(microteslas) magnetic_field_y(microteslas) magnetic_field_z(microteslas) | Provide a detailed description of the following dataset: OxIOD |
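Given the column listings above, a minimal sketch for loading one synchronised Vicon/IMU file pair from OxIOD with pandas (the paths are hypothetical, and the files are assumed to be headerless CSVs laid out exactly as documented):

```python
import pandas as pd

vicon_cols = [
    "time", "header",
    "translation.x", "translation.y", "translation.z",
    "rotation.x", "rotation.y", "rotation.z", "rotation.w",
]
imu_cols = [
    "time",
    "attitude_roll", "attitude_pitch", "attitude_yaw",
    "rotation_rate_x", "rotation_rate_y", "rotation_rate_z",
    "gravity_x", "gravity_y", "gravity_z",
    "user_acc_x", "user_acc_y", "user_acc_z",
    "magnetic_field_x", "magnetic_field_y", "magnetic_field_z",
]

# Hypothetical paths into one sequence's "syn data" subfolder
vicon = pd.read_csv("handheld/data1/syn/vi1.csv", header=None, names=vicon_cols)
imu = pd.read_csv("handheld/data1/syn/imu1.csv", header=None, names=imu_cols)
print(vicon.shape, imu.shape)
```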
RONIN | ## RoNIN
The RoNIN dataset contains over 40 hours of IMU sensor data from 100 human subjects with 3D ground-truth trajectories under natural human movements. It provides accelerometer, gyroscope, and magnetometer measurements, together with ground-truth direction and location, across 327 recorded sequences at a frequency of 200 Hz. A two-device data collection protocol was developed: a harness was used to attach one phone to the body for 3D tracking, allowing subjects to freely control the other phone, which collected the IMU data. It should be noted that the ground truth can only be obtained from the 3D-tracker phone attached to the harness; consequently, the estimated trajectory is that of the body rather than of the IMU device. The released RoNIN data comprises 42.7 hours of IMU-motion data over 276 sequences in 3 buildings, collected from 100 human subjects with three Android devices.
The dataset contains the following:
1. Data (13.81 GB)
   - seen_subjects_test_set.zip (3.15 GB)
   - train_dataset_1.zip (4.49 GB)
   - train_dataset_2.zip (3.18 GB)
   - unseen_subjects_test_set.zip (2.99 GB)
2. Pretrained_Models (57.05 MB)
   - ronin_body_heading.zip (579.52 KB)
   - ronin_lstm.zip (2.29 MB)
   - ronin_resnet.zip (48.49 MB)
   - ronin_tcn.zip (5.71 MB)
All HDF5 files are organized in the following HDF5 data format:
- **raw**:
  - **tango**: gyro, gyro_uncalib, acce, magnet, game_rv, gravity, linacce, step, tango_pose, tango_adf_pose, rv, pressure, (optional) [wifi, gps, magnetic_rv, magnet_uncalib]
  - **imu**: gyro, gyro_uncalib, acce, magnet, game_rv, gravity, linacce, step, rv, pressure, (optional) [wifi, gps, magnetic_rv, magnet_uncalib]
- **synced**: time, gyro, gyro_uncalib, acce, magnet, game_rv, rv, gravity, linacce, step
- **pose**: tango_pos, tango_ori, (optional) ekf_ori | Provide a detailed description of the following dataset: RONIN |
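A minimal sketch for reading the synced IMU streams and ground-truth pose from one RoNIN sequence with h5py, assuming the HDF5 layout listed above (the file path is hypothetical):

```python
import h5py

# Hypothetical path to one sequence's HDF5 file
with h5py.File("train_dataset_1/a000_1/data.hdf5", "r") as f:
    ts = f["synced/time"][:]      # timestamps, 200 Hz
    gyro = f["synced/gyro"][:]    # angular rate
    acce = f["synced/acce"][:]    # acceleration
    pos = f["pose/tango_pos"][:]  # ground-truth position from the Tango phone
    ori = f["pose/tango_ori"][:]  # ground-truth orientation (quaternion)
print(ts.shape, gyro.shape, acce.shape, pos.shape, ori.shape)
```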
RepoIMU | ## RepoIMU T-stick
The RepoIMU database contains two separate sets of experiments, recorded with a T-stick and with a pendulum, using a 9-axis IMU that measures acceleration, angular velocity, and magnetic field. A total of 29 trials were collected on the T-stick, each lasting approximately 90 seconds. As the name suggests, the IMU is attached to a T-shaped stick equipped with six reflective markers. Each experiment consists of slow or fast rotation around a principal sensor axis, or translation along a principal sensor axis. In this scenario, data from the Vicon Nexus OMC system and the Xsens MTi IMU are synchronized and provided at a frequency of 100 Hz. The authors clearly state that the IMU coordinate system and the ground truth are not aligned, and propose a method to compensate for one of the two required rotations based on quaternion averaging. Unfortunately, some experiments contain gyroscope clipping and ground-truth artifacts, which significantly affect the obtained errors. Therefore, careful pre-processing and removal of some trials should be considered when using this dataset to evaluate model accuracy.
## RepoIMU T-pendulum
The second part of the RepoIMU dataset contains data from a triple pendulum on which the IMUs are mounted. Measurement data is provided at 90 Hz or 166 Hz. However, the IMU data contains duplicate samples, usually the result of artificial upsampling or of transmission problems in which missed samples are replaced by duplicating the last received sample, effectively reducing the sampling rate. The effective sampling rate after discarding duplicate samples is about 25 Hz for the accelerometer and 48 Hz for the gyroscope. Due to this issue, we cannot recommend using the pendulum experiments for model training or for high-precision evaluation of inertial orientation estimation accuracy. | Provide a detailed description of the following dataset: RepoIMU |
Ekman6 | The YF-E6 emotion dataset was collected by using the six basic emotion types as keywords on social video-sharing websites, including YouTube and Flickr, leading to a total of 3,000 videos. The dataset was labeled through crowdsourcing by 10 different annotators (5 males and 5 females), whose ages ranged from 22 to 45. Annotators were given a detailed definition of each emotion before performing the task. Every video was manually labeled by all the annotators, and a video was excluded from the final dataset when over half of its annotations were inconsistent with the initial search keyword. | Provide a detailed description of the following dataset: Ekman6 |
MoB | A dataset of cartoon video clips. For each video clip, the presence or absence of each feature was marked by the annotators.
Malicious: Video content which may not be suitable for viewing by toddlers and pre-schoolers includes a set of clearly defined, trivial and intuitive features as well as some complex and subtle audio and video features. The former includes higher-level forms of obscenity and violence, e.g. nudity, gore, etc., while the latter includes elements such as fast repetitive motion, loud music, disgusting and scary characters, smashing people or things, forms of aggression, screaming or shouting, gunshots, and explosions.
Benign: Educational videos and videos of nursery rhymes are usually considered to be appropriate for toddlers and pre-schoolers; in fact, some experts recommend letting kids watch them for a limited number of hours. Benign videos are characterized by a slower tempo, softer music or sound effects, moderate-paced motion, and soft-toned conversation. Most importantly, benign video content should not contain any indicators of malicious content, as discussed earlier. | Provide a detailed description of the following dataset: MoB |
ISOD | ISOD contains **2,000 manually labelled RGB-D images** from **20 diverse sites**, each featuring **over 30 types of small objects** randomly placed amidst the items already present in the scenes. These objects, **typically ≤3cm in height**, include LEGO blocks, rags, slippers, gloves, shoes, cables, crayons, chalk, glasses, smartphones (and their cases), fake banana peels, fake pet waste, and piles of toilet paper, among others. These items were chosen because they either threaten the safe operation of indoor mobile robots or create messes if run over.
In addition to **RGB** images, ISOD also includes corresponding **depth** images and **IMU** readings. A reference image of each floor type was also recorded using a smartphone.
This dataset was used as a real-world validation dataset in the original work to explore the performance of the model beyond synthetic data, specifically focusing on the potential application of real-time robot navigation. | Provide a detailed description of the following dataset: ISOD |
S2RDA | Our proposed Synthetic-to-Real benchmark for more practical visual DA (termed S2RDA) includes two challenging transfer tasks, S2RDA-49 and S2RDA-MS-39. In each task, source/synthetic-domain samples are synthesized by rendering 3D models from ShapeNet. The 3D models used are in the same label space as the target/real domain, and each class has 12K rendered RGB images. The real domain of S2RDA-49 comprises 60,535 images of 49 classes, collected from the ImageNet validation set, ObjectNet, the VisDA-2017 validation set, and the web. For S2RDA-MS-39, the real domain collects 41,735 natural images exclusive to 39 classes from MetaShift, which contain complex and distinct contexts, e.g., object presence (co-occurrence of different objects), general contexts (indoor or outdoor), and object attributes (color or shape), leading to a much harder task. Compared to VisDA-2017, our S2RDA contains more categories, more realistically synthesized source-domain data coming for free, and more complicated target-domain data collected from diverse real-world sources, setting a more practical, challenging benchmark for future DA research. | Provide a detailed description of the following dataset: S2RDA |
CIFAR-10C | A common-corruptions benchmark for CIFAR-10, in which test images are perturbed with a set of common corruption types at multiple severity levels. | Provide a detailed description of the following dataset: CIFAR-10C |
2DeteCT | Maximilian B. Kiss, Sophia B. Coban, K. Joost Batenburg, Tristan van Leeuwen, and Felix Lucka "2DeteCT - A large 2D expandable, trainable, experimental Computed Tomography dataset for machine learning", [Sci Data 10, 576 (2023)](https://doi.org/10.1038/s41597-023-02484-6) or [arXiv:2306.05907 (2023)](https://arxiv.org/abs/2306.05907)
Abstract:
"Recent research in computational imaging largely focuses on developing machine learning (ML) techniques for image reconstruction, which requires large-scale training datasets consisting of measurement data and ground-truth images. However, suitable experimental datasets for X-ray Computed Tomography (CT) are scarce, and methods are often developed and evaluated only on simulated data. We fill this gap by providing the community with a versatile, open 2D fan-beam CT dataset suitable for developing ML techniques for a range of image reconstruction tasks. To acquire it, we designed a sophisticated, semi-automatic scan procedure that utilizes a highly-flexible laboratory X-ray CT setup. A diverse mix of samples with high natural variability in shape and density was scanned slice-by-slice (5000 slices in total) with high angular and spatial resolution and three different beam characteristics: A high-fidelity, a low-dose and a beam-hardening-inflicted mode. In addition, 750 out-of-distribution slices were scanned with sample and beam variations to accommodate robustness and segmentation tasks. We provide raw projection data, reference reconstructions and segmentations based on an open-source data processing pipeline."
The data collection has been acquired using a highly flexible, programmable and custom-built X-ray CT scanner, the FleX-ray scanner, developed by [TESCAN-XRE NV](https://info.tescan.com/micro-ct), located in the FleX-ray Lab at the [Centrum Wiskunde & Informatica (CWI)](https://www.cwi.nl/en/) in Amsterdam, Netherlands. It consists of a cone-beam microfocus X-ray point source (limited to 90 kV and 90 W) that projects polychromatic X-rays onto a 14-bit CMOS (complementary metal-oxide semiconductor) flat panel detector with CsI(Tl) scintillator (Dexella 1512NDT) and 1536-by-1944 pixels, each. To create a 2D dataset, a fan-beam geometry was mimicked by only reading out the central row of the detector. Between source and detector there is a rotation stage, upon which samples can be mounted. The machine components (i.e., the source, the detector panel, and the rotation stage) are mounted on translation belts that allow the moving of the components independently from one another.
Please refer to the paper for all further technical details.
The complete data collection can be found via the following links: [1-1,000](https://doi.org/10.5281/zenodo.8014757), [1,001-2,000](https://doi.org/10.5281/zenodo.8014765), [2,001-3,000](https://doi.org/10.5281/zenodo.8014786), [3,001-4,000](https://doi.org/10.5281/zenodo.8014828), [4,001-5,000](https://doi.org/10.5281/zenodo.8014873), [5,521-6,370](https://doi.org/10.5281/zenodo.8014906).
Each slice folder ‘slice00001 - slice05000’ and ‘slice05521 - slice06370’ contains three folders, one for each mode: ‘mode1’, ‘mode2’, ‘mode3’. In each of these folders there are the sinogram, the dark-field, and the two flat-fields for the raw data archives, or just the reconstructions (and, for mode2, the additional reference segmentation).
The corresponding reference reconstructions and segmentations can be found via the following links: [1-1,000](https://doi.org/10.5281/zenodo.8017582), [1,001-2,000](https://doi.org/10.5281/zenodo.8017603), [2,001-3,000](https://doi.org/10.5281/zenodo.8017611), [3,001-4,000](https://doi.org/10.5281/zenodo.8017617), [4,001-5,000](https://doi.org/10.5281/zenodo.8017623), [5,521-6,370](https://doi.org/10.5281/zenodo.8017652).
The corresponding Python scripts for loading, pre-processing, reconstructing and segmenting the projection data in the way described in the paper can be found on [github](https://github.com/mbkiss/2DeteCTcodes). A machine-readable file with the used scanning parameters and instrument data for each acquisition mode as well as a script loading it can be found on the GitHub repository as well.
Note: It is advisable to use the graphical user interface when decompressing the .zip archives. If you experience a zipbomb error when unzipping a file on a Linux system, rerun the command with the UNZIP_DISABLE_ZIPBOMB_DETECTION=TRUE environment variable set, e.g., by adding `export UNZIP_DISABLE_ZIPBOMB_DETECTION=TRUE` to your `.bashrc`.
For more information or guidance in using the data collection, please get in touch with
Maximilian.Kiss [at] cwi.nl
Felix.Lucka [at] cwi.nl | Provide a detailed description of the following dataset: 2DeteCT |
Uniref90 | UniRef90 is generated by clustering UniRef100 seed sequences.
UniRef100 sequences shorter than 11 residues are excluded from UniRef90 clusters. Each UniRef90 cluster has one representative sequence from the UniRef100 database.
UniRef90 cluster titles and identifiers are derived from the representative UniRef100 entry. The UniRef90 identifier is generated by replacing the "UniRef100_" prefix of the representative with "UniRef90_", e.g. "UniRef90_P99999". | Provide a detailed description of the following dataset: Uniref90 |
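The identifier derivation described above is a simple prefix replacement; a one-line sketch:

```python
def uniref90_id(uniref100_rep_id: str) -> str:
    """Derive a UniRef90 cluster identifier from its representative UniRef100 entry."""
    return uniref100_rep_id.replace("UniRef100_", "UniRef90_", 1)

print(uniref90_id("UniRef100_P99999"))  # -> "UniRef90_P99999"
```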
maadaa-FaEco Dataset | The dataset is organized into 24 typical scenarios, showcasing the richness of real-world environments, conditions, and objects. It is carefully curated to reflect diverse and realistic situations, allowing models to be tested and refined under a wide range of conditions.
This dataset contains 33 meticulously fine-annotated sub-datasets. Each is labeled and annotated with a high degree of precision. These datasets offer a vast range of potential use cases, from object detection and segmentation to pose estimation and beyond.
This dataset provides a unique context in various applications including personalized recommendations, virtual fittings, beauty AI, and product recognition. This extensive, versatile dataset is a significant resource that we believe will inspire a myriad of innovative solutions in the field. | Provide a detailed description of the following dataset: maadaa-FaEco Dataset |
MVP-24K | Multi-grained Vehicle Parsing (MVP) is a large-scale dataset for semantic analysis of vehicles in the wild, which has several featured properties.
1. The MVP contains 24,000 vehicle images captured in real-world surveillance scenes, which makes it more scalable for real applications.
2. For different requirements, we annotate the vehicle images with pixel-level part masks in two granularities, i.e., coarse annotations of ten classes and fine annotations of 59 classes. The former can be applied to object-level applications such as vehicle Re-Id, fine-grained classification, and pose estimation, while the latter can be explored for high-quality image generation and content manipulation.
3. The images reflect the complexity of real surveillance scenes, such as different viewpoints, illumination conditions, backgrounds, etc. In addition, the vehicles span diverse countries, types, brands, models, and colors, which makes the dataset more diverse and challenging. | Provide a detailed description of the following dataset: MVP-24K |
Belfort | # The Belfort dataset
This dataset includes minutes of the Belfort municipal council drawn up between 1790 and 1946. Documents include deliberations, lists of councillors, convocations, and agendas. It includes 24,105 text-line images that were automatically detected from the pages. Up to 4 transcriptions are available for each line image: two from humans and two from automatic models.
Files are organized in three folders: `Images`, `Transcriptions`, and `Partitions`.
## Images
The dataset includes 24,105 text-line images that were automatically detected using a generic [Doc-UFCN](https://pypi.org/project/doc-ufcn/) model, and resized to a fixed height of 128 pixels.
## Transcriptions
Up to 4 transcriptions are available for each image, as summarized in the following table:
| Folder | N transcriptions | Description | Comments |
|:----------: |-----------------: |-----------------------------|-----------------------------------------------------------------------------------|
| callico_1/ | 24,105 | Human annotation n°1 | All lines have at least one human annotation |
| callico_2/ | 8,878 | Human annotation n°2 | Only 33% of lines have two different human annotations |
| dan/ | 24,102 | DAN automatic model | 3 images have empty transcriptions (no text was predicted by the model) |
| pylaia/ | 23,536 | PyLaia automatic model | 569 images have empty transcriptions (no text was predicted by the model) |
| rasa/ | 23,287 | RASA aggregation algorithm | 818 images have empty transcriptions |
| rover/ | 24,104 | ROVER aggregation algorithm | 1 image has an empty transcription |
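A minimal sketch for pairing line images with one transcription source, assuming transcription text files in `Transcriptions/<source>/` share base names with images in `Images` (this naming convention is an assumption, not documented above):

```python
from pathlib import Path

def load_pairs(root: str, source: str = "callico_1"):
    """Yield (image_path, transcription_text) pairs for one transcription source."""
    base = Path(root)
    for txt in sorted((base / "Transcriptions" / source).glob("*.txt")):
        img = base / "Images" / (txt.stem + ".png")  # assumed image extension
        if img.exists():
            yield img, txt.read_text(encoding="utf-8").strip()

# Hypothetical usage
# for img, text in load_pairs("Belfort"):
#     print(img.name, text)
```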
## Data partition
We provide two distinct splits, both of them containing 19,013 training images, 2,262 validation images and 2,830 test images.
* The *Agreement-based split* ensures the reliability of the test set:
* The test set includes lines with perfect agreement between human annotators (Character Error Rate = 0%);
* The validation set includes lines with good agreement between human annotators (0% < Character Error Rate < 5%);
* The training set includes all the other lines.
* The *Random split* is randomized.
## Evaluation
Evaluation results in the paper are computed by comparing predictions to human annotations.
Automatic and aggregated transcriptions are only used during model training. | Provide a detailed description of the following dataset: Belfort |
AMIGOS | We present a database for research on affect, personality traits and mood by means of neuro-physiological signals. Unlike other databases, we elicited affect using both short and long videos in two configurations, one with individual viewers and one with groups of viewers. The database allows the multimodal study of the affective responses of individuals in relation to their personality and mood, and the analysis of how these responses are affected by (i) the individual/group configuration, and (ii) the duration of the videos (short vs long). The data was collected in two experimental settings. In the first one, 40 participants watched 16 short emotional videos while they were alone. In the second one, the same participants watched 4 long videos, some of them alone and the rest in groups. In both settings, the participants' signals, namely Electroencephalogram (EEG), Electrocardiogram (ECG), and Galvanic Skin Response (GSR), were recorded using wearable sensors. Frontal, full-body and depth videos were also recorded. Participants were profiled for personality using the Big Five personality traits, and for mood with the baseline Positive Affect and Negative Affect Schedules. Participants' emotions were annotated with both self-assessment of affective levels (valence, arousal, control, familiarity, like/dislike, and selection of basic emotion) felt during the first experiment, and external assessment of participants' levels of valence and arousal for both experiments. We present a detailed correlation analysis that includes correlations between self-assessment and external assessment of affect, between valence and arousal elicited by short and long videos on individuals and groups, as well as between personality, mood, social context, and affect dimensions. We also present baseline methods and results for single-trial classification of valence and arousal, and for single-trial classification of personality traits, mood and social context (alone vs group), using EEG, GSR and ECG and fusion of modalities for both experiments. | Provide a detailed description of the following dataset: AMIGOS |
MagicBrush | **MagicBrush** is a manually annotated, instruction-guided image editing dataset covering diverse scenarios: single-turn, multi-turn, mask-provided, and mask-free editing. MagicBrush comprises 10K (source image, instruction, target image) triples, which is sufficient to train large-scale image editing models. | Provide a detailed description of the following dataset: MagicBrush |
OCTScenes | **OCTScenes** contains 5000 tabletop scenes with a total of 15 everyday objects. Each scene is captured in 60 frames covering a 360-degree perspective.
In the OCTScenes-A dataset, scenes 0-3099 (without segmentation annotation) are for training, while scenes 3100-3199 (with segmentation annotation) can be used for testing. In the OCTScenes-B dataset, scenes 0-4899 (without segmentation annotation) are for training, while scenes 4900-4999 (with segmentation annotation) can be used for testing. | Provide a detailed description of the following dataset: OCTScenes |
Im-Promptu Visual Analogy Suite | **Im-Promptu Visual Analogy Suite** is a meta-learning framework. Each visual analogy suite is divided into two broad kinds of analogies, depending on the underlying relation: Primitive and Composite tasks.
(1) Primitive: A single image attribute is modified at a time. For example, the color of the object is changed from red to blue.
(2) Composite: Multiple image attributes are modified at a time. For example, the color of the object is changed from red to blue and the scene orientation is changed from -15 degrees to +15 degrees. | Provide a detailed description of the following dataset: Im-Promptu Visual Analogy Suite |
MSVD-Indonesian | MSVD-Indonesian is derived from the MSVD dataset and was obtained with the help of a machine translation service. This dataset can be used for multimodal video-text tasks, including text-to-video retrieval, video-to-text retrieval, and video captioning. Like the original English dataset, the MSVD-Indonesian dataset contains about 80k video-text pairs. | Provide a detailed description of the following dataset: MSVD-Indonesian |
DACCORD | DACCORD is a new dataset dedicated to the task of automatically detecting contradictions between sentences in French.
The dataset is currently composed of 1034 sentence pairs. It covers the themes of Russia’s invasion of Ukraine in 2022, the Covid-19 pandemic, and the climate crisis. | Provide a detailed description of the following dataset: DACCORD |
GenPlot | This dataset contains the pre-generated dataset referenced in the GenPlot Paper.
## 5 Chart Types
There are 5 subfolders in this dataset containing 100_000 plots. Each subfolder corresponds to one chart type:
l = line
v = vertical_bar
h = horizontal_bar
s = scatter
d = dot
## Metadata
The metadata is stored in CSV format, with the text in a format similar to [Google's DePlot Model](https://huggingface.co/google/deplot).
Ex.
`2005 | 4344.2 <0x0A> 2006 | 4549.7 <0x0A> 2007 | 4667.2 <0x0A> 2008 | 4579.1 <0x0A> 2009 | 4579.1 <0x0A> 2010 | 4520.4 <0x0A> 2011 | 4373.5 <0x0A> 2012 | 4079.9 <0x0A> 2013 | 3844.9 <0x0A> 2014 | 3492.6 <0x0A> 2015 | 3610.0 <0x0A> 2016 | 3991.8 <0x0A> 2017 | 4314.8 <0x0A> 2018 | 3962.4 <0x0A> 2019 | 4256.1` | Provide a detailed description of the following dataset: GenPlot |
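A small sketch for parsing the DePlot-style metadata string above into (label, value) pairs, where `<0x0A>` separates rows and `|` separates columns:

```python
def parse_deplot_table(text: str) -> list[tuple[str, float]]:
    """Parse a DePlot-style linearized table into (label, value) rows."""
    rows = []
    for row in text.split("<0x0A>"):
        label, value = [cell.strip() for cell in row.split("|")]
        rows.append((label, float(value)))
    return rows

meta = "2005 | 4344.2 <0x0A> 2006 | 4549.7 <0x0A> 2007 | 4667.2"
print(parse_deplot_table(meta))  # [('2005', 4344.2), ('2006', 4549.7), ('2007', 4667.2)]
```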
COVID-19 Vaccine Stance Dataset | The data contains CSV files with anonymized user names, tweet texts, vaccine stance, cumulative score for the vaccine stance, location, and topic information. The file named `all_predicted_cumulative_stance.csv` contains all the tweets, scores, and classifications. We have broken this file into two separate files named `demotivate_cumulative_stance.csv` and `motivate_cumulative_stance.csv`, containing the `demotivating` and `motivating` tweets, respectively. We used these two files in the visualization tool presented at: https://ashiqur-rony.github.io/visualize-covid-stance/ | Provide a detailed description of the following dataset: COVID-19 Vaccine Stance Dataset |
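A minimal sketch of working with the files described in the COVID-19 Vaccine Stance Dataset entry above, using pandas; the stance column name is an assumption, not documented here:

```python
import pandas as pd

# Load the combined file described above
df = pd.read_csv("all_predicted_cumulative_stance.csv")

# "stance" is a hypothetical column name for the per-tweet vaccine stance
motivate = df[df["stance"] == "motivating"]
demotivate = df[df["stance"] == "demotivating"]

# Recreate the two derived files mentioned in the description
motivate.to_csv("motivate_cumulative_stance.csv", index=False)
demotivate.to_csv("demotivate_cumulative_stance.csv", index=False)
```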
Labelling for Explosions and Road accidents from UCF-Crime | The whole UCF-Crime dataset consists of real-world 240 × 320 RGB videos with 13 realistic anomaly types, such as explosion, road accident, burglary, etc., as well as normal examples. Change point detection (CPD) specifically requires a change in data distribution. We suppose that explosions and road accidents correspond to such a scenario, while most other anomaly types correspond to point anomalies. For example, the data obviously comes from a normal regime before an explosion; after it, we can see fire and smoke, which last for some time. Thus, the first moment when an explosion appears is a change point. Together with a volunteer, the authors carefully labelled the chosen anomaly types, and their opinions were averaged. We provide the obtained markup so that other researchers can use it to validate their CPD algorithms for video. | Provide a detailed description of the following dataset: Labelling for Explosions and Road accidents from UCF-Crime |
Waymo Open Motion Dataset | As autonomous driving systems mature, motion forecasting has received increasing attention as a critical requirement for planning. Of particular importance are interactive situations such as merges, unprotected turns, etc., where predicting individual object motion is not sufficient. Joint predictions of multiple objects are required for effective route planning. There has been a critical need for high-quality motion data that is rich in both interactions and annotation to develop motion planning models. In this work, we introduce the most diverse interactive motion dataset to our knowledge, and provide specific labels for interacting objects suitable for developing joint prediction models. With over 100,000 scenes, each 20 seconds long at 10 Hz, our new dataset contains more than 570 hours of unique data over 1750 km of roadways. It was collected by mining for interesting interactions between vehicles, pedestrians, and cyclists across six cities within the United States. We use a high-accuracy 3D auto-labeling system to generate high quality 3D bounding boxes for each road agent, and provide corresponding high definition 3D maps for each scene. Furthermore, we introduce a new set of metrics that provides a comprehensive evaluation of both single agent and joint agent interaction motion forecasting models. Finally, we provide strong baseline models for individual-agent prediction and joint-prediction. We hope that this new large-scale interactive motion dataset will provide new opportunities for advancing motion forecasting models. | Provide a detailed description of the following dataset: Waymo Open Motion Dataset |
CASIE | ### Annotation corpus of cybersecurity event in news articles
The corpus contains 1000 annotation and source files. Our cybersecurity annotation focused on five event types: Databreach, Phishing, Ransom, Discover, and Patch.
More details of the annotation and CASIE's system are in the papers. If you use our data, please cite one of the following papers.
Taneeya Satyapanich, Francis Ferraro, and Tim Finin, "CASIE: Extracting Cybersecurity Event Information from Text", InProceedings, Proceeding of the 34th AAAI Conference on Artificial Intelligence, February 2020.
Taneeya Satyapanich, Tim Finin, and Francis Ferraro, "Extracting Rich Semantic Information about Cybersecurity Events", InProceedings, Second Workshop on Big Data for CyberSecurity, held in conjunction with the IEEE Int. Conf. on Big Data, December 2019.
Any problems found, please contact taneeya1@umbc.edu. | Provide a detailed description of the following dataset: CASIE |
Face dataset by Generated Photos | The free Face dataset made for students and teachers. It contains 10,000 photos with an equal distribution of race and gender parameters, along with metadata and facial landmarks. Free to use for research with the citation "Photos by Generated.Photos".
Photos
All the photos are 100% synthetic. Based on model-released photos. Royalty-free. Can be used for any research purpose except for the ones violating the law. Worldwide. No time limitations.
- Quantity: 10,000
- Quality: 256x256 px
- Diversity: ethnicity, gender
Metadata
The JSON files contain the metadata for each image in a machine-readable format, including:
(1) FaceLandmarks: mouth, right_eyebrow, left_eyebrow, right_eye, left_eye, nose, jaw.
(2) FaceAttributes: headPose, gender, makeup, emotion, facialHair, hair (hairColor, hairLength, bald), occlusion, ethnicity, eye_color, smile, age | Provide a detailed description of the following dataset: Face dataset by Generated Photos |
Regex101 Regular expressions | This is a dataset of regular expressions collected from regex101.com. It is not made directly available, but can be [crawled from regex101](https://github.com/dataunitylab/semantic-regex). | Provide a detailed description of the following dataset: Regex101 Regular expressions |
Boson-nighttime | In order to collect thermal aerial data, we used FLIR's [Boson thermal imager](https://www.flir.es/products/boson/) (8.7 mm focal length, 640p resolution, and 50° horizontal field of view). The collected images are nadir at approx. 1 m/px spatial resolution. We performed six flights from 9:00 PM to 4:00 AM and label this dataset as **Boson-nighttime**, accordingly. To create a single map, we first run a structure-from-motion (SfM) algorithm to reconstruct the thermal map from multiple views. Subsequently, orthorectification is performed by aligning the photometric satellite maps with thermal maps at the same spatial resolution. The ground area covered by Boson-nighttime measures 33 km² in total. The most prevalent map feature is the desert, with small portions of farms, roads, and buildings.
The [Bing satellite map](https://www.bing.com/maps/aerial) is cropped in the corresponding area as our satellite reference map. We tile the thermal map into 512x512 px thermal image crops with a stride of 35 px. Each thermal image crop pairs with the corresponding satellite image crop. Areas covered by three flights of Boson-nighttime are used for training and validation. The remaining areas, covered by the other three flights, are used for testing. The train/validation/test splits for Boson-nighttime are 10256/13011/26568 pairs of satellite and thermal image crops, respectively. | Provide a detailed description of the following dataset: Boson-nighttime |
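A short sketch of the tiling step described in the Boson-nighttime entry above (crop size and stride from the text), assuming the thermal map is available as a 2D numpy array:

```python
import numpy as np

def tile_map(thermal_map: np.ndarray, size: int = 512, stride: int = 35):
    """Tile a 2D map into size x size crops with the given stride."""
    h, w = thermal_map.shape[:2]
    crops = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            crops.append(thermal_map[y:y + size, x:x + size])
    return crops

# Toy example: a 600x600 map yields 3 x 3 = 9 crops
print(len(tile_map(np.zeros((600, 600)))))
```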
CompMix | **CompMix** is a crowdsourced QA benchmark which naturally demands the integration of a mixture of input sources. CompMix has a total of 9,410 questions, and features several complex intents like joins and temporal conditions. | Provide a detailed description of the following dataset: CompMix |
ToolQA | **ToolQA** is a question answering benchmark for Large Language Models (LLMs) which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. The development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. | Provide a detailed description of the following dataset: ToolQA |
DISCO-10M | DISCO-10M is a novel and extensive music dataset that surpasses the largest previously available music dataset by an order of magnitude. | Provide a detailed description of the following dataset: DISCO-10M |
MI-Motion | **Multi-Person Interaction Motion (MI-Motion) Dataset** includes skeleton sequences of multiple individuals collected by motion capture systems and refined and synthesized using a game engine. The dataset contains 167k frames of interacting people's skeleton poses and is categorized into 5 different activity scenes. | Provide a detailed description of the following dataset: MI-Motion |
SMART-101 | Recent times have witnessed an increasing number of applications of deep neural networks towards solving tasks that require superior cognitive abilities, e.g., playing Go, generating art, ChatGPT, etc. Such dramatic progress raises the question: how generalizable are neural networks in solving problems that demand broad skills? To answer this question, we propose SMART: a Simple Multimodal Algorithmic Reasoning Task (and the associated SMART-101 dataset) for evaluating the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed specifically for children of younger age (6--8). Our dataset consists of 101 unique puzzles; each puzzle comprises a picture and a question, and their solution needs a mix of several elementary skills, including pattern recognition, algebra, and spatial reasoning, among others. To train deep neural networks, we programmatically augment each puzzle to 2,000 new instances, each varying in appearance, associated natural language question, and solution. To foster research and make progress in the quest for artificial general intelligence, we are publicly releasing our SMART-101 dataset, consisting of the full set of programmatically generated instances of 101 puzzles and their solutions.
The dataset was introduced in our paper "Are Deep Neural Networks SMARTer than Second Graders?" by Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A. Smith, and Joshua B. Tenenbaum, CVPR 2023. | Provide a detailed description of the following dataset: SMART-101 |
DeepGraviLens | DeepGraviLens is a dataset of simulated gravitational lenses consisting of images associated with brightness-variation time series. Both non-transient and transient phenomena (supernova explosions) are simulated. | Provide a detailed description of the following dataset: DeepGraviLens |
Blood Cell Detection Dataset | ## Overview
This is a dataset of blood cells photos.
There are 364 images across three classes: `WBC` (white blood cells), `RBC` (red blood cells), and `Platelets`. There are 4,888 labels across the 3 classes (and 0 null examples).
The raw images are available, or (to save space) you can grab the 500x500 export.
## Use Cases
This is a small-scale object detection dataset, commonly used to assess model performance. It's a first example of medical imaging capabilities. This dataset is preprocessed mainly for YOLOv5 applications.
## Using this Dataset
I'm releasing the data as public domain. Feel free to use it for any purpose. This dataset is already split into training, testing, and validation sets (70% training, 20% testing, and 10% validation). The training, testing, and validation folders are each further divided into `images` and `labels`.
`images` folder: contains images of blood cells.
`labels` folder: contains the labelling of blood cells across three classes: `WBC` (white blood cells), `RBC` (red blood cells), and `Platelets`.
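A small sketch for reading image/label pairs under this layout, assuming YOLO-format label files (`class cx cy w h`, normalized) whose base names match the image files:

```python
from pathlib import Path

def load_yolo_pairs(split_dir: str):
    """Yield (image_path, list_of_boxes) pairs; boxes are (class_id, cx, cy, w, h)."""
    split = Path(split_dir)
    for img in sorted((split / "images").glob("*.jpg")):  # assumed image extension
        label_file = split / "labels" / (img.stem + ".txt")
        boxes = []
        if label_file.exists():
            for line in label_file.read_text().splitlines():
                cls, cx, cy, w, h = line.split()
                boxes.append((int(cls), float(cx), float(cy), float(w), float(h)))
        yield img, boxes

# Hypothetical usage
# for img, boxes in load_yolo_pairs("train"):
#     print(img.name, len(boxes))
```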
Except for GitHub, this dataset is also published on [Kaggle](https://www.kaggle.com/datasets/adhoppin/blood-cell-detection-datatset). | Provide a detailed description of the following dataset: Blood Cell Detection Dataset |
Yelp Review Polarity | The Yelp Reviews Polarity dataset is obtained from the Yelp Dataset Challenge in 2015 (1,569,264 samples that have review text).
The polarity label is constructed by considering stars 1 and 2 negative, and 3 and 4 positive.
The polarity dataset has 280,000 training samples and 19,000 test samples in each polarity. | Provide a detailed description of the following dataset: Yelp Review Polarity |
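The label construction described in the Yelp Review Polarity entry above is a simple star-to-polarity mapping; a sketch with pandas (the `stars` column is hypothetical):

```python
import pandas as pd

def polarity(stars: int):
    """Stars 1-2 -> negative, stars 3-4 -> positive, per the construction above."""
    if stars in (1, 2):
        return "negative"
    if stars in (3, 4):
        return "positive"
    return None  # other star values are left out

reviews = pd.DataFrame({"stars": [1, 2, 3, 4], "text": ["bad", "meh", "good", "great"]})
reviews["polarity"] = reviews["stars"].map(polarity)
print(reviews)
```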
L3Cube-MahaCorpus | **L3Cube-MahaCorpus** is a Marathi monolingual data set scraped from different internet sources. We expand the existing Marathi monolingual corpus with 24.8M sentences and 289M tokens. We also present MahaBERT, MahaAlBERT, and MahaRoBerta, all BERT-based masked language models, and MahaFT, fastText word embeddings, all trained on the full Marathi corpus with 752M tokens. | Provide a detailed description of the following dataset: L3Cube-MahaCorpus |
PTVD | **PTVD** is a plot-oriented multimodal dataset in the TV domain. It is also the first non-English dataset of its kind. Additionally, PTVD contains more than 26 million bullet screen comments (BSCs), powering large-scale pre-training. | Provide a detailed description of the following dataset: PTVD |
FunQA | **FunQA** is a challenging video question answering (QA) dataset specifically designed to evaluate and enhance the depth of video reasoning based on counter-intuitive and fun videos. Unlike most video QA benchmarks which focus on less surprising contexts, e.g., cooking or instructional videos, FunQA covers three previously unexplored types of surprising videos: 1) HumorQA, 2) CreativeQA, and 3) MagicQA. For each subset, we establish rigorous QA tasks designed to assess the model's capability in counter-intuitive timestamp localization, detailed video description, and reasoning around counter-intuitiveness. In total, the FunQA benchmark consists of 312K free-text QA pairs derived from 4.3K video clips, spanning a total of 24 video hours. Extensive experiments with existing VideoQA models reveal significant performance gaps for the FunQA videos across spatial-temporal reasoning, visual-centered reasoning, and free-text generation. | Provide a detailed description of the following dataset: FunQA |
UNIPD-BPE | The University of Padova Body Pose Estimation dataset (UNIPD-BPE) is an extensive dataset for multi-sensor body pose estimation containing both single-person and multi-person sequences with up to 4 interacting people
A network with 5 Microsoft Azure Kinect RGB-D cameras is exploited to record synchronized high-definition RGB and depth data of the scene from multiple viewpoints, as well as to estimate the subjects’ poses using the Azure Kinect Body Tracking SDK.
Simultaneously, full-body Xsens MVN Awinda inertial suits allow obtaining accurate poses and anatomical joint angles, while also providing raw data from the 17 IMUs required by each suit.
All the cameras and inertial suits are hardware synchronized, while the relative poses of each camera with respect to the inertial reference frame are calibrated before each sequence to ensure maximum overlap of the two sensing systems' outputs.
The setup allowed simultaneous recording of synchronized 3D poses of the persons in the scene, both via Xsens' inverse kinematics algorithm (inertial motion capture) and via the Azure Kinect Body Tracking SDK (markerless motion capture).
The additional raw data (RGB, depth, camera network configuration) allow the user to assess the performance of any custom markerless motion capture algorithm (based on RGB, depth, or both).
Further analyses can be performed by varying the number of cameras being used and/or their resolution and frame rate.
Moreover, raw angular velocities, linear accelerations, magnetic fields, and orientations from each IMU allow the development and testing of multimodal BPE approaches focused on merging visual and inertial data.
Finally, the precise body dimensions of each subject are provided.
They include body height, weight, and segment lengths measured before the beginning of a recording session.
They were used to scale the Xsens biomechanical model, and also constitute a ground truth for assessing the markerless BPE accuracy on estimating each subject’s body dimensions.
The recorded sequences include 15 participants performing a set of 12 ADLs (e.g., walking, sitting, and jogging).
The actions were chosen to present different challenges to BPE algorithms, including different movement speeds, self-occlusions, and complex body poses.
Moreover, multi-person sequences, with up to 4 people performing a set of 7 different actions, are provided.
Such sequences offer challenging scenarios where multiple self-occluded persons move and interact in a restricted space.
They allow assessing the accuracy of multi-person tracking algorithms, focused on maintaining frame-by-frame consistent IDs of each detected person.
A total of 13.3h of RGB, depth, and markerless BPE data are present in the dataset, corresponding to over 1,400,000 frames obtained from a calibrated network with 5 RGB-D cameras.
The inertial suits, on the other hand, allowed recording 3h of inertial motion capture data, corresponding to a total of over 600,000 frames recorded by each of the 17 IMUs used by every suit. | Provide a detailed description of the following dataset: UNIPD-BPE |
EgoISM-HOI | EgoISM-HOI is a new multimodal dataset composed of synthetic and real images of egocentric human-object interactions in an industrial environment with rich annotations of hands and objects. EgoISM-HOI contains a total of 39,304 RGB images, 23,356 depth maps and instance segmentation masks, 59,860 hand annotations, 237,985 object instances across 19 object categories and 35,416 egocentric human-object interactions. | Provide a detailed description of the following dataset: EgoISM-HOI |
LGP | Generated for further pre-training of pre-trained models such as BERT, RoBERTa, ALBERT, and DeBERTa, in order to obtain stronger logical reasoning ability. | Provide a detailed description of the following dataset: LGP |
GUE | A collection of $28$ datasets across $7$ tasks constructed for genome language model evaluation. The seven tasks are: promoter prediction, core promoter prediction, splice site prediction, covid variant classification, epigenetic marks prediction, and transcription factor binding site prediction on human and on mouse. | Provide a detailed description of the following dataset: GUE |
WDC Block | WDC Block is a benchmark for comparing the performance of blocking methods that are used as part of entity resolution pipelines.
Entity resolution aims to identify records in two datasets (A and B) that describe the same real-world entity. Since comparing all record pairs between two datasets can be computationally expensive, entity resolution is approached in two steps: blocking and matching. Blocking applies a computationally cheap method to remove non-matching record pairs and produces a smaller set of candidate record pairs, reducing the workload of the matcher. During matching, a more expensive pair-wise matcher produces a final set of matching record pairs.
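As a toy illustration of the blocking step (token-based blocking over made-up product titles; this is not the benchmark's own method):
```python
from collections import defaultdict
from itertools import product

# Records from two datasets; only pairs sharing a title token become candidates.
records_a = {1: "apple iphone 12 64gb", 2: "samsung galaxy s21"}
records_b = {10: "iphone 12 black 64gb", 20: "galaxy s21 ultra"}

index = defaultdict(lambda: (set(), set()))
for rid, title in records_a.items():
    for tok in title.split():
        index[tok][0].add(rid)
for rid, title in records_b.items():
    for tok in title.split():
        index[tok][1].add(rid)

candidates = set()
for a_ids, b_ids in index.values():
    candidates.update(product(a_ids, b_ids))
print(sorted(candidates))  # far fewer pairs than the full Cartesian product AxB
```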
Existing benchmark datasets for blocking and matching are rather small with respect to the Cartesian product AxB for comparing all records and with respect to the vocabulary size. If blockers are evaluated only on these small datasets, effects resulting from a high number of records or from a large vocabulary size (a large number of unique tokens that need to be indexed) may be missed. The Web Data Commons Block (WDC-Block) is a new blocking benchmark that provides much larger datasets and thus requires blockers that address these scalability challenges. WDC Block features a maximal Cartesian product of 200 billion pairs of product offers which were extracted from 3,259 e-shops. Additionally, we provide three development sets with different sizes (~1K pairs, ~5K pairs & ~20K pairs) to experiment with different amounts of training data for the blockers. | Provide a detailed description of the following dataset: WDC Block |
3D-Speaker | **3D-Speaker** is a large-scale speech corpus designed to facilitate the research of speech representation disentanglement. 3D-Speaker contains over 10,000 speakers, each of whom is simultaneously recorded by multiple Devices located at different Distances, and some speakers speak multiple Dialects. The controlled combinations of multi-dimensional audio data yield a matrix of a diverse blend of speech representation entanglement, thereby motivating intriguing methods to untangle them. | Provide a detailed description of the following dataset: 3D-Speaker |
ShuttleSet22 | **ShuttleSet22** is a badminton singles dataset collected from high-ranking matches in 2022. ShuttleSet22 consists of 30,172 strokes in 2,888 rallies in the training set, 1,400 strokes in 450 rallies in the validation set, and 2,040 strokes in 654 rallies in the testing set, with detailed stroke-level metadata within each rally. | Provide a detailed description of the following dataset: ShuttleSet22 |
YouTube-ASL | **YouTube-ASL** is a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube. With ~1000 hours of videos and >2500 unique signers, YouTube-ASL is ~3x as large and has ~10x as many unique signers as the largest prior ASL dataset. | Provide a detailed description of the following dataset: YouTube-ASL |
RoseBlooming-Dataset | The RoseBlooming dataset is a stage-specific flower dataset for detection. The dataset consists of overhead images of two rose cultivars and was filmed over a period of months.
The dataset has 519 images, and most of the images contain several bounding boxes; in total, the dataset contains over 7,000 bounding boxes. The developmental stages of flowering branches were visually classified and annotated into two stages: rose_small and rose_large. For rose variation, the dataset includes 2 rose cultivars (‘Samourai 08’ and ‘Blossom Pink’). The dataset contains images taken under various weather conditions. | Provide a detailed description of the following dataset: RoseBlooming-Dataset |
N-back-tasks-for-ChatGPT | **Dataset Introduction.** We create this dataset to test the working memory capacity of language models. We choose the N-back task because it is widely used in cognitive science as a measure of working memory capacity. To create the N-back task dataset, we generated 30 blocks of trials for $N = \{1, 2, 3\}$, respectively. Each block contains 30 trials, including 10 match trials and 20 nonmatch trials. The dataset for each block is stored in a text file. The first line in the text file contains the letter presented on every trial. The second line is the condition corresponding to every letter in the first line ('m': this is a match trial; '-': this is a nonmatch trial). We have created many versions of the N-back task, including verbal ones and spatial ones.
**Data Resources.** The dataset we created can be accessed at [https://github.com/Daniel-Gong/ChatGPT_WM/tree/main/datasets](https://github.com/Daniel-Gong/ChatGPT_WM/tree/main/datasets).
**Data Pre-Processing.** None.
**Prompt Example.** Here we only focus on the base version of verbal N-back tasks. We use the following format of prompts for $N = \{1, 2, 3\}$:
```
User:
Instruction: as a language model, you are asked to perform a 1-back task. A letter will be presented on every trial. Your task is to respond with 'm' whenever the letter presented is the same as the previous letter, and '-' whenever the letter presented is different from the previous letter. A strict rule is that you must not output anything other than 'm' or '-'. Now begins the task.
User:
{letter}
Model:
{-}(because this is the first letter)
User:
{letter}
Model:
{m/-}
...
```
```
User:
Instruction: as a language model, you are asked to perform a 2-back task. A letter will be presented on every trial. Your task is to respond with 'm' whenever the letter presented is the same as the letter two trials ago, and '-' whenever the letter presented is different from the letter two trials ago. A strict rule is that you must not output anything other than 'm' or '-'. Now begins the task.
User:
{letter}
Model:
{-}(because this is the first letter)
User:
{letter}
Model:
{m/-}
...
```
```
User:
Instruction: as a language model, you are asked to perform a 3-back task. A letter will be presented on every trial. Your task is to respond with 'm' whenever the letter presented is the same as the letter three trials ago, and '-' whenever the letter presented is different from the letter three trials ago. A strict rule is that you must not output anything other than 'm' or '-'. Now begins the task.
User:
{letter}
Model:
{-}(because this is the first letter)
User:
{letter}
Model:
{m/-}
...
```
**Metrics.** We use exact match of the extraction results to calculate the hit rate, false alarm rate, and accuracy. $d'$ (detection sensitivity) is calculated as the $z$ score of the hit rate minus the $z$ score of the false alarm rate. Where the hit rate or false alarm rate is exactly 0 or 1, it is adjusted by 0.01 to keep the $z$ score finite. A sketch of this computation is shown below.
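A minimal sketch of the $d'$ computation (the function name and example numbers are illustrative; `scipy.stats.norm.ppf` gives the $z$ score):
```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float, eps: float = 0.01) -> float:
    """Detection sensitivity: z(hit rate) minus z(false alarm rate)."""
    # Nudge rates of exactly 0 or 1 by eps so the z score stays finite.
    hit_rate = min(max(hit_rate, eps), 1 - eps)
    fa_rate = min(max(fa_rate, eps), 1 - eps)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 8 hits out of 10 match trials, 2 false alarms out of 20 nonmatch trials
print(d_prime(8 / 10, 2 / 20))  # ~2.12
```
Here, the hit rate is computed over the 10 match trials and the false alarm rate over the 20 nonmatch trials of each block. | Provide a detailed description of the following dataset: N-back-tasks-for-ChatGPT |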
GlobalOpinionQA | **GlobalOpinionQA** consists of questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries. It contains a subset of survey questions about global issues and opinions adapted from the World Values Survey and Pew Global Attitudes Survey. | Provide a detailed description of the following dataset: GlobalOpinionQA |
GEM HOUSE OPENDATA | Filip Milojkovic, August 13, 2021, "GEM HOUSE openData: German Electricity consumption in Many HOUSEholds over three years 2018-2020 (Fresh Energy)", IEEE Dataport, doi: https://dx.doi.org/10.21227/4821-vf03. | Provide a detailed description of the following dataset: GEM HOUSE OPENDATA |
UTRSet-Real | The **UTRSet-Real** dataset is a comprehensive, manually annotated dataset specifically curated for **Printed Urdu OCR** research. It contains over **11,000** printed text line images, each of which has been meticulously annotated. One of the standout features of this dataset is its remarkable diversity, which includes variations in fonts, text sizes, colours, orientations, lighting conditions, noises, styles, and backgrounds. This diversity closely mirrors real-world scenarios, making the dataset highly suitable for training and evaluating models that aim to excel in real-world Urdu text recognition tasks.
The availability of the UTRSet-Real dataset addresses the scarcity of comprehensive real-world printed Urdu OCR datasets. By providing researchers with a valuable resource for developing and benchmarking Urdu OCR models, this dataset promotes standardized evaluation and reproducibility and fosters advancements in the field of Urdu OCR. Further, to complement the UTRSet-Real for training purposes, we also present [**UTRSet-Synth**](https://paperswithcode.com/dataset/utrset-synth), a high-quality synthetic dataset closely resembling real-world representations of Urdu text. For more information and details about the [UTRSet-Real](https://paperswithcode.com/dataset/utrset-real) & [UTRSet-Synth](https://paperswithcode.com/dataset/utrset-synth) datasets, please refer to the paper ["UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"](https://arxiv.org/abs/2306.15782) | Provide a detailed description of the following dataset: UTRSet-Real |
UTRSet-Synth | The **UTRSet-Synth** dataset is introduced as a complementary training resource to the [**UTRSet-Real** Dataset](https://paperswithcode.com/dataset/utrset-real), specifically designed to enhance the effectiveness of Urdu OCR models. It is a high-quality synthetic dataset comprising 20,000 lines that closely resemble real-world representations of Urdu text.
To generate the dataset, a custom-designed synthetic data generation module was employed, offering precise control over variations in crucial factors such as font, text size, colour, resolution, orientation, noise, style, and background. Moreover, the UTRSet-Synth dataset tackles the limitations observed in existing datasets. It addresses the challenge of standardizing fonts by incorporating over 130 diverse Urdu fonts, which were thoroughly refined to ensure consistent rendering schemes. It overcomes the scarcity of Arabic words, numerals, and Urdu digits by incorporating a significant number of samples representing these elements. Additionally, the dataset is enriched by randomly selecting words from a vocabulary of 100,000 words during the text generation process. As a result, UTRSet-Synth contains a total of 28,187 unique words, with an average word length of 7 characters.
The availability of the UTRSet-Synth dataset, a synthetic dataset that closely emulates real-world variations, addresses the scarcity of comprehensive real-world printed Urdu OCR datasets. By providing researchers with a valuable resource for developing and benchmarking Urdu OCR models, this dataset promotes standardized evaluation and reproducibility, and fosters advancements in the field of Urdu OCR. For more information and details about the [UTRSet-Real](https://paperswithcode.com/dataset/utrset-real) & [UTRSet-Synth](https://paperswithcode.com/dataset/utrset-synth) datasets, please refer to the paper ["UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"](https://arxiv.org/abs/2306.15782) | Provide a detailed description of the following dataset: UTRSet-Synth |
FLIP | FLIP includes several benchmark datasets that contain a variety of protein sequences, each with a real-valued label indicating its "fitness" (how well the protein performs some particular function). The goal is to predict the fitness of a given protein sequence using the sequence. Different representations of protein sequences (e.g. learned embeddings from large language models) may prove helpful here.
Some of the benchmark datasets (thermostability) contain a highly diverse set of sequences from many different protein families. Others (AAV, GB1) contain all sequences that are mutants of a single parent sequence. Each benchmark dataset features multiple "splits" -- different ways of train-test splitting the data to assess how well a model might generalize given limited information. The AAV benchmark, for example, features the "mutant vs designed" split in which a model is trained on randomly generated mutants and asked to predict the fitness of designed sequences, and the "seven vs many" split in which a model is trained on sequences with seven mutations and asked to make predictions for sequences with a different number of mutations. | Provide a detailed description of the following dataset: FLIP |
UrduDoc | The **UrduDoc Dataset** is a benchmark dataset for Urdu text line detection in scanned documents. It is created as a byproduct of the **UTRSet-Real** dataset generation process. Comprising 478 diverse images collected from various sources such as books, documents, manuscripts, and newspapers, it offers a valuable resource for research in Urdu document analysis. It includes 358 pages for training and 120 pages for validation, featuring a wide range of styles, scales, and lighting conditions. It serves as a benchmark for evaluating printed Urdu text detection models, and the benchmark results of state-of-the-art models are provided. The ContourNet model demonstrates the best performance in terms of h-mean.
The UrduDoc dataset is the first of its kind for printed Urdu text line detection and will advance research in the field. It will be made publicly available for non-commercial, academic, and research purposes upon request and execution of a no-cost license agreement. To request the dataset and for more information and details about the [UrduDoc ](https://paperswithcode.com/dataset/urdudoc), [UTRSet-Real](https://paperswithcode.com/dataset/utrset-real) & [UTRSet-Synth](https://paperswithcode.com/dataset/utrset-synth) datasets, please refer to the [Project Website](https://abdur75648.github.io/UTRNet/) of our paper ["UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"](https://arxiv.org/abs/2306.15782) | Provide a detailed description of the following dataset: UrduDoc |
FLIP -- AAV, Designed vs mutant | FLIP includes several benchmark datasets that contain a variety of protein sequences, each with a real-valued label indicating its "fitness" (how well the protein performs some particular function). The goal is to predict the fitness of a given protein sequence using the sequence. Different representations of protein sequences (e.g. learned embeddings from large language models) may prove helpful here.
This sub-dataset (AAV) is a set of 201,426 training sequences and 82,583 test sequences in which the goal is to predict the fitness of mutants of the capsid protein from the adeno-associated virus (AAV). The training set proteins were designed, while the test set proteins are random mutants. The absolute value of the fitness is not important; its ranking (relative value) is, since protein designers would like to be able to pick a sequence with high fitness relative to those in the training set. Performance is therefore usually assessed using Spearman's r correlation coefficient, as in the sketch below.
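A minimal evaluation sketch (the fitness values are made up for illustration):
```python
from scipy.stats import spearmanr

# Hypothetical true and predicted fitness values for five test sequences;
# only the ranking matters, not the absolute scale.
y_true = [0.1, 0.9, 0.4, 0.7, 0.2]
y_pred = [0.05, 1.2, 0.5, 0.9, 0.3]  # different scale, identical ranking

rho, _ = spearmanr(y_pred, y_true)
print(f"Spearman's r: {rho:.3f}")  # 1.000, since the two rankings agree exactly
```
Because Spearman's r compares ranks, a model can score well even if its predictions are on a completely different scale from the measured fitness. | Provide a detailed description of the following dataset: FLIP -- AAV, Designed vs mutant |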
Hurricane | A new spatio-temporal benchmark dataset (Hurricane), suited for forecasting during extreme events and anomalies. The data come from the Florida Department of Revenue, which provides the monthly sales revenue (2003-2020) of the tourism industry for all 67 counties of Florida, a state prone to annual hurricanes. Furthermore, we aligned and joined the raw time series with the history of hurricane categories over time for each county. More precisely, the hurricane category indicates the maximum sustained wind speed, which can result in catastrophic damage (Oceanic 2022). | Provide a detailed description of the following dataset: Hurricane |
Finance | State-level data for the USA. It captures the changes in the number of employees, based on one million employees active in the US during the COVID-19 pandemic, and is gathered from Homebase (Bartik et al. 2020). We further enriched the data with state-level policies as an indication of extreme events (e.g., a state's business closure order). | Provide a detailed description of the following dataset: Finance |
Climabench | The topic of Climate Change (CC) has received limited attention in NLP despite its real world urgency. Activists and policy-makers need NLP tools in order to effectively process the vast and rapidly growing textual data produced on CC. Their utility, however, primarily depends on whether the current state-of-the-art models can generalize across various tasks in the CC domain. In order to address this gap, we introduce Climate Change Benchmark (ClimaBench), a benchmark collection of existing disparate datasets for evaluating model performance across a diverse set of CC NLU tasks systematically. Further, we enhance the benchmark by releasing two large-scale labelled text classification and question-answering datasets curated from publicly available environmental disclosures. Lastly, we provide an analysis of several generic and CC-oriented models answering whether fine-tuning on domain text offers any improvements across these tasks. We hope this work provides a standard assessment tool for research on CC text data. | Provide a detailed description of the following dataset: Climabench |
S1SLC_CVDL | ABSTRACT
Development of Complex-Valued (CV) deep learning architectures has enabled us to exploit the amplitude and phase components of CV Synthetic Aperture Radar (SAR) data. However, most of the available annotated SAR datasets provide only the amplitude information (detected-only SAR data) and disregard the phase information. The lack of high-quality, large-scale annotated CV-SAR datasets is a significant challenge for developing CV deep learning algorithms in remote sensing. In order to tackle this problem, a large-scale semantically annotated CV-SAR dataset is developed using the Single Look Complex (SLC) StripMap (SM) Sentinel-1 (S1) SAR data in two polarization channels (HH and HV) for Complex-Valued Deep Learning applications (S1SLC_CVDL). The S1SLC_CVDL dataset comprises 276,571 CV-SAR patches (100×100 pixels), derived from three scenes acquired over Chicago and Houston in the United States, and Sao Paulo in Brazil, in May 2021. These three scenes were selected to cover different land covers, including various vegetation covers, constructed areas, and water bodies. The CV-SAR patches in this dataset are semantically annotated in 7 different classes, including, Agricultural fields (AG), Forest and Woodlands (FR), High Density Urban Areas (HD), High Rise Buildings (HR), Low Density Urban Areas (LD), Industrial Regions (IR), and Water Regions (WR). Refer to the cited articles for more information about the dataset and the selected S1 scenes.
Overall, the S1SLC_CVDL dataset provides semantically annotated CV-SAR data which can serve as a valuable resource for researchers and practitioners in the field of CV deep architecture developments for remote sensing applications.
Instructions:
The S1SLC_CVDL dataset comprises 276,571 patches (100×100 pixels) of Single Look Complex (SLC) StripMap (SM) Sentinel-1 (S1) CV-SAR data, derived from three scenes acquired over Chicago and Houston in the United States, and Sao Paulo in Brazil, in May 2021. Refer to the cited articles for more information about the dataset and the selected S1 scenes.
The S1SLC_CVDL.zip file includes three subfolders (one for each S1 scene: Chicago, Houston, and Sao Paulo) containing the patches in two polarization channels (HH and HV) and the semantic labels of the corresponding patches. The data and label files are in .npy format and can be loaded into a Python environment using the `numpy.load('path to the file')` function.
The semantic labels are provided in numeric format as follows:
1. Agricultural fields (AG)
2. Forest and Woodlands (FR)
3. High Density Urban Areas (HD)
4. High Rise Buildings (HR)
5. Low Density Urban Areas (LD)
6. Industrial Regions (IR)
7. Water Regions (WR)
For example, if the ith element of the label file is “1”, the ith element in the corresponding data files for the HH and HV polarization channels belongs to the Agricultural fields (AG) class.
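A minimal loading sketch (file names are illustrative, and the 1-7 numbering is assumed to follow the order of the class list above):
```python
import numpy as np

hh = np.load("Chicago/data_HH.npy")      # complex-valued patches, HH channel
hv = np.load("Chicago/data_HV.npy")      # complex-valued patches, HV channel
labels = np.load("Chicago/labels.npy")   # numeric class labels

# Assumed mapping: classes numbered in the order listed above.
class_names = {1: "AG", 2: "FR", 3: "HD", 4: "HR", 5: "LD", 6: "IR", 7: "WR"}
i = 0
print(hh[i].shape, class_names[int(labels[i])])
```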
References
Please cite the following articles if you find the dataset useful:
R. M. Asiyabi, M. Datcu, A. Anghel, H. Nies, "Complex-Valued End-to-End Deep Network with Coherency Preservation for Complex-Valued SAR Data Reconstruction and Classification," in IEEE Transactions on Geoscience and Remote Sensing, 2023.
R. M. Asiyabi and M. Datcu, "Earth Observation Semantic Data Mining: Latent Dirichlet Allocation-Based Approach," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 2607-2620, 2022, doi: 10.1109/JSTARS.2022.3159277.
Funding Agency:
European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie
Grant Number:
860370 | Provide a detailed description of the following dataset: S1SLC_CVDL |
Drunkard's Dataset | Estimating camera motion in deformable scenes poses a complex and open research challenge. Most existing non-rigid structure-from-motion techniques assume that static scene parts are observed alongside the deforming parts in order to establish an anchoring reference. However, this assumption does not hold in certain relevant application cases such as endoscopies. To tackle this issue with a common benchmark, we introduce the Drunkard's Dataset, a challenging collection of synthetic data targeting visual navigation and reconstruction in deformable environments. This dataset is the first large set of exploratory camera trajectories with ground truth inside 3D scenes where every surface exhibits non-rigid deformations over time. Simulations in realistic 3D buildings let us obtain a vast amount of data and ground truth labels, including camera poses, RGB images and depth, optical flow and normal maps at high resolution and quality. | Provide a detailed description of the following dataset: Drunkard's Dataset |
BEDLAM | **BEDLAM** is a large-scale synthetic video dataset designed to train and test algorithms on the task of 3D human pose and shape estimation (HPS). It contains diverse body shapes, skin tones, and motions. The clothing is realistically simulated on the moving bodies using commercial clothing physics simulation. | Provide a detailed description of the following dataset: BEDLAM |
OASST1 | **license**:
apache-2.0
**tags**:
human-feedback
**size_categories**:
100K<n<1M
**pretty_name**:
OpenAssistant Conversations
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in `ready_for_export` state without spam and deleted messages including message labels.
The oasst_ready-trees file is usually sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
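A minimal sketch of this reconstruction (assuming the flat messages have been loaded into a list of dicts, e.g. from the jsonl files):
```python
from collections import defaultdict

def build_trees(messages):
    """Group flat oasst messages into trees via message_id/parent_id."""
    children = defaultdict(list)
    roots = []
    for m in messages:
        if m.get("parent_id") is None:
            roots.append(m)  # initial prompts have no parent
        else:
            children[m["parent_id"]].append(m)

    def attach(node):
        node["replies"] = [attach(c) for c in children[node["message_id"]]]
        return node

    return [attach(r) for r in roots]
```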
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai) | Provide a detailed description of the following dataset: OASST1 |
NBMOD | ## Introduction
NBMOD is a dataset created for researching the task of specific object grasp detection by robots in noisy environments. The dataset comprises three subsets: Simple background Single-object Subset (SSS), Noisy background Single-object Subset (NSS), and Multi-Object grasp detection Subset (MOS). The SSS subset contains 13,500 images, the NSS subset contains 13,000 images, and the MOS subset contains 5,000 images.
## What makes NBMOD different?
Unlike the renowned Cornell dataset, the backgrounds in NBMOD are no longer simple whiteboards. The NSS and MOS subsets comprise a substantial number of images with noise, where the noise corresponds to interfering objects unrelated to the target objects for grasp detection. Moreover, in the MOS subset, each image contains multiple target objects for grasp detection, which closely resembles real-world working environments. | Provide a detailed description of the following dataset: NBMOD |
GUG | See the article for details. | Provide a detailed description of the following dataset: GUG |
VFD-2000 | **VFD-2000** is a video fight detection dataset containing more than 2,000 videos, with YouTube as the data source. Specific scenarios are searched using "fight" as a search keyword, for example, "street fight", "beach fight", and "violence in the restaurant". 200 videos under 20 different scenes are collected. | Provide a detailed description of the following dataset: VFD-2000 |
V-LoL-Trains | Despite the successes of recent developments in visual AI, various shortcomings remain: from missing exact logical reasoning, to limited abstract generalization abilities, to difficulty understanding complex and noisy scenes. Unfortunately, existing benchmarks were not designed to capture more than a few of these aspects. Whereas deep learning datasets focus on visually complex data but simple visual reasoning tasks, inductive logic datasets involve complex logical learning tasks but lack the visual component. To address this, we propose the visual logical learning dataset, V-LoL, that seamlessly combines visual and logical challenges. Notably, we introduce the first instantiation of V-LoL, V-LoL-Train: a visual rendition of a classic benchmark in symbolic AI, the Michalski train problem. By incorporating intricate visual scenes and flexible logical reasoning tasks within a versatile framework, V-LoL-Train provides a platform for investigating a wide range of visual logical learning challenges. | Provide a detailed description of the following dataset: V-LoL-Trains |
R&D Datasets for solving event combinatorics in all hadronic top quark pair events at the LHC | Used in the development of Topographs: Topological Reconstruction of Particle Physics Processes using Graph Neural Networks
The datasets contain 5.8M $t\bar{t}$ events in the all-hadronic decay channel, with jets matched to the truth partons in the top quark decays. | Provide a detailed description of the following dataset: R&D Datasets for solving event combinatorics in all hadronic top quark pair events at the LHC |
DiaSafety | **DiaSafety** is a comprehensive dialogue safety dataset. It consists of 11K contextual dialogues under 7 unsafe subaspects in chitchat. | Provide a detailed description of the following dataset: DiaSafety |
TomoSAM | A dataset made of 3D image data and their embeddings to test TomoSAM. | Provide a detailed description of the following dataset: TomoSAM |
Dissonance Twitter Dataset | **Dissonance Twitter Dataset** is a dataset collected by annotating tweets for dissonance. | Provide a detailed description of the following dataset: Dissonance Twitter Dataset |
Simple Shapes Dataset | The Simple Shapes Dataset consists of 32x32-pixel images of shapes with multiple attributes (size, location, rotation, color). Each image is also paired with its ground-truth information (attributes) and a natural language description (English) of the image.
The dataset is composed of:
- a train set of 500,000 samples,
- a val set and a test set of 1,000 samples each.
It also contains pre-computed 12-dimensional visual features (from a VAE) and pre-saved BERT features of the text descriptions.
Link to dataset: https://zenodo.org/record/8112838 | Provide a detailed description of the following dataset: Simple Shapes Dataset |
3D-POP | The dataset is designed specifically to solve a range of computer vision problems (2D-3D tracking, posture) faced by biologists while designing behavior studies with animals.
Typically, datasets for animal-specific vision tasks are created using open-source video material. This might be effective as an initial start, but such methods are not deployment-ready for the behavior community. Therefore, we designed a semi-automated method for biologists to create well-curated datasets at a large scale for the ML and vision community.
3D-POP is the first dataset with 3D ground truth for multi-animal, multi-view tracking problems.
**Highlight**: The dataset is captured with the intention of using it for various vision problems and at different levels of complexity (number of cameras, number of individuals).
Video explanation: [Link to YouTube video](https://www.youtube.com/watch?v=uGMsJ0qQZrA)
Video teaser: [Link to YouTube video](https://www.youtube.com/watch?v=er4u0WpRJeQ)
## Dataset Features:
### Marker-based videos:
- Over 6 hours of annotations of 18 individuals (groups of 1, 2, 5, 10).
- Bounding box
- Trajectories (2D and 3D)
- Posture (2D and 3D) with 9 key points
- Identities
- Total of 57 sequences (4K) with 4 views.
- Dataset customization (users can modify the dataset and add new key points)
### Markerless:
- Over 1 hour of videos of 18 individuals in groups of 1, 2, 5, 11. The birds have no markers. This data is provided for test cases and for unsupervised approaches.
## Problems:
### 2D domain:
- Position, Posture of birds (different group sizes n = 1, 2, 5, 10) with Single/Multiview.
- Tracking with single - multiview
### 3D domain:
- Position, Posture of birds (different group sizes n = 1, 2, 5, 10) with Single/Multiview.
- Tracking with single - multiview
### Fine-grained recognition:
- Identity tracking with ground truth.
### Unsupervised learning:
- 2D or 3D posture problems
## Idea:
The dataset is created with a motion capture system, using its 6-DOF tracking ability. The assumption, validated experimentally, is that the head and body act as rigid bodies when birds walk and forage. Therefore, we obtain the 3D positions of key points by tracking the head/body orientation. | Provide a detailed description of the following dataset: 3D-POP |
UT-Zappos50K | UT Zappos50K (UT-Zap50K) is a large shoe dataset consisting of 50,025 catalog images collected from Zappos.com. The images are divided into 4 major categories — shoes, sandals, slippers, and boots — followed by functional types and individual brands. The shoes are centered on a white background and pictured in the same orientation for convenient analysis.
This dataset is created in the context of an online shopping task, where users pay special attention to fine-grained visual differences. For instance, it is more likely that a shopper is deciding between two pairs of similar men's running shoes instead of between a woman's high heel and a man's slipper. GIST and LAB color features are provided. In addition, each image has 8 associated meta-data labels (gender, materials, etc.) that are used to filter the shoes on Zappos.com.
We introduced this dataset in the context of a pairwise comparison task, where the goal is to predict which of two images more strongly exhibits a visual attribute. When given a novel image pair, we want to answer the question, “Does Image A contain more or less of an attribute than Image B?” Both training and evaluation are performed using pairwise labels.
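A minimal sketch of how such pairwise evaluation might be scored (the scoring function and labels are illustrative, not the dataset's official protocol):
```python
def pairwise_accuracy(scores_a, scores_b, labels):
    """Fraction of pairs where the predicted ordering matches the label.

    labels[i] is 1 if image A shows more of the attribute than image B, else 0.
    """
    correct = sum(
        int((sa > sb) == bool(y))
        for sa, sb, y in zip(scores_a, scores_b, labels)
    )
    return correct / len(labels)

# Toy example with made-up attribute scores
print(pairwise_accuracy([0.9, 0.2, 0.7], [0.4, 0.6, 0.1], [1, 0, 1]))  # 1.0
```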
However, the usefulness of this dataset extends beyond the comparison task that we’ve demonstrated. The meta-data labels and the large size of the dataset makes it suitable for other tasks as well, such as:
- category/brand classification
- fine-grained attribute learning with rationales
- gender-specific style matching
- zero-shot learning | Provide a detailed description of the following dataset: UT-Zappos50K |
C-GQA | We propose a split built on top of the Stanford GQA dataset, originally proposed for VQA, and name it the Compositional GQA (C-GQA) dataset (see the supplementary material for details). C-GQA contains over 9.5k compositional labels, making it the most extensive dataset for CZSL. With cleaner labels and a larger label space, our hope is that this dataset will inspire further research on the topic. | Provide a detailed description of the following dataset: C-GQA |
Wind Speed Forecasting | Air pollution management through wind speed forecasting: the time series exhibits a daily cyclical behavior and a long-term seasonality. | Provide a detailed description of the following dataset: Wind Speed Forecasting |
Random Hierarchy Model | Artificial hierarchical datasets to study how neural networks learn hierarchical tasks. See papers for details. | Provide a detailed description of the following dataset: Random Hierarchy Model |
LSMI | # Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm under Mixed Illumination (ICCV 2021)
<!-- ABOUT THE PROJECT -->
## Change Log
**LSMI Dataset Version : 1.1**
1.0 : LSMI dataset released. (Aug 05, 2021)
1.1 : Add option for saving sub-pair images for 3-illuminant scene (ex. _1,_12,_13) & saving subtracted image (ex. _2,_3,_23) (Feb 20, 2022)
## About
[[Paper]](https://dykim.me/publication/lsmi/LSMI.pdf)
[[Project site]](https://dykim.me/publication/lsmi/)
[[Download Dataset]](https://forms.gle/EjBAUzrrsWBxGX4o7)
[[Video]](https://youtu.be/i8OAdYryig0)
This is an official repository of **"Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm under Mixed Illumination"**, which is accepted as a poster in ICCV 2021.
This repository provides
1. Preprocessing code of "Large Scale Multi Illuminant (LSMI) Dataset"
2. Code of Pixel-level illumination inference U-Net
3. Pre-trained model parameter for testing U-Net
If you use our code or dataset, please cite our paper:
```
@inproceedings{kim2021large,
title={Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm Under Mixed Illumination},
author={Kim, Dongyoung and Kim, Jinwoo and Nam, Seonghyeon and Lee, Dongwoo and Lee, Yeonkyung and Kang, Nahyup and Lee, Hyong-Euk and Yoo, ByungIn and Han, Jae-Joon and Kim, Seon Joo},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2410--2419},
year={2021}
}
```
## Requirements
Our running environment is as follows:
- Python version 3.8.3
- Pytorch version 1.7.0
- CUDA version 11.2
We provide a docker image, which supports all extra requirements (e.g., dcraw, rawpy, tensorboard, ...), including the versions of Python, PyTorch, and CUDA specified above.
You can download the docker image [here](https://hub.docker.com/r/dongyoung95/torch1.7_lsmi).
The following instructions are assumed to run in a docker container that uses the docker image we provided.
<!-- GETTING STARTED -->
## Getting Started
### Clone this repo
In the docker container, clone this repository first.
```sh
git clone https://github.com/DY112/LSMI-dataset.git
```
### Download the LSMI dataset
You should first download the LSMI dataset from [here](https://forms.gle/EjBAUzrrsWBxGX4o7).
The dataset is composed of 3 sub-folders named "galaxy", "nikon", and "sony".
The folders, named by camera, include several scenes, and each scene folder contains full-resolution RAW files and JPG files that are converted to the sRGB color space.
Move all three folders to the root of cloned repository.
In each sub-folder, we provide metadata (meta.json) and the train/val/test scene indices (split.json).
meta.json provides the following information (see the loading sketch after this list):
- NumOfLights : Number of illuminants in the scene
- MCCCoord : Locations of Macbeth color chart
- Light1,2,3 : Normalized chromaticities of each illuminant (calculated through running 1_make_mixture_map.py)
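A minimal inspection sketch (assuming meta.json maps scene names to these fields; the path is illustrative):
```python
import json

with open("galaxy/meta.json") as f:
    meta = json.load(f)

for scene, info in meta.items():
    print(scene, info["NumOfLights"], info.get("Light1"))
```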
### Preprocess the LSMI dataset
0. Convert raw images to tiff files
To convert the original 1-channel bayer-pattern images to 3-channel RGB tiff images, run the following code:
```sh
python 0_cvt2tiff.py
```
Modify the **SOURCE** and **EXT** variables appropriately.
The converted tiff files are generated at the same location as the source files.
This process uses the **DCRAW** command with **'-h -D -4 -T'** as options.
There is no black level subtraction, saturated pixel clipping, or other processing.
You can change the parameters as appropriate for your purpose.
1. Make mixture map
```sh
python 1_make_mixture_map.py
```
Set the **CAMERA** variable to the target directory you want.
This code does the following operations for each scene:
- Subtract black level (no saturation clipping)
- Using the achromatic patches of the Macbeth color chart, find each illuminant's chromaticities
- Using green-channel pixel values, calculate the pixel-level illuminant mixture map
- Mask uncalculable pixel positions (which have 0 as value for all scene pairs) to **ZERO_MASK**
After running this code, an **npy-type mixture map** will be generated in each scene's directory.
:warning: If you run this code with **ZERO_MASK=-1**, the full-resolution mixture map may contain -1 for uncalculable pixels. You **MUST** replace this value appropriately before resizing, to prevent this negative value from being interpolated with other values.
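A minimal sketch of such a replacement (the file name and the fill value 0 are illustrative choices; adapt them to your pipeline):
```python
import numpy as np

mixture = np.load("scene/mixture.npy")
mixture[mixture < 0] = 0.0  # replace the -1 sentinel before any resizing
np.save("scene/mixture_filled.npy", mixture)
```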
2. Crop for train/test U-Net (Optional)
```sh
python 2_preprocess_data.py
```
This preprocessing code is **written only for U-Net**, so you can skip this step and freely process the full resolution LSMI set (tiff and npy files).
The image and the mixture map are resized to a square whose side length is given by the **SIZE** variable inside the code, and the ground-truth image is also generated.
Note that the sides of the image will be cropped to make the image shape square.
If you don't want to crop the sides of the image and just want to resize the whole image anyway, use **SQUARE_CROP=False**.
We set the default test size to **256**, the train size to **512**, and **SQUARE_CROP=True**.
The new dataset is created in a folder named CAMERA_SIZE (e.g., galaxy_512).
### Use U-Net for pixel-level AWB
You can download the pre-trained model parameters [here](https://yonsei-my.sharepoint.com/:f:/g/personal/dongyoung_kim_o365_yonsei_ac_kr/EkXIAmMiJApDuaB0HNFUPfYBrNEu1PDCF7deRHDbpZkExw?e=Blw861).
The pre-trained model is trained on 512x512 data with random crop & random pixel-level relighting augmentation.
Place the downloaded **models** folder into **SVWB_Unet**.
- Test U-Net
```sh
cd SVWB_Unet
sh test.sh
```
- Train U-Net
```sh
cd SVWB_Unet
sh train.sh
```
## Dataset License
<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
<!--
## Acknowledgements
* []()
* []()
* []()
--> | Provide a detailed description of the following dataset: LSMI |
EyePACS-light | This is a machine-learning-ready glaucoma dataset using a balanced subset of standardized fundus images from the Rotterdam EyePACS AIROGS train set. The dataset is split into training, validation, and test folders, which contain 2500, 270, and 500 fundus images per class, respectively. Each split has a folder for each class: referable glaucoma (RG) and non-referable glaucoma (NRG).
Three versions of the same dataset are available with different standardization strategies:
RAW - Resizing the source image to 256x256 pixels
PAD - Padding the source image to a square image and then resizing it to 256x256 pixels. This method preserves the aspect ratio but the resultant image contains less usable information.
CROP - Cropping the black background in the fundus image, padding the resultant image to create a square image, and then resizing to 256x256 pixels. This method preserves the aspect ratio and the resultant image contains the most usable information.
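A minimal sketch of the pad-to-square-and-resize step shared by the PAD and CROP variants (using Pillow; the file name is illustrative):
```python
from PIL import Image, ImageOps

def pad_and_resize(path: str, size: int = 256) -> Image.Image:
    """Resize while preserving aspect ratio, padding with black to a square."""
    img = Image.open(path)
    return ImageOps.pad(img, (size, size), color="black")

fundus = pad_and_resize("fundus.jpg")
```
The CROP variant would additionally trim the black border (e.g., by thresholding near-zero pixels) before this step. | Provide a detailed description of the following dataset: EyePACS-light |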
SMDG | Standardized Multi-Channel Dataset for Glaucoma (SMDG-19) is a collection and standardization of 19 public datasets, comprised of full-fundus glaucoma images, associated image metadata like, optic disc segmentation, optic cup segmentation, blood vessel segmentation, and any provided per-instance text metadata like sex and age. This dataset is the largest public repository of fundus images with glaucoma. | Provide a detailed description of the following dataset: SMDG |
Small ImageNet 150 | This new dataset represents a subset of ImageNet-1k. It consists of 99,000 images and 150 classes. 90,000 of them are for training, 600 images per class. The validation set size is 7,500. For testing, we add 1,500 images from the ImageNetV2 Top-Images dataset to the validation data. | Provide a detailed description of the following dataset: Small ImageNet 150 |
EBD | A large-scale benchmark with 1605 high-resolution, well-annotated images, featuring more complex scenes and a wider range of DOF settings. | Provide a detailed description of the following dataset: EBD |