| dataset_name | description | prompt |
|---|---|---|
Semeion | 1593 handwritten digits from around 80 persons were scanned and stretched into a rectangular 16x16 box in a grayscale of 256 values.
The dataset was created by Tactile Srl, Brescia, Italy (http://www.tattile.it) and donated in 1994 to Semeion Research Center of Sciences of Communication, Rome, Italy (http://www.semeion.it), for machine learning research.
For any questions, e-mail Massimo Buscema (m.buscema '@' semeion.it) or Stefano Terzi (s.terzi '@' semeion.it)
## Data Set Information:
1593 handwritten digits from around 80 persons were scanned and stretched into a rectangular 16x16 box in a grayscale of 256 values. Then each pixel of each image was scaled into a boolean (1/0) value using a fixed threshold.
Each person wrote on a paper all the digits from 0 to 9, twice. The commitment was to write the digit the first time in the normal way (trying to write each digit accurately) and the second time in a fast way (with no accuracy).
The best validation protocol for this dataset seems to be 5x2CV, with 50% for tuning (train + test) and a completely blind 50% for validation.
## Attribute Information:
This dataset consists of 1593 records (rows) and 256 attributes (columns).
Each record represents a handwritten digit, originally scanned with a resolution of 256 grey levels (2^8).
Each pixel of each original scanned image was first stretched and then binarized: every pixel whose grey-scale value was 127 or below was set to 0, and every pixel whose value was over 127 was set to 1.
Finally, each binary image was scaled into a 16x16 square box, yielding the final 256 binary attributes. | Provide a detailed description of the following dataset: Semeion |
Spanish TimeBank 1.0 | Spanish TimeBank 1.0 was developed by researchers at Barcelona Media and consists of Spanish texts in the AnCora corpus annotated with temporal and event information according to the TimeML specification language.
TimeML is a schema for annotating eventualities and time expressions in natural language as well as the temporal relations among them, thus facilitating the extraction, representation and exchange of temporal information. Spanish TimeBank 1.0 is annotated at three levels, marking events, time expressions and event metadata. The TimeML annotation scheme was tailored to the specifics of the Spanish language. Temporal relations in Spanish present distinctions of verbal mood (e.g., indicative, subjunctive, conditional, etc.) and grammatical aspect (e.g., imperfective) which are absent in English. Spanish TimeBank 1.0 joins the family of TimeBank annotated corpora, which includes languages such as English, Italian, French, Korean and Chinese. Through their common layer of annotation, these corpora provide resources useful for multilingual temporal extraction and processing, such as multilingual text entailment, opinion mining or question answering. Spanish TimeBank 1.0 is the Spanish-language complement to Catalan TimeBank 1.0 (LDC2012T10). | Provide a detailed description of the following dataset: Spanish TimeBank 1.0 |
MoCapAct | The MoCapAct dataset contains training data and models for humanoid locomotion research. It consists of expert policies that are trained to track individual clip snippets and HDF5 files of noisy rollouts collected from each expert, including proprioceptive observations and actions. | Provide a detailed description of the following dataset: MoCapAct |
Aurora-2 | The Aurora-2 data are based on a version of the original TIDigits (as available from LDC) downsampled to 8 kHz. Different noise signals have been artificially added to the clean speech data. The software tool for filtering and noise adding is available in the download area. You can use the tool to create distorted data at sampling rates of 8 or 16 kHz.
The recognition experiments for Aurora-2 are based on the usage of the HTK recognizer as it is available from Cambridge University. Scripts and configuration files are part of the Aurora-2 CDs as they are distributed by ELRA/ELDA. A published paper is available describing some details of the data creation and the recognition experiments.
The experiments as distributed on the CDs are based on acoustic features created as the output of a cepstral analysis scheme standardized by ETSI. We refer to this feature extraction scheme as the first standard. Later, an advanced front-end was standardized as a second standard. We provide a set of scripts in the download area for performing the Aurora-2 experiment with the advanced front-end. A report is available containing more details about the set-up and the obtained recognition results. | Provide a detailed description of the following dataset: Aurora-2 |
French Timebank | French TimeBank, a corpus for French annotated in ISO-TimeML.
Some statistics:
- Documents: 109
- Events: 2100
- Timexes: 608
- Signals: 288
- ALinks: 36
- SLinks: 457
- TLinks: 191 | Provide a detailed description of the following dataset: French Timebank |
Basque TimeBank | A set of Basque documents annotated with EusTimeML, a mark-up language for temporal information in Basque. | Provide a detailed description of the following dataset: Basque TimeBank |
Catalan TimeBank 1.0 | Catalan TimeBank 1.0 was developed by researchers at Barcelona Media and consists of Catalan texts in the AnCora corpus annotated with temporal and event information according to the TimeML specification language. | Provide a detailed description of the following dataset: Catalan TimeBank 1.0 |
Weibo | This dataset is from [DeepHawkes: Bridging the Gap between Prediction and Understanding of Information Cascades](https://dl.acm.org/doi/10.1145/3132847.3132973), CIKM 2017. It includes Weibo tweets and their retweets posted in a day. | Provide a detailed description of the following dataset: Weibo |
WinoGAViL | This dataset was collected via the WinoGAViL game, which gathers challenging vision-and-language associations. Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them.
We use the game to collect 3.5K instances, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient.
Researchers are welcome to evaluate models on this dataset.
A simple intended use is zero-shot prediction: run a vision-and-language model to produce a score for each (cue, image) pair, then take the K pairs with the highest scores.
A supervised setting is also possible; code for re-running the experiments is available in the GitHub repository: https://github.com/WinoGAViL/WinoGAViL-experiments | Provide a detailed description of the following dataset: WinoGAViL |
Capriccio | Capriccio is a sentiment classification dataset on tweets that simulates data drift.
It is created by slicing the Sentiment140 dataset ([homepage](http://help.sentiment140.com/home), [Huggingface datasets](https://huggingface.co/datasets/sentiment140)) with a sliding window of 500,000 tweets, resulting in 38 slices.
Thus, each slice can be used to represent the training/validation dataset of a sentiment classification model that is re-trained every day.
Each slice has 425,000 tweets for training (file named `%d_train.json`) and 75,000 tweets for validation (file named `%d_val.json`).
The name comes from the adjective *capricious*. | Provide a detailed description of the following dataset: Capriccio |
KuaiRand | KuaiRand is an unbiased sequential recommendation dataset collected from the recommendation logs of the video-sharing mobile app, Kuaishou (快手). It is the first recommendation dataset with millions of intervened interactions of randomly exposed items inserted in the standard recommendation feeds! | Provide a detailed description of the following dataset: KuaiRand |
MentSum | Mental health remains a significant public health challenge worldwide. With the increasing popularity of online platforms, many people use them to share their mental health conditions, express their feelings, and seek help from the community and counselors. While posts are of varying length, it is beneficial to provide a short but informative summary for fast processing by counselors. To facilitate research in the summarization of mental health online posts, we introduce the Mental Health Summarization dataset, MentSum, containing over 24k carefully selected user posts from Reddit, along with their short user-written summaries (called TLDR) in English from 43 mental health subreddits.
MentSum is distributed through Data Usage Agreement (DUA). Please fill out the request form at https://ir.cs.georgetown.edu/resources/ to obtain this dataset. | Provide a detailed description of the following dataset: MentSum |
Line Coverage Dataset | The dataset contains road networks taken from the 50 most populous cities in the world. The road networks are obtained from OpenStreetMap and are used to benchmark routing algorithms on graphs.
This LineCoverage-database is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
The data was processed at University of North Carolina at Charlotte.
© Copyright University of North Carolina at Charlotte, 2022
The original maps (.osm files) were obtained from OpenStreetMap.
© OpenStreetMap contributors
OpenStreetMap® is open data, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF). | Provide a detailed description of the following dataset: Line Coverage Dataset |
ABCD Study | The ABCD Study is a prospective longitudinal study starting at ages 9-10 and following participants for 10 years. The study includes a diverse sample of nearly 12,000 youth enrolled at 21 research sites across the country. It measures brain development (via structural, task-based functional, and resting-state functional imaging); social, emotional, and cognitive development; mental health; substance use and attitudes; gender identity and sexual health; bio-specimens; and a variety of physical health and environmental factors. | Provide a detailed description of the following dataset: ABCD Study |
NCANDA | The NCANDA consortium is composed of an Administrative component at the University of California San Diego, a Data Analysis and Informatics component at SRI International, and five research sites (University of California San Diego, SRI International, Duke University, the University of Pittsburgh, and the Oregon Health & Science University). A sample of 831 individuals (ages 12-21) were recruited for the study across the five research sites. The enrolled participants are followed in an accelerated longitudinal design that involves structural and functional imaging of the brain along with extensive neuropsychological and clinical assessments.
The NIAAA distributes NCANDA data to qualified investigators to promote open and public sharing of data and to accelerate the process of discovery. See the [Access to NCANDA Data section](https://www.niaaa.nih.gov/national-consortium-alcohol-and-neurodevelopment-adolescence-ncanda) to learn more about procedures for accessing data from the NCANDA Repository. | Provide a detailed description of the following dataset: NCANDA |
CrossDomainTypes4Py | A Python Dataset for Cross-Domain Evaluation of Type Inference Systems | Provide a detailed description of the following dataset: CrossDomainTypes4Py |
Persuasion Strategies | Modeling what makes an advertisement persuasive, i.e., eliciting the desired response from the consumer, is critical to the study of propaganda, social psychology, and marketing. Despite its importance, computational modeling of persuasion in computer vision is still in its infancy, primarily due to the lack of benchmark datasets that provide persuasion-strategy labels associated with ads. Motivated by persuasion literature in social psychology and marketing, we introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies. The dataset also provides image segmentation masks, which label persuasion strategies in the corresponding ad images on the test split. | Provide a detailed description of the following dataset: Persuasion Strategies |
VGMIDI | VGMIDI is a dataset of piano arrangements of video game soundtracks. It contains 200 MIDI pieces labeled according to emotion and 3,850 unlabeled pieces. Each labeled piece was annotated by 30 human subjects according to the Circumplex (valence-arousal) model of emotion using a custom [web tool](https://github.com/lucasnfe/adl-music-annotation). | Provide a detailed description of the following dataset: VGMIDI |
AquaTrash | This dataset contains 369 images of trash used for deep learning. Each image was manually labelled by our team for accurate detection, making a total of 470 bounding boxes. There are 4 classes in total: {(0: glass), (1: paper), (2: metal), (3: plastic)}. | Provide a detailed description of the following dataset: AquaTrash |
Infinity Spills Basic Dataset | Infinity AI's Spills Basic Dataset is a synthetic, open-source dataset for safety applications. It features 150 videos of photorealistic liquid spills across 15 common settings. Spills take on in-context reflections, caustics, and depth based on the surrounding environment, lighting, and floor. Each video contains a spill of unique properties (size, color, profile, and more) and is accompanied by pixel-perfect labels and annotations. This dataset can be used to develop computer vision algorithms to detect the location and type of spill from the perspective of a fixed camera.
## Key Features
+ 150 videos
+ 4 environments where spills commonly occur
+ 15 unique indoor scenes
+ Realistic spill appearance
+ Diverse variation in spill geometry
+ Multiple spill colors
+ Rich annotations
## Annotations
Each video is accompanied by a rich set of pixel-perfect labels and annotations, including:
+ Spill characteristics (size, color, profile, depth, etc.)
+ Segmentation masks
+ Bounding boxes
For the full description of labels and metadata, check out the [README](https://github.com/toinfinityai/infinity-datasets/tree/main/smartfacility-spills-basic).
## File Size
Total dataset size: 890MB
All 150 videos are contained in a single zipped folder (560MB).
Video resolutions (all mp4):
+ 960 x 720 at 24fps
+ 960 x 640 at 24fps
+ 960 x 637 at 24fps
+ 960 x 384 at 24fps
## Resources
+ [**Github README**](https://github.com/toinfinityai/infinity-datasets/tree/main/smartfacility-spills-basic): full dataset and annotation descriptions
+ [**Demo Jupyter notebook**](https://github.com/toinfinityai/infinity-datasets/blob/main/smartfacility-spills-basic/quickstart.ipynb): Shows how to (1) filter/query the dataset based on metadata, and (2) how to visualize labels
+ **Questions?** We’re happy to chat asynchronously via email or hop on a call. Just send us a note at [info@toinfinity.ai](mailto:info@toinfinity.ai) (this goes to all of the Infinity AI founders).
## Terms and Conditions
This work is licensed under a
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). Both academic and commercial applications are allowed.
## Build better models faster with synthetic data
[Infinity AI](https://infinity.ai), the creator of this dataset, is a synthetic data company. You can download all of Infinity's synthetic datasets and APIs at the [Infinity Marketplace](https://marketplace.infinity.ai/). Feel free to get in touch if you want custom data, or want to book a scoping session on how to make synthetic data your competitive edge.
[www.infinity.ai](https://infinity.ai) | [LinkedIn](https://www.linkedin.com/company/infinityai) | [Synthetic Data Blog](https://medium.com/infinity-ai) | Provide a detailed description of the following dataset: Infinity Spills Basic Dataset |
Survival Analysis of Heart Failure Patients | The dataset contains cardiovascular medical records taken from 299 patients. The patient cohort comprised 105 women and 194 men between 40 and 95 years of age. All patients in the cohort were diagnosed with systolic dysfunction of the left ventricle and had a previous history of heart failure. As a result of their previous history, every patient was classified into either class III or class IV of the New York Heart Association (NYHA) classification for various stages of heart failure. | Provide a detailed description of the following dataset: Survival Analysis of Heart Failure Patients |
Anime Face Dataset by Character Name | This dataset is suitable for image classification: train an image classification model to classify anime characters by face image. The dataset includes 130 characters with 75 images each, scraped from Danbooru.
List of characters:
Abigail williams
Aegis
Aisaka Taiga
Albedo
Anastasia
Aqua
Arcue Brunestud
Asia Argento
Astolfo
Asuna Yuuki
Atago
Ayanami Rei
Belfast
Bremerton
C.C.
Eru Chitanda
Chloe Von Einzbern
Cleveland
d.va
Dido
Emilia
Enterprise
Formable
Fubuki
Fujibayashi Kyou
Fujiwara Chika
Furukawa Nagisa
Gawr Gura
Gilgamesh
Giorno Giovanna
Hanekawa Tsubasa
Hatsune Miku
Hayasaka Ai
Hirasawa Yui
Hyuuga Hinata
Ichigo
Illyasviel Von Einzbern
Irisviel Von Einzbern
Ishtar
Iroha Isshiki
Jonathan Joestar
Kamado Nezuko
Ka Madoka
Kanbaru Suruga
Karin Kakudate
Karna
Misato Katsuragi
Keqing
Kirito
Kiryu Coco
Kizuna Ai
Shinobu Kochou
Shouko Komi
Laffey
Cú Chulainn
Kurisu Makise
Mash Kyrielight
Sakura Matou
Megumin
Mei
Meltryllis
Aqua Minato
Mikoto Misaka
Kaori Miyazono
Calliope Mori
Yuki Nagato
Azusa Nakano
Itsuki Nakano
Nakano Miku
Nakano Nino
Nakano Yotsuba
Nami
Okayu Nekomata
Robin Nico
Ina'nis Ninomae
Maki Nishikino
Souji Okita
Mio Ookami
Ougi Oshino
Shinobu Oshino
Ouro Kronii
Paimon
Platelet
Ram
Raphtalia
Rem Rezero
Rias Gremory
Medusa
Shiki Ryougi
Sakura Futaba
Mai Sakurajima
Riko Sakurauchi
Chie Satonaka
Semiramis
Sengoku Nadeko
Hitagi Senjougahara
Hotaru Share
Kaguya Shinomiya
Shirakami Fubuki
Naoto Shirogane
Shirogane Noel
Shishiro Botan
Shuten-Douji
Sinon
Asuka Langley Souryuu
ST AR-15
Super Sonico
Suzuhara Lulu
Haruhi Suzumiya
Taihou
Takagi-San
Ann Takamaki
Rikka Takanashi
Takao
Rikka Takarada
Hifumi Takimoto
Towa Tokoyami
Rin Toosaka
Nozomi Toujou
Yoshiko Tsushima
Unicorn
Pekora Usada
Erice Utsumi
Amelia Watson
Waver Velvet
Xenovia Quarta
Yui
Yui Yuigahama
Yukino Yukinoshita
Zero Two | Provide a detailed description of the following dataset: Anime Face Dataset by Character Name |
SentimentArcs: Sentiment Reference Corpus for Novels | SentimentArcs’ reference corpus for novels consists of 25 narratives selected to create a diverse set of well-recognized novels that can serve as a benchmark for future studies. The composition of the corpus was limited by copyright laws as well as historical imbalances. Most works were obtained from the US and Australian Gutenberg Projects. The corpus is expected to grow in size and diversity over time.
Several dimensions of diversity were considered for inclusion, including popularity, period, genre, topic, style and author diversity. The first version of our corpus includes only English, although Proust and Homer are included in translation. SentimentArcs has processed a larger set of novels, including some in foreign languages. The initial reference corpus is in English since performance across all ensemble models was uneven in less-resourced languages.
In sum, the corpus includes (1) the two most popular novels on Gutenberg.org (Project Gutenberg, 2021b), (2) eight of the fifteen most-assigned novels at top US universities (EAB, 2021), and (3) three works that have sold over 20 million copies (Books, 2021). There are eight works by women, two by African-Americans and five works by two LGBTQ authors. Britain leads with 15 authors, followed by 6 Americans and one each from France, Russia, North Africa and Ancient Greece. | Provide a detailed description of the following dataset: SentimentArcs: Sentiment Reference Corpus for Novels |
Scholars on Twitter | This is a dataset of paired OpenAlex author_ids (https://docs.openalex.org/about-the-data/author) and tweeter_ids.
The dataset includes 492,124 unique author_ids and 423,920 unique tweeter_ids forming 498,672 unique author-tweeter pairs. The file contains the following columns:
author_id: author_id from OpenAlex
tweeter_id: tweeter_id of the Twitter user
criteria: A list of the different matching criteria that identified the pair
valid: This column indicates whether the match has been manually checked. A 0 indicates a false positive, and a 1 indicates a true positive. Empty rows have not been manually validated. | Provide a detailed description of the following dataset: Scholars on Twitter |
Coronavirus (COVID-19) Tweets Dataset | This dataset includes CSV files that contain IDs and sentiment scores of the tweets related to the COVID-19 pandemic. The real-time Twitter feed is monitored for coronavirus-related tweets using 90+ different keywords and hashtags that are commonly used while referencing the pandemic. The oldest tweets in this dataset date back to October 01, 2019. This dataset has been wholly re-designed on March 20, 2020, to comply with the content redistribution policy set by Twitter. Twitter's policy restricts the sharing of Twitter data other than IDs; therefore, only the tweet IDs are released through this dataset. You need to hydrate the tweet IDs in order to get complete data. | Provide a detailed description of the following dataset: Coronavirus (COVID-19) Tweets Dataset |
HANNA | HANNA, a large annotated dataset of Human-ANnotated NArratives for Automatic Story Generation (ASG) evaluation, has been designed for the benchmarking of automatic metrics for ASG. HANNA contains 1,056 stories generated from 96 prompts from the WritingPrompts dataset. Each prompt is linked to a human story and to 10 stories generated by different ASG systems. Each story was annotated on six human criteria (Relevance, Coherence, Empathy, Surprise, Engagement and Complexity) by three raters. HANNA also contains the scores produced by 72 automatic metrics on each story. | Provide a detailed description of the following dataset: HANNA |
Talking With Hands 16.2M | This is a 16.2-million frame (50-hour) multimodal dataset of two-person face-to-face spontaneous conversations. This dataset features synchronized body and finger motion as well as audio data. It represents the largest motion capture and audio dataset of natural conversations to date. The statistical analysis verifies strong intraperson and interperson covariance of arm, hand, and speech features, potentially enabling new directions on data-driven social behavior analysis, prediction, and synthesis. | Provide a detailed description of the following dataset: Talking With Hands 16.2M |
The TREC Fair Ranking track 2020 | The TREC Fair Ranking track evaluates systems according to how well they fairly rank documents. The 2020 track focuses on scholarly search and fairly ranking academic abstracts and papers from authors belonging to different groups. | Provide a detailed description of the following dataset: The TREC Fair Ranking track 2020 |
FF-ANN-ID: Intrusion detection in WSNs | This dataset consists of six columns. The first four columns represent the input features (i.e., area, sensing range, transmission range, and the number of sensors). The last two columns represent the response variable or target variable (i.e., number of barriers (Gaussian) and number of barriers (Uniform)). | Provide a detailed description of the following dataset: FF-ANN-ID: Intrusion detection in WSNs |
Wikipedia users activity | Wikipedia user activity for two language editions, Portuguese and Italian, up to 8 January 2020.
**Feature Description**
**Pages**: Number of unique pages (>= 0) edited by the user.
**Activity**: Number of edits (>= 0) performed by the user.
**Anonymity**: Categorical value (Yes [1], No [0]) indicating whether the user is anonymous or not. Anonymous users are identified by their IP.
**Not Minor**: Ratio [0, 1] of edits flagged by the editor as not minor. A value of 1 (0) means all (none) of the editor's edits were flagged by him or herself as not minor.
**Comments**: Ratio [0, 1] of edits in which a comment was included. One comment is allowed per edit.
**Presence**: Ratio [0, 1] between the registration date of the user and the date of the beginning of the system (January 2001).
**Frequence**: Frequency ratio [0, 1] of edits per time window of 30 days in the editor's life cycle. Maximum value limited at 1.
**Regularity**: Regularity ratio [0, 1] per time window of 30 days. 1 means at least one interaction every 30 days in the editor's life cycle.
**Bytes**: Overall integer number of bytes edited by the user. Insertions/deletions respectively increase/decrease the amount of bytes. | Provide a detailed description of the following dataset: Wikipedia users activity |
A Simulated 4-DOF Ship Motion Dataset for System Identification under Environmental Disturbances | This dataset contains data from 125 one-hour simulations of ship motion during various sea states while performing random maneuvers in 4 degrees of freedom (surge-sway-yaw-roll). The original ship is a patrol ship developed by Perez et al. [1]. We have extended it with a set of two symmetrically placed rudder propellers. Additionally, we simulate wind forces according to Isherwood's wind model [2]. Wind-induced waves are generated with the JONSWAP spectrum [3] and the corresponding wave forces are then computed using wave force response amplitude operators (RAOs).
Implementations of the ship model, Isherwood's wind model, wave force RAOs and the JONSWAP spectrum can be found in the Marine Systems Simulator toolbox by Fossen and Perez [4].
The dataset is split into a routine operation set (96 hours) and into an Out-Of-Distribution (OOD) set (29 hours). The routine operation set is split into train-validation-test with a 60-10-30 split, while the OOD set is used solely for testing.
The dataset is used for the evaluation of nonlinear system identification methods for multi-step predictions. The following inputs and outputs are considered for the identification problem. Inputs are the shaft speeds of both propellers, their azimuth angles, the wind angle of attack, and the wind speed. Measured states or outputs are surge velocity, sway velocity and roll rate, as well as yaw angle and roll angle.
Please see the README.txt file for details regarding the file structure of this dataset and a description of the variables in the .tab files.
This research is funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech).
[1] T. Perez, A. Ross, and T. I. Fossen, “A 4-DOF SIMULINK model of a coastal patrol vessel for manoeuvring in waves,” in IFAC MCMC, 2006.
[2] R. M. Isherwood, “Wind resistance of merchant ships,” The Royal Institution of Naval Architects, 1972.
[3] K. Hasselmann, T. Barnett, E. Bouws, H. Carlson, D. Cartwright, K. Enke, J. Ewing, H. Gienapp, D. Hasselmann, P. Kruseman, A. Meerburg, P. Muller, D. Olbers, K. Richter, W. Sell, and H. Walden, “Measurements of wind-wave growth and swell decay during the joint north sea wave project (JONSWAP),” Deut. Hydrogr. Z., vol. 8, pp. 1-95, 01 1973.
[4] T. I. Fossen and T. Perez, “Marine Systems Simulator (MSS),” https://github.com/cybergalactic/MSS, 2004, last accessed: 2022-06-14. | Provide a detailed description of the following dataset: A Simulated 4-DOF Ship Motion Dataset for System Identification under Environmental Disturbances |
Youtube-VIS 2022 Validation | Video object segmentation has been studied extensively in the past decade due to its importance in understanding video spatial-temporal structures as well as its value in industrial applications. Recently, data-driven algorithms (e.g. deep learning) have become the dominant approach to computer vision problems, and one of the most important keys to their success is the availability of large-scale datasets. Previously, we presented the first large-scale video object segmentation dataset, named YouTubeVOS, and hosted the Large-scale Video Object Segmentation Challenge in conjunction with ECCV 2018, ICCV 2019 and CVPR 2021. This year, we are thrilled to invite you to the 4th Large-scale Video Object Segmentation Challenge in conjunction with CVPR 2022. The benchmark is an augmented version of the YouTubeVOS dataset with more annotations; some incorrect annotations have also been corrected. For more details, check our website for the workshop and challenge. | Provide a detailed description of the following dataset: Youtube-VIS 2022 Validation |
CitySim Dataset | The development of safety-oriented research ideas and applications requires fine-grained vehicle trajectory data that not only has high accuracy but also captures a substantial number of critical safety events. This paper introduces the CitySim Dataset, which was devised with a core objective of facilitating safety-based research and applications. CitySim has vehicle trajectories extracted from 1,140 minutes of drone videos recorded at 12 different locations. It covers a variety of road geometries including freeway basic segments, weaving segments, expressway merge/diverge segments, signalized intersections, stop-controlled intersections, and intersections without sign/signal control. CitySim trajectories were generated through a five-step procedure which ensured trajectory accuracy. Furthermore, the dataset provides vehicle rotated bounding box information, which is demonstrated to improve safety evaluation. Compared to other video-based trajectory datasets, the CitySim Dataset has significantly more critical safety events with higher severity, including cut-in, merge, and diverge events. In addition, CitySim facilitates research towards digital twin applications by providing relevant assets like the recording locations' 3D base maps and signal timings. These features enable more comprehensive conditions for safety research and applications such as autonomous vehicle safety and location-based safety analysis. The dataset is available online at https://github.com/ozheng1993/UCF-SST-CitySim-Dataset. | Provide a detailed description of the following dataset: CitySim Dataset |
Sparse LiDAR KITTI dataset | Sparse LiDAR data extracted from the 64-beam Velodyne scans in the KITTI dataset. It contains several sparser LiDAR variants: 2 beams, 4 beams, 8 beams, 16 beams, and 32 beams. | Provide a detailed description of the following dataset: Sparse LiDAR KITTI dataset |
Herbarium 2022 | The Herbarium 2022: Flora of North America is a part of a project of the New York Botanical Garden funded by the National Science Foundation to build tools to identify novel plant species around the world. The dataset strives to represent all known vascular plant taxa in North America, using images gathered from 60 different botanical institutions around the world.
In botany, a ‘flora’ is a complete account of the plants found in a geographic region. The dichotomous keys and detailed descriptions of diagnostic morphological features contained within a flora are used by botanists to determine which names to apply to plant specimens. This year's competition dataset aims to encapsulate the flora of North America so that we can test the capability of artificial intelligence to replicate this traditional tool, a crucial first step toward harnessing AI's potential for botanical applications.
The Herbarium 2022: Flora of North America dataset comprises 1.05 M images of 15,501 vascular plants, constituting more than 90% of the taxa documented in North America. Our dataset is constrained to include only vascular land plants (lycophytes, ferns, gymnosperms, and flowering plants).
Our dataset has a long-tail distribution. The number of images per taxon is as few as seven and as many as 100 images. Although more images are available, we capped the maximum number in an attempt to ensure sufficient but manageable training data size for competition participants. | Provide a detailed description of the following dataset: Herbarium 2022 |
Scapped instagram comment | This is a dataset of audience comments for each KOL (Key Opinion Leader) that uses Instagram as their campaign platform. The comments are scraped and exported as CSV through apify.com | Provide a detailed description of the following dataset: Scapped instagram comment |
UzWordnet | UzWordnet is a lexical-semantic database, or a “word-net”, for the (Northern) Uzbek language (native: O’zbek till) compatible with [Princeton Wordnet](https://wordnet.princeton.edu). By providing it open source (see License), we aim to motivate, support, and increase the application of database and knowledge graphs principles and techniques to the study of computational aspects of the (Northern) Uzbek language and, more generally, the usability of Uzbek within IT applications and the Internet. | Provide a detailed description of the following dataset: UzWordnet |
Rhythmic Gymnastic | The Rhythmic Gymnastics dataset contains videos of four different types of gymnastics routines: ball, clubs, hoop and ribbon. Each type of routine has 250 associated videos, and the length of each video is approximately 1 min 35 s. We chose high-standard international competition videos, including videos from the 36th and 37th International Artistic Gymnastics Competitions, to construct the dataset. We have edited out the irrelevant parts of the original videos (such as replay shots and athlete warmups). We have annotated each video with three scores (a difficulty score, an execution score and a total score), which were given by the referee in accordance with the official scoring system. | Provide a detailed description of the following dataset: Rhythmic Gymnastic |
CTI-to-MITRE | This dataset contains samples of CTI (Cyber Threat Intelligence) data in natural language, labeled with the corresponding adversarial techniques from the MITRE ATT&CK framework.
This dataset can be used for research on analyzing tactics and techniques of cyber attacks from text in natural language. | Provide a detailed description of the following dataset: CTI-to-MITRE |
EMOVO | This article describes the first emotional corpus, named EMOVO, applicable to the Italian language. It is a database built from the voices of up to 6 actors who played 14 sentences simulating 6 emotional states (disgust, fear, anger, joy, surprise, sadness) plus the neutral state. These emotions are the well-known Big Six found in most of the literature on emotional speech. The recordings were made with professional equipment in the Fondazione Ugo Bordoni laboratories. The paper also describes a subjective validation test of the corpus, based on emotion discrimination of two sentences carried out by two different groups of 24 listeners. The test was successful because it yielded an overall recognition accuracy of 80%. It is observed that the emotions least easy to recognize are joy and disgust, whereas the easiest to detect are anger, sadness and the neutral state. | Provide a detailed description of the following dataset: EMOVO |
QM9 Charge Densities and Energies Calculated with VASP | QM9 molecules calculated with VASP (at DFT level) using Atomic Simulation Environment with the following parameters:
Vasp(xc='PBE', istart=0, algo='Normal', icharg=2, nelm=180, ispin=1, nelmdl=6, isym=0, lcorr=True, potim=0.1, nelmin=5, kpts=[1,1,1], ismear=0, ediff=0.1E-05, sigma=0.1, nsw=0, ldiag=True, lreal='Auto', lwave=False, lcharg=True, encut=400)
The resulting CHGCAR files have been compressed with lz4 compression and packed in non-compressed tar archives with up to 1000 structures in each.
The datasplits json files contain the indices (0-index) of the train, validation and test sets used in the paper "Equivariant Graph neural networks for fast electron density estimation of molecules, liquids, and solids"
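A minimal loading sketch for the datasplit files; note that the exact key names in the json (`"train"`, `"validation"`, `"test"`) are an assumption and may need adjusting to the released files:

```python
import json

def load_datasplits(path):
    """Load the 0-indexed train/validation/test indices from a datasplits json file.

    The key names ("train", "validation", "test") are assumed, not verified
    against the released files.
    """
    with open(path) as f:
        splits = json.load(f)
    return splits["train"], splits["validation"], splits["test"]
```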
The QM9 molecule structures were obtained from https://doi.org/10.6084/m9.figshare.c.978904.v5 | Provide a detailed description of the following dataset: QM9 Charge Densities and Energies Calculated with VASP |
NMC Li-ion Battery Cathode Energies and Charge Densities | This dataset contains charge densities for NMC (Ni, Mn and Co) 2x2x1 supercells (12 transition metal atoms and 12 Li/vacancy sites) with varying levels of Li content. For each structure we first randomly sample the number of Mn, Ni and Co atoms given that the total number of transition metal atoms is 12 and then randomly assign them to the transition metal positions of the lattice. Similarly, the number of vacancies is uniformly sampled between 0 and 12 and vacancies are assigned to the Li sites. The generated configurations are then relaxed in two steps: first we relax the atom positions with fixed cell parameters and then we allow both positions and cell parameters to relax. We keep only the electron density (CHGCAR) file after the last cell relaxation step. The atoms are relaxed until forces on each atom are lower than 0.01 eV/Å.
The final relaxation is done with the following VASP settings (DFT level of theory) through the Atomic Simulation Environment:
xc='PBE', gga='PS', istart=1, algo='Normal', icharg=1, nelm=1800, ispin=1, nelmdl=6, isym=0, lcorr=True, potim=0.1, nelmin=5, kpts=[3,3,1], ismear=0, ediff=0.1E-03, ediffg=-0.05, sigma=0.1, nsw=200, isif=3, ibrion=2, ldiag=True, lreal='Auto', lwave=False, lcharg=True, prec='Normal'
The resulting CHGCAR files have been compressed with lz4 compression and packed in non-compressed tar archives with up to 1000 structures in each.
The datasplits json files contain the indices (0-index) of the train, validation and test sets used in the paper "Equivariant Graph neural networks for fast electron density estimation of molecules, liquids, and solids" | Provide a detailed description of the following dataset: NMC Li-ion Battery Cathode Energies and Charge Densities |
Ethylene Carbonate Molecular Dynamics | This dataset consists of charge densities of individual snapshots from a DFT-based molecular dynamics trajectory. We insert 8 ethylene carbonate molecules in the simulation box. To quickly explore a large part of the configurational space, we put Hookean constraints on the molecular bonds (to maintain molecular identity, so that molecules are not torn apart at such a high temperature) and run Langevin molecular dynamics with a thermostat temperature of 3000 K. The simulation was run for 12380 steps of 0.5 fs.
The resulting CHGCAR files have been compressed with lz4 compression and packed in non-compressed tar archives with up to 1000 structures in each.
The datasplits json files contain the indices (0-index) of the train, validation and test sets used in the paper "Equivariant Graph neural networks for fast electron density estimation of molecules, liquids, and solids"
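Because the first 1000 (unequilibrated) steps are excluded, split indices are offset against the CHGCAR file numbering; the following helper is an illustrative sketch, with the five-digit zero-padded filename pattern taken from archive member names such as 01000.CHGCAR.lz4:

```python
def split_to_chgcar_filename(split_index, offset=1000):
    """Map a 0-indexed datasplit entry to its CHGCAR archive member name.

    Index 0 of the split files corresponds to 01000.CHGCAR.lz4, because the
    first 1000 MD steps are excluded as equilibration.
    """
    return f"{split_index + offset:05d}.CHGCAR.lz4"
```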
To only use equilibrated structures the first 1000 time steps are not used in the paper. Index 0 in the split files corresponds to index 1000 of the CHGCAR files (01000.CHGCAR.lz4). | Provide a detailed description of the following dataset: Ethylene Carbonate Molecular Dynamics |
Crackseg9k | The dataset published here is the largest, most diverse and consistent crack segmentation dataset constructed so far. It contains 9255 images that combine different smaller open-source datasets. It consists of 10 sub-datasets preprocessed and resized to 400x400, namely Crack500, Deepcrack, Sdnet, Cracktree, Gaps, Volker, Rissbilder, Noncrack, Masonry and Ceramic. | Provide a detailed description of the following dataset: Crackseg9k |
Shifts-Weather | A dataset of real distributional shift across multiple large-scale tasks.
Reference: [Arxiv Paper](https://arxiv.org/abs/2107.07455) | Provide a detailed description of the following dataset: Shifts-Weather |
RL-ISN-dataset | The datasets of "Reinforcement Learning-enhanced Shared-account Cross-domain Sequential Recommendation" (TKDE 2022) | Provide a detailed description of the following dataset: RL-ISN-dataset |
Car datasets in multiple scenes | This dataset is a collection of 4,000 images of cars in multiple scenes that are ready to use for optimizing the accuracy of computer vision models.
All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos.
PIXTA is the largest platform of visual materials in the Asia Pacific region offering fully-managed services, high quality contents and data, and powerful tools for businesses & organizations to enable their creative and machine learning projects.
For more details, please refer to the link: https://www.pixta.ai/ Or send your inquiries to contact@pixta.ai | Provide a detailed description of the following dataset: Car datasets in multiple scenes |
Unified SSL Benchmark (USB) | The Unified SSL Benchmark (USB) consists of 15 diverse, challenging, and comprehensive tasks from computer vision (CV), natural language processing (NLP), and audio processing (Audio) to evaluate semi-supervised learning (SSL) methods. A modular and extensible codebase is open-sourced for fair evaluation of these SSL methods. | Provide a detailed description of the following dataset: Unified SSL Benchmark (USB) |
MDIA | MDIA is a large-scale multilingual benchmark for dialogue generation. It covers real-life conversations in 46 languages across 19 language families. | Provide a detailed description of the following dataset: MDIA |
Norwegian Endurance Athlete ECG Database | ## Abstract
The Norwegian Endurance Athlete ECG Database contains 12-lead ECG recordings from 28 elite athletes from various sports in Norway. All recordings are 10-second resting ECGs recorded with a General Electric (GE) MAC VUE 360 electrocardiograph. All ECGs are interpreted by both the GE Marquette SL12 algorithm (version 23 (v243)) and one cardiologist with training in interpretation of athlete's ECG. The data was collected at the University of Oslo in February and March 2020.
## Background
Athletes often have increased thickness in the left ventricular wall and extended chambers in both the left and right ventricle compared to untrained people at the same age [1]. These changes occur as a result of the heart adapting to large amounts of exercise. These changes can be seen on an echocardiogram, but the changes also give electrical manifestations that can be observed on an electrocardiogram (ECG). Even if these changes are considered healthy, they can be confused with pathological changes that are related to sudden cardiac death (SCD) [2]. In addition, studies show that the incidence of SCD is higher in athletes than in non-athletes of the same age [3,4]. Current measures and procedures for detecting athletes with an increased risk of SCD are characterized by low accuracy and low precision. This emphasizes that ECG interpretation of athletes is an area that requires increased focus.
## Methods
Twenty-eight healthy athletes were recruited for this study. 19 (68%) of the participants were men and 9 (32%) were women. Participants' ages ranged from 20 to 43 years (Mean = 25 years, standard deviation = 4.7 years). The distribution among sports was 24 rowers (86%), 2 kayakers (7%) and 2 cyclists (7%). The average amount of training hours for 2017 was 822 hours with a standard deviation of 117 hours, in 2018 the average amount of training was 820 hours with a standard deviation of 113 hours and in 2019 the average amount of training was 798 hours with a standard deviation of 171 hours.
The study protocol and consent form were approved by the Norwegian Centre for Research Data (application ID: 389013) and the University of Oslo, and the ethical considerations were approved by the Regional Committees for Medical and Health Research Ethics (application ID: 51205). All participants were informed and gave written consent before the test was initiated, they also agreed to have their ECG shared in an open database after the project was finished. The test subjects were lying horizontally on a bed, relaxing, while electrodes were attached to perform a 12-lead ECG recording. The recordings were performed as a standard 10 seconds resting ECG. The device used was a GE MAC VUE 360. The device's built-in interpretation algorithm, Marquette 12SL (version 23 (v243)), performed automatic interpretation of all ECGs.
All ECG recordings were examined by a cardiologist, with specialization in athletes' hearts, after the recordings were completed. The cardiologist interpreted the ECGs according to the international criteria for ECG interpretation of athletes.
## Data Description
Each of the 28 waveform files consists of 12 arrays, representing the twelve leads. The ECGs were obtained using a General Electric (GE) MAC VUE 360 electrocardiograph and interpreted using both the built-in GE Marquette SL12 algorithm (version 23 (v243)) and a cardiologist with training in interpretation of athlete's ECG.
The waveform files are stored in .dat files with a corresponding .hea file containing all the metadata. These file formats are compatible with the Python WaveForm DataBase (WFDB) package, which makes it easy to import the data.
All ECG waveforms are sampled and stored with a sampling frequency of 500 Hz and a length of 5000 samples (10 seconds). The header file contains information about the total number of leads, the samples per lead and additional information about each lead. The last two lines in the header file contain the diagnoses given by the Marquette SL12 (SL12) algorithm and the cardiologist (C).
```
ath_001 12 500 5000
ath_001.dat 16 50000/mV 16 0 10251 49595 0 I
ath_001.dat 16 50000/mV 16 0 -1096 35223 0 II
ath_001.dat 16 50000/mV 16 0 -10267 60826 0 III
ath_001.dat 16 50000/mV 16 0 -3724 3505 0 AVR
ath_001.dat 16 50000/mV 16 0 9391 26379 0 AVL
ath_001.dat 16 50000/mV 16 0 -5395 57481 0 AVF
ath_001.dat 16 50000/mV 16 0 13580 61759 0 V1
ath_001.dat 16 50000/mV 16 0 11410 33501 0 V2
ath_001.dat 16 50000/mV 16 0 14721 52508 0 V3
ath_001.dat 16 50000/mV 16 0 16103 51083 0 V4
ath_001.dat 16 50000/mV 16 0 6662 44197 0 V5
ath_001.dat 16 50000/mV 16 0 -3806 11333 0 V6
#SL12: sinus bradycardia with marked sinus arrhythmia, Right Axis Deviation, Borderline ECG
#C: Sinus arrhythmia, Normal ECG
```
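As an illustrative sketch (assuming the `#SL12:` / `#C:` comment convention shown in the example header above), the two interpretations can be extracted from a header like this:

```python
def parse_diagnoses(hea_text):
    """Extract the algorithm (SL12) and cardiologist (C) interpretations
    from the comment lines of a .hea header string."""
    diagnoses = {}
    for line in hea_text.splitlines():
        if line.startswith("#SL12:"):
            diagnoses["SL12"] = line[len("#SL12:"):].strip()
        elif line.startswith("#C:"):
            diagnoses["C"] = line[len("#C:"):].strip()
    return diagnoses
```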
## Usage Notes
The intended use of this database is the development of algorithms designed to provide better ECG-based diagnostics for athletes. One of the unique features of this database is that the ECGs are annotated by both a trained cardiologist and by a state-of-the-art ECG software (GE Marquette SL12).
To get started in Python you can use this code to import the ECG-signals and metadata
```python
import os

import numpy as np
import wfdb

directory = "./your/directory/"

ECGs = []
for ecgfilename in sorted(os.listdir(directory)):
    if ecgfilename.endswith(".dat"):
        # wfdb.rdsamp expects the record name without extension and
        # returns a (signal, metadata) tuple
        record = os.path.splitext(ecgfilename)[0]
        ECGs.append(wfdb.rdsamp(os.path.join(directory, record)))

# dtype=object, since each entry is a (ndarray, dict) pair
ECGs = np.asarray(ECGs, dtype=object)
```
The numpy array (ECGs) now contains all ECG signals and metadata.
Despite the fact that the measurements were taken from top-trained athletes, it is not confirmed whether they had athletic remodeling of the heart. No echocardiographic or other examinations were performed to investigate the structure of the heart.
## Release Notes
1.0.0 Initial release of the dataset.
## Ethics
The authors declare no ethics concerns.
## Acknowledgements
I would like to thank Professor Emeritus Knut Gjessdal for providing his medical expertise and interpreting all of the ECGs. This work was done at the University of Oslo, and I would like to thank Professor Ørjan Grøttem Martinsen for providing appropriate facilities for ECG measurements. | Provide a detailed description of the following dataset: Norwegian Endurance Athlete ECG Database |
FindVehicle | The ***first*** NER dataset in the field of traffic, aimed at extracting the characteristics and attributes of vehicles on the road.
* Both flat and overlapped named entity annotations.
* Both coarse-grained and fine-grained named entities.
* It contains 8 kinds of coarse-grained entities and 12 kinds of fine-grained entities, covering 65 vehicle brands and 4793 vehicle models from all over the world. | Provide a detailed description of the following dataset: FindVehicle |
Datasets for 3D shape reconstruction from 2D microscopy images | Two single-cell datasets for 3D shape reconstruction from 2D microscopy images, used in our three previous publications, together with the respective model predictions. | Provide a detailed description of the following dataset: Datasets for 3D shape reconstruction from 2D microscopy images |
Operating ITS-G5 DSRC over Licensed and Unlicensed Bands: A City-Scale Performance Evaluation | A large-scale dataset of measurements of ETSI ITS-G5 Dedicated Short Range Communications (DSRC) is presented. Our dataset consists of network interactions happening between two On-Board Units (OBUs) and four Road Side Units (RSUs). Each OBU was fitted onto a vehicle. The two vehicles have been driven across the Innovate UK-funded FLOURISH Test Track encompassing key roads in the center of Bristol, UK. As for the RSUs, they were located at fixed locations around the track. Each RSU and OBU is equipped with two transceivers operating at different frequencies. During our experiments, each transceiver broadcast Cooperative Awareness Messages (CAMs) every 10ms to the neighboring RSUs and or OBUs.
The dataset covers eight experimental sessions of 2 hours each, which took place over four days, with two experimental sessions per day. On each day, both transceivers onboard every RSU and OBU were operated on two different frequencies. In particular, we operated the transceivers both over the licensed DSRC band and over the unlicensed Industrial, Scientific, and Medical (ISM) radio bands 2.4GHz-2.5GHz and 5.725GHz-5.875GHz.
During each experimental session, for each transceiver, all the transmitted and received CAMs were recorded. Furthermore, for each of the received CAMs, we also recorded its Received Signal Strength Indicator (RSSI) and the location of the receiving transceiver, to generate a complete dataset of network interactions. | Provide a detailed description of the following dataset: Operating ITS-G5 DSRC over Licensed and Unlicensed Bands: A City-Scale Performance Evaluation |
Images of Public Streetlights with Operational Monitoring using Computer Vision Techniques | This dataset consists of ~350k JPEG images of streetlight columns installed on a public road infrastructure located in the city of Bristol, UK.
Each streetlight is photographed by a Raspberry Pi Camera Module v1, installed on each lamppost, providing a unique camera placement, photographic angle, and distance from the streetlight. Several streetlights are partially obstructed by vegetation or are outside the Field of View (FoV) of the Raspberry Pi camera. Finally, the cameras facing the sky are susceptible to weather conditions (e.g., rain, snow, direct sunlight, etc.) that can partially or entirely alter the quality of the images taken.
The above provides a unique and diverse dataset of images that can be used for training tools and machine learning models for inspection, monitoring and maintenance use-cases within Smart Cities applications. | Provide a detailed description of the following dataset: Images of Public Streetlights with Operational Monitoring using Computer Vision Techniques |
Videezy4K | To evaluate the performance on 4K burst images/video, we collect several clips from website. The dataset can be download from : https://drive.google.com/file/d/1YDljUONvyKUO24smTx__CUH_4Zxhle09/view?usp=sharing | Provide a detailed description of the following dataset: Videezy4K |
SC_burst | Contains 16 bursts of images captured using smartphones, for burst/video denoising, restoration, and enhancement tasks. The raw format is unified, and the data are saved in SC_burst as ".MAT" files, where the raw data and metadata are stored. | Provide a detailed description of the following dataset: SC_burst |
Niko Chord Progression Dataset | ### Introduction
The Niko Chord Progression Dataset is used in [AccoMontage2](https://github.com/billyblu2000/AccoMontage2). It contains 5k+ chord progression pieces, labeled with styles. There are four styles in total: Pop Standard, Pop Complex, Dark and R&B. Some progressions have an 'Unknown' style. Some statistics are provided below.
| | Mean | Variance |
| -------------------------- | ----- | -------- |
| Note Pitch | 57 | 167.70 |
| Note Velocity | 79.05 | 457.89 |
| Note Duration (in seconds) | 1.38 | 1.62 |
### Data Formats
You can access the Niko Chord Progression Dataset in two formats: MIDI format and the quantized note matrix format.
##### MIDI (dataset.zip)
Each chord progression piece is stored as a single MIDI file.
##### Quantized Note Matrix (dataset.pkl)
A python dictionary with a format like the following. `nmat` is a 2-d matrix, where each row represents a quantized note: `[start, end, pitch, velocity]`. <u>Each note is quantized at the eighth-note level, e.g., `start=2` means the note begins at the third eighth note.</u> `root` is also a 2-d matrix. It labels the roots of the chords using an eighth-note sample rate. Each row of `root` represents a bar. Each element is an integer ranging from 0 (C note) to 11 (B note).
```python
{'piece name':
{'nmat': [[0, 3, 60, 60], ...], # 2-d matrix: note matrix
'root': [[0,0,0,0,0,0,0,0], ...], # 2-d matrix: root label
'style': 'some style', # pop_standard, pop_complex, dark, r&b, unknown
'mode': 'some mode', # M, m
'tonic': 'some tonic' # C, Db, ... B
},
...
}
# load the dataset using pickle
import pickle
with open('dataset_path_and_name.pkl', 'rb') as file:
dataset = pickle.load(file)
```
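As a small illustrative helper (the flat spellings mirror the tonic labels `C, Db, ... B` above; this mapping is an assumption for illustration, not part of the dataset), the `root` integers can be turned into pitch-class names:

```python
# Pitch-class names with flat spellings, matching the tonic labels (C, Db, ... B).
PITCH_CLASSES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def root_name(root_int):
    """Map a root label integer (0 = C ... 11 = B) to a pitch-class name."""
    return PITCH_CLASSES[root_int]
```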
### Supplementary description
##### Original Dataset
The Niko Chord Progression Dataset is a re-organized version of the original Niko Dataset. The original Niko Dataset has duplicate progressions and unnecessary labels; it was therefore processed and converted into this version.
##### Style Mapping
The style label was mapped from the original dataset to the new dataset. The style label in the original dataset is stored as folder names, and thus the style can be obtained from the file path. The following shows a detailed description of the style mapping function.
```
// Structure of the original dataset
.
├─A Major - F# Minor ---> progressions are sorted based on tonics and modes
│ ├─1 - Best Melodies ---> eliminated
│ │ ├─Catchy
│ │ ├─Dark_HipHop_Trap
│ │ ├─EDM
│ │ ├─Emotional
│ │ ├─Pop
│ │ └─R&B_Neosoul
│ ├─2 - Best Chords
│ │ ├─Dark_HipHop_Trap ---> New style: Dark
│ │ ├─EDM
│ │ │ ├─Classy_7th_9th ---> New style: Pop Complex
│ │ │ ├─Emotional ---> New style: Pop Complex
│ │ │ └─Standard ---> New style: Pop Standard
│ │ ├─Emotional ---> New style: Pop Complex
│ │ ├─Pop
│ │ │ ├─Classy_7th_9th ---> New style: Pop Complex
│ │ │ ├─Emotional ---> New style: Pop Complex
│ │ │ └─Standard ---> New style: Pop Standard
│ │ └─R&B_Neosoul ---> New style: R&B
│ └─3 - Rest Of Pack
│ ├─A-Bm-D (I-ii-IV) ---> progressions sorted based on root pattern
│ │ ├─Arps ---> eliminated
│ │ ├─Basslines ---> eliminated
│ │ ├─Chord Breakdown ---> New style: Unknown
│ │ ├─Chord Progression -> New style: Unknown
│ │ ├─Epic Endings ---> eliminated
│ │ ├─Fast Chord Rhythm -> eliminated
│ │ │ ├─Back & Forth
│ │ │ └─Same Time
│ │ ├─Melodies ---> eliminated
│ │ │ ├─115-130bpm
│ │ │ ├─130-160bpm
│ │ │ ├─160-180bpm
│ │ │ └─90-115bpm
│ │ └─Slow Chord Rhythm -> New style: Unknown
...
```
### Cite
```
L. Yi, H. Hu, J. Zhao, and G. Xia, “AccoMontage2: A Complete Harmonization and Accompaniment Arrangement System”, in Proceedings of the 23rd International Society for Music Information Retrieval Conference, Bengaluru, India, 2022.
```
### License
MIT Licensed. Copyright © 2022 New York University Shanghai Music X Lab. All rights reserved. | Provide a detailed description of the following dataset: Niko Chord Progression Dataset |
EMC Dutch Clinical Corpus | EMC Dutch clinical corpus contains four types of anonymized clinical documents: entries from general practitioners, specialists’ letters, radiology reports, and discharge letters. The identified UMLS terms in the corpus are annotated for negation, temporality, and experiencer properties.
Zubair Afzal, Ewoud Pons, Ning Kang, Miriam CJM Sturkenboom, Martijn J Schuemie, Jan A Kors. ContextD: an algorithm to identify contextual properties of medical terms in a Dutch clinical corpus. BMC Bioinformatics 2014, 15:373 doi:10.1186/s12859-014-0373-3 | Provide a detailed description of the following dataset: EMC Dutch Clinical Corpus |
Human faces with mixed-race & various emotions | This dataset consists of 600+ face images with different emotions and mixed races that are ready to use for optimizing the accuracy of computer vision models. Subjects' ages range from 20 to 60 years, with a balanced gender distribution, no occlusion, and varied head direction (<45 degrees up-down and left-right). All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region offering fully-managed services, high quality contents and data, and powerful tools for businesses & organizations to enable their creative and machine learning projects.
For more details, please refer to the link: https://www.pixta.ai/
Or send your inquiries to contact@pixta.ai | Provide a detailed description of the following dataset: Human faces with mixed-race & various emotions |
OSDD | The Objects States Detection Dataset consists of images depicting everyday household objects in a number of different states. The ground-truth annotations involve the labels and bounding boxes spanning 18 object categories and 9 state classes.
The object categories are: *bottle, jar, tub, book, drawer, door, cup, mug, glass, bowl, basket, box, phone, charger, socket, towel, shirt* and *newspaper*. The 9 state classes are: *open, close, empty, containing something liquid (CL), containing something solid (CS), plugged, unplugged, folded* and *unfolded*.
The images were obtained by selecting video frames from the Something-Something V2 Dataset (https://developer.qualcomm.com/software/ai-datasets/something-something).
Specifically, images containing visually salient objects and states of the aforementioned categories were captured and annotated with bounding-boxes and ground truth labels referring to the corresponding object categories and state classes. Overall, the dataset contains 13,744 images and 19,018 annotations obtained by selecting the first, last and middle frames of 9,015 videos, after checking that each of them contains salient information. | Provide a detailed description of the following dataset: OSDD |
Online Novel Recommendation Dataset | A dataset for online novel recommendation. | Provide a detailed description of the following dataset: Online Novel Recommendation Dataset |
FutureHouse | We present a new large-scale photorealistic panoramic dataset named FutureHouse, which has the following characteristics.
1) It contains over 70,000 high-quality models with high-resolution meshes and physical materials. All models are measured to real-world standards.
2) Selected scene layouts are carefully designed by over 100 excellent artists. All of the selected layouts are used in real-world displays.
3) It contains 28,579 good panoramic views from 1,752 house-scale scenes. Therefore, it can be used for perspective image tasks as well as omnidirectional image tasks.
4) More physical material representations. Most materials are represented by a microfacet BRDF modeling metalness, and the rest are represented by special shading models, e.g., cloth material and transmission material.
5) High rendering quality. Benefiting from a commercial rendering engine, Unreal Engine 4, and powerful deep learning super sampling (DLSS), our renderings have less noise.
Our SVBRDF representation, including base color and metalness, is capable of producing nonmonochrome specular reflectance. | Provide a detailed description of the following dataset: FutureHouse |
PKU SketchRe-ID Dataset | The PKU Sketch Re-ID dataset is constructed by National Engineering Laboratory for Video Technology (NELVT), Peking University.
The dataset contains 200 persons, each of which has one sketch and two photos. Photos of each person were captured during daytime by two cross-view cameras. We cropped the raw images (or video frames) manually to make sure that every photo contains the one specific person. A total of 5 artists drew all the persons' sketches, and every artist has his own painting style. | Provide a detailed description of the following dataset: PKU SketchRe-ID Dataset |
A-AVA | A new Actor-identified AVA (A-AVA) dataset built on the existing AVA dataset and the TAO dataset by assigning a unique actor identity and actions to each actor. | Provide a detailed description of the following dataset: A-AVA |
FathomNet | FathomNet is an open-source image database that can be used to train, test, and validate state-of-the-art artificial intelligence algorithms to help us understand our ocean and its inhabitants. Inspired by annotated image databases such as ImageNet and COCO, FathomNet aims to establish the same kind of reference data set for images of ocean life. The long-term goal of FathomNet is to aggregate >1k fully annotated and localized images per marine species of Animalia (>200k), with the ability to expand and include other underwater concepts (e.g., substrate type, equipment, debris, etc.) for training and validating machine learning models. We hope that contributions from the broader community will realize our goals for FathomNet. | Provide a detailed description of the following dataset: FathomNet |
HQ-YTVIS | While Video Instance Segmentation (VIS) has seen rapid progress, current approaches struggle to predict high-quality masks with accurate boundary details. To tackle this issue, we identify that the coarse boundary annotations of the popular YouTube-VIS dataset constitute a major limiting factor. To benchmark high-quality mask predictions for VIS, we introduce the HQ-YTVIS dataset as well as Tube-Boundary AP in ECCV 2022. HQ-YTVIS consists of a manually re-annotated test set and our automatically refined training data, which provides training, validation and testing support to facilitate future development of VIS methods aiming at higher mask quality. | Provide a detailed description of the following dataset: HQ-YTVIS |
tida-gcn-data | The datasets of "Time Interval-enhanced Graph Neural Network for Shared-account Cross-domain Sequential Recommendation" (TNNLs 2022) | Provide a detailed description of the following dataset: tida-gcn-data |
SPARKESX | We present the Single-dish PARKES data sets for finding the uneXpected (SPARKESX), a compilation of real and simulated high-time-resolution observations. SPARKESX comprises three mock surveys from the Parkes "Murriyang" radio telescope. A broad selection of simulated and injected expected signals (such as pulsars and fast radio bursts), poorly known signals (such as the features expected from flare stars) and unknown unknowns are generated for each survey. We provide a baseline by presenting how successful a typical pipeline based on the standard pulsar search software, PRESTO, is at finding the injected signals.
The dataset is designed to aid in the development of new search algorithms, including image processing, machine learning, and deep learning. The raw data, ground truth labels, and baseline are provided. | Provide a detailed description of the following dataset: SPARKESX |
Flight Scheduling Data | Dataset was introduced by Jones Granatyr in his book review
(https://iaexpert.academy/2016/10/25/review-de-livro-programando-a-inteligencia-coletiva), where he scraped flight schedules. | Provide a detailed description of the following dataset: Flight Scheduling Data
SinD | The **SIND** dataset is based on 4K video captured by drones, providing information including traffic participant trajectories, traffic light status, and high-definition maps | Provide a detailed description of the following dataset: SinD |
The BioScope Corpus | It is a freely available resource for research on handling negation and uncertainty in biomedical texts. The corpus consists of three parts, namely medical free texts, biological full papers and biological scientific abstracts. The dataset contains annotations at the token level for negative and speculative keywords and at the sentence level for their linguistic scope. The annotation process was carried out by two independent linguist annotators and a chief annotator – also responsible for setting up the annotation guidelines – who resolved cases where the annotators disagreed. | Provide a detailed description of the following dataset: The BioScope Corpus
GeBiD | We provide a custom synthetic bimodal dataset, called GeBiD, designed specifically for the comparison of the joint- and cross-generative capabilities of Multimodal Variational Autoencoders. It comprises RGB images of geometric primitives and textual descriptions. The dataset offers 5 levels of difficulty (based on the number of attributes) to find the minimal functioning scenario for each model. Moreover, its rigid structure enables automatic qualitative evaluation of the generated samples. | Provide a detailed description of the following dataset: GeBiD |
Psychometric NLP | Psychometric NLP is a corpus for psychometric natural language processing (NLP) related to important dimensions such as trust, anxiety, numeracy, and literacy, in the health domain. The dataset aligns user text with their survey-based response items and encompasses survey-based psychometric measures, accompanying user-generated text, and self-reported demographic information, including race, sex, age, income, and education from 8,502 respondents. | Provide a detailed description of the following dataset: Psychometric NLP |
UAV Trajectory | The UAV Delivery dataset, created to advance research in drone delivery, contains trajectory details of UAVs under different speed, altitude, and wind conditions. The dataset is created from the truck-based "Online Food Delivery Platform" using the Open Air Traffic Simulator (ATS) with a UAVTrajectory.py plugin. A pre-processing step selects deliveries under a distance of 5 km, due to the battery constraints of UAVs. After pre-processing, the dataset consists of a total of 6911 deliveries, simulated and collected in log files. | Provide a detailed description of the following dataset: UAV Trajectory
Eurovision 2018 votes | Eurovision 2018 official votes dataset.
More details can be found here: https://towardsdatascience.com/social-network-analysis-from-theory-to-applications-with-python-d12e9a34c2c7 | Provide a detailed description of the following dataset: Eurovision 2018 votes
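Since this dataset is typically used for network analysis, a natural first step is aggregating the votes into a weighted directed graph. A minimal Python sketch; the column names (`from`, `to`, `points`) and sample rows are hypothetical, so check the actual CSV header:

```python
import csv, io

# Hypothetical sample of the votes table: who gave how many points to whom.
sample = io.StringIO(
    "from,to,points\n"
    "Sweden,Israel,12\n"
    "France,Israel,10\n"
    "Israel,Cyprus,12\n"
)

# Aggregate weighted in-degree: total points each country received.
totals = {}
for row in csv.DictReader(sample):
    totals[row["to"]] = totals.get(row["to"], 0) + int(row["points"])

print(totals)  # {'Israel': 22, 'Cyprus': 12}
```

The same edge list can be fed directly into a graph library for centrality or community analysis, as the linked article does.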
PhotoTour | The dataset consists of 1024 x 1024 bitmap (.bmp) images, each containing a 16 x 16 array of image patches. Each patch is sampled as 64x64 grayscale, with a canonical scale and orientation. For details of how the scale and orientation are established, please see the paper.
Two associated metadata files are included. The first file "info.txt" contains the match information. Each row of info.txt corresponds to a separate patch, with the patches ordered from left to right and top to bottom in each bitmap image. The first number on each row of info.txt is the 3D point ID from which that patch was sampled -- patches with the same 3D point ID are projected from the same 3D point (into different images). The second number is not used at present.
The file "interest.txt" has information about the original interest points. Each row of interest.txt also corresponds to a separate patch, so it has the same number of rows as info.txt. The first number is the ID of the reference image in which the interest point was found. IMPORTANT: in order to establish matches and non-matches, you must use patches with the same reference image ID. Correspondences were found by projecting between images using this reference image only, so it is possible that patches with different 3D point ID's that have different reference image ID's could actually correspond to the same 3D point. The other information in interest.txt is: x, y, orientation, scale (log2 units). In order to make sure that non-matches were sufficiently different, we checked that these values were sufficiently far apart when establishing non-matches.
To allow researchers to replicate our learning results (if desired), we have included the match files that we used to generate the results in the paper. These are named "m50_n1_n2.txt", where n1 and n2 are the number of matches and non-matches present in the file. The format of the file is as follows:
patchID1 3DpointID1 unused1 patchID2 3DpointID2 unused2
...
"matches" have the same 3DpointID, and correspond to interest points that were detected with 5 pixels in position, and agreeing to 0.25 octaves of scale and pi/8 radians in angle. "non-matches" have different 3DpointID's, and correspond to interest points lying outside a range of 10 pixels in position, 0.5 octaves of scale and pi/4 radians in angle. | Provide a detailed description of the following dataset: PhotoTour |
HD1k | An autonomous driving dataset and benchmark for optical flow. This dataset was created by the Heidelberg Collaboratory for Image Processing in close cooperation with Robert Bosch GmbH.
For the public training dataset, we provide:
1) > 1000 frames at 2560x1080 with diverse lighting and weather scenarios
2) reference data with error bars for optical flow
3) evaluation masks for dynamic objects
4) specific robustness evaluation on challenging scenes
The data was captured in a controlled environment with systematic variation of traffic scenarios, weather, and lighting conditions. The data was acquired at a frame rate of 200Hz with a resolution of 2560x1080. | Provide a detailed description of the following dataset: HD1k |
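Flow reference data in this benchmark family is commonly distributed as KITTI-style 16-bit PNGs (stored value = flow * 64 + 2^15, plus a validity channel); treating that encoding as an assumption to verify against the actual download, a per-pixel decoding sketch:

```python
def decode_kitti_flow(u16, v16, valid):
    """Decode one pixel of KITTI-style 16-bit PNG optical flow.
    The stored unsigned value encodes flow * 64 + 2**15; a third
    channel flags pixels that carry valid reference data."""
    scale, offset = 64.0, 2 ** 15
    u = (u16 - offset) / scale
    v = (v16 - offset) / scale
    return (u, v, bool(valid))

print(decode_kitti_flow(2 ** 15 + 64, 2 ** 15 - 128, 1))  # (1.0, -2.0, True)
```

In practice the same arithmetic is applied to a whole image array after reading the PNG with a 16-bit-aware loader.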
SoccerTrack Dataset | The SoccerTrack dataset comprises top-view and wide-view video footage annotated with bounding boxes. GNSS coordinates of each player are also provided. We hope that the SoccerTrack dataset will help advance the state of the art in multi-object tracking, especially in team sports.
## Dataset Details
<div>
<a href='https://openaccess.thecvf.com/content/CVPR2022W/CVSports/papers/Scott_SoccerTrack_A_Dataset_and_Tracking_Algorithm_for_Soccer_With_Fish-Eye_CVPRW_2022_paper.pdf'>
<img src='https://img.shields.io/badge/Paper-PDF-red?style=for-the-badge&logo=adobe-acrobat-reader'/>
</a>
<a href='https://github.com/AtomScott/SoccerTrack'>
<img src='https://img.shields.io/badge/Code-Page-blue?style=for-the-badge&logo=github'/>
</a>
<a href='https://soccertrack.readthedocs.io/'>
<img src='https://img.shields.io/badge/Documentation-Page-blue?style=for-the-badge&logo=read-the-docs'/>
</a>
</div>
**** | **Wide-View Camera** | **Top-View Camera** | **GNSS**
---|---|---|---
Device | Z CAM E2-F8 | DJI Mavic 3 | STATSPORTS APEX 10 Hz
Resolution | 8K (7,680 × 4,320 pixels) | 4K (3,840 × 2,160 pixels) | Abs. err. in 20-m run: 0.22 ± 0.20 m
FPS | 30 | 30 | 10
Player tracking | ✅ | ✅ | ✅
Ball tracking | ✅ | ✅ | -
Bounding box | ✅ | ✅ | -
Location data | ✅ | ✅ | ✅
Player ID | ✅ | ✅ | ✅
All data in SoccerTrack was obtained from 11-vs-11 soccer games between college-aged athletes. Measurements were conducted after we received the approval of the University of Tsukuba's ethics committee, and all participants provided signed informed consent. After recording several soccer matches, the videos were semi-automatically annotated based on the GNSS coordinates of each player.
## Citation
```
@inproceedings{scott2022soccertrack,
title={SoccerTrack: A Dataset and Tracking Algorithm for Soccer With Fish-Eye and Drone Videos},
author={Scott, Atom and Uchida, Ikuma and Onishi, Masaki and Kameda, Yoshinari and Fukui, Kazuhiro and Fujii, Keisuke},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3569--3579},
year={2022}
}
``` | Provide a detailed description of the following dataset: SoccerTrack Dataset |
FSC147 | We introduce a dataset of 147 object categories containing over 6000 images that are suitable for the few-shot counting task. We collected and annotated the images ourselves. Our dataset consists of 6135 images across a diverse set of 147 object categories, from kitchen utensils and office stationery to vehicles and animals. The object count in our dataset varies widely, from 7 to 3731 objects, with an average count of 56 objects per image. In each image, each object instance is annotated with a dot at its approximate center. In addition, three object instances are selected randomly as exemplar instances; these exemplars are also annotated with axis-aligned bounding boxes. | Provide a detailed description of the following dataset: FSC147
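For counting models, dot annotations like these are usually converted into density maps whose integral equals the object count. A minimal pure-Python sketch; the Gaussian width `sigma` is a modeling choice, not part of the dataset:

```python
import math

def density_map(points, h, w, sigma=4.0):
    """Build a Gaussian-smoothed density map from dot annotations.
    Each annotated center contributes a normalized Gaussian, so the
    whole map sums to the number of objects in the image."""
    dm = [[0.0] * w for _ in range(h)]
    for cx, cy in points:
        weights = [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
                    for x in range(w)] for y in range(h)]
        total = sum(map(sum, weights))  # normalize each blob to mass 1
        for y in range(h):
            for x in range(w):
                dm[y][x] += weights[y][x] / total
    return dm

dm = density_map([(10, 12), (40, 30)], 64, 64)
print(round(sum(map(sum, dm))))  # 2
```

A counting network is then trained to regress this map, and the predicted count is simply the sum over the predicted map.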
MIntRec | **MIntRec** is a novel dataset for multimodal intent recognition. It formulates coarse-grained and fine-grained intent taxonomies based on the data collected from the TV series *Superstore*. The dataset consists of 2,224 high-quality samples with text, video, and audio modalities and has multimodal annotations among twenty intent categories. | Provide a detailed description of the following dataset: MIntRec |
BioRED | BioRED is a first-of-its-kind biomedical relation extraction dataset with multiple entity types (e.g. gene/protein, disease, chemical) and relation pairs (e.g. gene–disease; chemical–chemical) at the document level, on a set of 600 PubMed abstracts. Furthermore, BioRED labels each relation as describing either a novel finding or previously known background knowledge, enabling automated algorithms to differentiate between novel and background information. | Provide a detailed description of the following dataset: BioRED
Vi-Fi Multi-modal Dataset | A large-scale multi-modal dataset to facilitate research and studies that concentrate on vision-wireless systems.
The Vi-Fi dataset is a large-scale multi-modal dataset that consists of vision, wireless and smartphone motion sensor data of multiple participants and passer-by pedestrians in both indoor and outdoor scenarios. In Vi-Fi, vision modality includes RGB-D video from a mounted camera. Wireless modality comprises smartphone data from participants including WiFi FTM and IMU measurements.
The presence of Vi-Fi dataset facilitates and innovates multi-modal system research, especially, vision-wireless sensor data fusion, association and localization.
(Data collection was in accordance with IRB protocols and subject faces have been blurred for subject privacy.) | Provide a detailed description of the following dataset: Vi-Fi Multi-modal Dataset |
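WiFi FTM readings such as those in the wireless modality translate to range estimates via d = c * RTT / 2. A tiny sketch; the assumption that round-trip time is reported in nanoseconds should be checked against the actual logs:

```python
C = 299_792_458  # speed of light, m/s

def ftm_distance_m(rtt_ns):
    """Convert a WiFi Fine Timing Measurement round-trip time (in
    nanoseconds, an assumed unit) to a one-way range estimate in meters."""
    return C * (rtt_ns * 1e-9) / 2

print(round(ftm_distance_m(66.7), 1))  # 10.0
```

Real FTM hardware reports noisy RTTs, so such per-sample ranges are typically filtered or fused with IMU data, which is exactly the kind of association task this dataset targets.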
DistNLI | This dataset, named DistNLI, is a synthesized benchmark that probes neural network models on conjunctions and distributivity in the NLI task in American English. DistNLI consists of sentence minimal pairs (premise and hypothesis) that differ in conjunction structure and the distributivity-related phenomenon involved. DistNLI comprises 328 sentences so far (164 for distributive and 164 for ambiguous predicates), annotated by 4 proficient English speakers with a background in NLP and Linguistics. Due to the specificity of the linguistic phenomenon involved and its size, DistNLI should only be used as an adversarial dataset in investigations of the distributivity of verb predication. | Provide a detailed description of the following dataset: DistNLI
SELTO Dataset | A Benchmark Dataset for Deep Learning-based Methods for 3D Topology Optimization.
One can find a description of the provided dataset partitions in Section 3 of Dittmer, S., Erzmann, D., Harms, H., Maass, P., SELTO: Sample-Efficient Learned Topology Optimization (2022). | Provide a detailed description of the following dataset: SELTO Dataset
PEN | * Provides explanations for the three existing benchmark datasets on solving algebraic word problems: ALG514, DRAW-1K, MAWPS | Provide a detailed description of the following dataset: PEN
DRAW-1K | **DRAW-1K** is a dataset consisting of 1000 algebra word problems, semiautomatically annotated for the evaluation of automatic solvers. DRAW-1K includes gold coefficient alignments that are necessary to uniquely identify the derivation of an equation system. | Provide a detailed description of the following dataset: DRAW-1K
ALG514 | 514 algebra word problems and associated equation systems gathered from Algebra.com. | Provide a detailed description of the following dataset: ALG514 |
MAWPS | **MAWPS** is an online repository of Math Word Problems, to provide a unified testbed to evaluate different algorithms.
MAWPS allows for the automatic construction of datasets with particular characteristics, providing tools for tuning the lexical and template overlap of a dataset as well as for filtering ungrammatical problems from web-sourced corpora. The online nature of this repository facilitates easy community contribution.
It has amassed 3,320 problems, including the full datasets used in several previous works. | Provide a detailed description of the following dataset: MAWPS
Eth-ICO | The sampled 2-hop subgraphs centered on ICO-wallet accounts on the Ethereum Interaction graph. | Provide a detailed description of the following dataset: Eth-ICO |
MSLR-WEB30K | The **MSLR-WEB30K** dataset consists of 30,000 search queries over documents from search results. The data also contains the values of 136 features and a corresponding user-labeled relevance judgment, on a five-point scale from 0 (irrelevant) to 4 (perfectly relevant), for each query-document pair. | Provide a detailed description of the following dataset: MSLR-WEB30K
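MSLR data ships in the LETOR-style plain-text format, one document per line: `<label> qid:<id> 1:<v1> ... 136:<v136>`. A minimal parser sketch:

```python
def parse_letor_line(line):
    """Parse one LETOR-format row into (label, query id, feature dict)."""
    tokens = line.split()
    label = int(tokens[0])                     # relevance judgment
    qid = int(tokens[1].split(":")[1])         # query identifier
    feats = {}
    for tok in tokens[2:]:                     # 'index:value' feature pairs
        k, v = tok.split(":")
        feats[int(k)] = float(v)
    return label, qid, feats

print(parse_letor_line("2 qid:10 1:0.5 2:0.0 3:1.25"))
```

Grouping parsed rows by `qid` yields the per-query document lists that learning-to-rank losses operate on.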
Eth-Exchange | The sampled 2-hop subgraphs centered on Exchange accounts on the Ethereum Interaction graph. | Provide a detailed description of the following dataset: Eth-Exchange |
Eth-Mining | The sampled 2-hop subgraphs centered on Mining accounts on the Ethereum Interaction graph. | Provide a detailed description of the following dataset: Eth-Mining |
Eth-Phish/Hack | The sampled 2-hop subgraphs centered on Phish/Hack accounts on the Ethereum Interaction graph. | Provide a detailed description of the following dataset: Eth-Phish/Hack |
EOSIO-Robot | The sampled 2-hop subgraphs centered on Robot accounts on the EOSIO Interaction graph. | Provide a detailed description of the following dataset: EOSIO-Robot |
Censored_Planet_Quack | Hyperquack v.2 response data, which contains structured data records in JSON.
Censored Planet at the University of Michigan provides this data, which are records of internet censorship test requests using the Echo protocol, and notes:
> The raw data posted here needs to be processed to avoid false inferences. Please use our analysis pipeline to process the data before using it. Censored Planet detects network interference of websites using remote measurements to infrastructural vantage points within networks (eg. institutions). Note that this raw data cannot determine the entity responsible for the blocking or the intent behind it. Please exercise caution when using the data. | Provide a detailed description of the following dataset: Censored_Planet_Quack
MultiCoNER | **MultiCoNER** is a large multilingual dataset (11 languages) for Named Entity Recognition. It is designed to represent some of the contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities such as movie titles, and long-tail entity distributions. | Provide a detailed description of the following dataset: MultiCoNER |
MLP | **Multimodal Lecture Presentations** (**MLP**) is a large-scale benchmark dataset for testing the capabilities of machine learning models in multimodal understanding of educational content. To benchmark the understanding of multimodal information in lecture slides, two research tasks are introduced; they are designed to be a first step towards developing AI that can explain and illustrate lecture slides: automatic retrieval of (1) spoken explanations for an educational figure (Figure-to-Text) and (2) illustrations to accompany a spoken explanation (Text-to-Figure). | Provide a detailed description of the following dataset: MLP |
Regressors-Regressions Dataset | This dataset is a collection of 5348 links from bug-introducing and bug-fixing commit sets extracted from Mozilla's [Bugzilla](https://bugzilla.mozilla.org) with the use of [bugbug](https://github.com/mozilla/bugbug). In this repository, you will find it in two formats:
1. A CSV file containing all the information related to each issue
2. A JSON file in a format compatible with [PySZZ](https://github.com/grosa1/pyszz) | Provide a detailed description of the following dataset: Regressors-Regressions Dataset |
REFUGE2 | The goal of REFUGE2 challenge is to evaluate and compare automated algorithms for glaucoma detection and optic disc/cup segmentation on a standard dataset of retinal fundus images.
We invite the medical image analysis community to participate by developing and testing existing and novel automated classification and segmentation methods.
REFUGE2 challenge consists of THREE tasks:
1. Classification of clinical glaucoma
2. Segmentation of optic disc and cup
3. Localization of fovea (macular center) | Provide a detailed description of the following dataset: REFUGE2
GAMMA Challenge | GAMMA releases the world's first multi-modal dataset for glaucoma grading, which was provided by the Sun Yat-sen Ophthalmic Center of Sun Yat-sen University in Guangzhou, China. The dataset consists of 2D fundus images and 3D optical coherence tomography (OCT) images of 300 patients. The dataset was annotated with glaucoma grade in every sample, and macular fovea coordinates as well as optic disc/cup segmentation mask in the fundus image.
We invite the medical image analysis community to participate by developing and testing existing and novel automated classification and segmentation methods.
GAMMA challenge consists of THREE tasks:
1. Grading glaucoma using multi-modality data
2. Segmentation of optic disc and cup in fundus images
3. Localization of the macular fovea in fundus images | Provide a detailed description of the following dataset: GAMMA Challenge