| dataset_name | description | prompt |
|---|---|---|
A Dataset for Relation Extraction of Natural-Products | **A curated evaluation dataset for end-to-end Relation Extraction of relationships between organisms and natural-products.**
Details about the manual annotation:
* For Chemicals:
- The chemical labels are annotated as they appear in the abstract.
- In abstracts, individual chemicals and classes of chemicals produced by a specific organism were distinguished.
- The "type" attribute {“chemical”, “class”} is used to indicate the nature of the mentioned name.
- A "class" attribute for chemical entities has also been included if class information is present in the abstract.
- Wikidata and PubChem identifiers were assigned to chemicals and classes when available.
* For Organisms:
- The organism labels are annotated as they appear in the abstract.
- If in an abstract the genus name is mentioned first, e.g. "Plakinastrella sp.", and the species name, e.g. "Plakinastrella clathrata", is specified afterwards, then only the species name is used.
- A Wikidata identifier was assigned to all organisms.
- In some abstracts, only the genus name is mentioned.
* For Relations:
- Only the relations explicitly mentioned in the abstract are reported in the output labels.
- Relations are reported in their order of appearance in the abstract. | Provide a detailed description of the following dataset: A Dataset for Relation Extraction of Natural-Products |
Song Describer Dataset | The Song Describer Dataset (SDD) contains ~1.1k captions for 706 permissively licensed music recordings. It is designed for use in evaluation of models that address music-and-language (M&L) tasks such as music captioning, text-to-music generation and music-language retrieval. | Provide a detailed description of the following dataset: Song Describer Dataset |
GSAP-NER | A scholarly named entity recognition dataset with focus on machine learning models and datasets. | Provide a detailed description of the following dataset: GSAP-NER |
BatMobility Dataset | This is the dataset release for the ACM MobiCom 2023 paper "BatMobility: Towards Flying Without Seeing for Autonomous Drones".
* Project Website: batmobility.github.io
* Project Code: github.com/ConnectedSystemsLab/batmobility_ae
If you find this dataset useful, please cite:
@inproceedings{sie2023batmobility,
author = {Sie, Emerson and Liu, Zikun and Vasisht, Deepak},
title = {BatMobility: Towards Flying Without Seeing for Autonomous Drones},
booktitle = {ACM International Conference on Mobile Computing (MobiCom)},
year = {2023},
doi = {https://doi.org/10.1145/3570361.3592532},
} | Provide a detailed description of the following dataset: BatMobility Dataset |
Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval | Multi-EuP is a new multilingual benchmark dataset comprising 22K multilingual documents collected from the European Parliament and spanning 24 languages. This dataset is designed to investigate fairness in a multilingual information retrieval (IR) context and to analyze both language and demographic bias in a ranking setting. It boasts an authentic multilingual corpus, featuring topics translated into all 24 languages, as well as cross-lingual relevance judgments. Furthermore, it offers rich demographic information associated with its documents, facilitating the study of demographic bias. | Provide a detailed description of the following dataset: Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval
HEADSET | The volumetric representation of human interactions is one of the fundamental domains in the development of immersive media productions and telecommunication applications. Particularly in the context of the rapid advancement of Extended Reality (XR) applications, this volumetric data has proven to be an essential technology for future XR elaboration. In this work, we present a new multimodal database to help advance the development of immersive technologies. Our proposed database provides ethically compliant and diverse volumetric data, in particular 27 participants displaying posed facial expressions and subtle body movements while speaking, plus 11 participants wearing head-mounted displays (HMDs). The recording system consists of a volumetric capture (VoCap) studio, including 31 synchronized modules with 62 RGB cameras and 31 depth cameras. In addition to textured meshes, point clouds, and multi-view RGB-D data, we use one Lytro Illum camera for providing light field (LF) data simultaneously. Finally, we also provide an evaluation of the use of our dataset with regard to the tasks of facial expression classification, HMD removal, and point cloud reconstruction. The dataset can be helpful in the evaluation and performance testing of various XR algorithms, including but not limited to facial expression recognition and reconstruction, facial reenactment, and volumetric video. HEADSET and all its associated raw data and license agreement will be publicly available for research purposes. | Provide a detailed description of the following dataset: HEADSET
NITEC | This paper introduces NITEC, a new dataset and models for detecting eye contact from an ego-centric camera perspective. The models are trained and tested on this dataset, demonstrating strong performance and cross-dataset generalization ability. NITEC contains over 35,000 manually annotated samples from diverse sources, capturing challenging real-world conditions. Extensive experiments highlight NITEC's ability to enable training of accurate eye contact classifiers, advancing research in social human-robot interaction. | Provide a detailed description of the following dataset: NITEC |
AVQA | Audio-visual question answering aims to answer questions regarding both audio and visual modalities in a given video. For example, given a video showing a traffic intersection where the light turns red and the parking stick drops, and the question “why did the stick fall in the video?”, it requires combining the visual information “the stick dropping” with the audio information of a train whistle to answer the question as “Here comes the train”. To achieve an accurate reasoning process and get the correct answer, it is essential to extract cues and contexts from both audio and visual modalities and discover their inner causal correlations.
Real-life scenarios contain more complex relationships between audio-visual objects and a wider variety of audio-visual daily activities. AVQA is an audio-visual question answering dataset for the multimodal understanding of audio-visual objects and activities in real-life video scenarios. AVQA provides diverse sets of questions specially designed considering both audio and visual information, involving various relationships between objects or in activities. | Provide a detailed description of the following dataset: AVQA
Heteroatom Doped Graphene Supercapacitor | Heteroatom-doped graphene supercapacitor feature data gathered from various publications for use in machine learning tasks. The main motivation is to optimize supercapacitors and to gain insight into models for electrochemistry tasks. | Provide a detailed description of the following dataset: Heteroatom Doped Graphene Supercapacitor
ARCADE | ARCADE: Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs Dataset Phase 2 consists of two folders with 300 images in each of them, as well as annotations.
ARCADE: Automatic Region-based Coronary Artery Disease diagnostics using x-ray angiography imagEs Dataset Phase 1 consists of two datasets of XCA images, one for each of the two tasks of the ARCADE challenge. The first task includes in total 1200 coronary vessel tree images, divided into train (1000) and validation (200) groups; the training images come with annotations depicting the division of the heart into 26 different regions based on the Syntax Score methodology [1]. Similarly, the second task includes a different set of 1200 images with the same train-val split, with annotated regions containing atherosclerotic plaques. This dataset, carefully annotated by medical experts, enables scientists to actively contribute towards the advancement of an automated risk assessment system for patients with CAD. | Provide a detailed description of the following dataset: ARCADE
Large Car-following Dataset Based on Lyft level-5: Following Autonomous Vehicles vs. Human-driven Vehicles | Studying how human drivers react differently when following autonomous vehicles (AV) vs. human-driven vehicles (HV) is critical for mixed traffic flow. This dataset contains extracted and enhanced two categories of car-following data, HV-following-AV (H-A) and HV-following-HV (H-H), from the open Lyft level-5 dataset. | Provide a detailed description of the following dataset: Large Car-following Dataset Based on Lyft level-5: Following Autonomous Vehicles vs. Human-driven Vehicles |
X3D | X3D is a dataset containing 15 scenes and covering 4 applications for X-ray 3D reconstruction. More specifically, the X3D dataset includes the scenes of
(1) medicine: jaw, leg, chest, foot, abdomen, aneurism, pelvis, pancreas, head
(2) biology: carp, bonsai
(3) security: box, backpack
(4) industry: engine, teapot
Two main tasks can be evaluated, i.e.,
(1) Novel View Synthesis
(2) CT Reconstruction | Provide a detailed description of the following dataset: X3D |
Vision-Automatic-Band-Gap-Extractor-Data | Hyperspectral data for a MA/FA lead iodide perovskite gradient. | Provide a detailed description of the following dataset: Vision-Automatic-Band-Gap-Extractor-Data |
Vision-Stability-Measurement | Controlled 2-hour degradation optical time series data for a MA/FA lead iodide perovskite gradient. | Provide a detailed description of the following dataset: Vision-Stability-Measurement |
kitab | KITAB is a challenging dataset and a dynamic data collection approach for testing the abilities of Large Language Models (LLMs) in answering information retrieval queries with constraint filters. A filtering query with constraints can be of the form "List all books written by Toni Morrison that were published between 1970-1980". | Provide a detailed description of the following dataset: kitab
MID Intrinsics | Intrinsic component extension of MIT Multi-Illumination Dataset proposed in the paper "Intrinsic Image Decomposition via Ordinal Shading", [Chris Careaga](https://ccareaga.github.io/) and [Yağız Aksoy](https://yaksoy.github.io), ACM Transactions on Graphics, 2023
### [Project Page](https://yaksoy.github.io/MIDIntrinsics/) | [Paper](https://yaksoy.github.io/papers/TOG23-Intrinsic.pdf) | [Video](https://youtu.be/pWtJd3hqL3c) | [Supplementary](https://yaksoy.github.io/papers/TOG23-Intrinsic-Supp.pdf) | [Data](https://1sfu-my.sharepoint.com/:f:/g/personal/ctc32_sfu_ca/EjZMBeiaFehHiRh0pBCNcDoBLA-e4g5prym4zjIfIiRCUA?e=UFNUsZ)
We provide estimations of albedo and shading for the [MIT Multi-Illumination Dataset](https://projects.csail.mit.edu/illumination/) | Provide a detailed description of the following dataset: MID Intrinsics |
EyeInfo | The EyeInfo Dataset is an open-source eye-tracking dataset created by Fabricio Batista Narcizo, a research scientist at the IT University of Copenhagen (ITU) and GN Audio A/S (Jabra), Denmark. This dataset was introduced in the paper "High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods" (DOI: 10.3390/vision5030041). The dataset contains high-speed monocular eye-tracking data from an off-the-shelf remote eye tracker using active illumination. The data from each user has a text file with data annotations of eye features, environment, viewed targets, and facial features. This dataset follows the principles of the General Data Protection Regulation (GDPR).
We have built a remote eye tracker with off-the-shelf components to collect real eye-tracking data. The collected data contain binocular eye information from 83 participants (166 trials), with the following annotations: frame number, target ID, timestamp, viewed target coordinates, pupil center, the major/minor axes and angle orientation of the fitted ellipse, and four enumerated corneal reflections’ coordinates. We have extracted the eye features from recorded eye videos using a feature-based eye-tracking method (i.e., binarization + ellipse fitting), and the raw data are available in individual annotated text files (CSV). The raw dataset contains outliers due to blinks, light reflections, missing glints, and low contrast between the iris and pupil. | Provide a detailed description of the following dataset: EyeInfo
PsyMo | Psychological trait estimation from external factors such as movement and appearance is a challenging and long-standing problem in psychology, and is principally based on the psychological theory of embodiment. To date, attempts to tackle this problem have utilized private small-scale datasets with intrusive body-attached sensors. Potential applications of an automated system for psychological trait estimation include estimation of occupational fatigue and psychology, and marketing and advertisement. In this work, we propose PsyMo (Psychological traits from Motion), a novel, multi-purpose and multi-modal dataset for exploring psychological cues manifested in walking patterns. We gathered walking sequences from 312 subjects in 7 different walking variations and 6 camera angles. In conjunction with walking sequences, participants filled in 6 psychological questionnaires, totalling 17 psychometric attributes related to personality, self-esteem, fatigue, aggressiveness and mental health. We propose two evaluation protocols for psychological trait estimation. Alongside the estimation of self-reported psychological traits from gait, the dataset can be used as a drop-in replacement to benchmark methods for gait recognition. We anonymize all cues related to the identity of the subjects and publicly release only silhouettes, 2D / 3D human skeletons and 3D SMPL human meshes. | Provide a detailed description of the following dataset: PsyMo |
PhoCAL | Object pose estimation is crucial for robotic applications and augmented reality. To provide a benchmark with high-quality ground truth annotations to the community, we introduce a multimodal dataset for category-level object pose estimation with photometrically challenging objects termed PhoCaL. PhoCaL comprises 60 high quality 3D models of household objects over 8 categories including highly reflective, transparent and symmetric objects. We developed a novel robot-supported multi-modal (RGB, depth, polarisation) data acquisition and annotation process. It ensures sub-millimeter accuracy of the pose for opaque textured, shiny and transparent objects, no motion blur and perfect camera synchronisation. | Provide a detailed description of the following dataset: PhoCAL |
Test-of-Time | The goal of this dataset is to probe video-language models for understanding of simple temporal relations like "before" and "after". The dataset is only meant to be an evaluation set and not a training set.
Contents:
1. The dataset has synthetic videos, each consisting of a pair of shapes appearing gradually. For example, the video for the caption "a red circle appears after a yellow circle" will first show a "yellow circle" appear and then a "red circle" appear. The model has to determine the right caption in comparison with a distractor caption "a yellow circle appears after a red circle". Note that this distractor caption has the same set of words but in a different order, motivated by the Winograd schema.
2. The dataset also has a control set in which videos only have a single event, e.g., "a red circle appears". Note that this is a control task to ensure that these videos are not out-of-distribution for a given video model.
A time-aware model should perform perfectly well on both sets. A space-aware model that is not time-aware should perform poorly on the temporal task while performing perfectly on the control task. | Provide a detailed description of the following dataset: Test-of-Time
Beijing Traffic | The Beijing Traffic Dataset collects traffic speeds at 5-minute granularity for 3126 roadway segments in Beijing between 2022/05/12 and 2022/07/25. | Provide a detailed description of the following dataset: Beijing Traffic |
dockstring | Regression dataset for molecular docking scores (predicted molecule-protein binding affinity). Contains ~250,000 molecules against 58 protein targets. | Provide a detailed description of the following dataset: dockstring |
ITALIC | **ITALIC: An ITALian Intent Classification Dataset**
ITALIC is an intent classification dataset for the Italian language, which is the first of its kind.
It includes spoken and written utterances and is annotated with 60 intents.
The dataset is available on [Zenodo](https://zenodo.org/record/8040649) and connectors are available for the [HuggingFace Hub](https://huggingface.co/datasets/RiTA-nlp/ITALIC).
### Data collection
The data collection follows the MASSIVE NLU dataset which contains an annotated textual dataset for 60 intents. The data collection process is described in the paper [Massive Natural Language Understanding](https://arxiv.org/abs/2204.08582).
Following the MASSIVE NLU dataset, a pool of 70+ volunteers has been recruited to annotate the dataset. The volunteers were asked to record their voice while reading the utterances (the original text is available on MASSIVE dataset). Together with the audio, the volunteers were asked to provide a self-annotated description of the recording conditions (e.g., background noise, recording device). The audio recordings have also been validated and, in case of errors, re-recorded by the volunteers.
All the audio recordings included in the dataset have received validation from at least two volunteers. All the audio recordings have been validated by native Italian speakers (self-annotated). | Provide a detailed description of the following dataset: ITALIC
Jam-ALT | JamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.
The lyrics have been revised according to the newly compiled [annotation guide](https://huggingface.co/datasets/audioshake/jam-alt/blob/main/GUIDELINES.md), which includes rules about spelling, punctuation, and formatting.
The audio is identical to the JamendoLyrics dataset. However, only 79 songs are included, as one of the 20 French songs has been removed due to concerns about potentially harmful content. | Provide a detailed description of the following dataset: Jam-ALT |
InfantMarmosetsVox | InfantMarmosetsVox is a dataset for multi-class call-type and caller identification. It contains audio recordings of different individual marmosets and their call-types. The dataset contains a total of 350 files of precisely labelled 10-minute audio recordings across all caller classes. The audio was recorded from five pairs of infant marmoset twins, each recorded individually in two separate sound-proofed recording rooms at a sampling rate of 44.1 kHz. The start and end time, call-type, and marmoset identity of each vocalization are provided, labeled by an experienced researcher. A PyTorch Dataloader is included in this dataset. | Provide a detailed description of the following dataset: InfantMarmosetsVox |
Wikidata5M-SI | Semi-inductive link prediction (LP) in knowledge graphs (KG) is the task of predicting facts for new, previously unseen entities based on context information. Although new entities can be integrated by retraining the model from scratch in principle, such an approach is infeasible for large-scale KGs, where retraining is expensive and new entities may arise frequently. In this paper, we propose and describe a large-scale benchmark to evaluate semi-inductive LP models. The benchmark is based on and extends Wikidata5M: It provides transductive, k-shot, and 0-shot LP tasks, each varying the available information from (i) only KG structure, to (ii) including textual mentions, and (iii) detailed descriptions of the entities. We report on a small study of recent approaches and found that semi-inductive LP performance is far from transductive performance on long-tail entities throughout all experiments. The benchmark provides a test bed for further research into integrating context and textual information in semi-inductive LP models. | Provide a detailed description of the following dataset: Wikidata5M-SI |
CORE-MM | Multi-modal Large Language Models (MLLMs) are increasingly prominent in the field of artificial intelligence. Although many benchmarks attempt to holistically evaluate MLLMs, they typically concentrate on basic reasoning tasks, often yielding only simple yes/no or multi-choice responses. These methods naturally lead to confusion and difficulties in conclusively determining the reasoning capabilities of MLLMs. To mitigate this issue, we manually curate the CORE-MM benchmark dataset, specifically designed for MLLMs with a focus on complex reasoning tasks. Our benchmark comprises three key reasoning categories: deductive, abductive, and analogical reasoning. The queries in our dataset are intentionally constructed to engage the reasoning capabilities of MLLMs in the process of generating answers. For a fair comparison across various MLLMs, we incorporate intermediate reasoning steps into our evaluation criteria. The CORE-MM benchmark consists of 279 manually curated reasoning questions, associated with a total of 342 images. Of these, 49 questions pertain to abductive reasoning, 181 require deductive reasoning, and 49 involve analogical reasoning. Furthermore, the dataset is divided into two folds based on reasoning complexity, with 108 classified as “High” reasoning complexity and 171 as “Moderate” reasoning complexity. | Provide a detailed description of the following dataset: CORE-MM
MGSM8KInstruct | MGSM8KInstruct is a multilingual math reasoning instruction dataset encompassing ten distinct languages, addressing the issue of training data scarcity in multilingual math reasoning. | Provide a detailed description of the following dataset: MGSM8KInstruct
FedNLP | We collect various forms of Federal Reserve communications.
- fomc_doc: FOMC post-meeting statements, Minutes, and press conferences. (122 docs)
- speaker_doc: FOMC members’ speeches from over 30 different websites. (1300 docs)
- period: Jan 2015 - Jul 2020 | Provide a detailed description of the following dataset: FedNLP |
AV-Deepfake1M | The detection and localization of highly realistic deepfake audio-visual content are challenging even for the most advanced state-of-the-art methods. While most of the research efforts in this domain are focused on detecting high-quality deepfake images and videos, only a few works address the problem of the localization of small segments of audio-visual manipulations embedded in real videos. In this research, we emulate the process of such content generation and propose the AV-Deepfake1M dataset. The dataset contains content-driven (i) video manipulations, (ii) audio manipulations, and (iii) audio-visual manipulations for more than 2K subjects resulting in a total of more than 1M videos. The paper provides a thorough description of the proposed data generation pipeline accompanied by a rigorous analysis of the quality of the generated data. The comprehensive benchmark of the proposed dataset utilizing state-of-the-art deepfake detection and localization methods indicates a significant drop in performance compared to previous datasets. The proposed dataset will play a vital role in building the next-generation deepfake localization methods. The dataset and associated code are available at https://github.com/ControlNet/AV-Deepfake1M. | Provide a detailed description of the following dataset: AV-Deepfake1M |
formalgeo7k | 6,981 SAT-level geometry problems with complete natural language descriptions, geometric shapes, formal language annotations, and theorem sequence annotations. | Provide a detailed description of the following dataset: formalgeo7k
formalgeo-imo | IMO-level geometry problems with complete natural language descriptions, geometric shapes, formal language annotations, and theorem sequence annotations. | Provide a detailed description of the following dataset: formalgeo-imo
CARE | https://drive.google.com/file/d/1X_JTfD8Ch-IxmG5VHtKk_xGZT336Fl1Q/view?usp=drive_link | Provide a detailed description of the following dataset: CARE |
InHARD | We introduce an RGB+S dataset named “Industrial Human Action Recognition Dataset” (InHARD) from a real-world setting for industrial human action recognition, with over 2 million frames collected from 16 distinct subjects. This dataset contains 13 different industrial action classes and over 4800 action samples. The introduction of this dataset should enable the study and development of various learning techniques for the task of human action analysis inside industrial environments involving human-robot collaboration.
More details on the dataset at
Zenodo: **https://zenodo.org/records/4003541**
Github: **https://github.com/vhavard/InHARD**
Publication: **https://doi.org/10.1109/ICHMS49158.2020.9209531** | Provide a detailed description of the following dataset: InHARD |
XinhuaHallucinations | XinhuaHallucinations is part of the UHGEval benchmark and contains over 5000 news items. It can be used in hallucination evaluation or detection tasks. | Provide a detailed description of the following dataset: XinhuaHallucinations
BurnMD | A dataset composed of 308 medium-sized fires from the years 2018-2021, complete with both time-series airborne-based inference and ground operational estimation of fire extent, as well as operational mitigation data such as control line construction. | Provide a detailed description of the following dataset: BurnMD
AK_FRAEX - Azure Kinect Frame Extractor demo videos | Video samples recorded in the field using the Azure Kinect DK. These videos are part of the AK-FRAEX software and demonstrate frame extraction tasks. Visit the project sites:
https://pypi.org/project/ak-frame-extractor
https://github.com/GRAP-UdL-AT/ak_frame_extractor
Explanations about the recording can be found at "AKFruitData: A dual software application for Azure Kinect cameras to acquire and extract informative data in yield tests performed in fruit orchard environments". https://doi.org/10.1016/j.softx.2022.101231
This version is updated with videos in MJPG mode (20210927_114012_k_r2_e_000_150_138.mkv, 20210927_192424_k_r2_e_000_150_138.mkv) and in BGRA32 mode (1080_06072022182211.mkv, 1080_07072022180929.mkv)
**Thanks to Iva Xhimitiku (https://orcid.org/0000-0002-6205-3445) for the videos in BGRA32 mode.** | Provide a detailed description of the following dataset: AK_FRAEX - Azure Kinect Frame Extractor demo videos
cryoPPP | The CryoPPP dataset consists of ground truth data for 34 EMPIAR IDs and metadata for 335 EMPIAR IDs. The ground truth data comprises 9893 micrographs (~300 cryo-EM images per EMPIAR ID) with manually curated ground truth coordinates of picked protein particles. The metadata covers 1,698,802 high-resolution micrographs deposited in EMPIAR, with their respective FTP and Globus data download paths. | Provide a detailed description of the following dataset: cryoPPP
BUPTCampus | BUPTCampus is a video-based visible-infrared dataset with approximately pixel-level aligned tracklet pairs and single-camera auxiliary samples. | Provide a detailed description of the following dataset: BUPTCampus |
BHSD | Intracranial hemorrhage (ICH) is a pathological condition characterized by bleeding inside the skull or brain, which can be attributed to various factors. Identifying, localizing and quantifying ICH has important clinical implications, in a bleed-dependent manner. While deep learning techniques are widely used in medical image segmentation and have been applied to the ICH segmentation task, existing public ICH datasets do not support the multi-class segmentation problem. To address this, we develop the Brain Hemorrhage Segmentation Dataset (BHSD), which provides a 3D multi-class ICH dataset containing 192 volumes with pixel-level annotations and 2200 volumes with slice-level annotations across five categories of ICH. To demonstrate the utility of the dataset, we formulate a series of supervised and semi-supervised ICH segmentation tasks. We provide experimental results with state-of-the-art models as reference benchmarks for further model developments and evaluations on this dataset. | Provide a detailed description of the following dataset: BHSD |
ITCPR dataset | The ITCPR dataset is a comprehensive collection specifically designed for the Zero-Shot Composed Person Retrieval (ZS-CPR) task. It consists of a total of 2,225 annotated triplets, derived from three distinct datasets: Celeb-reID, PRCC, and LAST. | Provide a detailed description of the following dataset: ITCPR dataset |
Tripadvisor Restaurant Reviews | Dataset of restaurant reviews from TripAdvisor that includes images and texts uploaded in reviews by users. Reviews in six different cities are included: Gijón (Spain), Barcelona (Spain), Madrid (Spain), New York City (USA), Paris (France) and London (United Kingdom). In the [original publication](https://www.sciencedirect.com/science/article/pii/S0020025520300931), the following task is proposed: **Can we explain, using the existing image or text from a different user, why a given restaurant was recommended to a certain user?** | Provide a detailed description of the following dataset: Tripadvisor Restaurant Reviews |
ULS labeled data | UAV Laser Scanning data collected over a neotropical forest (Paracou, French Guiana). Four flights were conducted over a one-hectare plot in 2021 and 2022.
Leaf/wood labels were transferred from a contemporaneous (2021) TLS acquisition, for which segmentation was done using LeWoS and on-screen post-correction.
Predicted values using the SOUL model (see ref below) for a single UAV-LS acquisition are provided as a separate file. | Provide a detailed description of the following dataset: ULS labeled data
First HAREM | HAREM, an initiative by Linguateca, boasts a Golden Collection—a meticulously curated repository of annotated Portuguese texts. This resource serves as a pivotal benchmark for evaluating systems in recognizing mentioned entities within documents. It stands as a cornerstone, supporting advancements and innovations in Portuguese language processing research, providing a comprehensive foundation for evaluating system performances and fostering ongoing developments in this domain. | Provide a detailed description of the following dataset: First HAREM |
withoutbg100 dataset | The withoutbg100 dataset consists of 100 image and alpha matte pairs. These pairs are chosen to represent a wide range of subjects and complexities, specifically crafted to enhance and test the capabilities of image background removal algorithms. The dataset includes images with complex elements such as fur and objects with varying transparency levels, providing a substantial challenge to even advanced matting techniques. | Provide a detailed description of the following dataset: withoutbg100 dataset |
MusicBench | The MusicBench dataset is a music audio-text pair dataset designed for text-to-music generation and released along with the Mustango text-to-music model. MusicBench is based on the MusicCaps dataset, which it expands from 5,521 samples to 52,768 training and 400 test samples!
Dataset Details
MusicBench expands MusicCaps by:
- Including music features of chords, beats, tempo, and key that are extracted from the audio.
- Describing these music features using text templates and thus enhancing the original text prompts.
- Expanding the number of audio samples by performing musically meaningful augmentations: semitone pitch shifts, tempo changes, and volume changes.
Train set size = 52,768 samples; test set size = 400 samples.
This dataset also includes FMACaps, which was used as a second test set. | Provide a detailed description of the following dataset: MusicBench |
Various URL Datasets | ## Various URL Datasets
These are collections of URLs for benchmarking purposes.
- files/node_files.txt: all source files from a given Node.js snapshot as URLs (43415 URLs).
- files/linux_files.txt: all files from a Linux system as URLs (169312 URLs).
- wikipedia/wikipedia_100k.txt: 100k URLs from a snapshot of all Wikipedia articles (March 6th, 2023).
- others/kasztp.txt: test URLs from https://github.com/kasztp/URL_Shortener (MIT License) (48009 URLs).
- others/userbait.txt: test URLs from https://github.com/userbait/phishing_sites_detector (unknown copyright) (11430 URLs).
- top100/top100.txt: unique URLs extracted from a crawl of the top 100 most visited websites.
**Disclaimer**: This repository is developed and released for research purposes only.
- This project reshares some publicly available datasets. When in doubt, investigate the copyright of the files you want to use.
- There may be errors and duplicates in these files. | Provide a detailed description of the following dataset: Various URL Datasets |
NPO | The dataset is recorded with an on-vehicle ZED stereo camera in both urban and rural environments.
The dataset contains various lighting conditions, such as normal lights, large-area shadows, dim lights, and sun glare. There are also different weather conditions, such as sunny, cloudy, and snowy.
Negative obstacles (i.e., potholes and cracks) and positive obstacles (i.e., pedestrians, cars, and motorcycles) in 5,000 images are labelled. | Provide a detailed description of the following dataset: NPO
NC4K | As far as we know, there only exists one large camouflaged object testing dataset, the COD10K, while the sizes of other testing datasets are less than 300. We then contribute another camouflaged object testing dataset, namely NC4K, which includes 4,121 images downloaded from the Internet. The new testing dataset can be used to evaluate the generalization ability of existing models. | Provide a detailed description of the following dataset: NC4K |
COD10K | Sensory ecologists have found that this background matching camouflage strategy works by deceiving the visual perceptual system of the observer. Naturally, addressing concealed object detection (COD) requires a significant amount of visual perception knowledge. Understanding COD not only has scientific value in itself, but is also important for applications in many fundamental fields, such as computer vision (e.g., for search-and-rescue work, or rare species discovery), medicine (e.g., polyp segmentation, lung infection segmentation), agriculture (e.g., locust detection to prevent invasion), and art (e.g., recreational art). The high intrinsic similarities between the targets and non-targets make COD far more challenging than traditional object segmentation/detection. Although it has gained increased attention recently, studies on COD still remain scarce, mainly due to the lack of a sufficiently large dataset and a standard benchmark like Pascal-VOC, ImageNet, MS-COCO, ADE20K, and DAVIS.
To build the large-scale COD dataset, we build the COD10K, which contains 10,000 images (5,066 camouflaged, 3,000 background, 1,934 noncamouflaged), divided into 10 super-classes, and 78 sub-classes (69 camouflaged, nine non-camouflaged) which are collected from multiple photography websites. | Provide a detailed description of the following dataset: COD10K |
CHAMELEON | Camouflage strategies in nature are evolutionarily developed concealment techniques for survival in both predator and prey species. They are intended to delay or even avoid observation or detection by other animals. These include crypsis and mimicry, which can rely on vision, odor, sound, and behavior. Here, the term crypsis requires explanation: ‘it comprises all traits that reduce an animal’s risk of becoming detected when it is potentially perceivable to an observer’. There are also other forms of camouflage which are not intended to hide an animal in a scene but to confuse the observer (e.g. zebra stripes), as well as behavioral strategies clearly intended for hiding, such as nocturnal or subterranean life activity. Camouflage techniques are within the scope of the biological sciences and were later adopted by humans in military applications, notably developed since World War I, but that research is poorly disclosed to the public.
The database allows finding how well a saliency map suits human perception of objects, or what the detector efficiency is for objects which are intended to be hidden in the scene. Since the topic is quite novel, it was necessary to perform some fundamental steps, such as obtaining the ground truth: a database of camouflaged animals, which were then manually annotated and evaluated by independent respondents. | Provide a detailed description of the following dataset: CHAMELEON
Camouflaged Animal Dataset | The nine (moving camera) videos in this benchmark exhibit camouflaged animals that are difficult to see in a single frame, but can be detected based upon their motion across frames. | Provide a detailed description of the following dataset: Camouflaged Animal Dataset |
MoCA-Mask | The original Moving Camouflaged Animals (MoCA) Dataset includes 37K frames from 141 YouTube Video sequences with resolution and sampling rate of 720 × 1280 and 24fps in the majority of cases. The dataset covers 67 types of animals moving in natural scenes, but some are not camouflaged animals. Also, the ground truth of the original dataset is bounding boxes rather than dense segmentation masks, which makes it hard to evaluate the VCOD segmentation performance. To this end, we reorganize the dataset as MoCA-Mask and build a comprehensive benchmark with more comprehensive evaluation criteria. | Provide a detailed description of the following dataset: MoCA-Mask |
LinkedPapersWithCode | An RDF knowledge graph that provides comprehensive, current information about almost 400,000 machine learning publications. This includes the tasks addressed, the datasets utilized, the methods implemented, and the evaluations conducted, along with their results. Compared to its non-RDF-based counterpart Papers With Code, LPWC not only translates the latest advancements in machine learning into RDF format, but also enables novel ways for scientific impact quantification and scholarly key content recommendation. LPWC is openly accessible and is licensed under CC-BY-SA 4.0.
As a knowledge graph in the Linked Open Data cloud, we offer LPWC in multiple formats, from RDF dump files to a SPARQL endpoint for direct web queries, as well as a data source with resolvable URIs and links to the data sources SemOpenAlex, Wikidata, and DBLP.
Additionally, we supply knowledge graph embeddings, enabling LPWC to be readily applied in machine learning applications.
Note: the data is 5.5 months behind PapersWithCode, but hopefully this can be amended soon. | Provide a detailed description of the following dataset: LinkedPapersWithCode |
ClustMe and ClustML data S1 and S2 | Code and datasets S1 and S2 used in the paper "ClustMe: A Visual Quality Measure for Ranking Monochrome Scatterplots based on Cluster Patterns", Computer Graphics Forum 38(3): 225-236 (2019), and in "ClustML: A Measure of Cluster Pattern Complexity in Scatterplots Learnt from Human-labeled Groupings", to appear in the SAGE Information Visualization Journal.
S1: a set of 1000 scatterplot datasets generated by a 2-component Gaussian Mixture Model with various parameters, together with the 8 GMM parameter values and 34 human judgments about perceived separability in the scatterplot image: the task was to answer whether they see "one" or "more-than-one" clusters.
S2: a set of 435 pairs of scatterplot data generated by projecting high-dimensional data with various dimensionality reduction techniques, together with 31 human judgments about which of the two images in a pair shows a more complex cluster pattern: right is more complex, left is more complex, or both are equally complex. | Provide a detailed description of the following dataset: ClustMe and ClustML data S1 and S2 |
Second HAREM | The Second HAREM was an evaluation exercise in Portuguese Named Entity Recognition. It aims to refine text annotation processes, building on the First HAREM. Challenges include adapting guidelines for new texts and establishing a unified document with directives from both editions. | Provide a detailed description of the following dataset: Second HAREM |
Mini HAREM | The MiniHAREM, a reiteration of the 2005 evaluation, used the same methodology and platform. Held from April 3rd to 5th, 2006, it offered participants a 48-hour window to annotate, verify, and submit text collections. Results are available, and the collection used is accessible. Participant lists, submitted outputs, and updated guidelines are provided. Additionally, the HAREM format checker ensures compliance with MiniHAREM directives. Information for the HAREM Meeting, open for registration until June 15th after the Linguateca Summer School at the University of Porto, is also available. | Provide a detailed description of the following dataset: Mini HAREM
CLCXray | The CLCXray dataset contains 9,565 X-ray images, in which 4,543 X-ray images (real data) are obtained from the real subway scene and 5,022 X-ray images (simulated data) are scanned from manually designed baggage. There are 12 categories in the CLCXray dataset, including 5 types of cutters and 7 types of liquid containers. The five kinds of cutters include blade, dagger, knife, scissors, and Swiss army knife. The seven kinds of liquid containers include cans, carton drinks, glass bottle, plastic bottle, vacuum cup, spray cans, and tin. The annotations are made in COCO format. | Provide a detailed description of the following dataset: CLCXray
EuroSAT-SAR | A SAR version of the EuroSAT dataset. The images were collected from Sentinel-1 GRD products (two bands VV and VH) based on the geocoordinates of the EuroSAT images. | Provide a detailed description of the following dataset: EuroSAT-SAR |
EA-HAS-Bench | We present the first large-scale energy-aware benchmark that allows studying AutoML methods to achieve better trade-offs between performance and search energy consumption, named EA-HAS-Bench. EA-HAS-Bench provides a large-scale architecture/hyperparameter joint search space, covering diversified configurations related to energy consumption. Furthermore, we propose a novel surrogate model specially designed for the large joint search space, based on a Bézier curve model that predicts learning curves with unlimited shape and length. | Provide a detailed description of the following dataset: EA-HAS-Bench
SIGARRA News Corpus | This dataset was taken from the SIGARRA information system at the University of Porto (UP). Every organic unit has its own domain and produces academic news. We collected a sample of 1000 news, manually annotating 905 using the Brat rapid annotation tool. This dataset consists of three files. The first is a CSV file containing news published between 2016-12-14 and 2017-03-01. The second file is a ZIP archive containing one directory per organic unit, with a text file and an annotations file per news article. The third file is an XML containing the complete set of news in a similar format to the HAREM dataset format. This dataset is particularly adequate for training named entity recognition models. | Provide a detailed description of the following dataset: SIGARRA News Corpus |
PropBank-PT | The PropBankPT (Branco et al., 2012) is a set of sentences annotated with their constituency structure and semantic role tags, composed of 3,406 sentences and 44,598 tokens translated from the Wall Street Journal.
For the creation of this PropBank we adopted a semi-automatic analysis with a double-blind annotation followed by adjudication. The resulting dataset contains three information levels: phrase constituency, grammatical functions, and phrase semantic roles.
The main motivation behind the creation of this resource was to build a high quality data set with semantic information that could support the development of automatic semantic role labelers for Portuguese.
The development of this resource started under the METANET4U project (at: http://metanet4u.eu/) whose main goal is to contribute to the establishment of a pan-European digital platform that makes available language resources and services, encompassing both datasets and software tools, for speech and language processing, and supports a new generation of exchange facilities for them.
You may also be interested in the related resources DeepBankPT, TreeBankPT, DependencyBankPT and LogicalFormBankPT, also available from this repository. | Provide a detailed description of the following dataset: PropBank-PT |
Representative PDE Benchmarks | Given the lack of consensus on a standard set of benchmarks for machine learning of PDEs, we propose a new suite of benchmarks here. Our aims in this regard are to ensure:
i) sufficient diversity among the types of PDE considered;
ii) that access to training and test data is readily available for rapid prototyping and reproducibility;
iii) sufficient intrinsic computational complexity of the problem to make sure that it is worthwhile to design fast surrogates to classical PDE solvers for a particular problem. | Provide a detailed description of the following dataset: Representative PDE Benchmarks