dataset_name | description | prompt |
|---|---|---|
Visiting Card | ID Card Images | Hindi-English | This dataset is an extremely challenging set of 2,000+ original visiting card/ID card images captured and crowdsourced from 300+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.
### **Dataset Features**
- Dataset size : 2000+
- Captured by : 150+ crowdsource contributors
- Resolution : 100% images HD and above (1920x1080 and above)
- Location : Captured across 300+ cities in India
- Diversity : Covers a wide variety of conditions such as reflective paper, different fonts, different colors and types of cards, etc.
- Device used : Captured using mobile phones in 2020-2021
- Usage : Visiting card detection, card edge detection, paper edge detection, ID card OCR, Hindi OCR, etc.
### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record
**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai) to know more.**
**Note**:
All the images are manually captured and verified by a large contributor base on the DataCluster platform. | Provide a detailed description of the following dataset: Visiting Card | ID Card Images | Hindi-English |
Crowd in a rally | Crowd Counting | Crowd Human | This dataset is an extremely challenging set of 3,000+ original crowd images captured and crowdsourced from 300+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.
### **Dataset Features**
- Dataset size : 3000+
- Captured by : 400+ crowdsource contributors
- Resolution : 100% images HD and above (1920x1080 and above)
- Location : Captured across 300+ cities in India
- Diversity : Various lighting conditions (day and night), varied distances, viewpoints, etc.
- Device used : Captured using mobile phones in 2021-2022
- Usage : Human detection, CCTV monitoring of crowd, Crowd tracking, etc.
### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record
**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai) to know more.**
**Note**:
All the images are manually captured and verified by a large contributor base on the DataCluster platform. | Provide a detailed description of the following dataset: Crowd in a rally | Crowd Counting | Crowd Human |
WALT | We introduce a new dataset, Watch and Learn Time-lapse (WALT), consisting of multiple (4K and 1080p) cameras capturing urban environments over a year. | Provide a detailed description of the following dataset: WALT |
Autorickshaw Image Dataset | Niche Vehicle Dataset | This dataset is an extremely challenging set of 8,000+ original autorickshaw images captured and crowdsourced from 1,200+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.
### **Dataset Features**
- Dataset size : 8000+
- Captured by : 1,200+ crowdsource contributors
- Resolution : 99% images HD and above (1920x1080 and above)
- Location : Captured across 800+ cities in India
- Diversity : Various lighting conditions (day and night), varied distances, viewpoints, etc.
- Device used : Captured using mobile phones in 2021-2022
- Usage : Vehicle detection, Autorickshaw detection, Self driving, Indian vehicles, Number Plate detection, etc.
### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record
**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai) to know more.**
**Note**:
All the images are manually captured and verified by a large contributor base on the DataCluster platform. | Provide a detailed description of the following dataset: Autorickshaw Image Dataset | Niche Vehicle Dataset |
Electronics Object Image Dataset | Computer Parts | This dataset is an extremely challenging set of 5,000+ original electronic item images captured and crowdsourced from 1,000+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.
### **Dataset Features**
- Dataset size : 5000+
- Captured by : 1,000+ crowdsource contributors
- Resolution : 99% images HD and above (1920x1080 and above)
- Location : Captured across 800+ cities in India
- Diversity : Various lighting conditions (day and night), varied distances, viewpoints, etc.
- Device used : Captured using mobile phones in 2020-2021
- Applications : Electronics Detection, Daily item detection, Home Automation, etc.
### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record
**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai) to know more.**
**Note**:
All the images are manually captured and verified by a large contributor base on the DataCluster platform. | Provide a detailed description of the following dataset: Electronics Object Image Dataset | Computer Parts |
Hindi Text Image Dataset | Hindi in the wild | This dataset is an extremely challenging set of 5,000+ original Hindi text images captured and crowdsourced from 700+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster Labs.
### **Dataset Features**
- Dataset size : 5000+
- Captured by : 700+ crowdsource contributors
- Resolution : 99% images HD and above (1920x1080 and above)
- Location : Captured across 400+ cities in India
- Diversity : Various lighting conditions (day and night), varied distances, viewpoints, etc.
- Device used : Captured using mobile phones in 2021-2022
- Usage : Hindi text detection, Hindi NLP, Text recognition, etc.
### Available Annotation formats
COCO, YOLO, PASCAL-VOC, Tf-Record
**To download full datasets or to submit a request for your dataset needs, please ping us at [sales@datacluster.ai](mailto:sales@datacluster.ai). Visit [www.datacluster.ai](https://www.datacluster.ai) to know more.**
**Note**:
All the images are manually captured and verified by a large contributor base on the DataCluster platform. | Provide a detailed description of the following dataset: Hindi Text Image Dataset | Hindi in the wild |
Nasa Exoplanet Archive | The NASA Exoplanet Archive is an online astronomical exoplanet and stellar catalog and data service that collates and cross-correlates astronomical data and information on exoplanets and their host stars, and provides tools to work with these data. The archive is dedicated to collecting and serving important public data sets involved in the search for and characterization of extrasolar planets and their host stars. These data include stellar parameters (such as positions, magnitudes, and temperatures), exoplanet parameters (such as masses and orbital parameters) and discovery/characterization data (such as published radial velocity curves, photometric light curves, images, and spectra). | Provide a detailed description of the following dataset: Nasa Exoplanet Archive |
RefMatte | RefMatte is the first large-scale challenging dataset for the task of referring image matting. It is generated by a comprehensive image composition and expression generation engine on top of current public high-quality matting foregrounds, with flexible logic and re-labelled diverse attributes. RefMatte consists of 230 object categories, 47,500 images, 118,749 expression-region entities, and 474,996 expressions, and it can easily be extended further in the future.
RefMatte comes with two settings: keyword-based and expression-based. The former takes a high-resolution image and a keyword as input, while the latter takes a high-resolution image and a flowery expression as input.
Additionally, we construct a real-world test set, named RefMatte-RW100, with 100 high-resolution natural images and manually annotated complex phrases, to evaluate the out-of-domain generalization ability of RIM methods. | Provide a detailed description of the following dataset: RefMatte |
Summaries of genetic variation | The dataset represents data generated from a commonly used model in population genetics. It comprises a matrix of 1,000,000 rows and 9 columns, representing parameters and summaries generated by an infinite-sites coalescent model for genetic variation. The first two columns encode the scaled mutation rate (theta) and scaled recombination rate (rho). The subsequent seven columns are data summaries: number of segregating sites (C1), standard uniform random noise acting as a distractor (C2), pairwise mean number of nucleotide differences (C3), mean $R^2$ across pairs separated by <10% of the simulated genomic regions (C4), number of distinct haplotypes (C5), frequency of the most common haplotype (C6), and number of singleton haplotypes (C7).
(This text is not original; it is adapted from https://journal.r-project.org/archive/2015-2/nunes-prangle.pdf.) | Provide a detailed description of the following dataset: Summaries of genetic variation |
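The column layout above maps naturally onto a labelled data frame. A minimal loading sketch in Python, assuming the matrix is stored as a headerless CSV named `coalescent_summaries.csv` (the file name, format, and column labels are illustrative assumptions, not part of the dataset description):
```python
import pandas as pd

# Column layout per the description: two parameters followed by seven summaries.
columns = ["theta", "rho",
           "C1_segregating_sites", "C2_uniform_noise", "C3_pairwise_diffs",
           "C4_mean_r2", "C5_distinct_haplotypes", "C6_top_haplotype_freq",
           "C7_singleton_haplotypes"]

sims = pd.read_csv("coalescent_summaries.csv", header=None, names=columns)
assert sims.shape == (1_000_000, 9)  # matrix size stated in the description
print(sims.describe())
```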
TAT-QA | TAT-QA (Tabular And Textual dataset for Question Answering) is a large-scale QA dataset, aiming to stimulate progress of QA research over more complex and realistic tabular and textual data, especially those requiring numerical reasoning.
The unique features of TAT-QA include:
- The context given is hybrid, comprising a semi-structured table and at least two relevant paragraphs that describe, analyze or complement the table;
- The questions are generated by humans with rich financial knowledge, and most are practical;
- The answer forms are diverse, including single span, multiple spans and free-form;
- To answer the questions, various numerical reasoning capabilities are usually required, including addition (+), subtraction (-), multiplication (x), division (/), counting, comparison, sorting, and their compositions. In addition to the ground-truth answers, the corresponding derivations and scales are also provided where applicable.
In total, TAT-QA contains 16,552 questions associated with 2,757 hybrid contexts from real-world financial reports. | Provide a detailed description of the following dataset: TAT-QA |
DAST | This is an SDQC stance-annotated Reddit dataset for the Danish language generated within a thesis project. The dataset consists of over 5000 comments structured as comment trees and linked to 33 source posts.
The dataset is applicable for supervised stance classification and rumour veracity prediction. | Provide a detailed description of the following dataset: DAST |
KITTI-STEP | The Segmenting and Tracking Every Pixel (STEP) benchmark consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation and the Multi-Object Tracking and Segmentation (MOTS) benchmark. This benchmark extends the annotations to the Segmenting and Tracking Every Pixel (STEP) task. [Copy-pasted from http://www.cvlibs.net/datasets/kitti/eval_step.php] | Provide a detailed description of the following dataset: KITTI-STEP |
aidatatang_200zh | A Chinese Mandarin speech corpus by Beijing DataTang Technology Co., Ltd, containing 200 hours of speech data from 600 speakers. The transcription accuracy for each sentence is higher than 98%.
Aidatatang_200zh is a free Chinese Mandarin speech corpus provided by Beijing DataTang Technology Co., Ltd under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License.
The contents and the corresponding descriptions of the corpus include:
- The corpus contains 200 hours of acoustic data, which is mostly mobile-recorded data.
- 600 speakers from different accent areas in China were invited to participate in the recording.
- The transcription accuracy for each sentence is higher than 98%.
- Recordings were conducted in a quiet indoor environment.
- The database is divided into training, validation, and testing sets in a 7:1:2 ratio.
- Detailed information such as speech data coding and speaker information is preserved in the metadata file.
- Segmented transcripts are also provided.
The corpus aims to support researchers in speech recognition, machine translation, voiceprint recognition, and other speech-related fields. Therefore, the corpus is totally free for academic use. | Provide a detailed description of the following dataset: aidatatang_200zh |
AnnoMI | # AnnoMI: A Dataset of Expert-Annotated Counselling Dialogues
## Dataset Introduction
Research on natural language processing approaches to analysing counselling dialogues has seen substantial development in recent years, but access to this area remains extremely limited, due to the lack of publicly available expert-annotated therapy conversations. In this paper, we introduce _**AnnoMI**_, the first publicly and freely accessible dataset of professionally transcribed dialogues demonstrating high- and low-quality motivational interviewing (MI), an effective counselling technique, with annotations on key MI aspects by domain experts.
This version contains:
* Metadata of each transcript & utterance, e.g. the demonstrated MI quality and the URL of the original video.
* Each utterance.
* The annotation on each utterance: **(Main) Therapist Behaviour** for each therapist utterance and **Client Talk Type** for each client utterance.
## Dataset Format
The dataset is stored in `dataset.csv`, in the same folder as this README. **Each row represents the information associated with an utterance.**
`dataset.csv` has the following columns (a minimal loading sketch follows the list):
* `transcript_id`: the unique numerical identifier of the conversation/transcript where this utterance belongs. Note that this identifier is NOT used for ordering, and it is only to distinguish between different conversations in the dataset.
* `mi_quality`: the MI quality demonstrated in the conversation/transcript where this utterance belongs. Either "high" or "low".
* `video_title`: the title of the original video of the conversation/transcript where this utterance belongs.
* `video_url`: the URL of the original video of the conversation/transcript where this utterance belongs.
* `topic`: the topic(s) of the conversation/transcript where this utterance belongs.
* `utterance_id`: the unique numerical index of this utterance. Note that this identifier IS ordering-sensitive, and the utterance whose `utterance_id` is $n$ is the $n$-th utterance of the conversation (identified by the `transcript_id` of this row) where this utterance belongs.
* `interlocutor`: the interlocutor of this utterance. Either "therapist" or "client".
* `timestamp`: the timestamp of this utterance with respect to the original video of the conversation/transcript where this utterance belongs.
* `utterance_text`: the content of this utterance.
* `main_therapist_behaviour`: the (main) therapist behaviour of this utterance. "n/a" if the utterance is a client utterance, otherwise one of ["reflection", "question", "therapist\_input", "other"].
* `client_talk_type`: the client talk type of this utterance. "n/a" if the utterance is a therapist utterance, otherwise one of ["change", "neutral", "sustain"].
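As a minimal sketch of how these columns fit together (assuming only that `dataset.csv` sits next to this README, as stated above):
```python
import pandas as pd

# Load the utterance-level table described above.
df = pd.read_csv("dataset.csv")

# Reconstruct one conversation in order: filter by transcript_id,
# then sort by the ordering-sensitive utterance_id.
some_id = df["transcript_id"].iloc[0]
convo = df[df["transcript_id"] == some_id].sort_values("utterance_id")

# Therapist rows carry main_therapist_behaviour; client rows carry client_talk_type.
print(convo.loc[convo["interlocutor"] == "therapist", "main_therapist_behaviour"].value_counts())
print(convo.loc[convo["interlocutor"] == "client", "client_talk_type"].value_counts())
```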
## Citation
If you use this dataset in your research, please cite our [paper](https://zixiu-alex-wu.github.io/files/AnnoMI_ICASSP_Camera_Ready_Personal_Use.pdf) in the format below:
```bibtex
@inproceedings{wu2022anno,
title={Anno-MI: A Dataset of Expert-Annotated Counselling Dialogues},
author={Wu, Zixiu and Balloccu, Simone and Kumar, Vivek and Helaoui, Rim and Reiter, Ehud and Recupero, Diego Reforgiato and Riboni, Daniele},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={6177--6181},
year={2022},
organization={IEEE}
}
``` | Provide a detailed description of the following dataset: AnnoMI |
Chest X-ray images | Chest X-ray images for pneumonia detection. | Provide a detailed description of the following dataset: Chest X-ray images |
SV-Ident | SV-Ident comprises 4,248 sentences from social science publications in English and German. The data is the official data for the Shared Task: “Survey Variable Identification in Social Science Publications” (SV-Ident) 2022. Sentences are labeled with variables that are mentioned either explicitly or implicitly.
The dataset supports the following tasks:
- Variable Detection: identifying whether a sentence contains a variable mention or not.
- Variable Disambiguation: identifying which variable from a given vocabulary is mentioned in a sentence. | Provide a detailed description of the following dataset: SV-Ident |
Minigrid | There are other gridworld Gym environments out there, but this one is designed to be particularly simple, lightweight and fast. The code has very few dependencies, making it less likely to break or fail to install. It loads no external sprites/textures, and it can run at up to 5000 FPS on a Core i7 laptop, which means you can run your experiments faster. | Provide a detailed description of the following dataset: Minigrid |
PolyU-BPCoMa | PolyU-BPCoMa: A Dataset and Benchmark Towards Mobile Colorized Mapping Using a Backpack Multisensorial System | Provide a detailed description of the following dataset: PolyU-BPCoMa |
IEIs | We would like to introduce three types of _ion and electron insulators_, i.e. _Li-ion & electron insulators_ (LEIs), _Na-ion & electron insulators_ (NEIs), and _K-ion & electron insulators_ (KEIs), and provide a set of codes here to screen candidate materials from the computational materials database [Materials Project](https://materialsproject.org/). The IEI materials are able to block the transport of multiple charge carriers (ions and electrons) and stay thermodynamically stable against specific alkali metals. The screening workflows and the usage of IEI materials in rechargeable solid-state Li/Na/K metal batteries are presented in the paper below. | Provide a detailed description of the following dataset: IEIs |
Hephaestus | Hephaestus is the first large-scale InSAR dataset. Driven by volcanic unrest detection, it provides 19,919 unique satellite frames annotated with a diverse set of labels. Moreover, each sample is accompanied by a textual description of its contents. The goal of this dataset is to boost research on the exploitation of interferometric data, enabling the application of state-of-the-art computer vision and NLP methods. Furthermore, the annotated dataset is bundled with a large archive of unlabeled frames to enable large-scale self-supervised learning. The final size of the dataset amounts to 110,573 interferograms. | Provide a detailed description of the following dataset: Hephaestus |
CARLANE Benchmark | Unsupervised Domain Adaptation demonstrates great potential to mitigate domain shifts by transferring models from labeled source domains to unlabeled target domains. While Unsupervised Domain Adaptation has been applied to a wide variety of complex vision tasks, only a few works focus on lane detection for autonomous driving. This can be attributed to the lack of publicly available datasets. To facilitate research in these directions, we propose CARLANE, a 3-way sim-to-real domain adaptation benchmark for 2D lane detection. CARLANE encompasses the single-target datasets MoLane and TuLane and the multi-target dataset MuLane. These datasets are built from three different domains, which cover diverse scenes and contain a total of 163K unique images, 118K of which are annotated. In addition, we evaluate and report systematic baselines, including our own method, which builds upon Prototypical Cross-domain Self-supervised Learning. We find that the false positive and false negative rates of the evaluated domain adaptation methods are high compared to those of fully supervised baselines. This affirms the need for benchmarks such as CARLANE to further strengthen research in Unsupervised Domain Adaptation for lane detection. CARLANE, all evaluated models and the corresponding implementations are publicly available at https://carlanebenchmark.github.io. | Provide a detailed description of the following dataset: CARLANE Benchmark |
Hyperbard | Hyperbard is a dataset of diverse relational data representations derived from Shakespeare's plays. Our representations range from simple graphs capturing character co-occurrence in single scenes to hypergraphs encoding complex communication settings and character contributions as hyperedges with edge-specific node weights. By making multiple intuitive representations readily available for experimentation, we facilitate rigorous representation robustness checks in graph learning, graph mining, and network analysis, highlighting the advantages and drawbacks of specific representations. | Provide a detailed description of the following dataset: Hyperbard |
Example dataset for CellCluster code | Dataset to be used with the https://github.com/MathBioCU/WSINDy_CellCluster code | Provide a detailed description of the following dataset: Example dataset for CellCluster code |
FixEval | We introduce FixEval, a dataset for competitive-programming bug fixing along with a comprehensive test suite, and show the necessity of execution-based evaluation compared to suboptimal match-based evaluation metrics like BLEU, CodeBLEU, Syntax Match, Exact Match, etc. | Provide a detailed description of the following dataset: FixEval |
MICCAI'2015 Gland Segmentation Challenge Contest Dataset | MICCAI'2015 Gland Segmentation Challenge Contest Dataset
Welcome to the challenge on gland segmentation in histology images. This challenge was held in conjunction with MICCAI 2015, Munich, Germany.
Objective of the Challenge
We aim to bring together researchers who are interested in the gland segmentation problem, to validate the performance of their existing or newly invented algorithms on the same standard dataset. In this challenge, we will provide the participants with images of Haematoxylin and Eosin (H&E) stained slides, consisting of a wide range of histologic grades. | Provide a detailed description of the following dataset: MICCAI'2015 Gland Segmentation Challenge Contest Dataset |
JetClass | JetClass is a new large-scale dataset to facilitate deep learning research in particle physics. It consists of 100M particle jets for training, 5M for validation and 20M for testing. The dataset contains 10 classes of jets, simulated with [MadGraph](https://launchpad.net/mg5amcnlo) + [Pythia](https://pythia.org/) + [Delphes](https://cp3.irmp.ucl.ac.be/projects/delphes). A detailed description of the JetClass dataset is presented in the paper [Particle Transformer for Jet Tagging](https://arxiv.org/abs/2202.03772). An interface to use the dataset is provided [here](https://github.com/jet-universe/particle_transformer). | Provide a detailed description of the following dataset: JetClass |
EVI | The EVI dataset is a challenging, multilingual spoken-dialogue dataset with 5,506 dialogues in English, Polish, and French. The dataset can be used to develop and benchmark conversational systems for user authentication tasks, i.e. speaker enrolment (E), speaker verification (V), speaker identification (I).
The dataset contains the audio data, machine transcriptions, and target identity for each dialogue, and the knowledge base with personal information (postcode, name, and date of birth) for each identity. The dataset can be used for both text-independent biometric and knowledge-based authentication (KBA) tasks. | Provide a detailed description of the following dataset: EVI |
EEG and P300 database to determine the signal to noise ratio during a variety of realistic tasks | This database contains EEG and evoked potential recordings from 20 participants. It allows assessment of the signal-to-noise ratio:
- Signal: the P300 power and VEP power can be used to assess the signal power
- Noise: the power of EMG and baseline EEG during the different tasks can be used to determine the noise level | Provide a detailed description of the following dataset: EEG and P300 database to determine the signal to noise ratio during a variety of realistic tasks |
HaGRID | We introduce a large image dataset **HaGRID** (**HA**nd **G**esture **R**ecognition **I**mage **D**ataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. The proposed dataset allows building HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz, etc.), home automation systems, the automotive sector, etc.
HaGRID size is **716GB** and the dataset contains **552,992** FullHD (1920 × 1080) RGB images divided into **18** classes of gestures. In addition, some images have a `no_gesture` class if there is a second free hand in the frame. This extra class contains **123,589** samples. The data were split into training (92%) and testing (8%) sets by subject `user_id`, with 509,323 images for train and 43,669 images for test.
The annotations consist of bounding boxes of hands in COCO format `[top left X position, top left Y position, width, height]` with gesture labels. Annotations also include 21 `landmarks` in `[x,y]` relative image coordinates, markups of `leading hands` (`left` or `right` for the gesture hand), and `leading_conf` as the confidence of the `leading_hand` annotation. We provide a `user_id` field that allows you to split the train/val dataset yourself, as in the sketch below. | Provide a detailed description of the following dataset: HaGRID |
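A minimal sketch of working with the HaGRID annotations described above, in Python. The annotation file name and per-image JSON layout are assumptions for illustration; only the COCO box format and the `user_id`-based split protocol come from the description:
```python
import json
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical annotation file for one gesture class; the real layout may differ.
with open("ann_train_val/call.json") as f:
    anns = json.load(f)

# Helper: COCO-format box [top-left x, top-left y, width, height] -> corners [x1, y1, x2, y2].
def to_corners(bbox):
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Subject-wise split on user_id, mirroring the dataset's own ~92/8 split protocol.
rows = pd.DataFrame([{"image": k, "user_id": v["user_id"]} for k, v in anns.items()])
splitter = GroupShuffleSplit(n_splits=1, test_size=0.08, random_state=0)
train_idx, test_idx = next(splitter.split(rows, groups=rows["user_id"]))
```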
GPA | multi-view imagery of people interacting with a variety of rich 3D environments | Provide a detailed description of the following dataset: GPA |
RPCD | The Reddit Photo Critique Dataset (RPCD) contains tuples of image and photo critiques. RPCD consists of __74K images__ and __220K comments__ and is collected from a Reddit community used by hobbyists and professional photographers to improve their photography skills by leveraging constructive community feedback.
The proposed dataset differs from previous aesthetics datasets mainly in three aspects, namely
* the large scale of the dataset and the extent of the comments, which criticize different aspects of the image;
* it contains mostly UltraHD images;
* it can easily be extended to new data as it is collected through an automatic pipeline. | Provide a detailed description of the following dataset: RPCD |
OADAT | Experimental and synthetic (simulated) optoacoustic (OA) raw-signal and reconstructed image-domain datasets rendered with different experimental parameters and tomographic acquisition geometries.
For detailed information, see [github.com/berkanlafci/oadat](https://github.com/berkanlafci/oadat). | Provide a detailed description of the following dataset: OADAT |
Matlab code for the article: Model-based selection of most informative diagnostic tests and test parameters | Description TBC | Provide a detailed description of the following dataset: Matlab code for the article: Model-based selection of most informative diagnostic tests and test parameters |
Traditional and Context-specific Spam Twitter | This data set is being released to support the spam and context-specific spam detection tasks on Twitter data.
There are three sets of tweets, parenting-related, #MeToo-related (a social movement focused on tackling issues related to sexual harassment and sexual assault of women), and gun-violence-related tweets. Each set contains 5,000 tweets. These tweets are original tweets in English. There are no retweets, quoted tweets or non-English tweets. | Provide a detailed description of the following dataset: Traditional and Context-specific Spam Twitter |
adVFed | Natural Vertical Partitioned CVR Dataset for Vertical Federated Learning
This dataset repo provides two industrial CVR datasets for VFL research. | Provide a detailed description of the following dataset: adVFed |
Time Series COVID-19 Sales | The dataset contains the hotel demand and revenue of 8 major tourist destinations in the US (e.g., Los Angeles, Orlando, ...). The dataset contains sales, daily occupancy, demand, and revenue of upper-middle-class hotels.
We also gathered dynamic exogenous variables, such as each state's closure/open policy, to enrich our dataset. Specifically, we gathered numerous static features such as the number of hospitals, GDP, and population. | Provide a detailed description of the following dataset: Time Series COVID-19 Sales |
COCO-MEBOW | COCO-MEBOW (Monocular Estimation of Body Orientation in the Wild) is a new large-scale dataset for orientation estimation from a single in-the-wild image. The body-orientation labels for 133,380 human bodies within 55K images from the COCO dataset have been collected using an efficient and high-precision annotation pipeline. There are 127,844 human instances in the training set and 5,536 human instances in the validation set. | Provide a detailed description of the following dataset: COCO-MEBOW |
895 Fire Videos Data | Description:
895 fire videos with a total duration of 27 hours 6 minutes 48.58 seconds. Different cameras were used to shoot the fire videos. The shooting times include day and night. The dataset can be used for tasks such as fire detection.
Data size:
895 videos, the total duration is 27 hours 6 minutes 48.58 seconds
Collecting environment:
including indoor and outdoor scenes
Data diversity:
multiple scenes, different time periods | Provide a detailed description of the following dataset: 895 Fire Videos Data |
5,011 Images – Human Frontal face Data (Male) | Description:
5,011 Images – Human Frontal face Data (Male). The data diversity includes multiple scenes, multiple ages, and multiple races. This dataset includes 2,004 Caucasians and 3,007 Asians. This dataset can be used for tasks such as face detection, race detection, age detection, and beard category classification.
Data size:
5,011 people, one image per person
Race distribution:
2,004 Caucasians, 3,007 Asians | Provide a detailed description of the following dataset: 5,011 Images – Human Frontal face Data (Male) |
1,995 People Face Images Data (Asian race) | Description:
1,995 People Face Images Data (Asian race). For each subject, more than 20 frontal-face images per person were collected. This data can be used for face recognition and other tasks.
Data size:
1,995 people, more than 20 images per person with frontal face
Race distribution:
Asian people | Provide a detailed description of the following dataset: 1,995 People Face Images Data (Asian race) |
PRTiger | Dataset for automatic pull request title generation. | Provide a detailed description of the following dataset: PRTiger |
SportsMOT | ## Motivation
Multi-object tracking (MOT) is a fundamental task in computer vision, aiming to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences.
Prevailing human-tracking MOT datasets mainly focus on pedestrians in crowded street scenes (e.g., [MOT17](https://motchallenge.net/data/MOT17/)/[20](https://motchallenge.net/data/MOT20/)) or dancers in static scenes ([DanceTrack](https://github.com/DanceTrack/DanceTrack)).
Despite the increasing demand for sports analysis, there is a lack of multi-object tracking datasets for a variety of **sports scenes**, where the background is complicated, players move rapidly, and the camera lens moves fast.
To this end, we propose a large-scale multi-object tracking dataset named SportsMOT, consisting of **240 video clips** from **3 categories** (i.e., basketball, football and volleyball).
The objective is to track only the players on the playground (i.e., excluding spectators, referees and coaches) in various sports scenes. We expect SportsMOT to encourage the community to concentrate more on complicated sports scenes.
## Characteristics
- Large scale
- Fine Annotations
- Player id consistency
- No shot change
- High and fixed resolution (1080p)
- ...
## Focus
- Diverse sports **scenes**
- Complex **motion** patterns
- Challenging **re-id**
## Download
### Examples
You can download the example for SportsMOT.
- [OneDrive](https://1drv.ms/u/s!AtjeLq7YnYGRgQRrmqGr4B-k-xsC?e=7PndU8)
- [Baidu Netdisk](https://pan.baidu.com/s/1gytkTngxoGFlmP9_DBd1xw), password: 4dnw
### Official Dataset
Please sign up on CodaLab and participate in our [competition](https://codalab.lisn.upsaclay.fr/competitions/12424). Download links are available in `Participate`/`Get Data`.
## News
- SportsMOT is used for [DeeperAction@ECCV-2022](https://deeperaction.github.io/tracks/sportsmot.html).
- Refer to github repo: [MCG-NJU/SportsMOT](https://github.com/MCG-NJU/SportsMOT) for the latest info. | Provide a detailed description of the following dataset: SportsMOT |
SRSD-Feynman (Easy set) | Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery. We carefully reviewed the properties of each formula and its variables in the Feynman Symbolic Regression Database to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the Easy set of our SRSD-Feynman datasets. | Provide a detailed description of the following dataset: SRSD-Feynman (Easy set) |
SRSD-Feynman (Hard set) | Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery. We carefully reviewed the properties of each formula and its variables in the Feynman Symbolic Regression Database to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the Hard set of our SRSD-Feynman datasets. | Provide a detailed description of the following dataset: SRSD-Feynman (Hard set) |
SRSD-Feynman (Medium set) | Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery. We carefully reviewed the properties of each formula and its variables in the Feynman Symbolic Regression Database to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the Medium set of our SRSD-Feynman datasets. | Provide a detailed description of the following dataset: SRSD-Feynman (Medium set) |
BN-HTRd | We introduce a new **Dataset** ([BN-HTRd](https://data.mendeley.com/datasets/743k6dm543)) for offline Handwritten Text Recognition (HTR) from images of Bangla scripts, comprising word-, line-, and document-level annotations. The BN-HTRd dataset is based on the BBC Bangla News corpus, which acted as ground truth texts for the handwriting. Our dataset contains a total of 786 full-page images collected from 150 different writers. With a staggering 108,181 instances of handwritten words, distributed over 14,383 lines and 23,115 unique words, this is currently the 'largest and most comprehensive dataset' in this field. We also provide the bounding box annotations (YOLO format) for the segmentation of words/lines and the ground truth annotations for full text, along with the segmented images and their positions. The contents of our dataset come from diverse news categories, and annotators of different ages, genders, and backgrounds, with variability in writing styles. The BN-HTRd dataset can be adopted as a basis for various handwriting classification tasks such as end-to-end document recognition, word spotting, word/line segmentation, and so on.
**The statistics of the original dataset are given below:**
- Number of writers = 150
- Total number of images = 786
- Total number of lines = 14,383
- Total number of words = 108,181
- Total number of unique words = 23,115
- Total number of punctuation = 7,446
- Total number of characters = 574,203
**From v3.0 onwards, we are also providing automatic bounding box annotations (YOLO format) of 805 document images containing words/lines. The statistics of the automatic annotations are given below:**
- Number of writers = 87
- Total number of images = 805
- Total number of lines = 14,836
- Total number of words = 106,135 | Provide a detailed description of the following dataset: BN-HTRd |
23 Pairs of Identical Twins Face Image Data | Description:
23 Pairs of Identical Twins Face Image Data. The collection scenes include indoor and outdoor scenes. The subjects are Chinese males and females. The data diversity includes multiple face angles, multiple face postures, close-ups of eyes, multiple light conditions, and multiple age groups. This dataset can be used for tasks such as twins' face recognition.
Data size:
23 pairs, each person in a pair of identical twins has 40 images (20 indoor images, 20 outdoor images)
Population distribution:
race distribution: Asian (Chinese); gender distribution: male 9 pairs, female 14 pairs; age distribution: 12 pairs under 18 years old, 10 pairs aged from 18 to 40, 1 pair over 40 years old | Provide a detailed description of the following dataset: 23 Pairs of Identical Twins Face Image Data |
105,941 Images Natural Scenes OCR Data of 12 Languages | Description:
105,941 Images Natural Scenes OCR Data of 12 Languages. The data covers 12 languages (6 Asian languages, 6 European languages), multiple natural scenes, and multiple photographic angles. For annotation, line-level quadrilateral bounding boxes and transcriptions of the texts were annotated in the data. The data can be used for tasks such as multi-language OCR.
Data size:
105,941 images, including Asian language family: Japanese 9,997 images, Korean 10,231 images, Indonesian 7,591 images, Malay 5,650 images, Vietnamese 8,822 images, Thai 9,645 images; European language family: French 10,015 images, German 7,213 images, Italian 8,824 images, Portuguese 7,754 images, Russian 10,376 images and Spanish 9,823 images
Collecting environment:
including shop plaque, stop board, poster, ticket, road sign, comic, cover picture, prompt/reminder, warning, packing instruction, menu, building sign, etc. | Provide a detailed description of the following dataset: 105,941 Images Natural Scenes OCR Data of 12 Languages |
4,458 People - 3D Facial Expressions Recognition Data | Description:
4,458 People - 3D Facial Expressions Recognition Data. The collection scenes include indoor scenes and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, with young and middle-aged people forming the majority. The devices include iPhone X and iPhone XR. The data diversity includes different expressions, different ages, different races, and different collection scenes. This data can be used for tasks such as 3D facial expression recognition.
Data size:
4,458 people, 7 kinds of 3D expressions were collected for each person
Population distribution:
race distribution: Asian (Chinese), Black, Caucasian; gender distribution: male, female; age distribution: ranging from teenagers to the elderly, with middle-aged and young people forming the majority | Provide a detailed description of the following dataset: 4,458 People - 3D Facial Expressions Recognition Data |
10,000 People - Human Pose Recognition Data | Description:
10,000 People - Human Pose Recognition Data. This dataset includes indoor and outdoor scenes and covers males and females. The age distribution ranges from teenagers to the elderly, with middle-aged and young people forming the majority. The data diversity includes different shooting heights, different ages, different light conditions, different collection environments, clothes in different seasons, and multiple human poses. For each subject, the labels of gender, race, age, collection environment, and clothes were annotated. The data can be used for human pose recognition and other tasks.
Data size:
10,000 people
Race distribution:
Asian (Chinese) | Provide a detailed description of the following dataset: 10,000 People - Human Pose Recognition Data |
WikiTables-TURL | The WikiTables-TURL dataset was constructed by the authors of [TURL](https://paperswithcode.com/paper/turl-table-understanding-through) and is based on the WikiTable corpus, which is a large collection of Wikipedia tables. The dataset consists of 580,171 tables divided into fixed training, validation and testing splits. Additionally, the dataset contains metadata about each table, such as the table name, table caption and column headers.
406,706 of these tables are annotated for the Column Type Annotation (CTA) task, 55,970 tables for the Columns Property Annotation (CPA) task and 200,744 tables for the Cell Entity Annotation (CEA) task. As classes for the CTA and CPA, Freebase's types and relations were used, whereas for the CEA task entities from Freebase were used. The table below lists the total annotated columns (or cells in the case of CEA) for each split and for each task as well as the number of classes used for annotation.
| | Training | Validation | Testing | Classes |
|-----|--------|----------|-------|-------|
| CTA | 628,254 |13,391| 13,025 | 255 |
| CPA | 62,954| 2,175 | 2,072 | 121 |
| CEA | 1,264,217 | 76,720 | 225,777 | 1,787,737 |
The authors have made the dataset and its variants publicly available for [download](https://buckeyemailosu-my.sharepoint.com/personal/deng_595_buckeyemail_osu_edu/_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fdeng%5F595%5Fbuckeyemail%5Fosu%5Fedu%2FDocuments%2FBuckeyeBox%20Data%2FTURL&ga=1). | Provide a detailed description of the following dataset: WikiTables-TURL |
GitTables-SemTab | The GitTables-SemTab dataset is a subset of the [GitTables](https://paperswithcode.com/dataset/gittables) dataset and was created to be used during the [SemTab](http://www.cs.ox.ac.uk/isg/challenges/sem-tab/) challenge. The dataset consists of 1101 tables and is used to benchmark the Column Type Annotation (CTA) task.
Its columns were annotated using semantic properties from DBpedia and semantic types and properties from Schema.org. The table below shows the number of annotated columns and number of classes used to annotate this dataset.
| | Columns | Classes |
|----------------|-------|-------|
| Column Type Annotation - Schema.org | 721 | 59 |
| Column Type Annotation - DBpedia | 2,533 | 122 | | Provide a detailed description of the following dataset: GitTables-SemTab |
Tough Tables | The ToughTables (2T) dataset was created for the [SemTab](http://www.cs.ox.ac.uk/isg/challenges/sem-tab/) challenge and includes 180 tables in total. The tables in this dataset can be categorized in two groups: the control (CTRL) group tables and tough (TOUGH) group tables.
The CTRL group contains 60 tables generated by querying the DBpedia SPARQL endpoint and tables collected from Wikipedia; their characteristic is that they are easy to annotate. The TOUGH group contains 120 tables mainly scraped from the web, some containing misspelled words and nicknames/homonyms; their characteristic is that they are hard to annotate. In both groups, some tables were generated by the authors by adding noise to the collected tables.
The dataset was annotated for two tasks using DBpedia (DBP) types and entities and WikiData (WD): Column Type Annotation (CTA) and Cell Entity Annotation (CEA). In the table below the number of columns annotated for the CTA and number of cells annotated for the CEA task as well as the number of classes used are listed.
| | Annotations| Classes |
|-----|---------|---------|
| DBP-Column Type Annotation | 540 | 39 |
| DBP-Cell Entity Annotation | 663,656 | 16,023 |
| WD-Column Type Annotation| 540 | 276 |
| WD-Cell Entity Annotation | 667,244 | 24,653 | | Provide a detailed description of the following dataset: Tough Tables |
50stateSimulations | Every decade following the Census, states and municipalities must redraw districts for Congress, state houses, city councils, and more. The goal of the 50-State Simulation Project is to enable researchers, practitioners, and the general public to use cutting-edge redistricting simulation analysis to evaluate enacted congressional districts.
Evaluating a redistricting plan requires analysts to take into account each state’s redistricting rules and particular political geography. Comparing the partisan bias of a plan for Texas with the bias of a plan for New York, for example, is likely misleading. Comparing a state’s current plan to a past plan is also problematic because of demographic and political changes over time. Redistricting simulations generate an ensemble of alternative redistricting plans within a given state which are tailored to its redistricting rules. Unlike traditional evaluation methods, therefore, simulations are able to directly account for the state’s political geography and redistricting criteria.
This dataset contains sampled districting plans and accompanying summary statistics for all 50 U.S. states. | Provide a detailed description of the following dataset: 50stateSimulations |
Replication Data for: The use of differential privacy for census data and its impact on redistricting | Census statistics play a key role in public policy decisions and social science research. However, given the risk of revealing individual information, many statistical agencies are considering disclosure control methods based on differential privacy, which add noise to tabulated data. Unlike other applications of differential privacy, however, census statistics must be postprocessed after noise injection to be usable. We study the impact of the U.S. Census Bureau’s latest disclosure avoidance system (DAS) on a major application of census statistics, the redrawing of electoral districts. We find that the DAS systematically undercounts the population in mixed-race and mixed-partisan precincts, yielding unpredictable racial and partisan biases. While the DAS leads to a likely violation of the “One Person, One Vote” standard as currently interpreted, it does not prevent accurate predictions of an individual’s race and ethnicity. Our findings underscore the difficulty of balancing accuracy and respondent privacy in the Census. | Provide a detailed description of the following dataset: Replication Data for: The use of differential privacy for census data and its impact on redistricting |
PointCloud-C | PointCloud-C is the very first test-suite for point cloud robustness analysis under corruptions.
- Two sets: ModelNet-C for point cloud classification and ShapeNet-C for part segmentation.
- Real-world corruption sources at the object, sensor, and processing levels.
- Seven types of corruptions, each with five severity levels.
- Benchmark with more than 20 point cloud recognition algorithms.
- Methods ranging from architecture design, augmentations, and pre-training. | Provide a detailed description of the following dataset: PointCloud-C |
PDDL Generators | This repository is a collection of PDDL generators, some of which have been used to generate benchmarks for the International Planning Competition (IPC). | Provide a detailed description of the following dataset: PDDL Generators |
larousse_1905_wd | This dataset links all the entries describing named entities of _Petit Larousse illustré_, a French dictionary published in 1905, to Wikidata identifiers. The dataset is available in the JSON format as a list of entries, where each entry is a dictionary with two keys: the text of the entry and the list of Wikidata identifiers. For example, for the entry AALI-PACHA:
```
{'texte': "AALI-PACHA, homme d'Etat turc, né à Constantinople. Il a attaché son nom à la politique de réformes du Tanzimat (1815-1871).",
'qid': ['Q439237']}
```
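A minimal sketch of reading this format and inverting it into a QID-to-entries index; the file name `larousse_1905_wd.json` is an assumption:
```python
import json

with open("larousse_1905_wd.json", encoding="utf-8") as f:
    entries = json.load(f)  # list of {"texte": ..., "qid": [...]} dictionaries

# Invert the mapping: from Wikidata QID to the entry texts that mention it.
by_qid = {}
for entry in entries:
    for qid in entry["qid"]:
        by_qid.setdefault(qid, []).append(entry["texte"])

print(by_qid.get("Q439237"))  # -> the AALI-PACHA entry text shown above
```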
The dataset is described in the paper:
Nugues, Pierre, Connecting a French Dictionary from the Beginning of the 20th Century to Wikidata, in _Proceedings of the Language Resources and Evaluation Conference_, 2022. | Provide a detailed description of the following dataset: larousse_1905_wd |
ISIC 2019 | The goal for ISIC 2019 is to classify dermoscopic images among nine different diagnostic categories. 25,331 images are available for training across 8 different categories. Two tasks will be available for participation: 1) classify dermoscopic images without meta-data, and 2) classify images with additional available meta-data. | Provide a detailed description of the following dataset: ISIC 2019 |
Brain Tumor MRI Dataset | This dataset is a combination of the following three datasets:
figshare,
SARTAJ dataset and
Br35H
This dataset contains 7,022 human brain MRI images, classified into 4 classes: glioma, meningioma, no tumor, and pituitary. | Provide a detailed description of the following dataset: Brain Tumor MRI Dataset |
DCASE 2021 TASK1A | The DCASE 2021 TASK1A dataset consists of audio examples from 10 different audio scenes. For more details, please follow the link: https://dcase.community/challenge2021/task-acoustic-scene-classification | Provide a detailed description of the following dataset: DCASE 2021 TASK1A |
Replication Data for: Singapore Soundscape Site Selection Survey (S5) | This dataset contains the data used for all statistical analysis in our publication "Singapore Soundscape Site Selection Survey (S5): Identification of Characteristic Soundscapes of Singapore via Weighted k-means Clustering", summarised in a single .csv file.
For more details on the study methodology, please refer to our manuscript:
Ooi, K.; Lam, B.; Hong, J.; Watcharasupat, K. N.; Ong, Z.-T.; Gan, W.-S. Singapore Soundscape Site Selection Survey (S5): Identification of Characteristic Soundscapes of Singapore via Weighted k-means Clustering. Sustainability, 2022.
For our replication code utilising this data, please refer to our Github repository: https://github.com/ntudsp/singapore-soundscape-site-selection-survey
A short explanation of the columns in the .csv file is as follows (a loading and clustering sketch follows the list):
Full of life & exciting [Latitude]: The latitude, in degrees, of the location chosen by the participant as "Full of life & exciting".
Full of life & exciting [Longitude]: The longitude, in degrees, of the location chosen by the participant as "Full of life & exciting".
Full of life & exciting [# times visited]: The number of times that the participant had visited the chosen location they considered "Full of life & exciting" before, as reported by the participant.
Full of life & exciting [Duration]: The average duration per visit to the chosen location the participant considered "Full of life & exciting", as reported by the participant.
Chaotic & restless [Latitude]: The latitude, in degrees, of the location chosen by the participant as "Chaotic & restless".
Chaotic & restless [Longitude]: The longitude, in degrees, of the location chosen by the participant as "Chaotic & restless".
Chaotic & restless [# times visited]: The number of times that the participant had visited the chosen location they considered "Chaotic & restless" before, as reported by the participant.
Chaotic & restless [Duration]: The average duration per visit to the chosen location the participant considered "Chaotic & restless", as reported by the participant.
Calm & tranquil [Latitude]: The latitude, in degrees, of the location chosen by the participant as "Calm & tranquil".
Calm & tranquil [Longitude]: The longitude, in degrees, of the location chosen by the participant as "Calm & tranquil".
Calm & tranquil [# times visited]: The number of times that the participant had visited the chosen location they considered "Calm & tranquil" before, as reported by the participant.
Calm & tranquil [Duration]: The average duration per visit to the chosen location the participant considered "Calm & tranquil", as reported by the participant.
Boring & lifeless [Latitude]: The latitude, in degrees, of the location chosen by the participant as "Boring & lifeless".
Boring & lifeless [Longitude]: The longitude, in degrees, of the location chosen by the participant as "Boring & lifeless".
Boring & lifeless [# times visited]: The number of times that the participant had visited the chosen location they considered "Boring & lifeless" before, as reported by the participant.
Boring & lifeless [Duration]: The average duration per visit to the chosen location the participant considered "Boring & lifeless", as reported by the participant. | Provide a detailed description of the following dataset: Replication Data for: Singapore Soundscape Site Selection Survey (S5) |
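A minimal sketch of consuming the columns above for one quadrant, in Python. The .csv file name is an assumption, and weighting by visit count is an illustrative choice that may differ from the authors' actual weighting scheme:
```python
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("s5_data.csv")  # hypothetical file name for the single .csv
coords = df[["Calm & tranquil [Latitude]", "Calm & tranquil [Longitude]"]].dropna()

# Weighted k-means in the spirit of the paper's title; weights are assumed here
# to derive from visit counts, which may not match the authors' scheme.
weights = df.loc[coords.index, "Calm & tranquil [# times visited]"]
km = KMeans(n_clusters=5, n_init=10, random_state=0)
km.fit(coords, sample_weight=weights)
print(km.cluster_centers_)  # candidate "Calm & tranquil" soundscape sites
```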
ConcurrentQA Benchmark | ConcurrentQA is a textual multi-hop QA benchmark that requires concurrent retrieval over multiple data distributions (i.e., Wikipedia and email data). The dataset follows the exact same schema and design as HotpotQA. The dataset is downloadable here: https://github.com/facebookresearch/concurrentqa. The repository also contains model and result-analysis code. This benchmark can also be used to study privacy when reasoning over data distributed across multiple privacy scopes, i.e., Wikipedia in the public domain and emails in the private domain.
The following is a blog post about the benchmark: https://ai.facebook.com/blog/building-systems-to-reason-securely-over-private-data/ | Provide a detailed description of the following dataset: ConcurrentQA Benchmark |
RICH | Inferring human-scene contact (HSC) is the first step toward understanding how humans interact with their surroundings. While detecting 2D human-object interaction (HOI) and reconstructing 3D human pose and shape (HPS) have enjoyed significant progress, reasoning about 3D human-scene contact from a single image is still challenging. Existing HSC detection methods consider only a few types of predefined contact, often reduce body and scene to a small number of primitives, and even overlook image evidence. To predict human-scene contact from a single image, we address the limitations above from both data and algorithmic perspectives. We capture a new dataset called RICH for “Real scenes, Interaction, Contact and Humans.” RICH contains multiview outdoor/indoor video sequences at 4K resolution, ground-truth 3D human bodies captured using markerless motion capture, 3D body scans, and high-resolution 3D scene scans. A key feature of RICH is that it also contains accurate vertex-level contact labels on the body. | Provide a detailed description of the following dataset: RICH |
Click-Through Rate Prediction - Avazu | # File descriptions
* train - Training set. 10 days of click-through data, ordered chronologically. Non-clicks and clicks are subsampled according to different strategies.
* test - Test set. 1 day of ads for testing your model predictions.
* sampleSubmission.csv - Sample submission file in the correct format, corresponds to the All-0.5 Benchmark.
# Data fields
| Key | Description |
|----------|:-------------:|
| id | ... |
| click | 0/1 for non-click/click |
| hour | format is YYMMDDHH, so 14091123 means 23:00 on Sept. 11, 2014 UTC. |
| C1 | anonymized categorical variable|
| banner_pos | ... |
| site_id | ... |
| site_domain | ... |
| site_category | ... |
| app_id | ... |
| app_domain | ... |
| app_category | ... |
| device_id | ... |
| device_ip | ... |
| device_model | ... |
| device_type | ... |
| device_conn_type | ... |
| C14-C21| anonymized categorical variable | | Provide a detailed description of the following dataset: Click-Through Rate Prediction - Avazu |
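A minimal sketch of decoding the Avazu `hour` field described above, in Python; the training file name `train.csv` is an assumption (the file descriptions just say "train"):
```python
import pandas as pd

train = pd.read_csv("train.csv", nrows=100_000)  # sample of the 10-day training set

# Decode the YYMMDDHH hour field, e.g. 14091123 -> 2014-09-11 23:00 UTC.
train["datetime"] = pd.to_datetime(train["hour"].astype(str), format="%y%m%d%H", utc=True)
train["hour_of_day"] = train["datetime"].dt.hour

# click is 0/1, so the per-hour mean is an hourly click-through rate.
print(train.groupby("hour_of_day")["click"].mean())
```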
Deep Indices | This dataset includes multi-spectral acquisitions of vegetation for the conception of new DeepIndices. The images were acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera configured using the 450/570/675/710/730/850 nm bands with a 10 nm FWHM. The dataset was acquired on the site of INRAe in Montoldre (Allier, France, at 46°20'30.3"N 3°26'03.6"E) within the framework of the “RoSE challenge” funded by the French National Research Agency (ANR) and in Dijon (Burgundy, France, at 47°18'32.5"N 5°04'01.8"E) on the site of AgroSup Dijon. Images of bean and corn, containing various natural weeds (yarrows, amaranth, geranium, plantago, etc.) and sowed ones (mustards, goosefoots, mayweed and ryegrass) with very distinct characteristics in terms of illumination (shadow, morning, evening, full sun, cloudy, rain, ...), were acquired in top-down view at 1.8 meters from the ground. (2020-05-01) | Provide a detailed description of the following dataset: Deep Indices |
Multi-Spectral Leaf Segmentation | This dataset was acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera configured using the 450/570/675/710/730/850 nm bands with a 10 nm FWHM. It was acquired on the site of INRAe in Montoldre (Allier, France, at 46°20'30.3"N 3°26'03.6"E) within the framework of the “RoSE challenge” funded by the French National Research Agency (ANR). The images contain bean, with various natural weeds (yarrows, amaranth, geranium, plantago, etc.) and sowed ones (mustards, goosefoots, mayweed and ryegrass) with very distinct characteristics in terms of illumination (shadow, morning, evening, full sun, cloudy, rain, ...). The ground truth is defined for each image with polygons around leaf boundaries; in addition, each polygon is labeled as crop or weed. (2020-06-11) | Provide a detailed description of the following dataset: Multi-Spectral Leaf Segmentation |
NovelCraft | Scene-focused, multi-modal, episodic data of the images and symbolic world-states seen
by an agent completing a pogo-stick assembly task within a video game world. Classes consist of
episodes with novel objects inserted. A subset of these novel objects can impact gameplay and agent behavior. Novelty objects can vary in size, position, and occlusion within the images. Usable for novelty detection, generalized category discovery, and class-imbalanced classification. | Provide a detailed description of the following dataset: NovelCraft |
ArtBench-10 (32x32) | We introduce ArtBench-10, the first class-balanced, high-quality, cleanly annotated, and standardized dataset for benchmarking artwork generation. It comprises 60,000 images of artwork from 10 distinctive artistic styles, with 5,000 training images and 1,000 testing images per style. ArtBench-10 has several advantages over previous artwork datasets. Firstly, it is class-balanced, while most previous artwork datasets suffer from long-tail class distributions. Secondly, the images are of high quality with clean annotations. Thirdly, ArtBench-10 is created with standardized data collection, annotation, filtering, and preprocessing procedures. We provide three versions of the dataset with different resolutions (32×32, 256×256, and original image size), formatted in a way that is easy to incorporate into popular machine learning frameworks. | Provide a detailed description of the following dataset: ArtBench-10 (32x32) |
BindingDB | BindingDB is a public, web-accessible database of measured binding affinities, focusing chiefly on the interactions of proteins considered to be drug targets with small, drug-like molecules. As of May 27, 2022, BindingDB contains 41,296 entries, each with a DOI, containing 2,519,702 binding data points for 8,810 protein targets and 1,080,101 small molecules. There are 5,988 protein-ligand crystal structures with BindingDB affinity measurements for proteins with 100% sequence identity, and 11,442 crystal structures when proteins are allowed down to 85% sequence identity. You can also use BindingDB data through the Registry of Open Data on AWS: https://registry.opendata.aws/binding-db. This dataset uses the split from TransformerCPI (doi.org/10.1093/bioinformatics/btaa524). | Provide a detailed description of the following dataset: BindingDB |
LIT-PCBA(ALDH1) | Comparative evaluation of virtual screening methods requires a rigorous benchmarking procedure on diverse, realistic, and unbiased data sets. Recent investigations from numerous research groups unambiguously demonstrate that artificially constructed ligand sets classically used by the community (e.g., DUD, DUD-E, MUV) are unfortunately biased by both obvious and hidden chemical biases, therefore overestimating the true accuracy of virtual screening methods. We herewith present a novel data set (LIT-PCBA) specifically designed for virtual screening and machine learning. LIT-PCBA relies on 149 dose–response PubChem bioassays that were additionally processed to remove false positives and assay artifacts and keep active and inactive compounds within similar molecular property ranges. To ascertain that the data set is suited to both ligand-based and structure-based virtual screening, target sets were restricted to single protein targets for which at least one X-ray structure is available in complex with ligands of the same phenotype (e.g., inhibitor, inverse agonist) as that of the PubChem active compounds. Preliminary virtual screening on the 21 remaining target sets with state-of-the-art orthogonal methods (2D fingerprint similarity, 3D shape similarity, molecular docking) enabled us to select 15 target sets for which at least one of the three screening methods is able to enrich the top 1%-ranked compounds in true actives by at least a factor of 2. The corresponding ligand sets (training, validation) were finally unbiased by the recently described asymmetric validation embedding (AVE) procedure to afford the LIT-PCBA data set, consisting of 15 targets and 7844 confirmed active and 407,381 confirmed inactive compounds. The data set mimics experimental screening decks in terms of hit rate (ratio of active to inactive compounds) and potency distribution. It is available online at http://drugdesign.unistra.fr/LIT-PCBA for download and for benchmarking novel virtual screening methods, notably those relying on machine learning. | Provide a detailed description of the following dataset: LIT-PCBA(ALDH1) |
LIT-PCBA(ESR1_ant) | Comparative evaluation of virtual screening methods requires a rigorous benchmarking procedure on diverse, realistic, and unbiased data sets. Recent investigations from numerous research groups unambiguously demonstrate that artificially constructed ligand sets classically used by the community (e.g., DUD, DUD-E, MUV) are unfortunately biased by both obvious and hidden chemical biases, therefore overestimating the true accuracy of virtual screening methods. We herewith present a novel data set (LIT-PCBA) specifically designed for virtual screening and machine learning. LIT-PCBA relies on 149 dose–response PubChem bioassays that were additionally processed to remove false positives and assay artifacts and keep active and inactive compounds within similar molecular property ranges. To ascertain that the data set is suited to both ligand-based and structure-based virtual screening, target sets were restricted to single protein targets for which at least one X-ray structure is available in complex with ligands of the same phenotype (e.g., inhibitor, inverse agonist) as that of the PubChem active compounds. Preliminary virtual screening on the 21 remaining target sets with state-of-the-art orthogonal methods (2D fingerprint similarity, 3D shape similarity, molecular docking) enabled us to select 15 target sets for which at least one of the three screening methods is able to enrich the top 1%-ranked compounds in true actives by at least a factor of 2. The corresponding ligand sets (training, validation) were finally unbiased by the recently described asymmetric validation embedding (AVE) procedure to afford the LIT-PCBA data set, consisting of 15 targets and 7844 confirmed active and 407,381 confirmed inactive compounds. The data set mimics experimental screening decks in terms of hit rate (ratio of active to inactive compounds) and potency distribution. It is available online at http://drugdesign.unistra.fr/LIT-PCBA for download and for benchmarking novel virtual screening methods, notably those relying on machine learning. | Provide a detailed description of the following dataset: LIT-PCBA(ESR1_ant) |
LIT-PCBA(KAT2A) | Comparative evaluation of virtual screening methods requires a rigorous benchmarking procedure on diverse, realistic, and unbiased data sets. Recent investigations from numerous research groups unambiguously demonstrate that artificially constructed ligand sets classically used by the community (e.g., DUD, DUD-E, MUV) are unfortunately biased by both obvious and hidden chemical biases, therefore overestimating the true accuracy of virtual screening methods. We herewith present a novel data set (LIT-PCBA) specifically designed for virtual screening and machine learning. LIT-PCBA relies on 149 dose–response PubChem bioassays that were additionally processed to remove false positives and assay artifacts and keep active and inactive compounds within similar molecular property ranges. To ascertain that the data set is suited to both ligand-based and structure-based virtual screening, target sets were restricted to single protein targets for which at least one X-ray structure is available in complex with ligands of the same phenotype (e.g., inhibitor, inverse agonist) as that of the PubChem active compounds. Preliminary virtual screening on the 21 remaining target sets with state-of-the-art orthogonal methods (2D fingerprint similarity, 3D shape similarity, molecular docking) enabled us to select 15 target sets for which at least one of the three screening methods is able to enrich the top 1%-ranked compounds in true actives by at least a factor of 2. The corresponding ligand sets (training, validation) were finally unbiased by the recently described asymmetric validation embedding (AVE) procedure to afford the LIT-PCBA data set, consisting of 15 targets and 7844 confirmed active and 407,381 confirmed inactive compounds. The data set mimics experimental screening decks in terms of hit rate (ratio of active to inactive compounds) and potency distribution. It is available online at http://drugdesign.unistra.fr/LIT-PCBA for download and for benchmarking novel virtual screening methods, notably those relying on machine learning. | Provide a detailed description of the following dataset: LIT-PCBA(KAT2A) |
LIT-PCBA(MAPK1) | Comparative evaluation of virtual screening methods requires a rigorous benchmarking procedure on diverse, realistic, and unbiased data sets. Recent investigations from numerous research groups unambiguously demonstrate that artificially constructed ligand sets classically used by the community (e.g., DUD, DUD-E, MUV) are unfortunately biased by both obvious and hidden chemical biases, therefore overestimating the true accuracy of virtual screening methods. We herewith present a novel data set (LIT-PCBA) specifically designed for virtual screening and machine learning. LIT-PCBA relies on 149 dose–response PubChem bioassays that were additionally processed to remove false positives and assay artifacts and keep active and inactive compounds within similar molecular property ranges. To ascertain that the data set is suited to both ligand-based and structure-based virtual screening, target sets were restricted to single protein targets for which at least one X-ray structure is available in complex with ligands of the same phenotype (e.g., inhibitor, inverse agonist) as that of the PubChem active compounds. Preliminary virtual screening on the 21 remaining target sets with state-of-the-art orthogonal methods (2D fingerprint similarity, 3D shape similarity, molecular docking) enabled us to select 15 target sets for which at least one of the three screening methods is able to enrich the top 1%-ranked compounds in true actives by at least a factor of 2. The corresponding ligand sets (training, validation) were finally unbiased by the recently described asymmetric validation embedding (AVE) procedure to afford the LIT-PCBA data set, consisting of 15 targets and 7844 confirmed active and 407,381 confirmed inactive compounds. The data set mimics experimental screening decks in terms of hit rate (ratio of active to inactive compounds) and potency distribution. It is available online at http://drugdesign.unistra.fr/LIT-PCBA for download and for benchmarking novel virtual screening methods, notably those relying on machine learning. | Provide a detailed description of the following dataset: LIT-PCBA(MAPK1) |
ESOL(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: ESOL(scaffold) |
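The "(scaffold)" suffix on these MoleculeNet entries refers to a scaffold split: molecules sharing a Bemis-Murcko scaffold are kept in the same subset, which tests generalization to unseen chemotypes. A minimal sketch of the idea with RDKit follows; DeepChem ships its own scaffold splitter, so this greedy version is illustrative only:

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, frac_train=0.8):
    """Greedy Bemis-Murcko scaffold split (a sketch, not DeepChem's exact
    implementation): whole scaffold groups go to one side, so no scaffold
    is shared between train and test."""
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        groups[MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)].append(i)
    train, test = [], []
    # Place the largest scaffold groups first, filling the train quota.
    for group in sorted(groups.values(), key=len, reverse=True):
        if len(train) + len(group) <= frac_train * len(smiles_list):
            train.extend(group)
        else:
            test.extend(group)
    return train, test

train_idx, test_idx = scaffold_split(["CCO", "CCN", "c1ccccc1O", "c1ccccc1N"])
```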
Lipophilicity(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: Lipophilicity(scaffold) |
FreeSolv(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: FreeSolv(scaffold) |
BACE(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: BACE(scaffold) |
BBBP(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: BBBP(scaffold) |
SIDER(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: SIDER(scaffold) |
Tox21(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: Tox21(scaffold) |
ToxCast(scaffold) | MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017. | Provide a detailed description of the following dataset: ToxCast(scaffold) |
VizNet-Sato | VizNet-Sato is a dataset from the authors of Sato and is based on the VizNet dataset. The authors chose from VizNet only relational web tables with headers matching their selected 78 DBpedia semantic types. The selected tables are divided into two categories: Full tables and Multi-column only tables. The first category corresponds to 78,733 selected tables from VizNet, while the second category includes 32,265 tables which have more than one column. The tables of both categories are divided into 5 subsets to enable 5-fold cross-validation: 4 subsets are used for training and the last for evaluation (see the sketch after the table below).
The headers of the columns act as semantic annotations for the Column Type Annotation (CTA) task. Some statistics about both categories of tables are provided in the table below, where "Columns" refers to the number of annotated columns and "Classes" to the number of unique DBpedia semantic types used for annotation.
| | Columns | Classes |
|-----------------|---------|---------|
| Full | 120,609 | 78 |
| Multi-column | 74,141 | 78 | | Provide a detailed description of the following dataset: VizNet-Sato |
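A minimal sketch of the 5-fold protocol described above, using scikit-learn; the placeholder records stand in for the actual tables:

```python
from sklearn.model_selection import KFold

# Placeholder records standing in for the 78,733 full tables.
tables = [f"table_{i}" for i in range(78733)]

# 5 subsets: in each fold, 4 are used for training and 1 for evaluation.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, eval_idx) in enumerate(kf.split(tables)):
    print(f"fold {fold}: {len(train_idx)} training tables, {len(eval_idx)} evaluation tables")
```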
Road Anomaly | This dataset contains images of unusual dangers which can be encountered by a vehicle on the road – animals, rocks, traffic cones and other obstacles. Its purpose is testing autonomous driving perception algorithms in rare but safety-critical circumstances. | Provide a detailed description of the following dataset: Road Anomaly |
BBC News Summary | This dataset was created from a dataset used for document categorization that consists of 2225 documents from the BBC news website, corresponding to stories in five topical areas from 2004-2005, used in the paper of D. Greene and P. Cunningham, "Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering", Proc. ICML 2006. All rights, including copyright, in the content of the original articles are owned by the BBC. More at http://mlg.ucd.ie/datasets/bbc.html | Provide a detailed description of the following dataset: BBC News Summary |
Bengali.AI Handwritten Graphemes | This dataset contains images of individual hand-written Bengali characters. Bengali characters (graphemes) are written by combining three components: a grapheme_root, vowel_diacritic, and consonant_diacritic. Your challenge is to classify the components of the grapheme in each image. There are roughly 10,000 possible graphemes, of which roughly 1,000 are represented in the training set. The test set includes some graphemes that do not exist in the training set but contains no new grapheme components. It takes a lot of volunteers filling out sheets like this to generate a useful amount of real data; focusing the problem on the grapheme components rather than on recognizing whole graphemes should make it possible to assemble a Bengali OCR system without handwriting samples for all 10,000 graphemes. | Provide a detailed description of the following dataset: Bengali.AI Handwritten Graphemes |
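Because each image carries three component labels, models for this task are commonly built as a shared backbone with three classification heads. A minimal PyTorch sketch follows; the backbone is deliberately tiny, and the head sizes are assumptions to be checked against the class maps shipped with the data:

```python
import torch
import torch.nn as nn

class GraphemeClassifier(nn.Module):
    """Shared backbone with one head per grapheme component (a sketch;
    head sizes are assumed, check the dataset's class maps)."""
    def __init__(self, n_roots=168, n_vowels=11, n_consonants=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.root_head = nn.Linear(32, n_roots)
        self.vowel_head = nn.Linear(32, n_vowels)
        self.consonant_head = nn.Linear(32, n_consonants)

    def forward(self, x):  # x: (N, 1, H, W) grayscale images
        feats = self.backbone(x)
        return self.root_head(feats), self.vowel_head(feats), self.consonant_head(feats)

root, vowel, cons = GraphemeClassifier()(torch.randn(2, 1, 128, 128))  # arbitrary input size
```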
RSTPReid | RSTPReid contains 20,505 images of 4,101 persons captured by 15 cameras. Each person has 5 corresponding images taken by different cameras, with complex indoor and outdoor scene transformations and backgrounds across various periods of time, which makes RSTPReid much more challenging and more adaptable to real scenarios. Each image is annotated with 2 textual descriptions. For data division, 3,701 (index < 18505), 200 (18505 <= index < 19505) and 200 (index >= 19505) identities are used for training, validation and testing, respectively (marked by the item 'split' in the JSON file; see the sketch below). Each sentence is no shorter than 23 words. | Provide a detailed description of the following dataset: RSTPReid |
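A minimal sketch of reading the division above from the annotation file; the file name is illustrative, and the 'split' values are assumed to be the strings 'train', 'val' and 'test':

```python
import json
from collections import defaultdict

# Partition RSTPReid records by the 'split' item mentioned above.
# "data_captions.json" is an illustrative file name.
with open("data_captions.json") as f:
    records = json.load(f)

splits = defaultdict(list)
for rec in records:
    splits[rec["split"]].append(rec)

for name, recs in splits.items():
    print(name, len(recs))
```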
IBISCape | A Simulated Benchmark for multi-modal SLAM Systems Evaluation in Large-scale Dynamic Environments. | Provide a detailed description of the following dataset: IBISCape |
Active TLS Stack Fingerprinting Measurement Data | Measurement data related to the publication „Active TLS Stack Fingerprinting: Characterizing TLS Server Deployments at Scale“. It contains weekly TLS and HTTP scan data and the TLS fingerprints for each target. | Provide a detailed description of the following dataset: Active TLS Stack Fingerprinting Measurement Data |
DME VQA dataset | Medical VQA dataset built from the [IDRiD](https://ieee-dataport.org/open-access/indian-diabetic-retinopathy-image-dataset-idrid) and [eOphta](https://www.adcis.net/en/third-party/e-ophtha/) datasets. The dataset contains both healthy and unhealthy fundus images. For each image, a set of pre-defined questions is generated, including questions about regions (e.g. are there hard exudates in this region?), for which an associated mask denotes the location of the region.
The motivation for this dataset includes the lack of public medical VQA datasets with related questions. In our dataset, questions are related because there is a high-level question about the DME grade of the image and associated low-level questions that can lead to the answer of the high-level question. This makes it possible to study the consistency of a VQA model, i.e., how often the model produces contradictory answers to questions about a given image (a toy metric sketch follows this entry). Questions about regions are also a novel feature of this dataset.
The dataset can be used for general VQA purposes, and also for the more specific purpose of consistency improvement.
Number of images:
- Train: 433
- Val: 112
- Test: 134
Number of QA pairs:
- Train: 9779
- Val: 2380
- Test: 1311
To download the dataset, click [here](https://zenodo.org/record/6784358).
For more information, check [our paper](https://arxiv.org/abs/2206.13296). | Provide a detailed description of the following dataset: DME VQA dataset |
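Consistency in the sense above can be quantified, for instance, as the fraction of images where the low-level answers do not contradict the high-level grade. A toy sketch under an assumed rule (grade 0 implies no hard exudates; the real contradiction rules depend on the question grammar):

```python
def consistency_score(pairs):
    """Fraction of predictions that are internally consistent, under the
    toy rule that DME grade 0 is contradicted by a 'yes' to the presence
    of hard exudates. `pairs` holds (predicted_grade, predicted_has_exudates)."""
    consistent = sum(
        0 if (grade == 0 and has_exudates) else 1
        for grade, has_exudates in pairs
    )
    return consistent / len(pairs)

print(consistency_score([(0, False), (2, True), (0, True)]))  # 2/3
```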
4D-OR | 4D-OR includes a total of 6734 scenes, recorded by six calibrated RGB-D Kinect sensors mounted to the ceiling of the OR, at one frame per second, providing synchronized RGB and depth images. We provide fused point cloud sequences of entire scenes, automatically annotated human 6D poses and 3D bounding boxes for OR objects. Furthermore, we provide SSG annotations for each step of the surgery together with the clinical roles of all the humans in the scenes, e.g., nurse, head surgeon, anesthesiologist. | Provide a detailed description of the following dataset: 4D-OR |
YouTube-VIS 2021 | - 3,859 high-resolution YouTube videos: 2,985 training videos, 421 validation videos and 453 test videos
- An improved 40-category label set, obtained by merging eagle and owl into bird, merging ape into monkey, deleting hands, and adding flying disc, squirrel and whale
- 8,171 unique video instances
- 232k high-quality manual annotations | Provide a detailed description of the following dataset: YouTube-VIS 2021 |
T2Dv2 | The T2Dv2 dataset consists of 779 tables originating from the English-language subset of the [WebTables](http://webdatacommons.org/webtables/) corpus. 237 tables are annotated for the Table Type Detection task, 236 for the Column Property Annotation (CPA) task and 235 for the Row Annotation task. The annotations used are DBpedia types, properties and entities.
A subset of this dataset was annotated by [Chen et al.](https://paperswithcode.com/paper/190600781) for the Column Type Annotation (CTA) task, where they annotate 236 tables with DBpedia types. In the papers where it was used, the subset was divided into training and testing splits, and the evaluation was done on the testing split T2D-Te. This subset is available for download at their official [Github](https://github.com/alan-turing-institute/SemAIDA) repository.
Some characteristics for the different tasks are provided in the table below, where "Annotations" refers to the number of cells/rows/columns/tables annotated and "Classes" to the number of unique classes used for annotation; a toy accuracy computation for such annotations is sketched after the table.
| | Annotations | Classes |
|----------------|---------|---------|
| Column Property Annotation | 670 | 119 |
| Row Annotation | 26,106 | 13,975 |
| Table Type | 237 | 41 |
| Column Type Annotation | 411 | 37 | | Provide a detailed description of the following dataset: T2Dv2 |
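Scoring these annotation tasks typically reduces to comparing predicted labels against the gold DBpedia labels per annotated item. A minimal accuracy sketch with illustrative data (challenge scorers additionally credit ancestor types in the DBpedia hierarchy, which this ignores):

```python
def annotation_accuracy(gold, predicted):
    """Share of annotated items whose predicted label equals the gold
    DBpedia label (a simplification of the official scorers)."""
    matches = sum(predicted.get(key) == label for key, label in gold.items())
    return matches / len(gold)

# Illustrative (table, column) keys and DBpedia types.
gold = {("table1", 0): "dbo:City", ("table1", 1): "dbo:Country"}
pred = {("table1", 0): "dbo:City", ("table1", 1): "dbo:Person"}
print(annotation_accuracy(gold, pred))  # 0.5
```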
Chalearn-AutoML-1 | This meta-dataset was first used in the AutoML1 challenge organized by Chalearn in 2015. It is composed of 30 pre-processed datasets, chosen to illustrate a wide variety of application domains: biology and medicine, ecology, energy and sustainability management, image, text, audio, speech, video and other sensor data processing, internet social media management and advertising, market analysis and financial prediction. | Provide a detailed description of the following dataset: Chalearn-AutoML-1 |
NumtaDB | To benchmark Bengali digit recognition algorithms, a large publicly available dataset is required which is free from biases originating from geographical location, gender, and age. With this aim in mind, NumtaDB, a dataset consisting of more than 85,000 images of hand-written Bengali digits, has been assembled. | Provide a detailed description of the following dataset: NumtaDB |
OpenXAI | OpenXAI is the first general-purpose lightweight library that provides a comprehensive list of functions to systematically evaluate the quality of explanations generated by attribute-based explanation methods. OpenXAI supports the development of new datasets (both synthetic and real-world) and explanation methods, with a strong bent towards promoting systematic, reproducible, and transparent evaluation of explanation methods.
OpenXAI is an open-source initiative that comprises a collection of curated high-stakes datasets, models, and evaluation metrics, and provides a simple and easy-to-use API that enables researchers and practitioners to benchmark explanation methods using just a few lines of code. | Provide a detailed description of the following dataset: OpenXAI |
HumanML3D | HumanML3D is a 3D human motion-language dataset that originates from a combination of the HumanAct12 and AMASS datasets. It covers a broad range of human actions such as daily activities (e.g., 'walking', 'jumping'), sports (e.g., 'swimming', 'playing golf'), acrobatics (e.g., 'cartwheel') and artistry (e.g., 'dancing'). Overall, the HumanML3D dataset consists of 14,616 motions and 44,970 descriptions composed from a vocabulary of 5,371 distinct words. The total length of the motions amounts to 28.59 hours. The average motion length is 7.1 seconds, while the average description length is 12 words. | Provide a detailed description of the following dataset: HumanML3D |
Persistence Diagram Benchmark | Persistence Diagram Benchmark | Provide a detailed description of the following dataset: Persistence Diagram Benchmark |
WikipediaGS | The WikipediaGS dataset was created by extracting tables from Wikipedia pages. It consists of 485,096 tables which were annotated with DBpedia entities for the Cell Entity Annotation (CEA) task.
Additionally, a subset of these tables was annotated by [Chen et al.](https://paperswithcode.com/paper/190600781) for the Column Type Annotation (CTA) task and includes 604 tables, where selected columns were annotated using DBpedia types. This subset is available for download at their official [Github](https://github.com/alan-turing-institute/SemAIDA) repository.
The table below shows the number of annotated cells/columns for each task and the number of different classes used for the annotation.
| | Annotations | Classes |
|-----|-------------|-----------|
| CEA | 4,453,329 | 1,222,358 |
| CTA | 620 | 31 | | Provide a detailed description of the following dataset: WikipediaGS |
CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings | Automatic segmentation, tokenization and morphological and syntactic annotations of raw texts in 45 languages, generated by UDPipe (http://ufal.mff.cuni.cz/udpipe), together with word embeddings of dimension 100 computed from lowercased texts by word2vec (https://code.google.com/archive/p/word2vec/). | Provide a detailed description of the following dataset: CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings |
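A minimal sketch of loading such 100-dimensional word2vec vectors with gensim; the file name is illustrative:

```python
from gensim.models import KeyedVectors

# Load word2vec vectors in the standard text format ("english.vectors.txt"
# is an illustrative name; pick the file for the language you need).
vectors = KeyedVectors.load_word2vec_format("english.vectors.txt", binary=False)
print(vectors.vector_size)                      # expected: 100
print(vectors.most_similar("language", topn=5))
```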