| dataset_name | description | prompt |
|---|---|---|
TuSimple | The **TuSimple** dataset consists of 6,408 road images from US highways. The image resolution is 1280×720. The dataset comprises 3,626 images for training, 358 for validation, and 2,782 for testing (the TuSimple test set), captured under different weather conditions. | Provide a detailed description of the following dataset: TuSimple |
GTSRB | The **German Traffic Sign Recognition Benchmark** (**GTSRB**) contains 43 classes of traffic signs, split into 39,209 training images and 12,630 test images. The images have varying light conditions and rich backgrounds. | Provide a detailed description of the following dataset: GTSRB |
Tsinghua-Tencent 100K | Although promising results have been achieved in the areas of traffic-sign detection and classification, few works have provided simultaneous solutions to these two tasks for realistic real-world images. We make two contributions to this problem. Firstly, we have created a large traffic-sign benchmark from 100,000 Tencent Street View panoramas, going beyond previous benchmarks. We call this benchmark Tsinghua-Tencent 100K. It provides 100,000 images containing 30,000 traffic-sign instances. These images cover large variations in illuminance and weather conditions. Each traffic sign in the benchmark is annotated with a class label, its bounding box and pixel mask. Secondly, we demonstrate how a robust end-to-end convolutional neural network (CNN) can simultaneously detect and classify traffic signs. Most previous CNN image processing solutions target objects that occupy a large proportion of an image, and such networks do not work well for target objects occupying only a small fraction of an image, like the traffic signs here. Experimental results show the robustness of our network and its superiority to alternatives. The benchmark, source code and the CNN model introduced in this paper are publicly available. | Provide a detailed description of the following dataset: Tsinghua-Tencent 100K |
PA-100K | **PA-100K** is a recently proposed large pedestrian attribute dataset, with 100,000 images in total collected from outdoor surveillance cameras. It is split into 80,000 images for the training set, 10,000 for the validation set, and 10,000 for the test set. The dataset is labeled with 26 binary attributes. The images are blurry due to the relatively low resolution, and the positive ratio of each binary attribute is low. | Provide a detailed description of the following dataset: PA-100K |
PETA | The PEdesTrian Attribute dataset (**PETA**) is a dataset for recognizing pedestrian attributes, such as gender and clothing style, at a far distance. It is of interest in video surveillance scenarios where face and body close-shots are hardly available. It consists of 19,000 pedestrian images with 65 attributes (61 binary and 4 multi-class). The images depict 8,705 persons. | Provide a detailed description of the following dataset: PETA |
RAP | The **Richly Annotated Pedestrian** (**RAP**) dataset is a dataset for pedestrian attribute recognition. It contains 41,585 images collected from indoor surveillance cameras. Each image is annotated with 72 attributes, while only 51 binary attributes with the positive ratio above 1% are selected for evaluation. There are 33,268 images for the training set and 8,317 for testing. | Provide a detailed description of the following dataset: RAP |
PhC-U373 | The **PhC-U373** dataset contains glioblastoma-astrocytoma U373 cells on a polyacrylamide substrate, recorded by phase contrast microscopy. It is part of the ISBI Cell Tracking Challenge and provides 35 partially annotated training images. The dataset is commonly used as a benchmark for cell segmentation and tracking; it was notably used to evaluate the original U-Net.
Source: [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597) | Provide a detailed description of the following dataset: PhC-U373 |
DRIVE | The **Digital Retinal Images for Vessel Extraction** (**DRIVE**) dataset is a dataset for retinal vessel segmentation. It consists of a total of 40 color fundus images in JPEG format, including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands and were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45-degree field of view (FOV). Each image has a resolution of 584×565 pixels with eight bits per color channel (3 channels).
The set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. In both sets, each image comes with a circular field of view (FOV) mask of approximately 540 pixels in diameter. In the training set, one manual segmentation by an ophthalmological expert is provided for each image. In the testing set, two manual segmentations by two different observers are provided for each image, where the first observer's segmentation is accepted as the ground truth for performance evaluation. | Provide a detailed description of the following dataset: DRIVE |
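A minimal sketch of the DRIVE evaluation protocol described above: scoring a binary vessel prediction against the first observer's manual segmentation, restricted to the circular FOV mask. The file names and the use of Pillow/NumPy are assumptions, not part of the dataset release.

```python
import numpy as np
from PIL import Image

# Hypothetical file names; the actual DRIVE archive layout may differ.
pred = np.array(Image.open("21_prediction.png").convert("L")) > 127  # model output
gt   = np.array(Image.open("21_manual1.gif").convert("L")) > 127     # first observer (ground truth)
fov  = np.array(Image.open("21_mask.gif").convert("L")) > 127        # circular FOV mask (~540 px diameter)

# Evaluate only pixels inside the field of view, as is standard for DRIVE.
p, g = pred[fov], gt[fov]
tp = np.sum(p & g); tn = np.sum(~p & ~g)
fp = np.sum(p & ~g); fn = np.sum(~p & g)
accuracy    = (tp + tn) / p.size
sensitivity = tp / (tp + fn)   # vessel recall
specificity = tn / (tn + fp)
print(f"Acc={accuracy:.4f}  Se={sensitivity:.4f}  Sp={specificity:.4f}")
```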
STARE | The **STARE** (**Structured Analysis of the Retina**) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided. | Provide a detailed description of the following dataset: STARE |
CHASE_DB1 | **CHASE_DB1** is a dataset for retinal vessel segmentation. It contains 28 color retina images of size 999×960 pixels, collected from both the left and right eyes of 14 school children. Each image is annotated by two independent human experts. | Provide a detailed description of the following dataset: CHASE_DB1 |
LUNA | The **LUNA** challenges provide datasets for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In [LUNA16](https://paperswithcode.com/dataset/luna16), participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. | Provide a detailed description of the following dataset: LUNA |
Tox21 | The **Tox21** data set comprises 12,060 training samples and 647 test samples that represent chemical compounds. There are 801 "dense features" that represent chemical descriptors, such as molecular weight, solubility or surface area, and 272,776 "sparse features" that represent chemical substructures (ECFP10, DFS6, DFS8; stored in Matrix Market Format). Machine learning methods can either use sparse or dense data or combine them. For each sample there are 12 binary labels that represent the outcome (active/inactive) of 12 different toxicological experiments. Note that the label matrix contains many missing values (NAs). The original data source and Tox21 challenge site is https://tripod.nih.gov/tox21/challenge/.
Source: [Tox21 Machine Learning Data Set](http://bioinf.jku.at/research/DeepTox/tox21.html)
Image Source: [https://www.frontiersin.org/articles/10.3389/fenvs.2015.00080/full](https://www.frontiersin.org/articles/10.3389/fenvs.2015.00080/full) | Provide a detailed description of the following dataset: Tox21 |
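A sketch of how the dense features, sparse Matrix Market features, and the label matrix with missing values might be combined for a single task. SciPy can read Matrix Market files; the file and column names below follow the DeepTox release loosely and are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.io import mmread

# Hypothetical file names; adjust to the actual archive contents.
dense  = pd.read_csv("tox21_dense_train.csv.gz", index_col=0)   # 801 chemical descriptors
sparse = mmread("tox21_sparse_train.mtx.gz").tocsr()            # 272,776 substructure features
labels = pd.read_csv("tox21_labels_train.csv.gz", index_col=0)  # 12 assays, many NaNs

# Train one binary task at a time, dropping compounds whose label is missing.
y = labels["NR.AhR"]                  # one of the 12 assay columns (name assumed)
mask = y.notna().to_numpy()
X_dense  = dense.to_numpy()[mask]
X_sparse = sparse[mask]
y = y.to_numpy()[mask].astype(int)
print(X_dense.shape, X_sparse.shape, y.shape)
```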
QM9 | **QM9** provides quantum chemical properties (at DFT level) for a relevant, consistent, and comprehensive chemical space of small organic molecules. This database may serve the benchmarking of existing methods, development of new methods, such as hybrid quantum mechanics/machine learning, and systematic identification of structure-property relationships. | Provide a detailed description of the following dataset: QM9 |
Douban | We release the Douban Conversation Corpus, comprising a training set, a development set and a test set for retrieval-based chatbots. The statistics of the Douban Conversation Corpus are shown in the following table.
| |Train|Val| Test |
| ------------- |:-------------:|:-------------:|:-------------:|
| session-response pairs | 1m|50k| 10k |
| Avg. positive response per session | 1|1| 1.18 |
| Fleiss Kappa | N/A|N/A|0.41 |
| Min turn per session | 3|3| 3 |
| Max turn per session | 98|91|45 |
| Average turn per session | 6.69|6.75|5.95 |
| Average Word per utterance | 18.56|18.50|20.74 |
The test data contains 1,000 dialogue contexts, and for each context we create 10 responses as candidates. We recruited three labelers to judge if a candidate is a proper response to the session. A proper response means the response can naturally reply to the message given the context. Each pair received three labels, and the majority of the labels was taken as the final decision.
<br>
As far as we know, this is the first human-labeled test set for retrieval-based chatbots. The entire corpus is available at https://www.dropbox.com/s/90t0qtji9ow20ca/DoubanConversaionCorpus.zip?dl=0 | Provide a detailed description of the following dataset: Douban |
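A small illustrative sketch of the majority-vote labeling described for the Douban test set (not code from the release): each candidate response receives three binary judgments, and the majority becomes the final label.

```python
from collections import Counter

def majority_label(judgments):
    """Three binary labels (1 = proper response, 0 = not) -> final decision."""
    assert len(judgments) == 3
    return Counter(judgments).most_common(1)[0][0]

print(majority_label([1, 1, 0]))  # -> 1: two of three labelers accepted the response
print(majority_label([0, 1, 0]))  # -> 0
```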
Criteo | **Criteo** contains 7 days of click-through data, which is widely used for CTR prediction benchmarking. There are 26 anonymous categorical fields and 13 continuous fields in the Criteo dataset.
Source: [AMER: Automatic Behavior Modeling and Interaction Exploration in Recommender System](https://arxiv.org/abs/2006.05933)
Image Source: [https://www.kaggle.com/c/criteo-display-ad-challenge](https://www.kaggle.com/c/criteo-display-ad-challenge) | Provide a detailed description of the following dataset: Criteo |
iPinYou | The **iPinYou** Global RTB (Real-Time Bidding) Bidding Algorithm Competition was organized by iPinYou from April 1st, 2013 to December 31st, 2013. The competition was divided into three seasons. For each season, a training dataset was released to the competition participants, while the testing dataset was reserved by iPinYou. The complete testing dataset is randomly divided into two parts: one part is the leaderboard testing dataset used to score and rank the participating teams on the leaderboard, and the other part is reserved for the final offline evaluation. The participant's last offline submission is evaluated on the reserved testing dataset to get a team's final offline score. This dataset contains all three seasons' training datasets and leaderboard testing datasets; the reserved testing datasets are withheld by iPinYou. The training dataset includes a set of processed iPinYou DSP bidding, impression, click, and conversion logs.
Source: [iPinYou Global RTB Bidding Algorithm Competition Dataset](https://contest.ipinyou.com/)
Image Source: [http://contest.ipinyou.com/ipinyou-dataset.pdf](http://contest.ipinyou.com/ipinyou-dataset.pdf) | Provide a detailed description of the following dataset: iPinYou |
PASCAL-Part | **PASCAL-Part** is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL object detection task by providing segmentation masks for each body part of the object. For categories that do not have a consistent set of parts (e.g., boat), it provides the silhouette annotation.
It can also serve as a dataset for human semantic part segmentation: it contains multiple humans per image in unconstrained poses and occlusions (1,716 images for training and 1,817 for testing). It provides careful pixel-wise annotations for six body parts (i.e., head, torso, upper/lower arms, and upper/lower legs). | Provide a detailed description of the following dataset: PASCAL-Part |
Citeseer | The CiteSeer dataset consists of 3312 scientific publications classified into one of six classes. The citation network consists of 4732 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 3703 unique words. | Provide a detailed description of the following dataset: Citeseer |
Cora | The **Cora** dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words. | Provide a detailed description of the following dataset: Cora |
Pubmed | The **Pubmed** dataset consists of 19717 scientific publications from PubMed database pertaining to diabetes classified into one of three classes. The citation network consists of 44338 links. Each publication in the dataset is described by a TF/IDF weighted word vector from a dictionary which consists of 500 unique words. | Provide a detailed description of the following dataset: Pubmed |
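The three citation datasets above share the same structure: a document-term feature matrix plus a citation link list. A sketch of assembling that structure from the classic LINQS Cora files, assuming the usual layout (`cora.content` rows are `<paper_id> <1433 binary word indicators> <class_label>`, `cora.cites` rows are `<cited_id> <citing_id>`):

```python
import numpy as np
import scipy.sparse as sp

# cora.content: <paper_id> <1433 binary word indicators> <class_label>
content = np.genfromtxt("cora.content", dtype=str)
ids, feats, labels = content[:, 0], content[:, 1:-1].astype(np.float32), content[:, -1]
idx = {pid: i for i, pid in enumerate(ids)}

# cora.cites: <cited_paper_id> <citing_paper_id> -> symmetric adjacency matrix
cites = np.genfromtxt("cora.cites", dtype=str)
rows = [idx[a] for a, b in cites]
cols = [idx[b] for a, b in cites]
n = len(ids)
adj = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
adj = ((adj + adj.T) > 0).astype(np.float32)   # treat citations as undirected

print(feats.shape, adj.nnz, len(set(labels)))  # expect (2708, 1433) features and 7 classes
```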
NELL | **NELL** is a dataset built from the Web via an intelligent agent called Never-Ending Language Learner. This agent attempts to learn over time to read the web. NELL has accumulated over 50 million candidate beliefs by reading the web, and it is considering these at different levels of confidence. NELL has high confidence in 2,810,379 of these beliefs. | Provide a detailed description of the following dataset: NELL |
WN18 | The **WN18** dataset has 18 relations scraped from WordNet for roughly 41,000 synsets, resulting in 141,442 triplets. It was found that a large number of the test triplets can be found in the training set with another relation or the inverse relation. Therefore, a new version of the dataset, WN18RR, has been proposed to address this issue. | Provide a detailed description of the following dataset: WN18 |
Scan2CAD | **Scan2CAD** is an alignment dataset based on 1,506 ScanNet scans with 97,607 annotated keypoint pairs between 14,225 (3,049 unique) CAD models from ShapeNet and their counterpart objects in the scans. The top 3 annotated model classes are chairs, tables and cabinets, a distribution that arises from the nature of indoor scenes in ScanNet. The number of objects aligned per scene ranges from 1 to 40 with an average of 9.3.
Additionally, all ShapeNet CAD models used in the Scan2CAD dataset are annotated with their rotational symmetries: either none, 2-fold, 4-fold or infinite rotational symmetries around a canonical axis of the object. | Provide a detailed description of the following dataset: Scan2CAD |
UTKFace | The **UTKFace** dataset is a large-scale face dataset with a long age span (ranging from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variation in pose, facial expression, illumination, occlusion, resolution, etc. This dataset could be used on a variety of tasks, e.g., face detection, age estimation, age progression/regression, landmark localization, etc. | Provide a detailed description of the following dataset: UTKFace |
AFAD | The Asian Face Age Dataset (AFAD) is a new dataset proposed for evaluating the performance of age estimation, which contains more than 160K facial images and the corresponding age and gender labels. This dataset is oriented to age estimation on Asian faces, so all the facial images are for Asian faces. It is noted that the AFAD is the biggest dataset for age estimation to date. It is well suited to evaluate how deep learning methods can be adopted for age estimation. | Provide a detailed description of the following dataset: AFAD |
CACD | The **Cross-Age Celebrity Dataset** (**CACD**) contains 163,446 images from 2,000 celebrities collected from the Internet. The images are collected from search engines using celebrity name and year (2004-2013) as keywords. Therefore, it is possible to estimate the ages of the celebrities on the images by simply subtracting the birth year from the year in which the photo was taken (e.g., a celebrity born in 1980 who appears in a 2010 photo is labeled as 30 years old). | Provide a detailed description of the following dataset: CACD |
JIGSAWS | The **JHU-ISI Gesture and Skill Assessment Working Set** (**JIGSAWS**) is a surgical activity dataset for human motion modeling. The data was collected through a collaboration between The Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (Sunnyvale, CA. ISI) within an IRB-approved study. The release of this dataset has been approved by the Johns Hopkins University IRB. The dataset was captured using the da Vinci Surgical System from eight surgeons with different levels of skill performing five repetitions of three elementary surgical tasks on a bench-top model: suturing, knot-tying and needle-passing, which are standard components of most surgical skills training curricula. The JIGSAWS dataset consists of three components:
* kinematic data: Cartesian positions, orientations, velocities, angular velocities and gripper angle describing the motion of the manipulators.
* video data: stereo video captured from the endoscopic camera. Sample videos of the JIGSAWS tasks can be downloaded from the official webpage.
* manual annotations including:
* gesture (atomic surgical activity segment labels).
* skill (global rating score using modified objective structured assessments of technical skills).
* experimental setup: a standardized cross-validation experimental setup that can be used to evaluate automatic surgical gesture recognition and skill assessment methods. | Provide a detailed description of the following dataset: JIGSAWS |
CompCars | The **Comprehensive Cars (CompCars)** dataset contains data from two scenarios, including images from web-nature and surveillance-nature. The web-nature data contains 163 car makes with 1,716 car models. There are a total of 136,726 images capturing the entire cars and 27,618 images capturing the car parts. The full car images are labeled with bounding boxes and viewpoints. Each car model is labeled with five attributes, including maximum speed, displacement, number of doors, number of seats, and type of car. The surveillance-nature data contains 50,000 car images captured in the front view.
The dataset can be used for the tasks of:
- Fine-grained classification
- Attribute prediction
- Car model verification
The dataset can be also used for other tasks such as image ranking, multi-task learning, and 3D reconstruction. | Provide a detailed description of the following dataset: CompCars |
METR-LA | **METR-LA** is a dataset for traffic prediction. It contains traffic speed readings collected from 207 loop detectors on the highways of Los Angeles County, aggregated into 5-minute intervals over the four-month period from March 2012 to June 2012. | Provide a detailed description of the following dataset: METR-LA |
RT-GENE | **RT-GENE** (Real-Time Eye Gaze Estimation in Natural Environments) presents a diverse eye-gaze dataset recorded in natural settings at larger camera-subject distances, with ground-truth gaze annotations obtained from mobile eye-tracking glasses whose appearance is removed from the face images using inpainting. | Provide a detailed description of the following dataset: RT-GENE |
WN18RR | **WN18RR** is a link prediction dataset created from WN18, which is a subset of WordNet. WN18 consists of 18 relations and 40,943 entities. However, many test triples can be obtained by inverting triples from the training set. Thus the WN18RR dataset was created to ensure that the evaluation dataset does not have inverse relation test leakage. In summary, the WN18RR dataset contains 93,003 triples with 40,943 entities and 11 relation types. | Provide a detailed description of the following dataset: WN18RR |
FB15k-237 | **FB15k-237** is a link prediction dataset created from FB15k. While FB15k consists of 1,345 relations, 14,951 entities, and 592,213 triples, many triples are inverses that cause leakage from the training to testing and validation splits. FB15k-237 was created by Toutanova and Chen (2015) to ensure that the testing and evaluation datasets do not have inverse relation test leakage. In summary, FB15k-237 dataset contains 310,116 triples with 14,541 entities and 237 relation types. | Provide a detailed description of the following dataset: FB15k-237 |
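The inverse-relation leakage that motivated both WN18RR and FB15k-237 can be checked directly: a test triple (h, r, t) leaks if some training triple connects the same entity pair in the opposite direction. A minimal sketch, assuming the common tab-separated `head relation tail` file format:

```python
def load_triples(path):
    with open(path) as f:
        return [tuple(line.strip().split("\t")) for line in f]

train = load_triples("train.txt")
test  = load_triples("test.txt")

# Entity pairs seen in training, in reverse order.
reversed_pairs = {(t, h) for h, r, t in train}

leaked = [(h, r, t) for h, r, t in test if (h, t) in reversed_pairs]
print(f"{len(leaked)} of {len(test)} test triples have an inverse in training")
```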
T-LESS | **T-LESS** is a dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects. | Provide a detailed description of the following dataset: T-LESS |
ACE 2004 | **ACE 2004** Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2004 Automatic Content Extraction (ACE) technology evaluation. The corpus consists of data of various types annotated for entities and relations and was created by Linguistic Data Consortium with support from the ACE Program, with additional assistance from the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) Program.
The objective of the ACE program is to develop automatic content extraction technology to support automatic processing of human language in text form. In September 2004, sites were evaluated on system performance in six areas: Entity Detection and Recognition (EDR), Entity Mention Detection (EMD), EDR Co-reference, Relation Detection and Recognition (RDR), Relation Mention Detection (RMD), and RDR given reference entities. All tasks were evaluated in three languages: English, Chinese and Arabic. | Provide a detailed description of the following dataset: ACE 2004 |
ACE 2005 | **ACE 2005** Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2005 Automatic Content Extraction (ACE) technology evaluation. The corpus consists of data of various types annotated for entities, relations and events by the Linguistic Data Consortium (LDC) with support from the ACE Program and additional assistance from LDC.
Source: [https://catalog.ldc.upenn.edu/LDC2006T06](https://catalog.ldc.upenn.edu/LDC2006T06)
Image Source: [https://arxiv.org/pdf/1811.06031.pdf](https://arxiv.org/pdf/1811.06031.pdf) | Provide a detailed description of the following dataset: ACE 2005 |
GENIA | The **GENIA** corpus is the primary collection of biomedical literature compiled and annotated within the scope of the GENIA project. The corpus was created to support the development and evaluation of information extraction and text mining systems for the domain of molecular biology.
The corpus contains 1,999 Medline abstracts, selected using a PubMed query for the three MeSH terms “human”, “blood cells”, and “transcription factors”. The corpus has been annotated with various levels of linguistic and semantic information.
The primary categories of annotation in the GENIA corpus and the corresponding subcorpora are:
* Part-of-Speech annotation
* Constituency (phrase structure) syntactic annotation
* Term annotation
* Event annotation
* Relation annotation
* Coreference annotation | Provide a detailed description of the following dataset: GENIA |
SemEval 2014 Task 4 Sub Task 2 | Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, regardless of the entities mentioned (e.g., laptops, restaurants) and their aspects (e.g., battery, screen; food, service). By contrast, this task is concerned with aspect based sentiment analysis (ABSA), where the goal is to identify the aspects of given target entities and the sentiment expressed towards each aspect. Datasets consisting of customer reviews with human-authored annotations identifying the mentioned aspects of the target entities and the sentiment polarity of each aspect will be provided.
***Subtask 2: Aspect term polarity***
For a given set of aspect terms within a sentence, determine whether the polarity of each aspect term is positive, negative, neutral or conflict (i.e., both positive and negative).
For example:
“I loved their fajitas” → {fajitas: positive}
“I hated their fajitas, but their salads were great” → {fajitas: negative, salads: positive}
“The fajitas are their first plate” → {fajitas: neutral}
“The fajitas were great to taste, but not to see” → {fajitas: conflict} | Provide a detailed description of the following dataset: SemEval 2014 Task 4 Sub Task 2 |
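The examples above amount to (sentence, aspect term, polarity) tuples; per-aspect accuracy is then just the fraction of aspect terms whose predicted polarity matches the gold label. A tiny illustrative sketch (the predictions are made up):

```python
# Each instance pairs one aspect term in a sentence with one gold polarity label.
gold = [
    ("I loved their fajitas", "fajitas", "positive"),
    ("I hated their fajitas, but their salads were great", "fajitas", "negative"),
    ("I hated their fajitas, but their salads were great", "salads", "positive"),
    ("The fajitas were great to taste, but not to see", "fajitas", "conflict"),
]
pred = ["positive", "negative", "positive", "negative"]  # hypothetical system output

correct = sum(p == g for p, (_, _, g) in zip(pred, gold))
print(f"aspect-term polarity accuracy: {correct / len(gold):.2f}")  # 0.75
```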
Ohsumed | **Ohsumed** includes medical abstracts from the MeSH categories of the year 1991. Joachims (1997) used the first 20,000 documents, divided into 10,000 for training and 10,000 for testing, where the task was to categorize the 23 cardiovascular disease categories. After selecting this category subset, the number of unique abstracts becomes 13,929 (6,286 for training and 7,643 for testing). As current computers can easily manage larger numbers of documents, all 34,389 cardiovascular disease abstracts out of the 50,216 medical abstracts from the year 1991 are made available. | Provide a detailed description of the following dataset: Ohsumed |
MR | **MR** Movie Reviews is a dataset for use in sentiment-analysis experiments. Available are collections of movie-review documents labeled with respect to their overall sentiment polarity (positive or negative) or subjective rating (e.g., "two and a half stars") and sentences labeled with respect to their subjectivity status (subjective or objective) or polarity.
Source: [http://www.cs.cornell.edu/people/pabo/movie-review-data/](http://www.cs.cornell.edu/people/pabo/movie-review-data/)
Image Source: [https://storage.googleapis.com/kaggle-competitions/kaggle/3810/media/treebank.png](https://storage.googleapis.com/kaggle-competitions/kaggle/3810/media/treebank.png) | Provide a detailed description of the following dataset: MR |
STS Benchmark | STS Benchmark comprises a selection of the English datasets used in the STS tasks organized in the context of SemEval between 2012 and 2017. The selection of datasets includes text from image captions, news headlines and user forums. | Provide a detailed description of the following dataset: STS Benchmark |
Yahoo! Answers | The Yahoo! Answers topic classification dataset is constructed using the 10 largest main categories. Each class contains 140,000 training samples and 6,000 testing samples. Therefore, there are 1,400,000 training samples and 60,000 testing samples in total. From all the answers and other meta-information, only the best answer content and the main category information were used.
Source: [github](https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset/tree/master/dataset) | Provide a detailed description of the following dataset: Yahoo! Answers |
Weibo NER | The **Weibo NER** dataset is a Chinese Named Entity Recognition dataset drawn from the social media website Sina Weibo. | Provide a detailed description of the following dataset: Weibo NER |
Resume NER | The **Resume NER** dataset is a Chinese Named Entity Recognition dataset drawn from resumes, annotated with eight fine-grained entity categories.
Source: [Query-Based Named Entity Recognition](https://arxiv.org/abs/1908.09138)
Image Source: [https://arxiv.org/pdf/1805.02023.pdf](https://arxiv.org/pdf/1805.02023.pdf) | Provide a detailed description of the following dataset: Resume NER |
Reuters-21578 | The **Reuters-21578** dataset is a collection of documents with news articles. The original corpus has 10,369 documents and a vocabulary of 29,930 words. | Provide a detailed description of the following dataset: Reuters-21578 |
FCE | The Cambridge Learner Corpus **First Certificate in English** (CLC **FCE**) dataset consists of short texts, written by learners of English as an additional language in response to exam prompts eliciting free-text answers and assessing mastery of the upper-intermediate proficiency level. The texts have been manually error-annotated using a taxonomy of 77 error types. The full dataset consists of 323,192 sentences. The publicly released subset of the dataset, named FCE-public, consists of 33,673 sentences split into test and training sets of 2,720 and 30,953 sentences, respectively. | Provide a detailed description of the following dataset: FCE |
TACRED | TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended and org:members) or are labeled as no_relation if no defined relation is held. These examples are created by combining available human annotations from the TAC KBP challenges and crowdsourcing. | Provide a detailed description of the following dataset: TACRED |
Natural Questions | The **Natural Questions** corpus is a question answering dataset containing 307,373 training examples, 7,830 development examples, and 7,842 test examples. Each example is comprised of a google.com query and a corresponding Wikipedia page. Each Wikipedia page has a passage (or long answer) annotated on the page that answers the question and one or more short spans from the annotated passage containing the actual answer. The long and the short answer annotations can however be empty. If they are both empty, then there is no answer on the page at all. If the long answer annotation is non-empty, but the short answer annotation is empty, then the annotated passage answers the question but no explicit short answer could be found. Finally 1% of the documents have a passage annotated with a short answer that is “yes” or “no”, instead of a list of short spans. | Provide a detailed description of the following dataset: Natural Questions |
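A sketch of the answer-type logic described above, mapping an example's long/short annotations to the categories the description lists. The field names are illustrative, not the exact NQ JSON schema:

```python
def answer_type(long_answer, short_answers, yes_no_answer=None):
    """Categorize a Natural Questions annotation (field names illustrative)."""
    if not long_answer:
        return "no answer on page"                # both annotations empty
    if yes_no_answer in ("YES", "NO"):
        return f"yes/no answer: {yes_no_answer}"  # ~1% of examples
    if not short_answers:
        return "long answer only"                 # passage answers, no short span
    return "long + short answer"

print(answer_type(None, []))                 # no answer on page
print(answer_type("<p>...</p>", ["1969"]))   # long + short answer
print(answer_type("<p>...</p>", [], "YES"))  # yes/no answer
```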
MUTAG | **MUTAG** is a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium. Input graphs are used to represent chemical compounds, where vertices stand for atoms and are labeled by the atom type (represented by one-hot encoding), while edges between vertices represent bonds between the corresponding atoms. It includes 188 samples of chemical compounds with 7 discrete node labels. | Provide a detailed description of the following dataset: MUTAG |
NCI1 | The **NCI1** dataset comes from the cheminformatics domain, where each input graph is used as representation of a chemical compound: each vertex stands for an atom of the molecule, and edges between vertices represent bonds between atoms. This dataset is relative to anti-cancer screens where the chemicals are assessed as positive or negative to cell lung cancer. Each vertex has an input label representing the corresponding atom type, encoded by a one-hot-encoding scheme into a vector of 0/1 elements. | Provide a detailed description of the following dataset: NCI1 |
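Both MUTAG and NCI1 encode discrete atom types as one-hot node feature vectors, as described above. A minimal sketch of that encoding (the toy molecule is made up):

```python
import numpy as np

def one_hot_nodes(atom_labels, num_types):
    """Map integer atom-type labels to a (num_nodes, num_types) 0/1 matrix."""
    feats = np.zeros((len(atom_labels), num_types), dtype=np.float32)
    feats[np.arange(len(atom_labels)), atom_labels] = 1.0
    return feats

# MUTAG has 7 discrete node labels; e.g. a toy molecule with 4 atoms:
print(one_hot_nodes([0, 1, 1, 3], num_types=7))
```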
PROTEINS | **PROTEINS** is a dataset of proteins that are classified as enzymes or non-enzymes. Nodes represent the amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart. | Provide a detailed description of the following dataset: PROTEINS |
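The graph construction described for PROTEINS, connecting two amino acids if they are less than 6 Angstroms apart, is a simple distance-threshold rule. A sketch with assumed 3D coordinates:

```python
import numpy as np
from scipy.spatial.distance import cdist

def threshold_graph(coords, cutoff=6.0):
    """Adjacency matrix: edge iff pairwise distance < cutoff (in Angstroms)."""
    d = cdist(coords, coords)
    adj = (d < cutoff) & ~np.eye(len(coords), dtype=bool)  # no self-loops
    return adj.astype(np.float32)

coords = np.random.rand(10, 3) * 20.0   # hypothetical amino-acid positions
print(threshold_graph(coords).sum())    # number of directed edges
```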
ENZYMES | **ENZYMES** is a dataset of 600 protein tertiary structures obtained from the BRENDA enzyme database. The dataset contains 6 enzyme classes (the six EC top-level classes), with 100 graphs per class. | Provide a detailed description of the following dataset: ENZYMES |
COLLAB | **COLLAB** is a scientific collaboration dataset. A graph corresponds to a researcher’s ego network, i.e., the researcher and its collaborators are nodes and an edge indicates collaboration between two researchers. A researcher’s ego network has three possible labels, i.e., High Energy Physics, Condensed Matter Physics, and Astro Physics, which are the fields that the researcher belongs to. The dataset has 5,000 graphs and each graph has label 0, 1, or 2. | Provide a detailed description of the following dataset: COLLAB |
BC5CDR | **BC5CDR** corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. | Provide a detailed description of the following dataset: BC5CDR |
JNLPBA | **JNLPBA** is a biomedical dataset that comes from the GENIA version 3.02 corpus (Kim et al., 2003). It was created with a controlled search on MEDLINE. From this search 2,000 abstracts were selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification. 36 terminal classes were used to annotate the GENIA corpus. | Provide a detailed description of the following dataset: JNLPBA |
EBM-NLP | EBM-NLP annotates PICO (Participants, Interventions, Comparisons and Outcomes) spans in clinical trial abstracts.
The corresponding PICO Extraction task aims to identify the spans in clinical trial abstracts that describe the respective PICO elements. | Provide a detailed description of the following dataset: EBM-NLP |
ChemProt | **ChemProt** consists of 1,820 PubMed abstracts with chemical-protein interactions annotated by domain experts and was used in the BioCreative VI text mining chemical-protein interactions shared task. | Provide a detailed description of the following dataset: ChemProt |
SciERC | The **SciERC** dataset is a collection of 500 scientific abstracts annotated with scientific entities, their relations, and coreference clusters. The abstracts are taken from 12 AI conference/workshop proceedings in four AI communities, from the Semantic Scholar Corpus. SciERC extends previous datasets on scientific articles (SemEval 2017 Task 10 and SemEval 2018 Task 7) by extending entity types, relation types, relation coverage, and adding cross-sentence relations using coreference links. | Provide a detailed description of the following dataset: SciERC |
Paper Field | **Paper Field** is built from the Microsoft Academic Graph and maps paper titles to one of 7 fields of study. Each field of study - geography, politics, economics, business, sociology, medicine, and psychology - has approximately 12K training examples. | Provide a detailed description of the following dataset: Paper Field |
CARS196 | CARS196 is composed of 16,185 car images of 196 classes. | Provide a detailed description of the following dataset: CARS196 |
PASCAL Context | The **PASCAL Context** dataset is an extension of the PASCAL VOC 2010 detection challenge, and it contains pixel-wise labels for all training images. It contains more than 400 classes (including the original 20 classes plus backgrounds from PASCAL VOC segmentation), divided into three categories (objects, stuff, and hybrids). Many of the object categories of this dataset are too sparse; therefore, a subset of 59 frequent classes is usually selected for use. | Provide a detailed description of the following dataset: PASCAL Context |
SCUT-CTW1500 | The **SCUT-CTW1500** dataset contains 1,500 images: 1,000 for training and 500 for testing. In particular, it provides 10,751 cropped text instance images, including 3,530 with curved text. The images are manually harvested from the Internet, image libraries such as Google Open-Image, or phone cameras. The dataset contains a lot of horizontal and multi-oriented text. | Provide a detailed description of the following dataset: SCUT-CTW1500 |
OCHuman | This dataset focuses on heavily occluded humans, with comprehensive annotations including bounding boxes, human pose, and instance masks. The dataset contains 13,360 elaborately annotated human instances within 5,081 images. With an average MaxIoU of 0.573 per person, **OCHuman** is the most complex and challenging dataset related to humans.
Source: [https://github.com/liruilong940607/OCHumanApi](https://github.com/liruilong940607/OCHumanApi)
Image Source: [https://github.com/liruilong940607/OCHumanApi](https://github.com/liruilong940607/OCHumanApi) | Provide a detailed description of the following dataset: OCHuman |
YAGO3-10 | YAGO3-10 is a benchmark dataset for knowledge base completion. It is a subset of YAGO3 (which itself is an extension of YAGO) that contains entities associated with at least ten different relations. In total, YAGO3-10 has 123,182 entities, 37 relations, and 1,179,040 triples; most of the triples describe attributes of persons such as citizenship, gender, and profession. | Provide a detailed description of the following dataset: YAGO3-10 |
MSU-MFSD | The **MSU-MFSD** dataset contains 280 video recordings of genuine and attack faces, collected from 35 individuals. Two kinds of cameras with different resolutions (720×480 and 640×480) were used to record the videos. For the real accesses, each individual has two video recordings, captured with the laptop camera and the Android camera, respectively. For the video attacks, two types of cameras, an iPhone and a Canon camera, were used to capture high-definition videos of each subject. The videos taken with the Canon camera were replayed on an iPad Air screen to generate the HD replay attacks, while the videos recorded with the iPhone were replayed on the iPhone itself to generate the mobile replay attacks. Photo attacks were produced by printing the 35 subjects' photos on A3 paper using an HP colour printer. The videos were divided into training (15 subjects with 120 videos) and testing (20 subjects with 160 videos) sets. | Provide a detailed description of the following dataset: MSU-MFSD |
SciCite | **SciCite** is a dataset of citation intents that addresses multiple scientific domains and is more than five times larger than ACL-ARC.
Source: [Structural Scaffolds for Citation Intent Classification in Scientific Publications](https://arxiv.org/abs/1904.01608)
Image Source: [https://arxiv.org/pdf/1904.01608v2.pdf](https://arxiv.org/pdf/1904.01608v2.pdf) | Provide a detailed description of the following dataset: SciCite |
PanoContext | The **PanoContext** dataset contains 500 annotated cuboid layouts of indoor environments such as bedrooms and living rooms. | Provide a detailed description of the following dataset: PanoContext |
Office-31 | The Office dataset contains 31 object categories in three domains: Amazon, DSLR and Webcam. The 31 categories in the dataset consist of objects commonly encountered in office settings, such as keyboards, file cabinets, and laptops. The Amazon domain contains on average 90 images per class and 2,817 images in total. As these images were captured from a website of online merchants, they are captured against a clean background and at a unified scale. The DSLR domain contains 498 low-noise high-resolution images (4288×2848). There are 5 objects per category. Each object was captured from different viewpoints on average 3 times. For Webcam, the 795 images of low resolution (640×480) exhibit significant noise as well as color and white balance artifacts. | Provide a detailed description of the following dataset: Office-31 |
ImageCLEF-DA | The **ImageCLEF-DA** dataset is a benchmark dataset for ImageCLEF 2014 domain adaptation challenge, which contains three domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I) and Pascal VOC 2012 (P). For each domain, there are 12 categories and 50 images in each category. | Provide a detailed description of the following dataset: ImageCLEF-DA |
Office-Home | **Office-Home** is a benchmark dataset for domain adaptation which contains 4 domains where each domain consists of 65 categories. The four domains are: Art – artistic images in the form of sketches, paintings, ornamentation, etc.; Clipart – collection of clipart images; Product – images of objects without a background and Real-World – images of objects captured with a regular camera. It contains 15,500 images, with an average of around 70 images per class and a maximum of 99 images in a class. | Provide a detailed description of the following dataset: Office-Home |
HPatches | **HPatches** is a dataset for local patch descriptor evaluation that consists of 116 sequences of 6 images with known homography. The dataset is split into two parts: viewpoint (59 sequences with significant viewpoint change) and illumination (57 sequences with significant illumination change, both natural and artificial). | Provide a detailed description of the following dataset: HPatches |
CityPersons | The **CityPersons** dataset is a subset of Cityscapes which only consists of person annotations. There are 2,975 images for training, 500 for validation, and 1,575 for testing. The average number of pedestrians per image is 7. Visible-region and full-body annotations are provided. | Provide a detailed description of the following dataset: CityPersons |
CREMI | MICCAI Challenge on Circuit Reconstruction from Electron Microscopy Images.
# About
The goal of this challenge is to evaluate algorithms for automatic reconstruction of neurons and neuronal connectivity from serial section electron microscopy data. The comparison is performed not only by evaluating the quality of neuron segmentations, but also by assessing the accuracy of detecting synapses and identifying synaptic partners. The challenge is carried out on three large and diverse datasets from adult Drosophila melanogaster brain tissue, comprising neuron segmentation ground truth and annotations for synaptic connections. A successful solution would demonstrate its efficiency and generalizability, and carry great potential to reduce the time spent on manual reconstruction of neural circuits in electron microscopy volumes.
# Description
We provide three datasets, each consisting of two (5 μm)³ volumes (training and testing, each 1250 px × 1250 px × 125 px) of serial section EM of the adult fly brain. Each volume has neuron and synapse labelings and annotations for pre- and post-synaptic partners. | Provide a detailed description of the following dataset: CREMI |
ContactDB | **ContactDB** is a dataset of contact maps for household objects that captures the rich hand-object contact that occurs during grasping, enabled by use of a thermal camera. ContactDB includes 3,750 3D meshes of 50 household objects textured with contact maps and 375K frames of synchronized RGB-D+thermal images. | Provide a detailed description of the following dataset: ContactDB |
Polyvore | This dataset contains 21,889 outfits from polyvore.com, in which 17,316 are for training, 1,497 for validation and 3,076 for testing. | Provide a detailed description of the following dataset: Polyvore |
Watercolor2k | Watercolor2k is a dataset used for cross-domain object detection which contains 2k watercolor images with image and instance-level annotations. | Provide a detailed description of the following dataset: Watercolor2k |
Comic2k | **Comic2k** is a dataset used for cross-domain object detection which contains 2k comic images with image and instance-level annotations.
Image Source: [https://naoto0804.github.io/cross_domain_detection/](https://naoto0804.github.io/cross_domain_detection/) | Provide a detailed description of the following dataset: Comic2k |
Clipart1k | In Clipart1k, the target domain classes to be detected are the same as those in the source domain. All the images for the clipart domain were collected from one dataset (i.e., CMPlaces) and two image search engines (i.e., Openclipart and Pixabay). The search queries used are the 205 scene classes (e.g., pasture) used in CMPlaces, in order to collect various objects and scenes with complex backgrounds. | Provide a detailed description of the following dataset: Clipart1k |
PeopleArt | **People-Art** is an object detection dataset which consists of people depicted in 43 different art styles and movements, including Naturalism, Cubism, Socialist Realism, Impressionism, and Suprematism. People in this dataset look quite different from those in common photographs.
Source: [Point Linking Network for Object Detection](https://arxiv.org/abs/1706.03646)
Image Source: [https://www.researchgate.net/figure/Generalization-results-on-Picasso-and-People-Art-datasets-Joseph-Redmon-2016_fig12_328175597](https://www.researchgate.net/figure/Generalization-results-on-Picasso-and-People-Art-datasets-Joseph-Redmon-2016_fig12_328175597) | Provide a detailed description of the following dataset: PeopleArt |
IconArt | This dataset contains 5,955 painting images (from WikiCommons): a train set of 2,978 images and a test set of 2,977 images (for the classification task). 1,480 of the 2,977 test images are annotated with bounding boxes for 7 iconographic classes: 'angel', 'Child_Jesus', 'crucifixion_of_Jesus', 'Mary', 'nudity', 'ruins', 'Saint_Sebastien'.
The IconArt dataset was introduced in the following paper: "Weakly Supervised Object Detection in Artworks", Gonthier et al., ECCV 2018 Workshop on Computer Vision for Art Analysis (VISART 2018).
https://wsoda.telecom-paristech.fr/
https://zenodo.org/record/4737435 | Provide a detailed description of the following dataset: IconArt |
COFW | The **Caltech Occluded Faces in the Wild** (**COFW**) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats, and interactions with objects (e.g. food, hands, microphones, etc.). All images were hand-annotated using the same 29 landmarks as in LFPW. Both the landmark positions and their occluded/unoccluded states were annotated. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23%. | Provide a detailed description of the following dataset: COFW |
RWTH-PHOENIX-Weather 2014 | The signing is recorded by a stationary color camera placed in front of the sign language interpreters. Interpreters wear dark clothes in front of an artificial grey background with color transition. All recorded videos are at 25 frames per second and the size of the frames is 210 by 260 pixels. Each frame shows the interpreter box only. | Provide a detailed description of the following dataset: RWTH-PHOENIX-Weather 2014 |
RWTH-PHOENIX-Weather 2014 T | Over a period of three years (2009 - 2011), the daily news and weather forecast airings of the German public TV station PHOENIX featuring sign language interpretation were recorded, and the weather forecasts of a subset of 386 editions were transcribed using gloss notation. Furthermore, we used automatic speech recognition with manual cleaning to transcribe the original German speech. As such, this corpus allows training end-to-end sign language translation systems from sign language video input to spoken language.
The signing is recorded by a stationary color camera placed in front of the sign language interpreters. Interpreters wear dark clothes in front of an artificial grey background with color transition. All recorded videos are at 25 frames per second and the size of the frames is 210 by 260 pixels. Each frame shows the interpreter box only. | Provide a detailed description of the following dataset: RWTH-PHOENIX-Weather 2014 T |
VRD | The Visual Relationship Dataset (**VRD**) contains 4000 images for training and 1000 for testing annotated with visual relationships. Bounding boxes are annotated with a label containing 100 unary predicates. These labels refer to animals, vehicles, clothes and generic objects. Pairs of bounding boxes are annotated with a label containing 70 binary predicates. These labels refer to actions, prepositions, spatial relations, comparatives or preposition phrases. The dataset has 37993 instances of visual relationships and 6672 types of relationships. 1877 instances of relationships occur only in the test set and they are used to evaluate the zero-shot learning scenario. | Provide a detailed description of the following dataset: VRD |
PPI | The **PPI** dataset is for classifying protein roles (in terms of their cellular functions from gene ontology) in various protein-protein interaction (PPI) graphs, with each graph corresponding to a different human tissue [41]. Positional gene sets, motif gene sets and immunological signatures are used as features, and gene ontology sets as labels (121 in total), collected from the Molecular Signatures Database [34]. The average graph contains 2,373 nodes, with an average degree of 28.8. | Provide a detailed description of the following dataset: PPI |
Kuzushiji-MNIST | Kuzushiji-MNIST is a drop-in replacement for the MNIST dataset (28x28 grayscale, 70,000 images). Since MNIST restricts us to 10 classes, the authors chose one character to represent each of the 10 rows of Hiragana when creating Kuzushiji-MNIST. Kuzushiji is a Japanese cursive writing style. | Provide a detailed description of the following dataset: Kuzushiji-MNIST |
NCI109 | **NCI109** is a graph classification dataset from the TUDataset collection ("TUDataset: A collection of benchmark datasets for learning with graphs"). Like NCI1, each graph represents a chemical compound screened for activity against cancer cell lines. | Provide a detailed description of the following dataset: NCI109 |
Slashdot | The **Slashdot** dataset is a relational dataset obtained from Slashdot. Slashdot is a technology-related news website known for its specific user community. The website features user-submitted and editor-evaluated, primarily technology-oriented, current news. In 2002 Slashdot introduced the Slashdot Zoo feature which allows users to tag each other as friends or foes. The network contains friend/foe links between the users of Slashdot. The network was obtained in February 2009. | Provide a detailed description of the following dataset: Slashdot |
Epinions | The **Epinions** dataset is built from a who-trusts-whom online social network of the general consumer review site Epinions.com. Members of the site can decide whether to ''trust'' each other. All the trust relationships interact and form the Web of Trust, which is then combined with review ratings to determine which reviews are shown to the user.
It contains 75,879 nodes and 508,837 edges.
Source: [https://snap.stanford.edu/data/soc-Epinions1.html](https://snap.stanford.edu/data/soc-Epinions1.html) | Provide a detailed description of the following dataset: Epinions |
VOT2016 | **VOT2016** is a video dataset for visual object tracking. It contains 60 video clips and 21,646 corresponding ground truth maps with pixel-wise annotation of salient objects. | Provide a detailed description of the following dataset: VOT2016 |
CULane | **CULane** is a large-scale challenging dataset for academic research on traffic lane detection. It was collected by cameras mounted on six different vehicles driven by different drivers in Beijing. More than 55 hours of videos were collected and 133,235 frames were extracted. The dataset is divided into 88,880 images for the training set, 9,675 for the validation set, and 34,680 for the test set. The test set is divided into normal and 8 challenging categories. | Provide a detailed description of the following dataset: CULane |
Udacity | The **Udacity** dataset is mainly composed of video frames taken from urban roads. It provides a total number of 404,916 video frames for training and 5,614 video frames for testing. This dataset is challenging due to severe lighting changes, sharp road curves and busy traffic.
Source: [Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks](https://arxiv.org/abs/1811.02759)
Image Source: [https://www.researchgate.net/figure/Sample-from-the-Udacity-dataset-with-the-original-ground-truth-bounding-boxes-Note-that_fig3_345652980](https://www.researchgate.net/figure/Sample-from-the-Udacity-dataset-with-the-original-ground-truth-bounding-boxes-Note-that_fig3_345652980) | Provide a detailed description of the following dataset: Udacity |
SUN360 | The goal of the **SUN360** panorama database is to provide academic researchers in computer vision, computer graphics and computational photography, cognition and neuroscience, human perception, machine learning and data mining, with a comprehensive collection of annotated panoramas covering a 360×180-degree full view for a large variety of environmental scenes, places and the objects within. To build the core of the dataset, the authors downloaded a huge number of high-resolution panorama images from the Internet and grouped them into different place categories. Then, they designed a WebGL annotation tool for annotating the polygons and cuboids of objects in the scenes. | Provide a detailed description of the following dataset: SUN360 |
ACL Title and Abstract Dataset | This dataset gathers 10,874 title and abstract pairs from the ACL Anthology Network (until 2016).
The structure of the data is as follows:
- title
- abstract
- \newline (blank line separating records)
This dataset is used in our published paper:
Paper Abstract Writing through Editing Mechanism
## Citation
```
@inproceedings{wang-etal-2018-paper,
title = "Paper Abstract Writing through Editing Mechanism",
author = "Wang, Qingyun and
Zhou, Zhihao and
Huang, Lifu and
Whitehead, Spencer and
Zhang, Boliang and
Ji, Heng and
Knight, Kevin",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P18-2042",
doi = "10.18653/v1/P18-2042",
pages = "260--265",
abstract = "We present a paper abstract writing system based on an attentive neural sequence-to-sequence model that can take a title as input and automatically generate an abstract. We design a novel Writing-editing Network that can attend to both the title and the previously generated abstract drafts and then iteratively revise and polish the abstract. With two series of Turing tests, where the human judges are asked to distinguish the system-generated abstracts from human-written ones, our system passes Turing tests by junior domain experts at a rate up to 30{\%} and by non-expert at a rate up to 80{\%}.",
}
``` | Provide a detailed description of the following dataset: ACL Title and Abstract Dataset |
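Given the record structure listed above (title, abstract, then a newline between records), a parser could look like the following sketch. The file name and the exact record layout are assumptions:

```python
def read_pairs(path):
    """Yield (title, abstract) pairs from a file of newline-separated records."""
    with open(path, encoding="utf-8") as f:
        block = []
        for line in f:
            line = line.strip()
            if line:
                block.append(line)
            elif block:  # blank line closes a record
                yield block[0], " ".join(block[1:])
                block = []
        if block:  # final record without a trailing blank line
            yield block[0], " ".join(block[1:])

pairs = list(read_pairs("acl_titles_and_abstracts.txt"))
```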
Wikipedia Person and Animal Dataset | This dataset gathers 428,748 person and 12,236 animal infoboxes with descriptions based on a Wikipedia dump (2018/04/01) and Wikidata (2018/04/12). | Provide a detailed description of the following dataset: Wikipedia Person and Animal Dataset |
VizWiz | The **VizWiz**-VQA dataset originates from a natural visual question answering setting where blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. The proposed challenge addresses the following two tasks for this dataset: (1) predict the answer to a visual question and (2) predict whether a visual question cannot be answered. | Provide a detailed description of the following dataset: VizWiz |
KT3DMoSeg | Please find more details of this dataset at https://alex-xun-xu.github.io/ProjectPage/CVPR_18/index.html
3D motion segmentation has been a key problem in computer vision research due to its applications in structure from motion and robotics. Traditional motion segmentation approaches are often evaluated on artificial datasets like Hopkins 155 [1] and its variants. Because the vanishing camera translation effect is often overlooked, these approaches can fail in real-world scenes where the camera undergoes significant translation and the scene has complex structure. We proposed KT3DMoSeg to address the 3D motion segmentation problem in real-world scenes. The KT3DMoSeg dataset was created upon the KITTI benchmark [2] by manually selecting 22 sequences and labelling each individual foreground object. We selected sequences with more significant camera translation, so footage from cameras mounted on moving cars is preferred. We are also interested in the interplay of multiple motions, so clips with more than 3 motions are chosen as well, as long as the moving objects contain enough features for forming motion hypotheses. 22 short clips, each with 10-20 frames, are chosen for evaluation. We extract dense trajectories from each sequence using [3] and prune out trajectories shorter than 5 frames.
Reference
[1] R. Tron and R. Vidal. A Benchmark for the Comparison of 3-D Motion Segmentation Algorithms. CVPR, 2007.
[2] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The kitti dataset. International Journal of Robotics Research, 2013.
[3] N. Sundaram, T. Brox, and K. Keutzer. Dense point trajectories by GPU-accelerated large displacement optical flow. In ECCV, 2010. | Provide a detailed description of the following dataset: KT3DMoSeg |
Hopkins155 | The Hopkins 155 dataset consists of 156 video sequences of two or three motions. Each video sequence motion corresponds to a low-dimensional subspace. There are 39 to 550 data vectors drawn from two or three motions for each video sequence. | Provide a detailed description of the following dataset: Hopkins155 |
S3DIS | The Stanford 3D Indoor Scene Dataset (**S3DIS**) dataset contains 6 large-scale indoor areas with 271 rooms. Each point in the scene point cloud is annotated with one of the 13 semantic categories. | Provide a detailed description of the following dataset: S3DIS |
VoxCeleb1 | **VoxCeleb1** is an audio dataset containing over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. | Provide a detailed description of the following dataset: VoxCeleb1 |
OQMD v1.2 | The OQMD is a database of DFT calculated thermodynamic and structural properties of one million materials, created in Chris Wolverton's group at Northwestern University.
The OQMD v1.2 dataset for CGNN is downloadable from [this link](https://doi.org/10.5281/zenodo.7118055), which contains 561,888 materials. Its format is described [here](https://github.com/Tony-Y/cgnn#dataset-files). The original data is available at [the OQMD website](https://oqmd.org). | Provide a detailed description of the following dataset: OQMD v1.2 |