Columns: dataset_name (string, 2-128 characters), description (string, 1-9.7k characters), prompt (string, 59-185 characters).
DBLP-QuAD
In this work we create a question answering dataset over the DBLP scholarly knowledge graph (KG). DBLP is an online reference for bibliographic information on major computer science publications; it indexes over 4.4 million publications by more than 2.2 million authors. Our dataset consists of 10,000 question-answer pairs with the corresponding SPARQL queries, which can be executed over the DBLP KG to fetch the correct answers. To the best of our knowledge, this is the first QA dataset for scholarly KGs.
Provide a detailed description of the following dataset: DBLP-QuAD
Slovo: Russian Sign Language Dataset
We introduce **Slovo**, a large-scale video dataset for Russian Sign Language recognition. The Slovo dataset is about **16 GB** in size and contains **20400** RGB videos for **1000** sign language gestures recorded by 194 signers. Each class has 20 samples. The dataset is divided into training and test sets by subject (`user_id`): the training set includes 15300 videos, and the test set includes 5100 videos. The total video recording time is ~9.2 hours. About 35% of the videos are recorded in HD format, and 65% of the videos are in FullHD resolution. The average video length with gesture is 50 frames. The annotation file is easy to use and contains some useful columns, see the `annotations.csv` file:

| | attachment_id | user_id | width | height | length | text | train | begin | end |
|---:|:--------------|:--------|------:|-------:|-------:|:-------|:--------|------:|----:|
| 0 | de81cc1c-... | 1b... | 1440 | 1920 | 14 | привет | True | 30 | 45 |
| 1 | 3c0cec5a-... | 64... | 1440 | 1920 | 32 | утро | False | 43 | 66 |
| 2 | d17ca986-... | cf... | 1920 | 1080 | 44 | улица | False | 12 | 31 |

where:
- `attachment_id` - video file name
- `user_id` - unique anonymized user ID
- `width` - video width
- `height` - video height
- `length` - video length
- `text` - gesture class in Russian
- `train` - train/test boolean flag
- `begin` - start of the gesture (for the original dataset)
- `end` - end of the gesture (for the original dataset)
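As a minimal sketch of working with the annotation file described above (the path is hypothetical, and the `train` column is assumed to parse as a boolean), the subject-based split can be recovered with pandas:

```python
import pandas as pd

# Path is hypothetical; point it at your copy of the Slovo annotation file.
ann = pd.read_csv("slovo/annotations.csv")

# The boolean `train` flag encodes the subject-based train/test split described above.
train_df = ann[ann["train"]]
test_df = ann[~ann["train"]]
print(f"train videos: {len(train_df)}, test videos: {len(test_df)}")

# Each row also carries the gesture class (`text`) and the frame span
# (`begin`, `end`) of the gesture within the original recording.
print(train_df[["attachment_id", "text", "begin", "end"]].head())
```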
Provide a detailed description of the following dataset: Slovo: Russian Sign Language Dataset
KIBA
Dataset Description: To make use of the complementary information captured by the various bioactivity types, including IC50, K(i), and K(d), Tang et al. introduce a model-based integration approach, termed KIBA, to generate an integrated drug-target bioactivity matrix. Task Description: Regression. Given the target amino acid sequence and compound SMILES string, predict their binding affinity. Dataset Statistics: 0.3.2 Update: 117,657 DTI pairs, 2,068 drugs, 229 proteins. Before: 118,036 DTI pairs, 2,068 drugs, 229 proteins. References: [1] Tang J, Szwajda A, Shakyawar S, et al. Making sense of large-scale kinase inhibitor bioactivity data sets: a comparative and integrative analysis. J Chem Inf Model. 2014;54(3):735-743. [2] Huang, Kexin, et al. "DeepPurpose: a Deep Learning Library for Drug-Target Interaction Prediction." Bioinformatics.
Provide a detailed description of the following dataset: KIBA
DAVIS-DTA
Dataset Description: The interaction of 72 kinase inhibitors with 442 kinases covering >80% of the human catalytic protein kinome. Task Description: Regression. Given the target amino acid sequence and compound SMILES string, predict their binding affinity. Dataset Statistics: 0.3.2 Update: 25,772 DTI pairs, 68 drugs, 379 proteins. Before: 27,621 DTI pairs, 68 drugs, 379 proteins. References: [1] Davis, M., Hunt, J., Herrgard, S. et al. Comprehensive analysis of kinase inhibitor selectivity. Nat Biotechnol 29, 1046–1051 (2011). [2] Huang, Kexin, et al. "DeepPurpose: a Deep Learning Library for Drug-Target Interaction Prediction." Bioinformatics.
Provide a detailed description of the following dataset: DAVIS-DTA
SheetCopilot
The SheetCopilot dataset contains 28 evaluation workbooks and 221 spreadsheet manipulation tasks that are applied to these workbooks. These tasks involve diverse atomic actions related to six task categories (i.e., Entry and manipulation, Formatting, Management, Charts, Pivot Table, and Formula).

Dataset statistics:
1. Each task possesses one or more ground truth (GT) solutions.
2. The lengths of the task instructions range from 20 to 530 characters, with most tasks between 80 and 110 characters.
3. The number of atomic actions required by each task ranges from 1 to 9.

Evaluation metrics:
1. Execution success rate, pass rate, and the number of used actions are evaluated to judge the functional correctness and efficiency of a method.
2. A submitted solution is considered correct if the properties to be checked match those of any of the GT solutions of the corresponding task.

Please download the full dataset from our GitHub repo: https://github.com/BraveGroup/SheetCopilot Thanks for using our dataset!
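A minimal sketch of how the evaluation metrics listed above could be aggregated from per-task results; the result structure and field names are hypothetical and not the official evaluation script:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaskResult:
    executed: bool    # the generated solution ran without errors
    passed: bool      # checked properties match one of the GT solutions
    num_actions: int  # number of atomic actions used by the solution

def summarize(results: List[TaskResult]) -> dict:
    """Aggregate execution success rate, pass rate, and average action count."""
    n = len(results)
    return {
        "exec_success_rate": sum(r.executed for r in results) / n,
        "pass_rate": sum(r.passed for r in results) / n,
        "avg_actions": sum(r.num_actions for r in results) / n,
    }

# Example with three hypothetical task results.
print(summarize([TaskResult(True, True, 3),
                 TaskResult(True, False, 5),
                 TaskResult(False, False, 2)]))
```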
Provide a detailed description of the following dataset: SheetCopilot
DermSynth3D
A dataset of 100K synthetic images of skin lesions, ground-truth (GT) segmentations of lesions and healthy skin, GT segmentations of seven body parts (head, torso, hips, legs, feet, arms and hands), and GT binary masks of non-skin regions in the texture maps of 215 scans from the 3DBodyTex.v1 dataset [2], [3] created using the framework described in [1]. The dataset is primarily intended to enable the development of skin lesion analysis methods. Synthetic image creation consisted of two main steps. First, skin lesions from the Fitzpatrick 17k dataset were blended onto skin regions of high-resolution three-dimensional human scans from the 3DBodyTex dataset [2], [3]. Second, two-dimensional renders of the modified scans were generated. Use of the dataset, in part or in full, is conditional on citation of the following work: [1] Ashish Sinha, Jeremy Kawahara, Arezou Pakzad, Kumar Abhishek, Matthieu Ruthven, Enjie Ghorbel, Anis Kacem, Djamila Aouada, Ghassan Hamarneh, ‘DermSynth3D: Synthesis of in-the-wild annotated dermatology images’, 2023. [2] A. Saint, A. E. Rahman Shabayek, K. Cherenkova, G. Gusev, D. Aouada, and B. Ottersten, ‘Bodyfitr: Robust Automatic 3D Human Body Fitting’, in 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan: IEEE, Sep. 2019, pp. 484–488. doi: 10.1109/ICIP.2019.8803819. [3] A. Saint et al., ‘3DBodyTex: Textured 3D Body Dataset’, in 2018 International Conference on 3D Vision (3DV), Verona: IEEE, Sep. 2018, pp. 495–504. doi: 10.1109/3DV.2018.00063.
Provide a detailed description of the following dataset: DermSynth3D
InstructOpenWiki
**InstructOpenWiki** is a substantial instruction-tuning dataset for open-world information extraction (IE), enriched with a comprehensive corpus, extensive annotations, and diverse instructions.
Provide a detailed description of the following dataset: InstructOpenWiki
ALCE
**ALCE** is a benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations.
Provide a detailed description of the following dataset: ALCE
Bactrian-X
**Bactrian-X** is a comprehensive multilingual parallel dataset of 3.4 million instruction-response pairs across 52 languages. The instructions were obtained from alpaca-52k and dolly-15k, and translated into 52 languages (52 languages x 67k instances = 3.4M instances).
Provide a detailed description of the following dataset: Bactrian-X
JEEBench
JEEBench is a benchmark dataset, considerably more challenging than existing ones, for evaluating the problem-solving abilities of LLMs. It curates 450 challenging pre-engineering mathematics, physics, and chemistry problems from the IIT JEE-Advanced exam. Long-horizon reasoning on top of deep in-domain knowledge is essential for solving problems in this benchmark.
Provide a detailed description of the following dataset: JEEBench
JDsearch
**JDsearch** is a personalized product search dataset comprising real user queries and diverse user-product interaction types (clicking, adding to cart, following, and purchasing), collected from JD.com, a popular Chinese online shopping platform. More specifically, the authors sampled about 170,000 active users on a specific date, then recorded all their interacted products and issued queries over one year, without removing any tail users or products. This results in roughly 12,000,000 products, 9,400,000 real searches, and 26,000,000 user-product interactions.
Provide a detailed description of the following dataset: JDsearch
UltraDensePose
A character sheet dataset containing over 700,000 hand-drawn and synthesized images of diverse poses.
Provide a detailed description of the following dataset: UltraDensePose
COCO-OOD
The COCO-OOD dataset contains only unknown categories, consisting of 504 images with fine-grained annotations of 1,655 unknown objects. The annotations comprise the original COCO annotations plus additional annotations created following the COCO definition.
Provide a detailed description of the following dataset: COCO-OOD
COCO-Mix
The COCO-Mix dataset includes 897 images with annotations of both known and unknown categories. It contains 2,533 unknown objects and 2,658 known objects, with the original COCO annotations used as labels for the known objects. Unambiguous unlabeled objects are also annotated. The dataset is more challenging for evaluation because its images contain more object instances with complex categories and concentrated locations.
Provide a detailed description of the following dataset: COCO-Mix
DaLAJ
DaLAJ 1.0 is a dataset for Linguistic Acceptability Judgments for Swedish, comprising 9,596 sentences in its first version, along with an initial experiment using it for the binary classification task. DaLAJ is based on the SweLL second-language learner data, consisting of essays at different levels of proficiency.
Provide a detailed description of the following dataset: DaLAJ
ACNE04
The ACNE04 dataset includes 3,756 Chinese face images with acne. It provides annotations of local lesion counts and global acne severity based on the Hayashi criterion.
Provide a detailed description of the following dataset: ACNE04
Drug Combination Extraction Dataset
This dataset consists of 1634 biomedical abstracts, expert-annotated for the purpose of extracting information about the efficacy of drug combinations from the scientific literature. Beyond its practical utility, the dataset also presents a unique NLP challenge, as the first relation extraction dataset consisting of variable-length relations. Furthermore, the relations in this dataset predominantly require language understanding beyond the sentence level, adding to the challenge of this task. We provide a promising baseline model (see the paper/repo) and identify clear areas for further improvement. We ask that new methods on this dataset are posted to our public leaderboard to improve visibility: https://leaderboard.allenai.org/drug_combo/submissions/public
Provide a detailed description of the following dataset: Drug Combination Extraction Dataset
ICConv
The dataset contains 105,811 information-seeking conversations converted from MS MARCO. It is constructed to alleviate the data scarcity problem in conversational search. Taking the multi-intent problem and contextual information into account, this large-scale intent-oriented and context-aware dataset is automatically constructed from the web search session data in MS MARCO. The dataset can be used to train and evaluate conversational search systems.
Provide a detailed description of the following dataset: ICConv
RUGD
A Video Dataset for Visual Perception and Autonomous Navigation in Unstructured Environments. Website: http://rugd.vision/ The RUGD dataset focuses on semantic understanding of unstructured outdoor environments for applications in off-road autonomous navigation. The dataset is comprised of video sequences captured from the camera onboard a mobile robot platform. The overall goal of the data collection is to provide a more representative dataset of environments that lack the structural cues commonly found in urban city autonomous navigation datasets. The platform used for data collection is small enough to maneuver in cluttered environments, and is rugged enough to traverse challenging terrain to explore more unstructured areas of an environment. Dense pixel-wise annotations are provided for every fifth frame in a video sequence. The ontology is defined to support fine-grained terrain identification for path planning tasks, and object identification to avoid obstacles and localize landmarks. In total, 24 semantic categories can be found in the annotations of the videos, including eight unique terrain types.
Provide a detailed description of the following dataset: RUGD
Drone-Action
Website: https://asankagp.github.io/droneaction/
Provide a detailed description of the following dataset: Drone-Action
RoCoG-v2
RoCoG-v2 (Robot Control Gestures) is a dataset intended to support the study of synthetic-to-real and ground-to-air video domain adaptation. It contains over 100K synthetically-generated videos of human avatars performing gestures from seven (7) classes. It also provides videos of real humans performing the same gestures from both ground and air perspectives.
Provide a detailed description of the following dataset: RoCoG-v2
UTCD
UTCD is a compilation of 18 classification datasets spanning 3 categories of Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~ 6M/800K train/test examples.
Provide a detailed description of the following dataset: UTCD
InspiRe
We analyze **social media** posts to tease out what makes a post inspiring and what topics are inspiring. We release a dataset of **5,800 inspiring and 5,800 non-inspiring English-language** public post unique ids collected from a dump of **Reddit public posts** made available by a third party and use linguistic heuristics to automatically detect which social media English-language posts are inspiring.
Provide a detailed description of the following dataset: InspiRe
WhenAct
We consider the task of **temporal human action localization in lifestyle vlogs**. We introduce a novel dataset consisting of **manual annotations** of **temporal localization for 13,000 narrated actions in 1,200 video clips**. We present an extensive analysis of this data, which allows us to better understand how the language and visual modalities interact throughout the videos. We propose a simple yet effective method to localize the narrated actions based on their **expected duration**. Through several experiments and analyses, we show that our method brings complementary information with respect to previous methods and leads to improvements over previous work for the task of temporal action localization.
Provide a detailed description of the following dataset: WhenAct
InDL
**Dataset Introduction** In this work, we introduce the In-Diagram Logic (InDL) dataset, an innovative resource crafted to rigorously evaluate the logic interpretation abilities of deep learning models. This dataset leverages the complex domain of visual illusions, providing a unique challenge for these models. The InDL dataset is characterized by its intricate assembly of optical illusions, wherein each instance poses a specific logic interpretation challenge. These illusions are constructed based on six classic geometric optical illusions, known for their intriguing interplay between perception and logic. **Motivations and Content** The motivation behind the creation of the InDL dataset arises from a recognized gap in current deep learning research. While models have exhibited remarkable proficiency in various domains such as image recognition and natural language processing, their performance in tasks requiring logical reasoning remains less understood and often opaque due to their inherent 'black box' characteristics. By using the medium of visual illusions, the InDL dataset aims to probe these models in a unique and challenging way, helping to illuminate their logic interpretation capabilities. The InDL dataset is a comprehensive collection of instances where each visual illusion varies in illusion strength. The strength signifies the degree of distortion introduced to challenge the models' logic interpretation. Hence, the dataset not only offers a complexity gradient for model evaluation but also allows the analysis of model performance against varying degrees of challenge intensity. **Potential Use Cases** The potential use cases of the InDL dataset are extensive. Beyond the primary goal of evaluating deep learning models' logic interpretation abilities, it also presents a robust tool for researchers to investigate how models react to visual perception challenges. This opens avenues to understand how these models can be improved and how their decision-making processes can be better interpreted. Additionally, the InDL dataset could provide a rich testing ground for model developers. Its diverse and challenging instances could allow them to rigorously benchmark their models and detect potential weaknesses that might be overlooked in more conventional datasets. Furthermore, the InDL dataset could serve as a valuable resource for teaching and learning purposes. It provides a visually engaging and intellectually stimulating way to explore the capabilities and limitations of deep learning models, particularly in the realm of logic interpretation.
Provide a detailed description of the following dataset: InDL
Dataset for neutron and gamma-ray pulse shape discrimination: radiation pulse signals and discrimination methodologies
This dataset provides neutron and gamma-ray pulse signals for pulse shape discrimination experiments. Several traditional and recently proposed pulse shape discrimination algorithms are utilized to conduct pulse shape discrimination on raw pulse signals and noise-enhanced datasets. These algorithms include zero-crossing (ZC), charge comparison (CC), falling edge percentage slope (FEPS), frequency gradient analysis (FGA), pulse-coupled neural network (PCNN), ladder gradient (LG), and heterogeneous quasi-continuous spiking cortical model (HQC-SCM). This dataset also provides the source code of all these pulse shape discrimination methods, together with the source code of schematic pulse shape discrimination performance evaluation and anti-noise performance evaluation.
Provide a detailed description of the following dataset: Dataset for neutron and gamma-ray pulse shape discrimination: radiation pulse signals and discrimination methodologies
ACOS
Most aspect-based sentiment analysis research aims at identifying the sentiment polarities toward explicit aspect terms while ignoring implicit aspects in text. To capture both explicit and implicit aspects, we focus on aspect-category based sentiment analysis, which involves joint aspect category detection and category-oriented sentiment classification. However, currently only a few simple studies have focused on this problem. The shortcomings in the way they defined the task make it difficult for their approaches to effectively learn the inner-relations among categories and the inter-relations between categories and sentiments. In this work, we re-formalize the task as a category-sentiment hierarchy prediction problem, which contains a hierarchy output structure to first identify multiple aspect categories in a piece of text, and then predict the sentiment for each of the identified categories. Specifically, we propose a Hierarchical Graph Convolutional Network (Hier-GCN), where a lower-level GCN models the inner-relations among multiple categories, and the higher-level GCN captures the inter-relations between aspect categories and sentiments. Extensive evaluations demonstrate that our hierarchy output structure is superior over existing ones, and the Hier-GCN model can consistently achieve the best results on four benchmarks.
Provide a detailed description of the following dataset: ACOS
ASQP
Aspect-based sentiment analysis (ABSA) typically focuses on extracting aspects and predicting their sentiments on individual sentences such as customer reviews. Recently, another kind of opinion sharing platform, namely question answering (QA) forum, has received increasing popularity, which accumulates a large number of user opinions towards various aspects. This motivates us to investigate the task of ABSA on QA forums (ABSA-QA), aiming to jointly detect the discussed aspects and their sentiment polarities for a given QA pair. Unlike review sentences, a QA pair is composed of two parallel sentences, which requires interaction modeling to align the aspect mentioned in the question and the associated opinion clues in the answer. To this end, we propose a model with a specific design of cross-sentence aspect-opinion interaction modeling to address this task. The proposed method is evaluated on three real-world datasets and the results show that our model outperforms several strong baselines adopted from related state-of-the-art models.
Provide a detailed description of the following dataset: ASQP
ASTE
Target-based sentiment analysis or aspect-based sentiment analysis (ABSA) refers to addressing various sentiment analysis tasks at a fine-grained level, which includes but is not limited to aspect extraction, aspect sentiment classification, and opinion extraction. There exist many solvers of the above individual subtasks or a combination of two subtasks, and they can work together to tell a complete story, i.e. the discussed aspect, the sentiment on it, and the cause of the sentiment. However, no previous ABSA research tried to provide a complete solution in one shot. In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE). Particularly, a solver of this task needs to extract triplets (What, How, Why) from the inputs, which show WHAT the targeted aspects are, HOW their sentiment polarities are and WHY they have such polarities (i.e. opinion reasons). For instance, one triplet from “Waiters are very friendly and the pasta is simply average” could be (‘Waiters’, positive, ‘friendly’). We propose a two-stage framework to address this task. The first stage predicts what, how and why in a unified model, and then the second stage pairs up the predicted what (how) and why from the first stage to output triplets. In the experiments, our framework has set a benchmark performance in this novel triplet extraction task. Meanwhile, it outperforms a few strong baselines adapted from state-of-the-art related methods.
Provide a detailed description of the following dataset: ASTE
TASD
Aspect-based sentiment analysis (ABSA) aims to detect the targets (which are composed of consecutive words), aspects, and sentiment polarities in text. Published datasets from SemEval-2015 and SemEval-2016 reveal that a sentiment polarity depends on both the target and the aspect. However, most of the existing methods consider predicting sentiment polarities from either targets or aspects but not from both, so they easily make wrong predictions on sentiment polarities. In particular, where the target is implicit, i.e., it does not appear in the given text, the methods predicting sentiment polarities from targets do not work. To tackle these limitations in ABSA, this paper proposes a novel method for target-aspect-sentiment joint detection. It relies on a pre-trained language model and can capture the dependence on both targets and aspects for sentiment prediction. Experimental results on the SemEval-2015 and SemEval-2016 restaurant datasets show that the proposed method achieves high performance in detecting target-aspect-sentiment triples even for the implicit target cases; moreover, it even outperforms the state-of-the-art methods on those subtasks of target-aspect-sentiment detection that they are designed for.
Provide a detailed description of the following dataset: TASD
HR-Avenue
The Human-Related version of the CUHK Avenue dataset, first presented by Morais et al. in the paper "Learning Regularity in Skeleton Trajectories for Anomaly Detection in Videos".
Provide a detailed description of the following dataset: HR-Avenue
Watkins Marine Mammal Sounds
One of the founding fathers of marine mammal bioacoustics, William Watkins, carried out pioneering work with William Schevill at the Woods Hole Oceanographic Institution for more than four decades, laying the groundwork for our field today. One of the lasting achievements of his career was the Watkins Marine Mammal Sound Database, a resource that contains approximately 2000 unique recordings of more than 60 species of marine mammals. Recordings were made by Watkins and Schevill as well as many others, including G. C. Ray, D. Wartzok, D. and M. Caldwell, K. Norris, and T. Poulter. Most of these have been digitized, along with approximately 15,000 annotated digital sound clips. The Watkins database has enormous historical and scientific value. The recordings provide sounds professionally identified as produced by particular marine mammal species in defined geographic regions during specific seasons, which can be used as reference datasets for marine mammal detections from the growing amounts of passive acoustic monitoring (PAM) data that are being collected worldwide. In addition, the archive contains recordings that span seven decades, from the 1940s to the 2000s, and includes the very first recordings of 51 species of marine mammals. These data provide a rich resource to efforts aimed at examining long-term changes in vocal production that may be related to changes in ambient noise levels, as well as serve as a voucher collection for many species. We have made this resource fully accessible online, as was Watkins' goal. The final product enables investigators, educators, students, and the public worldwide to freely and easily access acoustic samples from identified species of marine mammals, and place these samples in a geographic and temporal context. The physical collection has been donated to the New Bedford Whaling Museum.
Provide a detailed description of the following dataset: Watkins Marine Mammal Sounds
Speech Accent Archive
The Speech Accent Archive uniformly presents a large set of speech samples from a variety of language backgrounds. Native and non-native speakers of English read the same paragraph, and their readings are carefully transcribed. The archive is used by people who wish to compare and analyze the accents of different English speakers.
Provide a detailed description of the following dataset: Speech Accent Archive
HR-UBnormal
The Human Related version of UBnormal ("UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection," Acsintoae et al.) was introduced by Flaborea et al. in the paper "Contracting Skeletal Kinematics for Human-Related Video Anomaly Detection".
Provide a detailed description of the following dataset: HR-UBnormal
NaSGEC
**NaSGEC** is a new dataset to facilitate research on Chinese grammatical error correction (CGEC) for native speaker texts from multiple domains. Previous CGEC research primarily focuses on correcting texts from a single domain, especially learner essays.
Provide a detailed description of the following dataset: NaSGEC
CN-Celeb-AV
**CN-Celeb-AV** is a multi-genre AVPR dataset collected 'in the wild'. This dataset contains more than 420k video segments from 1,136 persons from public media.
Provide a detailed description of the following dataset: CN-Celeb-AV
FishEye8K
With the advance of AI, road object detection has become a prominent topic in computer vision, mostly using perspective cameras. Fisheye lenses provide omnidirectional wide coverage, allowing fewer cameras to monitor road intersections, albeit with view distortions. The dataset will be available on GitHub (https://github.com/MoyoG/FishEye8K) with PASCAL VOC, MS COCO, and YOLO annotation formats.
Provide a detailed description of the following dataset: FishEye8K
SHD
The Spiking Heidelberg Digits (SHD) dataset is an audio-based classification dataset of approximately 10k recordings of spoken digits ranging from __zero__ to __nine__ in the English and German languages. The audio waveforms have been converted into spike trains using an artificial model of the inner ear and parts of the ascending auditory pathway. The SHD dataset has 8,156 training and 2,264 test samples. A full description of the dataset and how it was created can be found in the paper below. Please cite this paper if you make use of the dataset. Cramer, B.; Stradmann, Y.; Schemmel, J.; and Zenke, F. "The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks". IEEE Transactions on Neural Networks and Learning Systems 33, 2744–2757, 2022.
Provide a detailed description of the following dataset: SHD
MADDPG AND P2P-VFRL FOR MINIMIZING AOI IN NTN NETWORK UNDER CSI UNCERTAINTY
MADDPG AND P2P-VFRL FOR MINIMIZING AOI IN NTN NETWORK UNDER CSI UNCERTAINTY
Provide a detailed description of the following dataset: MADDPG AND P2P-VFRL FOR MINIMIZING AOI IN NTN NETWORK UNDER CSI UNCERTAINTY
VNHSGE
The VNHSGE (VietNamese High School Graduation Examination) dataset, developed exclusively for evaluating large language models (LLMs), is introduced in this article. The dataset, which covers nine subjects, was generated from the Vietnamese National High School Graduation Examination and comparable tests. 300 literary essays have been included, and there are over 19,000 multiple-choice questions on a range of topics. The dataset assesses LLMs in multitasking situations such as question answering, text generation, reading comprehension, visual question answering, and more by including both textual data and accompanying images. Using ChatGPT and BingChat, we evaluated LLMs on the VNHSGE dataset and contrasted their performance with that of Vietnamese students to see how well they performed. The results show that ChatGPT and BingChat both perform at a human level in a number of areas, including literature, English, history, geography, and civics education. They still have space to grow, though, especially in the areas of mathematics, physics, chemistry, and biology. The VNHSGE dataset seeks to provide an adequate benchmark for assessing the abilities of LLMs with its wide-ranging coverage and variety of activities. We intend to promote future developments in the creation of LLMs by making this dataset available to the scientific community, especially in resolving LLMs' limits in disciplines involving mathematics and the natural sciences.
Provide a detailed description of the following dataset: VNHSGE
Stained mice brain blood vessels. Confocal-LFM
3D confocal stacks with corresponding 2D light-field microscope images.

Confocal:
- Single volume dimension: 1287x1287x64.
- Number of samples: 362.
- Voxel size: 0.086x0.086x0.9 um.
- Objective: 40x/1.3 Oil.
- Stain: tomato lectin (DyLight594 conjugated, DL-1177, Vector Laboratories).

LightField:
- Image dimensions: 1287x1287.
- PixelSize: 3.45 um.
- Pixels per lenslet: 33x33.
- Lenslet Pitch: 112 um.
- MLA2Sensor distance: 2500 um.
- Tube-lens focal length: 165 mm.
- Objective: 40x/0.9 Air.

H5 containing:
- volData: confocal volumes, 1287x1287x64 voxels.
- LFData: LF 4D tensor, 33x33x39x39 (angular coord x, angular coord y, spatial coord x, spatial coord y).
- gridCoords: image grid positions.
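A minimal sketch for inspecting the H5 contents listed above with h5py; the file name is hypothetical, and the dataset keys follow the description:

```python
import h5py

# File name is hypothetical; the keys below follow the description above.
with h5py.File("confocal_lfm.h5", "r") as f:
    vol = f["volData"]      # confocal volumes, 1287 x 1287 x 64 voxels
    lf = f["LFData"]        # light-field 4D tensor, 33 x 33 x 39 x 39
    grid = f["gridCoords"]  # image grid positions
    print("confocal volume shape:", vol.shape)
    print("light-field tensor shape:", lf.shape)
    print("grid coordinates shape:", grid.shape)
```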
Provide a detailed description of the following dataset: Stained mice brain blood vessels. Confocal-LFM
Law Stack Exchange
## Description
Dataset from the Law Stack Exchange, as used in ["Parameter-Efficient Legal Domain Adaptation"](https://aclanthology.org/2022.nllp-1.10/) (Li et al., 2022). This dataset is composed of questions from the Law Stack Exchange, a community forum-based website containing questions and answers on legal topics. We link the questions with their associated tags (e.g., "copyright" or "criminal-law"), and perform a multi-label classification task.

## Citation Information
```
@inproceedings{li-etal-2022-parameter,
    title = "Parameter-Efficient Legal Domain Adaptation",
    author = "Li, Jonathan and Bhambhoria, Rohan and Zhu, Xiaodan",
    booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.nllp-1.10",
    pages = "119--129",
}
```
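A minimal sketch of the multi-label setup described above, turning per-question tag lists into binary label vectors with scikit-learn; the example records and field names are hypothetical rather than the released schema:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical examples: question text paired with its Law Stack Exchange tags.
examples = [
    {"text": "Can I reuse a photo I found online for my blog?", "tags": ["copyright", "internet"]},
    {"text": "What counts as self-defense in a criminal case?", "tags": ["criminal-law"]},
]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform([ex["tags"] for ex in examples])

print(mlb.classes_)  # e.g. ['copyright' 'criminal-law' 'internet']
print(y)             # binary indicator matrix, one row per question
```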
Provide a detailed description of the following dataset: Law Stack Exchange
Legal Advice Reddit
## Dataset Summary
New dataset introduced in [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10) (Li et al., 2022) from the Legal Advice Reddit community (known as "/r/legaladvice"), sourcing the Reddit posts from the Pushshift Reddit dataset. The dataset maps the text and title of each legal question posted into one of eleven classes, based on the original Reddit post's "flair" (i.e., tag). Questions are typically informal and use non-legal-specific language. Per the Legal Advice Reddit rules, posts must be about actual personal circumstances or situations. We limit the number of labels to the top eleven classes and remove the other samples from the dataset.

## Citation Information
```
@inproceedings{li-etal-2022-parameter,
    title = "Parameter-Efficient Legal Domain Adaptation",
    author = "Li, Jonathan and Bhambhoria, Rohan and Zhu, Xiaodan",
    booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.nllp-1.10",
    pages = "119--129",
}
```
Provide a detailed description of the following dataset: Legal Advice Reddit
PKLot
The PKLot dataset contains 12,417 images of parking lots and 695,899 images of parking spaces segmented from them, which were manually checked and labeled. All images were acquired at the parking lots of the Federal University of Parana (UFPR) and the Pontifical Catholic University of Parana (PUCPR), both located in Curitiba, Brazil.
Provide a detailed description of the following dataset: PKLot
SOTU_QA_2023
Curated QA benchmark on the State of the Union Address 2023. It contains curated questions and answers based on knowledge presented in the State of the Union Address 2023 (delivered in February). It is especially useful for tool-augmented LMs / ALMs to examine a model's ability to answer questions over private documents.
Provide a detailed description of the following dataset: SOTU_QA_2023
CVB
Existing image/video datasets for cattle behavior recognition are mostly small, lack well-defined labels, or are collected in unrealistic controlled environments. This limits the utility of machine learning (ML) models learned from them. Therefore, we introduce a new dataset, called Cattle Visual Behaviors (CVB), that consists of 502 video clips, each fifteen seconds long, captured in natural lighting conditions, and annotated with eleven visually perceptible behaviors of grazing cattle. By creating and sharing CVB, our aim is to develop improved models capable of recognizing all important cattle behaviors accurately and to assist other researchers and practitioners in developing and evaluating new ML models for cattle behavior classification using video data. The dataset is organized into the following three sub-directories:
1. raw_frames: contains 450 frames in each sub-folder, representing a 15-second video taken at a frame rate of 30 FPS.
2. annotations: contains the JSON files corresponding to the raw_frames folder. There is one JSON file per video, containing the bounding-box annotations for each animal in the video and its associated behavior.
3. CVB_in_AVA_format: contains the CVB data in the AVA dataset format.
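A minimal sketch for pairing the sub-directories described above; the root path is hypothetical, and it is assumed that each annotation file shares its stem with the matching raw_frames sub-folder:

```python
import json
from pathlib import Path

# Root path is hypothetical; sub-directory names follow the description above.
root = Path("CVB")

for json_path in sorted((root / "annotations").glob("*.json")):
    video_id = json_path.stem
    # Assumption: the annotation file name matches the raw_frames sub-folder name.
    frame_dir = root / "raw_frames" / video_id   # 450 frames per 15-second clip at 30 FPS
    frames = sorted(frame_dir.glob("*"))
    with open(json_path) as f:
        annotation = json.load(f)                # per-animal bounding boxes and behaviors
    print(f"{video_id}: {len(frames)} frames, annotation loaded")
```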
Provide a detailed description of the following dataset: CVB
MSVWild863
WMVeID863 is captured with vehicles in motion, posing additional challenges such as motion blur, large background changes, and especially intense flare degradation from car lamps and sunlight. It contains 863 identities of vehicle triplets (RGB, NI, and TI) captured from 8 camera views at a traffic checkpoint, contributing 14,127 images.
Provide a detailed description of the following dataset: MSVWild863
PRM800K
**PRM800K** is a process supervision dataset containing 800,000 step-level correctness labels for model-generated solutions to problems from the MATH dataset.
Provide a detailed description of the following dataset: PRM800K
PopulationGrowthDataset_Kigali
This dataset contains annual Sentinel-2 MSI composites (wet and dry season) for Kigali for the period 2016-2020. In addition, a metadata file containing population count at the grid level (100 x 100 m) for 2020 and at the census level (administrative units) for 2016 and 2020 is provided. Ancillary data such as the administrative boundaries of Kigali are also available.
Provide a detailed description of the following dataset: PopulationGrowthDataset_Kigali
IRFL: Image Recognition of Figurative Language
The IRFL dataset consists of idioms, similes, and metaphors with matching figurative and literal images, as well as two novel tasks of multimodal figurative understanding and preference. We collected figurative and literal images for textual idioms, metaphors, and similes using an automatic pipeline we created (idioms) and manually (metaphors + similes). We annotated the relations between these images and the figurative phrase they originated from. Using these images we created two novel tasks of figurative understanding and preference. The figurative understanding task evaluates Vision and Language Pre-Trained Models’ (VL-PTMs) ability to understand the relation between an image and a figurative phrase. The task is to choose the image that best visualizes the figurative phrase out of X candidates. The preference task examines VL-PTMs' preference for figurative images. In this task, the model needs to classify phrase images of different categories correctly based on their ranking by the model matching score. The best models achieve 22%, 30%, and 66% accuracy vs. humans 97%, 99.7%, and 100% on our understanding task for idioms, metaphors, and similes respectively. The best model achieved an F1 score of 61 on the preference task. Researchers are welcome to evaluate models on this dataset.
Provide a detailed description of the following dataset: IRFL: Image Recognition of Figurative Language
E-ReDial
E-ReDial is a conversational recommender system dataset with high-quality explanations. It consists of 756 dialogues with 12,003 utterances (15.9 utterances per dialogue on average) and includes 2,058 high-quality explanations, each with 79.2 tokens on average.
Provide a detailed description of the following dataset: E-ReDial
TAP
The Traffic Accident Prediction (TAP) data repository offers extensive coverage for 1,000 US cities (TAP-city) and 49 states (TAP-state), providing real-world road structure data that can be easily used for graph-based machine learning methods such as Graph Neural Networks. Additionally, it features multi-dimensional geospatial attributes, including angular and directional features, that are useful for analyzing transportation networks. The TAP repository has the potential to benefit the research community in various applications, including traffic crash prediction, road safety analysis, and traffic crash mitigation. The datasets can be accessed in the TAP-city and TAP-state directories. For example, this repository can aid in traffic accident occurrence prediction and accident severity prediction. Binary labels are used to indicate whether a node contains at least one accident for the occurrence prediction task, while severity is represented by a number between 0 and 7 for the severity prediction task. A severity level of 0 denotes no accident, and 1 to 7 represents increasingly significant impacts on traffic.
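A minimal sketch of the label scheme described above, deriving the binary occurrence label from the 0-7 severity level (the function name is illustrative, not part of the released code):

```python
def occurrence_label(severity: int) -> int:
    """Map a 0-7 severity level to the binary accident-occurrence label.

    A severity of 0 denotes no accident at the node, while 1-7 denote
    increasingly significant impacts, so any non-zero severity implies
    that at least one accident occurred.
    """
    if not 0 <= severity <= 7:
        raise ValueError("severity must be between 0 and 7")
    return int(severity > 0)

# Example: severities 0, 3, and 7 map to occurrence labels 0, 1, 1.
print([occurrence_label(s) for s in (0, 3, 7)])
```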
Provide a detailed description of the following dataset: TAP
SLAKE
SLAKE is an English-Chinese bilingual dataset consisting of 642 images and 14,028 question-answer pairs for training and testing Med-VQA systems.
Provide a detailed description of the following dataset: SLAKE
Synthetic Graph
We include five substructure counting tasks: 3-stars, triangles, tailed triangles, chordal cycles, and attributed triangles. 3-star counting is a subgraph-counting task, while the remaining four are induced-subgraph-counting tasks.
Provide a detailed description of the following dataset: Synthetic Graph
OVQA
OVQA contains 19,020 medical visual question and answer pairs generated from 2,001 medical images collected from 2,212 EMRs in Orthopedics.
Provide a detailed description of the following dataset: OVQA
Sonicverse
**Sonicverse** is a multisensory simulation platform with integrated audio-visual simulation for training household agents that can both see and hear. Sonicverse models realistic continuous audio rendering in 3D environments in real time. Together with a new audio-visual VR interface that allows humans to interact with agents using audio, Sonicverse enables a series of embodied AI tasks that need audio-visual perception.
Provide a detailed description of the following dataset: Sonicverse
ObjectFolder Real
The **ObjectFolder Real** dataset contains multisensory data collected from 100 real-world household objects. The visual data for each object include three high-quality 3D meshes of different resolutions and an HD video recording of the object rotating in a lightbox; The acoustic data for each object include impact sound recordings recorded at 30–50 points of the object, each of which is 6s long and is accompanied by the coordinate of the striking location on the object mesh, ground-truth contact force profile, and the accompanying video for the impact. The tactile data for each object include tactile readings at the same 30–50 points of the object, with each tactile reading as a video of the tactile RGB images that record the entire gel deformation process and is accompanied by two videos of the contact process from an in-hand camera and a third-view camera.
Provide a detailed description of the following dataset: ObjectFolder Real
gRefCOCO
gRefCOCO is the first large-scale Generalized Referring Expression Segmentation dataset that contains multi-target, no-target, and single-target expressions.
Provide a detailed description of the following dataset: gRefCOCO
Multi-Spectral Stereo Dataset (RGB, NIR, thermal images, LiDAR, GPS/IMU)
Abstract: We introduce the multi-spectral stereo (MS2) outdoor dataset, including stereo RGB, stereo NIR, stereo thermal, stereo LiDAR data, and GPS/IMU information. Our dataset provides rectified and synchronized 184K data pairs taken from city, residential, road, campus, and suburban areas in the morning, daytime, and nighttime under clear-sky, cloudy, and rainy conditions. We designed the dataset to explore various computer vision algorithms from multi-spectral sensor data to achieve high-level performance, reliability, and robustness against challenging environments. MS2 dataset provides: * 1. (Synchronized) Stereo RGB images / Stereo NIR images / Stereo thermal images * 2. (Synchronized) Stereo LiDAR scans / GPS/IMU navigation data * 3. Projected depth map (in RGB, NIR, thermal image planes) * 4. Odometry data (in RGB, NIR, thermal cameras, and LiDAR coordinates)
Provide a detailed description of the following dataset: Multi-Spectral Stereo Dataset (RGB, NIR, thermal images, LiDAR, GPS/IMU)
Iran's Built Heritage Binary Image Classification Dataset
**Iran's Built Heritage Binary Image Classification Dataset** contains approximately 10,500 CHB images gathered from four different sources: i) the archives of Iran's cultural heritage ministry; ii) the author's (M.B.) personal archives; iii) images captured on site by the author (M.B.) during the research process; iv) pictures crawled from the Internet, kept to a minimum as their distribution differed due to heavy edits and effects.
Provide a detailed description of the following dataset: Iran's Built Heritage Binary Image Classification Dataset
AMI Meeting Corpus
AMI Meeting Corpus in JSON format.
Provide a detailed description of the following dataset: AMI Meeting Corpus
ICSI Meeting Corpus
ICSI Meeting Corpus in JSON format.
Provide a detailed description of the following dataset: ICSI Meeting Corpus
ELITR Minuting Corpus
ELITR Minuting Corpus in JSON format.
Provide a detailed description of the following dataset: ELITR Minuting Corpus
Switchboard Dialog Act Corpus
Switchboard Dialog Act Corpus
Provide a detailed description of the following dataset: Switchboard Dialog Act Corpus
BabySLM
**BabySLM** is a language-acquisition-friendly benchmark to probe speech-based LMs at the lexical and syntactic levels, both of which are compatible with the vocabulary typical of children's language experiences.
Provide a detailed description of the following dataset: BabySLM
CovidET-EXT
CovidET-EXT is a dataset that augments Zhan et al. (2022)'s abstractive dataset CovidET (in the context of the COVID-19 crisis) with extractive triggers. The result is a dataset of 1,883 Reddit posts about the COVID-19 pandemic, manually annotated with 7 fine-grained emotions (from CovidET) and their corresponding extractive triggers.
Provide a detailed description of the following dataset: CovidET-EXT
Unidecor
UNIDECOR is a unified corpus consolidating publicly available textual deception datasets into a common format.
Provide a detailed description of the following dataset: Unidecor
ARO
Attribution, Relation, and Order (ARO) is a benchmark to systematically evaluate the ability of VLMs to understand different types of relationships, attributes, and order information. ARO consists of Visual Genome Attribution, to test the understanding of objects' properties; Visual Genome Relation, to test for relational understanding; and COCO-Order & Flickr30k-Order, to test for order sensitivity in VLMs. ARO is orders of magnitude larger than previous benchmarks of compositionality, with more than 50,000 test cases.
Provide a detailed description of the following dataset: ARO
PanCollection
Pansharpening Datasets from WorldView 2, WorldView 3, QuickBird, Gaofen 2 sensors.
Provide a detailed description of the following dataset: PanCollection
FinRED
**FinRED** is a relation extraction dataset curated from financial news and earnings call transcripts, containing relations from the finance domain. FinRED has been created by mapping Wikidata triplets using the distant supervision method.
Provide a detailed description of the following dataset: FinRED
A Game Of Sorts
**A Game Of Sorts** is a collaborative image ranking task. Players are asked to rank a set of images based on a given sorting criterion. The game provides a framework for the evaluation of visually grounded language understanding and generation of referring expressions in multimodal dialogue settings.
Provide a detailed description of the following dataset: A Game Of Sorts
SynthRAD2023
**Purpose**
Medical imaging has become increasingly important in diagnosing and treating oncological patients, particularly in radiotherapy. Recent advances in synthetic computed tomography (sCT) generation have increased interest in public challenges to provide data and evaluation metrics for comparing different approaches openly. This paper describes a dataset of brain and pelvis computed tomography (CT) images with rigidly registered cone-beam CT (CBCT) and magnetic resonance imaging (MRI) images to facilitate the development and evaluation of sCT generation for radiotherapy planning.

**Acquisition and Validation Methods**
The dataset consists of CT, CBCT, and MRI of 540 brain and 540 pelvis radiotherapy patients from three Dutch university medical centers. Subjects' ages ranged from 3 to 93 years, with a mean age of 60. Various scanner models and acquisition settings were used across patients from the three data-providing centers. Details are available in comma-separated value files provided with the datasets.

**Data Format and Usage Notes**
The data is available on Zenodo (https://doi.org/10.5281/zenodo.7260704, https://doi.org/10.5281/zenodo.7868168) under the SynthRAD2023 collection. The images for each subject are available in NIfTI format.

**Potential Applications**
This dataset will enable the evaluation and development of image synthesis algorithms for radiotherapy purposes on a realistic multi-center dataset with varying acquisition protocols. Synthetic CT generation has numerous applications in radiation therapy, including diagnosis, treatment planning, treatment monitoring, and surgical planning.
Provide a detailed description of the following dataset: SynthRAD2023
Zambezi Voice
This work introduces Zambezi Voice, an open-source multilingual speech resource for Zambian languages. It contains two collections of datasets: unlabelled audio recordings of radio news and talk shows programs (160 hours) and labelled data (over 80 hours) consisting of read speech recorded from text sourced from publicly available literature books. The dataset is created for speech recognition but can be extended to multilingual speech processing research for both supervised and unsupervised learning approaches. To our knowledge, this is the first multilingual speech dataset created for Zambian languages. We exploit pretraining and cross-lingual transfer learning by finetuning the Wav2Vec2.0 large-scale multilingual pre-trained model to build end-to-end (E2E) speech recognition models for our baseline models. The dataset is released publicly under a Creative Commons BY-NC-ND 4.0 license and can be accessed through the project repository.
Provide a detailed description of the following dataset: Zambezi Voice
Youku-mPLUG
**Youku-mPLUG** is a large, high-quality Chinese video-language dataset collected from Youku.com, a well-known Chinese video-sharing website, with strict criteria of safety, diversity, and quality. It contains 10 million video-text pairs for pre-training and 0.3 million videos for downstream benchmarks covering Video-Text Retrieval, Video Captioning, and Video Category Classification.
Provide a detailed description of the following dataset: Youku-mPLUG
MultiSum
**MultiSum** is a dataset for multimodal summarization (MSMO). It consists of 17 categories and 170 subcategories to encapsulate a diverse array of real-world scenarios. The dataset features: 1) human-validated summaries for both video and textual content, providing superior human instruction and labels for multimodal learning; 2) comprehensive and meticulously arranged categorization, spanning 17 principal categories and 170 subcategories; and 3) benchmark tests performed on the proposed dataset to assess varied tasks and methods, including video temporal segmentation, video summarization, text summarization, and multimodal summarization.
Provide a detailed description of the following dataset: MultiSum
PAMAP2
The PAMAP2 Physical Activity Monitoring dataset contains data of 18 different physical activities (such as walking, cycling, playing soccer, etc.), performed by 9 subjects wearing 3 inertial measurement units and a heart rate monitor. The dataset can be used for activity recognition and intensity estimation while developing and applying algorithms for data processing, segmentation, feature extraction and classification.

**Sensors**
3 Colibri wireless inertial measurement units (IMUs):
- sampling frequency: 100 Hz
- position of the sensors:
  - 1 IMU over the wrist on the dominant arm
  - 1 IMU on the chest
  - 1 IMU on the dominant side's ankle
HR monitor:
- sampling frequency: ~9 Hz

**Data collection protocol**
Each of the subjects had to follow a protocol containing 12 different activities. The folder Protocol contains these recordings by subject. Furthermore, some of the subjects also performed a few optional activities. The folder Optional contains these recordings by subject.

**Data files**
Raw sensory data can be found in space-separated text files (.dat), one data file per subject per session (protocol or optional). Missing values are indicated with NaN. One line in the data files corresponds to one timestamped and labeled instance of sensory data. The data files contain 54 columns: each line consists of a timestamp, an activity label (the ground truth) and 52 attributes of raw sensory data.
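A minimal sketch for reading one of the space-separated .dat files described above; the file path is hypothetical, while the 54-column layout (timestamp, activity label, 52 sensor attributes) follows the description:

```python
import numpy as np

# Path is hypothetical; adjust to your copy of the Protocol/ or Optional/ folder.
data = np.loadtxt("PAMAP2_Dataset/Protocol/subject101.dat")

timestamps = data[:, 0]     # timestamp of each instance
activity_ids = data[:, 1]   # activity label (the ground truth)
sensor_data = data[:, 2:]   # 52 columns of raw IMU and heart-rate attributes

print(data.shape)                    # (n_samples, 54)
print(np.isnan(sensor_data).mean())  # fraction of missing values (NaN)
```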
Provide a detailed description of the following dataset: PAMAP2
RKI and DIVI COVID-19 Data combined
This database consists of two main components: data on COVID-19 infections and data on the ICU occupancy of COVID-19 patients. The infections and the ICU occupancy are collected by the German health care departments and recorded by the Robert Koch Institute (2021) (RKI), the German federal government agency and scientific institute responsible for health reporting and disease control.
Provide a detailed description of the following dataset: RKI and DIVI COVID-19 Data combined
HighwayPavementCrackDetection
1. The images come from the CCD camera of a highway measurement vehicle.
2. Cracks and sealed cracks have been labeled.
3. The form of the labels differs from traditional block annotations; redundant, dense annotation boxes are used instead.
4. Some of the data is manually annotated, while the rest carries model-generated annotations that have undergone careful manual inspection.
Provide a detailed description of the following dataset: HighwayPavementCrackDetection
mOKB6
Multilingual Open Knowledge Base Completion benchmark in 6 languages: English, Hindi, Telugu, Spanish, Portuguese, and Chinese.
Provide a detailed description of the following dataset: mOKB6
Krapivin
A dataset for benchmarking keyphrase extraction and generation techniques on long English scientific papers. The dataset is of high quality and consists of 2,000 scientific papers from the Computer Science domain published by ACM. Each paper has its keyphrases assigned by the authors and verified by the reviewers. Different parts of the papers, such as title and abstract, are separated, enabling extraction based on the part of an article's text. The content of each paper is converted from PDF to plain text. The pieces of formulae, tables, figures and LaTeX markup were removed automatically. Link: https://huggingface.co/datasets/midas/krapivin
Provide a detailed description of the following dataset: Krapivin
NUS
The dataset was constructed by first finding suitable publications and then collecting keyphrases from manual annotators. The Google SOAP API was used to find documents using variants of the query "keywords general terms filetype:pdf". Over 250 of these PDF documents were downloaded for further processing. Documents were then manually restricted to scientific conference papers with a length of 4-12 pages. The PDF documents were then converted to plain text using the PDF995 software suite (as it handled two-columned text better than other programs tried). At the end of this process, 211 documents in plain text format were selected which were converted successfully without problems. The authors then recruited student volunteers from their department to participate in manual keyphrase assignment. Each volunteer was given three PDF files (with author-assigned keyphrases hidden) to assign keyphrases to.
Provide a detailed description of the following dataset: NUS
Mindgames
We generate epistemic reasoning problems using modal logic to target theory of mind (ToM) in natural language processing models.
Provide a detailed description of the following dataset: Mindgames
probability_words_nli
This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible".
Provide a detailed description of the following dataset: probability_words_nli
RESD
Russian dataset of emotional speech dialogues. This dataset was assembled from ~3.5 hours of live speech by actors who each voiced pre-assigned emotions in a dialogue for ~3 minutes.

Each sample of the dataset contains the name of the part from the original studio source, a speech file (16000 or 44100 Hz) of a human voice, one of 7 labeled emotions, and the speech-to-text transcription of the utterance.

Emotions are represented by 7 states: **anger**, **disgust**, **fear**, **enthusiasm**, **happiness**, **neutral** and **sadness**.

This dataset was created by Artem Amentes, Nikita Davidchuk and Ilya Lubenets.

```
@misc{Aniemore,
  author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
  title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
  year = {2022},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
  email = {hello@socialcode.ru}
}
```
Provide a detailed description of the following dataset: RESD
AMOS
Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most of the deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which still limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair estimate of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. Information can be found at https://amos22.grand-challenge.org.
Provide a detailed description of the following dataset: AMOS
PhotoChat
PhotoChat is the first dataset that casts light on photo-sharing behavior in online messaging. PhotoChat contains 12k dialogues, each paired with a user photo that is shared during the conversation. Based on this dataset, we propose two tasks to facilitate research on image-text modeling: a photo-sharing intent prediction task, which predicts whether one intends to share a photo in the next conversation turn, and a photo retrieval task, which retrieves the most relevant photo given the dialogue context.
Provide a detailed description of the following dataset: PhotoChat
MICCAI iSEG-2017
The MICCAI iSEG-2017 dataset is described at https://iseg2017.web.unc.edu/how-to-cite/. Its training set contains 10 subjects, each with a T1 image (T1-1 through T1-10), a T2 image (T2-1 through T2-10), and ground-truth labels. The test set contains 13 subjects (T-11 through T-23).
Provide a detailed description of the following dataset: MICCAI iSEG-2017
MMConv
The main goal of the data collection is to acquire highly natural conversations that cover a wide variety of styles and scenarios. In total, the presented corpus covers five domains: Food, Hotel, Nightlife, Shopping mall and Sightseeing. Controlled by the various task settings, the collected dialogues cover between one and four domains each, and are thus of greatly varying length and complexity. There are 808 single-task dialogues that contain a single venue target and 4,298 multi-task dialogues containing two to four venue targets. These venues usually belong to different domains.
Provide a detailed description of the following dataset: MMConv
SIMMC2.0
Next generation task-oriented dialog systems need to understand conversational contexts with their perceived surroundings, to effectively help users in the real-world multimodal environment. Existing task-oriented dialog datasets aimed towards virtual assistance fall short and do not situate the dialog in the user's multimodal context. To overcome this, we present a new dataset for Situated and Interactive Multimodal Conversations, SIMMC 2.0, which includes 11K task-oriented user<->assistant dialogs (117K utterances) in the shopping domain, grounded in immersive and photo-realistic scenes. The dialogs are collected using a two-phase pipeline: (1) a novel multimodal dialog simulator generates simulated dialog flows, with an emphasis on diversity and richness of interactions; (2) the generated utterances are manually paraphrased to collect diverse referring expressions. We provide an in-depth analysis of the collected dataset and describe in detail the four main benchmark tasks we propose. Our baseline model, powered by a state-of-the-art language model, shows promising results and highlights new challenges and directions for the community to study.
Provide a detailed description of the following dataset: SIMMC2.0
PAX-Ray++
The PAX-Ray++ dataset uses pseudo-labeled thorax CTs to enable the segmentation of anatomy in chest X-rays. By projecting the CTs onto a 2D plane, we gather fine-grained annotated images resembling radiographs. It contains 7,377 frontal and lateral view images, each annotated with 157 anatomy classes, and over 2 million annotated instances in total.
Provide a detailed description of the following dataset: PAX-Ray++
Famous Keyword Twitter Replies
The **"Famous Keyword Twitter Replies Dataset"** is a comprehensive collection of Twitter data that focuses on popular keywords and their associated replies. This dataset contains five essential columns that provide valuable insights into the Twitter conversation dynamics: 1. **Keyword:** This column represents the specific keyword or topic of interest that generated the original tweet. It helps identify the context or subject matter around which the conversation revolves. 2. **Main_tweet:** The main_tweet column contains the original tweet related to the keyword. It serves as the starting point or focal point of the conversation and often provides essential information or opinions on the given topic. 3. **Main_likes:** This column provides the number of likes received by the main_tweet. Likes serve as a measure of engagement and indicate the level of popularity or resonance of the original tweet within the Twitter community. 4. **Reply:** The reply column consists of the replies or responses to the main_tweet. These replies may include comments, opinions, additional information, or discussions related to the keyword or the original tweet itself. The replies help capture the diverse perspectives and conversations that emerge in response to the main_tweet. 5. **Reply_likes:** This column records the number of likes received by each reply. Similar to the main_likes column, the reply_likes column measures the level of engagement and popularity of individual replies. It enables the identification of particularly noteworthy or well-received replies within the dataset. By analyzing this "Famous Keyword Twitter Replies Dataset," researchers, analysts, and data scientists can gain valuable insights into how popular keywords spark discussions on Twitter and how these discussions evolve through replies. The dataset's information on likes allows for the evaluation of tweet and reply popularity, helping to identify influential or impactful content. This dataset serves as a valuable resource for various applications, including sentiment analysis, trend identification, opinion mining, and understanding social media dynamics. &gt; Number of tweets for each pairs of tweet and reply **Total has 17255 pairs of tweet/reply**
Provide a detailed description of the following dataset: Famous Keyword Twitter Replies
WeiboPolls
### Dataset Description
The dataset focuses on social media polls collected from Weibo, a popular Chinese microblogging platform. It aims to support the empirical study of social media polls and the analysis of user engagement patterns.

### Characteristics of the Dataset
- Size: The dataset consists of 20,252 polls collected from 1,860 users on Weibo.
- Data Collection: The polls were obtained by sampling Weibo posts containing polls and examining the posting history of their authors. The dataset also includes comments on each post.
- Sparsity: The dataset faces the challenge of the sparse distribution of polls on Weibo, as less than 0.1% of the randomly gathered posts contained polls.
- Content: The dataset includes user-generated polls with questions, answer choices, and corresponding votes. The polls often incorporate trendy hashtags to attract user attention and cover various topics, including social events, public emergencies (such as the COVID-19 outbreak), entertainment topics (celebrities, TV shows), and more.

### Motivations and Summary
The motivation behind collecting this dataset is to explore social media polls on Weibo and analyze user engagement patterns. The study aims to understand how users interact with polls, the influence of polls on user engagement, and the types of topics that are more likely to contain polls. The dataset provides insights into user behavior on Weibo by examining factors such as the length of posts, comments, questions, and answers. It also highlights the preference for voting over commenting as a means of expressing opinions. The analysis suggests that posts with polls tend to attract more comments, likes, and reposts than posts without polls.

### Potential Use Cases
This dataset can be useful for various research and practical applications, such as:
- Social Media Analysis: Researchers can analyze the characteristics and dynamics of social media polls, understanding how they influence user engagement and which types of topics attract poll creation.
- User Engagement Studies: The dataset allows for the exploration of user behavior and preferences when interacting with polls, providing insights into the factors that drive user engagement on social media platforms.
- Trend Analysis: By examining the hashtags associated with polls, the dataset can contribute to understanding social events, public emergencies, and entertainment trends on Weibo.
- Marketing and Advertising: The dataset can assist marketers and advertisers in understanding user preferences and interests, enabling them to create targeted campaigns based on the popular topics identified in the dataset.

Please note that the actual contents and specific applications of the dataset may vary based on further analysis and research conducted by users.
Provide a detailed description of the following dataset: WeiboPolls
PTCGA200
**PTCGA200** is a public pathological H&E image dataset of patches from TCGA (Patch TCGA), covering 200 microns at 512 px.
Provide a detailed description of the following dataset: PTCGA200
PCam200
**PCam200** is a public pathological H&E image dataset from Patch Camelyon, covering 200 microns at 512 px, made in the same manner from the Camelyon2016 challenge dataset.
Provide a detailed description of the following dataset: PCam200
SegPANDA200
**SegPANDA200** is a public pathological H&E image dataset for the segmentation task of the PANDA challenge, covering 200 microns at 512 px, made in the same manner from the PANDA challenge dataset.
Provide a detailed description of the following dataset: SegPANDA200
VC-Clothes
Person re-identification (Reid) is now an active research topic for AI-based video surveillance applications such as specific person search, but the practical issue that the target person(s) may change clothes (the clothes inconsistency problem) has long been overlooked. For the first time, this paper systematically studies this problem. We first overcome the lack of a suitable dataset by collecting a small yet representative real dataset for testing, while building a large realistic synthetic dataset for training and deeper studies. Facilitated by our new datasets, we are able to conduct various interesting new experiments for studying the influence of clothes inconsistency. We find that changing clothes makes Reid a much harder problem, in the sense of bringing difficulties to learning effective representations, and also challenges the generalization ability of previous Reid models to identify persons with unseen (new) clothes. Representative existing Reid models are adopted to show informative results on such a challenging setting, and we also provide some preliminary efforts on improving the robustness of existing models in handling the clothes inconsistency issue in the data. We believe that this study can be inspiring and helpful for encouraging more research in this direction. The dataset is available on the project website: https://wanfb.github.io/dataset.html.
Provide a detailed description of the following dataset: VC-Clothes
CIFAR-10H
CIFAR-10H is a new dataset of soft labels reflecting human perceptual uncertainty for the 10,000-image CIFAR-10 test set. It contains 1,000 images for each of the 10 categories in the original CIFAR-10 dataset. There are a total of 511,400 human classifications collected via Amazon Mechanical Turk. When specifying the task on Amazon Mechanical Turk, participants were asked to categorize each image by clicking one of the 10 labels surrounding it as quickly and accurately as possible (but with no time limit). Label positions were shuffled between participants. After an initial training phase, each participant (2,571 total) categorized 200 images, 20 from each category. Every 20 trials, an obvious image was presented as an attention check, and participants who scored below 75% on these were removed from the final analysis (14 total). We collected 51 judgments per image on average (range: 47–63). Average completion time was 15 minutes, and workers were paid $1.50 total.
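To make the idea of soft labels concrete, here is a minimal NumPy sketch that turns per-image human classification counts into probability distributions and scores model predictions with a soft cross-entropy; the array shapes and variable names are illustrative assumptions, not the dataset's released file format.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shape: one row of human-label counts per image (10 CIFAR-10 classes).
# In the real dataset each image has ~51 judgments; random counts are used here
# purely as placeholders.
counts = rng.integers(1, 10, size=(5, 10)).astype(np.float64)

# Soft labels: normalize counts into a probability distribution per image.
soft_labels = counts / counts.sum(axis=1, keepdims=True)

# Placeholder model predictions: softmax over random logits.
logits = rng.normal(size=(5, 10))
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Soft cross-entropy H(p_human, p_model), averaged over images.
soft_ce = -(soft_labels * np.log(probs + 1e-12)).sum(axis=1).mean()
print(f"soft cross-entropy: {soft_ce:.4f}")
```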
Provide a detailed description of the following dataset: CIFAR-10H
OpenSpeaks Voice: Odia
OpenSpeaks Voice: Odia is a large speech dataset in the Odia language of India that is stewarded by Subhashish Panigrahi and is hosted at the O Foundation. It currently hosts over 70,000 audio files under a Universal Public Domain (CC0 1.0) release. Of these, 66,000, hosted on Wikimedia Commons, include pronunciations of words and phrases, and the remaining 4,400 include pronunciations of sentences and are hosted on Mozilla Common Voice. The files on Wikimedia Commons were also released in 2023 as four physical media in the form of DVD-ROMs titled OpenSpeaks Voice: Odia Volume I, OpenSpeaks Voice: Odia Volume II, OpenSpeaks Voice: Balesoria-Odia Volume I, and OpenSpeaks Voice: Balesoria-Odia Volume II. The dataset uses Free/Libre and Open Source Software, primarily web-based platforms such as Lingua Libre and Common Voice. Other tools used for this project include Kathabhidhana (developed by Panigrahi by forking the Voice Recorder for Tamil Wiktionary by Shrinivasan T), Spell4wiki, and Audacity, among others. Over 64,000 files in this dataset are in the standard spoken variant of Odia (Central Odia), and the remaining 6,300 files are in Balesoria (Baleswari), the northern dialect of Odia. OpenSpeaks Voice: Balesoria-Odia Volume II was created by extracting words and phrases from Nani Ma, a Balesoria-Odia documentary short directed by Panigrahi. The files within this dataset include transcriptions in Odia, making them accessible for automatic speech recognition (ASR). All the files are publicly available for ASR research and application building.
Provide a detailed description of the following dataset: OpenSpeaks Voice: Odia
Immobilized fluorescently stained zebrafish through the eXtended Field of view Light Field Microscope 2D-3D dataset
This dataset comprises three immobilized fluorescently stained zebrafish imaged through the eXtended Field of view Light Field Microscope (XLFM, also known as the Fourier Light Field Microscope). The images were preprocessed with the SLNet, which extracts the sparse signals (i.e., the neural activity) from the images. If you intend to use this dataset with PyTorch, you can find a data loader and working source code to load and train networks here. This dataset is part of the publication: Fast light-field 3D microscopy with out-of-distribution detection and adaptation through Conditional Normalizing Flows.
Provide a detailed description of the following dataset: Immobilized fluorescently stained zebrafish through the eXtended Field of view Light Field Microscope 2D-3D dataset
Teeth3DS
Teeth3DS is the first public benchmark, created in the framework of the 3DTeethSeg 2022 MICCAI challenge, to boost the research field and inspire the 3D vision research community to work on intra-oral 3D scan analysis, such as teeth identification, segmentation, labeling, 3D modeling, and 3D reconstruction. Teeth3DS is made of 1,800 intra-oral scans (23,999 annotated teeth) collected from 900 patients, covering the upper and lower jaws separately, acquired and validated by orthodontists/dental surgeons with more than 5 years of professional experience.
Provide a detailed description of the following dataset: Teeth3DS