dataset_name | description | prompt |
|---|---|---|
HC3 | The HC3 (Human ChatGPT Comparison Corpus) dataset consists of nearly 40K questions and their corresponding human/ChatGPT answers. The motivation for this dataset was to study ChatGPT's answers in contrast to humans' answers. The questions span a wide variety of domains, including open-domain, financial, medical, legal, and psychological areas. | Provide a detailed description of the following dataset: HC3 |
STEDUCOV: A DATASET ON STANCE DETECTION IN TWEETS TOWARDS ONLINE EDUCATION DURING COVID-19 PANDEMIC | StEduCov is a dataset annotated for stances toward online education during the COVID-19 pandemic. StEduCov has 17,097 tweets gathered over 15 months, from March 2020 to May 2021, using the Twitter API. The tweets are manually annotated into agree, disagree or neutral classes. We used a set of relevant hashtags and keywords; specifically, we utilised a combination of hashtags, such as '#COVID 19' or '#Coronavirus', with keywords, such as 'education', 'online learning', 'distance learning' and 'remote learning'. To ensure high annotation quality, each tweet was annotated by three different annotators and revised by at least one of three judges. Annotators were guided by instructions, for example that a tweet in the disagree class should contain a clear negative statement about online education or its impact. Another instruction addressed the case where a tweet is negative but refers to other people (e.g. 'my children hate online learning'). | Provide a detailed description of the following dataset: STEDUCOV: A DATASET ON STANCE DETECTION IN TWEETS TOWARDS ONLINE EDUCATION DURING COVID-19 PANDEMIC |
WMT-SLT22 | We provide separate training, development and test data. The training data is available right away. The development and test data will be released in several stages, starting with a release of the development sources only.
The training data comprises two corpora, called FocusNews and SRF, see below for a more detailed description. The linguistic domain of both corpora is general news, and both contain parallel data between Swiss German Sign Language (DSGS) and German. The corpora are distributed through Zenodo. | Provide a detailed description of the following dataset: WMT-SLT22 |
Jester (Gesture Recognition) | **Jester Gesture Recognition** dataset includes 148,092 labeled video clips of humans performing basic, pre-defined hand gestures in front of a laptop camera or webcam. It is designed for training machine learning models to recognize human hand gestures like sliding two fingers down, swiping left or right and drumming fingers. | Provide a detailed description of the following dataset: Jester (Gesture Recognition) |
OMMO | **OMMO** is a new benchmark for several outdoor NeRF-based tasks, such as novel view synthesis, surface reconstruction, and multi-modal NeRF. It contains complex objects and scenes with calibrated images, point clouds and prompt annotations. | Provide a detailed description of the following dataset: OMMO |
Govdocs1 | GovDocs is a corpus of nearly 1 million documents that are freely available for research and may be, to the best of the authors' knowledge, freely redistributed. These documents were obtained by performing searches for words randomly chosen from the Unix dictionary, numbers randomly chosen between 1 and 1 million, and randomized combinations of the two, for documents of specified file types that resided on web servers in the `.gov` domain, using the Yahoo and Google search engines. The documents are representative of a diverse sample of real-world files of various formats produced by a variety of tools, including any malware that may be present in the files. Therefore, the corpus has been used in digital forensics, malware analysis, computer vision, and natural language processing research. | Provide a detailed description of the following dataset: Govdocs1 |
OmniObject3D | **OmniObject3D** is a large vocabulary 3D object dataset with massive high-quality real-scanned 3D objects. OmniObject3D has several appealing properties:
1) Large Vocabulary: It comprises 6,000 scanned objects in 190 daily categories, sharing common classes with popular 2D datasets (e.g., ImageNet and LVIS), benefiting the pursuit of generalizable 3D representations.
2) Rich Annotations: Each 3D object is captured with both 2D and 3D sensors, providing textured meshes, point clouds, multiview rendered images, and multiple real-captured videos.
3) Realistic Scans: The professional scanners support high-quality object scans with precise shapes and realistic appearances. | Provide a detailed description of the following dataset: OmniObject3D |
SPEC5G | **SPEC5G** is a dataset for the analysis of natural language 5G cellular network protocol specifications. SPEC5G contains 3,547,587 sentences with 134M words, drawn from 13,094 cellular network specifications and 13 online websites. It is designed for security-related text classification and summarisation. | Provide a detailed description of the following dataset: SPEC5G |
CMMD | Breast carcinoma is the second most common cancer among women worldwide. Early detection of breast cancer has been shown to increase the survival rate, thereby significantly increasing patients' lifespans. Mammography, a noninvasive imaging tool with low cost, is widely used to diagnose breast disease at an early stage due to its high sensitivity. The recent popularization of artificial intelligence in computer-aided diagnosis creates opportunities for advances in areas such as (1) computer-aided detection for locating suspicious lesions such as masses and microcalcifications, leaving the classification to the radiologist; (2) computer-aided diagnosis for characterizing the suspicious lesion region and/or estimating its probability of onset; and (3) the discovery of predictive image-based biomarkers by applying computational methods to mine the potential relationships between image representation and molecular subtype, including luminal A, luminal B, HER2-positive, and triple-negative.
However, existing publicly available mammography databases are limited by small sample sizes, lack of diversity in patient populations, missing biopsy confirmations, and unknown molecular subtypes. To help fill this gap, we built a database of 1,775 patients from China with benign or malignant breast disease who underwent mammography examination between July 2012 and January 2016. The database consists of 3,728 mammography examinations from these 1,775 patients, with biopsy-confirmed benign or malignant tumor types. For 749 of these patients (1,498 examinations) we also include the patients' molecular subtypes. Image data were acquired on a GE Senographe DS mammography system.
Publication Citation
Cai, H., Huang, Q., Rong, W., Song, Y., Li, J., Wang, J., Chen, J., & Li, L. (2019). Breast Microcalcification Diagnosis Using Deep Convolutional Neural Network from Digital Mammograms. Computational and Mathematical Methods in Medicine, 2019, 1–10. https://doi.org/10.1155/2019/2717454
Wang, J., Yang, X., Cai, H., Tan, W., Jin, C., & Li, L. (2016). Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning. Scientific Reports, 6(1). https://doi.org/10.1038/srep27327
Data Citation
Cui, Chunyan; Li Li; Cai, Hongmin; Fan, Zhihao; Zhang, Ling; Dan, Tingting; Li, Jiao; Wang, Jinghua. (2021) The Chinese Mammography Database (CMMD): An online mammography database with biopsy confirmed types for machine diagnosis of breast. The Cancer Imaging Archive. DOI: https://doi.org/10.7937/tcia.eqde-4b16
TCIA Citation
Clark, K., Vendt, B., Smith, K., Freymann, J., Kirby, J., Koppel, P., Moore, S., Phillips, S., Maffitt, D., Pringle, M., Tarbox, L., & Prior, F. (2013). The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. Journal of Digital Imaging, 26(6), 1045–1057. https://doi.org/10.1007/s10278-013-9622-7 | Provide a detailed description of the following dataset: CMMD |
WDC Products | **WDC Products** is an entity matching benchmark which provides for the systematic evaluation of matching systems along combinations of three dimensions while relying on real-world data. The three dimensions are
i) amount of corner-cases
ii) generalization to unseen entities, and
iii) development set size
WDC Products contains 11715 product offers describing in total 2162 product entities belonging to various product categories. | Provide a detailed description of the following dataset: WDC Products |
Complete data from the Barro Colorado 50-ha plot: 423617 trees, 35 years | The 50-ha plot at Barro Colorado Island was initially demarcated and fully censused in 1982, and has been fully censused 7 times since, every 5 years from 1985 through 2015. Every measurement of every stem over 8 censuses is included in this archive. Most users will need only the 8 R Analytical Tables in the format tree, which come here zipped together into a single archive (bci.tree.zip), plus the single R Species Table. | Provide a detailed description of the following dataset: Complete data from the Barro Colorado 50-ha plot: 423617 trees, 35 years |
Trinity Speech-Gesture Dataset | **Trinity Gesture Dataset** includes 23 takes, totalling 244 minutes of motion capture and audio of a male native English speaker producing spontaneous speech on different topics. The actor’s motion was captured with 20 Vicon cameras at 59.94 frames per second (fps), and the skeleton includes 69 joints. | Provide a detailed description of the following dataset: Trinity Speech-Gesture Dataset |
FZ queries | A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier (CUI). In a retrieval setting, the task consists of retrieving an article from the FindZebra corpus with a CUI that matches the query CUI. | Provide a detailed description of the following dataset: FZ queries |
MTTN | **MTTN** is a large-scale derived and synthesized dataset built on real prompts and indexed with popular image-text datasets such as MS-COCO and Flickr. MTTN consists of over 2.4M sentences divided over 5 stages, creating a combination amounting to over 12M pairs, along with a vocabulary of more than 300 thousand unique words that creates an abundance of variations. | Provide a detailed description of the following dataset: MTTN |
UICaption | **UICaption** is a dataset of 114k UI images paired with descriptions of their functionality. It is designed for the tasks of UI action entailment, instruction-based UI image retrieval, grounding referring expressions, and UI entity recognition. | Provide a detailed description of the following dataset: UICaption |
PIMA Diabetes Dataset with Paper, Experiments, and Code | Please refer to the following paper which includes a description of the dataset and a link to the dataset and the paper code:
Alain Hennebelle, Huned Materwal, and Leila Ismail, "HealthEdge: A Machine Learning-Based Smart Healthcare Framework for Prediction of Type 2 Diabetes in an Integrated IoT, Edge, and Cloud Computing System", arXiv:2301.10450, https://doi.org/10.48550/arXiv.2301.10450 | Provide a detailed description of the following dataset: PIMA Diabetes Dataset with Paper, Experiments, and Code |
BiodivTab | The BiodivTab dataset includes manually labeled tables from the biodiversity domain for Column Type Annotation (CTA) and Cell Entity Annotation (CEA). | Provide a detailed description of the following dataset: BiodivTab |
PushWorld | **PushWorld** is an environment with simplistic physics that requires manipulation planning with both movable obstacles and tools. It contains more than 200 PushWorld puzzles in PDDL and in an OpenAI Gym environment. | Provide a detailed description of the following dataset: PushWorld |
REN-20k Dataset | Reader Emotion News 20k Dataset | Provide a detailed description of the following dataset: REN-20k Dataset |
EHT | The English Headline Treebank (EHT) consists of 1,055 manually annotated and adjudicated Universal Dependencies (UD) syntactic dependency trees for English headlines, created to encourage research on improving NLP pipelines for English headlines. | Provide a detailed description of the following dataset: EHT |
BANDON | **BANDON** is a dataset for building change detection with off-nadir aerial images, composed of off-nadir image pairs of urban and rural areas. Overall, the BANDON dataset contains 2283 pairs of images, 2283 change labels, 1891 BT-flows labels, 1891 pairs of segmentation labels, and 1891 pairs of ST-offsets labels (test sets do not provide auxiliary annotations). | Provide a detailed description of the following dataset: BANDON |
PubMedCite | **PubMedCite** is a domain-specific dataset with about 192K biomedical scientific papers and a large citation graph preserving 917K citation relationships between them. It is characterized by preserving the salient contents extracted from the full texts of references, together with the weighted correlation between those salient contents. | Provide a detailed description of the following dataset: PubMedCite |
MusicCaps | **MusicCaps** is a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts. For each 10-second music clip, MusicCaps provides:
1) A free-text caption consisting of four sentences on average, describing the music and
2) A list of music aspects, describing genre, mood, tempo, singer voices, instrumentation, dissonances, rhythm, etc. | Provide a detailed description of the following dataset: MusicCaps |
Civil Comments | At the end of 2017 the Civil Comments platform shut down and chose to make their ~2m public comments available in a lasting open archive so that researchers could understand and improve civility in online conversations for years to come. Jigsaw sponsored this effort and extended annotation of this data by human raters for various toxic conversational attributes.
In the data supplied for this competition, the text of the individual comment is found in the comment_text column. Each comment in Train has a toxicity label (target), and models should predict the target toxicity for the Test data. This attribute (and all others) is a fractional value representing the fraction of human raters who believed the attribute applied to the given comment (a minimal loading sketch is shown after this entry).
The data also has several additional toxicity subtype attributes. Models do not need to predict these attributes for the competition, they are included as an additional avenue for research. Subtype attributes are:
* `severe_toxicity`
* `obscene`
* `threat`
* `insult`
* `identity_attack`
* `sexual_explicit`
Additionally, a subset of comments have been labelled with a variety of identity attributes, representing the identities that are mentioned in the comment. The columns corresponding to identity attributes are listed below. Only identities with more than 500 examples in the test set (combined public and private) will be included in the evaluation calculation. These identities are shown in bold.
* `male`
* `female`
* `transgender`
* `other_gender`
* `heterosexual`
* `homosexual_gay_or_lesbian`
* `bisexual`
* `other_sexual_orientation`
* `christian`
* `jewish`
* `muslim`
* `hindu`
* `buddhist`
* `atheist`
* `other_religion`
* `black`
* `white`
* `asian`
* `latino`
* `other_race_or_ethnicity`
* `physical_disability`
* `intellectual_or_learning_disability`
* `psychiatric_or_mental_illness`
* `other_disability` | Provide a detailed description of the following dataset: Civil Comments |
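As referenced above, the following is a minimal, illustrative Python sketch of working with this schema. The file name `train.csv` and the 0.5 binarization threshold are assumptions made for the example, not specifics defined by the competition.

```python
import pandas as pd

# Hypothetical file name for the training split released by the competition.
df = pd.read_csv("train.csv")

# `target` is the fraction of human raters who judged the comment toxic;
# binarizing at 0.5 is a common convention (an assumption, not a rule here).
df["is_toxic"] = (df["target"] >= 0.5).astype(int)

# A few of the subtype and identity columns listed above.
subtype_cols = ["severe_toxicity", "obscene", "threat",
                "insult", "identity_attack", "sexual_explicit"]
identity_cols = ["male", "female", "transgender", "homosexual_gay_or_lesbian",
                 "christian", "jewish", "muslim", "black", "white", "asian",
                 "psychiatric_or_mental_illness"]

# Keep only the columns actually present in this particular release.
identity_cols = [c for c in identity_cols if c in df.columns]

print(df[["comment_text", "target", "is_toxic"]].head())
print(df[subtype_cols + identity_cols].mean())
```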
RealDOF | This dataset consists of 50 high-resolution image pairs captured with a dual-camera setup for single-image defocus deblurring. Please note that this is not a training set but a benchmark for evaluation. | Provide a detailed description of the following dataset: RealDOF |
ConsInv Dataset | ConsInv is a stereo RGB + IMU dataset designed for Dynamic SLAM testing and contains two subsets:
- **ConsInv-Indoors** contains sequences in an office setting where small objects are moved.
- **ConsInv-Outdoors** contains sequences in an urban environment, where cars and/or people move.
The novelty of ConsInv dataset is 1) the controlled degree of difficulty, from easy to very hard, and 2) the fact that the difficulty of the sequences comes only from object motion: relative motion between camera and object, motion ambiguity, challenging points of view when objects move. The difficulty does not come from motion speed, lack of features, lens flare, etc. - typically seen in other SLAM datasets. | Provide a detailed description of the following dataset: ConsInv Dataset |
DPD (Dual-view) | DPD dataset has two versions - single view and dual-view. This branch is for dual view benchmark evaluation. | Provide a detailed description of the following dataset: DPD (Dual-view) |
ASHRAE energy prediction III | Assessing the value of energy efficiency improvements can be challenging as there's no way to truly know how much energy a building would have used without the improvements. The best we can do is to build counterfactual models. Once a building is overhauled the new (lower) energy consumption is compared against modeled values for the original building to calculate the savings from the retrofit. More accurate models could support better market incentives and enable lower-cost financing. | Provide a detailed description of the following dataset: ASHRAE energy prediction III |
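The counterfactual idea described above can be sketched in a few lines of Python. This is only an illustration with synthetic data; the model choice (gradient boosting) and the feature layout are assumptions made for the example, not part of the competition data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical pre-retrofit data: hourly features (e.g. air temperature,
# hour of day) and metered energy use of the original building.
rng = np.random.default_rng(0)
X_pre = rng.normal(size=(1000, 2))
y_pre = 50 + 3 * X_pre[:, 0] + rng.normal(scale=2, size=1000)

baseline = GradientBoostingRegressor().fit(X_pre, y_pre)

# Post-retrofit period: the counterfactual is what the *original* building
# would have used under the same conditions.
X_post = rng.normal(size=(500, 2))
y_post_actual = 40 + 3 * X_post[:, 0] + rng.normal(scale=2, size=500)

counterfactual = baseline.predict(X_post)
estimated_savings = float(np.sum(counterfactual - y_post_actual))
print(f"Estimated retrofit savings: {estimated_savings:.1f} energy units")
```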
MarKG | The MarKG dataset has 11,292 entities, 192 relations and 76,424 images, including 2,063 analogy entities and 27 analogy relations. The original intention of MarKG is to provide prior knowledge of analogy entities and relations for better multimodal analogical reasoning. | Provide a detailed description of the following dataset: MarKG |
MARS (Multimodal Analogical Reasoning dataSet) | Analogical reasoning is fundamental to human cognition and holds an important place in various fields. However, previous studies mainly focus on single-modal analogical reasoning and ignore taking advantage of structure knowledge. We introduce the new task of multimodal analogical reasoning over knowledge graphs, which requires multimodal reasoning ability with the help of background knowledge. Our dataset MARS contains 10,685 training, 1,228 validation and 1,415 test instances. | Provide a detailed description of the following dataset: MARS (Multimodal Analogical Reasoning dataSet) |
BabyLM | **BabyLM** is a dataset for small scale language modeling, human language acquisition, low-resource NLP, and cognitive modeling. In partnership with CoNLL and CMCL, it provides a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children. The task has three tracks, two of which restrict the training data to pre-released datasets of 10M and 100M words and are dedicated to explorations of approaches such as architectural variations, self-supervised objectives, or curriculum learning. The final track only restricts the amount of text used, allowing innovation in the choice of the data, its domain, and even its modality (i.e., data from sources other than text is welcome). | Provide a detailed description of the following dataset: BabyLM |
CMU Book Summary Dataset | This dataset contains plot summaries for 16,559 books extracted from Wikipedia, along with aligned metadata from Freebase, including book author, title, and genre.
All data is released under a Creative Commons Attribution-ShareAlike License. For questions or comments, please contact David Bamman (dbamman@cs.cmu.edu). | Provide a detailed description of the following dataset: CMU Book Summary Dataset |
SQA3D | SQA3D is a dataset for embodied scene understanding, where an agent needs to comprehend the scene it is situated in from a first-person perspective and answer questions about it. The questions are designed to be situated, embodied and knowledge-intensive. We offer three different modalities to represent a 3D scene: 3D scan, egocentric video and BEV (bird's-eye-view) picture. | Provide a detailed description of the following dataset: SQA3D |
RGB Arabic Alphabets Sign Language Dataset | This paper introduces the RGB Arabic Alphabet Sign Language (AASL) dataset. AASL comprises 7,856 raw and fully labeled RGB images of the Arabic sign language alphabets, which, to the best of our knowledge, is the first publicly available RGB dataset of its kind. The dataset is aimed at helping those interested in developing real-life Arabic sign language classification models. AASL was collected from more than 200 participants and with different settings such as lighting, background, image orientation, image size, and image resolution. Experts in the field supervised, validated and filtered the collected images to ensure a high-quality dataset. AASL is made available to the public on Kaggle. | Provide a detailed description of the following dataset: RGB Arabic Alphabets Sign Language Dataset |
ATUE | **ATUE** is an antibody study benchmark with four real-world supervised tasks covering therapeutic antibody engineering, B cell analysis, and antibody discovery. | Provide a detailed description of the following dataset: ATUE |
E-FB15k237 | This dataset is based on FB15k237 and a pre-trained language-model-based KGE. The main task is to correct the wrong knowledge stored in the pre-trained model and replace the incorrect entities with alternative entities. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnEv9z6ATW5Ntbr?usp=share_link). | Provide a detailed description of the following dataset: E-FB15k237 |
PanopTOP31K | Starting from the Panoptic Dataset, we use the PanopTOP framework to generate the PanopTOP31K dataset, consisting of 31K images from 23 different subjects recorded from diverse and challenging viewpoints, also including the top-view. | Provide a detailed description of the following dataset: PanopTOP31K |
ACL-Fig | **ACL-Fig** is a large-scale automatically annotated corpus consisting of 112,052 scientific figures extracted from 56K research papers in the ACL Anthology. The ACL-Fig-pilot dataset contains 1,671 manually labeled scientific figures belonging to 19 categories. | Provide a detailed description of the following dataset: ACL-Fig |
LiDAR-CS | **LiDAR-CS** is a dataset for 3D object detection in real traffic. It contains 84,000 point cloud frames under 6 groups of different sensors but with the same corresponding scenarios, captured from a hybrid realistic LiDAR simulator. | Provide a detailed description of the following dataset: LiDAR-CS |
AVSBench | **AVSBench** is a pixel-level audio-visual segmentation benchmark that provides ground truth labels for sounding objects. The dataset is divided into three subsets: AVSBench-object (Single-source subset, Multi-sources subset) and AVSBench-semantic (Semantic-labels subset). Accordingly, three settings are studied:
1) semi-supervised audio-visual segmentation with a single sound source
2) fully-supervised audio-visual segmentation with multiple sound sources
3) fully-supervised audio-visual semantic segmentation | Provide a detailed description of the following dataset: AVSBench |
Coffereview Dataset | The dataset is based on roughly 6,000 coffee bean reviews published on the website Coffee Review, going back to 1997. All of these reviews are scored with the Q-grading scale (Coffee Review, 2021). | Provide a detailed description of the following dataset: Coffereview Dataset |
Processed CMIP5 EWS Data | Processed CMIP5 data used for testing the CNN-LSTM model. Details in Zenodo description | Provide a detailed description of the following dataset: Processed CMIP5 EWS Data |
GHOSTS | **GHOSTS** is the first natural-language dataset made and curated by working researchers in mathematics that (1) aims to cover graduate-level mathematics and (2) provides a holistic overview of the mathematical capabilities of language models. It is a collection of multiple datasets of prompts, totalling 728 prompts, for which ChatGPT’s output was manually rated by experts. | Provide a detailed description of the following dataset: GHOSTS |
Fraunhofer EZRT XXL-CT Instance Segmentation Me163 | The 'Me 163' was a Second World War fighter airplane and a result of the German air force's secret developments. One of these airplanes is currently owned and displayed in the historic aircraft exhibition of the 'Deutsches Museum' in Munich, Germany. To gain insights with respect to its history, design and state of preservation, a complete CT scan was obtained using an industrial XXL computed tomography scanner at Fraunhofer EZRT.
Using the CT data from the Me 163, all its details can be visually examined at various levels, ranging from the complete hull down to single sprockets and rivets. However, while a trained human observer can identify and interpret the volumetric data with all its parts and connections, a virtual dissection of the airplane and all its different parts would be quite desirable. This means that an instance segmentation of all components and objects of interest into disjoint entities from the CT data is necessary.
Since currently no adequate computer-assisted tools for automated or semi-automated segmentation of such XXL airplane data are available, in a first step an interactive data annotation and object labelling process has been established. So far, seven $512 \times 512 \times 512$ voxel sub-volumes from the Me 163 airplane have been annotated and labelled; the results can potentially be used for various new applications in the field of digital heritage, non-destructive testing, or machine learning. | Provide a detailed description of the following dataset: Fraunhofer EZRT XXL-CT Instance Segmentation Me163 |
EPIC-SOUNDS | **EPIC-SOUNDS** is a large scale dataset of audio annotations capturing temporal extents and class labels within the audio stream of the egocentric videos from EPIC-KITCHENS-100. EPIC-SOUNDS includes 78.4k categorised and 39.2k non-categorised segments of audible events and actions, distributed across 44 classes. | Provide a detailed description of the following dataset: EPIC-SOUNDS |
CaRB | CaRB [Bhardwaj et al., 2019] was developed by re-annotating the dev and test splits of OIE2016 via crowd-sourcing. Besides improving annotation quality, CaRB also provides a new matching scorer. The CaRB scorer uses token-level matching, matching relations with relations and arguments with arguments. | Provide a detailed description of the following dataset: CaRB |
LSOIE | LSOIE is a large-scale OpenIE dataset converted from QA-SRL 2.0 in two domains, i.e., Wikipedia and Science. It is 20 times larger than the next largest human-annotated OpenIE dataset, and is thus reliable for fair evaluation. LSOIE provides n-ary OpenIE annotations, and gold tuples are in the 〈ARG0, Relation, ARG1, . . . , ARGn〉 format. The dataset has two subsets, namely LSOIE-wiki and LSOIE-sci, for comprehensive evaluation. LSOIE-wiki has 24,251 sentences and LSOIE-sci has 47,919 sentences. | Provide a detailed description of the following dataset: LSOIE |
Motion Capture Data for Hand Motion Embodiment | **Motion Capture Data for Hand Motion Embodiment** contains demonstrations of different hand motions recorded with the Qualisys MOCAP system. | Provide a detailed description of the following dataset: Motion Capture Data for Hand Motion Embodiment |
DigiCall | We release 3,691 earnings call transcripts together with an annotated dataset labeled by linguists specifically for digital strategy maturity.
https://github.com/hpataci/DigiCall | Provide a detailed description of the following dataset: DigiCall |
SkinCon | **SkinCon** is a skin disease dataset densely annotated by dermatologists. SkinCon includes 3230 images from the Fitzpatrick 17k skin disease (Fitzpatrick Skin Tone) dataset densely labelled with 48 clinical concepts, 22 of which have at least 50 images representing the concept. The concepts used were chosen by two dermatologists considering the clinical descriptor terms used to describe skin lesions. Examples include "plaque", "scale", and "erosion". | Provide a detailed description of the following dataset: SkinCon |
FaceOcc | **FaceOcc** is a high-quality face occlusion dataset which manually labels the occlusions in CelebAMask-HQ and complements them with additional occlusions and textures from the internet. The occlusion types cover sunglasses, spectacles, hands, masks, scarfs, microphones, etc. | Provide a detailed description of the following dataset: FaceOcc |
FES | FES is an indoor dataset that can be used for evaluation of deep learning approaches.
It consists of 301 top-view fisheye images from an indoor scene.
Annotations include bounding boxes and instance segmentation masks for 6 classes. | Provide a detailed description of the following dataset: FES |
Rice Grains BRRI | A balanced dataset of 200 images consisting of three classes: False Smut, Neck Blast, and healthy grains. Some of these images contain both diseases together. Field data were collected under the supervision of staff from the Bangladesh Rice Research Institute (BRRI). | Provide a detailed description of the following dataset: Rice Grains BRRI |
The Copiale Cipher | The Copiale Cipher is a 105-page manuscript containing all in all around 75,000 characters. Beautifully bound in green and gold brocade paper and written on high-quality paper with two different watermarks, the manuscript can be dated back to around 1750. Apart from what is obviously an owner's mark (“Philipp 1866”) and a note at the end of the last page (“Copiales 3”), the manuscript is completely encoded. The cipher employed consists of 100 different symbols, comprising everything from Latin and Greek letters to diacritics and graphic signs such as Zodiac and alchemical symbols. Catchwords (preview fragments) of one to three or four characters are written at the bottom of left-hand pages.
Transcription, transliteration and decipherment brought to light a German text obviously related to an 18th century secret society, namely the "oculist order". A parallel manuscript is located at the Niedersächsisches Landesarchiv, Staatsarchiv Wolfenbüttel, Germany. | Provide a detailed description of the following dataset: The Copiale Cipher |
Fontenay Dataset | This data set encompasses 104 images and transcriptions of digital images of original charters from the Cistercian abbey Fontenay in Burgundy (France), dating mainly from the 12th c. and until 1213. The original data set was created as part of the ANR ORIFLAMMS (ANR-12-CORP-0010) project. Texts were transcribed in the original TEI-XML format, rendering both abbreviated and expanded forms of the original text. The alignment data was produced by merging coordinates created through the Oriflamms software. A new version was prepared in March-June 2022 as part of the research for the following paper: Camps, Jean-Baptiste, Chahan Vidal-Gorène, Dominique Stutzmann, Marguerite Vernet, and Ariane Pinche. « Data Diversity in Handwritten Text Recognition: Challenge or Opportunity? » In Digital Humanities 2022. Conference Abstracts (The University of Tokyo, Japan, 25-29 July 2022), published by DH2022 Local Organizing Committee, 160‑65. Tokyo, 2022. | Provide a detailed description of the following dataset: Fontenay Dataset |
Google1000 | A collection of 1000 public domain volumes that were scanned as part of the Google Book Search project. It is being distributed to support research in a variety of disciplines. Each volume comes
with the scanned images, OCR output, page tags and basic metadata. The volumes in this dataset are written in 4 languages: English, French, Italian and Spanish. This document describes the organization of the dataset and the file formats. | Provide a detailed description of the following dataset: Google1000 |
MaxwellBlobs | The dataset consists of random electromagnetic scatterers and their associated fields when illuminated by a 1000 nm plane wave.
The scatterers have a refractive index of $n=1.5$ and are surrounded by air ($n=1.0$), and the side length of the simulated area is 5.12 microns.
| Type | Samples | Input data | Output data | Input shape | Output shape |
|------|---------|------------------|---------------------|-------------|---------------|
| 2D | 17040 | Scatterer pixels | $E_z$ | 128x128 | 2x128x128 |
| 3D | 8720 | Scatterer voxels | $E_x$, $E_y$, $E_z$ | 128x128x128 | 6x128x128x128 | | Provide a detailed description of the following dataset: MaxwellBlobs |
MotionID: IMU specific motion | Dataset for User Verification part of MotionID: Human Authentication Approach.
Data type: bin (should be converted by attached notebook).
~50 hours of IMU (Inertial Measurement Units) data for one specific motion pattern, provided by 101 users.
For data collection only one smartphone (Samsung Galaxy S20) was used. The data was collected by 101 users, each of whom lifted the smartphone from the table 300 times, 50 times for each of the 6 locations of the device (see picture below).
The data collection procedure for user verification was as follows:
1) The user lifts the locked smartphone from the table surface to a comfortable level
2) The user unlocks the smartphone via an in-display ultrasonic fingerprint sensor
3) The user locks the smartphone via the Home button and puts the device back down
4) The cycle is repeated 50 times at each location, for a total of 300 times per person
Each measurement has a corresponding timestamp. The Screen.txt file consists of timestamps with the current status of the device:
1) SCREEN_OFF - the phone is turned off
2) SCREEN_ON - the phone is on
3) USER_PRESENT - the phone has just been unlocked | Provide a detailed description of the following dataset: MotionID: IMU specific motion |
MotionID: IMU all motions part1 | Dataset (part 1/3) for Motion Patterns Identification part of MotionID: Human Authentication Approach.
Data type: bin (should be converted by attached notebook).
Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of two weeks, the users switched smartphones with each other and restarted the process. Each user spent 2 weeks per smartphone during the whole data collection process, which took 12 weeks in total. Throughout the experiment, the Galaxy S10e was the main and only device of each user. The smartphones were used habitually and ordinarily, with the only difference from real-life scenarios being that the data collection app was always on.
Each measurement has a corresponding timestamp. The Screen.txt file consists of timestamps with the current status of the device:
1) SCREEN_OFF - the phone is turned off
2) SCREEN_ON - the phone is on
3) USER_PRESENT - the phone has just been unlocked | Provide a detailed description of the following dataset: MotionID: IMU all motions part1 |
MotionID: IMU all motions part2 | Dataset (part 2/3) for Motion Patterns Identification part of MotionID: Human Authentication Approach.
Data type: bin (should be converted by attached notebook).
Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of two weeks, the users switched smartphones with each other and restarted the process. Each user spent 2 weeks per smartphone during the whole data collection process, which took 12 weeks in total. Throughout the experiment, the Galaxy S10e was the main and only device of each user. The smartphones were used habitually and ordinarily, with the only difference from real-life scenarios being that the data collection app was always on.
Each measurement has a corresponding timestamp. The Screen.txt file consists of timestamps with the current status of the device:
1) SCREEN_OFF - the phone is turned off
2) SCREEN_ON - the phone is on
3) USER_PRESENT - the phone has just been unlocked | Provide a detailed description of the following dataset: MotionID: IMU all motions part2 |
MotionID: IMU all motions part3 | Dataset (part 3/3) for Motion Patterns Identification part of MotionID: Human Authentication Approach.
Data type: bin (should be converted by attached notebook).
Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of two weeks, the users switched smartphones with each other and restarted the process. Each user spent 2 weeks per smartphone during the whole data collection process, which took 12 weeks in total. Throughout the experiment, the Galaxy S10e was the main and only device of each user. The smartphones were used habitually and ordinarily, with the only difference from real-life scenarios being that the data collection app was always on.
Each measurement has a corresponding timestamp. The Screen.txt file consists of timestamps with the current status of the device:
1) SCREEN_OFF - the phone is turned off
2) SCREEN_ON - the phone is on
3) USER_PRESENT - the phone has just been unlocked | Provide a detailed description of the following dataset: MotionID: IMU all motions part3 |
MOSE | **CoMplex video Object SEgmentation (MOSE)** is a dataset for studying the tracking and segmentation of objects in complex environments. MOSE contains 2,149 video clips and 5,200 objects from 36 categories, with 431,725 high-quality object segmentation masks. The most notable feature of the MOSE dataset is its complex scenes with crowded and occluded objects. | Provide a detailed description of the following dataset: MOSE |
DIV2KRK | Using the validation set (100 images) from the widely used DIV2K dataset, we blurred and subsampled each image with a different, randomly generated kernel. Kernels were $11 \times 11$ anisotropic Gaussians with random lengths $\lambda_1, \lambda_2 \sim U(0.6, 5)$ independently distributed for each axis, rotated by a random angle $\theta \sim U[-\pi, \pi]$. | Provide a detailed description of the following dataset: DIV2KRK |
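For intuition, here is a small, illustrative Python sketch of drawing one such random anisotropic Gaussian kernel and applying blur-then-subsample to an image. The ×2 scale factor and the use of scipy.ndimage for convolution are assumptions made for the example, not specifics stated by the dataset.

```python
import numpy as np
from scipy import ndimage

def random_anisotropic_kernel(size=11, rng=np.random.default_rng(0)):
    # Random per-axis lengths and a random rotation angle,
    # following the distributions described above.
    lam1, lam2 = rng.uniform(0.6, 5.0, size=2)
    theta = rng.uniform(-np.pi, np.pi)

    # Inverse covariance of a rotated 2D Gaussian.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    sigma = R @ np.diag([lam1**2, lam2**2]) @ R.T
    inv_sigma = np.linalg.inv(sigma)

    # Evaluate the Gaussian on an 11x11 grid centred at zero and normalise.
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    coords = np.stack([xx, yy], axis=-1)
    k = np.exp(-0.5 * np.einsum("...i,ij,...j->...", coords, inv_sigma, coords))
    return k / k.sum()

def degrade(image, kernel, scale=2):
    # Blur with the kernel, then subsample by the (assumed) scale factor.
    blurred = ndimage.convolve(image, kernel, mode="reflect")
    return blurred[::scale, ::scale]

img = np.random.rand(128, 128)            # stand-in for a DIV2K image
lr = degrade(img, random_anisotropic_kernel())
print(lr.shape)                            # (64, 64)
```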
ChatGPT-software-testing | ## Dataset Description
Our dataset contains questions from a well-known software testing book **Introduction to Software Testing 2nd Edition** by Ammann and Offutt.
We use all the text-book questions in Chapters 1 to 5 that have solutions available on the book’s official website.
Our dataset contains 40 such questions from these five chapters. 31 of the 40 questions are parts of multipart questions and the remaining 9 are independent.
This tool generates responses from ChatGPT automatically for these questions. All of these questions are run in both shared and separate contexts.
More information about the contexts can be found below.
### Combined.xlsx
Contains all the questions & answers for 3 iterations of both shared and separate contexts. Contains labels for answers and explanations given by ChatGPT.
### Combined_pair.xlsx
Contains the same data as **combined.xlsx** except for the questions that are independent i.e., not part of a multipart question.
### Combined_analysis.xlsx
Contains the result and analysis of the four research questions. Besides, it contains various illustrations for the results.
### Combined-temp.xlsx
Questions with missing shared contexts are replaced with the answers for the separate context to easily fetch the data for
RQ2 & RQ3 from a single column.
### examples folders
Contains examples of some interesting response categories.
### Case Study.pdf
Contains the following analysis:
- When are responses likely to be incorrect?
- What are the reasons for responses being incorrect?
- Can we fix it with prompt engineering?
- Case studies with actual examples.
## Separate Context Query
In separate context queries, we treat each of the 31 multipart questions as an independent question.
Each sub-question is asked in a separate chat thread.
Combining with the nine independent questions, a total of 40 questions are asked for each run. To evaluate the consistency in
ChatGPT’s responses, we collect a total of three runs for each question, which results in a total of 120 responses from ChatGPT.
## Shared Context Query
Our dataset contains six multipart questions comprising a total of 31 sub-questions, and nine questions that are independent.
The sub-questions are asked in a chat thread that is shared with other sub-questions as long as the sub-questions
refer to the same code or scenario. | Provide a detailed description of the following dataset: ChatGPT-software-testing |
Facebook MSC | Despite recent improvements in open-domain dialogue models, state-of-the-art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other’s interests and discuss the things they have learnt from past sessions. We show how existing models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better. In particular, we find retrieval-augmented methods and methods with an ability to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state of the art. | Provide a detailed description of the following dataset: Facebook MSC |
IISc VEED-Dynamic | **IISc VEED-Dynamic** consists of 200 diverse indoor and outdoor scenes (see samples below). The videos are rendered using blender and the blend files obtained for the scenes are mainly from blendswap and turbosquid. 4 different camera trajectories are added to each scene and thus we have a total of 800 videos. Motion is added to pre-existing objects in the scene or new objects are added and animated. The videos are rendered at full HD resolution (1920 x 1080) and at 30fps and contain 12 frames each. | Provide a detailed description of the following dataset: IISc VEED-Dynamic |
DIBCO 2009 | DIBCO 2009 is the first International Document Image Binarization Contest organized in the context of ICDAR 2009 conference. The general objective of the contest is to identify current advances in document image binarization using established evaluation performance measures. | Provide a detailed description of the following dataset: DIBCO 2009 |
H-DIBCO 2010 | H-DIBCO 2010 is the International Document Image Binarization Contest which is dedicated to handwritten document images organized in conjunction with ICFHR 2010 conference. The general objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. | Provide a detailed description of the following dataset: H-DIBCO 2010 |
IISc VEED | **IISc VEED** consists of 200 diverse indoor and outdoor scenes (see samples below). The videos are rendered with blender and the blend files are obtained for the scenes mainly from blendswap and turbosquid. 4 different camera trajectories are added to each scene and thus we have a total of 800 videos. The videos are rendered at full HD resolution (1920 x 1080) and at 30fps and contain 12 frames each. | Provide a detailed description of the following dataset: IISc VEED |
TACRED-Revisited | The TACRED-Revisited dataset improves the crowd-sourced TACRED dataset for relation extraction by relabeling the dev and test sets using expert linguistic annotators. Relabeling focuses on the 5K most challenging instances in dev and test, in total, 51.2% of these are corrected. Published at ACL 2020.
Paper (arXiv): https://arxiv.org/abs/2004.14855 | Provide a detailed description of the following dataset: TACRED-Revisited |
SurgT | **SurgT** is a dataset for benchmarking 2D Trackers in Minimally Invasive Surgery (MIS). It contains a total of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters. | Provide a detailed description of the following dataset: SurgT |
WUSTL_EHMS_2020 | The WUSTL-EHMS-2020 dataset was created using a real-time Enhanced Healthcare Monitoring System (EHMS) testbed [1]. This testbed collects both network flow metrics and patients' biometrics, addressing the scarcity of datasets that combine the two. | Provide a detailed description of the following dataset: WUSTL_EHMS_2020 |
FINDSum | **FINDSum** is a large-scale dataset for long text and multi-table summarization. It is built on 21,125 annual reports from 3,794 companies and has two subsets for summarizing each company’s results of operations and liquidity. | Provide a detailed description of the following dataset: FINDSum |
Apnea-ECG | The data consist of 70 records, divided into a learning set of 35 records (a01 through a20, b01 through b05, and c01 through c10), and a test set of 35 records (x01 through x35), all of which may be downloaded from this page. Recordings vary in length from slightly less than 7 hours to nearly 10 hours each. Each recording includes a continuous digitized ECG signal, a set of apnea annotations (derived by human experts on the basis of simultaneously recorded respiration and related signals), and a set of machine-generated QRS annotations (in which all beats regardless of type have been labeled normal). In addition, eight recordings (a01 through a04, b01, and c01 through c03) are accompanied by four additional signals (Resp C and Resp A, chest and abdominal respiratory effort signals obtained using inductance plethysmography; Resp N, oronasal airflow measured using nasal thermistors; and SpO2, oxygen saturation). | Provide a detailed description of the following dataset: Apnea-ECG |
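As an illustration of how such records and annotations are typically accessed, below is a minimal Python sketch using the wfdb package. The PhysioNet directory name 'apnea-ecg' and the 'apn'/'qrs' annotation extensions are assumptions based on common PhysioNet conventions, so check the database documentation before relying on them.

```python
import wfdb

# Read one learning-set record (ECG signal) directly from PhysioNet.
# 'apnea-ecg' is the assumed PhysioNet database directory.
record = wfdb.rdrecord("a01", pn_dir="apnea-ecg")
print(record.sig_name, record.fs, record.p_signal.shape)

# Expert apnea annotations -- assumed extension 'apn'.
apn = wfdb.rdann("a01", "apn", pn_dir="apnea-ecg")
print(apn.symbol[:10])   # e.g. 'A' for apnea, 'N' for normal

# Machine-generated QRS annotations -- assumed extension 'qrs'.
qrs = wfdb.rdann("a01", "qrs", pn_dir="apnea-ecg")
print(len(qrs.sample), "QRS annotations")
```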
APRICOT-Mask | We present the APRICOT-Mask dataset, which augments the APRICOT dataset with pixel-level annotations of adversarial patches. We hope APRICOT-Mask along with the APRICOT dataset can facilitate the research in building defenses against physical patch attacks, especially patch detection and removal techniques. | Provide a detailed description of the following dataset: APRICOT-Mask |
TTStroke-21 ME22 | TTStroke-21 for MediaEval 2022. The task is of interest to researchers in the areas of machine learning (classification), visual content analysis, computer vision and sport performance. We explicitly encourage participation from researchers focusing specifically on computer-aided analysis of sport performance.
Our focus is on recordings that have been made with widespread and cheap video cameras, e.g. GoPro. We use a dataset specifically recorded at a sport faculty facility and continuously extended by students and teachers. This dataset consists of player-centered videos recorded in natural conditions without markers or sensors. It comprises 20 table tennis stroke classes and a rejection class. The problem is hence a typical research topic in the field of video indexing: for a given recording, we need to label the video by recognizing each stroke appearing in it.
Ground truth
The annotations consist of a description of the handedness of the player and information for each stroke performed (starting and ending frames, class of the stroke). The annotation process was designed as a crowdsourcing method. The annotation sessions are supervised by professional table tennis players and teachers, where the annotator spots and labels strokes in videos using a purpose-built, user-friendly web platform. We had a team of 15 annotators, professionals in the field of table tennis. Since a video can be annotated by several annotators, stroke detection according to the annotations was necessary. Our dataset is player-centered, with only one player in each video. An overlap between each annotation of 25% of the annotated stroke duration is allowed. Indeed, during matches with fast exchanges, the boundaries between strokes are hard to determine and annotators would sometimes overlap the annotations between two successive strokes.
Evaluation methodology
Twenty stroke classes and a non-stroke class are considered according to the rules of table tennis. This taxonomy was designed with professional table tennis teachers. We are working on videos recorded at the Faculty of Sports of the University of Bordeaux. Students are the sportsmen filmed and the teachers are supervising exercises conducted during the recording sessions. The recordings are markerless and allow the players to perform in natural conditions.
Subtask 1: for the classification subtask the table tennis videos are trimmed. The trimmed videos are distributed across the considered classes in the train and validation sets. A test set is provided without the distribution information. The participants are asked to fill an xml file with the prediction of their classification model. Submissions will be evaluated in terms of accuracy per class and global accuracy.
Subtask 2: for the detection subtask, supplementary videos are provided untrimmed and distributed across train, validation and test sets. For the train and validation sets, the temporal boundaries of the performed strokes are supplied in an xml file. The participants are asked to fill the empty xml files dedicated to the test video with the stroke boundaries inferred by their method. The IoU metric on temporal segments will be used for evaluation. | Provide a detailed description of the following dataset: TTStroke-21 ME22 |
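To make the detection metric concrete, below is a small, illustrative Python helper computing the IoU between a predicted and a ground-truth temporal segment given as (start frame, end frame) pairs; the exact matching and aggregation rules used by the organisers are not specified here, so this is only a sketch of the core quantity.

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) frame pairs."""
    pred_start, pred_end = pred
    gt_start, gt_end = gt
    intersection = max(0, min(pred_end, gt_end) - max(pred_start, gt_start))
    union = (pred_end - pred_start) + (gt_end - gt_start) - intersection
    return intersection / union if union > 0 else 0.0

# Example: a predicted stroke overlapping most of an annotated stroke.
print(temporal_iou((120, 200), (130, 210)))  # 0.777...
```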
Tasksource | Huggingface Datasets is a great library, but it lacks standardization, and datasets require preprocessing work to be used interchangeably. tasksource automates this and facilitates reproducible multi-task learning at scale.
Each dataset is standardized to a MultipleChoice, Classification, or TokenClassification dataset with identical fields. We do not support generation tasks, as they are addressed by promptsource. All implemented preprocessings are in tasks.py or tasks.md. A preprocessing is a function that accepts a dataset and returns the standardized dataset (a hypothetical sketch of such a function is shown below). Preprocessing code is concise and human-readable. | Provide a detailed description of the following dataset: Tasksource |
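Below is a purely hypothetical sketch of what such a standardizing preprocessing function might look like using the Hugging Face datasets library; the target fields sentence1/sentence2/labels and the example dataset are illustrative assumptions, not the actual tasksource implementation.

```python
from datasets import load_dataset, DatasetDict

def standardize_classification(dataset: DatasetDict) -> DatasetDict:
    """Map a pairwise-classification dataset onto a fixed Classification schema.

    Hypothetical target fields: 'sentence1', 'sentence2', 'labels'.
    """
    def to_standard(example):
        return {
            "sentence1": example["premise"],
            "sentence2": example["hypothesis"],
            "labels": example["label"],
        }

    # Drop the original columns so every standardized dataset has identical fields.
    columns_to_drop = dataset["train"].column_names
    return dataset.map(to_standard, remove_columns=columns_to_drop)

# Example usage with a well-known NLI dataset (field names assumed above).
snli = load_dataset("snli")
standardized = standardize_classification(snli)
print(standardized["train"].column_names)  # ['sentence1', 'sentence2', 'labels']
```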
TTStroke-21 ME21 | This task offers researchers an opportunity to test their fine-grained classification methods for detecting and recognizing strokes in table tennis videos. (The low inter-class variability makes the task more difficult than with usual general datasets like UCF-101.) The task offers two subtasks:
Subtask 1: Stroke Detection: Participants are required to build a system that detects whether a stroke has been performed, whatever its class, and to extract its temporal boundaries. The aim is to be able to distinguish between moments of interest in a game (players performing strokes) from irrelevant moments (between strokes, picking up the ball, having a break…). This subtask can be a preliminary step for later recognizing a stroke that has been performed.
Subtask 2: Stroke Classification: Participants are required to build a classification system that automatically labels video segments according to a performed stroke. There are 20 possible stroke classes.
Compared with Sports Video 2020, this year we extend the task in the direction of detection and also enrich the dataset with new and more diverse stroke samples. The overview paper of the task is already available here.
Participants are encouraged to make their code public with their submission. We provide a public baseline, have a look here. | Provide a detailed description of the following dataset: TTStroke-21 ME21 |
HeiChole Benchmark | Analyzing the surgical workflow is a prerequisite for many applications in computer assisted surgery (CAS), such as context-aware visualization of navigation information, specifying the most probable tool required next by the surgeon or determining the remaining duration of surgery. Since laparoscopic surgeries are performed using an endoscopic camera, a video stream is always available during surgery, making it the obvious choice as input sensor data for workflow analysis. Moreover, this offers the opportunity for structured assessment of surgical skill for safety, teaching and quality management.
The sub-challenge “Surgical Workflow and Skill Analysis” focuses on the online workflow analysis of laparoscopic surgeries. Participants are challenged to segment laparoscopic surgeries for gallbladder removal (cholecystectomy) into surgical phases, to recognize instrument presence and surgical actions, and to classify surgical skill based on video data. Participants are encouraged (but not required!) to submit different results for phase segmentation, action recognition, instrument presence and skill classification. This novel kind of challenge investigates the current state-of-the-art results on surgical workflow analysis and skill assessment on one comprehensive dataset. | Provide a detailed description of the following dataset: HeiChole Benchmark |
Endoscapes | Cholecystectomy is a very common abdominal surgical procedure almost ubiquitously performed with a laparoscopic approach, hence guided by an endoscopic video. Deep learning models for LC video analysis have been developed with the aim of assisting surgeons during interventions, improving staff awareness and readiness, and facilitating postoperative documentation and research. However, datasets and models for video semantic segmentation of LC are lacking. Recognizing fine-grained hepatocystic anatomy through semantic segmentation could help surgeons better assess the critical view of safety (CVS), a universally recommended technique consisting in clearly exposing anatomical landmarks to prevent bile duct injuries. Additionally, segmentation masks of hepatocystic structures could be leveraged by deep learning models for automatic assessment of CVS and surgical action recognition to improve their performance. We believe that generating a dataset for video semantic segmentation of hepatocystic anatomy will promote surgical data science research and accelerate the development of applications for surgical safety. To generate a representative dataset, consecutive endoscopic videos of LC performed at Nouvel Hopital Civil (Strasbourg, France) were collected. Non-endoscopic, i.e., out-of-body, video frames were blacked out to comply with European data protection regulations. A frame every 30 seconds was sampled from the portion of the endoscopic video showing the hepatocystic anatomy being dissected, the most critical phase of the surgical procedure and the one during which surgeons should achieve the CVS. These unselected and regularly spaced video frames were manually annotated with pixel-wise semantic annotations of anatomical and surgical instances, such as the cystic artery and the dissection. Overall, 1933 regularly spaced video frames from 201 LC videos were annotated with segmentation masks for 29 classes of the hepatocystic triangle; annotation was performed in duplicate by specifically trained computer scientists and surgeons. | Provide a detailed description of the following dataset: Endoscapes |
ManiSkill2 | **ManiSkill2** is the next generation of the SAPIEN ManiSkill benchmark, to address critical pain points often encountered by researchers when using benchmarks for generalizable manipulation skills. It includes 20 manipulation task families with 2000+ object models and 4M+ demonstration frames, which cover stationary/mobile-base, single/dual-arm, and rigid/soft-body manipulation tasks with 2D/3D input data simulated by fully dynamic engines. | Provide a detailed description of the following dataset: ManiSkill2 |
OLIVES Dataset | Clinical diagnosis of the eye is performed over multifarious data modalities including scalar clinical labels, vectorized biomarkers, two-dimensional fundus images, and three-dimensional Optical Coherence Tomography (OCT) scans. While the clinical labels, fundus images and OCT scans are instrumental measurements, the vectorized biomarkers are interpreted attributes from the other measurements. Clinical practitioners use all these data modalities for diagnosing and treating eye diseases like Diabetic Retinopathy (DR) or Diabetic Macular Edema (DME). Enabling usage of machine learning algorithms within the ophthalmic medical domain requires research into the relationships and interactions between these relevant data modalities. Existing datasets are limited in that: (i) they view the problem as disease prediction without assessing biomarkers, and (ii) they do not consider the explicit relationship among all four data modalities over the treatment period. In this paper, we introduce the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset that addresses the above limitations. This is the first OCT and fundus dataset that includes clinical labels, biomarker labels, and time-series patient treatment information from associated clinical trials. The dataset consists of 1268 fundus eye images each with 49 OCT scans, and 16 biomarkers, along with 3 clinical labels and a disease diagnosis of DR or DME. In total, there are 96 eyes' data averaged over a period of at least two years with each eye treated for an average of 66 weeks and 7 injections. OLIVES dataset has advantages in other fields of machine learning research including self-supervised learning as it provides alternate augmentation schemes that are medically grounded. | Provide a detailed description of the following dataset: OLIVES Dataset |
DeePhy | DeePhy is a novel DeepFake Phylogeny dataset consisting of 5040 DeepFake videos generated using three different generation techniques. It is one of the first datasets to incorporate the concept of DeepFake Phylogeny, which refers to the generation of DeepFakes by applying multiple generation techniques in a sequential manner.
The dataset can be used for the tasks of (i) DeepFake Detection, (ii) Model Attribution of DeepFakes, and (iii) Prediction of the sequential order of DeepFake techniques employed to create phylogenetic deepfakes. It will facilitate advancements in real-life scenarios of plagiarism detection, forgery detection, and reverse engineering of deepfakes. | Provide a detailed description of the following dataset: DeePhy |
WiRe57 | We manually performed the task of Open Information Extraction on 5 short documents, elaborating tentative guidelines for the task, and resulting in a ground truth reference of 347 tuples. [section 1]
A small corpus of 57 sentences taken from the beginning of 5 documents in English was used as the source text from which to extract tuples. Three documents are Wikipedia articles (Chilly Gonzales, the EM algorithm, and Tokyo) and two are newswire articles (taken from Reuters, hence the Wi-Re name). [section 3.1] | Provide a detailed description of the following dataset: WiRe57 |
DocOIE | We manually annotate 800 sentences from 80 documents in two domains (Healthcare and Transportation) to form a DocOIE dataset for evaluation. | Provide a detailed description of the following dataset: DocOIE |
Dataset for MPLP | This dataset is used for MPLP, considering time window constraints of customers and parking space. To randomly generate the dataset, please visit the link: https://github.com/Yubin-Liu/Hybrid-Q-Learning-Network-Approach-for-MPLP. | Provide a detailed description of the following dataset: Dataset for MPLP |
ATPChecker | A novel dataset for identifying privacy policy compliance of Android third-party libraries. | Provide a detailed description of the following dataset: ATPChecker |
OSASUD | Polysomnography (PSG) is a fundamental diagnostical method for the detection of Obstructive Sleep Apnea Syndrome (OSAS). Historically, trained physicians have been manually identifying OSAS episodes in individuals based on PSG recordings. Such a task is highly important for stroke patients, since in such cases OSAS is linked to higher mortality and worse neurological deficits. Unfortunately, the number of strokes per day vastly outnumbers the availability of polysomnographs and dedicated healthcare professionals. The data in this work pertains to 30 patients that were admitted to the stroke unit of the Udine University Hospital, Italy. Unlike previous studies, exclusion criteria are minimal. As a result, data are strongly affected by noise, and individuals may suffer from several comorbidities. Each patient instance is composed of overnight vital signs data deriving from multi-channel ECG, photoplethysmography and polysomnography, and related domain expert’s OSAS annotations. The dataset aims to support the development of automated methods for the detection of OSAS events based on just routinely monitored vital signs, and capable of working in a real-world scenario. | Provide a detailed description of the following dataset: OSASUD |
HWU64 | This project contains natural language data for human-robot interaction in the home domain, which we collected and annotated for evaluating NLU services/platforms. | Provide a detailed description of the following dataset: HWU64 |
DocILE | **DocILE** is a large dataset of business documents for the tasks of Key Information Localization and Extraction and Line Item Recognition. It contains 6.7k annotated business documents, 100k synthetically generated documents, and nearly 1M unlabeled documents for unsupervised pre-training. The dataset has been built with knowledge of domain- and task-specific aspects, resulting in the following key features:
i) annotations in 55 classes, which surpasses the granularity of previously published key information extraction datasets by a large margin
ii) Line Item Recognition represents a highly practical information extraction task, where key information has to be assigned to items in a table
iii) documents come from numerous layouts and the test set includes zero- and few-shot cases as well as layouts commonly seen in the training set | Provide a detailed description of the following dataset: DocILE |
DTTD | **Digital-Twin Tracking Dataset (DTTD)** is a novel RGB-D dataset to enable further research on the problem and to extend potential solutions towards longer ranges and millimeter-level localization accuracy. In total, 103 scenes of 10 common off-the-shelf objects with rich textures are recorded, with each frame annotated with a per-pixel semantic segmentation and ground-truth object poses provided by a commercial motion capturing system. | Provide a detailed description of the following dataset: DTTD |
CSL-Daily | CSL-Daily (Chinese Sign Language Corpus) is a large-scale continuous SLT dataset. It provides both spoken language translations and gloss-level annotations. The topic revolves around people's daily lives (e.g., travel, shopping, medical care), the most likely SLT application scenario.
[1] [Improving Sign Language Translation with Monolingual Data by Sign Back-Translation](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Improving_Sign_Language_Translation_With_Monolingual_Data_by_Sign_Back-Translation_CVPR_2021_paper.pdf), CVPR, 2021. | Provide a detailed description of the following dataset: CSL-Daily |
PN9 | It is a new large-scale pulmonary nodule dataset named PN9, which contains 8,798 thoracic CT scans and a total of 40,439 annotated nodules. | Provide a detailed description of the following dataset: PN9 |
Office-Home-LMT | The dataset is for research on label distribution shift across multiple domains in domain adaptation. We use **Cl**, **Pr**, and **Rw** to resample two reverse long-tailed distributions and one Gaussian distribution for each of them for BTDA with label shift. | Provide a detailed description of the following dataset: Office-Home-LMT |
Prophesee GEN4 Dataset | The dataset is split between train, test and val folders.
Files consist of 60 seconds recordings that were cut from longer recording sessions. Cuts from a single recording session are all in the same training split.
Each dat file is a binary file in which events are encoded using 4 bytes (unsigned int32) for the timestamp and 4 bytes (unsigned int32) for the data, with little-endian byte ordering.
The data is composed of 14 bits for the x position, 14 bits for the y position and 1 bit for the polarity (encoded as -1/1).
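As a rough illustration of the encoding above, the sketch below decodes raw event words with NumPy, assuming any file header has already been skipped; the exact bit offsets (x in the lowest 14 bits, y in the next 14, polarity in the following bit) are our assumption for illustration, not an official specification of the format.
```
import numpy as np

# Hedged sketch: decode Prophesee-style events from a raw binary payload,
# assuming each event is two little-endian uint32 words (timestamp, data)
# and that the bit layout below matches the 14/14/1-bit description above.
def decode_events(payload: bytes):
    words = np.frombuffer(payload, dtype="<u4").reshape(-1, 2)
    ts = words[:, 0]                                    # timestamps in microseconds
    data = words[:, 1]
    x = data & 0x3FFF                                   # assumed: lowest 14 bits -> x position
    y = (data >> 14) & 0x3FFF                           # assumed: next 14 bits -> y position
    p = ((data >> 28) & 0x1).astype(np.int8) * 2 - 1    # assumed: 1 bit -> polarity mapped to -1/1
    return ts, x, y, p
```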
Annotations use the NumPy format and can simply be loaded from Python using NumPy: `boxes = np.load(path)`.
Boxes have the following fields:
* x abscissa of the top left corner in pixels
* y ordinate of the top left corner in pixels
* w width of the box in pixels
* h height of the box in pixels
* ts timestamp of the box in the sequence in microseconds
* class_id 0 for pedestrians, 1 for two wheelers, 2 for cars, 3 for trucks, 4 for buses, 5 for traffic signs, 6 for traffic lights | Provide a detailed description of the following dataset: Prophesee GEN4 Dataset |
Workshop Tools Dataset | # Workshop Tools Dataset
Motivated by the need for a dataset that also includes inertial information about the objects, we contribute the following dataset. It contains 20 common workshop tools, and for each object:
- a watertight triangular surface mesh;
- a synthetic colored surface point-cloud;
- ground truth inertial parameters;
- ground truth part-level segmentation; and
- a grasping reference frame.
## List of Objects
- Adjustable Wrench
- Bent Jaw Pliers
- C Clamp
- Electronic Caliper
- Hacksaw
- Machinist Hammer
- Nut Screwdriver
- Ruler
- Socket Wrench
- Vise Grip
- Allen Key
- Box Wrench
- Clamp
- File
- Hammer
- Measuring Tape
- Pliers
- Rubber Mallet
- Screwdriver
- Vise Clamp
## List of Files per Object
Each object has its own dedicated folder containing the following files:
- `Frames.png` :
Picture of the point-cloud in point-cloud.ply with the mesh reference frame, the grasping reference frame and the centre of mass. The mesh reference frame is the frame all points are expressed relative to, and the word "Origin" is written over its origin. The grasping reference frame was manually defined so as to express how a human worker would intuitively grasp the object. The centre of mass of the object is visualized as a purple ball.
- `Inertia.txt` :
Geometric and mass properties of the object as computed with the CAD software used to produce the triangular mesh. This file is not directly used by our software but can be more easily understood by humans.
- `Inertia.yaml` :
This file is read by our software to obtain the ground truth inertial properties as well as the transform that relates the grasping reference frame with respect to the mesh frame.
- `Materials.txt` :
Human-readable notes about the materials, and therefore the mass densities, used for the object parts.
- `mesh.ply` :
Binary millimeter-scaled triangular mesh with colored vertices and the following header (see the loading sketch after this file list):
```
ply
format binary_little_endian 1.0
comment SOLIDWORKS generated,length unit = millimeters
element vertex 10913
property float x
property float y
property float z
element face 21642
property uchar red
property uchar green
property uchar blue
property uchar alpha
property list uchar int vertex_indices
end_header
```
- `point-cloud.ply` :
Colored and part-labeled point cloud with the following header (see the reading sketch after this file list):
```
ply
format ascii 1.0
comment Created by Open3D
element vertex 2000
property float32 x
property float32 y
property float32 z
property float32 red
property float32 green
property float32 blue
property uint8 segmentation
end_header
```
- `reconstructed_mesh.ply` :
Triangular mesh reconstructed from the point-cloud in `point-cloud.ply` using the method described in the paper referenced below. Can be compared to the original mesh in `mesh.ply` to evaluate the quality of the reconstruction.
```
ply
format ascii 1.0
comment VCGLIB generated
element vertex 256
property double x
property double y
property double z
element face 500
property list uchar int vertex_indices
end_header
```
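As a minimal sketch (our assumption, not the dataset's own tooling), `mesh.ply` could be loaded with a generic mesh library such as trimesh, rescaled from millimeters to meters, and used to derive mass properties under a uniform-density assumption for a rough comparison with the ground truth shipped in `Inertia.yaml`:
```
import trimesh  # generic mesh library; an assumed tooling choice, not the dataset's own

# Load the millimeter-scaled triangular mesh and convert it to meters.
mesh = trimesh.load("mesh.ply")
mesh.apply_scale(0.001)

# Assume a uniform density (kg/m^3); the real objects mix materials (see Materials.txt),
# so this only gives a rough point of comparison with Inertia.yaml.
mesh.density = 7800.0
print("volume [m^3]:", mesh.volume)
print("mass [kg]:", mesh.mass)
print("centre of mass [m]:", mesh.center_mass)
print("inertia tensor [kg m^2]:\n", mesh.moment_inertia)
```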
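Similarly, a minimal sketch for reading `point-cloud.ply` together with its part-level `segmentation` labels; it assumes the third-party plyfile package, since generic point-cloud loaders may silently drop the custom per-point property:
```
import numpy as np
from plyfile import PlyData  # third-party PLY reader; an assumed tooling choice

# Read the ASCII point cloud and keep the custom per-point segmentation field.
ply = PlyData.read("point-cloud.ply")
v = ply["vertex"]
points = np.column_stack([v["x"], v["y"], v["z"]])            # 2000 x 3 coordinates
colors = np.column_stack([v["red"], v["green"], v["blue"]])   # per-point color
labels = np.asarray(v["segmentation"])                        # part-level labels
```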
## Citation
If you used any part of this software in your work, please cite our paper:
```
@inproceedings{Nadeau_PartSegForInertialIdent_2023,
AUTHOR = {Philippe Nadeau AND Matthew Giamou AND Jonathan Kelly},
TITLE = { {The Sum of Its Parts: Visual Part Segmentation for Inertial Parameter Identification of Manipulated Objects} },
BOOKTITLE = {Proceedings of the {IEEE} International Conference on Robotics and Automation {(ICRA'23)}},
YEAR = {2023},
ADDRESS = {London, UK},
MONTH = {May},
DOI = {}
}
``` | Provide a detailed description of the following dataset: Workshop Tools Dataset |
ETD500 | The paper used 500 scanned Electronic Theses and Dissertation cover pages (i.e., front pages). The dataset contains several intermediate datasets, briefly discussed in the paper. | Provide a detailed description of the following dataset: ETD500 |
A-FB15k237 | This dataset is based on FB15k237 and a pre-trained language-model-based KGE. The main task is to add the new knowledge that the pre-trained model didn't see in the previous training stage. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnEv9z6ATW5Ntbr?usp=share_link). | Provide a detailed description of the following dataset: A-FB15k237 |
E-WN18RR | This dataset is based on WN18RR and a pre-trained language-model-based KGE. The main task is to correct the wrong knowledge stored in the pre-trained model and replace the incorrect entities with alternative entities. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnEv9z6ATW5Ntbr?usp=share_link). | Provide a detailed description of the following dataset: E-WN18RR |
A-WN18RR | This dataset is based on WN18RR and a pre-trained language-model-based KGE. The main task is to add the new knowledge that the pre-trained model didn't see in the previous training stage. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnEv9z6ATW5Ntbr?usp=share_link). | Provide a detailed description of the following dataset: A-WN18RR |