Columns: dataset_name (string, 2-128 chars), description (string, 1-9.7k chars), prompt (string, 59-185 chars)
AesVQA
AesVQA is a dataset that contains 72,168 high-quality images and 324,756 aesthetic question-answer pairs. This dataset addresses the task of aesthetic VQA and introduces subjectivity into VQA tasks.
Provide a detailed description of the following dataset: AesVQA
NSVA
NSVA is a large-scale NBA dataset for Sports Video Analysis with a focus on sports video captioning. The dataset consists of more than 32K video clips and is also designed to address two additional tasks, namely fine-grained sports action recognition and salient player identification.
Provide a detailed description of the following dataset: NSVA
ChiQA
ChiQA is a dataset designed for visual question answering tasks that measures not only relatedness but also answerability, which demands more fine-grained vision and language reasoning. It contains more than 40K questions and more than 200K question-image pairs. The questions are real-world, image-independent queries that are more diverse and less biased.
Provide a detailed description of the following dataset: ChiQA
VQA-VS
The current OOD benchmark VQA-CP v2 considers only one type of shortcut (from question type to answer) and thus still cannot guarantee that the model relies on the intended solution rather than a solution specific to this shortcut. To overcome this limitation, VQA-VS proposes a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets. In addition, VQA-VS overcomes three troubling practices in the use of VQA-CP v2 (e.g., selecting models using OOD test sets) and further standardizes the OOD evaluation procedure. VQA-VS provides a more rigorous and comprehensive testbed for shortcut learning in VQA.
Provide a detailed description of the following dataset: VQA-VS
EmoWOZ
EmoWOZ is the first large-scale open-source dataset for emotion recognition in task-oriented dialogues. It contains emotion annotations for user utterances in the entire MultiWOZ (10k+ human-human dialogues) and DialMAGE (1k human-machine dialogues collected from our human trial). Overall, there are 83k user utterances annotated. In addition, the emotion annotation scheme is tailored to task-oriented dialogues and considers the valence, the elicitor, and the conduct of the user emotion.
Provide a detailed description of the following dataset: EmoWOZ
PAL4Inpaint
**PAL4Inpaint** is a dataset consisting of 4,795 inpainting results with per-pixel perceptual artifact annotations, designed for image inpainting tasks.
Provide a detailed description of the following dataset: PAL4Inpaint
MUAD
The MUAD dataset (Multiple Uncertainties for Autonomous Driving) consists of 10,413 realistic synthetic images with diverse adverse weather conditions (night, fog, rain, snow), out-of-distribution objects, and annotations for semantic segmentation, depth estimation, and object and instance detection. Predictive uncertainty estimation is essential for the safe deployment of deep neural networks in real-world autonomous systems, and MUAD allows a better assessment of the impact of different sources of uncertainty on model performance.
Provide a detailed description of the following dataset: MUAD
FormulaNet
# FormulaNet

FormulaNet is a new large-scale Mathematical Formula Detection dataset. It consists of 46,672 pages of STEM documents from [arXiv](https://arxiv.org) and has 13 types of labels. The dataset is split into a [train](Dataset/train) set of 44,338 pages and a [validation](Dataset/val) set of 2,334 pages. Due to copyright reasons, we can only provide the [list](urls.txt) of papers, which must be downloaded and processed.

## Labels
* inline formulae
* display formulae
* headers
* tables
* figures
* paragraphs
* captions
* footnotes
* lists
* bibliographies
* display formulae reference number
* display formulae with reference number
* footnote reference number
Provide a detailed description of the following dataset: FormulaNet
iDesigner
Fashion trends are constantly evolving, but a trained eye can estimate with some accuracy the signature elements of a particular designer's style. The dataset contains 50,000 runway images spanning 50 fashion designers, drawn from our large repository of proprietary front-row images from fashion shows over the past 15 years. The data includes a variety of fashion items, including shoes, bags, dresses, jackets, etc.
Provide a detailed description of the following dataset: iDesigner
Kannada Treebank
This dataset was built as part of the development of Treebanks for Indian Languages funded by MeitY, Govt. of India. The Kannada treebank consists of 13.1K sentences from the general, tourism, and conversational domains.
Provide a detailed description of the following dataset: Kannada Treebank
UnrealEgo
**UnrealEgo** is a dataset that provides in-the-wild stereo images with a large variety of motions for 3D human pose estimation. The data consists of stereo fisheye images and depth maps with a resolution of 1024×1024 pixels each, captured at 25 frames per second, for a total of 450k stereo views (900k images). Metadata is provided for each frame, including 3D joint positions, camera positions, and 2D coordinates of the joint positions reprojected into the fisheye views.
Provide a detailed description of the following dataset: UnrealEgo
OmniCity
**OmniCity** is a dataset for omnipotent city understanding from multi-level and multi-view images. It contains multi-view satellite images as well as street-level panorama and mono-view images, constituting over 100K pixel-wise annotated images that are well aligned and collected from 25K geo-locations in New York City. This dataset introduces a new task of fine-grained building instance segmentation on street-level panorama images. It also provides new problem settings for existing tasks, such as cross-view image matching, synthesis, segmentation, detection, etc., and facilitates the development of new methods for large-scale city understanding, reconstruction, and simulation.
Provide a detailed description of the following dataset: OmniCity
MAFW
**MAFW** is a large-scale, multi-modal, compound affective database for dynamic facial expression recognition in the wild. It contains 10,045 video-audio clips, annotated with a compound emotional category and a couple of sentences that describe the subjects' affective behaviors in the clip. For the compound emotion annotation, each clip is categorized into one or more of the 11 widely-used emotions, i.e., anger, disgust, fear, happiness, neutral, sadness, surprise, contempt, anxiety, helplessness, and disappointment.
Provide a detailed description of the following dataset: MAFW
BGG dataset
We recorded gun sounds in the PUBG environment, varying the type and position of guns to diversify distances and angles. The BGG dataset consists of 2,195 samples covering 37 different types of guns and five directions, including a silence class in which there is no gunfire but noise is present. The distance from the firearms ranged from 0 to 600 meters. The audio was recorded in stereo (i.e., two-channel audio), and each sample contains various environmental noises (e.g., water splashing, walking, and bullet friction).
Provide a detailed description of the following dataset: BGG dataset
MIMIC-SPARQL
Question Answering (QA) is a widely used framework for developing and evaluating an intelligent machine. In this light, QA on Electronic Health Records (EHR), namely EHR QA, can work as a crucial milestone toward developing an intelligent agent in healthcare. EHR data are typically stored in a relational database, which can also be converted to a directed acyclic graph, allowing two approaches for EHR QA: table-based QA and knowledge graph-based QA. The MIMIC-SPARQL dataset provides graph-based EHR QA data in which natural language questions are converted to SPARQL queries instead of SQL.
Provide a detailed description of the following dataset: MIMIC-SPARQL
VideoAttentionTarget
A dataset with fully annotated attention targets in video for attention target estimation.
Provide a detailed description of the following dataset: VideoAttentionTarget
Citations to invalid DOI-identified entities obtained from processing DOI-to-DOI citations to add in COCI
This dataset contains a two-column CSV file, where the first column ("Valid_citing_DOI") contains the DOI of a citing entity retrieved in Crossref, while the second column ("Invalid_cited_DOI") contains the invalid DOI of a cited entity identified by looking at the field "reference" in the JSON document returned by querying the [Crossref API](https://api.crossref.org/) with the citing DOI. These citations to invalid DOIs have been retrieved while processing Crossref data for adding open citations in [COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations](http://opencitations.net/index/coci). The citations listed in the present dataset have not been added in COCI since they point to a non-resolvable cited document.
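A minimal loading sketch in Python, assuming the CSV has been downloaded locally (the filename `invalid_dois.csv` is hypothetical; the column names come from the description above):

```python
import pandas as pd

# Hypothetical local filename for the two-column CSV described above.
df = pd.read_csv("invalid_dois.csv")

# Columns as named in the dataset description.
print(df["Valid_citing_DOI"].head())
print(df["Invalid_cited_DOI"].head())

# Example: citing DOIs that point to the most distinct invalid cited DOIs.
counts = df.groupby("Valid_citing_DOI")["Invalid_cited_DOI"].nunique()
print(counts.sort_values(ascending=False).head())
```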
Provide a detailed description of the following dataset: Citations to invalid DOI-identified entities obtained from processing DOI-to-DOI citations to add in COCI
time-agnostic-library: benchmarks on execution times and memory
This deposit contains benchmark code, data, and results to assess the Python software time-agnostic-library v4.3.0. Two benchmarks were designed, one on execution times and the other on RAM usage. The goal is to assess whether time-agnostic-library is efficient and operable despite working live and without pre-indexing. Moreover, all benchmarks are performed on four different triplestores: Blazegraph, GraphDB Free Edition, Apache Jena Fuseki, and OpenLink Virtuoso. The dataset used for the benchmarks contains bibliographical information about scholarly works in the journal Scientometrics, restricted to those with a known DOI. The data was extracted via Crossref. It is a temporal dataset in which provenance information and change-tracking have been managed by adopting the OpenCitations Data Model. Moreover, the dataset contains information on all the cited academic works. Journals, bibliographic resources, and authors always appear unambiguously, without duplicates. Finally, heuristics have been applied to recover the DOI of the cited works in case Crossref did not provide such information. In order to reproduce the results, extract the reproduce_results.zip archive, then execute run_benchmarks.sh on Linux or Mac, or run_benchmarks.bat on Windows. The results contained in results.zip were obtained using the following hardware: CPU: Intel Core i9 12900K; RAM: 128 GB DDR4 3200 MHz CL14; Storage: 1 TB NVMe PCIe 4.0 SSD.
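A minimal reproduction sketch, assuming reproduce_results.zip sits in the current directory and extracts into a folder containing the launcher scripts named above (the target folder name is an assumption):

```python
import platform
import subprocess
import zipfile

# Extract the benchmark archive named in the description.
with zipfile.ZipFile("reproduce_results.zip") as archive:
    archive.extractall("reproduce_results")  # extraction folder is an assumption

# Run the platform-appropriate launcher script.
if platform.system() == "Windows":
    subprocess.run("run_benchmarks.bat", cwd="reproduce_results", shell=True, check=True)
else:
    subprocess.run("bash run_benchmarks.sh", cwd="reproduce_results", shell=True, check=True)
```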
Provide a detailed description of the following dataset: time-agnostic-library: benchmarks on execution times and memory
Wiki5K Hebrew segmentation
Training data for Hebrew morphological word segmentation
Provide a detailed description of the following dataset: Wiki5K Hebrew segmentation
SPMRL Hebrew segmentation data
Training data for Hebrew morphological word segmentation
Provide a detailed description of the following dataset: SPMRL Hebrew segmentation data
Table Tennis Ball Trajectories with Spin
This data set contains real-world table tennis ball trajectories recorded with our custom-developed table tennis ball launcher AIMY. The data has been filtered: faulty samples and ball trajectories have been removed from the data set. The trajectories are stored in the widely supported HDF5 (Hierarchical Data Format) file format. For easier usage of the data, we attach a simple Python script.
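A minimal inspection sketch using h5py; the file name and internal layout below are assumptions, since the actual structure is documented by the Python script shipped with the dataset:

```python
import h5py

# Hypothetical file name; the record ships the trajectories as HDF5.
with h5py.File("ball_trajectories.hdf5", "r") as f:
    # Print every group/dataset path actually present in the file.
    f.visititems(lambda name, obj: print(name, obj))
    # Once a dataset key is known, a trajectory can be read as a NumPy array:
    # positions = f["some/trajectory/key"][:]
```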
Provide a detailed description of the following dataset: Table Tennis Ball Trajectories with Spin
HM3D-Semantics
**Habitat-Matterport 3D Semantics Dataset (HM3D-Semantics v0.1)** is the largest-ever dataset of semantically annotated 3D indoor spaces. It contains dense semantic annotations for 120 high-resolution 3D scenes from the Habitat-Matterport 3D dataset. The HM3D scenes are annotated with 1,700+ raw object names, which are mapped to 40 Matterport categories. On average, each scene in HM3D-Semantics v0.1 contains 646 objects from 114 categories. It can be used to train embodied agents, such as home robots and AI assistants, at scale for semantic navigation tasks.
Provide a detailed description of the following dataset: HM3D-Semantics
GOD-Wiki, DIR-Wiki, ThingsEEG-Text.
Brain-image-text trimodal datasets.
Provide a detailed description of the following dataset: GOD-Wiki, DIR-Wiki, ThingsEEG-Text.
FLEURS
We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark, with approximately 12 hours of speech supervision per language. FLEURS can be used for a variety of speech tasks, including Automatic Speech Recognition (ASR), Speech Language Identification (Speech LangID), Translation and Retrieval. In this paper, we provide baselines for the tasks based on multilingual pre-trained models like mSLAM. The goal of FLEURS is to enable speech technology in more languages and catalyze research in low-resource speech understanding.
Provide a detailed description of the following dataset: FLEURS
eAppleScab
The study showed that apple scab can be detected in high-resolution RGB images at an early stage of its development. If the two datasets, covering the early and advanced stages, are grouped together, the scab in the early stage is not visible after images are resized to the 200-500 px CNN input sizes. The dataset contains classified 525×525 px images saved in three folders, "Background", "Healthy" and "Scab", to identify their categories. The dataset is available at: https://www.kaggle.com/datasets/projectlzp201910094/eapplescab It was presented in the paper: S. Kodors, G. Lācis, I. Moročko-Bičevska, I. Zarembo, O. Sokolova, T. Bartulsons, I. Apeināns and V. Žukovs, "Apple Scab Detection in the Early Stage of Disease Using a Convolutional Neural Network", Proceedings of the Latvian Academy of Sciences. Section B. Natural, Exact, and Applied Sciences, vol. 76, no. 4, 2022, pp. 482-487. https://doi.org/10.2478/prolas-2022-0074
Provide a detailed description of the following dataset: eAppleScab
Domains Project
**Domains Project** is a public dataset containing a freely available, sorted list of Internet domains.
Provide a detailed description of the following dataset: Domains Project
LAION-5B
**LAION-5B** is a large-scale dataset for research purposes consisting of 5.85B CLIP-filtered image-text pairs: 2.3B samples contain English text, 2.2B samples come from 100+ other languages, and 1B samples have texts that do not allow a certain language assignment (e.g., names). Additionally, we provide several nearest-neighbor indices, an improved web interface for exploration and subset creation, as well as detection scores for watermarks and NSFW content.
Provide a detailed description of the following dataset: LAION-5B
FSC-P2
The Fearless Steps Initiative by UTDallas-CRSS led to the digitization, recovery, and diarization of 19,000 hours of original analog audio data, as well as the development of algorithms to extract meaningful information from this multichannel naturalistic data resource. As an initial step to motivate a streamlined and collaborative effort from the speech and language community, UTDallas-CRSS is hosting a series of progressively complex tasks to promote advanced research on naturalistic "Big Data" corpora. This began with ISCA INTERSPEECH-2019: "The FEARLESS STEPS Challenge: Massive Naturalistic Audio (FS-#1)". This first edition of the challenge encouraged the development of core unsupervised/semi-supervised speech and language systems for single-channel data with low resource availability, serving as the "First Step" towards extracting high-level information from such massive unlabeled corpora.

As a natural progression following the successful inaugural challenge FS#1, the FEARLESS STEPS Challenge Phase-#2 focuses on the development of single-channel supervised learning strategies. FS#2 provides 80 hours of ground-truth data through Training and Development sets, with an additional 20 hours of blind-set Evaluation data. Based on feedback from the Fearless Steps participants, additional tracks for streamlined speech recognition and speaker diarization have been included in FS#2. The results for this challenge will be presented at the ISCA INTERSPEECH-2020 Special Session. We encourage participants to explore any and all research tasks of interest with the Fearless Steps Corpus, with suggested task domains listed below. Research participants can, however, also utilize the FS#2 corpus to explore additional problems dealing with naturalistic data, which we welcome as part of the special session.

This (FS-02) edition of the FEARLESS STEPS Challenge includes the following tasks and tracks:

## TASK 1: Speech Activity Detection (SAD)
## TASK 2: Speaker Identification (using Speaker Segments) (SID)
## TASK 3: Speaker Diarization
* (3.a.) Track 1: Diarization using system SAD (SD_track1)
* (3.b.) Track 2: Diarization using reference SAD (SD_track2)
## TASK 4: Automatic Speech Recognition
* (4.a.) Track 1: ASR using system Diarization/SAD (ASR_track1)
* (4.b.) Track 2: ASR using Diarized Segments (ASR_track2)
Provide a detailed description of the following dataset: FSC-P2
safety-gym
openai.com/blog/safety-gym/
Provide a detailed description of the following dataset: safety-gym
LAV-DF
LAV-DF is the Localized Audio Visual DeepFake dataset, introduced in the paper "Do You Really Mean That? Content Driven Audio-Visual Deepfake Dataset and Multimodal Method for Temporal Forgery Localization".
Provide a detailed description of the following dataset: LAV-DF
EgoHOS
**EgoHOS** is a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and objects being interacted with during a diverse array of daily activities. The data are collected from multiple sources: 7,458 frames from Ego4D, 2,212 frames from EPIC-KITCHEN, 806 frames from THU-READ, and 350 frames of our own collected egocentric videos of people playing Escape Room. This dataset is designed for tasks including hand state classification, video activity recognition, 3D mesh reconstruction of hand-object interactions, and video inpainting of hand-object foregrounds in egocentric videos.
Provide a detailed description of the following dataset: EgoHOS
CC-Riddle
**CC-Riddle** is a Chinese character riddle dataset covering the majority of common simplified Chinese characters, built by crawling riddles from the Web and generating brand-new ones. In the generation stage, the authors provide the Chinese phonetic alphabet, decomposition, and explanation of the solution character to the generation model and obtain multiple riddle descriptions for each tested character. The generated riddles are then manually filtered, and the final dataset, CC-Riddle, is composed of both human-written riddles and filtered generated riddles.
Provide a detailed description of the following dataset: CC-Riddle
KETOD
**KETOD** (Knowledge-Enriched Task-Oriented Dialogue) is a dataset containing system responses designed for enriching task-oriented dialogues with chit-chat based on relevant entity knowledge. There are a total of 5,324 dialogues with enriched system responses.
Provide a detailed description of the following dataset: KETOD
VizWiz-FewShot
**VizWiz-FewShot** is a few-shot localization dataset originating from photographers who authentically were trying to learn about the visual content in the images they took. It includes nearly 10,000 segmentations of 100 categories in over 4,500 images that were taken by people with visual impairments.
Provide a detailed description of the following dataset: VizWiz-FewShot
XLCoST
**XLCoST** is a benchmark dataset for cross-lingual code intelligence. The dataset contains fine-grained parallel data from 8 languages (7 commonly used programming languages and English), and supports 10 cross-language code tasks.
Provide a detailed description of the following dataset: XLCoST
HR-GLDD: A globally distributed high resolution landslide dataset
Sample data in NumPy array format (.npy) is available at https://zenodo.org/record/7189381#.Y0a2UHZBxD9. Satellite: PlanetScope, 3-meter resolution. Patch size: 128 × 128 × 4. Splits: 1,119 training patches, 355 test patches, and 284 validation patches.
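A minimal loading sketch; the .npy filenames are assumptions, while the patch counts and shape come from the description above:

```python
import numpy as np

# Hypothetical filenames for the three splits described above.
train = np.load("train_patches.npy")  # expected shape: (1119, 128, 128, 4)
test = np.load("test_patches.npy")    # expected shape: (355, 128, 128, 4)
val = np.load("val_patches.npy")      # expected shape: (284, 128, 128, 4)

print(train.shape, test.shape, val.shape)
```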
Provide a detailed description of the following dataset: HR-GLDD: A globally distributed high resolution landslide dataset
NAFLD pathology and healthy tissue samples
The dataset contains tiles extracted from Whole Slide Images (WSI) of stained tissue samples.
* Normal tissue from mouse and rat species; liver, brain, lung, heart, pancreas, spleen, and kidney organs; Masson's Trichrome and H&E staining. This data was NOT manually curated and may contain various artifacts.
* Tissue with Non-Alcoholic Fatty Liver Disease in mouse, stained with Masson's Trichrome and H&E. This data was manually verified by an experienced pathologist.
Provide a detailed description of the following dataset: NAFLD pathology and healthy tissue samples
LAION COCO
**LAION-COCO** is the world's largest dataset of 600M generated high-quality captions for publicly available web images. The captions were generated for images from the English subset of LAION-5B using an ensemble of BLIP L/14 and two CLIP versions (L/14 and RN50x64). This dataset allows models to be trained to produce high-quality captions for images.
Provide a detailed description of the following dataset: LAION COCO
ConvFinQA
**ConvFinQA** is a dataset designed to study the chain of numerical reasoning in conversational question answering. The dataset contains 3,892 conversations with 14,115 questions, where 2,715 of the conversations are simple conversations and the remaining 1,177 are hybrid conversations.
Provide a detailed description of the following dataset: ConvFinQA
VISOR - Semi supervised video object segmentation
VISOR is a dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, and it contains 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, and 67K hand-object relations, covering 36 hours of 179 untrimmed videos.
Provide a detailed description of the following dataset: VISOR - Semi supervised video object segmentation
Trailers12k
Trailers12k is a movie trailer dataset comprising 12,000 titles associated with ten genres. It is distinguished from other datasets by its collection procedure, aimed at providing a high-quality, publicly available dataset.
Provide a detailed description of the following dataset: Trailers12k
UCSF PDGM
MRI-based artificial intelligence (AI) research on patients with brain gliomas has been rapidly increasing in popularity in recent years, in part due to a growing number of publicly available MRI datasets. Notable examples include The Cancer Genome Atlas Glioblastoma dataset (TCGA-GBM), consisting of 262 subjects, and the International Brain Tumor Segmentation (BraTS) challenge dataset, consisting of 542 subjects (including 243 preoperative cases from TCGA-GBM). The public availability of these glioma MRI datasets has fostered the growth of numerous emerging AI techniques, including automated tumor segmentation, radiogenomics, and MRI-based survival prediction. Despite these advances, existing publicly available glioma MRI datasets have been largely limited to only 4 MRI contrasts (T2, T2/FLAIR, and T1 pre- and post-contrast), and imaging protocols vary significantly in terms of magnetic field strength and acquisition parameters. Here we present the University of California San Francisco Preoperative Diffuse Glioma MRI (UCSF-PDGM) dataset. The UCSF-PDGM dataset includes 501 subjects with histopathologically proven diffuse gliomas who were imaged with a standardized 3 Tesla preoperative brain tumor MRI protocol featuring predominantly 3D imaging, as well as advanced diffusion and perfusion imaging techniques. The dataset also includes isocitrate dehydrogenase (IDH) mutation status for all cases and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status for World Health Organization (WHO) grade III and IV gliomas. The UCSF-PDGM has been made publicly available in the hopes that researchers around the world will use these data to continue to push the boundaries of AI applications for diffuse gliomas.
Provide a detailed description of the following dataset: UCSF PDGM
Handwash Dataset
**Hand Wash Dataset** consists of 292 hand-washing videos, each comprising 12 steps, for a total of 3,504 clips recorded in different environments to provide as much variance as possible. The variance is important to ensure that models are robust and can work in more than a few environments. This dataset is designed for action recognition tasks.
Provide a detailed description of the following dataset: Handwash Dataset
Multi-domain Image Characteristics Dataset
The Multi-domain Image Characteristics Dataset consists of thousands of images sourced from the internet. Each image falls under one of three domains: animals, birds, or furniture. There are five types under each domain and 200 images of each type, bringing the total dataset to 3,000 images. The master file consists of two columns: the image name and the visible characteristics of that image. Every image was manually analyzed and the characteristics for each image were generated, ensuring accuracy. Images falling under the same domain have a similar set of characteristics. For example, pictures under the birds domain have a common set of characteristics such as the color of the bird and the presence of a beak, wings, eyes, legs, etc. Care has been taken to ensure that each image is as unique as possible by including pictures that have different combinations of visible characteristics, including variations in the capture angle.
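A minimal sketch for reading the master file with pandas; the filename and exact column labels are assumptions based on the two-column layout described above:

```python
import pandas as pd

# Hypothetical filename; the master file pairs each image name with its
# manually generated visible characteristics.
master = pd.read_csv("master.csv")
print(master.columns.tolist())  # expected: image name, visible characteristics
print(master.head())
```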
Provide a detailed description of the following dataset: Multi-domain Image Characteristics Dataset
CORRONA CERTAIN
CERTAIN, or the Comparative Effectiveness Registry to Study Therapies for Arthritis and Inflammatory Conditions, is designed as a prospective nested substudy under our larger RA registry. CERTAIN uses the existing CORRONA network of participating private and academic sites to recruit RA patients who have at least moderate disease activity. Patients starting or switching biologic or small-molecule DMARD agents were eligible for enrollment and were treated with either an anti-TNF agent or a non-TNF biologic, depending on the treatment selected by their physician. These categories of treatment were enrolled in a ratio of 3:2. The CERTAIN substudy includes 2,814 patients across 43 sites, with 117 rheumatologists. All 2,814 patients have a "true baseline" before drug start, at which time an array of biosamples was collected, including DNA, RNA, plasma, and serum. These samples were also collected at 3 and 6 months, along with physician and patient outcome measures every 3 months through 12 months. The CERTAIN study biosamples are a unique and valuable resource for companies looking to study biomarkers, genomics, genetics, and their relation to deep prospective clinical outcomes.
Provide a detailed description of the following dataset: CORRONA CERTAIN
ACL Anthology Corpus with Full Text
[![License](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)

This repository provides full text and metadata for the ACL anthology collection (80k articles/posters as of September 2022), including the .pdf files and GROBID extractions of the pdfs.

## How is this different from what ACL anthology provides and what already exists?

- We provide pdfs, full text, references and other details extracted by GROBID from the PDFs, while [ACL Anthology](https://aclanthology.org/anthology+abstracts.bib.gz) only provides abstracts.
- There exists a similar corpus called [ACL Anthology Network](https://clair.eecs.umich.edu/aan/about.php), but it is now showing its age with just 23k papers from Dec 2016.

----

The goal is to keep this corpus updated and provide a comprehensive repository of the full ACL collection. This repository provides data for `80,013` ACL articles/posters:

1. 📖 All PDFs in ACL anthology: **size 45G** [download here](https://drive.google.com/file/d/1OGHyJrkaVpbrdbmxsDotG-tI3LiKyxuC/view?usp=sharing)
2. 🎓 All bib files in ACL anthology with abstracts: **size 172M** [download here](https://drive.google.com/file/d/1dJ-iE85moBv3iYG2LhRLT6KQyVkmllBg/view?usp=sharing)
3. 🏷️ Raw GROBID extraction results on all the ACL anthology pdfs, including full text and references: **size 3.6G** [download here](https://drive.google.com/file/d/1xC-K6__W3FCalIDBlDROeN4d4xh0IVry/view?usp=sharing)
4. 💾 Dataframe with extracted metadata (table below with details) and full text of the collection for analysis: **size 489M** [download here](https://drive.google.com/file/d/1CFCzNGlTls0H-Zcaem4Hg_ETj4ebhcDO/view?usp=sharing)

| **Column name** | **Description** |
| :----------------: | :---------------------------: |
| `acl_id` | unique ACL id |
| `abstract` | abstract extracted by GROBID |
| `full_text` | full text extracted by GROBID |
| `corpus_paper_id` | Semantic Scholar ID |
| `pdf_hash` | sha1 hash of the pdf |
| `numcitedby` | number of citations from S2 |
| `url` | link of publication |
| `publisher` | - |
| `address` | Address of conference |
| `year` | - |
| `month` | - |
| `booktitle` | - |
| `author` | list of authors |
| `title` | title of paper |
| `pages` | - |
| `doi` | - |
| `number` | - |
| `volume` | - |
| `journal` | - |
| `editor` | - |
| `isbn` | - |

```python
>>> import pandas as pd
>>> df = pd.read_parquet('acl-publication-info.74k.parquet')
>>> df
          acl_id                                           abstract                                          full_text  corpus_paper_id                                  pdf_hash  ...  number volume journal editor isbn
0       O02-2002  There is a need to measure word similarity whe...  There is a need to measure word similarity whe...         18022704  0b09178ac8d17a92f16140365363d8df88c757d0  ...    None   None    None   None None
1       L02-1310                                                                                                                8220988  8d5e31610bc82c2abc86bc20ceba684c97e66024  ...    None   None    None   None None
2       R13-1042  Thread disentanglement is the task of separati...  Thread disentanglement is the task of separati...         16703040  3eb736b17a5acb583b9a9bd99837427753632cdb  ...    None   None    None   None None
3       W05-0819  In this paper, we describe a word alignment al...  In this paper, we describe a word alignment al...          1215281  b20450f67116e59d1348fc472cfc09f96e348f55  ...    None   None    None   None None
4       L02-1309                                                                                                               18078432  011e943b64a78dadc3440674419821ee080f0de3  ...    None   None    None   None None
...          ...                                                ...                                                ...              ...                                       ...  ...     ...    ...     ...    ...  ...
73280   P99-1002  This paper describes recent progress and the a...  This paper describes recent progress and the a...           715160  ab17a01f142124744c6ae425f8a23011366ec3ee  ...    None   None    None   None None
73281   P00-1009  We present an LFG-DOP parser which uses fragme...  We present an LFG-DOP parser which uses fragme...          1356246  ad005b3fd0c867667118482227e31d9378229751  ...    None   None    None   None None
73282   P99-1056  The processes through which readers evoke ment...  The processes through which readers evoke ment...          7277828  924cf7a4836ebfc20ee094c30e61b949be049fb6  ...    None   None    None   None None
73283   P99-1051  This paper examines the extent to which verb d...  This paper examines the extent to which verb d...          1829043  6b1f6f28ee36de69e8afac39461ee1158cd4d49a  ...    None   None    None   None None
73284   P00-1013  Spoken dialogue managers have benefited from u...  Spoken dialogue managers have benefited from u...         10903652  483c818c09e39d9da47103fbf2da8aaa7acacf01  ...    None   None    None   None None

[73285 rows x 21 columns]
```

The provided ACL id is consistent with the S2 API as well: [https://api.semanticscholar.org/graph/v1/paper/ACL:P83-1025](https://api.semanticscholar.org/graph/v1/paper/ACL:P83-1025). The API can be used to fetch more information for each paper in the corpus.

---

## Text generation on Huggingface

We fine-tuned the distilgpt2 model from huggingface using the full text from this corpus. The model is trained for the generation task.

Text Generation Demo: <https://huggingface.co/shaurya0512/distilgpt2-finetune-acl22>

Example:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("shaurya0512/distilgpt2-finetune-acl22")
>>> model = AutoModelForCausalLM.from_pretrained("shaurya0512/distilgpt2-finetune-acl22")
>>>
>>> input_context = "We introduce a new language representation"
>>> input_ids = tokenizer.encode(input_context, return_tensors="pt")  # encode input context
>>> outputs = model.generate(
...     input_ids=input_ids, max_length=128, temperature=0.7, repetition_penalty=1.2
... )  # generate sequences
>>> print(f"Generated: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")
```

```text
Generated: We introduce a new language representation for the task of sentiment classification. We propose an approach to learn representations from unlabeled data, which is based on supervised learning and can be applied in many applications such as machine translation (MT) or information retrieval systems where labeled text has been used by humans with limited training time but no supervision available at all. Our method achieves state-oftheart results using only one dataset per domain compared to other approaches that use multiple datasets simultaneously, including BERTScore(Devlin et al., 2019; Liu & Lapata, 2020b ) ; RoBERTa+LSTM + L2SRC -
```

### TODO

1. ~~Link the acl corpus to semantic scholar (S2), sources like S2ORC~~
2. Extract figures and captions from the ACL corpus using pdffigures - [scientific-figure-captioning](https://github.com/billchen0/scientific-figure-captioning)
3. Have a release schedule to keep the corpus updated.
4. ACL citation graph
5. ~~Enhance metadata with bib file mapping - include authors~~
6. ~~Add citation counts for papers~~
7. Use [ForeCite](https://github.com/allenai/ForeCite) to extract impactful keywords from the corpus
8. Link datasets using [paperswithcode](https://github.com/paperswithcode/paperswithcode-data)? - don't know how useful this is
9. Have some stats about the data - [linguistic-diversity](http://stats.aclrollingreview.org/submissions/linguistic-diversity/); [geo-diversity](http://stats.aclrollingreview.org/submissions/geo-diversity/); if possible [explorer](http://stats.aclrollingreview.org/submissions/explorer/)

We are hoping that this corpus can be helpful for analysis relevant to the ACL community. **Please cite/star 🌟 this page if you use this corpus**

## Citing the ACL Anthology Corpus

If you use this corpus in your research please use the following BibTeX entry:

    @Misc{acl_anthology_corpus,
      author       = {Shaurya Rohatgi},
      title        = {ACL Anthology Corpus with Full Text},
      howpublished = {Github},
      year         = {2022},
      url          = {https://github.com/shauryr/ACL-anthology-corpus}
    }

[<img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-ffdd00?style=for-the-badge&logo=buy-me-a-coffee&logoColor=black">](https://www.buymeacoffee.com/shauryrG)

## Acknowledgements

We thank Semantic Scholar for providing access to the citation-related data in this corpus.

## License

The ACL anthology corpus is released under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). By using this corpus, you are agreeing to its usage terms.
Provide a detailed description of the following dataset: ACL Anthology Corpus with Full Text
DBP-5L (Greek)
DBP-5L is a multilingual knowledge graph (KG) dataset containing 5 KGs in English, French, Japanese, Greek, and Spanish. The dataset is used for the knowledge graph completion and entity alignment tasks. DBP-5L (Greek) is the subset of DBP-5L with the Greek KG.
Provide a detailed description of the following dataset: DBP-5L (Greek)
DBP-5L (English)
DBP-5L is a multilingual knowledge graph (KG) dataset containing 5 KGs in English, French, Japanese, Greek, and Spanish. The dataset is used for the knowledge graph completion and entity alignment tasks. DBP-5L (English) is the subset of DBP-5L with the English KG.
Provide a detailed description of the following dataset: DBP-5L (English)
DBP-5L (Spanish)
DBP-5L is a multilingual knowledge graph (KG) dataset containing 5 KGs in English, French, Japanese, Greek, and Spanish. The dataset is used for the knowledge graph completion and entity alignment tasks. DBP-5L (Spanish) is the subset of DBP-5L with the Spanish KG.
Provide a detailed description of the following dataset: DBP-5L (Spanish)
DBP-5L (Japanese)
DBP-5L is a multilingual knowledge graph (KG) dataset containing 5 KGs in English, French, Japanese, Greek, and Spanish. The dataset is used for the knowledge graph completion and entity alignment tasks. DBP-5L (Japanese) is the subset of DBP-5L with the Japanese KG.
Provide a detailed description of the following dataset: DBP-5L (Japanese)
DBP-5L (French)
DBP-5L is a multilingual knowledge graph (KG) dataset containing 5 KGs in English, French, Japanese, Greek, and Spanish. The dataset is used for the knowledge graph completion and entity alignment tasks. DBP-5L (French) is the subset of DBP-5L with the French KG.
Provide a detailed description of the following dataset: DBP-5L (French)
KPI-EDGAR
We introduce KPI-EDGAR, a novel dataset for Joint Named Entity Recognition and Relation Extraction building on financial reports uploaded to the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, where the main objective is to extract Key Performance Indicators (KPIs) from financial documents (the named entity recognition part) and link them to their numerical values (the relation extraction part). Challenges include the fuzzy borders of entities and finding the correct numeric value/year pair for each entity.
Provide a detailed description of the following dataset: KPI-EDGAR
EgoTaskQA
**EgoTaskQA** is a benchmark containing 40K balanced question-answer pairs selected from 368K programmatically generated questions over 2K egocentric videos. It provides a single home for the crucial dimensions of task understanding through question answering on real-world egocentric videos.
Provide a detailed description of the following dataset: EgoTaskQA
FRMT
**FRMT** is a dataset and evaluation benchmark for Few-shot Region-aware Machine Translation, a type of style-targeted translation. The dataset consists of human translations of a few thousand English Wikipedia sentences into regional variants of Portuguese and Mandarin. Source documents are selected to enable detailed analysis of phenomena of interest, including lexically distinct terms and distractor terms.
Provide a detailed description of the following dataset: FRMT
ABO
**ABO** is a large-scale dataset designed for material prediction and multi-view retrieval experiments. The dataset contains Blender renderings of 30 viewpoints for each of the 7,953 3D objects, as well as camera intrinsics and extrinsics for each rendering.
Provide a detailed description of the following dataset: ABO
MTEB
**MTEB** is a benchmark which spans 8 embedding tasks covering a total of 56 datasets and 112 languages. The 8 task types are bitext mining, classification, clustering, pair classification, reranking, retrieval, semantic textual similarity, and summarization. The 56 datasets contain varying text lengths and are grouped into three categories: sentence-to-sentence, paragraph-to-paragraph, and sentence-to-paragraph.
Provide a detailed description of the following dataset: MTEB
Replication Data for: On estimating Armington elasticities for Japan's meat imports
**Replication Data for: On estimating Armington elasticities for Japan's meat imports** contains monthly import values and quantities from Jan 1996 to Dec 2020 for all 78 items.
Provide a detailed description of the following dataset: Replication Data for: On estimating Armington elasticities for Japan's meat imports
Spaces
We introduce our new dataset, Spaces, to provide a more challenging shared dataset for future view synthesis research. Spaces consists of 100 indoor and outdoor scenes, captured using a 16-camera rig. For each scene, we captured image sets at 5-10 slightly different rig positions (within ∼10 cm of each other). This jittering of the rig position provides a flexible dataset for view synthesis, as we can mix views from different rig positions for the same scene during training. We calibrated the intrinsics and the relative pose of the rig cameras using a standard structure-from-motion approach, using the nominal rig layout as a prior, and corrected exposure differences. For our main experiments we undistort the images and downsample them to a resolution of 800 × 480. We use 90 scenes from the dataset for training and hold out 10 for evaluation.
Provide a detailed description of the following dataset: Spaces
virufy-data
A dataset for segmenting coughs.
Provide a detailed description of the following dataset: virufy-data
PersonPath22
PersonPath22 is a large-scale multi-person tracking dataset containing 236 videos captured mostly from static-mounted cameras, collected from sources where we were given the rights to redistribute the content and participants have given explicit consent. Each video has ground-truth annotations including both bounding boxes and tracklet-ids for all the persons in each frame.
Provide a detailed description of the following dataset: PersonPath22
Riddle Sense
Question: I have five fingers but I am not alive. What am I? Answer: a glove. Answering such a riddle-style question is a challenging cognitive process, in that it requires complex commonsense reasoning abilities, an understanding of figurative language, and counterfactual reasoning skills, which are all important abilities for advanced natural language understanding (NLU). However, there are currently no dedicated datasets aiming to test these abilities. Herein, we present RiddleSense, a new multiple-choice question answering task, which comes with the first large dataset (5.7k examples) for answering riddle-style commonsense questions. We systematically evaluate a wide range of models over the challenge and point out that there is a large gap between the best supervised model and human performance, suggesting intriguing future research in the direction of higher-order commonsense reasoning and linguistic creativity towards building advanced NLU systems.
Provide a detailed description of the following dataset: Riddle Sense
Diamante
Diamante is a novel and efficient framework consisting of a data collection strategy and a learning method to boost the performance of pre-trained dialogue models. Two kinds of human feedback are collected and leveraged in Diamante, including explicit demonstration and implicit preference. The Diamante dataset is publicly available at the [LUGE platform](https://www.luge.ai/#/luge/dataDetail?id=52).
Provide a detailed description of the following dataset: Diamante
GD
These images were generated using the UnityEyes simulator, after including essential eyeball physiology elements and modeling binocular vision dynamics. The images are annotated with head pose and gaze direction information, as well as 2D and 3D landmarks of the eye's most important features. Additionally, the images are distributed into eight classes denoting the gaze direction of a driver's eyes (TopLeft, TopRight, TopCenter, MiddleLeft, MiddleRight, BottomLeft, BottomRight, BottomCenter). This dataset was used to train a DNN model for estimating gaze direction. The dataset contains 61,063 training images, 132,630 testing images, and an additional 72,000 images for improvement.
Provide a detailed description of the following dataset: GD
eLife
This dataset contains 4,828 full biomedical articles paired with non-technical lay summaries derived from the eLife scientific journal.
Provide a detailed description of the following dataset: eLife
PLOS
This dataset contains 27,525 full biomedical articles paired with non-technical lay summaries derived from various journals published by the Public Library of Science (PLOS).
Provide a detailed description of the following dataset: PLOS
Europarl-ASR
Europarl-ASR (EN) is a 1300-hour English-language speech and text corpus of parliamentary debates for (streaming) Automatic Speech Recognition training and benchmarking, speech data filtering, and speech data verbatimization, based on European Parliament speeches and their official transcripts (1996-2020). It includes dev-test sets for streaming ASR benchmarking, made up of 18 hours of manually revised speeches. The availability of manual non-verbatim and verbatim transcripts for dev-test speeches makes this corpus also useful for the assessment of automatic filtering and verbatimization techniques. The corpus is released under an open licence at https://www.mllp.upv.es/europarl-asr/

Europarl-ASR contents:
* Speech data: 1300 hours of English-language annotated speech data; 3 full sets of timed transcriptions (official non-verbatim, automatically noise-filtered, automatically verbatimized); 18 hours of speech data with both manually revised verbatim transcriptions and official non-verbatim transcriptions, split into 2 independent validation and evaluation partitions for 2 realistic ASR tasks (with vs. without previous knowledge of the speaker).
* Text data: 70 million tokens of English-language text data.
* Pretrained language models: the Europarl-ASR English-language n-gram language model and vocabulary.
Provide a detailed description of the following dataset: Europarl-ASR
Mint
**Mint** is a new Multilingual intimacy analysis dataset covering 13,384 tweets in 10 languages including English, French, Spanish, Italian, Portuguese, Korean, Dutch, Chinese, Hindi, and Arabic. The dataset is released along with the SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis.
Provide a detailed description of the following dataset: Mint
DA$^2$
**DA2** is a large-scale Dual-Arm Dexterity-Aware grasp dataset, containing a total of about 9M parallel-jaw grasp pairs for more than 6,000 different meshes. The grasp pairs are labeled with multiple grasp dexterity measures derived by fully analyzing the grasp matrix. The dataset constitutes a standardized data source, filling the gap in vision-guided dual-arm grasping of arbitrary objects.
Provide a detailed description of the following dataset: DA$^2$
DIMO
The **Industrial Metal Objects** dataset is a diverse dataset of industrial metal objects. These objects are symmetric, textureless and highly reflective, leading to challenging conditions not captured in existing datasets. The dataset contains both real-world and synthetic multi-view RGB images with 6D object pose labels.
Provide a detailed description of the following dataset: DIMO
HPD
These images were generated using Blender and IEE-Simulator with different head-poses, where the images are labelled according to nine classes (straight, turned bottom-left, turned left, turned top-left, turned bottom-right, turned right, turned top-right, reclined, looking up). The dataset contains 16,013 training images and 2,825 testing images, in addition to 4,700 images for improvements.
Provide a detailed description of the following dataset: HPD
H-DIBCO 2016
H-DIBCO 2016 is the international Handwritten Document Image Binarization Contest organized in the context of the ICFHR 2016 conference.
Provide a detailed description of the following dataset: H-DIBCO 2016
DIBCO 2011
DIBCO 2011 is the International Document Image Binarization Contest organized in the context of the ICDAR 2011 conference. The general objective of the contest is to identify current advances in document image binarization for both machine-printed and handwritten document images, using evaluation performance measures that conform to document image analysis and recognition.
Provide a detailed description of the following dataset: DIBCO 2011
DIBCO 2017
DIBCO 2017 is the international Competition on Document Image Binarization organized in conjunction with the ICDAR 2017 conference. The general objective of the contest is to identify current advances in document image binarization of machine-printed and handwritten document images, using performance evaluation measures that are motivated by document image analysis and recognition requirements.
Provide a detailed description of the following dataset: DIBCO 2017
CB-ToF-Extension
An extension of the Cornell-Box Time-of-Flight Dataset (https://paperswithcode.com/dataset/cb-tof) containing moving objects. It follows the same data structure.
Provide a detailed description of the following dataset: CB-ToF-Extension
DIBCO 2013
DIBCO 2013 is the international Document Image Binarization Contest organized in the context of the ICDAR 2013 conference. The general objective of the contest is to identify current advances in document image binarization for both machine-printed and handwritten document images, using evaluation performance measures that conform to document image analysis and recognition.
Provide a detailed description of the following dataset: DIBCO 2013
H-DIBCO 2014
H-DIBCO 2014 is the International Document Image Binarization Competition dedicated to handwritten document images, organized in conjunction with the ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures.
Provide a detailed description of the following dataset: H-DIBCO 2014
LRDE DBD
This dataset is composed of full-document images, ground truth, and tools to perform an evaluation of binarization algorithms. It allows pixel-based accuracy and OCR-based evaluations.
Provide a detailed description of the following dataset: LRDE DBD
H-DIBCO 2018
H-DIBCO 2018 is the international Handwritten Document Image Binarization Contest organized in the context of the ICFHR 2018 conference. The general objective of the contest is to record recent advances in document image binarization using established evaluation performance measures.
Provide a detailed description of the following dataset: H-DIBCO 2018
H-DIBCO 2012
H-DIBCO 2012 is the International Document Image Binarization Competition dedicated to handwritten document images, organized in conjunction with the ICFHR 2012 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures.
Provide a detailed description of the following dataset: H-DIBCO 2012
Acoustic frequency responses in a conventional classroom
Hahmann, Manuel; Verburg Riezu, Samuel Arturo (2021): Acoustic frequency responses in a conventional classroom. Technical University of Denmark. Dataset. https://doi.org/10.11583/DTU.13315286
Provide a detailed description of the following dataset: Acoustic frequency responses in a conventional classroom
Occluded COCO
**Occluded COCO** is an automatically generated subset of the COCO val dataset, collecting partially occluded objects for a large variety of categories in real images in a scalable manner, where the target object is partially occluded but its segmentation mask remains connected.
Provide a detailed description of the following dataset: Occluded COCO
Separated COCO
**Separated COCO** is an automatically generated subset of the COCO val dataset, collecting separated objects for a large variety of categories in real images in a scalable manner, where the target object's segmentation mask is separated into distinct regions by the occluder.
Provide a detailed description of the following dataset: Separated COCO
Wind Tunnel and Flight Test Experiments
Our dataset comprises 23,468 non-labelled and 356 labelled samples, where each sample is a 512 × 512 × 1 IR image collected with the thermographic measurement specifications. Some samples contain scars, shadows, salt & pepper noise, and contrast-burst regions, demonstrating that realistic laminar-turbulent flow observation scenarios are subject to high noise. Besides, a laminar flow area may appear brighter or darker compared to the regions in a turbulent flow. Due to some effect (e.g., shadowing by the sun) it is even possible that, in one part of the image, the laminar flow area appears darker, and in another part, it appears brighter than the turbulent flow area, as the cover sample shows. Thermographic measurement examples from wind tunnel and flight test experiments: (i) top and bottom rows: wind tunnel; (ii) center row: vertical stabilizer from the AFLoNext project. Note that the red flow-separation lines were semi-automatically drawn as ground truths by an internal software tool of our institution. In principle, the software took some pixel samples selected by human experts for each flow region as input and accordingly drew the laminar flow boundary after statistical analysis of the selected pixels. Finally, if mislocalisation happened in the separation lines, human experts corrected them in an iterative way.
Provide a detailed description of the following dataset: Wind Tunnel and Flight Test Experiments
TREC-News
The TREC News Track features modern search tasks in the news domain. In partnership with The Washington Post, we are developing test collections that support the search needs of news readers and news writers in the current news environment. It's our hope that the track will foster research that establishes a new sense for what "relevance" means for news search.
Provide a detailed description of the following dataset: TREC-News
Robust04
The goal of the Robust track is to improve the consistency of retrieval technology by focusing on poorly performing topics. In addition, the track brings back a classic, ad hoc retrieval task in TREC that provides a natural home for new participants. An ad hoc task in TREC investigates the performance of systems that search a static set of documents using previously-unseen topics. For each topic, participants create a query and submit a ranking of the top 1000 documents for that topic.
Provide a detailed description of the following dataset: Robust04
Webis-Touché-2020
This paper is a condensed report on the second year of the Touché shared task on argument retrieval held at CLEF 2021. With the goal to provide a collaborative platform for researchers, we organized two tasks: (1) supporting individuals in finding arguments on controversial topics of social importance and (2) supporting individuals with arguments in personal everyday comparison situations.
Provide a detailed description of the following dataset: Webis-Touché-2020
TAT
**Taiwanese Across Taiwan (TAT)** corpus is a large-scale database of native Taiwanese article/reading speech collected across Taiwan. This corpus contains native Taiwanese speech in various accents from across Taiwan and is annotated twice for use in voice recognition research. The corpus contains recordings from 100 native speakers, each 30 minutes in length, making a total of 100 hours of speech data.
Provide a detailed description of the following dataset: TAT
SpeechMatrix
**SpeechMatrix** is a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech.
Provide a detailed description of the following dataset: SpeechMatrix
DIS5K
To build the highly accurate Dichotomous Image Segmentation dataset (DIS5K), we first manually collected over 12,000 images from Flickr based on our pre-designed keywords. Then, we obtained 5,470 images spanning 22 groups and 225 categories from the 12,000 images according to the structural complexities of the objects. Each image was then manually labeled with pixel-wise accuracy using GIMP. The labeled targets in DIS5K mainly focus on the objects of the images defined by the pre-designed keywords (categories), regardless of their characteristics, e.g., salient, common, camouflaged, meticulous, etc. The average per-image labeling time is ∼30 minutes, and some images took up to 10 hours.
Provide a detailed description of the following dataset: DIS5K
HOWS
HOWS-CL-25 (Household Objects Within Simulation dataset for Continual Learning) is a synthetic dataset especially designed for object classification on mobile robots operating in a changing environment (like a household), where it is important to learn new, never-seen objects on the fly. The dataset can also be used for other learning use cases, like instance segmentation or depth estimation, or wherever household objects or continual learning are of interest.

Our dataset contains 150,795 unique synthetic images covering 25 different household categories with 925 3D models in total. For each of those categories, we generated about 6,000 RGB images, and for each RGB image we also provide a corresponding depth, segmentation, and normal image. The dataset was created with BlenderProc [Denninger et al. (2019)], a procedural pipeline to generate images for deep learning. This tool created a virtual room with randomly textured floors, walls, and a light source with randomly chosen light intensity and color. After that, a 3D model is placed in the resulting room. This object is customized by randomly assigning materials, including different textures, to achieve a diverse dataset. Moreover, each object may be deformed with a random displacement texture. We use 774 3D models from the ShapeNet dataset [A. X. Chang et al. (2015)] and the other models from various internet sites. Please note that we had to manually fix and filter most of the models with Blender before using them in the pipeline.

For continual learning (CL), we provide two different loading schemes (sketched below):
- Five sequences with five categories each
- Twelve sequences with three categories in the first and two in each of the other sequences.

In addition to the RGB, depth, segmentation, and normal images, we also provide the features of the RGB images computed by ResNet50, as used in our RECALL paper. In those two loading schemes, ten percent of the images are used for validation, where we ensure that an object instance is either in the training or the validation set, not in both. This avoids learning to recognize certain instances by heart. We recommend using these loading schemes to compare your approach with others. For further information and code examples, please have a look at our website: https://github.com/DLR-RM/RECALL.
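A minimal sketch of the two loading schemes described above, assuming category IDs 0-24 in dataset order (the ID-to-category mapping is an assumption):

```python
# 25 household category IDs, 0..24; the mapping to actual category names
# is an assumption here.
categories = list(range(25))

# Scheme 1: five sequences with five categories each.
scheme_1 = [categories[i:i + 5] for i in range(0, 25, 5)]

# Scheme 2: twelve sequences; three categories in the first sequence,
# two in each of the remaining eleven.
scheme_2 = [categories[:3]] + [categories[i:i + 2] for i in range(3, 25, 2)]

assert len(scheme_1) == 5 and len(scheme_2) == 12
```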
Provide a detailed description of the following dataset: HOWS
detection_of_IoT_botnet_attacks_N_BaIoT Data Set
This dataset addresses the lack of public botnet datasets, especially for the IoT. It provides *real* traffic data gathered from 9 commercial IoT devices authentically infected by the Mirai and BASHLITE botnets.
Provide a detailed description of the following dataset: detection_of_IoT_botnet_attacks_N_BaIoT Data Set
Lipogram-e
This is a dataset of 3 English books that do not contain the letter "e": all of "Gadsby" by Ernest Vincent Wright, all of "A Void" by Georges Perec, and almost all of "Eunoia" by Christian Bök (except for the single chapter that uses the letter "e"). The dataset is contributed as part of a paper titled "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio", to appear at COLING 2022. The works in this dataset are examples of lipograms, works in which a letter or string is systematically omitted; lipograms are an example of hard-constrained writing.
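As a concrete illustration of the hard constraint these books satisfy, here is a minimal Python sketch (the function names are ours, not from the paper's tool) that checks a lipogram constraint and filters a vocabulary down to constraint-satisfying tokens:

```python
def violates_lipogram(text: str, banned: str = "e") -> bool:
    """Return True if the text contains the banned letter (case-insensitive)."""
    return banned.lower() in text.lower()

def lipogram_vocabulary(vocab: list[str], banned: str = "e") -> list[str]:
    """Keep only vocabulary items that respect the lipogram constraint,
    e.g., to restrict a language model's sampling to allowed tokens."""
    return [w for w in vocab if not violates_lipogram(w, banned)]

print(lipogram_vocabulary(["night", "evening", "dark", "dusk"]))
# -> ['night', 'dark', 'dusk']
```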
Provide a detailed description of the following dataset: Lipogram-e
Instructional-DT (Instr-DT)
This discourse treebank consists of annotated instructional texts originally assembled at the Information Technology Research Institute, University of Brighton. It contains 176 documents with an average of 32.6 EDUs per document, for a total of 5,744 EDUs and 53,250 words.
Provide a detailed description of the following dataset: Instructional-DT (Instr-DT)
Lowest Common Ancestor Generations (LCAG) Phasespace Particle Decay Reconstruction Dataset
This dataset contains synthetic particle decays simulated using the [PhaseSpace](https://github.com/zfit/phasespace) library. All simulated decay topologies have a common root particle of mass 100 (arbitrary units). Intermediate particles are selected at random with replacement from the following masses: [90, 80, 70, 50, 25, 20, 10]. Final-state particles, which make up the leaf nodes of generated topologies, are drawn with replacement from the following masses: [1, 2, 3, 5, 12]. For each intermediate particle (including the root), the number of children is between two and five. The dataset contains the resulting simulated particle-physics decays, with information about the detected particles (leaves) to be used as input and Lowest Common Ancestor Generations (LCAGs) to be used as training targets.

Tree topologies were generated as follows: starting from the root particle, a set of children is selected from the available intermediate and final-state particles such that the sum of their masses is less than the parent's mass; this process is then repeated for each child that is not a final-state particle, and so on, until only final-state particles remain.

The dataset consists of 200 topologies (unique decay processes) in total, with 16,000 samples per topology. In the paper's experiments, 2,000 topologies were used for each of training, validation, and testing. Leaf-node features are not normalized. We have not enforced any ordering of the nodes and leave them unsorted, as created in the dataset.
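The topology-creation procedure described above can be sketched in a few lines; the masses and child-count limits come directly from the description, while the function names and the rejection-sampling loop are our own illustrative choices:

```python
import random

INTERMEDIATE_MASSES = [90, 80, 70, 50, 25, 20, 10]
FINAL_STATE_MASSES = [1, 2, 3, 5, 12]

def sample_children(parent_mass, rng, min_children=2, max_children=5):
    """Draw 2-5 child masses (with replacement) whose sum is below the parent mass."""
    while True:  # rejection sampling; always feasible since e.g. two mass-1 leaves sum to 2
        n = rng.randint(min_children, max_children)
        children = rng.choices(INTERMEDIATE_MASSES + FINAL_STATE_MASSES, k=n)
        if sum(children) < parent_mass:
            return children

def build_topology(parent_mass=100, rng=None):
    """Recursively expand intermediate particles until only final states remain."""
    rng = rng or random.Random()
    node = {"mass": parent_mass, "children": []}
    for mass in sample_children(parent_mass, rng):
        if mass in FINAL_STATE_MASSES:
            node["children"].append({"mass": mass, "children": []})  # leaf node
        else:
            node["children"].append(build_topology(mass, rng))
    return node

tree = build_topology(rng=random.Random(42))
```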
Provide a detailed description of the following dataset: Lowest Common Ancestor Generations (LCAG) Phasespace Particle Decay Reconstruction Dataset
Europarl
A corpus of parallel text in 21 European languages from the proceedings of the European Parliament. The Europarl parallel corpus is extracted from the proceedings of the European Parliament (1996-2011). It includes versions in 21 European languages: Romance (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavic (Bulgarian, Czech, Polish, Slovak, Slovene), Finno-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. Parallel sentence counts are in the range of 400K-2M, depending on the language combination.

The goal of the extraction and processing was to generate sentence-aligned text for statistical machine translation systems. For this purpose we extracted matching items and labeled them with corresponding document IDs. Using a preprocessor we identified sentence boundaries, and we sentence-aligned the data using a tool based on the Gale and Church algorithm. The Europarl corpus was collected mainly to aid research in statistical machine translation (training, evaluation), but it has been used for many other natural language problems: word sense disambiguation, anaphora resolution, information extraction, etc.

Monolingual datasets are also available for 9 languages. These are supersets of the parallel versions, with monolingual word counts in the range of 7M-54M, depending on the language.

Test sets: several test sets have been released for the Europarl corpus. In general, the Q4/2000 portion of the data (2000-10 to 2000-12) should be reserved for testing. All released test sets have been selected from this quarter.
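The released corpus already ships sentence-aligned, but to illustrate the flavor of length-based alignment in the spirit of the Gale and Church algorithm mentioned above, here is a simplified dynamic-programming sketch; it uses a naive length-ratio cost and fixed bead penalties rather than the original probabilistic model, so it is illustrative only, not the actual tool:

```python
def align(src, tgt):
    """Illustrative length-based sentence alignment (simplified, not the actual tool)."""
    INF = float("inf")

    def cost(ss, ts):
        # Naive cost: penalize mismatch in total character length of the two sides.
        ls, lt = sum(len(s) for s in ss), sum(len(t) for t in ts)
        return abs(ls - lt) / (ls + lt + 1)

    n, m = len(src), len(tgt)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    # Allowed "beads" (di, dj, fixed penalty): 1-1, 1-0, 0-1, 2-1, 1-2 alignments.
    moves = [(1, 1, 0.0), (1, 0, 0.4), (0, 1, 0.4), (2, 1, 0.2), (1, 2, 0.2)]
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == INF:
                continue
            for di, dj, penalty in moves:
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                c = dp[i][j] + penalty + cost(src[i:ni], tgt[j:nj])
                if c < dp[ni][nj]:
                    dp[ni][nj] = c
                    back[ni][nj] = (i, j)
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):  # backtrack from the end to recover the alignment
        pi, pj = back[i][j]
        pairs.append((src[pi:i], tgt[pj:j]))
        i, j = pi, pj
    return list(reversed(pairs))

pairs = align(["Hello.", "How are you?"], ["Bonjour.", "Comment allez-vous ?"])
```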
Provide a detailed description of the following dataset: Europarl
WDC SOTAB
WDC SOTAB is a benchmark featuring two annotation tasks: Column Type Annotation (CTA) and Columns Property Annotation (CPA). The goal of the CTA task is to annotate the columns of a table with one of 91 Schema.org types, such as telephone, duration, Place, or Organization. The goal of the CPA task is to annotate pairs of table columns with one of 176 Schema.org properties, such as gtin13, startDate, priceValidUntil, or recipeIngredient. The benchmark consists of 59,548 tables annotated for CTA and 48,379 tables annotated for CPA, originating from 74,215 different websites. The tables are split into training, validation, and test sets for both tasks. They cover 17 popular Schema.org types, including Product, LocalBusiness, Event, and JobPosting, and originate from the Schema.org Table Corpus. Some characteristics of the two tasks are given in the table below, where "Columns" refers to the number of labeled columns/column pairs and "Classes" to the number of unique classes used for annotation.

| Task | Columns | Classes |
|----------------------------|---------|---------|
| Column Property Annotation | 174,998 | 176 |
| Column Type Annotation | 162,351 | 91 |
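To make the two tasks concrete, the sketch below shows what CTA and CPA examples might look like once the tables are loaded; the field names and values are illustrative, not the benchmark's actual file schema:

```python
# Illustrative only: the actual WDC SOTAB files use their own schema.
cta_example = {
    "table_id": "example.com_Product_0001",
    "column_values": ["+1-555-0100", "+44 20 7946 0958", "+61 2 5550 1234"],
    "label": "telephone",  # one of the 91 Schema.org CTA types
}

cpa_example = {
    "table_id": "example.com_Event_0042",
    "main_column": ["Jazz Night", "Open Mic"],
    "second_column": ["2022-05-01", "2022-05-08"],
    "label": "startDate",  # one of the 176 Schema.org CPA properties
}

# A common baseline serializes the column values into a single string
# and feeds it to a text classifier:
serialized = " ".join(cta_example["column_values"])
```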
Provide a detailed description of the following dataset: WDC SOTAB
ECQA
This repository contains the publicly released dataset, code, and models for the Explanations for CommonsenseQA paper presented at ACL-IJCNLP 2021. The directories ```data``` and ```code``` inside the root folder contain the dataset and code, respectively. The same [data](https://github.com/dair-iitd/ECQA-Dataset) and [code](https://github.com/dair-iitd/ECQA) are also made available through our AIHN collaboration partner institute, IIT Delhi. You can download the full paper from [here](https://aclanthology.org/2021.acl-long.238/).

Note that these annotations are provided for the questions of the CommonsenseQA dataset ([https://www.tau-nlp.org/commonsenseqa](https://www.tau-nlp.org/commonsenseqa)): arXiv:1811.00937 [cs.CL].

### Citations

Please consider citing this paper as follows:

```
@inproceedings{aggarwaletal2021ecqa,
  title     = "{E}xplanations for {C}ommonsense{QA}: {N}ew {D}ataset and {M}odels",
  author    = "Shourya Aggarwal and Divyanshu Mandowara and Vishwajeet Agrawal and Dinesh Khandelwal and Parag Singla and Dinesh Garg",
  booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
  pages     = "3050--3065",
  year      = "2021",
  publisher = "Association for Computational Linguistics"
}
```
Provide a detailed description of the following dataset: ECQA
TED Gesture Dataset
Co-speech gestures are everywhere. People make gestures when they chat with others, give a public speech, talk on the phone, and even think aloud. Despite this ubiquity, few datasets are available, mainly because it is expensive to recruit actors and track precise body motions. A few datasets exist (e.g., MSP AVATAR [17] and Personality Dyads Corpus [18]), but their sizes are limited to less than 3 hours, and they lack diversity in speech content and speakers. The gestures may also be unnatural owing to inconvenient body-tracking suits and acting in a lab environment. We therefore collected a new dataset of co-speech gestures: the TED Gesture Dataset. TED is a conference where people share their ideas from a stage, and recordings of these talks are available online. Using TED talks has the following advantages over the existing datasets:

- Large enough to learn the mapping from speech to gestures, and the number of videos continues to grow.
- Various speech content and speakers: there are thousands of unique speakers, each talking about their own ideas and stories.
- The speeches are well prepared, so we expect the speakers to use proper hand gestures.
- Favorable for automated data collection and annotation: all talks come with transcripts, and the flat backgrounds and steady shots make it easier to extract human poses with computer vision techniques.
Provide a detailed description of the following dataset: TED Gesture Dataset
DBE-KT22
**DBE-KT22** contains student exercise-answering activities collected through an online practice platform for the database systems course taught at the Australian National University between 2018 and 2021. The dataset supports research on knowledge tracing, i.e., modeling students' knowledge from historical sequences of exercise answering.
Provide a detailed description of the following dataset: DBE-KT22
HYPERVIEW
The dataset comprises 2,886 patches in total (2 m GSD): 1,732 patches for training and 1,154 patches for testing. The patch size varies (depending on the agricultural parcel) and is on average around 60x60 pixels. Each patch contains 150 contiguous hyperspectral bands (462-942 nm, with a spectral resolution of 3.2 nm), reflecting the spectral range of the hyperspectral imaging sensor deployed on board Intuition-1. Participants are given a training set of 1,732 examples: hyperspectral image patches with the corresponding ground-truth information, where each masked patch corresponds to a field of interest. The ground truth consists of the soil parameters obtained through laboratory analysis of the soil samples collected for each field of interest, represented as a 4-value vector.
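A minimal sketch of how one patch and its target vector might be handled, assuming the patches are stored as NumPy arrays; the file layout and key names here are hypothetical, not the challenge's actual format:

```python
import numpy as np

# Hypothetical file layout and key names; the challenge ships its own archive format.
with np.load("train/patch_0000.npz") as f:
    patch = f["data"]   # shape: (150, H, W) -- 150 spectral bands, 462-942 nm
    mask = f["mask"]    # field-of-interest mask, shape: (H, W)
target = np.load("train/gt_0000.npy")  # shape: (4,), the four soil parameters

# Average the spectrum over the field of interest to get one 150-d feature vector.
spectrum = patch[:, mask.astype(bool)].mean(axis=1)  # shape: (150,)
assert spectrum.shape == (150,) and target.shape == (4,)
```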
Provide a detailed description of the following dataset: HYPERVIEW