dataset_name | description | prompt |
|---|---|---|
NLU++ | **NLU++** is a dataset for natural language understanding (NLU) in task-oriented dialogue (ToD) systems, aiming to provide a much more challenging evaluation environment for dialogue NLU models, up to date with current application and industry requirements. NLU++ is divided into two domains (banking and hotels) and brings several crucial improvements over commonly used NLU datasets. 1) NLU++ provides fine-grained domain ontologies with a large set of challenging multi-intent sentences, introducing and validating the idea of intent modules that can be combined into complex intents conveying complex user goals, combined with finer-grained and thus more challenging slot sets. 2) The ontology is divided into domain-specific and generic (i.e., domain-universal) intent modules that overlap across domains, promoting cross-domain reusability of annotated examples. 3) The dataset design has been inspired by problems observed in industrial ToD systems, and 4) it has been collected, filtered and carefully annotated by dialogue NLU experts, yielding high-quality annotated data.
List of datasets:
Banking: online banking queries annotated with their corresponding intents.
Span Extraction: the data used for the SpanConvert paper.
NLU++: a challenging evaluation environment for dialogue NLU models (multi-domain, multi-label intents and slots).
EVI: a challenging multilingual dataset for knowledge-based enrolment, verification, and identification in spoken dialogue systems. | Provide a detailed description of the following dataset: NLU++ |
FREDo | FREDo is a Few-Shot Document-Level Relation Extraction Benchmark based on DocRED and SciERC. The dataset is divided into four subsets: training set (62 relations), validation set (16 relations), in-domain test set (16 relations), and cross-domain test set (7 relations). | Provide a detailed description of the following dataset: FREDo |
SemEval 2022 Task 12: Symlink - Linking Mathematical Symbols to their Descriptions | Symlink is a SemEval shared task of extracting mathematical symbols and their descriptions from LaTeX source of scientific documents. This is a new task in SemEval 2022, which attracted 180 individual registrations and 59 final submissions from 7 participant teams. | Provide a detailed description of the following dataset: SemEval 2022 Task 12: Symlink - Linking Mathematical Symbols to their Descriptions |
OC-Cityscape | Out-of-Context Cityscapes (OC-Cityscapes) is a dataset built by replacing the roads in the Cityscapes validation data with various textures such as water, sand, and grass.
Download: https://drive.google.com/file/d/1pKdlglcvsGseLzS1MX8SdjzQO2o1KZm6/view?usp=sharing | Provide a detailed description of the following dataset: OC-Cityscape |
ANUBIS | **ANUBIS** is a large-scale human skeleton dataset containing 80 actions. Compared with previously collected datasets, ANUBIS is advantageous in the following four aspects: (1) employing more recently released sensors; (2) containing novel back view; (3) encouraging high enthusiasm of subjects; (4) including actions of the COVID pandemic era. | Provide a detailed description of the following dataset: ANUBIS |
Kompetencer | Kompetencer (en: competences) is a Danish job posting dataset annotated for nested spans of competences. | Provide a detailed description of the following dataset: Kompetencer |
ExVo2022 | Baseline code and data for the three tracks of the ExVo 2022 competition.
The dataset consists of 59,201 recordings totaling more than 36 hours of audio from 1,702 speakers. To our knowledge, this is substantially larger than any previously available dataset of human vocal bursts. | Provide a detailed description of the following dataset: ExVo2022 |
Cross-View Cross-Scene Multi-View Crowd Counting Dataset | A large synthetic multi-camera crowd counting dataset with a large number of scenes and camera views to capture many possible variations, which avoids the difficulty of collecting and annotating such a large real dataset.
The dataset is generated using GCC-CL [50], which works as a plug-in for the game "Grand Theft Auto V". The generating process consists of two parts: scene simulation and multi-view recording. First, crowd scenes are simulated through the selection of the background, region of interest (ROI), weather condition, human models and postures, etc. Next, cameras are placed at various locations to record the crowd scene from various perspectives. Bird's-eye views are also collected for visualization. Each person has a specific ID for mapping between their coordinates in the world coordinate system and their locations in each camera-view image. The camera parameters, such as coordinates, deflection angles and fields-of-view, are also recorded.
In total, the whole synthetic multi-view counting dataset contains 31 scenes. For each scene, around 100 camera views are set for multi-view recording. The multi-view recording is performed 100 times with different crowd distributions in the scene, i.e., each scene contains 100 multi-view frames, with each frame comprising 60 to 120 camera views. The image resolution is 1920×1080. | Provide a detailed description of the following dataset: Cross-View Cross-Scene Multi-View Crowd Counting Dataset |
HaVG | A dataset that contains the description of an image or a section within the image in Hausa and its equivalent in English. Hausa, a Chadic language, is a member of the Afro-Asiatic language family. It is estimated that about 100 to 150 million people speak the language, with more than 80 million indigenous speakers. The dataset comprises 32,923 images and their descriptions that are divided into training, development, test, and challenge test set. The Hausa Visual Genome is the first dataset of its kind and can be used for Hausa-English machine translation, multi-modal research, and image description, among various other natural language processing and generation tasks. | Provide a detailed description of the following dataset: HaVG |
Biographical | Biographical is a semi-supervised dataset for RE. The dataset, which is aimed towards digital humanities (DH) and historical research, is automatically compiled by aligning sentences from Wikipedia articles with matching structured data from sources including Pantheon and Wikidata. | Provide a detailed description of the following dataset: Biographical |
ComPhy | **Compositional Physical Reasoning (ComPhy)** is a dataset for understanding object-centric and relational physical properties hidden from visual appearance. For a given set of objects, the dataset includes a few videos of them moving and interacting under different initial conditions. A model is evaluated on its ability to unravel the compositional hidden properties, such as mass and charge, and to use this knowledge to answer a set of questions posed about one of the videos. | Provide a detailed description of the following dataset: ComPhy |
TuGebic | **TuGebic** is a corpus of recordings of spontaneous speech from Turkish-German bilinguals. Participants in the study were adult Turkish-German bilinguals living in Germany or Turkey at the time of recording, in the first half of the 1990s. The data were manually tokenised and normalised, and all proper names (names of participants and places mentioned in the conversations) were replaced with pseudonyms. Token-level automatic language identification was performed, which made it possible to establish the proportions of words from each language. | Provide a detailed description of the following dataset: TuGebic |
NHA12D | **NHA12D** is an annotated pavement crack dataset that contains images with different viewpoints and pavement types. The dataset is composed of 80 pavement images, including 40 concrete and 40 asphalt pavement images, captured by digital survey vehicles on the A12 network in the UK. | Provide a detailed description of the following dataset: NHA12D |
BigNews | Contains 3,689,229 English news articles on politics, gathered from 11 United States (US) media outlets covering a broad ideological spectrum. | Provide a detailed description of the following dataset: BigNews |
ORCAS-I | A labelled version of the ORCAS click-based dataset of Web queries, which provides 18 million connections to 10 million distinct queries.
DOI of the dataset: 10.48436/pp7xz-n9a06 | Provide a detailed description of the following dataset: ORCAS-I |
COUCH | **COUCH** is a large human-chair interaction dataset with clean annotations. The dataset consists of 3 hours and over 500 sequences of motion capture (MoCap) on human-chair interactions. | Provide a detailed description of the following dataset: COUCH |
ONCE-3DLanes | ONCE-3DLanes is a real-world autonomous driving dataset with lane layout annotation in 3D space. A dataset annotation pipeline is designed to automatically generate high-quality 3D lane locations from 2D lane annotations by exploiting the explicit relationship between point clouds and image pixels in 211,000 road scenes. | Provide a detailed description of the following dataset: ONCE-3DLanes |
CareCall | **carecall** is a Korean dialogue dataset for role-satisfying dialogue systems. The dataset was composed from a few samples of human-written dialogues using in-context few-shot learning of large-scale LMs: given a prompt consisting of a brief description of the chatbot's properties and a few dialogue examples, large-scale LMs can generate dialogues with a specific personality. This method was used to build the entire dataset. | Provide a detailed description of the following dataset: CareCall |
VIS-TIR | A visible-light and thermal-infrared images dataset for dual-spectrum depth estimation. | Provide a detailed description of the following dataset: VIS-TIR |
TemporalWiki | TemporalWiki is a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark hence allows researchers to periodically track an LM's ability to retain previous knowledge and acquire updated/new knowledge at each point in time. | Provide a detailed description of the following dataset: TemporalWiki |
Charlotte-ThermalFace | **Charlotte-ThermalFace** is a thermal face dataset. The data is fully annotated with the facial landmarks, ambient temperature, relative humidity, the air speed of the room, distance to the camera, and subject thermal sensation at the time of capturing each image.
There are approximately 10,000 infrared thermal images from 10 subjects in varying thermal conditions, at several distances from the camera, and at changing head positions. The air temperature was controlled to vary from 20.5°C (69°F) to 26.5°C (80°F). Images are available at four different temperatures, 10 relative distances from the camera ranging from 1 m (3.3 ft) to 6.6 m (21.6 ft), and 25 head positions.
• The first public facial thermal dataset annotated with the environmental properties including air temperature, relative humidity, airspeed, distance from the camera, and subjective thermal sensation of each person at the time.
• All the images are manually annotated with 72 or 43 facial landmarks.
• We are publishing the data in the original 16-bit radiometric TemperatureLinear format, which has the thermal value of each pixel. | Provide a detailed description of the following dataset: Charlotte-ThermalFace |
ExaASC | The **ExaASC** dataset is a dataset for Target-based Stance Detection in the Arabic Language that contains different types of targets, such as persons, entities and events. The corpus contains about 9,500 tweets with replies, with the target specified in the source tweet. Each sample has at least two stance annotations provided by Exa Corporation annotators. The stance of each reply is annotated toward the target in the corresponding source tweet. The data format is as follows: *id*, *main* (source tweet), *reply*, *target*, *label* of each annotator id, and *majority_label*. | Provide a detailed description of the following dataset: ExaASC |
Endomapper | The Endomapper dataset is the first collection of complete endoscopy sequences acquired during regular medical practice, including slow and careful screening explorations, making secondary use of medical data. Its original purpose is to facilitate the development and evaluation of VSLAM (Visual Simultaneous Localization and Mapping) methods on real endoscopy data. The first release of the dataset is composed of 50 sequences with a total of more than 13 hours of video. It is also the first endoscopic dataset that includes both the computed geometric and photometric endoscope calibration and the original calibration videos. Metadata and annotations associated with the dataset range from anatomical landmarks and procedure labeling to tool segmentation masks, COLMAP 3D reconstructions, simulated sequences with ground truth, and metadata related to special cases, such as sequences from the same patient. This information will support research in endoscopic VSLAM as well as other research lines, and open new ones. | Provide a detailed description of the following dataset: Endomapper |
BS-RSC | BS-RSC is a real-world rolling shutter (RS) correction dataset, accompanied by a corresponding model for correcting RS frames in distorted video. Real distorted videos with corresponding ground truth were recorded simultaneously via a well-designed beam-splitter-based acquisition system. BS-RSC contains various motions of both camera and objects in dynamic scenes. | Provide a detailed description of the following dataset: BS-RSC |
CAVES | **CAVES** is the first large-scale dataset containing about 10k COVID-19 anti-vaccine tweets labelled into various specific anti-vaccine concerns in a multi-label setting. This is also the first multi-label classification dataset that provides explanations for each of the labels. Additionally, the dataset also provides class-wise summaries of all the tweets. | Provide a detailed description of the following dataset: CAVES |
D3 | DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the D3 Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. | Provide a detailed description of the following dataset: D3 |
WikiWiki | **WikiWiki** is a dataset for understanding entities and their place in a taxonomy of knowledge—their types. It consists of entities and passages from 10M Wikipedia articles linked to the Wikidata knowledge graph with 41K types. | Provide a detailed description of the following dataset: WikiWiki |
ARCTIC | ARCTIC is a dataset of free-form interactions of hands and articulated objects. ARCTIC has 1.2M images paired with accurate 3D meshes for both hands and for objects that move and deform over time. The dataset also provides hand-object contact information. | Provide a detailed description of the following dataset: ARCTIC |
MeSHup | Contains 1,342,667 full text articles in English, together with the associated MeSH labels and metadata, authors, and publication venues that are collected from the MEDLINE database. | Provide a detailed description of the following dataset: MeSHup |
M-Phasis | A corpus of 9k German and French user comments collected from migration-related news articles. It goes beyond the hate-neutral dichotomy and is instead annotated with 23 features, which in combination become descriptors of various types of speech, ranging from critical comments to implicit and explicit expressions of hate. The annotations are performed by 4 native speakers per language and achieve high (0.77) inter-annotator agreements. | Provide a detailed description of the following dataset: M-Phasis |
W-Oops | W-Oops consists of 2,100 unintentional human action videos, with 44 goal-directed and 30 unintentional video-level activity labels collected through human annotations. | Provide a detailed description of the following dataset: W-Oops |
SkillSpan | **SkillSpan** is a dataset for Skill Extraction (SE). It is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, the authors introduce SkillSpan, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. | Provide a detailed description of the following dataset: SkillSpan |
CoVERT | CoVERT is a fact-checked corpus of tweets with a focus on the domain of biomedicine and COVID-19-related (mis)information. The corpus consists of 300 tweets, each annotated with medical named entities and relations. It employs a novel crowdsourcing methodology to annotate all tweets with fact-checking labels and supporting evidence, which crowdworkers search for online. This methodology results in moderate inter-annotator agreement. | Provide a detailed description of the following dataset: CoVERT |
Monant Medical Misinformation | This dataset of medical misinformation was collected and is published by Kempelen Institute of Intelligent Technologies (KInIT). It consists of approx. 317k news articles and blog posts on medical topics published between January 1, 1998 and February 1, 2022 from a total of 207 reliable and unreliable sources. The dataset contains full-texts of the articles, their original source URL and other extracted metadata. If a source has a credibility score available (e.g., from Media Bias/Fact Check), it is also included in the form of annotation. Besides the articles, the dataset contains around 3.5k fact-checks and extracted verified medical claims with their unified veracity ratings published by fact-checking organisations such as Snopes or FullFact. Lastly and most importantly, the dataset contains 573 manually and more than 51k automatically labelled mappings between previously verified claims and the articles; mappings consist of two values: claim presence (i.e., whether a claim is contained in the given article) and article stance (i.e., whether the given article supports or rejects the claim or provides both sides of the argument).
The dataset is primarily intended to be used as a training and evaluation set for machine learning methods for claim presence detection and article stance classification, but it enables a range of other misinformation related tasks, such as misinformation characterization or analyses of misinformation spreading. | Provide a detailed description of the following dataset: Monant Medical Misinformation |
StyleGAN-Human | A large-scale human image dataset with over 230K samples capturing diverse poses and textures. | Provide a detailed description of the following dataset: StyleGAN-Human |
Two4Two | Two4Two is a library to create synthetic image data crafted for human evaluations of interpretable ML approaches (esp. image classification). The synthetic images show two abstract animals: Peaky (arms inwards) and Stretchy (arms outwards). They are similar-looking, abstract animals, made of eight blocks. The core functionality of this library is that one can correlate different parameters with an animal type to create bias in the data. | Provide a detailed description of the following dataset: Two4Two |
LitMind Dictionary | An open-source online generative dictionary that takes a word and context containing the word as input and automatically generates a definition as output. Incorporating state-of-the-art definition generation models, it supports not only Chinese and English, but also Chinese-English cross-lingual queries. Moreover, it has a user-friendly front-end design that can help users understand the query words quickly and easily. | Provide a detailed description of the following dataset: LitMind Dictionary |
OpenImage-O | OpenImage-O is manually annotated, comes with a naturally diverse distribution, and has a large scale. It is built to overcome several shortcomings of existing OOD benchmarks. OpenImage-O is filtered image-by-image from the test set of OpenImage-V3, which was collected from Flickr without a predefined list of class names or tags, leading to natural class statistics and avoiding an initial design bias. | Provide a detailed description of the following dataset: OpenImage-O |
IPM NEL | This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface forms; for example, this means linking "Paris" to the correct instance of a city with that name (e.g. Paris, France vs. Paris, Texas).
The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical artist, person, product, sports team, TV show, and other.
The file is tab-separated, in CoNLL format, with line breaks between tweets. The data preserves the tokenisation used in the Ritter datasets. PoS labels are not present for all tweets, but they are given where they could be found in the Ritter data. In cases where a URI could not be agreed upon, or was not present in DBpedia, there is a NIL. See the paper for a full description of the methodology. | Provide a detailed description of the following dataset: IPM NEL |
PANACEA | The peer-reviewed publication for this dataset was presented at the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) and can be accessed here: https://arxiv.org/abs/2205.02596. Please cite it when using the dataset.
This dataset contains a heterogeneous set of True and False COVID claims and online sources of information for each claim.
The claims have been obtained from online fact-checking sources, existing datasets and research challenges. It combines different data sources with different foci, thus enabling a comprehensive approach that combines different media (Twitter, Facebook, general websites, academia), information domains (health, scholar, media), information types (news, claims) and applications (information retrieval, veracity evaluation).
The processing of the claims included an extensive de-duplication process eliminating repeated or very similar claims. The dataset is presented in a LARGE and a SMALL version, accounting for different degrees of similarity between the remaining claims (excluding respectively claims with a 90% and 99% probability of being similar, as obtained through the MonoT5 model). The similarity of claims was analysed using BM25 (Robertson et al., 1995; Crestani et al., 1998; Robertson and Zaragoza, 2009) with MonoT5 re-ranking (Nogueira et al., 2020), and BERTScore (Zhang et al., 2019).
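As a purely illustrative sketch of the candidate-retrieval side of such de-duplication (BM25 only, via the `rank_bm25` package; the MonoT5 re-ranking that produces the 90%/99% similarity probabilities is omitted, and the raw-score `threshold` below is a hypothetical stand-in for those probabilities):

```python
from rank_bm25 import BM25Okapi

def dedup_claims(claims, threshold=20.0):
    """Greedily drop claims that BM25 scores as near-duplicates of kept ones."""
    kept, kept_tokens = [], []
    for claim in claims:
        tokens = claim.lower().split()
        if kept_tokens:
            index = BM25Okapi(kept_tokens)           # index over already-kept claims
            if index.get_scores(tokens).max() >= threshold:
                continue                             # too similar: treat as duplicate
        kept.append(claim)
        kept_tokens.append(tokens)
    return kept
```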
The processing of the content also involved removing claims making only a direct reference to existing content in other media (audio, video, photos); automatically obtained content not representing claims; and entries with claims or fact-checking sources in languages other than English.
The claims were analysed to identify types of claims that may be of particular interest, either for inclusion or exclusion depending on the type of analysis. The following types were identified: (1) Multimodal; (2) Social media references; (3) Claims including questions; (4) Claims including numerical content; (5) Named entities, including: PERSON − People, including fictional; ORGANIZATION − Companies, agencies, institutions, etc.; GPE − Countries, cities, states; FACILITY − Buildings, highways, etc. These entities have been detected using a RoBERTa base English model (Liu et al., 2019) trained on the OntoNotes Release 5.0 dataset (Weischedel et al., 2013) using Spacy.
The original labels for the claims have been reviewed and homogenised from the different criteria used by each original fact-checker into the final True and False labels.
The data sources used are:
- The CoronaVirusFacts/DatosCoronaVirus Alliance Database. https://www.poynter.org/ifcn-covid-19-misinformation/
- CoAID dataset (Cui and Lee, 2020) https://github.com/cuilimeng/CoAID
- MM-COVID (Li et al., 2020) https://github.com/bigheiniu/MM-COVID
- CovidLies (Hossain et al., 2020) https://github.com/ucinlp/covid19-data
- TREC Health Misinformation track https://trec-health-misinfo.github.io/
- TREC COVID challenge (Voorhees et al., 2021; Roberts et al., 2020) https://ir.nist.gov/covidSubmit/data.html
The LARGE dataset contains 5,143 claims (1,810 False and 3,333 True), and the SMALL version 1,709 claims (477 False and 1,232 True).
The entries in the dataset contain the following information:
- Claim. Text of the claim.
- Claim label. The labels are: False, and True.
- Claim source. The sources include mostly fact-checking websites, health information websites, health clinics, public institutions sites, and peer-reviewed scientific journals.
- Original information source. Information about which general information source was used to obtain the claim.
- Claim type. The different types, previously explained, are: Multimodal, Social Media, Questions, Numerical, and Named Entities.
Funding. This work was supported by the UK Engineering and Physical Sciences Research Council (grant no. EP/V048597/1, EP/T017112/1). ML and YH are supported by Turing AI Fellowships funded by the UK Research and Innovation (grant no. EP/V030302/1, EP/V020579/1).
References
- Arana-Catania M., Kochkina E., Zubiaga A., Liakata M., Procter R., He Y.. Natural Language Inference with Self-Attention for Veracity Assessment of Pandemic Claims. NAACL 2022 https://arxiv.org/abs/2205.02596
- Stephen E. Robertson, Steve Walker, Susan Jones, Micheline M. Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at TREC-3. NIST Special Publication SP, 109:109.
- Fabio Crestani, Mounia Lalmas, Cornelis J. Van Rijsbergen, and Iain Campbell. 1998. "Is this document relevant?... probably": A survey of probabilistic models in information retrieval. ACM Computing Surveys (CSUR), 30(4):528–552.
- Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc.
- Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pre-trained sequence-to-sequence model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 708–718.
- Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
- Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
- Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23.
- Limeng Cui and Dongwon Lee. 2020. Coaid: Covid-19 healthcare misinformation dataset. arXiv preprint arXiv:2006.00885.
- Yichuan Li, Bohan Jiang, Kai Shu, and Huan Liu. 2020. Mm-covid: A multilingual and multimodal data repository for combating covid-19 disinformation.
- Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. COVIDLies: Detecting COVID-19 misinformation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online. Association for Computational Linguistics.
- Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. Trec-covid: constructing a pandemic information retrieval test collection. In ACM SIGIR Forum, volume 54, pages 1–12. ACM New York, NY, USA. | Provide a detailed description of the following dataset: PANACEA |
Coding competition 2 | Dataset for machine learning based performance prediction in online coding competitions. | Provide a detailed description of the following dataset: Coding competition 2 |
Alphabet stock price | This dataset provides full historical daily stock price for Alphabet. There are 2 types of share class for Alphabet: GOOG and GOOGL. The two classes have very similar share prices. This dataset is for GOOGL. This dataset is provided by Finsheet, a world-class provider of [Excel stock price](https://finsheet.io/) and [stock price Google Sheets](https://finsheet.io/). Sourcing data directly from Finnhub, a well-known financial data provider, Finsheet's data quality is unmatched and the same data is being used by financial institutions all around the world. That is why Finsheet is recognized by the faculty and students at Columbia University as a top platform to get [Excel stock price](http://www.columbia.edu/~tmd2142/get-stock-price-excel-google-sheets.html) and [stock price Google Sheets](http://www.columbia.edu/~tmd2142/get-stock-price-excel-google-sheets.html). Long story short, when it comes to financial data in spreadsheets, Finsheet is your number one option. | Provide a detailed description of the following dataset: Alphabet stock price |
DCF Valuation template | This is the DCF template provided by ValueInvesting.io, a high performing [value investing](https://valueinvesting.io/) platform. Within this template, users also have access to other models such as Dividend Discount Model and Earnings Power Value. The focus of ValueInvesting.io is to provide accurate and reliable [intrinsic value](https://valueinvesting.io/) for all stocks globally using valuation models, especially [DCF](https://valueinvesting.io/) and [WACC](https://valueinvesting.io/). Users have experienced consistent returns by following the valuation results recommended by ValueInvesting.io. Furthermore, users can also view and edit all model assumptions when exporting the model to Excel or Google Sheets. Those are the reasons why they are listed in the number one position in the list of top 5 [best stock research websites](http://www.columbia.edu/~tmd2142/5-best-stock-research-websites.html) curated by students at Columbia University. | Provide a detailed description of the following dataset: DCF Valuation template |
CUHK Avenue | Avenue Dataset contains 16 training and 21 testing video clips. The videos are captured in CUHK campus avenue with 30652 (15328 training, 15324 testing) frames in total. | Provide a detailed description of the following dataset: CUHK Avenue |
UBnormal | UBnormal is a new supervised open-set benchmark composed of multiple virtual scenes for video anomaly detection. Unlike existing data sets, the data set introduces abnormal events annotated at the pixel level at training time, for the first time enabling the use of fully-supervised learning methods for abnormal event detection. To preserve the typical open-set formulation, the data set includes disjoint sets of anomaly types in the training and test collections of videos. | Provide a detailed description of the following dataset: UBnormal |
DRACO20K | The DRACO20K dataset is used for evaluating object canonicalization methods that estimate a canonical frame from a monocular input image.
Provides:
1. Mixed Reality Multi-view RGB-D images rendered from ShapeNet objects
2. Camera poses
3. NOCS maps
4. Semantic 2D keypoints with visibility
5. Object-centric mask | Provide a detailed description of the following dataset: DRACO20K |
QLEVR | Synthetic datasets have successfully been used to probe visual question-answering models for their reasoning abilities. [CLEVR](/dataset/clevr), for example, tests a range of visual reasoning abilities. The questions in CLEVR focus on comparisons of shapes, colors, and sizes, numerical reasoning, and existence claims. This paper introduces a minimally biased, diagnostic visual question-answering dataset, QLEVR, that goes beyond existential and numerical quantification and focuses on more complex quantifiers and their combinations, e.g., asking whether there are more than two red balls that are smaller than at least three blue balls in an image. We describe how the dataset was created and present a first evaluation of state-of-the-art visual question-answering models, showing that QLEVR presents a formidable challenge to current models.
Description and image from: [QLEVR Dataset Generation](https://github.com/zechenli03/QLEVR) | Provide a detailed description of the following dataset: QLEVR |
VocalSet | VocalSet is a singing voice dataset consisting of 10.1 hours of monophonic recorded audio of professional singers demonstrating both standard and extended vocal techniques on all 5 vowels. Existing singing voice datasets aim to capture a focused subset of singing voice characteristics, and generally consist of just a few singers. VocalSet contains recordings from 20 different singers (9 male, 11 female) and a range of voice types. VocalSet aims to improve the state of existing singing voice datasets and singing voice research by capturing not only a range of vowels, but also a diverse set of voices on many different vocal techniques, sung in contexts of scales, arpeggios, long tones, and excerpts. | Provide a detailed description of the following dataset: VocalSet |
ASAP | There are eight essay sets. Each of the sets of essays was generated from a single prompt. Selected essays range from an average length of 150 to 550 words per response. Some of the essays are dependent upon source information and others are not. All responses were written by students ranging in grade levels from Grade 7 to Grade 10. All essays were hand graded and were double-scored. Each of the eight data sets has its own unique characteristics. The variability is intended to test the limits of your scoring engine's capabilities. | Provide a detailed description of the following dataset: ASAP |
CelebA+masks | The COVID-19 pandemic raises the problem of adapting face recognition systems to the new reality, where people may wear surgical masks to cover their noses and mouths. Traditional data sets (e.g., CelebA, CASIA-WebFace) used for training these systems were released before the pandemic, so they now seem unsuited due to the lack of examples of people wearing masks. We propose a method for enhancing data sets containing faces without masks by creating synthetic masks and overlaying them on faces in the original images. Our method relies on Spark AR Studio, a developer program made by Facebook that is used to create Instagram face filters. In our approach, we use 9 masks of different colors, shapes and fabrics. We employ our method to generate masks for 196,254 (96.8%) of the images in the CelebA data set. | Provide a detailed description of the following dataset: CelebA+masks |
CASIA-WebFace+masks | The COVID-19 pandemic raises the problem of adapting face recognition systems to the new reality, where people may wear surgical masks to cover their noses and mouths. Traditional data sets (e.g., CelebA, CASIA-WebFace) used for training these systems were released before the pandemic, so they now seem unsuited due to the lack of examples of people wearing masks. We propose a method for enhancing data sets containing faces without masks by creating synthetic masks and overlaying them on faces in the original images. Our method relies on Spark AR Studio, a developer program made by Facebook that is used to create Instagram face filters. In our approach, we use 9 masks of different colors, shapes and fabrics. We employ our method to generate masked samples for 445,446 (90%) of the images in the CASIA-WebFace data set. | Provide a detailed description of the following dataset: CASIA-WebFace+masks |
VocalSound | VocalSound is a free dataset consisting of 21,024 crowdsourced recordings of laughter, sighs, coughs, throat clearing, sneezes, and sniffs from 3,365 unique subjects. The VocalSound dataset also contains meta-information such as speaker age, gender, native language, country, and health condition. | Provide a detailed description of the following dataset: VocalSound |
ReMASC | We introduce a new database of voice recordings with the goal of supporting research on vulnerabilities and protection of voice-controlled systems. In contrast to prior efforts, the proposed database contains genuine and replayed recordings of voice commands obtained in realistic usage scenarios and using state-of-the-art voice assistant development kits. Specifically, the database contains recordings from four systems (each with a different microphone array) in a variety of environmental conditions with different forms of background noise and relative positions between speaker and device. To the best of our knowledge, this is the first database that has been specifically designed for the protection of voice controlled systems (VCS) against various forms of replay attacks. | Provide a detailed description of the following dataset: ReMASC |
CLUES (Classifier Learning Using natural language ExplanationS) | **CLUES** is a benchmark for Classifier Learning Using natural language ExplanationS, consisting of a range of classification tasks over structured data along with natural language supervision in the form of explanations. CLUES consists of 36 real-world (CLUES-Real) and 144 synthetic (CLUES-Synthetic) classification tasks. It contains crowdsourced explanations describing real-world tasks from multiple teachers and programmatically generated explanations for the synthetic tasks.
The dataset has been created by a team of NLP researchers at UNC Chapel Hill.
Description from: [CLUES](https://clues-benchmark.github.io/) | Provide a detailed description of the following dataset: CLUES (Classifier Learning Using natural language ExplanationS) |
CVRPTW | Randomly sampled instances of the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW) for 20, 50 and 100 customer nodes.
* Coordinates sampled from the unit square
* Demands sampled as integers from the range [1, 9]
* Time windows sampled with:
  - ready times (TW start) as random integers in the time horizon T
  - due times (TW end) sampled from a Normal distribution
The dataset is used as a validation and test set to evaluate machine learning based solvers for the CVRPTW.
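A minimal sketch of how such an instance might be sampled, following the parameterization listed above (the Normal distribution's mean and scale for the due times are assumed values; the released sample code mentioned below is authoritative):

```python
import numpy as np

def sample_cvrptw_instance(n_customers=20, horizon=480, seed=None):
    """Sample one CVRPTW instance following the described scheme (illustrative)."""
    rng = np.random.default_rng(seed)
    coords = rng.random((n_customers + 1, 2))            # depot + customers, unit square
    demands = rng.integers(1, 10, size=n_customers)      # integer demands in [1, 9]
    ready = rng.integers(0, horizon, size=n_customers)   # TW start within horizon T
    # TW end: Normal around the ready time (loc/scale here are assumptions)
    width = np.clip(rng.normal(loc=60.0, scale=15.0, size=n_customers), 10.0, None)
    due = np.minimum(ready + width, horizon)
    return {"coords": coords, "demands": demands, "ready": ready, "due": due}
```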
The corresponding open source sample code can be used to sample corresponding training data or additional validation and test data, also for other problem sizes. | Provide a detailed description of the following dataset: CVRPTW |
QA2D | The Question to Declarative Sentence (QA2D) dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets. | Provide a detailed description of the following dataset: QA2D |
FlickrLogos-32 | Object detection benchmark for logo detection.
Images are natural scenes that may contain multiple objects, but each image contains exactly one logo. Logo detection and classification labels are provided. | Provide a detailed description of the following dataset: FlickrLogos-32 |
Nakdimon-test | Diacritized texts in Modern Hebrew, collected from eleven different sources.
Diacritized using Ktiv Male conventions. | Provide a detailed description of the following dataset: Nakdimon-test |
WebVidVQA3M | A dataset automatically generated using question generation neural models and alt-text video captions from the WebVid dataset, with 3M video-question-answer triplets. | Provide a detailed description of the following dataset: WebVidVQA3M |
NuScenes Occupancy Grids Dataset | Dynamic occupancy grids generated from NuScenes dataset. Dataset contains static environment and semantic labels, useful for long term prediction tasks. | Provide a detailed description of the following dataset: NuScenes Occupancy Grids Dataset |
OSCD | The Onera Satellite Change Detection dataset addresses the issue of detecting changes between satellite images from different dates.
It comprises 24 pairs of multispectral images taken from the Sentinel-2 satellites between 2015 and 2018. Locations are picked all over the world, in Brazil, USA, Europe, Middle-East and Asia. For each location, registered pairs of 13-band multispectral satellite images obtained by the Sentinel-2 satellites are provided. Images vary in spatial resolution between 10m, 20m and 60m.
Pixel-level change ground truth is provided for all 14 training and 10 test image pairs. The annotated changes focus on urban changes, such as new buildings or new roads. These data can be used for training and setting parameters of change detection algorithms. | Provide a detailed description of the following dataset: OSCD |
Twitter US Airline Sentiment | A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service"). The non-aggregated results (55,000 rows) are available for download. | Provide a detailed description of the following dataset: Twitter US Airline Sentiment |
State Traversal Observation Tokens | When arriving at each state, each observation token gets a coin toss to see whether it will appear in the output observation string. Numbers on the left are indices of observations, numbers on the right are indices of states. | Provide a detailed description of the following dataset: State Traversal Observation Tokens |
Twitter PoS VCB | The data comprises about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset. The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged jointly using the CMU ARK tagger and Ritter's T-POS tagger. A tweet is added to the dataset only when the outputs of both taggers are completely compatible over the whole tweet. | Provide a detailed description of the following dataset: Twitter PoS VCB |
Ritter PoS | PTB-tagged English Tweets | Provide a detailed description of the following dataset: Ritter PoS |
zulu-stance | This is a stance detection dataset in the Zulu language. The data was translated to Zulu by native Zulu speakers from English source texts.
Our paper aims at utilizing the progress made for English by transferring that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box, non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve results for stance detection in Zulu, the target language in this work, similar to those found for English. A natively-translated dataset is used for evaluation of domain transfer. | Provide a detailed description of the following dataset: zulu-stance |
nordic_langid | Automatic language identification is a challenging problem. Discriminating between closely related languages is especially difficult. This paper presents a machine learning approach for automatic language identification for the Nordic languages, which often suffer miscategorisation by existing state-of-the-art tools. Concretely, we focus on discrimination between six Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål), Faroese and Icelandic. | Provide a detailed description of the following dataset: nordic_langid |
bornholmsk_parallel | This dataset is parallel text for Bornholmsk and Danish. | Provide a detailed description of the following dataset: bornholmsk_parallel |
bajer_danish_misogyny | This is a high-quality dataset of posts sampled from social media and annotated for misogyny. Danish language. | Provide a detailed description of the following dataset: bajer_danish_misogyny |
SHAJ | This is an abusive/offensive language detection dataset for Albanian. The data is formatted following the OffensEval convention. Data is from Instagram and YouTube comments. | Provide a detailed description of the following dataset: SHAJ |
polstance | Political stance in Danish. Examples represent statements by politicians and are annotated for, against, or neutral to a given topic/article. | Provide a detailed description of the following dataset: polstance |
Animals-10 | It contains about 28K medium-quality animal images belonging to 10 categories: dog, cat, horse, spider, butterfly, chicken, sheep, cow, squirrel, and elephant.
All the images were collected from Google Images and have been checked by humans. There is some erroneous data to simulate real conditions (e.g., images taken by users of your app).
The main directory is divided into folders, one for each category. The image count per category varies from 2K to 5K. | Provide a detailed description of the following dataset: Animals-10 |
Pose Estimation Lunar Robot | ## Overview
**The goal:** using simulation data to train neural networks to estimate the pose of a rover's camera with respect to a known target object
**The mission context:**
A simulated lunar surface, with lunar landers and lunar rovers. To accomplish their resource-extraction mission, the rovers must dig, transport and deliver regolith to a processing plant. For each of these tasks, a central need is for rovers to accurately estimate relative pose, both between themselves and with the landers.
<img src="https://github.com/TeamL3/learned-pose-estimation/raw/main/misc/images/overview_low.png" alt="lunar surface overview" style="width:720px;"/>
**The dataset:**
- tf.data dataset ready for training (RGBD images and ground truth pose labels)
- five different scenarios for relative pose estimation, some easier, some harder!
<img src="https://github.com/TeamL3/learned-pose-estimation/raw/main/misc/images/dataset.png" alt="dataset samples" style="width:720px;"/>
**The code:**
Utilities for manipulating the dataset and calculating training metrics + example jupyter notebooks for data exploration and model training + more details on the dataset are available on [github](https://github.com/TeamL3/learned-pose-estimation) | Provide a detailed description of the following dataset: Pose Estimation Lunar Robot |
CEREBRUM-7T | Ultra-high-field MRI enables sub-millimetre-resolution imaging of the human brain, allowing researchers to disentangle complex functional circuits across different cortical depths. Segmentation, meant as the partition of MR brain images into multiple anatomical classes, is an essential step in many functional and structural neuroimaging studies. In this work, we design and test CEREBRUM-7T, an optimised end-to-end CNN architecture that segments a whole 7T T1w MRI brain volume at once, without the need to partition it into 2D or 3D tiles. Although deep learning (DL) methods are starting to emerge in the 3T literature, to the best of our knowledge CEREBRUM-7T is the first example of a DL architecture applied directly to 7T data. Training is performed in a weakly supervised fashion, since it exploits a ground truth (GT) containing errors. The resulting model is able to produce accurate multi-structure segmentation masks over six different classes in only a few seconds. In the experimental part, we show that the proposed solution outperforms the GT it was trained on in segmentation accuracy. For more details, please visit: https://rocknroll87q.github.io/cerebrum7t | Provide a detailed description of the following dataset: CEREBRUM-7T |
RGB-Stacking | RGB-Stacking is a benchmark for vision-based robotic manipulation. The robot is trained to learn how to grasp objects and balance them on top of one another.
Image source: [https://github.com/deepmind/rgb_stacking](https://github.com/deepmind/rgb_stacking) | Provide a detailed description of the following dataset: RGB-Stacking |
FM WILN | This dataset was created while conducting the field report related to this paper. It includes 18.6 km of autonomous navigation in a boreal forest. The wintertime meteorological conditions are documented in the paper.
The dataset consists of various [ROSbags](http://wiki.ros.org/rosbag), including all data recorded during the runs documented in the paper. | Provide a detailed description of the following dataset: FM WILN |
Google Speech Commands - Musan | This noisy speech test set is created from Google Speech Commands v2 [1] and the Musan dataset [2].
It can be downloaded here: https://zenodo.org/record/6066174#.Yn7NPJPMLyU
Specifically, we created this test set by mixing the speech in the Google Speech Commands v2 test set with random noise from the Musan dataset at different signal-to-noise ratios: -12.5, -10, 0, 10, 20, 30 and 40 decibels (dB).
The Google Speech Commands v2 dataset is under the Creative Commons BY 4.0 license. It can be downloaded at: http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
The Musan dataset is under the Attribution 4.0 International (CC BY 4.0) license. It can be downloaded at https://www.openslr.org/17/
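A minimal sketch of SNR-controlled mixing of the kind described above, assuming 1-D float waveforms at a shared sample rate (the function and parameter names are illustrative, not taken from the released code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add a noise clip to a speech clip so the mixture has the target SNR."""
    if len(noise) < len(speech):                     # loop the noise if it is too short
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10 * log10(speech_power / scaled_noise_power) == snr_db
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```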
Citations:
[1] Pete Warden, “Speech commands: A dataset for limited-vocabulary speech recognition,” arXiv preprint arXiv:1804.03209, 2018.
[2] David Snyder, Guoguo Chen, and Daniel Povey, “Musan: A music, speech, and noise corpus,” arXiv preprint arXiv:1510.08484, 2015. | Provide a detailed description of the following dataset: Google Speech Commands - Musan |
MODA dataset | MODA is a large open-source dataset of high quality, human-scored sleep spindles (5342 spindles, from 180 subjects) that was produced by the Massive Online Data Annotation project. Sleep spindles were detected as a consensus of a number of human-expert scorers. With a median number of 5 experts scoring every EEG segment, MODA offers sleep spindle annotations of a quality unseen in previous datasets.
The dataset was described and introduced in the following publication:
Lacourse, K., Yetton, B., Mednick, S. et al. Massive online data annotation, crowdsourcing to generate high quality sleep spindle annotations from EEG data. Sci Data 7, 190 (2020). https://doi.org/10.1038/s41597-020-0533-4 | Provide a detailed description of the following dataset: MODA dataset |
SuMe | Can language models read biomedical texts and explain the biomedical mechanisms discussed? In this work we introduce a biomedical mechanism summarization task. Biomedical studies often investigate the mechanisms behind how one entity (e.g., a protein or a chemical) affects another in a biological context. The abstracts of these publications often include a focused set of sentences that present relevant supporting statements regarding such relationships, associated experimental evidence, and a concluding sentence that summarizes the mechanism underlying the relationship. We leverage this structure and create a summarization task, where the input is a collection of sentences and the main entities in an abstract, and the output includes the relationship and a sentence that summarizes the mechanism. Using a small amount of manually labeled mechanism sentences, we train a mechanism sentence classifier to filter a large biomedical abstract collection and create a summarization dataset with 22k instances. We also introduce conclusion sentence generation as a pretraining task with 611k instances. We benchmark the performance of large bio-domain language models. We find that while the pretraining task helps improve performance, the best model produces acceptable mechanism outputs in only 32% of the instances, which shows the task presents significant challenges in biomedical language understanding and summarization. | Provide a detailed description of the following dataset: SuMe |
Echonet-Dynamic | Echocardiography, or cardiac ultrasound, is the most widely used and readily available imaging modality to assess cardiac function and structure. Combining portable instrumentation, rapid image acquisition, high temporal resolution, and without the risks of ionizing radiation, echocardiography is one of the most frequently utilized imaging studies in the United States and serves as the backbone of cardiovascular imaging. For diseases ranging from heart failure to valvular heart diseases, echocardiography is both necessary and sufficient to diagnose many cardiovascular diseases. In addition to our deep learning model, we introduce a new large video dataset of echocardiograms for computer vision research. The EchoNet-Dynamic database includes 10,030 labeled echocardiogram videos and human expert annotations (measurements, tracings, and calculations) to provide a baseline to study cardiac motion and chamber sizes. | Provide a detailed description of the following dataset: Echonet-Dynamic |
CLAMS | Targeted syntactic evaluation datasets in 5 languages: English, French, German, Russian, and Hebrew. Data are translated from the targeted syntactic evaluation data of Marvin & Linzen (2018): https://aclanthology.org/D18-1151/ . All stimuli focus on subject-verb agreement. | Provide a detailed description of the following dataset: CLAMS |
SSVC | The Synthetic SVC (SSVC) dataset comprises 12,000 images with respective bounding box annotations and detailed graph representations. This dataset enables the development of strong models for the interpretation of SVCs while skipping time-consuming dense data annotation. | Provide a detailed description of the following dataset: SSVC |
Fire and Smoke Dataset | This dataset is collected by DataCluster Labs, India. To download the full dataset or to submit a request for your new data collection needs, please drop a mail to: [sales@datacluster.ai](mailto:sales@datacluster.ai)
This dataset is an extremely challenging set of over 7000+ original Fire and Smoke images captured and crowdsourced from over 400+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at Datacluster.
### **Dataset Features**
- Dataset size : 7000+
- Captured by : Over 1000+ crowdsource contributors
- Resolution : 98% images HD and above (1920x1080 and above)
- Location : Captured across 400+ cities in India
- Diversity : Various lighting conditions like day and night, varied distances, viewpoints, etc.
- Device used : Captured using mobile phones in 2020-2021
- Usage : Fire and Smoke detection, Smart cameras, Fire and Smoke alarming system, etc.
### **Available Annotation formats**
COCO, YOLO, PASCAL-VOC, Tf-Record
*To download full datasets or to submit a request for your dataset needs, please drop a mail to sales@datacluster.ai. Visit [www.datacluster.ai](http://www.datacluster.in/) to know more.* | Provide a detailed description of the following dataset: Fire and Smoke Dataset |
CiteSum | CiteSum is a large-scale scientific extreme summarization benchmark. | Provide a detailed description of the following dataset: CiteSum |
Nakdimon-train | A collection of diacritized Hebrew text in a variety of registers and from different sources. | Provide a detailed description of the following dataset: Nakdimon-train |
FiNER-139 | FiNER-139 is comprised of 1.1M sentences annotated with eXtensive Business Reporting Language (XBRL) tags extracted from annual and quarterly reports of publicly-traded companies in the US. Unlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of 139 entity types. Another important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself. | Provide a detailed description of the following dataset: FiNER-139 |
AVCAffe | We introduce AVCAffe, the first Audio-Visual dataset consisting of Cognitive load and Affect attributes. We record AVCAffe by simulating remote work scenarios over a video-conferencing platform, where subjects collaborate to complete a number of cognitively engaging tasks. AVCAffe is the largest originally collected (not collected from the Internet) affective dataset in English language. We recruit 106 participants from 18 different countries of origin, spanning an age range of 18 to 57 years old, with a balanced male-female ratio. AVCAffe comprises a total of 108 hours of video, equivalent to more than 58,000 clips along with task-based self-reported ground truth labels for arousal, valence, and cognitive load attributes such as mental demand, temporal demand, effort, and a few others. We believe AVCAffe would be a challenging benchmark for the deep learning research community given the inherent difficulty of classifying affect and cognitive load in particular. Moreover, our dataset fills an existing timely gap by facilitating the creation of learning systems for better self-management of remote work meetings, and further study of hypotheses regarding the impact of remote work on cognitive load and affective states. | Provide a detailed description of the following dataset: AVCAffe |
MAG-Scholar-C | MAG-Scholar-C is constructed by Bojchevski et al. based on Microsoft Academic Graph (MAG), in which nodes refer to papers, edges represent citation relations among papers and features are bag-of-words of paper abstracts. | Provide a detailed description of the following dataset: MAG-Scholar-C |
HeriGraph | The dataset contains multi-modal features (visual and textual), pseudo-labels (on heritage values and attributes), and graph structures (with temporal, social, and spatial links), all constructed from User-Generated Content collected from the Flickr social media platform in three global cities containing UNESCO World Heritage properties (Amsterdam, Suzhou, Venice).
The motivation of the data collection in this project is to provide datasets that are both directly applicable as a test-bed for the ML community and theoretically informative for heritage and urban scholars drawing conclusions for planning decision-making. | Provide a detailed description of the following dataset: HeriGraph |
Natural sentences that contain *any* | We scraped the Gutenberg Project and a subset of English Wikipedia to obtain a list of sentences that contain *any*. Next, using a combination of regular-expression heuristics, we filtered the result to produce two sets of sentences (the second set underwent additional manual filtering):
* 3844 sentences with sentential negation and a plural object with *any* to the right of the verb;
* 330 sentences with *nobody* / *no one* as subject and a plural object with *any* to the right.
The first set was modified to substitute the negated verb with its non-negated version, so we contrast 3844 sentences with negation and 3844 affirmative ones (*neg* vs. *aff*). In the second dataset, we substituted *nobody* for *somebody* and *no one* for *someone*, to check the *some* vs. *no* contrast.
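For illustration, a minimal sketch of the kind of regular-expression selection involved might look like the following; the exact pattern here is an assumption, and the released script (referenced below) implements the actual heuristics:

```python
import re

# Hypothetical sketch: keep sentences with sentential negation followed by
# "any" + a plural noun. The real heuristics in the released script are
# more elaborate than this single pattern.
NEG_ANY = re.compile(r"(?:\bnot\b|n't\b|\bnever\b).*\bany\s+\w+s\b", re.IGNORECASE)

def select(sentences):
    return [s for s in sentences if NEG_ANY.search(s)]

print(select(["She did not see any roads.", "She saw a road."]))
# -> ['She did not see any roads.']
```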
You can use our script to find sentences with negation and *any* in any given English corpus:
`python dataset_preparation/select_sentences_from_real_text.py <corpus.txt>` | Provide a detailed description of the following dataset: Natural sentences that contain *any* |
Synthetic parallel sentences that contain *any* | We used the following procedure. First, we automatically identified the set of verbs and nouns to build our items from. To do so, we started with the *bert-base-uncased* vocabulary. We ran all non-subword lexical tokens through a SpaCy POS tagger. Further, we lemmatized the result using the Pattern library (https://pypi.org/project/Pattern/) and dropped duplicates. Then, we filtered out modal verbs, singularia tantum nouns, and some visible lemmatization mistakes. Finally, we filtered out non-transitive verbs to give the dataset a somewhat higher baseline of grammaticality.
We kept the top 100 nouns and top 100 verbs from the resulting lists -- these are the lexical entries we deal with. Then, we generated sentences with these words: we iterated over the 100 nouns in the subject and the object positions (excluding cases where the same noun appears in both positions) and over the 100 verbs. The procedure gave us 990k sentences like these:
* A girl crossed a road.
* A community hosted a game.
* An eye opened a fire.
* A record put an air.
Some are more natural, make more sense, and adhere to the verb's selectional restrictions better than others. To control for this, we ran the sentences through GPT-2 and assigned a perplexity to each candidate. Then we took the 20k sentences with the lowest perplexity (the most 'natural' ones) as the core of our synthetic dataset.
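A minimal sketch of this perplexity-based filtering step, assuming the HuggingFace *transformers* implementation of GPT-2 (the exact scoring setup used for the dataset may differ):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy over the predicted tokens.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

candidates = ["A girl crossed a road.", "An eye opened a fire."]
ranked = sorted(candidates, key=perplexity)  # most 'natural' first
```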
We tried to approximate the 'naturalness' of examples by a combination of measures, relying on insights from different models (GPT-2, BERT, corpus-based statistical insights into verb transitivity) at different stages of the dataset creation. Still, some sentences sound intuitively 'weird'. We don't see this as a problem, though: we will not rely directly on the naturalness of individual examples, but rather measure the effect of the NPI across the dataset. The number of examples will allow us to generalize across varying parts of the sentences, to make sure that the results can be attributed to the parts we are interested in: the items responsible for the monotonicity of the sentence. The quantity of test items is crucial for reproducing psycholinguistic experiments on LRMs: while in the former one sentence gives rise to a number of observations as different human subjects make a judgment, in the latter one test sentence yields only one observation. Here the procedures of psycholinguistic studies and LRM studies necessarily diverge.
With this in mind, we use the 20k sentences produced by the previous steps to build the parts of our synthetic dataset. Each of the sentences has a pluralized (no longer singular!) object in combination with *any*: any roads. The subject type varies across the datasets comprising our synthetic data.
Overall, sentences in all parts of our dataset vary in the type of context they instantiate (simple affirmative, negation, quantifiers of different monotonicity) -- but all sentences contain *any* in the object position in combination with a plural noun. We manipulate the presence or absence of *any* to measure how *any* interacts with different types of environments. | Provide a detailed description of the following dataset: Synthetic parallel sentences that contain *any* |
Simulated micro-Doppler Signatures | Simulated pulse-Doppler radar signatures for four classes of helicopter-like targets. The classes differ in the number of rotating blades each kind of target carries, so each class translates into a specific modulation pattern in the Doppler signature. Doppler signatures are a typical feature used to achieve radar target discrimination. This dataset was generated using a simple open-source MATLAB [simulation code](https://github.com/Blupblupblup/Doppler-Signatures-Generation), which can be easily modified to generate custom datasets with more classes and increased intra-class diversity.
The dataset can easily be used for supervised classification, out-of-distribution detection (near and far), unsupervised learning, and modulation pattern segmentation. The code includes the generation of an SPD representation for each signature, obtained by computing a covariance matrix, thus allowing for second-order-specific data processing (e.g., a Riemannian neural network or tangent PCA).
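As a sketch of what such an SPD representation amounts to, assuming a signature stored as a time × Doppler-bin array (the repository's MATLAB code is the authoritative implementation):

```python
import numpy as np

# Hypothetical sketch: turn a micro-Doppler signature (time x Doppler bins)
# into an SPD covariance matrix, the kind of representation that enables
# Riemannian methods or tangent PCA.
def signature_to_spd(signature: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    centered = signature - signature.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(signature.shape[0] - 1, 1)
    # Small diagonal loading guarantees strict positive definiteness.
    return cov + eps * np.eye(cov.shape[0])

sig = np.random.randn(128, 64)  # stand-in for a simulated signature
spd = signature_to_spd(sig)
assert np.all(np.linalg.eigvalsh(spd) > 0)
```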
[Dataset used in the paper](https://cloud.mbauw.eu/s/BPtk5HYkyBWAGLo)
[Dataset generation code](https://github.com/Blupblupblup/Doppler-Signatures-Generation) | Provide a detailed description of the following dataset: Simulated micro-Doppler Signatures |
Extended Minecraft Corpus dataset | The Minecraft Corpus dataset extended with builder utterance annotations. | Provide a detailed description of the following dataset: Extended Minecraft Corpus dataset |
E-KAR | The ability to recognize analogies is fundamental to human cognition. Existing benchmarks for testing word analogy do not reveal the underlying process of analogical reasoning in neural models.
Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 problems (in Chinese) and 1,251 (in English) sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate such explanations for every question and candidate answer. | Provide a detailed description of the following dataset: E-KAR |
Replication Data for: Investigating the concentration of High Yield Investment Programs in the United Kingdom | The dataset provides information about 450 high yield investment programs (HYIPs) collected between November 2020 and September 2021. This dataset was analyzed, and the results are discussed in the accompanying paper. | Provide a detailed description of the following dataset: Replication Data for: Investigating the concentration of High Yield Investment Programs in the United Kingdom |
Domestic Trash / Garbage Dataset | ### **This dataset was collected by Datacluster Labs. To download the full dataset or to submit a request for your new data collection needs, please drop a mail to: [sales@datacluster.ai](mailto:sales@datacluster.ai)**
This dataset is an extremely challenging set of 9,000+ original Trash/Garbage images captured and crowdsourced from 2,000+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.
### **Dataset Features**
- Dataset size : 9,000+ images
- Captured by : 2,000+ crowdsource contributors
- Resolution : 99.9% of images are HD or above (1920x1080 and above)
- Location : Captured across 500+ cities
- Diversity : Various lighting conditions (day, night), varied distances, different material viewpoints, etc.
- Device used : Captured using mobile phones in 2020-2022
- Usage : Trash detection, material classification, garbage segregation, trash segregation, etc.
### Available Annotation formats
COCO, YOLO, PASCAL-VOC, TFRecord
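For the YOLO variant, each image typically comes with a plain-text label file of normalized boxes; a minimal, hypothetical reader (the file name is an assumption) could look like:

```python
# Hypothetical sketch: parsing YOLO-format labels, where each line is
# "class_id x_center y_center width height", with coordinates normalized
# to [0, 1] relative to the image size. "image_0001.txt" is an assumed name.
def read_yolo_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes

for cls, xc, yc, w, h in read_yolo_labels("image_0001.txt"):
    print(cls, xc, yc, w, h)
```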
To download the full dataset or to submit a request for your dataset needs, please drop a mail to sales@datacluster.ai. Visit [www.datacluster.ai](http://www.datacluster.in/) to know more. | Provide a detailed description of the following dataset: Domestic Trash / Garbage Dataset |
RTMV | **RTMV** is a large-scale synthetic dataset for novel view synthesis consisting of ∼300k images rendered from nearly 2000 complex scenes using high-quality ray tracing at high resolution (1600 × 1600 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis, thus providing a large unified benchmark for both training and evaluation. Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
The dataset consists of scenes from four different environments, namely Google Scanned Objects, ABC, Bricks, and Amazon Berkeley. Each scene has 150 renders at 1600 × 1600 resolution.
Description adapted from: [http://www.cs.umd.edu/~mmeshry/projects/rtmv/](http://www.cs.umd.edu/~mmeshry/projects/rtmv/) | Provide a detailed description of the following dataset: RTMV |
Indian Traffic Sign Image Dataset | ### **This dataset was collected by Datacluster Labs. To download the full dataset or to submit a request for your new data collection needs, please drop a mail to: [sales@datacluster.ai](mailto:sales@datacluster.ai)**
This dataset is an extremely challenging set of 2,000+ original Indian Traffic Sign images captured and crowdsourced from 400+ urban and rural areas, where each image is **manually reviewed and verified** by computer vision professionals at DC Labs.
### **Dataset Features**
- Dataset size : 2,000+ images
- Captured by : 400+ crowdsource contributors
- Resolution : 100% of images are HD or above (1920x1080 and above)
- Location : Captured across 400+ cities in India
- Diversity : Various lighting conditions (day, night), varied distances, viewpoints, etc.
- Device used : Captured using mobile phones in 2020-2021
- Usage : Traffic sign detection, self-driving systems, traffic detection, sign detection, etc.
### **Available Annotation formats**
COCO, YOLO, PASCAL-VOC, TFRecord
To download the full dataset or to submit a request for your dataset needs, please drop a mail to sales@datacluster.ai. Visit [www.datacluster.ai](http://www.datacluster.in/) to know more. | Provide a detailed description of the following dataset: Indian Traffic Sign Image Dataset |
Jigsaw Toxic Comment Classification Dataset | You are provided with a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. The types of toxicity are:
- toxic
- severe_toxic
- obscene
- threat
- insult
- identity_hate
You must create a model which predicts a probability of each type of toxicity for each comment.
File descriptions
- train.csv - the training set; contains comments with their binary labels
- test.csv - the test set; you must predict the toxicity probabilities for these comments (to deter hand labeling, the test set contains some comments which are not included in scoring)
- sample_submission.csv - a sample submission file in the correct format
- test_labels.csv - labels for the test data; a value of -1 indicates the comment was not used for scoring (note: this file was added after the competition closed)
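For illustration only (not part of the official competition materials), a minimal multilabel baseline over these files might look like the sketch below; the `comment_text` column name and the modeling choices are assumptions:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

train = pd.read_csv("train.csv")  # assumed columns: id, comment_text, six binary labels
test = pd.read_csv("test.csv")

vec = TfidfVectorizer(max_features=50_000)
X_train = vec.fit_transform(train["comment_text"])
X_test = vec.transform(test["comment_text"])

# One independent binary classifier per toxicity type.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, train[LABELS].values)
probs = clf.predict_proba(X_test)  # one probability column per toxicity type
```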
Usage
The dataset is released under CC0, with the underlying comment text governed by Wikipedia's CC BY-SA 3.0. | Provide a detailed description of the following dataset: Jigsaw Toxic Comment Classification Dataset |
Image Description Sequences | A dataset of description sequences: sequences of expressions that together are meant to single out one image from an (imagined) set of other similar images. These sequences were produced in a monological setting, but with the instruction to imagine that they were provided to a partner who successively asked for more information (hence, "tell me more"). | Provide a detailed description of the following dataset: Image Description Sequences |