| prompts | metrics_response |
|---|---|
What metrics were used to measure the cDNP model in the Far Away in the Deep Space: Dense Nearest-Neighbor-Based Out-of-Distribution Detection paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the Mask2Anomaly model in the Unmasking Anomalies in Road-Scene Segmentation paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the RPL+CoroCL model in the Residual Pattern Learning for Pixel-wise Out-of-Distribution Detection in Semantic Segmentation paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the PEBAL model in the Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the Synboost model in the Pixel-wise Anomaly Detection in Complex Driving Scenes paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the SML model in the Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the SynthCP model in the Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation paper on the Road Anomaly dataset? | AP, FPR95 |
What metrics were used to measure the CCD model in the Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images paper on the LAG dataset? | AUC |
What metrics were used to measure the IGD model in the Deep One-Class Classification via Interpolated Gaussian Descriptor paper on the LAG dataset? | AUC |
What metrics were used to measure the PANDA model in the PANDA: Predicting the change in proteins binding affinity upon mutations using sequence information paper on the LAG dataset? | AUC |
What metrics were used to measure the f-AnoGAN model in the f-AnoGAN: Fast Unsupervised Anomaly Detection with Generative Adversarial Networks paper on the LAG dataset? | AUC |
What metrics were used to measure the PaDiM model in the PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization paper on the LAG dataset? | AUC |
What metrics were used to measure the DMAD model in the Diversity-Measurable Anomaly Detection paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the STPT model in the Spatio-temporal predictive tasks for abnormal event detection in videos paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the Background-Agnostic model in the A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the ASTNet model in the Attention-based residual autoencoder for video anomaly detection paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the Two-stream model in the Context Recovery and Knowledge Retrieval: A Novel Two-Stream Framework for Video Anomaly Detection paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the FastAno model in the FastAno: Fast Anomaly Detection via Spatio-temporal Patch Transformation paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the ConvVQ model in the Diversity-Measurable Anomaly Detection paper on the UCSD Ped2 dataset? | AUC, FPS |
What metrics were used to measure the Random Forest model in the Unsupervised Anomaly Detection for Auditing Data and Impact of Categorical Encodings paper on the Vehicle Claims dataset? | AUC |
What metrics were used to measure the Gradient Boosting model in the Unsupervised Anomaly Detection for Auditing Data and Impact of Categorical Encodings paper on the Vehicle Claims dataset? | AUC |
What metrics were used to measure the DIF model in the Deep Isolation Forest for Anomaly Detection paper on the NB15-Backdoor dataset? | AUC |
What metrics were used to measure the FABLE model in the FABLE : Fabric Anomaly Detection Automation Process paper on the MVTec AD Textures Domain Generalization dataset? | Detection AUROC |
What metrics were used to measure the DGTSAD model in the Domain-Generalized Textured Surface Anomaly Detection paper on the MVTec AD Textures Domain Generalization dataset? | Detection AUROC |
What metrics were used to measure the EISNet+ model in the Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization paper on the MVTec AD Textures Domain Generalization dataset? | Detection AUROC |
What metrics were used to measure the RCALAD model in the Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN paper on the MIT-BIH Arrhythmia Database dataset? | F1 score |
What metrics were used to measure the OGNET model in the Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm paper on the MNIST-test dataset? | F1 score |
What metrics were used to measure the BERT pretrained on MIMIC-III model in the RadQA: A Question Answering Dataset to Improve Comprehension of Radiology Reports paper on the RadQA dataset? | Answer F1 |
What metrics were used to measure the Rational Reasoner / IDOL model in the IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the AMR-LE-Ensemble model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the MERIt (MERIt-deberta-v2-xxlarge) model in the MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the MERIt-deberta-v2-xxlarge deberta.v2.xxlarge.path.override_True.norm_1.1.0.w2.A100.cp200.s42 model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the Knowledge model model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the DeBERTa-v2-xxlarge-AMR-LE-Contraposition model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the LReasoner ensemble model in the Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text paper on the ReClor dataset? | Test |
What metrics were used to measure the ELECTRA and ALBERT model in the Answer Uncertainty and Unanswerability in Multiple-Choice Machine Reading Comprehension paper on the ReClor dataset? | Test |
What metrics were used to measure the WWZ model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the xlnet-large-uncased [extended data] model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the albert_4gpus_bs2_DACE model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the Tournament2 model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the MERIt + Structure Decouple (roberta-large) model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the RoBERTa-single model in the Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the MERIT w/ reasoning aware model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the ALBERT-XXLarge-V2 model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the VDGN(RoBERTa single model) model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the ro_DA_CE_v100_224 model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the qmcurtis model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the AdaLoGN model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the Zhangjie666 model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the alsace model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the RoBERTa-single model in the Fact-driven Logical Reasoning for Machine Reading Comprehension paper on the ReClor dataset? | Test |
What metrics were used to measure the NAACL 2021 model in the DAGN: Discourse-Aware Graph Network for Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the Roberta Large GCN(dev: 0.63) model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the xlnet_l model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the elBERto model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the 5BitClan model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the SimChea model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the XLNet-large model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the RoBERTa-large model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the XLNet-base model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the BERT-large+MNLI model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the BERT-large model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the RoBERTa-base model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the BERT-base model in the ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning paper on the ReClor dataset? | Test |
What metrics were used to measure the Gariscat model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the BachTE model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the TwistedFate model in the paper on the ReClor dataset? | Test |
What metrics were used to measure the RoBERTa-Large model in the Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension paper on the AdversarialQA dataset? | Overall: F1, D(BiDAF): F1, D(BERT): F1, D(RoBERTa): F1 |
What metrics were used to measure the BERT-Large model in the Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension paper on the AdversarialQA dataset? | Overall: F1, D(BiDAF): F1, D(BERT): F1, D(RoBERTa): F1 |
What metrics were used to measure the BiDAF model in the Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension paper on the AdversarialQA dataset? | Overall: F1, D(BiDAF): F1, D(BERT): F1, D(RoBERTa): F1 |
What metrics were used to measure the BERT model in the Predicting Subjective Features of Questions of QA Websites using BERT paper on the CrowdSource QA dataset? | MSE |
What metrics were used to measure the Golden Transformer model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the ruRoberta-large finetune model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the ruT5-large-finetune model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the ruT5-base-finetune model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the ruBert-large finetune model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the ruBert-base finetune model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3Large model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the RuBERT plain model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3Medium model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the RuBERT conversational model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3Small model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the SBERT_Large model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the Multilingual Bert model in the paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the MuSeRC dataset? | Average F1, EM |
What metrics were used to measure the ALBERT (Ensemble) model in the Improving Machine Reading Comprehension with Single-choice Decision and Transfer Learning paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the Megatron-BERT (ensemble) model in the Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the ALBERTxxlarge+DUMA(ensemble) model in the DUMA: Reading Comprehension with Transposition Thinking paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the Megatron-BERT model in the Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the DeBERTalarge model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the B10-10-10 model in the Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the RACE dataset? | Accuracy, Accuracy (Middle), Accuracy (High) |