| prompts | metrics_response |
|---|---|
What metrics were used to measure the Win CPD algorithm (Mahalanobis metric) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the TEP dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the WinEnsemble CPDE algorithm (WeightedSum+MinAbs) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the TEP dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the LSTMCaps model in the Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the BinSeg CPD algorithm (Mahalanobis metric) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the OptEnsemble CPDE algorithm (WeightedSum+Rank) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the Opt CPD algorithm (Mahalanobis metric) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the WinEnsemble CPDE algorithm (Sum+MinAbs) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the Win CPD algorithm (l1 metric) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the BinSegEnsemble CPDE algorithm (WeightedSum+Rank) model in the Unsupervised Offline Changepoint Detection Ensembles paper on the SKAB dataset? | NAB (standard), NAB (lowFP), NAB (lowFN) |
What metrics were used to measure the DiscoBox (ResNet-50) model in the DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision paper on the COCO 2017 val dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the BBTP (ResNet-101) model in the Bounding Box Tightness Prior for Weakly Supervised Image Segmentation paper on the COCO 2017 val dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the DiscoBox (ResNeXt-101-DCN-FPN) model in the DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the DiscoBox (ResNet-101-DCN-FPN) model in the DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the BoxInst (ResNet-101-DCN-BiFPN) model in the BoxInst: High-Performance Instance Segmentation with Box Annotations paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the BoxInst (ResNet-101-BiFPN) model in the BoxInst: High-Performance Instance Segmentation with Box Annotations paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the BoxInst (ResNet-101-FPN) model in the BoxInst: High-Performance Instance Segmentation with Box Annotations paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the BoxInst (ResNet-50-FPN) model in the BoxInst: High-Performance Instance Segmentation with Box Annotations paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the DiscoBox (ResNet-50-FPN) model in the DiscoBox: Weakly Supervised Instance Segmentation and Semantic Correspondence from Box Supervision paper on the COCO test-dev dataset? | AP, AP@50, AP@75, AP@S, AP@M, AP@L |
What metrics were used to measure the BBAM model in the BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation paper on the PASCAL VOC 2012 val dataset? | Average Best Overlap, mAP@0.25, mAP@0.5, mAP@0.75 |
What metrics were used to measure the LIID model in the Leveraging Instance-, Image- and Dataset-Level Information for Weakly Supervised Instance Segmentation paper on the PASCAL VOC 2012 val dataset? | Average Best Overlap, mAP@0.25, mAP@0.5, mAP@0.75 |
What metrics were used to measure the WSIS_CL model in the Weakly Supervised Instance Segmentation by Deep Community Learning paper on the PASCAL VOC 2012 val dataset? | Average Best Overlap, mAP@0.25, mAP@0.5, mAP@0.75 |
What metrics were used to measure the CIM + Mask R-CNN model in the Complete Instances Mining for Weakly Supervised Instance Segmentation paper on the PASCAL VOC 2012 val dataset? | Average Best Overlap, mAP@0.25, mAP@0.5, mAP@0.75 |
What metrics were used to measure the BESTIE (point label, proposal-free) model in the Beyond Semantic to Instance Segmentation: Weakly-Supervised Instance Segmentation via Semantic Knowledge Transfer and Self-Refinement paper on the PASCAL VOC 2012 val dataset? | Average Best Overlap, mAP@0.25, mAP@0.5, mAP@0.75 |
What metrics were used to measure the BESTIE (image-level label, proposal-free) model in the Beyond Semantic to Instance Segmentation: Weakly-Supervised Instance Segmentation via Semantic Knowledge Transfer and Self-Refinement paper on the PASCAL VOC 2012 val dataset? | Average Best Overlap, mAP@0.25, mAP@0.5, mAP@0.75 |
What metrics were used to measure the ColBERT model in the ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for Computational Humor paper on the 200k Short Texts for Humor Detection dataset? | F1-score |
What metrics were used to measure the XLNet Large Cased model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the 200k Short Texts for Humor Detection dataset? | F1-score |
What metrics were used to measure the Multinomial NB model in the ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for Computational Humor paper on the 200k Short Texts for Humor Detection dataset? | F1-score |
What metrics were used to measure the SVM model in the ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for Computational Humor paper on the 200k Short Texts for Humor Detection dataset? | F1-score |
What metrics were used to measure the XGBoost model in the XGBoost: A Scalable Tree Boosting System paper on the 200k Short Texts for Humor Detection dataset? | F1-score |
What metrics were used to measure the Decision Tree model in the ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for Computational Humor paper on the 200k Short Texts for Humor Detection dataset? | F1-score |
What metrics were used to measure the simple model in the A Simple Baseline for Audio-Visual Scene-Aware Dialog paper on the AVSD dataset? | CIDEr |
What metrics were used to measure the CNN+FCN model in the Human Intracranial EEG Quantitative Analysis and Automatic Feature Learning for Epileptic Seizure Prediction paper on the Melbourne University Seizure Prediction dataset? | AUC |
What metrics were used to measure the CNN model in the Convolutional Neural Networks for Epileptic Seizure Prediction paper on the Melbourne University Seizure Prediction dataset? | AUC |
What metrics were used to measure the XLM-R + TDA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the ItaCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the XLM-R model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the ItaCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the It-BERT (pre-trained) + TDA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the ItaCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the mBERT model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the ItaCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the En-BERT + TDA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the CoLA Dev dataset? | Accuracy, MCC |
What metrics were used to measure the XLM-R (pre-trained) + TDA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the CoLA Dev dataset? | Accuracy, MCC |
What metrics were used to measure the DeBERTa (large) model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the CoLA Dev dataset? | Accuracy, MCC |
What metrics were used to measure the TinyBERT (M=6; d'=768; d'_i=3072) model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the CoLA Dev dataset? | Accuracy, MCC |
What metrics were used to measure the Synthesizer (R+V) model in the Synthesizer: Rethinking Self-Attention in Transformer Models paper on the CoLA Dev dataset? | Accuracy, MCC |
What metrics were used to measure the En-BERT (pre-trained) + TDA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the CoLA Dev dataset? | Accuracy, MCC |
What metrics were used to measure the En-BERT + TDA + PCA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the BERT+TDA model in the Can BERT eat RuCoLA? Topological Data Analysis to Explain paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the RoBERTa+TDA model in the Can BERT eat RuCoLA? Topological Data Analysis to Explain paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the deberta-v3-base+tasksource model in the tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the En-BERT + TDA model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the FNet-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the StructBERTRoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the FLOATER-large model in the Learning to Encode Position for Transformer with Continuous Dynamical Model paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the Vector-wise model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the MT-DNN model in the Multi-Task Deep Neural Networks for Natural Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the ELECTRA model in the paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the MLM+ del-span+ reorder model in the CLEAR: Contrastive Learning for Sentence Representation paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the data2vec model in the data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the RealFormer model in the RealFormer: Transformer Likes Residual Attention paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the Charformer-Tall model in the Charformer: Fast Character Transformers via Gradient-based Subword Tokenization paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the TinyBERT model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the LM-CPPF RoBERTa-base model in the LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the RemBERT model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the CoLA dataset? | Accuracy, MCC |
What metrics were used to measure the Ru-RoBERTa+TDA model in the Can BERT eat RuCoLA? Topological Data Analysis to Explain paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the ruRoBERTa model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the Ru-BERT+TDA model in the Can BERT eat RuCoLA? Topological Data Analysis to Explain paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the RemBERT model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the ruBERT model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the ruGPT-3 model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the ruT5 model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the mBERT model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the XLM-R model in the RuCoLA: Russian Corpus of Linguistic Acceptability paper on the RuCoLA dataset? | MCC, Accuracy |
What metrics were used to measure the Sw-BERT + H0M model in the Acceptability Judgements via Examining the Topology of Attention Maps paper on the DaLAJ dataset? | Accuracy, MCC |
What metrics were used to measure the GAP model in the GAP: Generalizable Approximate Graph Partitioning Framework paper on the custom dataset? | All |
What metrics were used to measure the SOM-VAE-prob model in the SOM-VAE: Interpretable Discrete Representation Learning on Time Series paper on the eICU Collaborative Research Database dataset? | NMI (physiology_6_hours), NMI (physiology_12_hours), NMI (physiology_24_hours) |
What metrics were used to measure the k-means model in the paper on the eICU Collaborative Research Database dataset? | NMI (physiology_6_hours), NMI (physiology_12_hours), NMI (physiology_24_hours) |
What metrics were used to measure the SOM-VAE model in the SOM-VAE: Interpretable Discrete Representation Learning on Time Series paper on the eICU Collaborative Research Database dataset? | NMI (physiology_6_hours), NMI (physiology_12_hours), NMI (physiology_24_hours) |
What metrics were used to measure the ResNet50-2.3 GFLOPs model in the Pruning Filters for Efficient ConvNets paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |
What metrics were used to measure the ResNet50-1.5 GFLOPs model in the Pruning Filters for Efficient ConvNets paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |
What metrics were used to measure the ResNet50 2.5 GFLOPS model in the Knapsack Pruning with Inner Distillation paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |
What metrics were used to measure the ResNet50 2.0 GFLOPS model in the Knapsack Pruning with Inner Distillation paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |
What metrics were used to measure the ResNet50-3G FLOPs model in the EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |
What metrics were used to measure the ResNet50-2G FLOPs model in the EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |
What metrics were used to measure the ResNet50-1G FLOPs model in the Pruning Filters for Efficient ConvNets paper on the ImageNet dataset? | Accuracy, GFLOPs, MParams |