| prompts | metrics_response |
|---|---|
What metrics were used to measure the FedASAM + SWA model in the Improving Generalization in Federated Learning by Seeking Flat Minima paper on the CIFAR-100 (alpha=0.5, 5 clients per round) dataset? | ACC@1-100Clients |
What metrics were used to measure the FedSAM + SWA model in the Improving Generalization in Federated Learning by Seeking Flat Minima paper on the CIFAR-100 (alpha=0.5, 5 clients per round) dataset? | ACC@1-100Clients |
What metrics were used to measure the FedASAM model in the Improving Generalization in Federated Learning by Seeking Flat Minima paper on the CIFAR-100 (alpha=0.5, 5 clients per round) dataset? | ACC@1-100Clients |
What metrics were used to measure the FedSAM model in the Improving Generalization in Federated Learning by Seeking Flat Minima paper on the CIFAR-100 (alpha=0.5, 5 clients per round) dataset? | ACC@1-100Clients |
What metrics were used to measure the FedAvg model in the Improving Generalization in Federated Learning by Seeking Flat Minima paper on the CIFAR-100 (alpha=0.5, 5 clients per round) dataset? | ACC@1-100Clients |
What metrics were used to measure the DCF model in the Deep convolutional forest: a dynamic deep ensemble approach for spam detection in text paper on the SMS Spam Collection Data Set dataset? | Accuracy |
What metrics were used to measure the RUN+BERT model in the Incomplete Utterance Rewriting as Semantic Segmentation paper on the Multi-Rewrite dataset? | Rewriting F3, BLEU-1, BLEU-2, ROUGE-1, ROUGE-2, Rewriting F1, Rewriting F2 |
What metrics were used to measure the SARG (n_beam=5) model in the SARG: A Novel Semi Autoregressive Generator for Multi-turn Incomplete Utterance Restoration paper on the Multi-Rewrite dataset? | Rewriting F3, BLEU-1, BLEU-2, ROUGE-1, ROUGE-2, Rewriting F1, Rewriting F2 |
What metrics were used to measure the SARG (greedy) model in the SARG: A Novel Semi Autoregressive Generator for Multi-turn Incomplete Utterance Restoration paper on the Multi-Rewrite dataset? | Rewriting F3, BLEU-1, BLEU-2, ROUGE-1, ROUGE-2, Rewriting F1, Rewriting F2 |
What metrics were used to measure the SARG model in the SARG: A Novel Semi Autoregressive Generator for Multi-turn Incomplete Utterance Restoration paper on the CANARD dataset? | BLEU |
What metrics were used to measure the RUN+BERT model in the Incomplete Utterance Rewriting as Semantic Segmentation paper on the Rewrite dataset? | ROUGE-L |
What metrics were used to measure the DrRepair + BIFI model in the Break-It-Fix-It: Unsupervised Learning for Program Repair paper on the DeepFix dataset? | Average Success Rate |
What metrics were used to measure the DrRepair model in the Graph-based, Self-Supervised Program Repair from Diagnostic Feedback paper on the DeepFix dataset? | Average Success Rate |
What metrics were used to measure the SampleFix model in the SampleFix: Learning to Generate Functionally Diverse Fixes paper on the DeepFix dataset? | Average Success Rate |
What metrics were used to measure the RLAssist model in the Deep Reinforcement Learning for Programming Language Correction paper on the DeepFix dataset? | Average Success Rate |
What metrics were used to measure the Transformer + BIFI model in the Break-It-Fix-It: Unsupervised Learning for Program Repair paper on the GitHub-Python dataset? | Accuracy (%) |
What metrics were used to measure the Transformer model in the Break-It-Fix-It: Unsupervised Learning for Program Repair paper on the GitHub-Python dataset? | Accuracy (%) |
What metrics were used to measure the TFix model in the TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer paper on the TFix's Code Patches Data dataset? | Error Removal, Exact Match |
What metrics were used to measure the Robust autoregressive hidden semi-Markov model model in the Deep Neural Dynamic Bayesian Networks applied to EEG sleep spindles modeling paper on the DREAMS sleep spindles dataset? | MCC |
What metrics were used to measure the TOKOFOU model in the Fighting the COVID-19 Infodemic with a Holistic BERT Ensemble paper on the NLP4IF-2021--Fighting the COVID-19 Infodemic dataset? | Average F1 |
What metrics were used to measure the XLMft UDA model in the Bridging the domain gap in cross-lingual document classification paper on the MLDoc Zero-Shot English-to-Spanish dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-Spanish dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-Spanish dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Spanish dataset? | Accuracy |
What metrics were used to measure the BiLSTM (UN) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Spanish dataset? | Accuracy |
What metrics were used to measure the BiLSTM (Europarl) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Spanish dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-Italian dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-Italian dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Italian dataset? | Accuracy |
What metrics were used to measure the BiLSTM (Europarl) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Italian dataset? | Accuracy |
What metrics were used to measure the XLMft UDA model in the Bridging the domain gap in cross-lingual document classification paper on the MLDoc Zero-Shot English-to-Russian dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-Russian dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-Russian dataset? | Accuracy |
What metrics were used to measure the BiLSTM (UN) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Russian dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Russian dataset? | Accuracy |
What metrics were used to measure the XLMft UDA model in the Bridging the domain gap in cross-lingual document classification paper on the MLDoc Zero-Shot English-to-French dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-French dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-French dataset? | Accuracy |
What metrics were used to measure the BiLSTM (UN) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-French dataset? | Accuracy |
What metrics were used to measure the BiLSTM (Europarl) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-French dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-French dataset? | Accuracy |
What metrics were used to measure the Biinclusion (Euro500kReuters) model in the Leveraging Monolingual Data for Crosslingual Compositional Word Representations paper on the Reuters RCV1/RCV2 German-to-English dataset? | Accuracy |
What metrics were used to measure the Bi+ model in the Multilingual Models for Compositional Distributed Semantics paper on the Reuters RCV1/RCV2 German-to-English dataset? | Accuracy |
What metrics were used to measure the biCVM+ model in the Multilingual Distributed Representations without Word Alignment paper on the Reuters RCV1/RCV2 German-to-English dataset? | Accuracy |
What metrics were used to measure the BiLSTM (Europarl) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot German-to-French dataset? | Accuracy |
What metrics were used to measure the Biinclusion (Euro500kReuters) model in the Leveraging Monolingual Data for Crosslingual Compositional Word Representations paper on the Reuters RCV1/RCV2 English-to-German dataset? | Accuracy |
What metrics were used to measure the Bi+ model in the Multilingual Models for Compositional Distributed Semantics paper on the Reuters RCV1/RCV2 English-to-German dataset? | Accuracy |
What metrics were used to measure the biCVM+ model in the Multilingual Distributed Representations without Word Alignment paper on the Reuters RCV1/RCV2 English-to-German dataset? | Accuracy |
What metrics were used to measure the XLMft UDA model in the Bridging the domain gap in cross-lingual document classification paper on the MLDoc Zero-Shot English-to-German dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-German dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-German dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-German dataset? | Accuracy |
What metrics were used to measure the BiLSTM (Europarl) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-German dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-Japanese dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Japanese dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-Japanese dataset? | Accuracy |
What metrics were used to measure the XLMft UDA model in the Bridging the domain gap in cross-lingual document classification paper on the MLDoc Zero-Shot English-to-Chinese dataset? | Accuracy |
What metrics were used to measure the MultiFiT, pseudo model in the MultiFiT: Efficient Multi-lingual Language Model Fine-tuning paper on the MLDoc Zero-Shot English-to-Chinese dataset? | Accuracy |
What metrics were used to measure the MultiCCA + CNN model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Chinese dataset? | Accuracy |
What metrics were used to measure the BiLSTM (UN) model in the A Corpus for Multilingual Document Classification in Eight Languages paper on the MLDoc Zero-Shot English-to-Chinese dataset? | Accuracy |
What metrics were used to measure the Massively Multilingual Sentence Embeddings model in the Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond paper on the MLDoc Zero-Shot English-to-Chinese dataset? | Accuracy |
What metrics were used to measure the YTMT-UCT model in the Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation paper on the Real20 dataset? | PSNR, SSIM |
What metrics were used to measure the ERRNet model in the Single Image Reflection Removal Exploiting Misaligned Training Data and Network Enhancements paper on the Real20 dataset? | PSNR, SSIM |
What metrics were used to measure the IBCLN model in the Single Image Reflection Removal through Cascaded Refinement paper on the Real20 dataset? | PSNR, SSIM |
What metrics were used to measure the YTMT-UCS model in the Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation paper on the Nature dataset? | PSNR, SSIM |
What metrics were used to measure the IBCLN model in the Single Image Reflection Removal through Cascaded Refinement paper on the Nature dataset? | PSNR, SSIM |
What metrics were used to measure the YTMT-UCT model in the Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation paper on the SIR^2(Wild) dataset? | PSNR, SSIM |
What metrics were used to measure the IBCLN model in the Single Image Reflection Removal through Cascaded Refinement paper on the SIR^2(Wild) dataset? | PSNR, SSIM |
What metrics were used to measure the ERRNet model in the Single Image Reflection Removal Exploiting Misaligned Training Data and Network Enhancements paper on the SIR^2(Wild) dataset? | PSNR, SSIM |
What metrics were used to measure the IBCLN model in the Single Image Reflection Removal through Cascaded Refinement paper on the SIR^2(Postcard) dataset? | PSNR, SSIM |
What metrics were used to measure the YTMT-UCT model in the Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation paper on the SIR^2(Postcard) dataset? | PSNR, SSIM |
What metrics were used to measure the ERRNet model in the Single Image Reflection Removal Exploiting Misaligned Training Data and Network Enhancements paper on the SIR^2(Postcard) dataset? | PSNR, SSIM |
What metrics were used to measure the YTMT-UCT model in the Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation paper on the SIR^2(Objects) dataset? | PSNR, SSIM |
What metrics were used to measure the ERRNet model in the Single Image Reflection Removal Exploiting Misaligned Training Data and Network Enhancements paper on the SIR^2(Objects) dataset? | PSNR, SSIM |
What metrics were used to measure the IBCLN model in the Single Image Reflection Removal through Cascaded Refinement paper on the SIR^2(Objects) dataset? | PSNR, SSIM |
What metrics were used to measure the TIFUKNN model in the Modeling Personalized Item Frequency Information for Next-basket Recommendation paper on the TaFeng dataset? | Recall@10, nDCG@10 |
What metrics were used to measure the TIFUKNN model in the Modeling Personalized Item Frequency Information for Next-basket Recommendation paper on the Instacart dataset? | Recall@10, nDCG@10 |
What metrics were used to measure the Bert model in the Extracting Food Substitutes From Food Diary via Distributional Similarity paper on the Oktoberfest Food Dataset dataset? | 10 fold Cross validation |
What metrics were used to measure the AASIST model in the AASIST: Audio Anti-Spoofing using Integrated Spectro-Temporal Graph Attention Networks paper on the ASVspoof 2019 - LA dataset? | EER, min t-dcf |
What metrics were used to measure the LFCC&Face+SE-DenseNet+A-softmax model in the Cross-modal information fusion for voice spoofing detection paper on the ASVspoof 2019 - LA dataset? | EER, min t-dcf |
What metrics were used to measure the FastAudio model in the FastAudio: A Learnable Audio Front-End for Spoof Speech Detection paper on the ASVspoof2019 dataset? | EER |
What metrics were used to measure the CQT&Face+SE-Res2Net+log-softmax model in the Cross-modal information fusion for voice spoofing detection paper on the ASVspoof 2019 - PA dataset? | EER, min t-dcf |
What metrics were used to measure the three-step-original model in the Towards Automatically Extracting UML Class Diagrams from Natural Language Specifications paper on the UML Classes With Specs dataset? | Exact Match |
What metrics were used to measure the LMPT(ViT-B/16) model in the LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the CLIP(ViT-B/16) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the LMPT(ResNet-50) model in the LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the LTML(ResNet-50) model in the Long-Tailed Multi-Label Visual Recognition by Collaborative Training on Uniform and Re-Balanced Samplings paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the CLIP(ResNet-50) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the PG Loss(ResNet-50) model in the Probability Guided Loss for Long-Tailed Multi-Label Image Classification paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the DB Focal(ResNet-50) model in the Distribution-Balanced Loss for Multi-Label Classification in Long-Tailed Datasets paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the Focal Loss(ResNet-50) model in the Focal Loss for Dense Object Detection paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the CB Loss(ResNet-50) model in the Class-Balanced Loss Based on Effective Number of Samples paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the RS(ResNet-50) model in the Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the OLTR(ResNet-50) model in the Large-Scale Long-Tailed Recognition in an Open World paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the ML-GCN(ResNet-50) model in the Multi-Label Image Recognition with Graph Convolutional Networks paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the LDAM(ResNet-50) model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the COCO-MLT dataset? | Average mAP |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the ImageNet-LT dataset? | Top-1 Accuracy |
What metrics were used to measure the MAM (ViT-B/16) model in the Improving Image Recognition by Retrieving from Web-Scale Image-Text Data paper on the ImageNet-LT dataset? | Top-1 Accuracy |
What metrics were used to measure the PEL (ViT-B/16) model in the Parameter-Efficient Long-Tailed Recognition paper on the ImageNet-LT dataset? | Top-1 Accuracy |
What metrics were used to measure the VL-LTR (ViT-B-16) model in the VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition paper on the ImageNet-LT dataset? | Top-1 Accuracy |