| prompts | metrics_response |
|---|---|
What metrics were used to measure the UMAEA (w/o surf & w/o iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-d-w-v2 dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & w/o iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf & w/o iter) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-d-w-v1 dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & w/o iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf & w/o iter) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-oea-en-de dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the UMAEA (w/o surf & w/o iter) model in the Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the MEAformer (w/o surf & w/o iter) model in the MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the MCLEA (w/o surf & w/o iter) model in the Multi-modal Contrastive Representation Learning for Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the EVA (w/o surf & w/o iter) model in the Visual Pivoting for (Unsupervised) Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the MSNEA (w/o surf & w/o iter) model in the Multi-modal Siamese Network for Entity Alignment paper on the UMVM-dbp-ja-en dataset? | Hits@1 |
What metrics were used to measure the X-CLIP (Cross-Lingual) model in the MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian paper on the MSVD-Indonesian dataset? | R@1, R@5, R@10, Median Rank, Mean Rank |
What metrics were used to measure the CLIP4Clip model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the MSR-VTT dataset? | text-to-video R@1 |
What metrics were used to measure the FROZEN-revised model in the GEB+: A Benchmark for Generic Event Boundary Captioning, Grounding and Retrieval paper on the Kinetics-GEB+ dataset? | mAP, text-to-video R@1, text-to-video R@10, text-to-video R@5, text-to-video R@50 |
What metrics were used to measure the FROZEN-revised (two-stream) model in the GEB+: A Benchmark for Generic Event Boundary Captioning, Grounding and Retrieval paper on the Kinetics-GEB+ dataset? | mAP, text-to-video R@1, text-to-video R@10, text-to-video R@5, text-to-video R@50 |
What metrics were used to measure the NegBioELECTRA model in the No means ‘No’; a non-improper modeling approach, with embedded speculative context paper on the BioScope: Abstracts dataset? | F1 |
What metrics were used to measure the NegBioELECTRA model in the No means ‘No’; a non-improper modeling approach, with embedded speculative context paper on the *SEM 2012 Shared Task: Sherlock Dataset dataset? | F1 |
What metrics were used to measure the NegBERT model in the NegBERT: A Transfer Learning Approach for Negation Detection and Scope Resolution paper on the *SEM 2012 Shared Task: Sherlock Dataset dataset? | F1 |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the Best Model model in the Machine Learning in the Quantum Age: Quantum vs. Classical Support Vector Machines paper on the iris dataset? | Average F1 |
What metrics were used to measure the Quantum Neural Network model in the Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms paper on the https://www.kaggle.com/datasets/saurabhshahane/classification-of-malwares dataset? | F1 score |
What metrics were used to measure the AR-LDM model in the Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models paper on the PororoSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the PororoSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E (Cross-Attention) model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the PororoSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E (Story Embeddings) model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the PororoSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E (Story Embeddings + Cross-Attention) model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the PororoSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the AR-LDM model in the Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models paper on the FlintstonesSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the FlintstonesSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E (Story Embeddings) model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the FlintstonesSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E (Cross-Attention) model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the FlintstonesSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the StoryDALL-E (Story Embeddings + Cross-Attention) model in the StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation paper on the FlintstonesSV dataset? | FID, Char-F1, F-Acc |
What metrics were used to measure the AR-LDM (SIS captions) model in the Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models paper on the VIST dataset? | FID |
What metrics were used to measure the AR-LDM (DII captions) model in the Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models paper on the VIST dataset? | FID |
What metrics were used to measure the BioBERT (pre-trained on PubMed abstracts + PMC, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (expanded corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the SciBERT uncased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (expanded corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the SciBERT cased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (expanded corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the BERT-Base uncased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (expanded corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the BERT-Base cased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (expanded corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the BioLinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BIOSSES dataset? | Pearson Correlation |
What metrics were used to measure the BioLinkBERT (base) model in the LinkBERT: Pretraining Language Models with Document Links paper on the BIOSSES dataset? | Pearson Correlation |
What metrics were used to measure the NCBI_BERT(base) (P+M) model in the Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets paper on the BIOSSES dataset? | Pearson Correlation |
What metrics were used to measure the MacBERT-large model in the CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark paper on the CHIP-STS dataset? | Macro F1 |
What metrics were used to measure the NCBI_BERT(base) (P+M) model in the Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets paper on the MedSTS dataset? | Pearson Correlation |
What metrics were used to measure the CharacterBERT (base, medical, ensemble) model in the CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters paper on the ClinicalSTS dataset? | Pearson Correlation |
What metrics were used to measure the Dependency Tree-LSTM (Tai et al., 2015) model in the Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks paper on the SICK dataset? | MSE, Pearson Correlation, Spearman Correlation |
What metrics were used to measure the combine-skip (Kiros et al., 2015) model in the Skip-Thought Vectors paper on the SICK dataset? | MSE, Pearson Correlation, Spearman Correlation |
What metrics were used to measure the Bidirectional LSTM (Tai et al., 2015) model in the Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks paper on the SICK dataset? | MSE, Pearson Correlation, Spearman Correlation |
What metrics were used to measure the LSTM (Tai et al., 2015) model in the Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks paper on the SICK dataset? | MSE, Pearson Correlation, Spearman Correlation |
What metrics were used to measure the Doc2VecC model in the Efficient Vector Representation for Documents through Corruption paper on the SICK dataset? | MSE, Pearson Correlation, Spearman Correlation |
What metrics were used to measure the BioBERT (pre-trained on PubMed abstracts + PMC, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (original corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the SciBERT uncased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (original corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the SciBERT cased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (original corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the BERT-Base uncased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (original corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the BERT-Base cased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") model in the Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations paper on the Annotated corpus for semantic similarity of clinical trial outcomes (original corpus) dataset? | F1, Precision, Recall |
What metrics were used to measure the w2v2-mtl-chain model in the A Hierarchical Regression Chain Framework for Affective Vocal Burst Recognition paper on the HUME-VB dataset? | Concordance correlation coefficient (CCC) |
What metrics were used to measure the w2v2-r-er model in the Evaluating Variants of wav2vec 2.0 on Affective Vocal Burst Tasks paper on the HUME-VB dataset? | Concordance correlation coefficient (CCC) |
What metrics were used to measure the 3DNEL MSIGP model in the 3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation paper on the YCB-Video dataset? | Average Recall |
What metrics were used to measure the SurfEMB model in the 3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation paper on the YCB-Video dataset? | Average Recall |
What metrics were used to measure the Coupled Iterative Refinement model in the 3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation paper on the YCB-Video dataset? | Average Recall |
What metrics were used to measure the FFB6D model in the 3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation paper on the YCB-Video dataset? | Average Recall |
What metrics were used to measure the CosyPose model in the 3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation paper on the YCB-Video dataset? | Average Recall |
What metrics were used to measure the MegaPose model in the 3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation paper on the YCB-Video dataset? | Average Recall |
What metrics were used to measure the CoT-T5-11B model in the The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the BIG-bench (Navigate) dataset? | Accuracy |
What metrics were used to measure the CoT-T5-11B model in the The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the BIG-bench (Ruin Names) dataset? | Accuracy |
What metrics were used to measure the CoT-T5-11B model in the The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the BIG-bench (SNARKS) dataset? | Accuracy |
What metrics were used to measure the CoT-T5 model in the The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the Big-bench Hard dataset? | Accuracy |
What metrics were used to measure the CoT-T5-11B model in the The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning paper on the BIG-bench (Hyperbaton) dataset? | Accuracy |
What metrics were used to measure the Diadem model in the PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization paper on the MVTecAD dataset? | Absolute Time (ms) |
What metrics were used to measure the model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the dataset? | |
What metrics were used to measure the model in the Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer paper on the dataset? | |
What metrics were used to measure the NUWA-3D model in the Learning 3D Photography Videos via Self-supervised Diffusion on Single Images paper on the MSCOCO dataset? | CLIP Similarity, FID, Inception score |
What metrics were used to measure the NUWA-Infinity w/o text model in the NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis paper on the LHQC dataset? | Block-FID (Right Extend), Block-FID (Left Extend), Block-FID (Down Extend), Block-FID (Up Extend) |
What metrics were used to measure the NUWA-Infinity model in the NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis paper on the LHQC dataset? | Block-FID (Right Extend), Block-FID (Left Extend), Block-FID (Down Extend), Block-FID (Up Extend) |
What metrics were used to measure the Taming model in the Taming Transformers for High-Resolution Image Synthesis paper on the LHQC dataset? | Block-FID (Right Extend), Block-FID (Left Extend), Block-FID (Down Extend), Block-FID (Up Extend) |
What metrics were used to measure the MaskGIT model in the MaskGIT: Masked Generative Image Transformer paper on the LHQC dataset? | Block-FID (Right Extend), Block-FID (Left Extend), Block-FID (Down Extend), Block-FID (Up Extend) |
What metrics were used to measure the Residual Encoder model in the Enhanced Residual Networks for Context-based Image Outpainting paper on the Places365-Standard dataset? | L1, MSE, Adversarial |
What metrics were used to measure the PEMIRL model in the Meta-Inverse Reinforcement Learning with Probabilistic Context Variables paper on the Sawyer Pusher dataset? | Average Return |
What metrics were used to measure the POP3D model in the Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization paper on the InvertedPendulum dataset? | Mean |
What metrics were used to measure the PEMIRL model in the Meta-Inverse Reinforcement Learning with Probabilistic Context Variables paper on the Point Maze dataset? | Average Return |
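Every row above follows the same template: a prompt of the form "What metrics were used to measure the MODEL model in the PAPER paper on the DATASET dataset?" followed by a pipe-delimited metrics answer. A minimal parsing sketch, assuming that template holds for a row (the regex, field names, and `parse_row` helper are illustrative, not part of the dataset itself):

```python
# Sketch: split one "prompt | metrics_response" row back into structured fields.
# Assumes the row template seen above; returns None if a row doesn't match.
import re

ROW_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+?) model "
    r"in the (?P<paper>.+?) paper "
    r"on the (?P<dataset>.+?) dataset\?"
)

def parse_row(row: str):
    """Parse a row into {model, paper, dataset, metrics} or None."""
    # The last " | " separates the prompt from the metrics answer.
    prompt, _, answer = row.rpartition(" | ")
    m = ROW_RE.match(prompt.strip())
    if m is None:
        return None
    rec = m.groupdict()
    # The answer is a comma-separated metric list with a trailing " |".
    rec["metrics"] = [s.strip() for s in answer.strip(" |").split(",") if s.strip()]
    return rec

example = ("What metrics were used to measure the CLIP4Clip model in the "
           "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip "
           "Retrieval paper on the MSR-VTT dataset? | text-to-video R@1 |")
print(parse_row(example))
```

Note the non-greedy groups: they split on the first " model in the " and " paper on the " markers, which is why model names containing parentheses (e.g. "EVA (w/o surf)") still parse cleanly.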