| prompts | metrics_response |
|---|---|
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-BERT-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-RoBERTa-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the SRoBERTa-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the BERTlarge-flow (target) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the STS15 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the SICK-R dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B-v2 model in the AnglE-optimized Text Embeddings paper on the SICK-R dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-13B model in the AnglE-optimized Text Embeddings paper on the SICK-R dataset? | Spearman Correlation |
What metrics were used to measure the GenSen model in the Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning paper on the SentEval dataset? | MRPC, SICK-R, SICK-E, STS |
What metrics were used to measure the InferSent model in the Supervised Learning of Universal Sentence Representations from Natural Language Inference Data paper on the SentEval dataset? | MRPC, SICK-R, SICK-E, STS |
What metrics were used to measure the TF-KLD model in the Discriminative Improvements to Distributional Sentence Similarity paper on the SentEval dataset? | MRPC, SICK-R, SICK-E, STS |
What metrics were used to measure the Snorkel MeTaL(ensemble) model in the Training Complex Models with Multi-Task Weak Supervision paper on the SentEval dataset? | MRPC, SICK-R, SICK-E, STS |
What metrics were used to measure the MT-DNN-ensemble model in the Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding paper on the SentEval dataset? | MRPC, SICK-R, SICK-E, STS |
What metrics were used to measure the XLNet-Large model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the SentEval dataset? | MRPC, SICK-R, SICK-E, STS |
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+LLaMA-30B model in the Scaling Sentence Embeddings with Large Language Models paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-13B model in the Scaling Sentence Embeddings with Large Language Models paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTalarge model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-2.7B model in the Scaling Sentence Embeddings with Large Language Models paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the SRoBERTa-NLI-base model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the SRoBERTa-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STS/🦕) model in the Generating Datasets with Pretrained Language Models paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the SBERT-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the SBERT-NLI-base model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the BERTbase-flow (NLI) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the SICK dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-13B model in the AnglE-optimized Text Embeddings paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+LLaMA-30B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B-v2 model in the AnglE-optimized Text Embeddings paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-13B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-2.7B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTalarge model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-BERT-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-RoBERTa-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the SBERT-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the BERTlarge-flow (target) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the STS14 dataset? | Spearman Correlation |
What metrics were used to measure the Synthesizer (R+V) model in the Synthesizer: Rethinking Self-Attention in Transformer Models paper on the MRPC Dev dataset? | Accuracy |
What metrics were used to measure the TinyBERT (M=6;d'=768;d'i=3072) model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the MRPC Dev dataset? | Accuracy |
What metrics were used to measure the MT-DNN-SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the StructBERTRoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the FLOATER-large model in the Learning to Encode Position for Transformer with Continuous Dynamical Model paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the Vector-wise model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the AutoBERT-Zero (Large) model in the AutoBERT-Zero: Evolving BERT Backbone from Scratch paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the MLM+ del-word+ reorder model in the CLEAR: Contrastive Learning for Sentence Representation paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the AutoBERT-Zero (Base) model in the AutoBERT-Zero: Evolving BERT Backbone from Scratch paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the ELECTRA model in the ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the MobileBERT model in the MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the FNet-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the Charformer-Tall model in the Charformer: Fast Character Transformers via Gradient-based Subword Tokenization paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the RealFormer model in the RealFormer: Transformer Likes Residual Attention paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the TinyBERT model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the TF-KLD model in the Discriminative Improvements to Distributional Sentence Similarity paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the GenSen model in the Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the InferSent model in the Supervised Learning of Universal Sentence Representations from Natural Language Inference Data paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the Nyströmformer model in the Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the SMARTRoBERTa model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the SMART-BERT model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
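Rows in this table follow a fixed shape: a question in the `prompts` column, a pipe separator, and the `metrics_response` answer, with a trailing pipe. They can be split back into (prompt, response) pairs with a few lines of Python; this is a minimal sketch (the `parse_row` helper and the sample row literal are illustrative, not part of the dataset's tooling):

```python
def parse_row(line: str) -> tuple[str, str]:
    """Split one pipe-delimited row into (prompt, metrics_response)."""
    # Drop surrounding whitespace and the trailing "|", then split on the
    # last " | " so commas inside the response (e.g. "MRPC, SICK-R, ...")
    # are preserved.
    body = line.strip().rstrip("|").strip()
    prompt, response = body.rsplit(" | ", 1)
    return prompt, response

row = (
    "What metrics were used to measure the DiffCSE-BERT-base model in the "
    "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings "
    "paper on the STS15 dataset? | Spearman Correlation |"
)
prompt, response = parse_row(row)
print(response)  # Spearman Correlation
```

Splitting on the *last* `" | "` matters because the prompt text itself never contains a pipe in this dataset, while multi-item responses contain commas that a naive comma split would mangle.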