| prompts | metrics_response |
|---|---|
What metrics were used to measure the BERT-Base model in the Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the BERT-Large model in the Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning paper on the MRPC dataset? | Accuracy, F1, Number of Params, Dev Accuracy, Dev F1, Structure Aware Intrinsic Dimension, Direct Intrinsic Dimension |
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the CxC dataset? | avg ± std |
What metrics were used to measure the DE-T2T+I2T model in the MURAL: Multimodal, Multitask Retrieval Across Languages paper on the CxC dataset? | avg ± std |
What metrics were used to measure the MURAL-large model in the MURAL: Multimodal, Multitask Retrieval Across Languages paper on the CxC dataset? | avg ± std |
What metrics were used to measure the ALIGN-L2 model in the MURAL: Multimodal, Multitask Retrieval Across Languages paper on the CxC dataset? | avg ± std |
What metrics were used to measure the PromptEOL+CSE+OPT-13B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+LLaMA-30B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-2.7B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-13B model in the AnglE-optimized Text Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B-v2 model in the AnglE-optimized Text Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTa-large model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the SRoBERTa-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-BERT-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTa-base model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-RoBERTa-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the BERTlarge-flow (target) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the STS12 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-13B model in the AnglE-optimized Text Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B-v2 model in the AnglE-optimized Text Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+LLaMA-30B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-13B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-2.7B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTa-large model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-BERT-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-RoBERTa-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-BERT-base model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTa-base model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the SBERT-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the BERTlarge-flow (target) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the STS13 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-13B model in the AnglE-optimized Text Embeddings paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B-v2 model in the AnglE-optimized Text Embeddings paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B model in the AnglE-optimized Text Embeddings paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+LLaMA-30B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-2.7B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+OPT-13B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-large-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the PromCSE-RoBERTa-large model in the Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-large-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-RoBERTa-large model in the SimCSE: Simple Contrastive Learning of Sentence Embeddings paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-RoBERTa-base-cross (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Trans-Encoder-BERT-base-bi (unsup.) model in the Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-RoBERTa-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the DiffCSE-BERT-base model in the DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-RoBERTa-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the BERTlarge-flow (target) model in the On the Sentence Embeddings from Pre-trained Language Models paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Dino (STSb/🦕) model in the Generating Datasets with Pretrained Language Models paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the SRoBERTa-NLI-large model in the Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the Mirror-BERT-base (unsup.) model in the Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the IS-BERT-NLI model in the An Unsupervised Sentence Embedding Method by Mutual Information Maximization paper on the STS16 dataset? | Spearman Correlation |
What metrics were used to measure the MT-DNN-SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the StructBERTRoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the Mnet-Sim model in the MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ELECTRA model in the ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the Vector-wise model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ELECTRA (no tricks) model in the ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the MLM+ del-word model in the CLEAR: Contrastive Learning for Sentence Representation paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the RealFormer model in the RealFormer: Transformer Likes Residual Attention paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the Charformer-Tall model in the Charformer: Fast Character Transformers via Gradient-based Subword Tokenization paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the TinyBERT model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the USE_T model in the Universal Sentence Encoder paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-13B model in the AnglE-optimized Text Embeddings paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the ASA + RoBERTa model in the Adversarial Self-Attention for Language Understanding paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the PromptEOL+CSE+LLaMA-30B model in the Scaling Sentence Embeddings with Large Language Models paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
What metrics were used to measure the AnglE-LLaMA-7B-v2 model in the AnglE-optimized Text Embeddings paper on the STS Benchmark dataset? | Pearson Correlation, Spearman Correlation, Accuracy, Dev Pearson Correlation, Dev Spearman Correlation |
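Every prompt in the rows above follows one fixed template, so the model, paper, and dataset fields can be recovered programmatically. The sketch below is an illustration, not part of the dataset: the regex is inferred from the rows shown here and assumes every prompt matches the template exactly.

```python
import re

# Template inferred from the dataset rows (an assumption, not a documented schema):
# "What metrics were used to measure the {model} model in the {paper} paper
#  on the {dataset} dataset?"
PROMPT_RE = re.compile(
    r"^What metrics were used to measure the (?P<model>.+?) model "
    r"in the (?P<paper>.+?) paper on the (?P<dataset>.+?) dataset\?$"
)

def parse_prompt(prompt: str):
    """Return (model, paper, dataset) for a template prompt, or None."""
    m = PROMPT_RE.match(prompt)
    if m is None:
        return None
    return (m.group("model"), m.group("paper"), m.group("dataset"))

example = (
    "What metrics were used to measure the BERT-Base model in the "
    "Intrinsic Dimensionality Explains the Effectiveness of Language "
    "Model Fine-Tuning paper on the MRPC dataset?"
)
print(parse_prompt(example))
# → ('BERT-Base', 'Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning', 'MRPC')
```

Non-greedy groups (`.+?`) make the match stop at the first ` model in the ` and ` paper on the ` delimiters, which keeps paper titles containing words like "Model" from being split incorrectly.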