prompts | metrics_response |
|---|---|
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the BloombergGPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Sports Understanding) dataset? | Accuracy |
What metrics were used to measure the BERT model in the Predicting Subjective Features of Questions of QA Websites using BERT paper on the CrowdSource QA dataset? | MSE |
What metrics were used to measure the BERT Large model in the CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense paper on the CODAH dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the BloombergGPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Disambiguation QA) dataset? | Accuracy |
What metrics were used to measure the Golden Transformer model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the ruRoberta-large finetune model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the RuGPT3Large model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the SBERT_Large model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the ruT5-large-finetune model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the ruT5-base-finetune model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the ruBert-large finetune model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the ruBert-base finetune model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the RuBERT plain model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the RuBERT conversational model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the Multilingual Bert model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the RuGPT3Medium model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the RuGPT3Small model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RWSD dataset? | Accuracy |
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the Golden Transformer model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the ruT5-large-finetune model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the ruT5-base-finetune model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the ruBert-base finetune model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the ruRoberta-large finetune model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the ruBert-large finetune model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the SBERT_Large model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the RuBERT plain model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the Multilingual Bert model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3Medium model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the RuBERT conversational model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3Small model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the RuGPT3Large model in the paper on the RuCoS dataset? | Average F1, EM |
What metrics were used to measure the ConvNet model in the Event2Mind: Commonsense Inference on Events, Intents, and Reactions paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the BiRNN 100d model in the Event2Mind: Commonsense Inference on Events, Intents, and Reactions paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the EA-VQ-VAE model in the Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the COMET* model in the Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the S2S* model in the Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the CWVAE model in the Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the VRNMT model in the Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder paper on the Event2Mind test dataset? | Average Cross-Ent, BLEU |
What metrics were used to measure the PaLM-540B (few-shot, k=5) model in the PaLM: Scaling Language Modeling with Pathways paper on the BIG-bench (Known Unknowns) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Known Unknowns) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Known Unknowns) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Logical Sequence) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Logical Sequence) dataset? | Accuracy |
What metrics were used to measure the HNN model in the A Hybrid Neural Network Model for Commonsense Reasoning paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the BERTWiki-WSCR model in the A Surprisingly Robust Trick for Winograd Schema Challenge paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the GPT-2 model in the Language Models are Unsupervised Multitask Learners paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the BERTWSCR model in the A Surprisingly Robust Trick for Winograd Schema Challenge paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the Ensemble of 14 LMs model in the A Simple Method for Commonsense Reasoning paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the DSSM model in the Unsupervised Deep Structured Semantic Models for Commonsense Reasoning paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the Word-LM-partial model in the A Simple Method for Commonsense Reasoning paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the Char-LM-partial model in the A Simple Method for Commonsense Reasoning paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the USSM + Supervised DeepNet + KB model in the paper on the Winograd Schema Challenge dataset? | Score |
What metrics were used to measure the DeBERTa-large model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the SWAG dataset? | Test, Dev |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the SWAG dataset? | Test, Dev |
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SWAG dataset? | Test, Dev |
What metrics were used to measure the ESIM + ELMo model in the SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference paper on the SWAG dataset? | Test, Dev |
What metrics were used to measure the ESIM + GloVe model in the SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference paper on the SWAG dataset? | Test, Dev |
What metrics were used to measure the araneum word2vec (skipgram) + GRU model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
What metrics were used to measure the ruscorpora word2vec (skipgram) + GRU model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
What metrics were used to measure the ruscorpora fasttext + GRU model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
What metrics were used to measure the ruscorpora fasttext + LSTM model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
What metrics were used to measure the araneum fasttext + GRU model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
What metrics were used to measure the araneum fasttext + LSTM model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
What metrics were used to measure the araneum word2vec (skipgram) + LSTM model in the Event2Mind for Russian: Understanding Emotions and Intents in Texts. Corpus and Model for Evaluation paper on the Russian Event2Mind dataset? | recall@10 |
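
Each row above pairs a templated question (the `prompts` column) with the metric names reported for that model/dataset pair (the `metrics_response` column). Below is a minimal sketch of loading and tallying these rows with the Hugging Face `datasets` library; the dataset ID `user/metrics-qa` is a hypothetical placeholder, not this dataset's actual path.

```python
# Minimal sketch: load the prompt/response pairs and count how often each
# metric string appears. The dataset ID below is a hypothetical placeholder.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/metrics-qa", split="train")  # hypothetical ID

counts = Counter(row["metrics_response"] for row in ds)
for metric, n in counts.most_common():
    print(f"{metric}: {n}")
```

On the rows shown here, this tally would report `Accuracy` as the most common response, followed by `Average F1, EM`.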