| prompts | metrics_response |
|---|---|
What metrics were used to measure the td-LSTM (Zhang et al., 2016) model in the Architectural Complexity Measures of Recurrent Neural Networks paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on The Pile dataset? | Bits per byte
What metrics were used to measure the Jurassic-1 model in the GLM-130B: An Open Bilingual Pre-trained Model paper on The Pile dataset? | Bits per byte
What metrics were used to measure the GPT-3 (Zero-Shot) model in the Language Models are Few-Shot Learners paper on The Pile dataset? | Bits per byte
What metrics were used to measure the GPT-3 model in the GLM-130B: An Open Bilingual Pre-trained Model paper on The Pile dataset? | Bits per byte
What metrics were used to measure the GPT-2 (Zero-Shot) model in the Language Models are Unsupervised Multitask Learners paper on The Pile dataset? | Bits per byte
What metrics were used to measure the PaLM-540B (Few-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Megatron-Turing NLG 530B (Few-Shot) model in the Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the LLaMA-65B+CFG (Zero-Shot) model in the Stay on topic with Classifier-Free Guidance paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the LLaMA-30B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Cohere Large model in the paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the LLaMA-13B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the PaLM-540B (One-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GLaM 62B/64E (One-Shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GLM-130B (bidirectional attention) model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the SparseGPT (175B, 2:4 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the SparseGPT (175B, 4:8 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the PaLM-540B (Zero-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Chinchilla (Zero-Shot) model in the Training Compute-Optimal Large Language Models paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the SparseGPT (175B, 50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GPT-3 175B (Zero-Shot) model in the Language Models are Few-Shot Learners paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the OPT-175B model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GPT-3 13B (Zero-Shot) model in the Language Models are Few-Shot Learners paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GLM-XXLarge (bidirectional) model in the GLM: General Language Model Pretraining with Autoregressive Blank Infilling paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Pythia 12B (Zero-Shot) model in the Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling paper on the LAMBADA dataset? | Accuracy, Perplexity
What metrics were used to measure the GPT-3 6.7B (Zero-Shot) model in the Language Models are Few-Shot Learners paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GPT-J-6B model in the paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Pythia 6.9B (Zero-Shot) model in the Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling paper on the LAMBADA dataset? | Accuracy, Perplexity
What metrics were used to measure the GLM-XXLarge (unidirectional) model in the GLM: General Language Model Pretraining with Autoregressive Blank Infilling paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GPT-3 2.7B (Zero-Shot) model in the Language Models are Few-Shot Learners paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the GPT-2 1.5B (Zero Shot) model in the Language Models are Unsupervised Multitask Learners paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Universal Transformer (w/ dynamic halting) model in the Universal Transformers paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Residual Shuffle-Exchange network model in the Residual Shuffle-Exchange Networks for Fast Processing of Long Sequences paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Gated-Attention Reader (+ features) model in the Broad Context Language Modeling as Reading Comprehension paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the OPT-175B (50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the LAMBADA dataset? | Accuracy, Perplexity |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the DM Mathematics dataset? | BPB |
What metrics were used to measure the GPT2 model in the Zero-Shot Recommendation as Language Modeling paper on the language-modeling-recommendation dataset? | 1:1 Accuracy |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (CHID-FC) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (CHID-FC) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the arXiv dataset? | BPB |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Bookcorpus2 dataset? | BPB |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (DRCD) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (DRCD) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the NIH ExPorter dataset? | BPB |
What metrics were used to measure the SparseGPT (175B, 50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the OPT-175B model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the SparseGPT (175B, 4:8 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the SparseGPT (175B, 2:4 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GPT-2 (fine-tuned) model in the Hydra: A System for Large Multi-Model Deep Learning paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GPT-2 model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GPT-2 (large) model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GPT-2 (medium) model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GPT-2 (small) model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the BERT-Large-CAS model in the Language Models with Transformers paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Mogrifier LSTM + dynamic eval model in the Mogrifier LSTM paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the adversarial + AWD-LSTM-MoS + dynamic eval model in the Improving Neural Language Modeling via Adversarial Training paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the FRAGE + AWD-LSTM-MoS + dynamic eval model in the FRAGE: Frequency-Agnostic Word Representation paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. model in the Improved Language Modeling by Decoding the Past paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GL-LWGC + AWD-MoS-LSTM + dynamic eval model in the Gradual Learning of Recurrent Neural Networks paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-MoS + dynamic eval model in the Breaking the Softmax Bottleneck: A High-Rank RNN Language Model paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-DRILL + dynamic eval model in the Deep Residual Output Layers for Neural Language Generation paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM + dynamic eval model in the Dynamic Evaluation of Neural Sequence Models paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM + continuous cache pointer model in the Regularizing and Optimizing LSTM Language Models paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-DOC x5 model in the Direct Output Connection for a High-Rank Language Model paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Mogrifier LSTM model in the Mogrifier LSTM paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-DOC + Partial Shuffle model in the Partially Shuffling the Training Data to Improve Language Models paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-DOC model in the Direct Output Connection for a High-Rank Language Model paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-MoS + Partial Shuffle model in the Partially Shuffling the Training Data to Improve Language Models paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-MoS model in the Breaking the Softmax Bottleneck: A High-Rank RNN Language Model paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-FWM Schlag et al. (2020) model in the Learning Associative Inference Using Fast Weight Memory paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM-DRILL model in the Deep Residual Output Layers for Neural Language Generation paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM 3-layer with Fraternal dropout model in the Fraternal Dropout paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM + ATOI model in the Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the AWD-LSTM model in the Regularizing and Optimizing LSTM Language Models paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Melis et al. (2017) - 1-layer LSTM (tied) model in the On the State of the Art of Evaluation in Neural Language Models paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Grave et al. (2016) - LSTM + continuous cache pointer model in the Improving Neural Language Models with a Continuous Cache paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the EGRU model in the Efficient recurrent architectures through activity sparsity and sparse back-propagation through time paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Inan et al. (2016) - Variational LSTM (tied) (h=650) + augmented loss model in the Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Inan et al. (2016) - Variational LSTM (tied) (h=650) model in the Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Grave et al. (2016) - LSTM model in the Improving Neural Language Models with a Continuous Cache paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the OPT-175B (50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the WikiText-2 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (BUSTM) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (BUSTM) dataset? | Accuracy |
What metrics were used to measure the Transformer-LS (small) model in the Long-Short Transformer: Efficient Transformers for Language and Vision paper on the enwik8 dev dataset? | Bit per Character (BPC) |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the GitHub dataset? | BPB |
What metrics were used to measure the RETRO (7.5B) model in the Improving language models by retrieving from trillions of tokens paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Hybrid H3 (2.7B) model in the Hungry Hungry Hippos: Towards Language Modeling with State Space Models paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Megatron-LM model in the Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GLM-XXLarge (bidirectional) model in the GLM: General Language Model Pretraining with Autoregressive Blank Infilling paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GLM-XXLarge (unidirectional) model in the GLM: General Language Model Pretraining with Autoregressive Blank Infilling paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Hybrid H3 (1.3B) model in the Hungry Hungry Hippos: Towards Language Modeling with State Space Models paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the GateLoop (125M) model in the GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the kNN-LM w/ Adaptive Coefficient model in the You can't pick your neighbors, or can you? When and how to rely on retrieval in the $k$NN-LM paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the kNN-LM w/ Continuous Cache model in the Generalization through Memorization: Nearest Neighbor Language Models paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the Routing Transformer model in the Efficient Content-Based Sparse Attention with Routing Transformers paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
What metrics were used to measure the kNN-LM model in the Generalization through Memorization: Nearest Neighbor Language Models paper on the WikiText-103 dataset? | Test perplexity, Validation perplexity, Number of params |
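The rows above pair a natural-language question (`prompts`) with the comma-separated metric list reported for that model/paper/dataset combination (`metrics_response`). A minimal sketch of loading and querying such a dataset with the Hugging Face `datasets` library follows; the repository id `username/paper-metrics-qa` is a placeholder assumption, not the actual dataset path.

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual dataset path.
ds = load_dataset("username/paper-metrics-qa", split="train")

# Each row is a dict with the two columns shown in the table above.
row = ds[0]
print(row["prompts"])            # the question text
print(row["metrics_response"])   # e.g. "Accuracy, Perplexity"

# Example query: keep only the rows that ask about the LAMBADA dataset.
lambada_rows = ds.filter(lambda r: "LAMBADA dataset" in r["prompts"])
print(len(lambada_rows))
```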