prompts | metrics_response |
|---|---|
What metrics were used to measure the SHA-RNN (4 layers, h=1024, attention head per layer) model in the Single Headed Attention RNN: Stop Thinking With Your Head paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the SHA-RNN (4 layers, h=1024, single attention head) model in the Single Headed Attention RNN: Stop Thinking With Your Head paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 64-layer Character Transformer Model model in the Character-Level Language Modeling with Deeper Self-Attention paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Mogrifier LSTM model in the Mogrifier LSTM paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the LSTM model in the Mogrifier LSTM paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Cluster-Former (#C=512) model in the Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the AWD-LSTM (3 layers) model in the An Analysis of Neural Language Modeling at Multiple Scales paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large mLSTM model in the Multiplicative LSTM for sequence modelling paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large FS-LSTM-4 model in the Fast-Slow Recurrent Neural Networks paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Recurrent Highway Networks model in the Recurrent Highway Networks paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the ByteNet model in the Neural Machine Translation in Linear Time paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the LN HM-LSTM model in the Hierarchical Multiscale Recurrent Neural Networks paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the SHA-LSTM (4 layers, h=1024, no attention head) model in the Single Headed Attention RNN: Stop Thinking With Your Head paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Hypernetworks model in the HyperNetworks paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the LSTM (7 layers) model in the Generating Sequences With Recurrent Neural Networks paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the All-attention network (36 layers) model in the Augmenting Self-attention with Persistent Memory paper on the enwik8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (AFQMC) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (AFQMC) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Ubuntu IRC dataset? | BPB |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the HackerNews dataset? | BPB |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (CLUEWSC-FC) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (CLUEWSC-FC) dataset? | Accuracy |
What metrics were used to measure the FLASH-Quad-8k model in the Transformer Quality in Linear Time paper on the Wiki-40B dataset? | Perplexity |
What metrics were used to measure the Combiner-Axial-8k model in the Combiner: Full Attention Transformer with Sparse Computation Cost paper on the Wiki-40B dataset? | Perplexity |
What metrics were used to measure the Combiner-Fixed-8k model in the Combiner: Full Attention Transformer with Sparse Computation Cost paper on the Wiki-40B dataset? | Perplexity |
What metrics were used to measure the GLM-130B (3-shot) model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the BIG-bench-lite dataset? | Accuracy |
What metrics were used to measure the GLM-130B (1-shot) model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the BIG-bench-lite dataset? | Accuracy |
What metrics were used to measure the GLM-130B (0-shot) model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the BIG-bench-lite dataset? | Accuracy |
What metrics were used to measure the GPT-3 (Zero-Shot) model in the Language Models are Few-Shot Learners paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the BERT-Large-CAS model in the Language Models with Transformers paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the GPT-2 model in the Language Models are Unsupervised Multitask Learners paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Mogrifier LSTM + dynamic eval model in the Mogrifier LSTM paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the adversarial + AWD-LSTM-MoS + dynamic eval model in the Improving Neural Language Modeling via Adversarial Training paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the GL-LWGC + AWD-MoS-LSTM + dynamic eval model in the Gradual Learning of Recurrent Neural Networks paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the FRAGE + AWD-LSTM-MoS + dynamic eval model in the FRAGE: Frequency-Agnostic Word Representation paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-DOC x5 model in the Direct Output Connection for a High-Rank Language Model paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. model in the Improved Language Modeling by Decoding the Past paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-MoS + dynamic eval model in the Breaking the Softmax Bottleneck: A High-Rank RNN Language Model paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-DRILL + dynamic eval model in the Deep Residual Output Layers for Neural Language Generation paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Dense IndRNN+dynamic eval model in the Deep Independently Recurrent Neural Network (IndRNN) paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM + dynamic eval model in the Dynamic Evaluation of Neural Sequence Models paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-DOC + Partial Shuffle model in the Partially Shuffling the Training Data to Improve Language Models paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-DOC model in the Direct Output Connection for a High-Rank Language Model paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM + continuous cache pointer model in the Regularizing and Optimizing LSTM Language Models paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-MoS + Partial Shuffle model in the Partially Shuffling the Training Data to Improve Language Models paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Trellis Network model in the Trellis Networks for Sequence Modeling paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-MoS model in the Breaking the Softmax Bottleneck: A High-Rank RNN Language Model paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-FWM Schlag et al. (2020) model in the Learning Associative Inference Using Fast Weight Memory paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Transformer-XL model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Transformer-XL + AutoDropout model in the AutoDropout: Learning Dropout Patterns to Regularize Deep Networks paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the 2-layer skip-LSTM + dropout tuning model in the Pushing the bounds of dropout paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM-DRILL model in the Deep Residual Output Layers for Neural Language Generation paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Differentiable NAS model in the DARTS: Differentiable Architecture Search paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Dense IndRNN model in the Deep Independently Recurrent Neural Network (IndRNN) paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM 3-layer with Fraternal dropout model in the Fraternal Dropout paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the DEQ-TrellisNet model in the Deep Equilibrium Models paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the AWD-LSTM model in the Regularizing and Optimizing LSTM Language Models paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Efficient NAS model in the Efficient Neural Architecture Search via Parameter Sharing paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the NAS-RL model in the Neural Architecture Search with Reinforcement Learning paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Recurrent highway networks model in the Recurrent Highway Networks paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Inan et al. (2016) - Variational RHN model in the Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Gal & Ghahramani (2016) - Variational LSTM (large) model in the A Theoretically Grounded Application of Dropout in Recurrent Neural Networks paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Zaremba et al. (2014) - LSTM (large) model in the Recurrent Neural Network Regularization paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the LSTM (Bai et al., 2018) model in the An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Gal & Ghahramani (2016) - Variational LSTM (medium) model in the A Theoretically Grounded Application of Dropout in Recurrent Neural Networks paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Zaremba et al. (2014) - LSTM (medium) model in the Recurrent Neural Network Regularization paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the R-Transformer model in the R-Transformer: Recurrent Neural Network Enhanced Transformer paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the GRU (Bai et al., 2018) model in the An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the Seq-U-Net model in the Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the TCN model in the Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling paper on the Penn Treebank (Word Level) dataset? | Test perplexity, Validation perplexity, Params |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (CMRC2018) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (CMRC2018) dataset? | Accuracy |
What metrics were used to measure the OmniNetT (Large) model in the OmniNet: Omnidirectional Representations from Transformers paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the OmniNetP (Large) model in the OmniNet: Omnidirectional Representations from Transformers paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Transformer-XL Large model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the OmniNetB (Large) model in the OmniNet: Omnidirectional Representations from Transformers paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Adaptive Input Very Large model in the Adaptive Input Representations for Neural Language Modeling paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Transformer-XL Base model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the SRU++ Large model in the When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the 10 LSTM+CNN inputs + SNM10-SKIP (ensemble) model in the Exploring the Limits of Language Modeling paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Adaptive Input Large model in the Adaptive Input Representations for Neural Language Modeling paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Mesh Tensorflow model in the Mesh-TensorFlow: Deep Learning for Supercomputers paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Cohere Large model in the paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the SRU++ model in the When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the DynamicConv model in the Pay Less Attention with Lightweight and Dynamic Convolutions paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the High-Budget MoE model in the Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Evolved Transformer Big model in the The Evolved Transformer paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the LSTM-8192-1024 + CNN Input model in the Exploring the Limits of Language Modeling paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the LSTM-8192-1024 model in the Exploring the Limits of Language Modeling paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the GCNN-14 bottleneck model in the Language Modeling with Gated Convolutional Networks paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Low-Budget MoE model in the Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the BIG G-LSTM-2 model in the Factorization tricks for LSTM networks paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the GPT-2 model in the Language Models are Unsupervised Multitask Learners paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the RNN-1024 + 9 Gram model in the One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Sparse Non-Negative model in the Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the H-Transformer-1D Nr=16 (Base) model in the H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the H-Transformer-1D Nr=16 (Large) model in the H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences paper on the One Billion Word dataset? | PPL, Number of params, Validation perplexity |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Curation Corpus dataset? | BPB |
What metrics were used to measure the Fuzzy Retrieval model in the EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification paper on the EVI pl-PL dataset? | Top-1 (%) |
What metrics were used to measure the Fuzzy Retrieval model in the EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification paper on the EVI en-GB dataset? | Top-1 (%) |
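Each row above pairs a prompt with a comma-separated list of metric names. A minimal sketch (assuming rows use `" | "` as the column separator, splitting on its last occurrence since prompts never end with that sequence) of parsing one row into a structured record:

```python
def parse_row(row: str):
    """Split a 'prompt | metrics' row into the prompt and a list of metric names."""
    # rpartition splits on the LAST " | ", so pipes inside the prompt are safe
    prompt, _, metrics = row.rpartition(" | ")
    return prompt, [m.strip() for m in metrics.split(",")]

row = ("What metrics were used to measure the Mogrifier LSTM model in the "
       "Mogrifier LSTM paper on the enwik8 dataset? | "
       "Bit per Character (BPC), Number of params")

prompt, metrics = parse_row(row)
print(metrics)  # ['Bit per Character (BPC)', 'Number of params']
```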