prompts: string (length 81–413)
metrics_response: string (length 0–371)
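Each record pairs a prompts string (the question) with a metrics_response string (the comma-separated metric names). As a minimal sketch of how such a dump could be consumed, assuming the rows live in a hypothetical JSONL file named metrics_qa.jsonl (the file name and format are illustrative assumptions, not part of this dataset's documentation):

import json

# Hypothetical file; each line is assumed to hold one record with the
# two columns described above: "prompts" and "metrics_response".
with open("metrics_qa.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        question = record["prompts"]           # 81-413 characters
        answer = record["metrics_response"]    # 0-371 characters
        print(f"Q: {question}\nA: {answer}\n")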
What metrics were used to measure the Transformer-XL (RMS dynamic eval) model in the Dynamic Evaluation of Transformer Language Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the ∞-former (SM) model in the $\infty$-former: Infinite Memory Transformer paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the ∞-former (Sticky memories + initialized GPT-2 Small) model in the $\infty$-former: Infinite Memory Transformer paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the ∞-former (initialized GPT-2 Small) model in the $\infty$-former: Infinite Memory Transformer paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Hybrid H3 (355M) model in the Hungry Hungry Hippos: Towards Language Modeling with State Space Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer-XL (SGD dynamic eval) model in the Dynamic Evaluation of Transformer Language Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Compressive Transformer (18L, M=1024) model in the Compressive Transformers for Long-Range Sequence Modelling paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the SRU++ Large model in the When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the SegaTransformer-XL model in the Segatron: Segment-Aware Transformer for Language Modeling and Understanding paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer+SSA+Self-ensemble model in The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer-XL Large + Phrase Induction model in the Improving Neural Language Models by Segmenting, Attending, and Predicting the Future paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the GPT-2 Full model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Staged Training model in the Shortformer: Better Language Modeling using Shorter Inputs paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer+SSA model in The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Sandwich Transformer model in the Improving Transformer Models by Reordering their Sublayers paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the DIFFQ (λ=1, g=16) model in the Differentiable Model Compression via Pseudo Quantization Noise paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Mega model in the Mega: Moving Average Equipped Gated Attention paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Shortformer model in the Shortformer: Better Language Modeling using Shorter Inputs paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Feedback Transformer (8 layers) model in the Addressing Some Limitations of Transformers with Feedback Memory paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the SRU++ Base model in the When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer-XL Large model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the PAR Transformer Large model in the Pay Attention when Required paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Hyena-3-slim model in the Hyena Hierarchy: Towards Larger Convolutional Language Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Hyena-3 model in the Hyena Hierarchy: Towards Larger Convolutional Language Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer (Adaptive inputs) model in the Adaptive Input Representations for Neural Language Modeling paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the T2R + Pretrain model in the Finetuning Pretrained Transformers into RNNs paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Subformer model in the Subformer: A Parameter Reduced Transformer paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the BERT-Large-CAS model in the Language Models with Transformers paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the All-attention network (36 layers) model in the Augmenting Self-attention with Persistent Memory paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the S4 model in the Efficiently Modeling Long Sequences with Structured State Spaces paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the GPT-2 Large model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Feedback Transformer (4 layers) model in the Addressing Some Limitations of Transformers with Feedback Memory paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the PAR Transformer Base model in the Pay Attention when Required paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the DEQ-Transformer (medium, adaptive embed) model in the Deep Equilibrium Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the TaLK Convolutions model in the Time-aware Large Kernel Convolutions paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Rfa-Gate-Gaussian-Stateful (Big) model in the Random Feature Attention paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Hybrid H3 (125M) model in the Hungry Hungry Hippos: Towards Language Modeling with State Space Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer-XL Standard model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the DeLighT model in the DeLighT: Deep and Light-weight Transformer paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the ∞-former (Sticky memories) model in the $\infty$-former: Infinite Memory Transformer paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer-N model in the Revisiting Simple Neural Probabilistic Language Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the FNetAR Medium model in the FNetAR: Mixing Tokens with Autoregressive Fourier Transforms paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the GPT-2 Medium model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the AdvSoft (+ 4 layer QRNN + dynamic eval) model in the Improving Neural Language Modeling via Adversarial Training paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the DEQ-TrellisNet model in the Deep Equilibrium Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Trellis Network model in the Trellis Networks for Sequence Modeling paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM (Hebbian, Cache, MbPA) model in the Fast Parametric Learning with Activation Memorization paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM (Hebbian, Cache) model in the Fast Parametric Learning with Activation Memorization paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Rfa-Gate-Gaussian-Stateful (Small) model in the Random Feature Attention paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM (RMC) model in the Relational recurrent neural networks paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the DEQ-Transformer (small) model in the Deep Equilibrium Models paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the AWD-LSTM-MoS + ATOI model in the Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the 4 layer QRNN model in An Analysis of Neural Language Modeling at Multiple Scales paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM (Hebbian) model in the Fast Parametric Learning with Activation Memorization paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM model in the Fast Parametric Learning with Activation Memorization paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the GCNN-8 model in the Language Modeling with Gated Convolutional Networks paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the GPT-2 Small model in the Language Models are Unsupervised Multitask Learners paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Neural cache model (size = 2,000) in the Improving Neural Language Models with a Continuous Cache paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Neural cache model (size = 100) in the Improving Neural Language Models with a Continuous Cache paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the TCN model in An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Temporal CNN model in the Convolutional Sequence Modeling Revisited paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM model in the Improving Neural Language Models with a Continuous Cache paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Transformer (Adaptive inputs) model in the On the adequacy of untuned warmup for adaptive optimization paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the LSTM model in the How much complexity does an RNN architecture need to learn syntax-sensitive dependencies? paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the GRU model in the How much complexity does an RNN architecture need to learn syntax-sensitive dependencies? paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the Decay RNN model in the How much complexity does an RNN architecture need to learn syntax-sensitive dependencies? paper on the WikiText-103 dataset?
Test perplexity, Validation perplexity, Number of params
What metrics were used to measure the I-DARTS model in the Improved Differentiable Architecture Search for Language Modeling and Named Entity Recognition paper on the Penn Treebank (PTB) dataset?
PPL
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the PubMed Central dataset?
BPB
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the OpenWebText2 dataset?
BPB
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the StackExchange dataset?
BPB
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the PubMed Abstracts dataset?
BPB
What metrics were used to measure the GPT-2 (48 layers, h=1600) model in the Language Models are Unsupervised Multitask Learners paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer-XL (24 layers, RMS dynamic eval, decay) model in the Dynamic Evaluation of Transformer Language Models paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Focus model in the Focus Your Attention (with Adaptive IIR Filters) paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Expire-Span (24 layers) model in the Not All Memories are Created Equal: Learning to Forget by Expiring paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the SRU++ Large model in the When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Feedback Transformer model in the Addressing Some Limitations of Transformers with Feedback Memory paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Sandwich Transformer (adaptive span) model in the Improving Transformer Models by Reordering their Sublayers paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Compressive Transformer (24 layers) model in the Compressive Transformers for Long-Range Sequence Modelling paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer-LS (large) model in the Long-Short Transformer: Efficient Transformers for Language and Vision paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the SRU++ Base model in the When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer (24 layers, 8k adaptive span) model in the Adaptive Attention Span in Transformers paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer-XL (24 layers) model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Longformer (30 layers, h=512) model in the Longformer: The Long-Document Transformer paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Sparse Transformer (30 layers, fixed attn) model in the Generating Long Sequences with Sparse Transformers paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Routing Transformer (12 layers) model in the Efficient Content-Based Sparse Attention with Routing Transformers paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer-LS (small) model in the Long-Short Transformer: Efficient Transformers for Language and Vision paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Hourglass model in the Hierarchical Transformers Are More Efficient Language Models paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Longformer (12 layers, h=512) model in the Longformer: The Long-Document Transformer paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the All-attention network (18 layers) model in the Augmenting Self-attention with Persistent Memory paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer (12 layers, 8k adaptive span) model in the Adaptive Attention Span in Transformers paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the BP-Transformer (12 layers) model in the BP-Transformer: Modelling Long-Range Context via Binary Partitioning paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer+SSA model in The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer-XL (18 layers) model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer (64 layers) model in the Character-Level Language Modeling with Deeper Self-Attention paper on the enwik8 dataset?
Bit per Character (BPC), Number of params
What metrics were used to measure the Transformer-XL (12 layers) model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the enwik8 dataset?
Bit per Character (BPC), Number of params