| prompts | metrics_response |
|---|---|
What metrics were used to measure the Multilingual Bert model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the RuBERT conversational model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the RuGPT3Medium model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the Golden Transformer model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the RuGPT3Small model in the paper on the LiDiRus dataset? | MCC |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (OCNLI-FC) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (OCNLI-FC) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the USPTO Backgrounds dataset? | BPB |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (C3) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (C3) dataset? | Accuracy |
What metrics were used to measure the Transformer-XL + RMS dynamic eval model in the Dynamic Evaluation of Transformer Language Models paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Compressive Transformer model in the Compressive Transformers for Long-Range Sequence Modelling paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Mogrifier LSTM + dynamic eval model in the Mogrifier LSTM paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 24-layer Transformer-XL model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Longformer Large model in the Longformer: The Long-Document Transformer paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Longformer Small model in the Longformer: The Long-Document Transformer paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 18-layer Transformer-XL model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 64-layer Character Transformer Model model in the Character-Level Language Modeling with Deeper Self-Attention paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 12-layer Transformer-XL model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the mLSTM + dynamic eval model in the Dynamic Evaluation of Neural Sequence Models paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 12-layer Character Transformer Model model in the Character-Level Language Modeling with Deeper Self-Attention paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Mogrifier LSTM model in the Mogrifier LSTM paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 3-layer AWD-LSTM model in the An Analysis of Neural Language Modeling at Multiple Scales paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large mLSTM +emb +WN +VD model in the Multiplicative LSTM for sequence modelling paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large FS-LSTM-4 model in the Fast-Slow Recurrent Neural Networks paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large RHN model in the Recurrent Highway Networks paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the FS-LSTM-4 model in the Fast-Slow Recurrent Neural Networks paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the RHN - depth 5 [zilly2016recurrent] model in the Recurrent Highway Networks paper on the Hutter Prize dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Books3 dataset? | BPB |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the OpenSubtitles dataset? | BPB |
What metrics were used to measure the Gpt3 model in the 007: Democratically Finding The Cause of Packet Drops paper on the 100 sleep nights of 8 caregivers dataset? | 10% |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (WSC1.1) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (WSC1.1) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Gutenberg PG-19 dataset? | BPB |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (CMNLI) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (CMNLI) dataset? | Accuracy |
What metrics were used to measure the Mogrifier LSTM + dynamic eval model in the Mogrifier LSTM paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Mogrifier LSTM model in the Mogrifier LSTM paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the GAM-RHN-5 model in the Recurrent Highway Networks with Grouped Auxiliary Memory paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Trellis Network model in the Trellis Networks for Sequence Modeling paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Feedback Transformer model in the Addressing Some Limitations of Transformers with Feedback Memory paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. model in the Improved Language Modeling by Decoding the Past paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 3-layer AWD-LSTM model in the An Analysis of Neural Language Modeling at Multiple Scales paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Dense IndRNN model in the Deep Independently Recurrent Neural Network (IndRNN) paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 6-layer QRNN model in the An Analysis of Neural Language Modeling at Multiple Scales paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the FS-LSTM-4 model in the Fast-Slow Recurrent Neural Networks paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the IndRNN model in the Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the FS-LSTM-2 model in the Fast-Slow Recurrent Neural Networks paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the NAS-RL model in the Neural Architecture Search with Reinforcement Learning paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 2-layer Norm HyperLSTM model in the HyperNetworks paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the R-Transformer model in the R-Transformer: Recurrent Neural Network Enhanced Transformer paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Seq-U-Net model in the Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the STAR model in the Gating Revisited: Deep Multi-layer RNNs That Can Be Trained paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the TCN model in the Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Temporal Convolutional Network model in the An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Bipartite Flow model in the Discrete Flows: Invertible Generative Models of Discrete Data paper on the Penn Treebank (Character Level) dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (OCNLI_50K) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the CLUE (OCNLI_50K) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the Pile CC dataset? | BPB |
What metrics were used to measure the GLM-130B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (EPRSTMT) dataset? | Accuracy |
What metrics were used to measure the ERNIE 3.0 Titan-260B model in the GLM-130B: An Open Bilingual Pre-trained Model paper on the FewCLUE (EPRSTMT) dataset? | Accuracy |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the PhilPapers dataset? | BPB |
What metrics were used to measure the Transformer-LS (small) model in the Long-Short Transformer: Efficient Transformers for Language and Vision paper on the Text8 dev dataset? | Bit per Character (BPC) |
What metrics were used to measure the Primer model in the Primer: Searching for Efficient Transformers for Language Modeling paper on the C4 dataset? | Perplexity, TPUv3 Hours, Steps |
What metrics were used to measure the Zeropoint LLM.int8 (vector-wise + decomp, 13B) model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the C4 dataset? | Perplexity, TPUv3 Hours, Steps |
What metrics were used to measure the T5++ model in the Primer: Searching for Efficient Transformers for Language Modeling paper on the C4 dataset? | Perplexity, TPUv3 Hours, Steps |
What metrics were used to measure the Original T5 model in the Primer: Searching for Efficient Transformers for Language Modeling paper on the C4 dataset? | Perplexity, TPUv3 Hours, Steps |
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the C4 dataset? | Perplexity, TPUv3 Hours, Steps |
What metrics were used to measure the PAR Transformer 24B model in the Pay Attention when Required paper on the enwiki8 dataset? | Bit per Character (BPC) |
What metrics were used to measure the Gopher model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the FreeLaw dataset? | BPB |
What metrics were used to measure the GPT-2 model in the Language Models are Unsupervised Multitask Learners paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Focus model in the Focus Your Attention (with Adaptive IIR Filters) paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Transformer-XL + RMS dynamic eval + decay model in the Dynamic Evaluation of Transformer Language Models paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 24L Transformer + 8K adaptive span model in the Adaptive Attention Span in Transformers paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Transformer-XL - 24 layers model in the Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the All-attention network - 36 layers model in the Augmenting Self-attention with Persistent Memory paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Transformer-LS (small) model in the Long-Short Transformer: Efficient Transformers for Language and Vision paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 12L Transformer + 8K adaptive span model in the Adaptive Attention Span in Transformers paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the All-attention network - 18 layers model in the Augmenting Self-attention with Persistent Memory paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the BP-Transformer - 12 Layers model in the BP-Transformer: Modelling Long-Range Context via Binary Partitioning paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 64-layer Character Transformer Model model in the Character-Level Language Modeling with Deeper Self-Attention paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the GAM-RHN-10 model in the Recurrent Highway Networks with Grouped Auxiliary Memory paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the 12-layer Character Transformer Model model in the Character-Level Language Modeling with Deeper Self-Attention paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the PAR Transformer 24B model in the Pay Attention when Required paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the mLSTM + dynamic eval model in the Dynamic Evaluation of Neural Sequence Models paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Bipartite flows (8 flows) model in the Discrete Flows: Invertible Generative Models of Discrete Data paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large RHN model in the Recurrent Highway Networks paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Large mLSTM +emb +WN +VD model in the Multiplicative LSTM for sequence modelling paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the LayerNorm HM-LSTM model in the Hierarchical Multiscale Recurrent Neural Networks paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the BN LSTM model in the Recurrent Batch Normalization paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the Unregularised mLSTM model in the Multiplicative LSTM for sequence modelling paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the BFN model in the Bayesian Flow Networks paper on the Text8 dataset? | Bit per Character (BPC), Number of params |
What metrics were used to measure the td-LSTM-large model in the Architectural Complexity Measures of Recurrent Neural Networks paper on the Text8 dataset? | Bit per Character (BPC), Number of params |