| prompts | metrics_response |
|---|---|
What metrics were used to measure the LSTM-RNN parametric model in the WaveNet: A Generative Model for Raw Audio paper on the Mandarin Chinese dataset? | Mean Opinion Score |
What metrics were used to measure the HMM-driven concatenative model in the WaveNet: A Generative Model for Raw Audio paper on the Mandarin Chinese dataset? | Mean Opinion Score |
What metrics were used to measure the BDDM vocoder model in the BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis paper on the LJSpeech dataset? | Mean Opinion Score |
What metrics were used to measure the Neural HMM model in the Neural HMMs are all you need (for high-quality attention-free TTS) paper on the LJSpeech dataset? | Mean Opinion Score |
What metrics were used to measure the Neural HMM Ablation with 1 state per phone model in the Neural HMMs are all you need (for high-quality attention-free TTS) paper on the LJSpeech dataset? | Mean Opinion Score |
What metrics were used to measure the Tacotron 2 model in the Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the WaveNet (Linguistic) model in the Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the WaveNet (L+F) model in the WaveNet: A Generative Model for Raw Audio paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the Tacotron model in the Tacotron: Towards End-to-End Speech Synthesis paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the HMM-driven concatenative model in the WaveNet: A Generative Model for Raw Audio paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the LSTM-RNN parametric model in the WaveNet: A Generative Model for Raw Audio paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the means model in the Merging $K$-means with hierarchical clustering for identifying general-shaped groups paper on the North American English dataset? | Mean Opinion Score |
What metrics were used to measure the BigVSAN (10M steps, provided at the official repository) model in the BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the BigVSAN (w/ snakebeta) model in the BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the BigVSAN model in the BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the BigVGAN model in the BigVGAN: A Universal Neural Vocoder with Large-Scale Training paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the Vocos model in the Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the BigVGAN-base model in the BigVGAN: A Universal Neural Vocoder with Large-Scale Training paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the WaveGlow model in the WaveGlow: A Flow-based Generative Network for Speech Synthesis paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the WaveFlow model in the WaveFlow: A Compact Flow-based Model for Raw Audio paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the HiFi-GAN model in the HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the SC-WaveRNN model in the Speaker Conditional WaveRNN: Towards Universal Neural Vocoder for Unseen Speaker and Recording Conditions paper on the LibriTTS dataset? | PESQ, M-STFT, MCD, Periodicity, V/UV F1 |
What metrics were used to measure the ATHENA (roberta-large) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the SVAMP (1:N) dataset? | Execution Accuracy |
What metrics were used to measure the ATHENA (roberta-base) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the SVAMP (1:N) dataset? | Execution Accuracy |
What metrics were used to measure the MixedSP model in the Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the GEO model in the Generating Equation by Utilizing Operators : GEO model paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the EPT model in the Point to the Expression: Solving Algebraic Word Problems using the Expression-Pointer Transformer Model paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the ZDC model in the Deep Neural Solver for Math Word Problems paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the liblinear-QP model in the Learn to Solve Algebra Word Problems Using Quadratic Programming paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the EPT model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the SIM model in the How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the ALLEQ model in the Learning to Automatically Solve Algebra Word Problems paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the EPT-X model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the SAU-Solver model in the Semantically-Aligned Universal Tree-Structured Solver for Math Word Problems paper on the ALG514 dataset? | Accuracy (%) |
What metrics were used to measure the Process Supervision (GPT-4) model in the Let's Verify Step by Step paper on the MATH minival dataset? | Accuracy |
What metrics were used to measure the Multi-view* (ours) model in the Multi-View Reasoning: Consistent Contrastive Learning for Math Word Problem paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the Generate and Rank model in the Generate & Rank: A Multi-task Framework for Math Word Problems paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the Exp-Tree model in the An Expression Tree Decoding Strategy for Mathematical Equation Generation paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the REAL2: Memory-augmented Solver model in the paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the Roberta-DeductReasoner model in the Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the MWP-BERT model in the MWP-BERT: Numeracy-Augmented Pre-training for Math Word Problem Solving paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the Recall and Learn model in the Recall and Learn: A Memory-augmented Solver for Math Word Problems paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the RoBERTaGen model in the MWPToolkit: An Open-Source Framework for Deep Learning-Based Math Word Problem Solvers paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the GTS w/ Data Augmentation model in the paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the Graph2Tree model in the Graph-to-Tree Learning for Solving Math Word Problems paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the SAU-Solver model in the Semantically-Aligned Universal Tree-Structured Solver for Math Word Problems paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the GTS model in the A Goal-Driven Tree-Structured Neural Model for Math Word Problems paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the GROUP-ATT model in the Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the Hybrid model w/ SNI model in the Deep Neural Solver for Math Word Problems paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the ATHENA (roberta-large) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the ATHENA (roberta-base) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the T-RNN model in the paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the LBF model in the Learning by Fixing: Solving Math Word Problems with Weak Supervision paper on the Math23K dataset? | Accuracy (5-fold), Accuracy (training-test), weakly-supervised |
What metrics were used to measure the MsAT-DeductReasoner model in the Learning Multi-Step Reasoning by Solving Arithmetic Tasks paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the ATHENA (roberta-large) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Multi-view* (ours) model in the Multi-View Reasoning: Consistent Contrastive Learning for Math Word Problem paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Exp-Tree model in the An Expression Tree Decoding Strategy for Mathematical Equation Generation paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the ATHENA (roberta-base) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Roberta-DeductReasoner model in the Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the EPT model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Graph2Tree with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the GTS with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the GEO model in the Generating Equation by Utilizing Operators : GEO model paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the EPT-X model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the EPT model in the Point to the Expression: Solving Algebraic Word Problems using the Expression-Pointer Transformer Model paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Graph2Tree model in the Graph-to-Tree Learning for Solving Math Word Problems paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the LLaMA 2-Chat model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Toolformer model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the GPT-3 (175B) model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the Toolformer (disabled) model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the GPT-J model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the GPT-J + CC model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the OPT (66B) model in the Toolformer: Language Models Can Teach Themselves to Use Tools paper on the MAWPS dataset? | Accuracy (%) |
What metrics were used to measure the ATHENA (roberta-large) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the ASDiv-A dataset? | Execution Accuracy |
What metrics were used to measure the ATHENA (roberta-base) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the ASDiv-A dataset? | Execution Accuracy |
What metrics were used to measure the Graph2Tree with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the ASDiv-A dataset? | Execution Accuracy |
What metrics were used to measure the GTS with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the ASDiv-A dataset? | Execution Accuracy |
What metrics were used to measure the LSTM Seq2Seq with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the ASDiv-A dataset? | Execution Accuracy |
What metrics were used to measure the EPT model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the DRAW-1K dataset? | Accuracy (%) |
What metrics were used to measure the GEO model in the Generating Equation by Utilizing Operators : GEO model paper on the DRAW-1K dataset? | Accuracy (%) |
What metrics were used to measure the EPT model in the Point to the Expression: Solving Algebraic Word Problems using the Expression-Pointer Transformer Model paper on the DRAW-1K dataset? | Accuracy (%) |
What metrics were used to measure the MixedSP model in the Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems paper on the DRAW-1K dataset? | Accuracy (%) |
What metrics were used to measure the EPT-X model in the EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers paper on the DRAW-1K dataset? | Accuracy (%) |
What metrics were used to measure the ELASTIC (RoBERTa-large) model in the ELASTIC: Numerical Reasoning with Adaptive Symbolic Compiler paper on the MathQA dataset? | Answer Accuracy |
What metrics were used to measure the Exp-Tree model in the An Expression Tree Decoding Strategy for Mathematical Equation Generation paper on the MathQA dataset? | Answer Accuracy |
What metrics were used to measure the Multi-view* (ours) model in the Multi-View Reasoning: Consistent Contrastive Learning for Math Word Problem paper on the MathQA dataset? | Answer Accuracy |
What metrics were used to measure the Roberta-DeductReasoner model in the Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction paper on the MathQA dataset? | Answer Accuracy |
What metrics were used to measure the MWP-BERT model in the MWP-BERT: Numeracy-Augmented Pre-training for Math Word Problem Solving paper on the MathQA dataset? | Answer Accuracy |
What metrics were used to measure the Model Selection (GPT-4) model in the Automatic Model Selection with Large Language Models for Reasoning paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the PHP (GPT-4) model in the Progressive-Hint Prompting Improves Reasoning in Large Language Models paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the Self-Evaluation Guided Decoding (Codex, PAL, multiple reasoning chains, 7-shot gen, 5-shot eval) model in the Self-Evaluation Guided Beam Search for Reasoning paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the LLaMA 2-Chat model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the PaLM (zero-shot, CoT) model in the Large Language Models are Zero-Shot Reasoners paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the PaLM (zero-shot) model in the Large Language Models are Zero-Shot Reasoners paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the ATHENA (roberta-large) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the MsAT-DeductReasoner model in the Learning Multi-Step Reasoning by Solving Arithmetic Tasks paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the Roberta-DeductReasoner model in the Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the ATHENA (roberta-base) model in the ATHENA: Mathematical Reasoning with Thought Expansion paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the Graph2Tree with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the SVAMP dataset? | Execution Accuracy, Accuracy |
What metrics were used to measure the GTS with RoBERTa model in the Are NLP Models really able to Solve Simple Math Word Problems? paper on the SVAMP dataset? | Execution Accuracy, Accuracy |