Columns: prompts (string, 81–413 chars), metrics_response (string, 0–371 chars)
What metrics were used to measure the Deep Speech 2 model in the Deep Speech 2: End-to-End Speech Recognition in English and Mandarin paper on the WSJ eval93 dataset?
Word Error Rate (WER)
What metrics were used to measure the CTC-CRF 4gram-LM model in the CRF-based Single-stage Acoustic Modeling with CTC Topology paper on the WSJ eval93 dataset?
Word Error Rate (WER)
What metrics were used to measure the Convolutional Speech Recognition model in the Fully Convolutional Speech Recognition paper on the WSJ eval93 dataset?
Word Error Rate (WER)
What metrics were used to measure the Espresso model in the Espresso: A Fast End-to-end Neural Speech Recognition Toolkit paper on the Hub5'00 CallHome dataset?
Word Error Rate (WER)
What metrics were used to measure the LAS + SpecAugment (with LM, Switchboard mild policy) model in the SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition paper on the Hub5'00 SwitchBoard dataset?
SwitchBoard, CallHome, Eval2000, Hub5'00
What metrics were used to measure the LAS + SpecAugment (with LM, Switchboard strong policy) model in the SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition paper on the Hub5'00 SwitchBoard dataset?
SwitchBoard, CallHome, Eval2000, Hub5'00
What metrics were used to measure the Jasper DR 10x5 model in the Jasper: An End-to-End Convolutional Neural Acoustic Model paper on the Hub5'00 SwitchBoard dataset?
SwitchBoard, CallHome, Eval2000, Hub5'00
What metrics were used to measure the CTC-CRF model in the CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency paper on the Hub5'00 SwitchBoard dataset?
SwitchBoard, CallHome, Eval2000, Hub5'00
What metrics were used to measure the Espresso model in the Espresso: A Fast End-to-end Neural Speech Recognition Toolkit paper on the Hub5'00 SwitchBoard dataset?
SwitchBoard, CallHome, Eval2000, Hub5'00
What metrics were used to measure the wav2vec 2.0 XLS-R 1B + TEVR (5-gram) model in the TEVR: Improving Speech Recognition by Token Entropy Variance Reduction paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the wav2vec 2.0 XLS-R 1B + TEVR (4-gram) model in the TEVR: Improving Speech Recognition by Token Entropy Variance Reduction paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the ConformerCTC-L (5-gram) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the wav2vec 2.0 XLS-R 1B (5-gram) model in the TEVR: Improving Speech Recognition by Token Entropy Variance Reduction paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the ConformerCTC-L (4-gram) model in the NeMo: a toolkit for building AI applications using Neural Modules paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the Conformer Transducer (no LM) model in the Automatic Speech Recognition in German: A Detailed Error Analysis paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the Whisper (Large v2) model in the Robust Speech Recognition via Large-Scale Weak Supervision paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the QuartzNet15x5DE (D37, 5-gram) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the ConformerCTC-L (no LM) model in the NeMo: a toolkit for building AI applications using Neural Modules paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the ConformerCTC-L (no LM) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the QuartzNet15x5DE (CV-only, 5-gram) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the VoxPopuli (n-gram) model in the VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the wav2vec 2.0 XLS-R 1B + TEVR (no LM) model in the TEVR: Improving Speech Recognition by Token Entropy Variance Reduction paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the wav2vec 2.0 XLS-R (no LM) model in the TEVR: Improving Speech Recognition by Token Entropy Variance Reduction paper on the Common Voice German dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the Whisper (Large v2) model in the Robust Speech Recognition via Large-Scale Weak Supervision paper on the Common Voice Italian dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the QuartzNet15x5IT (D5) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice Italian dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM)
What metrics were used to measure the ConformerCTC-L (5-gram) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the ConformerCTC-L (4-gram) model in the NeMo: a toolkit for building AI applications using Neural Modules paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the VoxPopuli-50K (n-gram) model in the VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the ConformerCTC-L (no-LM) model in the NeMo: a toolkit for building AI applications using Neural Modules paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the ConformerCTC-L (no-LM) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the QuartzNet15x5FR (D7) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the QuartzNet15x5FR (CV-only) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the Whisper (Large v2) model in the Robust Speech Recognition via Large-Scale Weak Supervision paper on the Common Voice French dataset?
Test WER, Test CER, Test CER (+LM), Test WER (+LM), WER
What metrics were used to measure the BiLSTM-LAN model in the Hierarchically-Refined Label Attention Network for Sequence Labeling paper on the UD dataset?
Avg accuracy
What metrics were used to measure the Adversarial Bi-LSTM model in the Robust Multilingual Part-of-Speech Tagging via Adversarial Training paper on the UD dataset?
Avg accuracy
What metrics were used to measure the MultiBPEmb model in the Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation paper on the UD dataset?
Avg accuracy
What metrics were used to measure the Bi-LSTM model in the Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss paper on the UD dataset?
Avg accuracy
What metrics were used to measure the Joint Bi-LSTM model in the A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Parsing paper on the UD dataset?
Avg accuracy
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the ARK dataset?
Acc
What metrics were used to measure the Owoputi et al., 2013 model in the Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters paper on the ARK dataset?
Acc
What metrics were used to measure the Gui et al., 2018 model in the Transferring from Formal Newswire Domain with Hypernet for Twitter POS Tagging paper on the ARK dataset?
Acc
What metrics were used to measure the da_dacy_large_tft-0.0.0 model in the DaCy: A Unified Framework for Danish NLP paper on the DaNE dataset?
Accuracy (%)
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the Ritter dataset?
Acc
What metrics were used to measure the Gui et al., 2018 model in the Transferring from Formal Newswire Domain with Hypernet for Twitter POS Tagging paper on the Ritter dataset?
Acc
What metrics were used to measure the Gui et al., 2017 model in the Part-of-Speech Tagging for Twitter with Adversarial Neural Networks paper on the Ritter dataset?
Acc
What metrics were used to measure the BERTweet model in the BERTweet: A pre-trained language model for English Tweets paper on the Ritter dataset?
Acc
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the Spoken Corpus dataset?
UPOS
What metrics were used to measure the Bi-LSTM-CRF + Flair Embeddings + CamemBERT (oscar−138gb−base) Embeddings model in the ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus paper on the ANTILLES dataset?
Weighted Average F1-score
What metrics were used to measure the SALE-BART encoder model in the Sequence Alignment Ensemble with a Single Neural Network for Sequence Labeling paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Meta BiLSTM model in the Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Flair embeddings model in the Contextual String Embeddings for Sequence Labeling paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Char Bi-LSTM model in the Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the CVT + Multi-task model in the Semi-Supervised Sequence Modeling with Cross-View Training paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the SpanRel model in the Generalizing Natural Language Analysis through Span-relation Representations paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the BiLSTM-LAN model in the Hierarchically-Refined Label Attention Network for Sequence Labeling paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Adversarial Bi-LSTM model in the Robust Multilingual Part-of-Speech Tagging via Adversarial Training paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the IntNet + BiLSTM-CRF model in the Learning Better Internal Structure of Words for Sequence Labeling paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Yang et al. model in the Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the BLSTM-CNN-CRF model in the End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the S-LSTM model in the Sentence-State LSTM for Text Representation paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the LM-LSTM-CRF model in the Empower Sequence Labeling with Task-Aware Neural Language Model paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the NCRF++ model in the NCRF++: An Open-source Neural Sequence Labeling Toolkit paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Bi-LSTM + LMcost model in the Semi-supervised Multitask Learning for Sequence Labeling paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Feed Forward model in the Supertagging With LSTMs paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Bi-LSTM model in the Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Bi-LSTM + charattn model in the Attending to Characters in Neural Sequence Labeling Models paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the Bi-LSTM model in the Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the SALE model in the Sequential Alignment Methods for Ensemble Part-of-Speech Tagging paper on the Penn Treebank dataset?
Accuracy, CoNLL F1
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the French GSD dataset?
UPOS
What metrics were used to measure the Trankit model in the Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing paper on the UD2.5 test dataset?
Macro-averaged F1
What metrics were used to measure the Stanza model in the Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing paper on the UD2.5 test dataset?
Macro-averaged F1
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the Sequoia Treebank dataset?
UPOS
What metrics were used to measure the MyBert model in the Towards Deep Learning Models Resistant to Adversarial Attacks paper on the Morphosyntactic-analysis-dataset dataset?
BLEX
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the Tweebank dataset?
Acc
What metrics were used to measure the BERTweet model in the BERTweet: A pre-trained language model for English Tweets paper on the Tweebank dataset?
Acc
What metrics were used to measure the Gui et al., 2017 model in the Part-of-Speech Tagging for Twitter with Adversarial Neural Networks paper on the Tweebank dataset?
Acc
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the ParTUT dataset?
UPOS
What metrics were used to measure the mGPT model in the mGPT: Few-Shot Learners Go Multilingual paper on the XGLUE dataset?
Avg. F1
What metrics were used to measure the PretRand model in the Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging paper on the Social media dataset?
Accuracy
What metrics were used to measure the CMU model in the Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters paper on the Social media dataset?
Accuracy
What metrics were used to measure the GATE model in the Twitter Part-of-Speech Tagging for All: Overcoming Sparse and Noisy Data paper on the Social media dataset?
Accuracy
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the Persona-Chat dataset?
BLEU-1, BLEU-2, Distinct-1, Distinct-2
What metrics were used to measure the Tacotron 2 model in the Neural Speech Synthesis in German paper on the Thorsten voice 21.02 neutral dataset?
Mean Opinion Score
What metrics were used to measure the Token-Level Ensemble Distillation model in the Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion paper on the CMUDict 0.7b dataset?
Phoneme Error Rate, Word Error Rate (WER)
What metrics were used to measure the Tacotron 2 model in the Neural Speech Synthesis in German paper on the HUI speech corpus dataset?
Mean Opinion Score
What metrics were used to measure the NaturalSpeech model in the NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the VITS model in the NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Grad-TTS + HiFiGAN (1000 steps) model in the Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Glow-TTS + HiFiGAN model in the Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the FastSpeech 2 + HiFiGAN model in the NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the FastSpeech 2 + HiFiGAN model in the FastSpeech 2: Fast and High-Quality End-to-End Text to Speech paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the FastDiff (4 steps) model in the FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the FastDiff-TTS model in the FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Transformer TTS (Mel + WaveGlow) model in the Neural Speech Synthesis with Transformer Network paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the FastSpeech (Mel + WaveGlow) model in the FastSpeech: Fast, Robust and Controllable Text to Speech paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the OverFlow model in the OverFlow: Putting flows on top of neural transducers for better TTS paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Merlin model in the FastSpeech: Fast, Robust and Controllable Text to Speech paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Flowtron model in the Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Tacotron 2 model in the Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
What metrics were used to measure the Matcha-TTS model in the Matcha-TTS: A fast TTS architecture with conditional flow matching paper on the LJSpeech dataset?
Audio Quality MOS, Pleasantness MOS, Word Error Rate (WER), MOS, WER (%)
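Word Error Rate (WER) is the metric cited most often in the responses above. As a minimal illustrative sketch (not taken from any of the cited papers), WER is the word-level Levenshtein edit distance between a reference transcript and a hypothesis, divided by the number of reference words; the `wer` function name and signature below are this sketch's own:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length.

    Counts substitutions, deletions, and insertions between the
    whitespace-tokenized reference and hypothesis.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions turn ref[:i] into an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions turn an empty reference into hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

The Character Error Rate (CER) that appears in the Common Voice rows is the same computation applied to characters instead of whitespace-split words.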