Columns: prompts (string, lengths 81–413), metrics_response (string, lengths 0–371)
What metrics were used to measure the mllp_2021_streaming_filt model in the Europarl-ASR: A Large Corpus of Parliamentary Debates for Streaming ASR Benchmarking and Speech Data Filtering/Verbatimization paper on the Europarl-ASR EN MEP-test dataset?
WER
What metrics were used to measure the mllp_2021_offline_verb model in the Europarl-ASR: A Large Corpus of Parliamentary Debates for Streaming ASR Benchmarking and Speech Data Filtering/Verbatimization paper on the Europarl-ASR EN Guest-test dataset?
WER
What metrics were used to measure the mllp_2021_streaming_verb model in the Europarl-ASR: A Large Corpus of Parliamentary Debates for Streaming ASR Benchmarking and Speech Data Filtering/Verbatimization paper on the Europarl-ASR EN Guest-test dataset?
WER
What metrics were used to measure the wav2vec_wav2letter model in the Self-training and Pre-training are Complementary for Speech Recognition paper on the LibriSpeech train-clean-100 test-clean dataset?
Word Error Rate (WER)
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the AMI SDM1 dataset?
Word Error Rate (WER)
What metrics were used to measure the Vietnamese end-to-end speech recognition using wav2vec 2.0 by VietAI model in the Vietnamese end-to-end speech recognition using wav2vec 2.0 paper on the VIVOS dataset?
Test WER
What metrics were used to measure the wav2vec2-base-vietnamese-160h (No Language Model) model in the Wav2vec2 Base Vietnamese 160h paper on the VIVOS dataset?
Test WER
What metrics were used to measure the End-to-end LF-MMI model in the End-to-end speech recognition using lattice-free MMI paper on the Switchboard (300hr) dataset?
Word Error Rate (WER)
What metrics were used to measure the Paraformer-large model in the FunASR: A Fundamental End-to-End Speech Recognition Toolkit paper on the AISHELL-2 dataset?
Word Error Rate (WER)
What metrics were used to measure the Paraformer model in the FunASR: A Fundamental End-to-End Speech Recognition Toolkit paper on the AISHELL-2 dataset?
Word Error Rate (WER)
What metrics were used to measure the Quartznet model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the Wit model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the Azure model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the VOSK model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the Google model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the wav2vec model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the Deepspeech model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the Silero model in the MediaSpeech: Multilanguage ASR Benchmark and Dataset paper on the MediaSpeech dataset?
WER for Arabic, WER for French, WER for Spanish, WER for Turkish
What metrics were used to measure the IBM (LSTM+Conformer encoder-decoder) model in the On the limit of English conversational speech recognition paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the IBM (LSTM encoder-decoder) model in the Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the ResNet + BiLSTMs acoustic model model in the English Conversational Telephone Speech Recognition by Humans and Machines paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the VGG/Resnet/LACE/BiLSTM acoustic model trained on SWB+Fisher+CH, N-gram + RNNLM language model trained on Switchboard+Fisher+Gigaword+Broadcast model in the The Microsoft 2016 Conversational Speech Recognition System paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model model in the The IBM 2016 English Conversational Telephone Speech Recognition System paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the HMM-BLSTM trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher model in the paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher (10% / 15.1% respectively trained on SWBD only) model in the paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the CNN + Bi-RNN + CTC (speech to letters), 25.9% WER if trained only on SWB model in the Deep Speech: Scaling up end-to-end speech recognition paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the HMM-TDNN + iVectors model in the paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the HMM-DNN +sMBR model in the paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the DNN + Dropout model in the Building DNN Acoustic Models for Large Vocabulary Speech Recognition paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the HMM-TDNN + pNorm + speed up/down speech model in the paper on the swb_hub_500 WER fullSWBCH dataset?
Percentage error
What metrics were used to measure the MMSpeech With LM model in the MMSpeech: Multi-modal Multi-task Encoder-Decoder Pre-training for Speech Recognition paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the Paraformer-large model in the FunASR: A Fundamental End-to-End Speech Recognition Toolkit paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the SE-WSBO With LM model in the Improving Mandarin Speech Recognition with Block-augmented Transformer paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the UMA model in the Unimodal Aggregation for CTC-based Speech Recognition paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the U2 model in the Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the CTC/Att model in the Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the Paraformer model in the FunASR: A Fundamental End-to-End Speech Recognition Toolkit paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the BAT model in the BAT: Boundary aware transducer for memory-efficient and low-latency ASR paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the CTC-CRF 4gram-LM model in the CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the BRA-E model in the Beyond Universal Transformer: block reusing with adaptor in Transformer for automatic speech recognition paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the CTC/Att model in the A Comparative Study on Transformer vs RNN in Speech Applications paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the Att model in the End-to-end Speech Recognition with Adaptive Computation Steps paper on the AISHELL-1 dataset?
Word Error Rate (WER), Params(M)
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the AMI IHM dataset?
Word Error Rate (WER)
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the Switchboard SWBD dataset?
Word Error Rate (WER)
What metrics were used to measure the wav2vec 2.0 Large-10h-LV-60k model in the wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations paper on the Libri-Light test-other dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the TDS 60k pseudo-label + CTC fine-tuning + 4gram-LM model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the Libri-Light test-other dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the CPC unlab-60k+train-10h CPC pretrain + CTC fine-tuning + 4gram-LM model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the Libri-Light test-other dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the CPC unlab-60k model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the Libri-Light test-other dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the S6000h-n42-τ2 → 0.1 model in the Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization paper on the Libri-Light test-other dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the wav2vec_wav2letter model in the Self-training and Pre-training are Complementary for Speech Recognition paper on the LibriSpeech train-clean-100 test-other dataset?
Word Error Rate (WER)
What metrics were used to measure the SpeechStew (1B) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the CHiME-6 eval dataset?
Word Error Rate (WER)
What metrics were used to measure the Triphone (39 features) + LDA and MLLT + SGMM model in the First Automatic Fongbe Continuous Speech Recognition System: Development of Acoustic Models and Language Models paper on the Fongbe audio dataset?
Word Error Rate (WER)
What metrics were used to measure the Triphone (39 features) + LDA and MLLT + SAT and FMLLR model in the First Automatic Fongbe Continuous Speech Recognition System: Development of Acoustic Models and Language Models paper on the Fongbe audio dataset?
Word Error Rate (WER)
What metrics were used to measure the Triphone (13 MFCC + delta + delta2) model in the First Automatic Fongbe Continuous Speech Recognition System: Development of Acoustic Models and Language Models paper on the Fongbe audio dataset?
Word Error Rate (WER)
What metrics were used to measure the CTC-CRF ST-NAS model in the Efficient Neural Architecture Search for End-to-end Speech Recognition via Straight-Through Gradients paper on the WSJ dev93 dataset?
Word Error Rate (WER)
What metrics were used to measure the CTC-CRF VGG-BLSTM model in the CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency paper on the WSJ dev93 dataset?
Word Error Rate (WER)
What metrics were used to measure the Convolutional Speech Recognition model in the CRF-based Single-stage Acoustic Modeling with CTC Topology paper on the WSJ dev93 dataset?
Word Error Rate (WER)
What metrics were used to measure the Convolutional Speech Recognition model in the Fully Convolutional Speech Recognition paper on the WSJ dev93 dataset?
Word Error Rate (WER)
What metrics were used to measure the RAVEn Large model in the Jointly Learning Visual and Auditory Speech Representations from Raw Data paper on the LRS2 dataset?
Word Error Rate (WER)
What metrics were used to measure the Speechstew 100M model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the tdnn + chain model in the Purely sequence-trained neural networks for ASR based on lattice-free MMI paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the CTC-CRF ST-NAS model in the Efficient Neural Architecture Search for End-to-end Speech Recognition via Straight-Through Gradients paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the End-to-end LF-MMI model in the End-to-end speech recognition using lattice-free MMI paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the Transformer with Relaxed Attention model in the Relaxed Attention: A Simple Method to Boost Performance of End-to-End Automatic Speech Recognition paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the CTC-CRF VGG-BLSTM model in the CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the Espresso model in the Espresso: A Fast End-to-end Neural Speech Recognition Toolkit paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the TC-DNN-BLSTM-DNN model in the Deep Recurrent Neural Networks for Acoustic Modelling paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the Convolutional Speech Recognition model in the Fully Convolutional Speech Recognition paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the test-set on open vocabulary (i.e. harder), model = HMM-DNN + pNorm* model in the paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the Deep Speech 2 model in the Deep Speech 2: End-to-End Speech Recognition in English and Mandarin paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the CTC-CRF 4gram-LM model in the CRF-based Single-stage Acoustic Modeling with CTC Topology paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the CNN over RAW speech (wav) model in the paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the Jasper 10x3 model in the Jasper: An End-to-End Convolutional Neural Acoustic Model paper on the WSJ eval92 dataset?
Word Error Rate (WER)
What metrics were used to measure the wav2vec 2.0 Large-10h-LV-60k model in the wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations paper on the Libri-Light test-clean dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the TDS 60k pseudo-label + CTC fine-tuning + 4gram-LM model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the Libri-Light test-clean dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the CPC unlab-60k+train-10h CPC pretrain + CTC fine-tuning + 4gram-LM model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the Libri-Light test-clean dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the CPC unlab-60k model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the Libri-Light test-clean dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the S6000h-n42-τ2 → 0.1 model in the Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization paper on the Libri-Light test-clean dataset?
Word Error Rate (WER), ABX-across, ABX-within
What metrics were used to measure the wav2vec 2.0 model in the wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the vq-wav2vec model in the vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the LiGRU + Dropout + BatchNorm + Monophone Reg model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the LSTM + Dropout + BatchNorm + Monophone Reg model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the wav2vec model in the wav2vec: Unsupervised Pre-training for Speech Recognition paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the GRU + Dropout + BatchNorm + Monophone Reg model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Li-GRU + fMLLR features model in the Light Gated Recurrent Units for Speech Recognition paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the RNN + Dropout + BatchNorm + Monophone Reg model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the LSTM model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Li-GRU model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Hierarchical maxout CNN + Dropout model in the paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the RNN model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the GRU model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the CNN in time and frequency + dropout, 17.6% w/o dropout model in the paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Light Gated Recurrent Units model in the Light Gated Recurrent Units for Speech Recognition paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the RNN-CRF on 24(x3) MFSC model in the Segmental Recurrent Neural Networks for End-to-end Speech Recognition paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Bi-RNN + Attention model in the Attention-Based Models for Speech Recognition paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Bi-LSTM + skip connections w/ CTC model in the Speech Recognition with Deep Recurrent Neural Networks paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the QCNN-10L-256FM model in the Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the Soft Monotonic Attention (ours, offline) model in the Online and Linear-Time Attention by Enforcing Monotonic Alignments paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the LAS multitask with indicators sampling model in the Attention model for articulatory features detection paper on the TIMIT dataset?
Percentage error
What metrics were used to measure the LSNN model in the Long short-term memory and learning-to-learn in networks of spiking neurons paper on the TIMIT dataset?
Percentage error
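Nearly every response above names Word Error Rate (WER) as the evaluation metric. For reference, a minimal sketch of how WER is typically computed — word-level Levenshtein distance between reference and hypothesis, normalized by reference length (the function name `wer` and the whitespace tokenization are illustrative assumptions; published benchmarks usually apply text normalization first):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

The "Percentage error" entries on TIMIT follow the same edit-distance scheme but over phoneme sequences rather than words (phone error rate), so the same function applies once the inputs are phone strings.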