prompts | metrics_response |
|---|---|
What metrics were used to measure the XLSR53 Wav2Vec2 Portuguese by Orlem Santos model in the XLSR53 Wav2Vec2 Portuguese by Orlem Santos paper on the Common Voice Portuguese dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the Conformer/Transformer-AED model in the GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio paper on the GigaSpeech DEV dataset? | Word Error Rate (WER) |
What metrics were used to measure the ReVISE (bf) model in the ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement paper on the EasyCom dataset? | WER (%) |
What metrics were used to measure the ReVISE (ch2) model in the ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement paper on the EasyCom dataset? | WER (%) |
What metrics were used to measure the DAJA (MVDR,HMA,1000) (Overlapped Speech) model in the Direction-Aware Joint Adaptation of Neural Speech Enhancement and Recognition in Real Multiparty Conversational Environments paper on the EasyCom dataset? | WER (%) |
What metrics were used to measure the Demucs (bf) model in the ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement paper on the EasyCom dataset? | WER (%) |
What metrics were used to measure the Demucs (ch2) model in the ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement paper on the EasyCom dataset? | WER (%) |
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the Tedlium dataset? | Word Error Rate (WER) |
What metrics were used to measure the Liquid-S4 model in the Liquid Structural State-Space Models paper on the Speech Commands dataset? | Accuracy (%) |
What metrics were used to measure the S4 model in the Efficiently Modeling Long Sequences with Structured State Spaces paper on the Speech Commands dataset? | Accuracy (%) |
What metrics were used to measure the Conformer/Transformer-AED model in the GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio paper on the GigaSpeech TEST dataset? | Word Error Rate (WER) |
What metrics were used to measure the TDT 0-4 model in the Efficient Sequence Transduction by Jointly Predicting Tokens and Durations paper on the facebook/multilingual_librispeech german dataset? | WER |
What metrics were used to measure the TDT 0-2 model in the Efficient Sequence Transduction by Jointly Predicting Tokens and Durations paper on the CALLHOME Spanish Speech dataset? | WER |
What metrics were used to measure the CTC-CRF model in the CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency paper on the Hub5'00 FISHER-SWBD dataset? | Word Error Rate (WER) |
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the Switchboard CallHome dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer + Wav2vec 2.0 + SpecAugment-based Noisy Student Training with Libri-Light model in the Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the w2v-BERT XXL model in the W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer + wav2vec2.0 + pseudo labeling model in the Self-training and Pre-training are Complementary for Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet + SpecAugment-based Noisy Student Training with Libri-Light model in the Improved Noisy Student Training for Automatic Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the SpeechStew (1B) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Multistream CNN with Self-Attentive SRU (WER includes text normalization) model in the ASAPP-ASR: Multistream CNN and Self-Attentive SRU for SOTA Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the wav2vec 2.0 with Libri-Light model in the wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the HuBERT with Libri-Light model in the HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the E-Branchformer (L) + Internal Language Model Estimation model in the E-Branchformer: Branchformer with Enhanced merging for speech recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet(L) model in the ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer(L) model in the Conformer: Convolution-augmented Transformer for Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Transformer+Time reduction+Self Knowledge distillation model in the Transformer-based ASR Incorporating Time-reduction Layer and Fine-tuning with Self-Knowledge Distillation paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet(M) model in the ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Transformer Transducer model in the Improving RNN Transducer Based ASR with Auxiliary Tasks paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer(M) model in the Conformer: Convolution-augmented Transformer for Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer AM + Pseudo-Labeling (ConvLM with Transformer Rescoring) model in the End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer AM + Iterative Pseudo-Labeling (n-gram LM + Transformer Rescoring) model in the Iterative Pseudo-Labeling for Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC + Transformer LM rescoring model in the Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer(S) model in the Conformer: Convolution-augmented Transformer for Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Multi-Stream Self-Attention With Dilated 1D Convolutions model in the State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention With Dilated 1D Convolutions paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the LSTM Transducer model in the Librispeech Transducer Model with Internal Language Model Prior Correction paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Hybrid + Transformer LM rescoring model in the Transformer-based Acoustic Modeling for Hybrid Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Hybrid model with Transformer rescoring model in the RWTH ASR Systems for LibriSpeech: Hybrid vs Attention -- w/o Data Augmentation paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet(S) model in the ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer AM (ConvLM with Transformer Rescoring) (LS only) model in the End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Squeezeformer (L) model in the Squeezeformer: An Efficient Transformer for Automatic Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the LAS + SpecAugment model in the SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Transformer model in the A Comparative Study on Transformer vs RNN in Speech Applications paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the QuartzNet15x5 model in the QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the LAS (no LM) model in the SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the wav2vec_wav2letter model in the Self-training and Pre-training are Complementary for Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Espresso model in the Espresso: A Fast End-to-end Neural Speech Recognition Toolkit paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Jasper DR 10x5 (+ Time/Freq Masks) model in the Jasper: An End-to-End Convolutional Neural Acoustic Model paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Jasper DR 10x5 model in the Jasper: An End-to-End Convolutional Neural Acoustic Model paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the tdnn + chain + rnnlm rescoring model in the Neural Network Language Modeling with Letter-based Features and Importance Sampling paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Convolutional Speech Recognition model in the Fully Convolutional Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Model Unit Exploration model in the On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Seq-to-seq attention model in the Improved training of end-to-end attention models for speech recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC-CRF 4gram-LM model in the CRF-based Single-stage Acoustic Modeling with CTC Topology paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations model in the paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the HMM-TDNN + iVectors model in the paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Gated ConvNets model in the Letter-Based Speech Recognition with Gated ConvNets paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Deep Speech 2 model in the Deep Speech 2: End-to-End Speech Recognition in English and Mandarin paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC + policy learning model in the Improving End-to-End Speech Recognition with Policy Learning paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the HMM-DNN + pNorm* model in the paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Li-GRU model in the The PyTorch-Kaldi Speech Recognition Toolkit paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Snips model in the Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the Local Prior Matching (Large Model) model in the Semi-Supervised Speech Recognition via Local Prior Matching paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the HMM-(SAT)GMM model in the paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the AmNet model in the Amortized Neural Networks for Low-Latency Speech Recognition paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the model in the Libri-Light: A Benchmark for ASR with Limited or No Supervision paper on the LibriSpeech test-clean dataset? | Word Error Rate (WER) |
What metrics were used to measure the ImportantAug model in the ImportantAug: a data augmentation agent for speech paper on the Google Speech Commands - Musan dataset? | Error rate - SNR 0dB |
What metrics were used to measure the Paraformer-large model in the FunASR: A Fundamental End-to-End Speech Recognition Toolkit paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the Conformer-MoE (64e) model in the 3M: Multi-loss, Multi-path and Multi-level Neural Networks for speech recognition paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the Conformer-MoE (32e) model in the 3M: Multi-loss, Multi-path and Multi-level Neural Networks for speech recognition paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the Conformer-MoE (16e) model in the 3M: Multi-loss, Multi-path and Multi-level Neural Networks for speech recognition paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the Wenet model in the WenetSpeech: A 10000+ Hours Multi-domain Mandarin Corpus for Speech Recognition paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the Kaldi model in the WenetSpeech: A 10000+ Hours Multi-domain Mandarin Corpus for Speech Recognition paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the Espnet model in the WenetSpeech: A 10000+ Hours Multi-domain Mandarin Corpus for Speech Recognition paper on the WenetSpeech dataset? | Character Error Rate (CER) |
What metrics were used to measure the AV-HuBERT Large model in the Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the RAVEn Large model in the Jointly Learning Visual and Auditory Speech Representations from Raw Data paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the ConformerCTC-L (4-gram) model in the NeMo: a toolkit for building AI applications using Neural Modules paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the Whisper (Large v2) model in the Robust Speech Recognition via Large-Scale Weak Supervision paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the ConformerCTC-L (5-gram) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the ConformerCTC-L (no LM) model in the NeMo: a toolkit for building AI applications using Neural Modules paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the ConformerCTC-L (no-LM) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the QuartzNet15x5ES (D8) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the VoxPopuli-50K (n-gram) model in the VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the QuartzNet15x5ES (CV-only) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the Common Voice Spanish dataset? | Test WER, Test CER, Test CER (+LM), Test WER (+LM) |
What metrics were used to measure the wav2vec2-base-vietnamese-160h (No Language Model) model in the Wav2vec2 Base Vietnamese 160h paper on the Common Voice vi dataset? | Test WER |
What metrics were used to measure the Vietnamese end-to-end speech recognition using wav2vec 2.0 by VietAI model in the Vietnamese end-to-end speech recognition using wav2vec 2.0 paper on the Common Voice vi dataset? | Test WER |
What metrics were used to measure the W2V2-L-LL60K (+ TED-LIUM 3 LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the W2V2-B-LS960 (+ TED-LIUM 3 LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the W2V2-L-LL60K (+ in-domain LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the W2V2-L-LL60K model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the W2V2-B-LS960 (+ in-domain LM) model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the W2V2-B-LS960 model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the HuBERT-B-LS960 model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the W2V2-B-VP100K model in the SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech paper on the SLUE dataset? | VoxPopuli (Dev), VoxPopuli (Test), VoxCeleb (Dev), VoxCeleb (Test) |
What metrics were used to measure the SpeechStew (1B) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the Common Voice dataset? | Test WER |
What metrics were used to measure the wav2vec2-large-xls-r-1b-frisian model in the Improving the previous state-of-the-art Frisian ASR by fine-tuning XLS-R paper on the Common Voice Frisian dataset? | Test WER |
What metrics were used to measure the Icefall - zipformer transducer model in the paper on the SPGISpeech dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer model in the SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition paper on the SPGISpeech dataset? | Word Error Rate (WER) |
What metrics were used to measure the Whisper (Large v2) model in the Robust Speech Recognition via Large-Scale Weak Supervision paper on the Common Voice English dataset? | Word Error Rate (WER), Test CER, Test CER (+LM), Test WER, Test WER (+LM) |
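The table above can also be consumed programmatically. Below is a minimal sketch for parsing it into prompt/response records; the file name `asr_metrics_table.txt` is a hypothetical placeholder for wherever the table is saved, and the parser assumes the pipe-delimited layout shown above.

```python
# Minimal sketch: parse the pipe-delimited prompt/response table above.
# The file name "asr_metrics_table.txt" is a hypothetical placeholder.
from pathlib import Path

rows = []
for line in Path("asr_metrics_table.txt").read_text().splitlines():
    stripped = line.strip()
    # Skip blank lines, the header row, and the |---|---| separator row.
    if not stripped or stripped.startswith("prompts") or set(stripped) <= {"|", "-"}:
        continue
    parts = [p.strip() for p in stripped.rstrip("|").split("|")]
    if len(parts) == 2:
        rows.append({"prompt": parts[0], "metrics_response": parts[1]})

# Example query: count rows whose response reports a Character Error Rate.
cer_rows = [r for r in rows if "CER" in r["metrics_response"]]
print(f"{len(cer_rows)} of {len(rows)} rows report CER")
```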