| prompts | metrics_response |
|---|---|
What metrics were used to measure the TranSVAE model in the Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective paper on the HMDB --> UCF (full) dataset? | Accuracy |
What metrics were used to measure the ABG model in the Adversarial Bipartite Graph Learning for Video Domain Adaptation paper on the HMDB --> UCF (full) dataset? | Accuracy |
What metrics were used to measure the TA3N model in the Temporal Attentive Alignment for Large-Scale Video Domain Adaptation paper on the HMDB --> UCF (full) dataset? | Accuracy |
What metrics were used to measure the DRANet model in the DRANet: Disentangling Representation and Adaptation Networks for Unsupervised Cross-Domain Adaptation paper on the MNIST-to-MNIST-M dataset? | Accuracy |
What metrics were used to measure the DeepJDOT model in the DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation paper on the MNIST-to-MNIST-M dataset? | Accuracy |
What metrics were used to measure the DSN (DANN) model in the Domain Separation Networks paper on the MNIST-to-MNIST-M dataset? | Accuracy |
What metrics were used to measure the DANN [ganin2016domain] model in the Domain-Adversarial Training of Neural Networks paper on the MNIST-to-MNIST-M dataset? | Accuracy |
What metrics were used to measure the MMD [tzeng2015ddc]; [long2015learning] model in the Learning Transferable Features with Deep Adaptation Networks paper on the MNIST-to-MNIST-M dataset? | Accuracy |
What metrics were used to measure the ProDA+CRA model in the Cross-Region Domain Adaptation for Class-level Alignment paper on the Synscapes-to-Cityscapes dataset? | mIoU |
What metrics were used to measure the IntraDA model in the Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision paper on the Synscapes-to-Cityscapes dataset? | mIoU |
What metrics were used to measure the AdaptSegNet model in the Learning to Adapt Structured Output Space for Semantic Segmentation paper on the Synscapes-to-Cityscapes dataset? | mIoU |
What metrics were used to measure the Conformer/Transformer-AED model in the GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio paper on the GigaSpeech dataset? | Word Error Rate (WER) |
What metrics were used to measure the w2v-BERT XXL model in the W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer + Wav2vec 2.0 + SpecAugment-based Noisy Student Training with Libri-Light model in the Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the HuBERT with Libri-Light model in the HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer + wav2vec2.0 + pseudo labeling model in the Self-training and Pre-training are Complementary for Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the wav2vec 2.0 with Libri-Light model in the wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the SpeechStew (1B) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet + SpecAugment-based Noisy Student Training with Libri-Light model in the Improved Noisy Student Training for Automatic Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the E-Branchformer (L) + Internal Language Model Estimation model in the E-Branchformer: Branchformer with Enhanced merging for speech recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the data2vec model in the data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer AM + Iterative Pseudo-Labeling (n-gram LM + Transformer Rescoring) model in the Iterative Pseudo-Labeling for Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer(L) model in the Conformer: Convolution-augmented Transformer for Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the SpeechStew (100M) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the wav2vec 2.0 model in the wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet(L) model in the ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer AM (ConvLM with Transformer Rescoring) model in the End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC + Transformer LM rescoring model in the Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Transformer Transducer model in the Improving RNN Transducer Based ASR with Auxiliary Tasks paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer(M) model in the Conformer: Convolution-augmented Transformer for Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Multistream CNN with Self-Attentive SRU model in the ASAPP-ASR: Multistream CNN and Self-Attentive SRU for SOTA Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet(M) model in the ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the hybrid + Transformer LM rescoring model in the Transformer-based Acoustic Modeling for Hybrid Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Hybrid model with Transformer rescoring model in the RWTH ASR Systems for LibriSpeech: Hybrid vs Attention -- w/o Data Augmentation paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer(S) model in the Conformer: Convolution-augmented Transformer for Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv + Transformer AM (ConvLM with Transformer Rescoring) (LS only) model in the End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the ContextNet(S) model in the ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the LSTM Transducer model in the Librispeech Transducer Model with Internal Language Model Prior Correction paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Transformer model in the A Comparative Study on Transformer vs RNN in Speech Applications paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the LAS + SpecAugment model in the SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Multi-Stream Self-Attention With Dilated 1D Convolutions model in the State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention With Dilated 1D Convolutions paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Squeezeformer (L) model in the Squeezeformer: An Efficient Transformer for Automatic Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the LAS (no LM) model in the SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer with Relaxed Attention model in the Relaxed Attention: A Simple Method to Boost Performance of End-to-End Automatic Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the QuartzNet15x5 model in the QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the tdnn + chain + rnnlm rescoring model in the Neural Network Language Modeling with Letter-based Features and Importance Sampling paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Jasper DR 10x5 (+ Time/Freq Masks) model in the Jasper: An End-to-End Convolutional Neural Acoustic Model paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Espresso model in the Espresso: A Fast End-to-end Neural Speech Recognition Toolkit paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Jasper DR 10x5 model in the Jasper: An End-to-End Convolutional Neural Acoustic Model paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Convolutional Speech Recognition model in the Fully Convolutional Speech Recognition paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC-CRF 4gram-LM model in the CRF-based Single-stage Acoustic Modeling with CTC Topology paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the TDNN + pNorm + speed up/down speech model in the paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Deep Speech 2 model in the Deep Speech 2: End-to-End Speech Recognition in English and Mandarin paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Local Prior Matching (Large Model, ConvLM LM) model in the Semi-Supervised Speech Recognition via Local Prior Matching paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Snips model in the Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Local Prior Matching (Large Model) model in the Semi-Supervised Speech Recognition via Local Prior Matching paper on the LibriSpeech test-other dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conformer-Transducer (no LM) model in the Automatic Speech Recognition in German: A Detailed Error Analysis paper on the TUDA dataset? | Test WER |
What metrics were used to measure the TDNN-HMM hybrid, FST (with RNNLM rescoring) model in the Improved Open Source Automatic Subtitling for Lecture Videos paper on the TUDA dataset? | Test WER |
What metrics were used to measure the QuartzNet15x5DE (D37) model in the Scribosermo: Fast Speech-to-Text models for German and other Languages paper on the TUDA dataset? | Test WER |
What metrics were used to measure the IMS-Speech model in the IMS-Speech: A Speech to Text Tool paper on the TUDA dataset? | Test WER |
What metrics were used to measure the Hybrid CTC/Attention model in the CTC-Segmentation of Large Corpora for German End-to-end Speech Recognition paper on the TUDA dataset? | Test WER |
What metrics were used to measure the Kaldi model in the Open Source Automatic Speech Recognition for German paper on the TUDA dataset? | Test WER |
What metrics were used to measure the DeepSpeech-Polyglot model in the paper on the TUDA dataset? | Test WER |
What metrics were used to measure the Kaldi model in the Open Source German Distant Speech Recognition: Corpus and Acoustic Model paper on the TUDA dataset? | Test WER |
What metrics were used to measure the PocketSphinx model in the Open Source German Distant Speech Recognition: Corpus and Acoustic Model paper on the TUDA dataset? | Test WER |
What metrics were used to measure the IBM (LSTM+Conformer encoder-decoder) model in the On the limit of English conversational speech recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the IBM (LSTM encoder-decoder) model in the Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the ResNet + BiLSTMs acoustic model model in the English Conversational Telephone Speech Recognition by Humans and Machines paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the Microsoft 2016b model in the Achieving Human Parity in Conversational Speech Recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the Microsoft 2016 model in the The Microsoft 2016 Conversational Speech Recognition System paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the VGG/Resnet/LACE/BiLSTM acoustic model trained on SWB+Fisher+CH, N-gram + RNNLM language model trained on Switchboard+Fisher+Gigaword+Broadcast model in the The Microsoft 2016 Conversational Speech Recognition System paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model model in the The IBM 2016 English Conversational Telephone Speech Recognition System paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the CNN-LSTM model in the Achieving Human Parity in Conversational Speech Recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the IBM 2016 model in the The IBM 2016 English Conversational Telephone Speech Recognition System paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the RNNLM model in the The Microsoft 2016 Conversational Speech Recognition System paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the IBM 2015 model in the The IBM 2015 English Conversational Telephone Speech Recognition System paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the HMM-BLSTM trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher (10% / 15.1% respectively trained on SWBD only) model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the CNN on MFSC/fbanks + 1 non-conv layer for FMLLR/I-Vectors concatenated in a DNN model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the HMM-TDNN + iVectors model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the CNN model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the Deep CNN (10 conv, 4 FC layers), multi-scale feature maps model in the Very Deep Multilingual Convolutional Neural Networks for LVCSR paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the HMM-DNN +sMBR model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN sMBR model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the Deep Speech + FSH model in the Deep Speech: Scaling up end-to-end speech recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the CNN + Bi-RNN + CTC (speech to letters), 25.9% WER if trained only on SWB model in the Deep Speech: Scaling up end-to-end speech recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN MMI model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN MPE model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN BMMI model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the HMM-TDNN + pNorm + speed up/down speech model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN + Dropout model in the Building DNN Acoustic Models for Large Vocabulary Speech Recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN model in the Building DNN Acoustic Models for Large Vocabulary Speech Recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the CD-DNN model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the DNN-HMM model in the paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the Deep Speech model in the Deep Speech: Scaling up end-to-end speech recognition paper on the Switchboard + Hub500 dataset? | Percentage error |
What metrics were used to measure the SpeechStew (1B) model in the SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network paper on the CHiME-6 dev_gss12 dataset? | Word Error Rate (WER) |
What metrics were used to measure the RNN-T model in the Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription paper on the CHiME-6 dev_gss12 dataset? | Word Error Rate (WER) |
What metrics were used to measure the TS-SEP model in the TS-SEP: Joint Diarization and Separation Conditioned on Estimated Speaker Embeddings paper on the LibriCSS dataset? | Word Error Rate (WER) |
What metrics were used to measure the GSS + Transducer model in the GPU-accelerated Guided Source Separation for Meeting Transcription paper on the LibriCSS dataset? | Word Error Rate (WER) |
What metrics were used to measure the mllp_2021_offline_filt model in the Europarl-ASR: A Large Corpus of Parliamentary Debates for Streaming ASR Benchmarking and Speech Data Filtering/Verbatimization paper on the Europarl-ASR EN MEP-test dataset? | WER |
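The rows above follow a uniform pipe-delimited format: a natural-language prompt in the first column and the metric name in the second. A minimal sketch of how such rows could be parsed into `(prompt, metric)` pairs is shown below; the helper name `parse_rows` and the sample row list are illustrative, not part of the dataset itself.

```python
def parse_rows(lines):
    """Split each 'prompt | answer |' table row into a (prompt, answer) tuple."""
    pairs = []
    for line in lines:
        # Drop trailing whitespace and the closing pipe, then split on '|'.
        parts = [p.strip() for p in line.rstrip().rstrip("|").split("|")]
        # Keep only two-column data rows; skip separator rows made of dashes.
        if len(parts) == 2 and parts[0] and not set(parts[0]) <= {"-"}:
            pairs.append((parts[0], parts[1]))
    return pairs

rows = [
    "What metrics were used to measure the Conformer(L) model in the Conformer: "
    "Convolution-augmented Transformer for Speech Recognition paper on the "
    "LibriSpeech test-other dataset? | Word Error Rate (WER) |",
]
print(parse_rows(rows))
```

Note that splitting on `|` is safe here only because none of the prompts contain a literal pipe character; a stricter parser would use a real markdown-table reader.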