prompts | metrics_response |
|---|---|
What metrics were used to measure the CDCL+Pascal model in the Cross-Domain Complementary Learning Using Pose for Multi-Person Part Segmentation paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the SCHP model in the Self-Correction for Human Parsing paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the WSHP model in the Weakly and Semi Supervised Human Body Part Parsing via Pose-Guided Knowledge Transfer paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the CDCL model in the Cross-Domain Complementary Learning Using Pose for Multi-Person Part Segmentation paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the Joint (ResNet-101, +ms) model in the Joint Multi-Person Pose Estimation and Semantic Part Segmentation paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the Joint (VGG-16, +ms) model in the Joint Multi-Person Pose Estimation and Semantic Part Segmentation paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the HAZN model in the Zoom Better to See Clearer: Human and Object Parsing with Hierarchical Auto-Zoom Net paper on the PASCAL-Part dataset? | mIoU |
What metrics were used to measure the UniHCP (FT) model in the UniHCP: A Unified Model for Human-Centric Perceptions paper on the ATR dataset? | pACC |
What metrics were used to measure the Parsing R-CNN + ResNext101 model in the Parsing R-CNN for Instance-Level Human Analysis paper on the MHP v2.0 dataset? | Mean IoU |
What metrics were used to measure the UniHCP (finetune) model in the UniHCP: A Unified Model for Human-Centric Perceptions paper on the CIHP dataset? | Mean IoU |
What metrics were used to measure the ResNet101 model in the Self-Correction for Human Parsing paper on the CIHP dataset? | Mean IoU |
What metrics were used to measure the Parsing R-CNN + ResNext101 model in the Parsing R-CNN for Instance-Level Human Analysis paper on the CIHP dataset? | Mean IoU |
What metrics were used to measure the PGN + ResNet101 model in the Instance-level Human Parsing via Part Grouping Network paper on the CIHP dataset? | Mean IoU |
What metrics were used to measure the DPC model in the Searching for Efficient Multi-Scale Architectures for Dense Image Prediction paper on the PASCAL-Person-Part dataset? | mIoU |
What metrics were used to measure the UniHCP (finetune) model in the UniHCP: A Unified Model for Human-Centric Perceptions paper on the Human3.6M dataset? | mIoU |
What metrics were used to measure the Palette model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet ctest10k dataset? | FID |
What metrics were used to measure the Palette model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet val dataset? | FID-5K |
What metrics were used to measure the Coltran model in the Colorization Transformer paper on the ImageNet val dataset? | FID-5K |
What metrics were used to measure the PixColor model in the PixColor: Pixel Recursive Colorization paper on the ImageNet val dataset? | FID-5K |
What metrics were used to measure the cGAN model in the Image-to-Image Translation with Conditional Adversarial Networks paper on the ImageNet val dataset? | FID-5K |
What metrics were used to measure the 3D Conv + ResNet-18 + DC-TCN + KD (Ensemble) (Word Boundary) model in the Training Strategies for Improved Lip-reading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + EfficientNetV2 + Transformer + TCN model in the Accurate and Resource-Efficient Lipreading with Efficientnetv2 and Transformers paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the Vosk + MediaPipe + LS + MixUp + SA + 3DResNet-18 + BiLSTM + Cosine WR model in the Visual Speech Recognition in a Driver Assistance System paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory model in the Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + MS-TCN + KD (Ensemble) model in the Towards Practical Lipreading with Distilled and Efficient Models paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D-ResNet + Bi-GRU + MixUp + Label Smoothing + Cosine LR (Word Boundary) model in the Learn an Effective Lip Reading Model without Pains paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D-ResNet + Bi-GRU + MixUp + Label Smoothing + Cosine LR model in the Learn an Effective Lip Reading Model without Pains paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + Bi-GRU + Visual-Audio Memory model in the Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + MS-TCN model in the Lipreading using Temporal Convolutional Networks paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + Bi-GRU(Face Cutout) model in the Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the MoCo + Wav2Vec by SJTU LUMIA model in the Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + P3D-ResNet50 + TCN model in the Discriminative Multi-modality Speech Recognition paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + Bi-GRU model in the Mutual Information Maximization for Effective Lip Reading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the SpotFast + Transformer + Product-Key memory model in the SpotFast Networks with Memory Augmented Lateral Transformers for Lipreading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the DFTN model in the Deformation Flow Based Two-Stream Network for Lip Reading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the PCPG model in the Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-34 + Bi-GRU model in the End-to-end Audiovisual Speech Recognition paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the Multi-grained + Bi-ConvLSTM model in the Multi-Grained Spatio-temporal Modeling for Lip-reading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-34 + Bi-LSTM model in the Combining Residual Networks with LSTMs for Lipreading paper on the Lip Reading in the Wild dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-34 + Bi-GRU model in the LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild paper on the LRW-1000 dataset? | Top-1 Accuracy |
What metrics were used to measure the DenseNet3D + Bi-GRU model in the LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild paper on the LRW-1000 dataset? | Top-1 Accuracy |
What metrics were used to measure the Multi-Tower LSTM-5 model in the LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild paper on the LRW-1000 dataset? | Top-1 Accuracy |
What metrics were used to measure the CTC/Attention model in the Visual Speech Recognition for Multiple Languages in the Wild paper on the GRID corpus (mixed-speech) dataset? | Word Error Rate (WER) |
What metrics were used to measure the LCANet model in the LCANet: End-to-End Lipreading with Cascaded Attention-CTC paper on the GRID corpus (mixed-speech) dataset? | Word Error Rate (WER) |
What metrics were used to measure the LipNet (with Face Cutout) model in the Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition paper on the GRID corpus (mixed-speech) dataset? | Word Error Rate (WER) |
What metrics were used to measure the WAS model in the Lip Reading Sentences in the Wild paper on the GRID corpus (mixed-speech) dataset? | Word Error Rate (WER) |
What metrics were used to measure the LipNet model in the LipNet: End-to-End Sentence-level Lipreading paper on the GRID corpus (mixed-speech) dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC/Attention model in the Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the RAVEn Large model in the Jointly Learning Visual and Auditory Speech Representations from Raw Data paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the VTP (more data) model in the Sub-word Level Lip Reading With Visual Attention paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC/Attention (LRW+LRS2/3+AVSpeech) model in the Visual Speech Recognition for Multiple Languages in the Wild paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the VTP model in the Sub-word Level Lip Reading With Visual Attention paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC/Attention model in the Visual Speech Recognition for Multiple Languages in the Wild paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the Hybrid CTC / Attention model in the End-to-end Audio-visual Speech Recognition with Conformers paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the MoCo + wav2vec (w/o extLM) model in the Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the Multi-head Visual-Audio Memory model in the Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the TM-seq2seq + extLM model in the Deep Audio-Visual Speech Recognition paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the LF-MMI TDNN model in the Audio-visual Recognition of Overlapped speech for the LRS2 dataset paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the Hybrid CTC / Attention model in the Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv-seq2seq model in the Spatio-Temporal Fusion Based Convolutional Sequence Learning for Lip Reading paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC + KD ASR model in the ASR is all you need: cross-modal distillation for lip reading paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the TM-CTC + extLM model in the Deep Audio-Visual Speech Recognition paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the LIBS model in the Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers paper on the LRS2 dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC/Attention model in the Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the RAVEn Large model in the Jointly Learning Visual and Auditory Speech Representations from Raw Data paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the AV-HuBERT Large + Relaxed Attention + LM model in the Relaxed Attention for Transformer Models paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the AV-HuBERT Large model in the Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the VTP with more data model in the Sub-word Level Lip Reading With Visual Attention paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC/Attention (LRW+LRS2/3+AVSpeech) model in the Visual Speech Recognition for Multiple Languages in the Wild paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the RNN-T model in the Recurrent Neural Network Transducer for Audio-Visual Speech Recognition paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the VTP model in the Sub-word Level Lip Reading With Visual Attention paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the Hyb + Conformer model in the End-to-end Audio-visual Speech Recognition with Conformers paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC-V2P model in the Large-Scale Visual Speech Recognition paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the EG-seq2seq model in the Discriminative Multi-modality Speech Recognition paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the TM-seq2seq model in the Deep Audio-Visual Speech Recognition paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTC + KD model in the ASR is all you need: cross-modal distillation for lip reading paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the Conv-seq2seq model in the Spatio-Temporal Fusion Based Convolutional Sequence Learning for Lip Reading paper on the LRS3-TED dataset? | Word Error Rate (WER) |
What metrics were used to measure the 3D-ResNet + Bi-GRU + MixUp + Label Smooth + Cosine LR (Word Boundary) model in the Learn an Effective Lip Reading Model without Pains paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory model in the Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + Bi-GRU + Visual-Audio Memory model in the Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D-ResNet + Bi-GRU + MixUp + Label Smooth + Cosine LR model in the Learn an Effective Lip Reading Model without Pains paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + Bi-GRU (Face Cutout) model in the Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the DFTN model in the Deformation Flow Based Two-Stream Network for Lip Reading paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the 3D Conv + ResNet-18 + MS-TCN model in the Lipreading using Temporal Convolutional Networks paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the GLMIM model in the Mutual Information Maximization for Effective Lip Reading paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the PCPG model in the Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading paper on the CAS-VSR-W1k (LRW-1000) dataset? | Top-1 Accuracy |
What metrics were used to measure the CTC/Attention model in the Visual Speech Recognition for Multiple Languages in the Wild paper on the CMLR dataset? | CER |
What metrics were used to measure the LIBS model in the Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers paper on the CMLR dataset? | CER |
What metrics were used to measure the CSSMCM model in the A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading paper on the CMLR dataset? | CER |
What metrics were used to measure the LipCH-Net model in the A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading paper on the CMLR dataset? | CER |
What metrics were used to measure the WAS model in the A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading paper on the CMLR dataset? | CER |
What metrics were used to measure the GDI-H3(200M frames) model in the Generalized Data Distribution Iteration paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the GDI-H3 model in the Generalized Data Distribution Iteration paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the Agent57 model in the Agent57: Outperforming the Atari Human Benchmark paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the R2D2 model in the Recurrent Experience Replay in Distributed Reinforcement Learning paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the MuZero model in the Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the MuZero (Res2 Adam) model in the Online and Offline Reinforcement Learning by Planning with a Learned Model paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the GDI-I3 model in the Generalized Data Distribution Iteration paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the GDI-I3 model in the GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning paper on the Atari 2600 Seaquest dataset? | Score, Return |
What metrics were used to measure the Ape-X model in the Distributed Prioritized Experience Replay paper on the Atari 2600 Seaquest dataset? | Score, Return |