prompts (string, lengths 81–413) | metrics_response (string, lengths 0–371) |
|---|---|
What metrics were used to measure the FAST-VQA (trained on LSVQ only) model in the FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling paper on the YouTube-UGC dataset? | PLCC |
What metrics were used to measure the ChipQA model in the ChipQA: No-Reference Video Quality Prediction via Space-Time Chips paper on the YouTube-UGC dataset? | PLCC |
What metrics were used to measure the ST-GREED model in the ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction paper on the LIVE-YT-HFR dataset? | SRCC |
What metrics were used to measure the GREED-VMAF model in the High Frame Rate Video Quality Assessment using VMAF and Entropic Differences paper on the LIVE-YT-HFR dataset? | SRCC |
What metrics were used to measure the GSTI model in the Capturing Video Frame Rate Variations via Entropic Differencing paper on the LIVE-YT-HFR dataset? | SRCC |
What metrics were used to measure the DOVER (end-to-end) model in the Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the FasterVQA (fine-tuned) model in the Neighbourhood Representative Sampling for Efficient End-to-end Video Quality Assessment paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the DOVER (head-only) model in the Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the FAST-VQA (finetuned on KoNViD-1k) model in the FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the SimpleVQA model in the A Deep Learning based No-reference Quality Assessment Model for UGC Videos paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the DisCoVQA model in the DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the HVS-5M model in the HVS Revisited: A Comprehensive Video Quality Assessment Framework paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the FAST-VQA (trained on LSVQ only) model in the FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the 2BiVQA model in the 2BiVQA: Double Bi-LSTM based Video Quality Assessment of UGC Videos paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the BVQA-2022 model in the Blindly Assess Quality of In-the-Wild Videos via Quality-aware Pre-training and Motion Perception paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the RAPIQUE model in the RAPIQUE: Rapid and Accurate Video Quality Prediction of User Generated Content paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the VIDEVAL model in the UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the VSFA model in the Quality Assessment of In-the-Wild Videos paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the PVQ model in the Patch-VQ: 'Patching Up' the Video Quality Problem paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the TLVQM model in the Two-Level Approach for No-Reference Consumer Video Quality Assessment paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the ChipQA model in the ChipQA: No-Reference Video Quality Prediction via Space-Time Chips paper on the KoNViD-1k dataset? | PLCC |
What metrics were used to measure the ChipQA model in the ChipQA: No-Reference Video Quality Prediction via Space-Time Chips paper on the LIVE-ETRI dataset? | SRCC |
What metrics were used to measure the VBLIINDS model in the Blind Prediction of Natural Video Quality paper on the LIVE-ETRI dataset? | SRCC |
What metrics were used to measure the ChipQA-0 model in the No-Reference Video Quality Assessment Using Space-Time Chips paper on the LIVE-ETRI dataset? | SRCC |
What metrics were used to measure the BRISQUE model in the No-Reference Image Quality Assessment in the Spatial Domain paper on the LIVE-ETRI dataset? | SRCC |
What metrics were used to measure the TLVQM model in the Two-Level Approach for No-Reference Consumer Video Quality Assessment paper on the LIVE-ETRI dataset? | SRCC |
What metrics were used to measure the KonCept512 model in the KonIQ-10k: Towards an ecologically valid and large-scale IQA database paper on the KonIQ-10k dataset? | SRCC |
What metrics were used to measure the UNIQUE model in the UNIQUE: Unsupervised Image Quality Estimation paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the LINEARITY model in the Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the MUSIQ model in the MUSIQ: Multi-scale Image Quality Transformer paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the SPAQ MT-S model in the Perceptual Quality Assessment of Smartphone Photography paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the SPAQ BL model in the Perceptual Quality Assessment of Smartphone Photography paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the SPAQ MT-A model in the Perceptual Quality Assessment of Smartphone Photography paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the PaQ-2-PiQ model in the From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the NIMA model in the NIMA: Neural Image Assessment paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the KonCept512 model in the KonIQ-10k: Towards an ecologically valid and large-scale IQA database paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the MEON model in the paper on the MSU NR VQA Database dataset? | SRCC, PLCC, KLCC |
What metrics were used to measure the AHIQ model in the Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network paper on the MSU FR VQA Database dataset? | SRCC |
What metrics were used to measure the FSIM model in the FSIM: A Feature Similarity Index for Image Quality Assessment paper on the MSU FR VQA Database dataset? | SRCC |
What metrics were used to measure the DSS model in the Image quality assessment based on DCT subband similarity paper on the MSU FR VQA Database dataset? | SRCC |
What metrics were used to measure the MDSI model in the Mean Deviation Similarity Index: Efficient and Reliable Full-Reference Image Quality Evaluator paper on the MSU FR VQA Database dataset? | SRCC |
What metrics were used to measure the MS-GMSD model in the Gradient magnitude similarity deviation on multiple scales for color image quality assessment paper on the MSU FR VQA Database dataset? | SRCC |
What metrics were used to measure the GMSD model in the Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index paper on the MSU FR VQA Database dataset? | SRCC |
What metrics were used to measure the TreEnhance model in the TreEnhance: A Tree Search Method For Low-Light Image Enhancement paper on the MIT-Adobe FiveK dataset? | DeltaE, LPIPS, PSNR, SSIM |
What metrics were used to measure the ESDNet-L model in the Towards Efficient and Scale-Robust Ultra-High-Definition Image Demoireing paper on the TIP 2018 dataset? | PSNR, SSIM, FSIM |
What metrics were used to measure the MBCNN model in the Image Demoireing with Learnable Bandpass Filters paper on the TIP 2018 dataset? | PSNR, SSIM, FSIM |
What metrics were used to measure the ESDNet model in the Towards Efficient and Scale-Robust Ultra-High-Definition Image Demoireing paper on the TIP 2018 dataset? | PSNR, SSIM, FSIM |
What metrics were used to measure the Uformer-B model in the Uformer: A General U-Shaped Transformer for Image Restoration paper on the TIP 2018 dataset? | PSNR, SSIM, FSIM |
What metrics were used to measure the MopNet model in the Mop Moire Patterns Using MopNet paper on the TIP 2018 dataset? | PSNR, SSIM, FSIM |
What metrics were used to measure the DMCNN model in the Moiré Photo Restoration Using Multiresolution Convolutional Neural Networks paper on the TIP 2018 dataset? | PSNR, SSIM, FSIM |
What metrics were used to measure the LCDPNet model in the Local Color Distributions Prior for Image Enhancement paper on the Exposure-Errors dataset? | PSNR, SSIM |
What metrics were used to measure the IAT model in the You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction paper on the Exposure-Errors dataset? | PSNR, SSIM |
What metrics were used to measure the MSEC model in the Learning Multi-Scale Photo Exposure Correction paper on the Exposure-Errors dataset? | PSNR, SSIM |
What metrics were used to measure the Retinexformer model in the Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement paper on the MIT-Adobe 5k dataset? | PSNR, SSIM, LPIPS |
What metrics were used to measure the DIFAR (MSCA, level 1) model in the CURL: Neural Curve Layers for Global Image Enhancement paper on the MIT-Adobe 5k dataset? | PSNR, SSIM, LPIPS |
What metrics were used to measure the GCANet model in the Gated Context Aggregation Network for Image Dehazing and Deraining paper on the DID-MDN dataset? | PSNR |
What metrics were used to measure the Transformer+BT (ADMIN init) model in the Very Deep Transformers for Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Noisy back-translation model in the Understanding Back-Translation at Scale paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the mRASP+Fine-Tune model in the Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer + R-Drop model in the R-Drop: Regularized Dropout for Neural Networks paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer (ADMIN init) model in the Very Deep Transformers for Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Admin model in the Understanding the Difficulty of Training Transformers paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the BERT-fused NMT model in the Incorporating BERT into Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the MUSE (Parallel Multi-scale Attention) model in the MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the T5 model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Local Joint Self-attention model in the Joint Source-Target Self Attention with Locality Constraints paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Depth Growing model in the Depth Growing for Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer Big model in the Scaling Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the DynamicConv model in the Pay Less Attention with Lightweight and Dynamic Convolutions paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the TaLK Convolutions model in the Time-aware Large Kernel Convolutions paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the LightConv model in the Pay Less Attention with Lightweight and Dynamic Convolutions paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the FLOATER-large model in the Learning to Encode Position for Transformer with Continuous Dynamical Model paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the OmniNetP model in the OmniNet: Omnidirectional Representations from Transformers paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer Big + MoS model in the Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for Language Generation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the T2R + Pretrain model in the Finetuning Pretrained Transformers into RNNs paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Synthesizer (Random + Vanilla) model in the Synthesizer: Rethinking Self-Attention in Transformer Models paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Hardware Aware Transformer model in the HAT: Hardware-Aware Transformers for Efficient Natural Language Processing paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer (big) + Relative Position Representations model in the Self-Attention with Relative Position Representations paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Weighted Transformer (large) model in the Weighted Transformer Network for Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the ConvS2S (ensemble) model in the Convolutional Sequence to Sequence Learning paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Evolved Transformer Big model in the The Evolved Transformer paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the RNMT+ model in the The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer Big model in the Attention Is All You Need paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Evolved Transformer Base model in the The Evolved Transformer paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the ResMLP-12 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the MoE model in the Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer model in the Memory-Efficient Adaptive Optimization paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the ConvS2S model in the Convolutional Sequence to Sequence Learning paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the ResMLP-6 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the TransformerBase + AutoDropout model in the AutoDropout: Learning Dropout Patterns to Regularize Deep Networks paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the GNMT+RL model in the Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Lite Transformer model in the Lite Transformer with Long-Short Range Attention paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Deep-Att + PosUnk model in the Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Rfa-Gate-arccos model in the Random Feature Attention paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Transformer Base model in the Attention Is All You Need paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the LSTM6 + PosUnk model in the Addressing the Rare Word Problem in Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the PBMT model in the paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the SMT+LSTM5 model in the Sequence to Sequence Learning with Neural Networks paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the RNN-search50* model in the Neural Machine Translation by Jointly Learning to Align and Translate paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
What metrics were used to measure the Deep-Att model in the Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation paper on the WMT2014 English-French dataset? | BLEU score, SacreBLEU, Hardware Burden, Operations per network pass |
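Every prompt in the table above follows the same fixed template ("What metrics were used to measure the {model} model in the {paper} paper on the {dataset} dataset?"), so the rows can be parsed back into structured fields. Below is a minimal sketch of such a parser; the function name `parse_row`, the field names, and the regex are my own illustrative choices, not part of the dataset.

```python
import re

# The prompt template used by every row; the paper title may be empty
# (some rows read "in the paper on the ... dataset"), hence (.*?) plus
# an optional space before "paper".
ROW_PATTERN = re.compile(
    r"What metrics were used to measure the (?P<model>.+?) model "
    r"in the (?P<paper>.*?) ?paper "
    r"on the (?P<dataset>.+?) dataset\?"
)

def parse_row(row: str) -> dict:
    """Split a 'prompt | response |' table row into named fields."""
    prompt, response = (cell.strip() for cell in row.split("|")[:2])
    match = ROW_PATTERN.fullmatch(prompt)
    if match is None:
        raise ValueError(f"Prompt does not match the template: {prompt!r}")
    fields = match.groupdict()
    # metrics_response is a comma-separated list, e.g. "SRCC, PLCC, KLCC".
    fields["metrics"] = [m.strip() for m in response.split(",")]
    return fields

row = ("What metrics were used to measure the VSFA model in the "
       "Quality Assessment of In-the-Wild Videos paper on the "
       "KoNViD-1k dataset? | PLCC |")
print(parse_row(row))
```

A lazy `(?P<model>.+?)` is used so that model names containing parentheses (e.g. "FAST-VQA (trained on LSVQ only)") still parse correctly up to the first " model in the " delimiter.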