| prompts | metrics_response |
|---|---|
What metrics were used to measure the NeRF model in the MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures paper on the NeRF dataset? | PSNR, SSIM, LPIPS, Average PSNR |
What metrics were used to measure the MobileNeRF model in the MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures paper on the NeRF dataset? | PSNR, SSIM, LPIPS, Average PSNR |
What metrics were used to measure the SNeRG model in the MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures paper on the NeRF dataset? | PSNR, SSIM, LPIPS, Average PSNR |
What metrics were used to measure the PVD_Hash2NeRF model in the One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation paper on the NeRF dataset? | PSNR, SSIM, LPIPS, Average PSNR |
What metrics were used to measure the READ model in the READ: Large-Scale Neural Scene Rendering for Autonomous Driving paper on the KITTI dataset? | Average PSNR |
What metrics were used to measure the Single NeRF + Share./Inst. Net model in the Editing Conditional Radiance Fields paper on the Dosovitskiy Chairs dataset? | LPIPS, PSNR |
What metrics were used to measure the NeRF model in the MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures paper on the Mip-NeRF 360 dataset? | LPIPS, PSNR, SSIM |
What metrics were used to measure the MobileNeRF model in the MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures paper on the Mip-NeRF 360 dataset? | LPIPS, PSNR, SSIM |
What metrics were used to measure the NeRF++ model in the MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures paper on the Mip-NeRF 360 dataset? | LPIPS, PSNR, SSIM |
What metrics were used to measure the Multi-view to Novel View model in the Multi-view to Novel view: Synthesizing Novel Views with Self-Learned Confidence paper on the ShapeNet Car dataset? | SSIM |
What metrics were used to measure the Pixel-NeRF (env: Google Scan) model in the RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis paper on the RTMV dataset? | PSNR, SSIM |
What metrics were used to measure the Pixel-NeRF (env: ABC) model in the RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis paper on the RTMV dataset? | PSNR, SSIM |
What metrics were used to measure the Pixel-NeRF (env: Bricks) model in the RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis paper on the RTMV dataset? | PSNR, SSIM |
What metrics were used to measure the Pixel-NeRF (env: Amz. Ber.) model in the RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis paper on the RTMV dataset? | PSNR, SSIM |
What metrics were used to measure the Single NeRF + Share./Inst. Net model in the Editing Conditional Radiance Fields paper on the PhotoShape dataset? | LPIPS, PSNR |
What metrics were used to measure the StereoLayers (8 layers) model in the Stereo Magnification with Multi-Layer Images paper on the SWORD dataset? | LPIPS, PSNR, SSIM |
What metrics were used to measure the StereoLayers (2 layers) model in the Stereo Magnification with Multi-Layer Images paper on the SWORD dataset? | LPIPS, PSNR, SSIM |
What metrics were used to measure the StereoLayers model in the Stereo Magnification with Multi-Layer Images paper on the SWORD dataset? | LPIPS, PSNR, SSIM |
What metrics were used to measure the Multi-view to Novel View model in the Multi-view to Novel view: Synthesizing Novel Views with Self-Learned Confidence paper on the KITTI Novel View Synthesis dataset? | SSIM |
What metrics were used to measure the Multi-view to Novel View model in the Multi-view to Novel view: Synthesizing Novel Views with Self-Learned Confidence paper on the ShapeNet Chair dataset? | SSIM |
What metrics were used to measure the Multi-view to Novel View model in the Multi-view to Novel view: Synthesizing Novel Views with Self-Learned Confidence paper on the Synthia Novel View Synthesis dataset? | SSIM |
What metrics were used to measure the HyperReel model in the HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling paper on the DONeRF: Evaluation Dataset dataset? | PSNR |
What metrics were used to measure the Instant NGP model in the HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling paper on the DONeRF: Evaluation Dataset dataset? | PSNR |
What metrics were used to measure the NeRF model in the HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling paper on the DONeRF: Evaluation Dataset dataset? | PSNR |
What metrics were used to measure the AdaNeRF model in the HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling paper on the DONeRF: Evaluation Dataset dataset? | PSNR |
What metrics were used to measure the DoNeRF model in the HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling paper on the DONeRF: Evaluation Dataset dataset? | PSNR |
What metrics were used to measure the TermiNeRF model in the HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling paper on the DONeRF: Evaluation Dataset dataset? | PSNR |
What metrics were used to measure the SLAHAN (LSTM+syntactic-information) model in the Syntactically Look-Ahead Attention Network for Sentence Compression paper on the Google Dataset dataset? | F1, CR |
What metrics were used to measure the BiRNN + LM Evaluator model in the A Language Model based Evaluator for Sentence Compression paper on the Google Dataset dataset? | F1, CR |
What metrics were used to measure the Higher-Order Syntactic Attention Network model in the Higher-Order Syntactic Attention Network for Longer Sentence Compression paper on the Google Dataset dataset? | F1, CR |
What metrics were used to measure the LSTM model in the Sentence Compression by Deletion with LSTMs paper on the Google Dataset dataset? | F1, CR |
What metrics were used to measure the LSTMs + eye-movement model in the Improving sentence compression by learning to predict gaze paper on the Google Dataset dataset? | F1, CR |
What metrics were used to measure the BiLSTM model in the Can Syntax Help? Improving an LSTM-based Sentence Compression Model for New Domains paper on the Google Dataset dataset? | F1, CR |
What metrics were used to measure the STF+LSTM model in the Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the SAM-SLR (RGB-D) model in the Skeleton Aware Multi-modal Sign Language Recognition paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Ensemble - NTIS model in the One Model is Not Enough: Ensembles for Isolated Sign Language Recognition paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the MViT-SLR model in the Fine-tuning of sign language recognition models: a technical report paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the FE+LSTM model in the Cross-Language Transfer Learning using Visual Information for Automatic Sign Gesture Recognition paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the VTN-PF model in the Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the CNN+FPM+BLSTM+Attention (RGB-D) model in the AUTSL: A Large Scale Multi-modal Turkish Sign Language Dataset and Baseline Methods paper on the AUTSL dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the NLA-SLR model in the Natural Language-Assisted Sign Language Recognition paper on the WLASL-2000 dataset? | Top-1 Accuracy |
What metrics were used to measure the SAM-SLR model in the Skeleton Aware Multi-modal Sign Language Recognition paper on the WLASL-2000 dataset? | Top-1 Accuracy |
What metrics were used to measure the SWIN-SLR model in the Fine-tuning of sign language recognition models: a technical report paper on the WLASL-2000 dataset? | Top-1 Accuracy |
What metrics were used to measure the I3D (pretraining: BSL-1K) model in the BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues paper on the WLASL-2000 dataset? | Top-1 Accuracy |
What metrics were used to measure the I3D model in the Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison paper on the WLASL-2000 dataset? | Top-1 Accuracy |
What metrics were used to measure the TwoStream-SLR model in the Two-Stream Network for Sign Language Recognition and Translation paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the SignBERT+ model in the SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the C2SLR model in the C2SLR: Consistency-Enhanced Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the STMC model in the Spatial-Temporal Multi-Cue Network for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the SMKD model in the Self-Mutual Distillation Learning for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the MMTLB model in the A Simple Multi-Modality Transfer Learning Baseline for Sign Language Translation paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the SignBT model in the Improving Sign Language Translation with Monolingual Data by Sign Back-Translation paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the CrossModal model in the Continuous Sign Language Recognition Through Cross-Modal Alignment of Video and Text Embeddings in a Joint-Latent Space paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the Stochastic CSLR model in the Stochastic Fine-grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 T dataset? | Word Error Rate (WER) |
What metrics were used to measure the SignBERT+ model in the SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding paper on the WLASL dataset? | Top-1 Accuracy |
What metrics were used to measure the TwoStream-SLR model in the Two-Stream Network for Sign Language Recognition and Translation paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the CorrNet+ACDR model in the Conditional Diffusion Feature Refinement for Continuous Sign Language Recognition paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the CTCA model in the Distilling Cross-Temporal Contexts for Continuous Sign Language Recognition paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the CorrNet model in the Continuous Sign Language Recognition with Correlation Network paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the AdaBrowse model in the AdaBrowse: Adaptive Video Browser for Efficient Continuous Sign Language Recognition paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the SEN model in the Self-Emphasizing Network for Continuous Sign Language Recognition paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the C2SLR model in the Improving Continuous Sign Language Recognition with Consistency Constraints and Signer Removal paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the BN-TIN+Transf. model in the Improving Sign Language Translation with Monolingual Data by Sign Back-Translation paper on the CSL-Daily dataset? | Word Error Rate (WER) |
What metrics were used to measure the SignBERT+ model in the SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding paper on the MS-ASL dataset? | Top-1 Accuracy |
What metrics were used to measure the TwoStream-SLR model in the Two-Stream Network for Sign Language Recognition and Translation paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the CorrNet + VAC model in the Continuous Sign Language Recognition with Correlation Network paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the SignBERT+ model in the SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the RadialCTC model in the Deep Radial Embedding for Visual Sequence Learning paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the C2SLR model in the C2SLR: Consistency-Enhanced Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the SMKD model in the Self-Mutual Distillation Learning for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the STMC model in the Spatial-Temporal Multi-Cue Network for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the VAC model in the Visual Alignment Constraint for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the DNF model in the A Deep Neural Framework for Continuous Sign Language Recognition by Iterative Training paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the SLRGAN model in the Continuous Sign Language Recognition through a Context-Aware Generative Adversarial Network paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the CrossModal model in the Continuous Sign Language Recognition Through Cross-Modal Alignment of Video and Text Embeddings in a Joint-Latent Space paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the Stochastic CSLR model in the Stochastic Fine-grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the SAN model in the Context Matters: Self-Attention for Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the SubUNets model in the SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition paper on the RWTH-PHOENIX-Weather 2014 dataset? | Word Error Rate (WER) |
What metrics were used to measure the SPOTER model in the Sign Pose-Based Transformer for Word-Level Sign Language Recognition paper on the LSA64 dataset? | Accuracy (%) |
What metrics were used to measure the Bag of words fusion of hand pose/movement/position model in the Sign Language Recognition Without Frame-Sequencing Constraints: A Proof of Concept on the Argentinian Sign Language paper on the LSA64 dataset? | Accuracy (%) |
What metrics were used to measure the 3DGCN model in the Spatial Attention-Based 3D Graph Convolutional Neural Network for Sign Language Recognition paper on the LSA64 dataset? | Accuracy (%) |
What metrics were used to measure the SignBERT model in the SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition paper on the WLASL100 dataset? | Top-1 Accuracy |
What metrics were used to measure the I3D, ST-GCN model in the Word-level Sign Language Recognition with Multi-stream Neural Networks Focusing on Local Regions paper on the WLASL100 dataset? | Top-1 Accuracy |
What metrics were used to measure the I3D model in the Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison paper on the WLASL100 dataset? | Top-1 Accuracy |
What metrics were used to measure the SPOTER model in the Sign Pose-Based Transformer for Word-Level Sign Language Recognition paper on the WLASL100 dataset? | Top-1 Accuracy |
What metrics were used to measure the TextEnt-full model in the Representation Learning of Entities and Documents from Knowledge Base Descriptions paper on the Freebase FIGER dataset? | Accuracy, BEP, Macro F1, Micro F1, P@1 |
What metrics were used to measure the K-Adapter (fac-adapter) model in the K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters paper on the Open Entity dataset? | F1, Precision, Recall |
What metrics were used to measure the K-Adapter (fac-adapter + lin-adapter) model in the K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters paper on the Open Entity dataset? | F1, Precision, Recall |
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the Open Entity dataset? | F1, Precision, Recall |
What metrics were used to measure the MLMET model in the Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model paper on the Ontonotes v5 (English) dataset? | F1, Precision, Recall |
What metrics were used to measure the ELMo (distant denoising data) model in the Learning to Denoise Distantly-Labeled Data for Entity Typing paper on the Ontonotes v5 (English) dataset? | F1, Precision, Recall |
What metrics were used to measure the LabelGCN Xiong et al. (2019) model in the Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing paper on the Ontonotes v5 (English) dataset? | F1, Precision, Recall |
What metrics were used to measure the Choi et al. (2018) w augmentation model in the Ultra-Fine Entity Typing paper on the Ontonotes v5 (English) dataset? | F1, Precision, Recall |
What metrics were used to measure the ReFinED model in the ReFinED: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking paper on the AIDA-CoNLL dataset? | Micro-F1 |
What metrics were used to measure the LITE model in the Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference paper on the FIGER dataset? | Macro F1, Micro F1 |
What metrics were used to measure the MLMET model in the LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention paper on the Open Entity dataset? | F1 |
What metrics were used to measure the MCCE-B (replicated by Adaseq) model in the Recall, Expand and Multi-Candidate Cross-Encode: Fast and Accurate Ultra-Fine Entity Typing paper on the Open Entity dataset? | F1 |
What metrics were used to measure the Prompt + NPCRF (replicated by Adaseq) model in the Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field paper on the Open Entity dataset? | F1 |
What metrics were used to measure the UniST-Large model in the Unified Semantic Typing with Meaningful Label Inference paper on the Open Entity dataset? | F1 |
What metrics were used to measure the Prompt Learning (replicated by Adaseq) model in the Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field paper on the Open Entity dataset? | F1 |
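A minimal sketch of how rows in this table could be parsed into `(prompt, metrics)` pairs, assuming the format above: each data row is `prompt | metric, metric, ... |`, while the header and separator lines begin with `|`. The function name `parse_rows` and the list-of-lines input are illustrative, not part of the dataset.

```python
def parse_rows(lines):
    """Parse pipe-delimited table rows into (prompt, [metrics]) tuples."""
    rows = []
    for line in lines:
        line = line.strip()
        # Skip blanks, the header row, and the |---|---| separator,
        # all of which start with a pipe (data rows do not).
        if not line or line.startswith("|"):
            continue
        # Drop the trailing pipe, then split prompt from metrics.
        parts = [p.strip() for p in line.rstrip("|").split("|")]
        if len(parts) >= 2:
            prompt, metrics = parts[0], parts[1]
            rows.append((prompt, [m.strip() for m in metrics.split(",")]))
    return rows
```

The comma-split assumes metric names themselves contain no commas, which holds for the values in this table (e.g. `PSNR, SSIM, LPIPS, Average PSNR` yields four metrics).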