prompts | metrics_response |
|---|---|
What metrics were used to measure the MeLoDy model in the Efficient Neural Music Generation paper on the MusicCaps dataset? | FAD VGG |
What metrics were used to measure the Mubert model in the MusicLM: Generating Music From Text paper on the MusicCaps dataset? | FAD VGG |
What metrics were used to measure the Riffusion model in the MusicLM: Generating Music From Text paper on the MusicCaps dataset? | FAD VGG |
What metrics were used to measure the PITI model in the Pretraining is All You Need for Image-to-Image Translation paper on the COCO-Stuff dataset? | FID, FID-C |
What metrics were used to measure the Pix2PixHD model in the High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs paper on the COCO-Stuff dataset? | FID, FID-C |
What metrics were used to measure the SPADE model in the Semantic Image Synthesis with Spatially-Adaptive Normalization paper on the COCO-Stuff dataset? | FID, FID-C |
What metrics were used to measure the AODB (full) model in the Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis paper on the Scribble dataset? | FID, Accuracy, Human (%) |
What metrics were used to measure the EdgeGAN model in the SketchyCOCO: Image Generation from Freehand Scene Sketches paper on the Scribble dataset? | FID, Accuracy, Human (%) |
What metrics were used to measure the AODB (full) model in the Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis paper on the SketchyCOCO dataset? | FID, Accuracy, Human (%) |
What metrics were used to measure the EdgeGAN model in the SketchyCOCO: Image Generation from Freehand Scene Sketches paper on the SketchyCOCO dataset? | FID, Accuracy, Human (%) |
What metrics were used to measure the PPCA-SWSL model in the Scalable Learning with Incremental Probabilistic PCA paper on the CIFAR-100 - 50 classes + 5 steps of 10 classes dataset? | Final Accuracy |
What metrics were used to measure the PPCA-CLIP model in the Scalable Learning with Incremental Probabilistic PCA paper on the CIFAR-100 - 50 classes + 5 steps of 10 classes dataset? | Final Accuracy |
What metrics were used to measure the PPCA-SWSL model in the Scalable Learning with Incremental Probabilistic PCA paper on the CIFAR-100 - 50 classes + 10 steps of 5 classes dataset? | Final Accuracy |
What metrics were used to measure the PPCA-CLIP model in the Scalable Learning with Incremental Probabilistic PCA paper on the CIFAR-100 - 50 classes + 10 steps of 5 classes dataset? | Final Accuracy |
What metrics were used to measure the S&B model in the Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the SCR model in the Supervised Contrastive Learning paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the iCaRL model in the iCaRL: Incremental Classifier and Representation Learning paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the LUCIR model in the Learning a Unified Classifier Incrementally via Rebalancing paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the ABD model in the Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the EWC model in the Overcoming catastrophic forgetting in neural networks paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the EMR model in the On Tiny Episodic Memories in Continual Learning paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the A-GEM model in the Efficient Lifelong Learning with A-GEM paper on the cifar100 dataset? | 10-stage average accuracy |
What metrics were used to measure the Show-1 model in the Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the ModelScopeT2V model in the ModelScope Text-to-Video Technical Report paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the MagicVideo model in the MagicVideo: Efficient Video Generation With Latent Diffusion Models paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the Make-A-Video model in the Make-A-Video: Text-to-Video Generation without Text-Video Data paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the Video LDM model in the Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the MMVG model in the Tell Me What Happened: Unifying Text-guided Video Completion via Multimodal Masked Video Generation paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the CogVideo (English) model in the Make-A-Video: Text-to-Video Generation without Text-Video Data paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the CogVideo (Chinese) model in the Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the NUWA model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the GODIVA model in the GODIVA: Generating Open-DomaIn Videos from nAtural Descriptions paper on the MSR-VTT dataset? | FVD, CLIPSIM, FID |
What metrics were used to measure the REGIS-Fuse model in the REGIS: Refining Generated Videos via Iterative Stylistic Redesigning paper on the UCF-101 dataset? | FVD16 |
What metrics were used to measure the VideoFusion model in the VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation paper on the UCF-101 dataset? | FVD16 |
What metrics were used to measure the PYoCo model in the Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models paper on the UCF-101 dataset? | FVD16 |
What metrics were used to measure the MAGVIT model in the MAGVIT: Masked Generative Video Transformer paper on the Something-Something V2 dataset? | FVD |
What metrics were used to measure the VideoFactory model in the VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation paper on the WebVid dataset? | FVD |
What metrics were used to measure the NUWA (128×128) model in the NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion paper on the Kinetics dataset? | Accuracy |
What metrics were used to measure the AMD-HookNet model in the AMD-HookNet for Glacier Front Segmentation paper on the CaFFe dataset? | Mean Distance Error |
What metrics were used to measure the CaFFe Baseline Zones model in the Calving fronts and where to find them: a benchmark dataset and methodology for automatic glacier calving front extraction from synthetic aperture radar imagery paper on the CaFFe dataset? | Mean Distance Error |
What metrics were used to measure the CaFFe Baseline Front model in the Calving fronts and where to find them: a benchmark dataset and methodology for automatic glacier calving front extraction from synthetic aperture radar imagery paper on the CaFFe dataset? | Mean Distance Error |
What metrics were used to measure the OPENINS3D model in the OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation paper on the STPLS3D dataset? | AP50 |
What metrics were used to measure the PointCLIPV2 model in the PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning paper on the STPLS3D dataset? | AP50 |
What metrics were used to measure the PointCLIP model in the PointCLIP: Point Cloud Understanding by CLIP paper on the STPLS3D dataset? | AP50 |
What metrics were used to measure the Object-Centric-OVD model in the Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection paper on the Objects365 dataset? | mask AP50 |
What metrics were used to measure the ViLD model in the Open-vocabulary Object Detection via Vision and Language Knowledge Distillation paper on the Objects365 dataset? | mask AP50 |
What metrics were used to measure the Object-Centric-OVD model in the Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection paper on the OpenImages-v4 dataset? | mask AP50, AP 0.5 |
What metrics were used to measure the Detic model in the Detecting Twenty-thousand Classes using Image-level Supervision paper on the OpenImages-v4 dataset? | mask AP50, AP 0.5 |
What metrics were used to measure the DITO model in the Detection-Oriented Image-Text Pretraining for Open-Vocabulary Detection paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the CoDet (EVA02-L) model in the CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the DE-ViT model in the Detect Every Thing with Few Examples paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the CFM-ViT model in the Contrastive Feature Masking Open-Vocabulary Vision Transformer paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the RO-ViT model in the Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the ViLD-ensemble w/ ALIGN (Eb7-FPN) model in the Open-vocabulary Object Detection via Vision and Language Knowledge Distillation paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the OWL-ViT (CLIP-L/14) model in the Simple Open-Vocabulary Object Detection with Vision Transformers paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the POMP model in the Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the BARON model in the Aligning Bag of Regions for Open-Vocabulary Object Detection paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the MEDet model in the Open Vocabulary Object Detection with Proposal Mining and Prediction Equalization paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the Region-CLIP (RN50x4-C4) model in the RegionCLIP: Region-based Language-Image Pretraining paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the OADP model in the Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the X-Paste model in the X-Paste: Revisiting Scalable Copy-Paste for Instance Segmentation using CLIP and StableDiffusion paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the Object-Centric-OVD model in the Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the ViLD-ensemble (R152-FPN) model in the Open-vocabulary Object Detection via Vision and Language Knowledge Distillation paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the Detic model in the Detecting Twenty-thousand Classes using Image-level Supervision paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the Region-CLIP (RN50-C4) model in the RegionCLIP: Region-based Language-Image Pretraining paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the ViLD-ensemble (R50-FPN) model in the Open-vocabulary Object Detection via Vision and Language Knowledge Distillation paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the ViLD (R50-FPN) model in the Open-vocabulary Object Detection via Vision and Language Knowledge Distillation paper on the LVIS v1.0 dataset? | AP novel-LVIS base training, AP novel-Unrestricted open-vocabulary training |
What metrics were used to measure the DE-ViT model in the Detect Every Thing with Few Examples paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the DITO model in the Detection-Oriented Image-Text Pretraining for Open-Vocabulary Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the LP-OVOD (OWL-ViT Proposals) model in the LP-OVOD: Open-Vocabulary Object Detection by Linear Probing paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the CORA+ model in the CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the BARON model in the Aligning Bag of Regions for Open-Vocabulary Object Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the CORA model in the CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the LP-OVOD model in the LP-OVOD: Open-Vocabulary Object Detection by Linear Probing paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the Region-CLIP (RN50x4-C4) model in the RegionCLIP: Region-based Language-Image Pretraining paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the Object-Centric-OVD model in the Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the OADP (G-OVD) model in the Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the VL-PLM (RN50) model in the Exploiting Unlabeled Data with Vision and Language Models for Object Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the CFM-ViT model in the Contrastive Feature Masking Open-Vocabulary Vision Transformer paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the MEDet (RN50) model in the Open Vocabulary Object Detection with Proposal Mining and Prediction Equalization paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the Region-CLIP (RN50-C4) model in the RegionCLIP: Region-based Language-Image Pretraining paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the OVAD-Baseline model in the Open-vocabulary Attribute Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the OADP model in the Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the OV-DETR model in the Open-Vocabulary DETR with Conditional Matching paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the LocOv (RN50-C4) model in the Localized Vision-Language Matching for Open-vocabulary Object Detection paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the Detic model in the Detecting Twenty-thousand Classes using Image-level Supervision paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the ViLD model in the Open-vocabulary Object Detection via Vision and Language Knowledge Distillation paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the OVR-CNN model in the Open-Vocabulary Object Detection Using Captions paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the HierKD model in the Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation paper on the MSCOCO dataset? | AP 0.5 |
What metrics were used to measure the PoseRAC model in the PoseRAC: Pose Saliency Transformer for Repetitive Action Counting paper on the RepCount dataset? | OBO |
What metrics were used to measure the TransRAC model in the TransRAC: Encoding Multi-scale Temporal Correlation with Transformers for Repetitive Action Counting paper on the RepCount dataset? | OBO |
What metrics were used to measure the monet model in the An Efficient Method for Face Quality Assessment on the Edge paper on the Color FERET dataset? | Pearson Correlation |
What metrics were used to measure the SER-FIQ (same model) on FaceNet model in the SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness paper on the Adience dataset? | Equal Error Rate |
What metrics were used to measure the SER-FIQ (same model) on ArcFace model in the SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness paper on the LFW dataset? | Equal Error Rate |
What metrics were used to measure the ViT-Lens model in the ViT-Lens: Towards Omni-modal Representations paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the OpenShape-PointBERT model in the OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the Point-NN model in the Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the PointCLIP V2 model in the PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the ULIP model in the ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the CLIP2Point model in the CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |