prompts (string, 81–413 chars) | metrics_response (string, 0–371 chars) |
|---|---|
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the ImageNet-A dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ImageNet-A dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the CN-ImageNet V2 dataset? | Accuracy (Private) |
What metrics were used to measure the LiT-22B model in the Scaling Vision Transformers to 22 Billion Parameters paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT ViT-e model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT-tuning model in the LiT: Zero-Shot Transfer with Locked-image text Tuning paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the EVA-CLIP-E/14+ model in the EVA-CLIP: Improved Training Techniques for CLIP at Scale paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ObjectNet dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the CN-ImageNet-R dataset? | Accuracy (Private) |
What metrics were used to measure the BASIC (Lion) model in the Symbolic Discovery of Optimization Algorithms paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT ViT-e model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT-22B model in the Scaling Vision Transformers to 22 Billion Parameters paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the BASIC model in the Combined Scaling for Zero-shot Transfer Learning paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the EVA-CLIP-E/14+ model in the EVA-CLIP: Improved Training Techniques for CLIP at Scale paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT-tuning model in the LiT: Zero-Shot Transfer with Locked-image text Tuning paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ImageNet-R dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the ImageNet-Sketch dataset? | Accuracy (Private) |
What metrics were used to measure the BASIC (Lion) model in the Symbolic Discovery of Optimization Algorithms paper on the ImageNet-Sketch dataset? | Accuracy (Private) |
What metrics were used to measure the BASIC model in the Combined Scaling for Zero-shot Transfer Learning paper on the ImageNet-Sketch dataset? | Accuracy (Private) |
What metrics were used to measure the EVA-CLIP-E/14+ model in the EVA-CLIP: Improved Training Techniques for CLIP at Scale paper on the ImageNet-Sketch dataset? | Accuracy (Private) |
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the ImageNet-Sketch dataset? | Accuracy (Private) |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the aYahoo dataset? | Accuracy |
What metrics were used to measure the Visual N-Grams model in the Learning Visual N-Grams from Web Data paper on the aYahoo dataset? | Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ImageNet-S dataset? | Accuracy (Private), Top 5 Accuracy |
What metrics were used to measure the BASIC (Lion) model in the Symbolic Discovery of Optimization Algorithms paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT-22B model in the Scaling Vision Transformers to 22 Billion Parameters paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the BASIC model in the Combined Scaling for Zero-shot Transfer Learning paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT ViT-e model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the LiT-tuning model in the LiT: Zero-Shot Transfer with Locked-image text Tuning paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the EVA-CLIP-E/14+ model in the EVA-CLIP: Improved Training Techniques for CLIP at Scale paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ImageNet V2 dataset? | Accuracy (Private), Accuracy (Public), Top 5 Accuracy |
What metrics were used to measure the AltCLIP model in the AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities paper on the CN-ImageNet-A dataset? | Accuracy (Private) |
What metrics were used to measure the LiT-tuning model in the LiT: Zero-Shot Transfer with Locked-image text Tuning paper on the ImageNet ReaL dataset? | Accuracy (Private), Accuracy (Public) |
What metrics were used to measure the MAWS (ViT-2B) model in The effectiveness of MAE pre-pretraining for billion-scale pretraining paper on the Food-101 dataset? | Top 1 Accuracy |
What metrics were used to measure the EVA-CLIP-E/14+ model in the EVA-CLIP: Improved Training Techniques for CLIP at Scale paper on the Food-101 dataset? | Top 1 Accuracy |
What metrics were used to measure the Diffusion Classifier (zero-shot) model in the Your Diffusion Model is Secretly a Zero-Shot Classifier paper on the Food-101 dataset? | Top 1 Accuracy |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the SUN dataset? | Accuracy |
What metrics were used to measure the Visual N-Grams model in the Learning Visual N-Grams from Web Data paper on the SUN dataset? | Accuracy |
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the DUTS-TE dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the DUTS-TE dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the SelfReformer model in the SelfReformer: Self-Refined Network with Transformer for Salient Object Detection paper on the DUTS-TE dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the LDF(ResNet-50) model in the Label Decoupling Framework for Salient Object Detection paper on the DUTS-TE dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the F3Net model in the F3Net: Fusion, Feedback and Focus for Salient Object Detection paper on the DUTS-TE dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the RCSB model in the Recursive Contour Saliency Blending Network for Accurate Salient Object Detection paper on the DUTS-TE dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the SelfReformer model in the SelfReformer: Self-Refined Network with Transformer for Salient Object Detection paper on the PASCAL-S dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the RCSB model in the Recursive Contour Saliency Blending Network for Accurate Salient Object Detection paper on the PASCAL-S dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the LDF(ours) model in the Label Decoupling Framework for Salient Object Detection paper on the PASCAL-S dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the PASCAL-S dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the F3Net model in the F3Net: Fusion, Feedback and Focus for Salient Object Detection paper on the PASCAL-S dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the PASCAL-S dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the U2-Net+ model in the U$^2$-Net: Going Deeper with Nested U-Structure for Salient Object Detection paper on the SOD dataset? | Fwβ, MAE, Sm, relaxFbβ, {max}Fβ |
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the ECSSD dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the ECSSD dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the SelfReformer model in the SelfReformer: Self-Refined Network with Transformer for Salient Object Detection paper on the ECSSD dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the LDF(ours) model in the Label Decoupling Framework for Salient Object Detection paper on the ECSSD dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the F3Net model in the F3Net: Fusion, Feedback and Focus for Salient Object Detection paper on the ECSSD dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the RCSB model in the Recursive Contour Saliency Blending Network for Accurate Salient Object Detection paper on the ECSSD dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the DUT-OMRON dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the DUT-OMRON dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the SelfReformer model in the SelfReformer: Self-Refined Network with Transformer for Salient Object Detection paper on the DUT-OMRON dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the LDF model in the Label Decoupling Framework for Salient Object Detection paper on the DUT-OMRON dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the F3Net model in the F3Net: Fusion, Feedback and Focus for Salient Object Detection paper on the DUT-OMRON dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the RCSB model in the Recursive Contour Saliency Blending Network for Accurate Salient Object Detection paper on the DUT-OMRON dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the HKU-IS dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the HKU-IS dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the SelfReformer model in the SelfReformer: Self-Refined Network with Transformer for Salient Object Detection paper on the HKU-IS dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the LDF model in the Label Decoupling Framework for Salient Object Detection paper on the HKU-IS dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the RCSB model in the Recursive Contour Saliency Blending Network for Accurate Salient Object Detection paper on the HKU-IS dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the F3Net model in the F3Net: Fusion, Feedback and Focus for Salient Object Detection paper on the HKU-IS dataset? | max_F1, MAE, E-measure, S-measure |
What metrics were used to measure the Faster-RCNN (ResNeXt-101) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the YOLOv3 (608 x 608) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the YOLOv3 (416 x 416) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Faster-RCNN (ResNet-50) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Faster-RCNN (ResNet-101) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the YOLOv2 (608 x 608) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the YOLOv2 (416 x 416) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv3 (608 x 608) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv2 (608 x 608) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv3 (416 x 416) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv2 (416 x 416) model in the Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines paper on the UFPR-ADMR-v1 dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv4-SmallObj + CDCC-NET + Fast-OCR model in the Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach paper on the UFPR-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv4-SmallObj + Fast-OCR model in the Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach paper on the UFPR-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv2 + CR-NET model in the Convolutional Neural Networks for Automatic Meter Reading paper on the UFPR-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv2 + CRNN model in the Convolutional Neural Networks for Automatic Meter Reading paper on the UFPR-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv2 + Multi-task CNN model in the Convolutional Neural Networks for Automatic Meter Reading paper on the UFPR-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv4-SmallObj + CDCC-NET + Fast-OCR model in the Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach paper on the Copel-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the Fast-YOLOv4-SmallObj + Fast-OCR model in the Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach paper on the Copel-AMR dataset? | Rank-1 Recognition Rate |
What metrics were used to measure the EagerMOT model in the EagerMOT: 3D Multi-Object Tracking via Sensor Fusion paper on the KITTI MOTS dataset? | AssA, DetA, HOTA |
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the Unicorn model in the Towards Grand Unification of Object Tracking paper on the BDD100K val dataset? | mMOTSA |
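
Each `metrics_response` cell holds a comma-separated list of metric names (and may be empty, per the 0-character lower bound above). A minimal sketch of loading and parsing the table with the Hugging Face `datasets` library follows; the Hub ID `user/metrics-qa` is a placeholder, since the dataset's actual ID is not shown on this page.

```python
from datasets import load_dataset

# "user/metrics-qa" is a hypothetical Hub ID; substitute the real dataset ID.
ds = load_dataset("user/metrics-qa", split="train")

example = ds[0]
print(example["prompts"])

# metrics_response is a comma-separated list of metric names;
# split it into a Python list, skipping empty responses.
metrics = [m.strip() for m in example["metrics_response"].split(",") if m.strip()]
print(metrics)
```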