| prompts | metrics_response |
|---|---|
What metrics were used to measure the UR model in the Towards Universal Representation for Unseen Action Recognition paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the IAP model in the paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the DAP model in the paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the MTE model in the Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the ZSECOC model in the Zero-Shot Action Recognition With Error-Correcting Output Codes paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the ESZSL model in the An embarrassingly simple approach to zero-shot learning paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the HAA model in the paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the SJE(Attribute) model in the Evaluation of Output Embeddings for Fine-Grained Image Classification paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the SVE model in the Semantic Embedding Space for Zero-Shot Action Recognition paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the SJE(Word Embedding) model in the Evaluation of Output Embeddings for Fine-Grained Image Classification paper on the UCF101 dataset? | Top-1 Accuracy, Top-5 accuracy |
What metrics were used to measure the MOV (ViT-L/14) model in the Multimodal Open-Vocabulary Video Classification via Pre-Trained Vision and Language Models paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the BIKE model in the Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the MOV (ViT-B/16) model in the Multimodal Open-Vocabulary Video Classification via Pre-Trained Vision and Language Models paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the IMP-MoE-L model in the Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the VideoCoCa model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the Text4Vis model in the Revisiting Classifier: Transferring Vision-Language Models for Video Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the MAXI model in the MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the VicTR (ViT-B/16) model in the VicTR: Video-conditioned Text Representations for Activity Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the X-CLIP model in the Expanding Language-Image Pretrained Models for General Video Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the CLASTER model in the CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the ResT model in the Cross-modal Representation Learning for Zero-shot Action Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the AURL model in the Alignment-Uniformity aware Representation Learning for Zero-shot Video Classification paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the JigsawNet model in the Rethinking Zero-shot Action Recognition: Learning from Latent Atomic Actions paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the SPOT model in the Synthetic Sample Selection for Generalized Zero-Shot Learning paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the ER-ZSAR model in the Elaborative Rehearsal for Zero-shot Action Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the E2E model in the Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the UR model in the Towards Universal Representation for Unseen Action Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the TS-GCN model in the I Know the Relationships: Zero-Shot Action Recognition via Two-Stream Graph Convolutional Networks and Knowledge Graphs paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the ZSECOC model in the Zero-Shot Action Recognition With Error-Correcting Output Codes paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the ASR model in the Alternative Semantic Representations for Zero-Shot Human Action Recognition paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the MTE model in the Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the ESZSL model in the paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the O2A model in the Objects2action: Classifying and localizing actions without any video example paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the SJE(word embedding) model in the Evaluation of Output Embeddings for Fine-Grained Image Classification paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the MSQNet model in the Actor-agnostic Multi-label Action Recognition with Multi-modal Query paper on the HMDB51 dataset? | Top-1 Accuracy, Top-5 Accuracy, Accuracy |
What metrics were used to measure the SPOT model in the Synthetic Sample Selection for Generalized Zero-Shot Learning paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the CLASTER model in the CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the ER-ZSAR model in the Elaborative Rehearsal for Zero-shot Action Recognition paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the ZSECOC model in the Zero-Shot Action Recognition With Error-Correcting Output Codes paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the TS-GCN model in the I Know the Relationships: Zero-Shot Action Recognition via Two-Stream Graph Convolutional Networks and Knowledge Graphs paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the SJE(Attribute) model in the Evaluation of Output Embeddings for Fine-Grained Image Classification paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the MTE model in the Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the ESZSL model in the An embarrassingly simple approach to zero-shot learning paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the SJE(Word Embedding) model in the Evaluation of Output Embeddings for Fine-Grained Image Classification paper on the Olympics dataset? | Top-1 Accuracy |
What metrics were used to measure the BIKE model in the Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models paper on the ActivityNet dataset? | Top-1 Accuracy |
What metrics were used to measure the Text4Vis model in the Revisiting Classifier: Transferring Vision-Language Models for Video Recognition paper on the ActivityNet dataset? | Top-1 Accuracy |
What metrics were used to measure the ResT model in the Cross-modal Representation Learning for Zero-shot Action Recognition paper on the ActivityNet dataset? | Top-1 Accuracy |
What metrics were used to measure the E2E model in the Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications paper on the ActivityNet dataset? | Top-1 Accuracy |
What metrics were used to measure the MSQNet model in the Actor-agnostic Multi-label Action Recognition with Multi-modal Query paper on the THUMOS' 14 dataset? | Accuracy |
What metrics were used to measure the DSLP model in the Learning to Predict Navigational Patterns from Partial Observations paper on the nuScenes dataset? | IoU, F1 score |
What metrics were used to measure the LaneGraphNet model in the Lane Graph Estimation for Scene Understanding in Urban Driving paper on the nuScenes dataset? | IoU, F1 score |
What metrics were used to measure the STSU model in the Structured Bird's-Eye-View Traffic Scene Understanding from Onboard Images paper on the nuScenes dataset? | IoU, F1 score |
What metrics were used to measure the CLRNet (DLA-34) model in the CLRNet: Cross Layer Refinement Network for Lane Detection paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the BézierLaneNet (ResNet-34) model in the Rethinking Efficient Lane Detection via Curve Modeling paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the LaneAF model in the LaneAF: Robust Multi-Lane Detection with Affinity Fields paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the CLRNet (ResNet-18) model in the CLRNet: Cross Layer Refinement Network for Lane Detection paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the BézierLaneNet (ResNet-18) model in the Rethinking Efficient Lane Detection via Curve Modeling paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the LaneATT (ResNet-34) model in the Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the LaneATT (ResNet-122) model in the Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the LaneATT (ResNet-18) model in the Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the PolyLaneNet model in the PolyLaneNet: Lane Estimation via Deep Polynomial Regression paper on the LLAMAS dataset? | F1 |
What metrics were used to measure the LDNet model in the LDNet: End-to-End Lane Marking Detection Approach Using a Dynamic Vision Sensor paper on the DET dataset? | Average IOU, event-based F1 score |
What metrics were used to measure the SCNN_UNet_ConvLSTM2 model in the A Hybrid Spatial-temporal Deep Learning Architecture for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the PE-RESA model in the Lane detection with Position Embedding paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the FOLOLane(ERFNet) model in the Focus on Local: Detecting Lane Marker from Bottom Up via Key Point paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CLRNet(ResNet-34) model in the CLRNet: Cross Layer Refinement Network for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CLRNet(ResNet-18) model in the CLRNet: Cross Layer Refinement Network for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the RESA model in the RESA: Recurrent Feature-Shift Aggregator for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CANet-L(ResNet101) model in the CANet: Curved Guide Line Network with Adaptive Decoder for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CANet-M model in the CANet: Curved Guide Line Network with Adaptive Decoder for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the ENet-SAD model in the Learning Lightweight Lane Detection CNNs by Self Attention Distillation paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the HarD-SP model in the Towards Lightweight Lane Detection by Optimizing Spatial Embedding paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CANet-S model in the CANet: Curved Guide Line Network with Adaptive Decoder for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CondLaneNet-L(ResNet-101) model in the CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the Oblique Convolution model in the paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the Pairwise pixel supervision + FCN model in the Learning to Cluster for Proposal-Free Instance Segmentation paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the EL-GAN model in the EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the LaneNet model in the Towards End-to-End Lane Detection: an Instance Segmentation Approach paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the Discriminative loss function model in the Semantic Instance Segmentation with a Discriminative Loss Function paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the ENet-Label model in the Agnostic Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the R-34-E2E model in the End-to-End Lane Marker Detection via Row-wise Classification paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the LSTR model in the End-to-end Lane Shape Prediction with Transformers paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the R-50-E2E model in the End-to-End Lane Marker Detection via Row-wise Classification paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the LaneATT (ResNet-122) model in the Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the ERF-E2E model in the End-to-End Lane Marker Detection via Row-wise Classification paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the Lane-LSQ model in the paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the BézierLaneNet (ResNet-34) model in the Rethinking Efficient Lane Detection via Curve Modeling paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the LaneAF model in the LaneAF: Robust Multi-Lane Detection with Affinity Fields paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the LaneATT (ResNet-34) model in the Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the Eigenlanes (ResNet-18) model in the Eigenlanes: Data-Driven Lane Descriptors for Structurally Diverse Lanes paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the LaneATT (ResNet-18) model in the Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CondLaneNet(ResNet-18) model in the CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the BézierLaneNet (ResNet-18) model in the Rethinking Efficient Lane Detection via Curve Modeling paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CondLaneNet-M(ResNet-34) model in the CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the End-to-end ERFNet model in the Lane Detection and Classification using Cascaded CNNs paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the ERFNet model in the paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the PolyLaneNet model in the PolyLaneNet: Lane Estimation via Deep Polynomial Regression paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the GANet(ResNet-34) model in the A Keypoint-based Global Association Network for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the GANet(ResNet-18) model in the A Keypoint-based Global Association Network for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
What metrics were used to measure the CLRNet(ResNet-101) model in the CLRNet: Cross Layer Refinement Network for Lane Detection paper on the TuSimple dataset? | Accuracy, F1 score, F1-Measure |
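The rows above are pipe-delimited `(prompt, metrics_response)` pairs. A minimal sketch of how they could be parsed into Python tuples, assuming the table rows are available as plain strings (the row texts below are abbreviated examples, not a loader for any particular file):

```python
def parse_rows(lines):
    """Split pipe-delimited table rows into (prompt, metrics) pairs.

    Skips blank lines, the header separator, and any line that does not
    contain exactly one '|' delimiter.
    """
    pairs = []
    for line in lines:
        line = line.strip().strip("|").strip()
        if not line or set(line) <= {"-", "|", " "}:
            continue  # blank line or the |---|---| separator
        parts = line.split("|")
        if len(parts) != 2:
            continue  # not a two-column data row
        prompt, metrics = (p.strip() for p in parts)
        pairs.append((prompt, metrics))
    return pairs


# Abbreviated sample rows in the same format as the table above.
sample = [
    "|---|---|",
    "What metrics were used for the UR model on UCF101? | Top-1 Accuracy, Top-5 accuracy",
    "What metrics were used for the CLRNet (DLA-34) model on LLAMAS? | F1",
]
print(parse_rows(sample))
```

Each metrics string can then be split on `", "` if individual metric names are needed.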