| prompts | metrics_response |
|---|---|
What metrics were used to measure the intersect model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Wikipedia model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Multitask DPR + BART model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the RAG model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BART + DPR model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BERT + DPR model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the TABi model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the chriskuei model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the GENRE model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Sphere model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BART model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the multi-task small model in the paper on the KILT: HotpotQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Hindsight model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Re2G model in the Re2G: Retrieve, Rerank, Generate paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the intersect model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the KGI model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the RAG model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Wikipedia model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Multitask DPR + BART model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Routing Transformer, c-REALM model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the BART + DPR model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the multitask model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the TransMemNet model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the chriskuei model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the GENRE model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the TABi model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the aa_evalai model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Sphere model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the bart-base model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the multi-task small model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the BART model in the paper on the KILT: Wizard of Wikipedia dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the PTv2 model in the Point Transformer V2: Grouped Vector Attention and Partition-based Pooling paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the SphereFormer model in the Spherical Transformer for LiDAR-based 3D Recognition paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the SPVCNN++ model in the Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the 2DPASS model in the 2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the RangeFormer model in the Rethinking Range View Representation for LiDAR Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the 2D3DNet model in the Learning 3D Semantic Segmentation with only 2D Image Supervision paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the GU-Net model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the DRINet++: Efficient Voxel-as-point Point Cloud Segmentation model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the AF2S3Net model in the (AF)2-S3Net: Attentive Feature Fusion with Adaptive Feature Selection for Sparse Semantic Segmentation Network paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the Cylinder3D++ model in the Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the CPFusion model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the SPVNAS model in the Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the AMVNet model in the AMVNet: Assertion-based Multi-View Fusion Network for LiDAR Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the Cylinder3D+InstanceAug model in the Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the PMF-ResNet50 model in the Perception-Aware Multi-Sensor Fusion for 3D LiDAR Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the LIFusion model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the Point-to-Voxel KD model in the Point-to-Voxel Knowledge Distillation for LiDAR Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the GFNet model in the GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the OrangeNet model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the JS3C-Net model in the Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the Open_world_incremental model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the PolarStream-1 model in the PolarStream: Streaming Lidar Object Detection and Segmentation with Polar Pillars paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the DB-Unet model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the Vis-PolarNet model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the PolarNet model in the PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the EfficientLPS model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the Camera-LiDAR Fusion + HTC + Map model in the paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the WaffleIron model in the Using a Waffle Iron for Automotive Point Cloud Semantic Segmentation paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the PPT+SparseUNet model in the Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training paper on the nuScenes dataset? | test mIoU, val mIoU |
What metrics were used to measure the FKAConv model in the FKAConv: Feature-Kernel Alignment for Point Cloud Convolution paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the Feature Geometric Net (FG Net) model in the FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network Leveraging Correlated Feature Mining and Geometric-Aware Modelling paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the GeomGCNN model in the Exploiting Local Geometry for Feature and Graph Construction for Better 3D Point Cloud Processing with Graph Neural Networks paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the ConvPoint model in the ConvPoint: Continuous Convolutions for Point Cloud Processing paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the KPConv deform model in the KPConv: Flexible and Deformable Convolution for Point Clouds paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the ConvPoint_Keras model in the ConvPoint: Continuous Convolutions for Point Cloud Processing paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the Paris-Lille-3D model in the Paris-Lille-3D: a large and high-quality ground truth urban point cloud dataset for automatic segmentation and classification paper on the Paris-Lille-3D dataset? | mIOU |
What metrics were used to measure the AF2S3Net model in the paper on the SemanticKITTI dataset? | mIOU, Mean IoU |
What metrics were used to measure the MPF model in the Multi Projection Fusion for Real-time Semantic Segmentation of 3D LiDAR Point Clouds paper on the SemanticKITTI dataset? | mIOU, Mean IoU |
What metrics were used to measure the TFNet model in the TFNet: Exploiting Temporal Cues for Fast and Accurate LiDAR Semantic Segmentation paper on the SemanticKITTI dataset? | mIOU, Mean IoU |
What metrics were used to measure the Residual Shuffle-Exchange network model in the Residual Shuffle-Exchange Networks for Fast Processing of Long Sequences paper on the MusicNet dataset? | APS, Number of params |
What metrics were used to measure the Complex Transformer model in the Complex Transformer: A Framework for Modeling Complex-Valued Sequence paper on the MusicNet dataset? | APS, Number of params |
What metrics were used to measure the Deep Complex Network model in the Deep Complex Networks paper on the MusicNet dataset? | APS, Number of params |
What metrics were used to measure the Concatenated Transformer model in the Complex Transformer: A Framework for Modeling Complex-Valued Sequence paper on the MusicNet dataset? | APS, Number of params |
What metrics were used to measure the Deep Real Network model in the Deep Complex Networks paper on the MusicNet dataset? | APS, Number of params |
What metrics were used to measure the CNN (64 stride) model in the Learning Features of Music from Scratch paper on the MusicNet dataset? | APS, Number of params |
What metrics were used to measure the CTVIS (Swin-L) model in the CTVIS: Consistent Training for Online Video Instance Segmentation paper on the Youtube-VIS 2022 Validation dataset? | mAP_L, AP50_L, AP75_L, AR1_L, AR10_L |
What metrics were used to measure the DVIS(Swin-L) model in the DVIS: Decoupled Video Instance Segmentation Framework paper on the Youtube-VIS 2022 Validation dataset? | mAP_L, AP50_L, AP75_L, AR1_L, AR10_L |
What metrics were used to measure the CTVIS (ResNet-50) model in the CTVIS: Consistent Training for Online Video Instance Segmentation paper on the Youtube-VIS 2022 Validation dataset? | mAP_L, AP50_L, AP75_L, AR1_L, AR10_L |
What metrics were used to measure the InstanceFormer (Swin) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the Youtube-VIS 2022 Validation dataset? | mAP_L, AP50_L, AP75_L, AR1_L, AR10_L |
What metrics were used to measure the InstanceFormer (Resnet-50) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the Youtube-VIS 2022 Validation dataset? | mAP_L, AP50_L, AP75_L, AR1_L, AR10_L |
What metrics were used to measure the PCAN model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the QDTrack-mots-fix model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the QDTrack-mots model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the MaskTrackRCNN model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the STEm-Seg model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the SortIoU model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the BDD100K val dataset? | mMOTSA |
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the NOVIS (Swin-L) model in the NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DVIS(Swin-L) model in the DVIS: Decoupled Video Instance Segmentation Framework paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the Tube-Link(Swin-L) model in the Tube-Link: A Flexible Cross Tube Framework for Universal Video Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the RefineVIS (Swin-L, offline) model in the RefineVIS: Video Instance Segmentation with Temporal Attention Refinement paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the IDOL (Swin-L) model in the In Defense of Online Models for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the GenVIS (Swin-L) model in the A Generalized Framework for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the VITA (Swin-L) model in the VITA: Video Instance Segmentation via Object Token Association paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the MinVIS (Swin-L) model in the MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |