prompts | metrics_response |
|---|---|
What metrics were used to measure the CFBI+ (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the ViTAE-T-Stage model in the ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the CFBI (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the RMN (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the AOC-MF (val) model in the Towards Robust Video Object Segmentation with Adaptive Object Calibration paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the STM (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the 3DC-Seg model in the Making a Case for 3D Convolutions for Object Segmentation in Videos paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the DFNet model in the Learning Discriminative Feature with CRF for Unsupervised Video Object Segmentation paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the Ours model in the Learning Discriminative Feature with CRF for Unsupervised Video Object Segmentation paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the VOSwL (Mask+Language) model in the Video Object Segmentation with Language Referring Expressions paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the VOSwL (Language) model in the Video Object Segmentation with Language Referring Expressions paper on the DAVIS 2016 dataset? | J&F, F-Score, Jaccard (Mean), mIoU |
What metrics were used to measure the AOC-MF (val) model in the Towards Robust Video Object Segmentation with Adaptive Object Calibration paper on the DAVIS 2017 dataset? | Jaccard (Mean), mIoU, J&F, F-Score |
What metrics were used to measure the ViTAE-T-Stage model in the ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias paper on the DAVIS 2017 dataset? | Jaccard (Mean), mIoU, J&F, F-Score |
What metrics were used to measure the VOSwL (Mask+Language) model in the Video Object Segmentation with Language Referring Expressions paper on the DAVIS 2017 dataset? | Jaccard (Mean), mIoU, J&F, F-Score |
What metrics were used to measure the UniTrack model in the Do Different Tracking Tasks Require Different Appearance Models? paper on the DAVIS 2017 dataset? | Jaccard (Mean), mIoU, J&F, F-Score |
What metrics were used to measure the DINO (ViT-B/8, ImageNet retrain) model in the Emerging Properties in Self-Supervised Vision Transformers paper on the DAVIS 2017 dataset? | Jaccard (Mean), mIoU, J&F, F-Score |
What metrics were used to measure the DFNet model in the Learning Discriminative Feature with CRF for Unsupervised Video Object Segmentation paper on the FBMS dataset? | F-Score, Jaccard (Mean) |
What metrics were used to measure the ours model in the Multi-Source Fusion and Automatic Predictor Selection for Zero-Shot Video Object Segmentation paper on the FBMS dataset? | F-Score, Jaccard (Mean) |
What metrics were used to measure the XMem (BL30K, MS) model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the ISVOS (BL30K) model in the Look Before You Match: Instance Understanding Matters in Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the ISVOS model in the Look Before You Match: Instance Understanding Matters in Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the XMem model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the BATMAN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the AOT model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the RPCMVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the STCN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the CFBI+ model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the LCM model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the TransVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the SST model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the LWL model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the KMN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the AFB-URR model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the STM model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the RMN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the CFBI model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the SST (Local) model in the SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the S2S (offline) model in the YouTube-VOS: Sequence-to-Sequence Video Object Segmentation paper on the YouTube-VOS 2018 dataset? | Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen) |
What metrics were used to measure the BATMAN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the AOT model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the RPCMVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the LCM model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the KMN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the TransVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the STCN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the RMN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the CFBI+ model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the CFBI model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (test-dev) dataset? | Jaccard, F-measure, Mean Jaccard & F-Measure |
What metrics were used to measure the XMem (BL30K, MS) model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the XMem model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the BATMAN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the STCN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the AOT model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the TransVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the RPCMVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the RMN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the CFBI+ model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the KMN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the SST model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the CFBI model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the LWL model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the AFB-URR model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the LCM model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the STM model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | Mean Jaccard & F-Measure, F-measure, Jaccard |
What metrics were used to measure the VarMAE model in the VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding paper on the EBM-NLP dataset? | F1 |
What metrics were used to measure the PubMedBERT uncased model in the Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing paper on the EBM-NLP dataset? | F1 |
What metrics were used to measure the SciBERT (SciVocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the EBM-NLP dataset? | F1 |
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the EBM-NLP dataset? | F1 |
What metrics were used to measure the bi-LSTM model in the A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature paper on the EBM-NLP dataset? | F1 |
What metrics were used to measure the Sad model in the SAD: Saliency-based Defenses Against Adversarial Examples paper on the 1B Words dataset? | 10 Hops |
What metrics were used to measure the random forest model in the Machine learning and chord based feature engineering for genre prediction in popular Brazilian music paper on the chords dataset? | Accuracy |
What metrics were used to measure the FQ-ViT (ViT-L) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (ViT-B) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (Swin-B) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (Swin-S) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (DeiT-B) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (Swin-T) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (DeiT-S) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the Xception W8A8 model in the HPTQ: Hardware-Friendly Post Training Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the ADLIK-MO-ResNet50-W4A4 model in the Learned Step Size Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the ADLIK-MO-ResNet50-W3A4 model in the Learned Step Size Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the EfficientNet-B0 ReLU W8A8 model in the HPTQ: Hardware-Friendly Post Training Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the ResNet50-W4A4 (paper) model in the Learned Step Size Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the EfficientNet-B0-W8A8 model in the HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the EfficientNet-B0-W4A4 model in the HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the ResNet50-W3A4 model in the HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the EfficientNet-B0 W8A8 model in the HPTQ: Hardware-Friendly Post Training Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the MPT (80) +BN model in the Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the EfficientNet-W4A4 model in the LSQ+: Improving low-bit quantization through learnable offsets and better initialization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the DenseNet-121 W8A8 model in the HPTQ: Hardware-Friendly Post Training Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the MixNet-W4A4 model in the LSQ+: Improving low-bit quantization through learnable offsets and better initialization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the FQ-ViT (DeiT-T) model in the FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the UniQ (Ours) model in the Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the MobileNetV2 W8A8 model in the HPTQ: Hardware-Friendly Post Training Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the MobileNetV2 model in the HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the MobileNet-v1 + EWGS + R2Loss model in the R^2: Range Regularization for Model Compression and Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the MobileNet-v1 + LSQ + R2Loss model in the R^2: Range Regularization for Model Compression and Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the ResNet-18 + PACT + R2Loss model in the R^2: Range Regularization for Model Compression and Quantization paper on the ImageNet dataset? | Top-1 Accuracy (%), Weight bits, Activation bits |
What metrics were used to measure the SSD ResNet50 V1 FPN 640x640 model in the HPTQ: Hardware-Friendly Post Training Quantization paper on the COCO dataset? | MAP |
What metrics were used to measure the model in the QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization paper on the CFP-FP dataset? | Accuracy |
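The Jaccard (J), F-measure (F), and J&F values that recur throughout these rows can be sketched as follows. This is a minimal illustration, not the official DAVIS evaluation code: J is mask IoU, F is shown here as the harmonic mean of precision and recall over pixel sets (the benchmark's F-measure is computed on mask boundaries, which is omitted for brevity), and J&F is their mean.

```python
# Hedged sketch of the video-object-segmentation metrics named in this dataset.
# Masks are modeled as sets of pixel indices for simplicity.

def jaccard(pred: set, gt: set) -> float:
    """Region similarity J = |pred ∩ gt| / |pred ∪ gt| (mask IoU)."""
    if not pred and not gt:
        return 1.0  # both masks empty: perfect agreement by convention
    return len(pred & gt) / len(pred | gt)

def f_measure(pred: set, gt: set) -> float:
    """Harmonic mean of precision and recall over pixel sets.
    (The DAVIS F-measure is boundary-based; this is a simplified stand-in.)"""
    if not pred or not gt:
        return 0.0
    precision = len(pred & gt) / len(pred)
    recall = len(pred & gt) / len(gt)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def j_and_f(pred: set, gt: set) -> float:
    """J&F: the arithmetic mean of J and F, the headline DAVIS metric."""
    return (jaccard(pred, gt) + f_measure(pred, gt)) / 2
```

For example, with `pred = {1, 2, 3}` and `gt = {2, 3, 4}`, `jaccard` gives 0.5 (2 shared pixels over a union of 4) and `f_measure` gives 2/3.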