| prompts | metrics_response |
|---|---|
What metrics were used to measure the ReActNet-18 model in the "BNN - BN = ?": Training Binary Neural Networks without Batch Normalization paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the VDN model in the Training Very Deep Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the DCNN+GFE model in the Deep Convolutional Neural Networks as Generic Feature Extractors paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Tree+Max-Avg pooling model in the Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the HD-CNN model in the HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Universum Prescription model in the Universum Prescription: Regularization using Unlabeled Data paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the ResNet50 Without Transfer Learning model in the ResNet50_on_Cifar_100_Without_Transfer_Learning paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the AlexNet (KP) model in the Learning the Connections in Direct Feedback Alignment paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the ACN model in the Striving for Simplicity: The All Convolutional Net paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the DLME (ResNet-18, linear) model in the DLME: Deep Local-flatness Manifold Embedding paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the ResNet-18 (modified) model in the FatNet: High Resolution Kernels for Classification Using Fully Convolutional Optical Neural Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the DSN model in the Deeply-Supervised Nets paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the NiN model in the Network In Network paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Tree Priors model in the Discriminative Transfer Learning with Tree-based Priors paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the DNN+Probabilistic Maxout model in the Improving Deep Neural Networks with Probabilistic Maxout Units paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Maxout Network (k=2) model in the Maxout Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the ResNet20+UnsharpMaskLayer model in the Unsharp Masking Layer: Injecting Prior Knowledge in Convolutional Networks for Image Classification paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Convolutional Linear Transformer for Vision (CLTV) model in the Convolutional Xformers for Vision paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the FatNet of ResNet-18 model in the FatNet: High Resolution Kernels for Classification Using Fully Convolutional Optical Neural Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Optical Simulation of FatNet model in the FatNet: High Resolution Kernels for Classification Using Fully Convolutional Optical Neural Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the RReLU model in the Empirical Evaluation of Rectified Activations in Convolutional Network paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Stochastic Pooling model in the Stochastic Pooling for Regularization of Deep Convolutional Neural Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the Sign-symmetry model in the How Important is Weight Symmetry in Backpropagation? paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the AlexNet (DFA) model in the Learning the Connections in Direct Feedback Alignment paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the CNN39 model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the CNN36 model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the CNN37 model in the Sharpness-aware Quantization for Deep Neural Networks paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the AlexNet (FA) model in the Learning the Connections in Direct Feedback Alignment paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the efficient adaptive ensembling model in the Efficient Adaptive Ensembling for Image Classification paper on the CIFAR-100 dataset? | Percentage correct, PARAMS, Accuracy |
What metrics were used to measure the AP-GeM (ResNet-101) model in the AmsterTime: A Visual Place Recognition Benchmark Dataset for Severe Domain Shift paper on the AmsterTime dataset? | Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the ImageNet-Sketch dataset? | Accuracy |
What metrics were used to measure the VGG-5 (Spinal FC) model in the SpinalNet: Deep Neural Network with Gradual Input paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the CAMNet3 model in the Context-Aware Multipath Networks paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the VGG8B(2x) + LocalLearning + CO model in the Training Neural Networks with Local Error Signals paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the CN(d=32) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the NSRL (log D) (d=16) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the CN(d=16) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the Resnet-152 model in the A Comprehensive Study of ImageNet Pre-Training for Historical Document Image Analysis paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the ResNet-14 model in the CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the NSRL (WGAN) (d=32) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the NSRL (WGAN) (d=8) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the NSRL (WGAN) (d=16) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the NSRL (log D) (d=32) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the NSRL (log D) (d=8) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the CN(d=8) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the ResNet-18 model in the Reduction of Class Activation Uncertainty with Background Information paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the PreActResNet-18 + Input Mixup model in the mixup: Beyond Empirical Risk Minimization paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the PreActResNet-18 model in the Identity Mappings in Deep Residual Networks paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the Convolutional Tsetlin Machine model in the The Convolutional Tsetlin Machine paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the KerCNN model in the KerCNNs: biologically inspired lateral connections for classification of corrupted images paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the linear/flexible model model in the Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the FWD model in the Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the Complementary-Label Learning model in the Complementary-Label Learning for Arbitrary Losses and Models paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the ResNet18 + VGG Ensemble model in the Deep Learning for Classical Japanese Literature paper on the Kuzushiji-MNIST dataset? | Accuracy, Error |
What metrics were used to measure the ResNet-152 2x (RS training) model in the Revisiting ResNets: Improved Training and Scaling Strategies paper on the PRImA dataset? | Percentage correct |
What metrics were used to measure the PCGAN-CHAR model in the PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters paper on the Noisy MNIST (Contrast) dataset? | Accuracy |
What metrics were used to measure the Pixel-level RC model in the Pixel-level Reconstruction and Classification for Noisy Handwritten Bangla Characters paper on the Noisy MNIST (Contrast) dataset? | Accuracy |
What metrics were used to measure the BiT-L (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the ObjectNet (Bounding Box) dataset? | Top 5 Accuracy |
What metrics were used to measure the BiT-M (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the ObjectNet (Bounding Box) dataset? | Top 5 Accuracy |
What metrics were used to measure the BiT-S (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the ObjectNet (Bounding Box) dataset? | Top 5 Accuracy |
What metrics were used to measure the ResNet-152 model in the ObjectNet Dataset: Reanalysis and Correction paper on the ObjectNet (Bounding Box) dataset? | Top 5 Accuracy |
What metrics were used to measure the MentorMix model in the Faster Meta Update Strategy for Noise-Robust Deep Learning paper on the CIFAR-10, 60% Symmetric Noise dataset? | Percentage correct |
What metrics were used to measure the FaMUS model in the Faster Meta Update Strategy for Noise-Robust Deep Learning paper on the CIFAR-10, 60% Symmetric Noise dataset? | Percentage correct |
What metrics were used to measure the LRA-diffusion (CLIP ViT) model in the Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels paper on the Food-101N dataset? | Accuracy |
What metrics were used to measure the CleanNet model in the CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise paper on the Food-101N dataset? | Accuracy |
What metrics were used to measure the LongReMix model in the LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment paper on the Food-101N dataset? | Accuracy |
What metrics were used to measure the NCR (ResNet-18) model in the Learning with Neighbor Consistency for Noisy Labels paper on the Red MiniImageNet 20% label noise dataset? | Accuracy |
What metrics were used to measure the PropMix model in the PropMix: Hard Sample Filtering and Proportional MixUp for Learning with Noisy Labels paper on the Red MiniImageNet 20% label noise dataset? | Accuracy |
What metrics were used to measure the InstanceGM-SS model in the Instance-Dependent Noisy Label Learning via Graphical Modelling paper on the Red MiniImageNet 20% label noise dataset? | Accuracy |
What metrics were used to measure the InstanceGM model in the Instance-Dependent Noisy Label Learning via Graphical Modelling paper on the Red MiniImageNet 20% label noise dataset? | Accuracy |
What metrics were used to measure the FaMUS model in the Faster Meta Update Strategy for Noise-Robust Deep Learning paper on the Red MiniImageNet 20% label noise dataset? | Accuracy |
What metrics were used to measure the Fine-Tuning DARTS model in the Fine-Tuning DARTS for Image Classification paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Shake-Shake (SAM) model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the PreAct-ResNet18 + FMix model in the FMix: Enhancing Mixed Sample Data Augmentation paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Random Erasing model in the Random Erasing Data Augmentation paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the E2E-3M model in the Rethinking Recurrent Neural Networks and Other Improvements for Image Classification paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the VGG8B(2x) + LocalLearning + CO model in the Training Neural Networks with Local Error Signals paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Inception v3 model in the CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the WaveMixLite model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Local Mixup DenseNet model in the Preventing Manifold Intrusion with Locality: Local Mixup paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the TextCaps model in the TextCaps : Handwritten Character Recognition with Very Small Datasets paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the StiDi-BP in R-CSNN model in the Spike time displacement based error backpropagation in convolutional spiking neural networks paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the NeuPDE model in the NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Star Algorithm on LeNet model in the Star algorithm for NN ensembling paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Convolutional Tsetlin Machine model in the The Convolutional Tsetlin Machine paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the OTTT model in the Online Training Through Time for Spiking Neural Networks paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the pFedBreD_ns_mg model in the Personalized Federated Learning with Hidden Information on Personalized Prior paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the FastSNN (CNN) model in the Robust and accelerated single-spike spiking neural network training with applicability to challenging temporal tasks paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the FastSNN (MLP) model in the Robust and accelerated single-spike spiking neural network training with applicability to challenging temporal tasks paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Sparse Spiking Gradient Descent (CNN) model in the Sparse Spiking Gradient Descent paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the Sparse Spiking Gradient Descent (MLP) model in the Sparse Spiking Gradient Descent paper on the Fashion-MNIST dataset? | Percentage error, Accuracy |
What metrics were used to measure the SWAG (ViT H/14) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the Places365-Standard dataset? | Top 1 Accuracy |
What metrics were used to measure the Hiera-H (448px) model in the Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles paper on the Places365-Standard dataset? | Top 1 Accuracy |
What metrics were used to measure the MAE (ViT-H, 448) model in the Masked Autoencoders Are Scalable Vision Learners paper on the Places365-Standard dataset? | Top 1 Accuracy |
What metrics were used to measure the WaveMix-240/12 (level 4) model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the Places365-Standard dataset? | Top 1 Accuracy |
What metrics were used to measure the Hiera-H (448px) model in the Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles paper on the iNaturalist 2019 dataset? | Top-1 Accuracy |
What metrics were used to measure the MAE (ViT-H, 448) model in the Masked Autoencoders Are Scalable Vision Learners paper on the iNaturalist 2019 dataset? | Top-1 Accuracy |
What metrics were used to measure the Grafit (RegnetY 8GF) model in the Grafit: Learning fine-grained image representations with coarse labels paper on the iNaturalist 2019 dataset? | Top-1 Accuracy |
What metrics were used to measure the MixMIM-L model in the MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers paper on the iNaturalist 2019 dataset? | Top-1 Accuracy |
What metrics were used to measure the Conviformer-B model in the Conviformers: Convolutionally guided Vision Transformer paper on the iNaturalist 2019 dataset? | Top-1 Accuracy |