| prompts | metrics_response |
|---|---|
What metrics were used to measure the Bhāskara-P (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (IID) dataset? | Accuracy |
What metrics were used to measure the Neo-P (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (IID) dataset? | Accuracy |
What metrics were used to measure the GPT-3 (Few-Shot, 175B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (IID) dataset? | Accuracy |
What metrics were used to measure the Bhāskara-A (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (IID) dataset? | Accuracy |
What metrics were used to measure the Neo-A (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (IID) dataset? | Accuracy |
What metrics were used to measure the PGPSNet model in the A Multi-Modal Neural Geometric Solver with Textual Clauses Parsed from Diagram paper on the PGPS9K dataset? | Completion accuracy |
What metrics were used to measure the Inter-GPS model in the Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning paper on the PGPS9K dataset? | Completion accuracy |
What metrics were used to measure the Geoformer model in the UniGeo: Unifying Geometry Logical Reasoning via Reformulating Mathematical Expression paper on the PGPS9K dataset? | Completion accuracy |
What metrics were used to measure the NGS model in the GeoQA: A Geometric Question Answering Benchmark Towards Multimodal Numerical Reasoning paper on the PGPS9K dataset? | Completion accuracy |
What metrics were used to measure the Codex (Few-Shot, 175B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (OOD) dataset? | Accuracy |
What metrics were used to measure the Bhāskara-P (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (OOD) dataset? | Accuracy |
What metrics were used to measure the GPT-3 (Few-Shot, 175B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (OOD) dataset? | Accuracy |
What metrics were used to measure the Bhāskara-A (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (OOD) dataset? | Accuracy |
What metrics were used to measure the Neo-P (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (OOD) dataset? | Accuracy |
What metrics were used to measure the Neo-A (Fine-tuned, 2.7B) model in the Lila: A Unified Benchmark for Mathematical Reasoning paper on the Lila (OOD) dataset? | Accuracy |
What metrics were used to measure the GAL 120B <work> model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 30B <work> model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the Chinchilla (5-shot) model in the Training Compute-Optimal Large Language Models paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the Chinchilla (5-shot) model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the Gopher (5-shot) model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 6.7B <work> model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the OPT (5-shot) model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the BLOOM (5-shot) model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GAL 1.3B <work> model in the Galactica: A Large Language Model for Science paper on the MMLU (Mathematics) dataset? | Accuracy |
What metrics were used to measure the GHN-2 model in the Parameter Prediction for Unseen Deep Architectures paper on the ImageNet dataset? | Top 5 Accuracy (BN-free), Top 5 Accuracy (Deep), Top 5 Accuracy (Dense), Top 5 Accuracy (ID-test), Top 5 Accuracy (ResNet-50), Top 5 Accuracy (ViT), Top 5 Accuracy (Wide) |
What metrics were used to measure the GHN-2 model in the Parameter Prediction for Unseen Deep Architectures paper on the CIFAR10 dataset? | Classification Accuracy (BN-free), Classification Accuracy (Deep), Classification Accuracy (Dense), Classification Accuracy (ID-test), Classification Accuracy (ResNet-50), Classification Accuracy (ViT), Classification Accuracy (Wide) |
What metrics were used to measure the 1D state space model in the A Novel 1D State Space for Efficient Music Rhythmic Analysis paper on the GTZAN dataset? | F1 |
What metrics were used to measure the BeatNet model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the GTZAN dataset? | F1 |
What metrics were used to measure the Böck - Forward model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the GTZAN dataset? | F1 |
What metrics were used to measure the DLB model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the GTZAN dataset? | F1 |
What metrics were used to measure the IBT model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the GTZAN dataset? | F1 |
What metrics were used to measure the Böck - ACF model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the GTZAN dataset? | F1 |
What metrics were used to measure the Aubio model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the GTZAN dataset? | F1 |
What metrics were used to measure the BeatNet model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the Rock Corpus dataset? | F1 |
What metrics were used to measure the IBT model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the Rock Corpus dataset? | F1 |
What metrics were used to measure the Aubio model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the Rock Corpus dataset? | F1 |
What metrics were used to measure the BeatNet model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the Ballroom dataset? | F1 |
What metrics were used to measure the IBT model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the Ballroom dataset? | F1 |
What metrics were used to measure the Aubio model in the BeatNet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking paper on the Ballroom dataset? | F1 |
What metrics were used to measure the ConE model in the Modeling Heterogeneous Hierarchies with Relation-specific Hyperbolic Cones paper on the WN18RR dataset? | mAP-0%, mAP-50%, mAP-100% |
What metrics were used to measure the DeepFont (S) model in the DeepFont: Identify Your Font from An Image paper on the AdobeVFR syn dataset? | Top 1 Accuracy, Top 5 Accuracy, Top-1 Error Rate, Top 5 Error Rate |
What metrics were used to measure the HENet (ResNet18+HE Block) model in the HENet: Forcing a Network to Think More for Font Recognition paper on the AdobeVFR syn dataset? | Top 1 Accuracy, Top 5 Accuracy, Top-1 Error Rate, Top 5 Error Rate |
What metrics were used to measure the DeepFont (CAE_FR) model in the DeepFont: Identify Your Font from An Image paper on the AdobeVFR syn dataset? | Top 1 Accuracy, Top 5 Accuracy, Top-1 Error Rate, Top 5 Error Rate |
What metrics were used to measure the DeepFont (F) model in the DeepFont: Identify Your Font from An Image paper on the AdobeVFR syn dataset? | Top 1 Accuracy, Top 5 Accuracy, Top-1 Error Rate, Top 5 Error Rate |
What metrics were used to measure the FCN model in the FONTNET: On-Device Font Understanding and Prediction Pipeline paper on the AdobeVFR syn dataset? | Top 1 Accuracy, Top 5 Accuracy, Top-1 Error Rate, Top 5 Error Rate |
What metrics were used to measure the DeepFont (CAE_FR) model in the DeepFont: Identify Your Font from An Image paper on the VFR-Wild dataset? | Top 1 Accuracy, Top 5 Error Rate, Top-1 Error Rate, Top 10 Accuracy, Top 5 Accuracy |
What metrics were used to measure the LFE (FS, template model size 2048) model in the Large-Scale Visual Font Recognition paper on the VFR-Wild dataset? | Top 1 Accuracy, Top 5 Error Rate, Top-1 Error Rate, Top 10 Accuracy, Top 5 Accuracy |
What metrics were used to measure the LFE (FS, template model size 2048) model in the Large-Scale Visual Font Recognition paper on the VFR-447 dataset? | Top 1 Accuracy, Top 10 Accuracy, Top 5 Accuracy |
What metrics were used to measure the DeepFont (CAE_FR) model in the DeepFont: Identify Your Font from An Image paper on the AdobeVFR real dataset? | Top 1 Accuracy, Top 5 Accuracy, Top 5 Error Rate, Top-1 Error Rate |
What metrics were used to measure the HENet (ResNet18+HE Block) model in the HENet: Forcing a Network to Think More for Font Recognition paper on the AdobeVFR real dataset? | Top 1 Accuracy, Top 5 Accuracy, Top 5 Error Rate, Top-1 Error Rate |
What metrics were used to measure the Persis model in the Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks paper on the Persian Text Image Segmentation (PTI SEG) dataset? | IOU50 |
What metrics were used to measure the HENet model in the HENet: Forcing a Network to Think More for Font Recognition paper on the Explor_all dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the LFE (FS, template model size 2048) model in the Large-Scale Visual Font Recognition paper on the VFR-2420 dataset? | Top 1 Accuracy, Top 5 Accuracy, Top 10 Accuracy |
What metrics were used to measure the Persis model in the Persis: A Persian Font Recognition Pipeline Using Convolutional Neural Networks paper on the Persian Font Recognition (PFR) dataset? | Top 5 Accuracy |
What metrics were used to measure the DeiT-S (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-A dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (SGD, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-A dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (SGD, Step) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-A dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-A dataset? | Accuracy |
What metrics were used to measure the DeBERTa (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the ALBERT (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the T5 (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the SMART_RoBERTa (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the FreeLB (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the RoBERTa (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the InfoBERT (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the ELECTRA (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the BERT (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the SMART_BERT (single model) model in the Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models paper on the AdvGLUE dataset? | Accuracy |
What metrics were used to measure the DeiT-S (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-C dataset? | mean Corruption Error (mCE) |
What metrics were used to measure the ResNet-50 (SGD, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-C dataset? | mean Corruption Error (mCE) |
What metrics were used to measure the ResNet-50 (SGD, Step) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-C dataset? | mean Corruption Error (mCE) |
What metrics were used to measure the ResNet-50 (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet-C dataset? | mean Corruption Error (mCE) |
What metrics were used to measure the Mixed Classifier model in the Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing paper on the CIFAR-100 dataset? | Clean Accuracy, AutoAttacked Accuracy |
What metrics were used to measure the DeiT-S (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the Stylized ImageNet dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (SGD, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the Stylized ImageNet dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (SGD, Step) model in the Are Transformers More Robust Than CNNs? paper on the Stylized ImageNet dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the Stylized ImageNet dataset? | Accuracy |
What metrics were used to measure the Mixed classifier model in the Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing paper on the CIFAR-10 dataset? | Accuracy, Robust Accuracy, Attack: AutoAttack |
What metrics were used to measure the Stochastic-LWTA/PGD/WideResNet-34-10 model in the Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper on the CIFAR-10 dataset? | Accuracy, Robust Accuracy, Attack: AutoAttack |
What metrics were used to measure the Stochastic-LWTA/PGD/WideResNet-34-5 model in the Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper on the CIFAR-10 dataset? | Accuracy, Robust Accuracy, Attack: AutoAttack |
What metrics were used to measure the ResNet-50 (SGD, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (SGD, Step) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet dataset? | Accuracy |
What metrics were used to measure the DeiT-S (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet dataset? | Accuracy |
What metrics were used to measure the ResNet-50 (AdamW, Cosine) model in the Are Transformers More Robust Than CNNs? paper on the ImageNet dataset? | Accuracy |
What metrics were used to measure the Palette model in the Palette: Image-to-Image Diffusion Models paper on the Places2 val dataset? | FID, PD, Fool rate |
What metrics were used to measure the Boundless model in the Boundless: Generative Adversarial Networks for Image Extension paper on the Places2 val dataset? | FID, PD, Fool rate |
What metrics were used to measure the Palette (QF: 20) model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet dataset? | FID-5K, IS, CA, PD |
What metrics were used to measure the Palette (QF: 10) model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet dataset? | FID-5K, IS, CA, PD |
What metrics were used to measure the Palette (QF: 5) model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet dataset? | FID-5K, IS, CA, PD |
What metrics were used to measure the Regression (QF: 20) model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet dataset? | FID-5K, IS, CA, PD |
What metrics were used to measure the Regression (QF: 10) model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet dataset? | FID-5K, IS, CA, PD |
What metrics were used to measure the Regression (QF: 5) model in the Palette: Image-to-Image Diffusion Models paper on the ImageNet dataset? | FID-5K, IS, CA, PD |
What metrics were used to measure the Hierarchical Neural Networks model in the Hierarchical Neural Networks for Sequential Sentence Classification in Medical Scientific Abstracts paper on the PubMed 20k RCT dataset? | F1 |
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the PubMed 20k RCT dataset? | F1 |
What metrics were used to measure the SciBERT (SciVocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the Paper Field dataset? | F1 |
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the Paper Field dataset? | F1 |