prompts: string, lengths 81–413
metrics_response: string, lengths 0–371
What metrics were used to measure the VIP-GAN model in the View Inter-Prediction GAN: Unsupervised Representation Learning for 3D Shapes by Learning Global Shape Memories to Support Local View Predictions paper on the ModelNet40 dataset?
Overall Accuracy
What metrics were used to measure the OcCo model in the Unsupervised Point Cloud Pre-Training via Occlusion Completion paper on the ModelNet40 dataset?
Overall Accuracy
What metrics were used to measure the FoldingNet model in the FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation paper on the ModelNet40 dataset?
Overall Accuracy
What metrics were used to measure the MAP-VAE model in the Multi-Angle Point Cloud-VAE: Unsupervised Feature Learning for 3D Point Clouds from Multiple Angles by Joint Self-Reconstruction and Half-to-Half Prediction paper on the ModelNet40 dataset?
Overall Accuracy
What metrics were used to measure the SO-Net model in the SO-Net: Self-Organizing Network for Point Cloud Analysis paper on the ModelNet40 dataset?
Overall Accuracy
What metrics were used to measure the 3D-GAN model in the Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling paper on the ModelNet40 dataset?
Overall Accuracy
What metrics were used to measure the XYZNet model in the Predicting 3D shapes, masks, and properties of materials, liquids, and objects inside transparent containers, using the TransProteus CGI dataset paper on the TransProteus dataset?
R2
What metrics were used to measure the 3D Magic Mirror model in the 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective paper on the ATR dataset?
FID
What metrics were used to measure the NU-MCC model in the NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF paper on the Common Objects in 3D dataset?
Avg. F1
What metrics were used to measure the MCC model in the Multiview Compressive Coding for 3D Reconstruction paper on the Common Objects in 3D dataset?
Avg. F1
What metrics were used to measure the PoinTr model in the PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers paper on the Common Objects in 3D dataset?
Avg. F1
What metrics were used to measure the 3D Magic Mirror model in the 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective paper on the Market-HQ dataset?
FID
What metrics were used to measure the SDFDiff model in the SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the Zubić & Liò model in the An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the SoftRas (full) model in the Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the DIB-R model in the Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the NMR [19] model in the Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the voxel [47] model in the Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the retrieval [47] model in the Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning paper on the ShapeNet dataset?
3DIoU, F-Score
What metrics were used to measure the DISN model in the DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction paper on the ShapeNetCore dataset?
3DIoU
What metrics were used to measure the OccNet model in the DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction paper on the ShapeNetCore dataset?
3DIoU
What metrics were used to measure the IMNET model in the DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction paper on the ShapeNetCore dataset?
3DIoU
What metrics were used to measure the 3DN model in the DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction paper on the ShapeNetCore dataset?
3DIoU
What metrics were used to measure the Pxl2mesh model in the DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction paper on the ShapeNetCore dataset?
3DIoU
What metrics were used to measure the AtlasNet model in the DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction paper on the ShapeNetCore dataset?
3DIoU
What metrics were used to measure the 3D Magic Mirror model in the 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective paper on the CUB-200-2011 dataset?
FID
What metrics were used to measure the Text-davinci-002 (175B) (zero-shot-cot) model in the Large Language Models are Zero-Shot Reasoners paper on the MultiArith dataset?
Accuracy
What metrics were used to measure the Text-davinci-002 (175B) (zero-shot) model in the Large Language Models are Zero-Shot Reasoners paper on the MultiArith dataset?
Accuracy
What metrics were used to measure the GPT-4 Code Interpreter (CSV, K=5) model in the Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (Model Selection, SC K=15) model in the Automatic Model Selection with Large Language Models for Reasoning paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (PHP, SC K=40) model in the Progressive-Hint Prompting Improves Reasoning in Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (Model Selection, SC K=5) model in the Automatic Model Selection with Large Language Models for Reasoning paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (PHP) model in the Progressive-Hint Prompting Improves Reasoning in Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (Self-Refine, k=8, PaL) model in the Self-Refine: Iterative Refinement with Self-Feedback paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (PaL, k=8) model in the PAL: Program-aided Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 (few-shot, k=5, CoT) model in the GPT-4 Technical Report paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 2 (few-shot, k=8, SC) model in the PaLM 2 Technical Report paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the ToRA-70B (SC, k=50) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the DeepMind 70B Model (SFT+ORM-RL, ORM reranking) model in the Solving math word problems with process- and outcome-based feedback paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the DeepMind 70B Model (SFT+PRM-RL, PRM reranking) model in the Solving math word problems with process- and outcome-based feedback paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-4 model in the Sparks of Artificial General Intelligence: Early experiments with GPT-4 paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Codex (Self-Evaluation Guided Decoding, PAL, multiple reasoning chains, 9-shot gen, 5-shot eval) model in the Self-Evaluation Guided Beam Search for Reasoning paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the ToRA-Code-34B (SC, k=50) model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Codex (LEVER, 8-shot) model in the LEVER: Learning to Verify Language-to-Code Generation with Execution paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the ToRA 70B model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the DIVERSE 175B (8-shot) model in the Making Large Language Models Better Reasoners with Step-Aware Verifier paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MetaMath 70B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MuggleMATH 70B model in the Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (Self Improvement, Self Consistency) model in the Large Language Models Can Self-Improve paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the ToRA-Code 34B model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 2 (few-shot, k=8, CoT) model in the PaLM 2 Technical Report paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Self-Evaluation Guided Decoding (Codex, PAL, single reasoning chain, 9-shot gen, 5-shot eval) model in the Self-Evaluation Guided Beam Search for Reasoning paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Minerva 540B-maj1@k (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MetaMath-Mistral-7B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the DeepMind 70B Model (STaR, maj1@96) model in the Solving math word problems with process- and outcome-based feedback paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the ToRA-Code 13B model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Arithmo-Mistral-7B model in the paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B maj1@40 (8-shot) model in the Self-Consistency Improves Chain of Thought Reasoning in Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (Self Consistency) model in the Large Language Models Can Self-Improve paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MuggleMATH 13B model in the Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the CodeT5+ model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (Self Improvement, CoT Prompting) model in the Large Language Models Can Self-Improve paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the KwaiYiiMath 13B model in the KwaiYiiMath: Technical Report paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the ToRA-Code 7B model in the ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Self-Evaluation Guided Decoding (Codex, CoT, single reasoning chain, 9-shot gen, 5-shot eval) model in the Self-Evaluation Guided Beam Search for Reasoning paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MetaMath 13B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 65B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Minerva 62B-maj1@100 (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MuggleMATH 7B model in the Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the code-davinci-002 (Least-to-Most Prompting) model in the Least-to-Most Prompting Enables Complex Reasoning in Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the MetaMath 7B model in the MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the RFT 70B model in the Scaling Relationship on Learning Mathematical Reasoning with Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-J(CoRe) model in the Solving Math Word Problems via Cooperative Reasoning induced Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Minerva 540B (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the U-PaLM model in the Transcending Scaling Laws with 0.1% Extra Compute paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM-540B (few-shot-cot) model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-3.5 (few-shot, k=5) model in the GPT-4 Technical Report paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 2 70B (one-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (CoT Prompting) model in the Large Language Models Can Self-Improve paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the RFT 13B model in the Scaling Relationship on Learning Mathematical Reasoning with Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Finetuned GPT-3 175B + verifier model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 33B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Minerva 62B (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Text-davinci-002-175B (zero-plus-few-shot-cot (8 samples)) model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the RFT 7B model in the Scaling Relationship on Learning Mathematical Reasoning with Large Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 65B model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Text-davinci-002-175B (few-shot-cot (2 samples)) model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Text-davinci-002-175B (zero-shot-cot) model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 33B model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 62B (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (Self Improvement, Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 13B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Minerva 8B-maj1@k (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-Neo-2.7B + Self-Sampling model in the Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 7B-maj1@k model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (few-shot) model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 540B (Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the LLaMA 13B model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Minerva 8B (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)