Columns: prompts (string, length 81 to 413 chars); metrics_response (string, length 0 to 371 chars)
What metrics were used to measure the Redcoder-ext model in the Retrieval Augmented Code Generation and Summarization paper on the CONCODE dataset?
Exact Match, BLEU, CodeBLEU
What metrics were used to measure the CodeT5 model in the CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation paper on the CONCODE dataset?
Exact Match, BLEU, CodeBLEU
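Note on the CONCODE metrics: Exact Match (EM) checks whether the generated code equals the reference string, BLEU measures n-gram overlap with the reference, and CodeBLEU additionally rewards keyword, syntax-tree, and data-flow matches. A minimal EM sketch, assuming simple whitespace normalization (the exact normalization step varies by paper):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions identical to the reference after
    collapsing whitespace; individual papers may normalize differently."""
    def norm(code):
        return " ".join(code.split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)
```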
What metrics were used to measure the MarianCG model in the MarianCG: a code generation transformer model inspired by machine translation paper on the Django dataset?
Accuracy, BLEU Score
What metrics were used to measure the TranX + BERT w/mined model in The impact of lexical and grammatical processing on generating code from natural language paper on the Django dataset?
Accuracy, BLEU Score
What metrics were used to measure the BERT + TAE model in the Code Generation from Natural Language with Less Prior and More Monolingual Data paper on the Django dataset?
Accuracy, BLEU Score
What metrics were used to measure the Reranker model in the Reranking for Neural Semantic Parsing paper on the Django dataset?
Accuracy, BLEU Score
What metrics were used to measure the TranX model in the TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation paper on the Django dataset?
Accuracy, BLEU Score
What metrics were used to measure the LPN (Ling et al., 2016) model in the Latent Predictor Networks for Code Generation paper on the Django dataset?
Accuracy, BLEU Score
What metrics were used to measure the Phrasal Statistical MT (Ling et al., 2016) model in the Latent Predictor Networks for Code Generation paper on the Django dataset?
Accuracy, BLEU Score
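Note on the Django metrics: Accuracy here is whole-snippet exact match, while BLEU Score measures n-gram overlap. A hedged sketch of corpus-level BLEU using the sacrebleu package; whether a given paper reports corpus-level or sentence-level BLEU, and with which tokenizer, is paper-specific:

```python
import sacrebleu  # assumes the reference BLEU implementation is acceptable

def corpus_bleu(predictions, references):
    """Corpus-level BLEU with one reference per prediction.
    sacrebleu expects a list of reference streams, hence the wrapping."""
    return sacrebleu.corpus_bleu(predictions, [references]).score
```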
What metrics were used to measure the Redcoder-ext model in the Retrieval Augmented Code Generation and Summarization paper on the CodeXGLUE - CodeSearchNet dataset?
Java/EM, Python/EM, Java/BLEU, Python/BLEU, Java/CodeBLEU, Python/CodeBLEU
What metrics were used to measure the GAP-Gen model in the GAP-Gen: Guided Automatic Python Code Generation paper on the CodeXGLUE - CodeSearchNet dataset?
Java/EM, Python/EM, Java/BLEU, Python/BLEU, Java/CodeBLEU, Python/CodeBLEU
What metrics were used to measure the GPT-J 6B Smart Contract model in the Efficient Avoidance of Vulnerabilities in Auto-completed Smart Contract Code Using Vulnerability-constrained Decoding paper on the Verified Smart Contract Code Comments dataset?
BLEU score
What metrics were used to measure the GPT-J 6B model in the Efficient Avoidance of Vulnerabilities in Auto-completed Smart Contract Code Using Vulnerability-constrained Decoding paper on the Verified Smart Contract Code Comments dataset?
BLEU score
What metrics were used to measure the Entity Type Model in the Building Language Models for Text with Named Entities paper on the Android Repos dataset?
Perplexity
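Note on Perplexity: it is the exponentiated average negative log-likelihood the language model assigns to the held-out token stream, so lower is better. A worked sketch, assuming the per-token natural-log probabilities have already been obtained from the model:

```python
import math

def perplexity(token_log_probs):
    """PPL = exp(-(1/N) * sum_i log p(token_i | context))."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)
```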
What metrics were used to measure the Language Agent Tree Search (GPT-3.5) model in the Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models paper on the MBPP dataset?
Execution Accuracy (Test)
What metrics were used to measure the INTERVENOR model in the INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing paper on the MBPP dataset?
Execution Accuracy (Test)
What metrics were used to measure the LEVER + Codex model in the LEVER: Learning to Verify Language-to-Code Generation with Execution paper on the MBPP dataset?
Execution Accuracy (Test)
What metrics were used to measure the Reviewer + Codex002 model in the Coder Reviewer Reranking for Code Generation paper on the MBPP dataset?
Execution Accuracy (Test)
What metrics were used to measure the MBR-Exec model in the Natural Language to Code Translation with Execution paper on the MBPP dataset?
Execution Accuracy (Test)
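Note on Execution Accuracy (Test): a problem counts as solved when the generated program passes the dataset's assert-style test cases. A minimal, unsandboxed sketch that executes a candidate plus its asserts in a fresh interpreter with a timeout; production harnesses add sandboxing and resource limits:

```python
import subprocess
import sys
import tempfile

def passes_tests(candidate_code, test_asserts, timeout=5.0):
    """Return True if the candidate plus its assert statements exits
    cleanly (code 0) within the timeout. Not safe for untrusted code."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + "\n".join(test_asserts) + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
```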
What metrics were used to measure the PanGu-Coder-FT-I model in the Fine-Tuning Large Language Models for Answering Programming Questions with Code Snippets paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the MarianCG model in the MarianCG: a code generation transformer model inspired by machine translation paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the TranX + BERT w/mined model in The impact of lexical and grammatical processing on generating code from natural language paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the BERT + TAE model in the Code Generation from Natural Language with Less Prior and More Monolingual Data paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the External Knowledge With API + Reranking model in the Incorporating External Knowledge through Pre-training for Natural Language to Code Generation paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the External Knowledge With API model in the Incorporating External Knowledge through Pre-training for Natural Language to Code Generation paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the BART W/ Mined model in the Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the Reranker model in the Reranking for Neural Semantic Parsing paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the BART Base model in the Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the TranX model in the TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation paper on the CoNaLa dataset?
BLEU, Exact Match Accuracy
What metrics were used to measure the Language Agent Tree Search (GPT-4) model in the Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Reflexion (GPT-4) model in the Reflexion: Language Agents with Verbal Reinforcement Learning paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Language Agent Tree Search (GPT-3.5) model in the Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the GPT-4 model in the OctoPack: Instruction Tuning Code Large Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the ANPL (GPT-4) model in the ANPL: Compiling Natural Programs with Interactive Decomposition paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Parsel (GPT-4 + CodeT) model in the Parsel: Algorithmic Reasoning with Language Models by Composing Decompositions paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the MetaGPT (GPT-4) model in the MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the ANPL (GPT-3.5) model in the ANPL: Compiling Natural Programs with Interactive Decomposition paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the INTERVENOR model in the INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the GPT-4 (zero-shot) model in the GPT-4 Technical Report paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CODE-T (code-davinci-002) model in the CodeT: Code Generation with Generated Tests paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CODE-T-Iter (code-davinci-002) model in the CodeT: Code Generation with Generated Tests paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Unnatural Code Llama model in the Code Llama: Open Foundation Models for Code paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PanGu-Coder2 15B model in the PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the WizardCoder 15B model in the WizardCoder: Empowering Code Large Language Models with Evol-Instruct paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Code Llama – Python model in the Code Llama: Open Foundation Models for Code paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the phi-1 1.3B model in the Textbooks Are All You Need paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Code Llama model in the Code Llama: Open Foundation Models for Code paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the GPT-3.5 (zero-shot) model in the GPT-4 Technical Report paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the OctoCoder model in the OctoPack: Instruction Tuning Code Large Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CODE-T-Iter (code-cushman-001) model in the CodeT: Code Generation with Generated Tests paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the phi-1-small model in the Textbooks Are All You Need paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the OctoGeeX model in the OctoPack: Instruction Tuning Code Large Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CODE-T (code-cushman-001) model in the CodeT: Code Generation with Generated Tests paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Code Llama – Instruct model in the Code Llama: Open Foundation Models for Code paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PaLM 2-S (few-shot) model in the PaLM 2 Technical Report paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the InstructCodeT5+ 16B (zero-shot) model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CodeT5+ 16B (zero-shot) model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the MIM-2.7B model in the Meet in the Middle: A New Pre-training Paradigm paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the LLaMA 2 (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the phi-1-base model in the Textbooks Are All You Need paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Codex-12B model in the Evaluating Large Language Models Trained on Code paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CodeT5+ 6B (zero-shot) model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PaLM 540B model in the PaLM: Scaling Language Modeling with Pathways paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CodeT5+ 2B (zero-shot) model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PaLM-cont 62B model in the PaLM: Scaling Language Modeling with Pathways paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CodeGeeX-13B model in the CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the MIM-1.3B model in the Meet in the Middle: A New Pre-training Paradigm paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the Pretrained Decoder-only 1.1B model in the Competition-Level Code Generation with AlphaCode paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PaLM 62B model in the PaLM: Scaling Language Modeling with Pathways paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CodeT5+ 770M (zero-shot) model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the LaMDA 137B model in the LaMDA: Language Models for Dialog Applications paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the MIM-350M model in the Meet in the Middle: A New Pre-training Paradigm paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the CodeT5+ 220M (zero-shot) model in the CodeT5+: Open Code Large Language Models for Code Understanding and Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PyCodeGPT 110M model in the CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the PaLM 8B model in the PaLM: Scaling Language Modeling with Pathways paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
What metrics were used to measure the SantaCoder model in the SantaCoder: don't reach for the stars! paper on the HumanEval dataset?
Pass@1, Pass@10, Pass@100
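Note on Pass@k: it estimates the probability that at least one of k sampled programs passes all unit tests. The unbiased estimator introduced in Evaluating Large Language Models Trained on Code draws n >= k samples per problem, counts the c correct ones, and computes pass@k = E[1 - C(n-c, k) / C(n, k)]; the numerically stable form below follows that paper. Pass@1, Pass@10, and Pass@100 are this quantity at k = 1, 10, and 100, averaged over problems.

```python
import numpy as np

def pass_at_k(n, c, k):
    """Unbiased pass@k given n samples of which c passed all tests.
    Computes 1 - C(n-c, k)/C(n, k) as a stable running product."""
    if n - c < k:
        return 1.0  # every size-k subset contains a correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```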
What metrics were used to measure the CodeRL+CodeT5 model in the CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the Codex davinci-002 model in the CodeT: Code Generation with Generated Tests paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the GPT-J 6B (Finetuned) model in the CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the GPT-Neo 2.7B (Finetuned) model in the CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the GPT2 1.5B (Finetuned) model in the CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the AlphaCode 1B Filtered from 50000 model in the Competition-Level Code Generation with AlphaCode paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the Codex 12B (Raw) model in the Evaluating Large Language Models Trained on Code paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the GPT-Neo 2.7B model in the Measuring Coding Challenge Competence With APPS paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the CodeBot-15b model in the paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the CodeChain+WizardCoder-15b model in the paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the WizardCoder-15b model in the paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
What metrics were used to measure the AlphaCode 1B model in the Competition-Level Code Generation with AlphaCode paper on the APPS dataset?
Competition Pass@any, Interview Pass@any, Introductory Pass@any, Competition Pass@1, Interview Pass@1, Introductory Pass@1, Competition Pass@5, Interview Pass@5, Introductory Pass@5, Competition Pass@1000, Interview Pass@1000, Introductory Pass@1000
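Note on the APPS metrics: problems are bucketed by difficulty (Introductory, Interview, Competition), and each bucket reports Pass@k for several k plus Pass@any, which counts a problem as solved if any generated sample passes its tests. A small sketch of per-bucket Pass@any, with hypothetical input shapes:

```python
from collections import defaultdict

def pass_at_any_by_difficulty(problems):
    """problems: iterable of (difficulty, [sample_passed, ...]) pairs,
    one entry per problem. Returns mean Pass@any per difficulty bucket."""
    buckets = defaultdict(list)
    for difficulty, sample_results in problems:
        buckets[difficulty].append(float(any(sample_results)))
    return {d: sum(v) / len(v) for d, v in buckets.items()}
```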
What metrics were used to measure the NL2SQL-RULE model in the Content Enhanced BERT-based Text-to-SQL Generation paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the TypeSQL+TC (Yu et al., 2018)+ model in the TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the TranX model in the TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the STAMP+RL (Sun et al., 2018)+ model in the Semantic Parsing with Syntax- and Table-Aware SQL Generation paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the STAMP (Sun et al., 2018)+ model in the Semantic Parsing with Syntax- and Table-Aware SQL Generation paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the TypeSQL (Yu et al., 2018) model in the TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the PT-MAML (Huang et al., 2018) model in the Natural Language to Structured Query Generation via Meta-Learning paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
What metrics were used to measure the Seq2SQL (Zhong et al., 2017) model in the Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning paper on the WikiSQL dataset?
Execution Accuracy, Exact Match Accuracy
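Note on the WikiSQL metrics: Exact Match Accuracy compares the predicted query's canonical form against the gold query, while Execution Accuracy compares the result sets the two queries return when run against the table, so syntactically different queries that produce the same answer still count. A hedged sketch over the stdlib sqlite3 module; the official evaluator canonicalizes queries more carefully:

```python
import sqlite3
from collections import Counter

def execution_match(db_path, predicted_sql, gold_sql):
    """True if both queries execute and return the same multiset of rows."""
    conn = sqlite3.connect(db_path)
    try:
        pred = conn.execute(predicted_sql).fetchall()
        gold = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # a prediction that fails to execute cannot match
    finally:
        conn.close()
    return Counter(pred) == Counter(gold)
```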