{ "title": "Efficiency", "header": [ { "value": "Model", "markdown": false, "metadata": {} }, { "value": "Mean win rate", "description": "How many models this model outperforms on average (over columns).", "markdown": false, "lower_is_better": false, "metadata": {} }, { "value": "NaturalQuestions (closed-book) - Denoised inference time (s)", "description": "The NaturalQuestions [(Kwiatkowski et al., 2019)](https://aclanthology.org/Q19-1026/) benchmark for question answering based on naturally-occurring queries through Google Search. The input does not include the Wikipedia page with the answer.\n\nDenoised inference runtime (s): Average time to process a request to the model minus performance contention by using profiled runtimes from multiple trials of SyntheticEfficiencyScenario.", "markdown": false, "lower_is_better": true, "metadata": { "metric": "Denoised inference time (s)", "run_group": "NaturalQuestions (closed-book)" } }, { "value": "HellaSwag - Denoised inference time (s)", "description": "The HellaSwag benchmark for commonsense reasoning in question answering [(Zellers et al., 2019)](https://aclanthology.org/P19-1472/).\n\nDenoised inference runtime (s): Average time to process a request to the model minus performance contention by using profiled runtimes from multiple trials of SyntheticEfficiencyScenario.", "markdown": false, "lower_is_better": true, "metadata": { "metric": "Denoised inference time (s)", "run_group": "HellaSwag" } }, { "value": "OpenbookQA - Denoised inference time (s)", "description": "The OpenbookQA benchmark for commonsense-intensive open book question answering [(Mihaylov et al., 2018)](https://aclanthology.org/D18-1260/).\n\nDenoised inference runtime (s): Average time to process a request to the model minus performance contention by using profiled runtimes from multiple trials of SyntheticEfficiencyScenario.", "markdown": false, "lower_is_better": true, "metadata": { "metric": "Denoised inference time (s)", "run_group": "OpenbookQA" } }, { "value": "TruthfulQA - Denoised inference time (s)", "description": "The TruthfulQA benchmarking for measuring model truthfulness and commonsense knowledge in question answering [(Lin et al., 2022)](https://aclanthology.org/2022.acl-long.229/).\n\nDenoised inference runtime (s): Average time to process a request to the model minus performance contention by using profiled runtimes from multiple trials of SyntheticEfficiencyScenario.", "markdown": false, "lower_is_better": true, "metadata": { "metric": "Denoised inference time (s)", "run_group": "TruthfulQA" } }, { "value": "MMLU - Denoised inference time (s)", "description": "The Massive Multitask Language Understanding (MMLU) benchmark for knowledge-intensive question answering across 57 domains [(Hendrycks et al., 2021)](https://openreview.net/forum?id=d7KBjmI3GmQ).\n\nDenoised inference runtime (s): Average time to process a request to the model minus performance contention by using profiled runtimes from multiple trials of SyntheticEfficiencyScenario.", "markdown": false, "lower_is_better": true, "metadata": { "metric": "Denoised inference time (s)", "run_group": "MMLU" } }, { "value": "WikiFact - Denoised inference time (s)", "description": "Scenario introduced in this work, inspired by [Petroni et al. 
(2019)](https://aclanthology.org/D19-1250/), to more extensively test factual knowledge.\n\nDenoised inference runtime (s): Average time to process a request to the model minus performance contention by using profiled runtimes from multiple trials of SyntheticEfficiencyScenario.", "markdown": false, "lower_is_better": true, "metadata": { "metric": "Denoised inference time (s)", "run_group": "WikiFact" } } ], "rows": [ [ { "value": "EleutherAI/pythia-2.8b", "description": "", "markdown": false }, { "markdown": false }, { "description": "No matching runs", "markdown": false }, { "description": "No matching runs", "markdown": false }, { "description": "No matching runs", "markdown": false }, { "description": "No matching runs", "markdown": false }, { "description": "5 matching runs, but no matching metrics", "markdown": false }, { "description": "10 matching runs, but no matching metrics", "markdown": false } ] ], "links": [ { "text": "LaTeX", "href": "benchmark_output/runs/classic_pythia-2.8b-step2/groups/latex/knowledge_efficiency.tex" }, { "text": "JSON", "href": "benchmark_output/runs/classic_pythia-2.8b-step2/groups/json/knowledge_efficiency.json" } ], "name": "efficiency" }