| benchmark | check | score | reason |
|---|---|---|---|
GAIA | II.7 | 1 | The data annotation process contains a verification step, as discussed in Section 3.4 of the paper. |
GAIA | II.8 | 1 | The data annotation process contains a verification step, as discussed in Section 3.4 of the paper. |
GAIA | II.9 | 0 | The benchmark does not include an Oracle solver that can automatically solve all tasks. |
GAIA | II.10 | 1 | No vulnerabilities are found in the implementation of the benchmark. |
GAIA | III.1 | 1 | The benchmark is open-sourced and available on HuggingFace. |
GAIA | III.2 | 0 | The benchmark does not provide an open-source evaluation harness for users. |
GAIA | III.3 | 0 | The benchmark does not contain measures to prevent data contamination. |
GAIA | III.4 | 0 | The report does not discuss plans to consistently update tasks over time. |
GAIA | III.5 | 1 | Such a relationship is clearly stated in Section 3 of the paper. |
GAIA | III.6 | 1 | As discussed in Section 3 of the paper, the benchmark is designed to evaluate LLM models. |
GAIA | III.7 | 1 | Section 5 of the paper discusses these efforts, including comparing evaluation with and without a human in the loop. |
GAIA | III.8 | 1 | Section 6 discusses the potential impact of unavoidable flaws, such as a wrong reasoning trace resulting in a correct answer. |
GAIA | III.9 | 0 | The report does not include quantitative analysis to assess the impact of unavoidable flaws. |
GAIA | III.10 | 0 | The report does not include any metrics about statistical significance. |
GAIA | III.11 | 0 | The report does not provide any guidance on interpreting results with eval flaws. |
GAIA | III.12 | 1 | Human performance is reported in Section 4 of the paper. |
GAIA | III.13 | 1 | The report includes results of a search engine baseline, which can be considered a trivial agent. |
OSWorld | I.g.1 | 1 | As discussed in Section 3.2 of the paper, the ground truth is verified to include all states that can be achieved after a successful task completion. |
OSWorld | I.g.2 | 0 | The state check only verifies the relevant states for the tasks. Agents can potentially perform extra harmful actions that are not checked by the ground truth. |
OSWorld | I.g.3 | 1 | As demonstrated in Section 3.2 of the paper, the ground truth involves complex state changes to a software application or website. |
OSWorld | II.1 | 1 | No external tools are used in the benchmark. Versions of the environment are clearly specified in the README file of the repository. |
OSWorld | II.2 | 1 | No external APIs are used in the benchmark. |
OSWorld | II.3 | 1 | No external APIs are used in the benchmark. |
OSWorld | II.4 | 1 | The benchmark uses virtual machines to run the tasks, which ensures that all residual data or state are cleared between runs. |
OSWorld | II.5 | 1 | Agents and ground truth are isolated from each other via virtual machines. |
OSWorld | II.6 | 0 | The benchmark checks HTML selectors (such as class names or page titles) on live web pages, which may change over time. |
OSWorld | II.7 | 1 | As discussed in Section 3.2 of the paper, the ground truth is verified for correctness by human experts. |
OSWorld | II.8 | 1 | As discussed in Section 3.2 of the paper, each task is verified to be solvable by human experts. |
OSWorld | II.9 | 0 | The benchmark does not include an Oracle solver that can automatically solve all tasks. |
OSWorld | II.10 | 1 | No vulnerabilities are present in the implementation of the benchmark. |
OSWorld | III.1 | 1 | The benchmark is fully open-sourced, as the code is available on GitHub. |
OSWorld | III.2 | 1 | The benchmark offers an open-source evaluation harness for users. |
OSWorld | III.3 | 0 | The benchmark does not include measures to prevent data contamination. |
OSWorld | III.4 | 0 | The report does not include measures or plans to consistently update tasks over time. |
OSWorld | III.5 | 1 | Such a relationship is clearly stated in Section 2 of the paper. |
OSWorld | III.6 | 1 | As discussed in Section 2 of the paper, the evaluation subject is agent frameworks. |
OSWorld | III.7 | 1 | As discussed in Section 3.2 of the paper, the benchmark uses additional manual verification steps to prevent, identify, and correct flaws. |
OSWorld | III.8 | 0 | The report discusses agent safety issues in Section 7, but does not discuss the potential impact of unavoidable evaluation flaws. |
OSWorld | III.9 | 0 | No quantitative analysis to assess the impact of unavoidable flaws is included in the report. |
OSWorld | III.10 | 0 | The report does not include metrics about statistical significance. |
OSWorld | III.11 | 0 | The report does not provide guidance on interpreting results with eval flaws. |
OSWorld | III.12 | 1 | Human performance is reported in Section 3.4 of the paper. |
OSWorld | III.13 | 0 | The report does not include results of trivial agents. |
KernelBench | I.e.1 | 0 | The fuzzer does not address potential edge cases, such as empty inputs. |
KernelBench | I.e.2 | 0 | Although the data type is specified, the fuzzer does not test different memory layouts, such as tensors with non-contiguous memory layouts. |
KernelBench | I.e.3 | 0 | The fuzzer uses uniform sampling to generate inputs, which may not be sensitive to the code under test. For example, the fuzzer may fail to generate the positive inputs needed to exercise the `relu` function in the `torch` library. |
KernelBench | II.1 | 0 | The CUDA version is not specified in the default prompt. |
KernelBench | II.2 | 1 | External APIs are not required for the evaluation of the benchmark. |
KernelBench | II.3 | 1 | External APIs are not required for the evaluation of the benchmark. |
KernelBench | II.4 | 1 | Kernels are evaluated in separate processes, and the state is cleared between runs. |
KernelBench | II.5 | 0 | The ground-truth kernel is executed first, and in the same process as the agent. This may allow the agent's kernel to read the ground-truth results through out-of-bounds memory accesses. |
KernelBench | II.6 | 1 | The environment setup is static and does not change over time. |
KernelBench | II.7 | 1 | The ground-truth kernel is provided by PyTorch, which is a widely used library for deep learning. |
KernelBench | II.8 | 1 | The PyTorch reference implementation serves as a proof that each task is solvable. |
KernelBench | II.9 | 1 | The PyTorch reference implementation serves as the Oracle solver. |
KernelBench | II.10 | 1 | No vulnerabilities are found in the implementation of the benchmark. |
KernelBench | III.1 | 1 | The benchmark is open-sourced and available on GitHub. |
KernelBench | III.2 | 1 | The benchmark provides an open-source evaluation harness for users. |
KernelBench | III.3 | 0 | The benchmark does not discuss measures to prevent data contamination. |
KernelBench | III.4 | 0 | The benchmark does not discuss plans to consistently update tasks over time. |
KernelBench | III.5 | 1 | Section 3 clearly states such a relationship. |
KernelBench | III.6 | 1 | Section 5 clearly states that the evaluation subject of the benchmark is LLM models. |
KernelBench | III.7 | 1 | Appendix B.2 describes the efforts taken to prevent, identify, and correct flaws, although these efforts are not sufficient. |
KernelBench | III.8 | 1 | Appendix B.2 includes qualitative discussions of the potential impact of unavoidable flaws, although these discussions are not sufficient. |
KernelBench | III.9 | 1 | Appendix B.2 includes quantitative analysis to assess the impact of unavoidable flaws, although these analyses are not sufficient. |
KernelBench | III.10 | 0 | The benchmark does not report any metrics about statistical significance. |
KernelBench | III.11 | 0 | The benchmark does not provide any guidance on interpreting results with eval flaws. |
KernelBench | III.12 | 0 | The benchmark does not report results of non-AI baselines. |
KernelBench | III.13 | 0 | The benchmark does not report results of trivial agents. |
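The fuzzing weakness flagged for KernelBench check I.e.3 can be illustrated with a minimal sketch (plain Python, not KernelBench's actual harness; `fuzz_inputs` and `buggy_relu` are hypothetical names): when a fuzzer samples uniformly from a range that never hits a code branch, a kernel that is wrong on that branch still passes every check.

```python
import random

def relu(x):
    # scalar ReLU: only positive inputs exercise the non-zero branch
    return x if x > 0 else 0.0

def fuzz_inputs(n, low=-1.0, high=0.0):
    # hypothetical fuzzer: uniform sampling over a fixed range; if the
    # range happens to be non-positive, ReLU's active branch is never tested
    return [random.uniform(low, high) for _ in range(n)]

def buggy_relu(x):
    # hypothetical incorrect kernel: returns 0 unconditionally
    return 0.0

samples = fuzz_inputs(1000)
# every sample is <= 0, so the buggy kernel matches the reference
# on all fuzzed inputs and the bug goes undetected
assert all(relu(x) == buggy_relu(x) for x in samples)
```

A sensitivity-aware fuzzer would need to sample inputs on both sides of the activation threshold (here, both signs) so that divergent behavior is actually reachable.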